| column | dtype | values |
|---|---|---|
| question_id | int64 | 59.6M to 70.5M |
| question_title | string | lengths 15 to 150 |
| question_body | string | lengths 134 to 33.4k |
| accepted_answer_id | int64 | 59.6M to 73.3M |
| question_creation_date | timestamp[us] | |
| question_answer_count | int64 | 1 to 9 |
| question_favorite_count | float64 | 0 to 8, with nulls |
| question_score | int64 | -6 to 52 |
| question_view_count | int64 | 10 to 79k |
| tags | string | 2 distinct values |
| answer_body | string | lengths 48 to 16.3k |
| answer_creation_date | timestamp[us] | |
| answer_score | int64 | -2 to 59 |
| link | string | lengths 31 to 107 |
| context | string | lengths 134 to 251k |
| answer_start | int64 | 0 to 1.28k |
| answer_end | int64 | 49 to 10.2k |
| question | string | lengths 158 to 33.1k |
Row 1

question_id: 61,808,760
question_title: Replace column names with quotations with no quotations

question_body:
I am trying to replace my column names that have quotations and simply remove the quotations but when I try this:

for x in df.columns:
    x = x.replace('"', '')
    print(x)

Nothing happens and the quotations are still there.
accepted_answer_id: 61,808,999
question_creation_date: 2020-05-14T22:53:06.573000
question_answer_count: 3
question_favorite_count: null
question_score: 0
question_view_count: 59
tags: python|pandas

answer_body:
I would do something like this

cols = [column_name.replace('"', '') for column_name in df.columns]
df.columns = cols

CODE

import pandas as pd
df = pd.DataFrame({"a": [1, 2], '"b"': [3, 4]})
print('BEFORE')
print(df)
cols = [column_name.replace('"', '') for column_name in df.columns]
df.columns = cols
print('AFTER')
print(df)

OUTPUT

BEFORE
   a  "b"
0  1    3
1  2    4
AFTER
   a  b
0  1  3
1  2  4
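Worth noting: the loop in the question has no effect because str.replace returns a new string, and rebinding the loop variable x never modifies df.columns. A vectorized alternative (a sketch, not part of the accepted answer):

df.columns = df.columns.str.replace('"', '', regex=False)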
answer_creation_date: 2020-05-14T23:12:55.410000
answer_score: 3
link: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.query.html

context:
pandas.DataFrame.query#
DataFrame.query(expr, *, inplace=False, **kwargs)[source]#
Query the columns of a DataFrame with a boolean expression.
Parameters
exprstrThe query string to evaluate.
You can refer to variables
in the environment by prefixing them with an ‘@’ character like
@a + b.
You can refer to column names that are not valid Python variable names
by surrounding them in backticks. Thus, column names containing spaces
or punctuations (besides underscores) or starting with digits must be
surrounded by backticks. (For example, a column named “Area (cm^2)” would
be referenced as `Area (cm^2)`). Column names which are Python keywords
(like “list”, “for”, “import”, etc) cannot be used.
For example, if one of your columns is called a a and you want
to sum it with b, your query should be `a a` + b.
New in version 0.25.0: Backtick quoting introduced.
New in version 1.0.0: Expanding functionality of backtick quoting for more than only spaces.
inplaceboolWhether to modify the DataFrame rather than creating a new one.
**kwargsSee the documentation for eval() for complete details
on the keyword arguments accepted by DataFrame.query().
Returns
DataFrame or NoneDataFrame resulting from the provided query expression or
None if inplace=True.
See also
evalEvaluate a string describing operations on DataFrame columns.
DataFrame.evalEvaluate a string describing operations on DataFrame columns.
Notes
The result of the evaluation of this expression is first passed to
DataFrame.loc and if that fails because of a
multidimensional key (e.g., a DataFrame) then the result will be passed
to DataFrame.__getitem__().
This method uses the top-level eval() function to
evaluate the passed query.
The query() method uses a slightly
modified Python syntax by default. For example, the & and |
(bitwise) operators have the precedence of their boolean cousins,
and and or. This is syntactically valid Python,
however the semantics are different.
You can change the semantics of the expression by passing the keyword
argument parser='python'. This enforces the same semantics as
evaluation in Python space. Likewise, you can pass engine='python'
to evaluate an expression using Python itself as a backend. This is not
recommended as it is inefficient compared to using numexpr as the
engine.
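As a small illustration of that precedence difference (a sketch, not part of the reference text, assuming a DataFrame df with numeric columns A and B):

df.query('A > 1 & B < 8')           # evaluated as (A > 1) & (B < 8)
df[(df['A'] > 1) & (df['B'] < 8)]   # equivalent plain-indexing form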
The DataFrame.index and
DataFrame.columns attributes of the
DataFrame instance are placed in the query namespace
by default, which allows you to treat both the index and columns of the
frame as a column in the frame.
The identifier index is used for the frame index; you can also
use the name of the index to identify it in a query. Please note that
Python keywords may not be used as identifiers.
For further details and examples see the query documentation in
indexing.
Backtick quoted variables
Backtick quoted variables are parsed as literal Python code and
are converted internally to a Python valid identifier.
This can lead to the following problems.
During parsing a number of disallowed characters inside the backtick
quoted string are replaced by strings that are allowed as a Python identifier.
These characters include all operators in Python, the space character, the
question mark, the exclamation mark, the dollar sign, and the euro sign.
For other characters that fall outside the ASCII range (U+0001..U+007F)
and those that are not further specified in PEP 3131,
the query parser will raise an error.
This excludes whitespace different than the space character,
but also the hashtag (as it is used for comments) and the backtick
itself (backtick can also not be escaped).
In a special case, quotes that make a pair around a backtick can
confuse the parser.
For example, `it's` > `that's` will raise an error,
as it forms a quoted string ('s > `that') with a backtick inside.
See also the Python documentation about lexical analysis
(https://docs.python.org/3/reference/lexical_analysis.html)
in combination with the source code in pandas.core.computation.parsing.
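A minimal sketch of backtick quoting with punctuation in a column name (an added example, assuming pandas >= 1.0):

import pandas as pd
df = pd.DataFrame({"Area (cm^2)": [4, 9, 16], "B": [5, 9, 20]})
df.query("`Area (cm^2)` > 5")   # spaces, parentheses and ^ are handled by the backticks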
Examples
>>> df = pd.DataFrame({'A': range(1, 6),
... 'B': range(10, 0, -2),
... 'C C': range(10, 5, -1)})
>>> df
A B C C
0 1 10 10
1 2 8 9
2 3 6 8
3 4 4 7
4 5 2 6
>>> df.query('A > B')
A B C C
4 5 2 6
The previous expression is equivalent to
>>> df[df.A > df.B]
A B C C
4 5 2 6
For columns with spaces in their name, you can use backtick quoting.
>>> df.query('B == `C C`')
A B C C
0 1 10 10
The previous expression is equivalent to
>>> df[df.B == df['C C']]
A B C C
0 1 10 10
answer_start: 532
answer_end: 928

question:

Replace column names with quotations with no quotations
I am trying to replace my column names that have quotations and simply remove the quotations but when I try this:
for x in df.columns:
    x = x.replace('"', '')
    print(x)
Nothing happens and the quotations are still there.
Row 2

question_id: 65,165,617
question_title: How to aggregate text in pandas according to another column name

question_body:
I want to aggregate the text column of all the identical names. e.g.

I have:

df = pd.DataFrame([['Tom', 'good', 3],
                   ['Jack', 'bad', 6],
                   ['Tom', 'average', 9],
                   ],
                  columns=['name', 'text', 'day'])

I want:

df = pd.DataFrame([['Tom', 'good average'],
                   ['Jack', 'bad'],
                   ],
                  columns=['name', 'text'])
accepted_answer_id: 65,165,742
question_creation_date: 2020-12-06T07:14:33.380000
question_answer_count: 2
question_favorite_count: null
question_score: 1
question_view_count: 88
tags: python|pandas

answer_body:
df.groupby(by='name').agg(text=("text", lambda x: ",".join(set(x))))
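One caveat (an added note, not part of the accepted answer): set() discards duplicates and ordering, and the join uses a comma, while the asker's expected frame is space-separated in first-appearance order. A sketch that reproduces that exact output:

df.groupby("name", as_index=False, sort=False).agg(text=("text", " ".join))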
answer_creation_date: 2020-12-06T07:32:47.433000
answer_score: 3
link: https://pandas.pydata.org/docs/user_guide/groupby.html

context:
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures is generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
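A rough pandas counterpart of the SQL statement above (an illustrative sketch; some_table is a hypothetical DataFrame holding SomeTable):

some_table.groupby(["Column1", "Column2"]).agg(
    Column3=("Column3", "mean"),
    Column4=("Column4", "sum"),
)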
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
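For instance, the dict/Series form maps axis labels directly to group names (an added sketch):

import pandas as pd

s = pd.Series([389.0, 24.0, 80.2, 58.0],
              index=["falcon", "parrot", "lion", "leopard"])
# each index label is mapped to the group it belongs to
s.groupby({"falcon": "bird", "parrot": "bird",
           "lion": "mammal", "leopard": "mammal"}).mean()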
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
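A short sketch of that behaviour (an added example; "not_a_column" is a deliberately invalid key):

gb = df.groupby("A")          # only validates the key; no splitting yet
gb[["C", "D"]].sum()          # groups are materialized here
df.groupby("not_a_column")    # raises KeyError immediately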
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of the dropna argument is True, which means NA values are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function
Description
mean()
Compute mean of groups
sum()
Compute sum of group values
size()
Compute group sizes
count()
Compute count of group
std()
Standard deviation of groups
var()
Compute variance of groups
sem()
Standard error of the mean of groups
describe()
Generates descriptive statistics
first()
Compute first of group values
last()
Compute last of group values
nth()
Take nth value, or a subset if n is a list
min()
Compute min of group values
max()
Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work,
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python identifiers, construct a dictionary
and unpack the keyword arguments:
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
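A sketch of that pattern (an added example, reusing the animals frame defined above):

import functools

def quantile(x, q):
    return x.quantile(q)

# the extra argument q is bound via functools.partial
animals.groupby("kind").agg(
    weight_q90=("weight", functools.partial(quantile, q=0.9)),
)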
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
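data_df itself is not constructed in this excerpt; one plausible setup consistent with the output below (an assumption, not the original setup code):

import numpy as np
import pandas as pd

data_df = pd.DataFrame(np.random.randn(1000, 3), columns=["A", "B", "C"])
# scatter some missing values so each column has NaNs to fill
data_df = data_df.mask(np.random.random(data_df.shape) < 0.1)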
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, but returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed object where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it. It can depend on the passed function and
exactly what you are grouping. Thus the grouped column(s) may be included in
the output as well as set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
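The excerpt omits the call itself; applying it would look like this (a reconstruction):

grouped.apply(f)   # returns a DataFrame with 'original' and 'demeaned' columns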
apply on a Series can operate on a returned value from the applied function,
that is itself a series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
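A minimal sketch of the required signature (an added example; it needs the optional numba dependency installed):

import numpy as np
import pandas as pd

def group_mean(values, index):   # the signature must be (values, index)
    return np.mean(values)

df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1.0, 2.0, 3.0]})
df.groupby("key")["val"].agg(group_mean, engine="numba")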
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only interesting over one column (here colname), it may be filtered
before applying the aggregation function.
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a "nuisance" column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The returned dtype of the grouped will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
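A small sketch of that default (an added example): rows whose key is NaN simply drop out of the result.

import numpy as np
import pandas as pd

pd.Series([1, 2, 3]).groupby([np.nan, "a", "a"]).sum()
# -> a    5
#    dtype: int64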
Grouping with ordered factors#
Categorical variables represented as instance of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Group by a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group, in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
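As an added aside (not part of the original guide): since the keyword mentioned above is simply forwarded to the underlying boxplot call, a hedged sketch of requesting the matplotlib axes explicitly could look like the line below (matplotlib must be installed; the exact return container may vary by pandas version).
# added sketch: forward return_type through the grouped boxplot call
bp = df.groupby("g").boxplot(return_type="axes")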
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
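To make the mechanics concrete, here is a small added sketch (not from the original guide): .pipe(f) simply calls f on the GroupBy object, so piping and calling the function directly give the same result.
# added sketch: .pipe(mean) is equivalent to calling mean(...) on the GroupBy object
grouped = df.groupby(["Store", "Product"])
assert grouped.pipe(mean).equals(mean(grouped))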
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resample to work on indices that are non-datetimelike, the following procedure can be utilized.
In the following examples, df.index // 5 returns an integer array (here made up of 0s and 1s) which is used to determine what gets selected for the groupby operation.
Note
The below example shows how we can downsample by consolidation of samples into fewer samples. Here by using df.index // 5, we are aggregating the samples in bins. By applying std() function, we aggregate the information contained in many samples into a small subset of values which is their standard deviation thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 459
| 528
|
How to aggregate text in pandas according to another column name
I want to aggregate the text column of all the identical names. e.g.
I have:
df = pd.DataFrame([['Tom', 'good', 3],
['Jack', 'bad', 6],
['Tom', 'average', 9],
],
columns=['name', 'text', 'day'])
I want:
df = pd.DataFrame([['Tom', 'good average'],
['Jack', 'bad',],
],
columns=['name', 'text'])
|
67,413,064
|
Masking dataframe text column to a new column in pandas dataframe
|
<p>I have the pandas dataframe below and I would like to mask the ProductId column with a new column, assigning each id to a new numeric value. How can I do that?
Thanks</p>
<pre><code>import pandas as pd
df=pd.DataFrame({'ProductId':['AXX11','CS22','AXX11','FV34','FV34','DF23','CS22'],'Sales':
[10,34,23,45,23,54,65]})
df
</code></pre>
<p>Desired outcome below:</p>
<pre><code>ProductId Mask_ProductId Sales
AXX1 20 10
CS22 21 34
AXX1 20 23
FV34 8 45
FV34 8 23
DF23 12 54
CS22 21 65
</code></pre>
<p>Please help thank you</p>
| 67,413,100
| 2021-05-06T06:42:11.653000
| 2
| 0
| 0
| 89
|
python|pandas
|
<p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Categorical.html" rel="nofollow noreferrer"><code>categorical</code></a>:</p>
<pre><code>In [96]: df['Mask_ProductId'] = df.ProductId.astype('category').cat.codes
In [97]: df
Out[97]:
ProductId Sales Mask_ProductId
0 AXX11 10 0
1 CS22 34 1
2 AXX11 23 0
3 FV34 45 3
4 FV34 23 3
5 DF23 54 2
6 CS22 65 1
</code></pre>
| 2021-05-06T06:45:25.127000
| 3
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.mask.html
|
pandas.DataFrame.mask#
pandas.DataFrame.mask#
DataFrame.mask(cond, other=nan, *, inplace=False, axis=None, level=None, errors='raise', try_cast=_NoDefault.no_default)[source]#
Replace values where the condition is True.
Parameters
cond : bool Series/DataFrame, array-like, or callable. Where cond is False, keep the original value. Where
Use categorical:
In [96]: df['Mask_ProductId'] = df.ProductId.astype('category').cat.codes
In [97]: df
Out[97]:
ProductId Sales Mask_ProductId
0 AXX11 10 0
1 CS22 34 1
2 AXX11 23 0
3 FV34 45 3
4 FV34 23 3
5 DF23 54 2
6 CS22 65 1
True, replace with corresponding value from other.
If cond is callable, it is computed on the Series/DataFrame and
should return boolean Series/DataFrame or array. The callable must
not change input Series/DataFrame (though pandas doesn’t check it).
other : scalar, Series/DataFrame, or callable. Entries where cond is True are replaced with
corresponding value from other.
If other is callable, it is computed on the Series/DataFrame and
should return scalar or Series/DataFrame. The callable must not
change input Series/DataFrame (though pandas doesn’t check it).
inplace : bool, default False. Whether to perform the operation in place on the data.
axis : int, default None. Alignment axis if needed. For Series this parameter is
unused and defaults to 0.
level : int, default None. Alignment level if needed.
errors : str, {'raise', 'ignore'}, default 'raise'. Note that currently this parameter won't affect
the results and will always coerce to a suitable dtype.
‘raise’ : allow exceptions to be raised.
‘ignore’ : suppress exceptions. On error return original object.
Deprecated since version 1.5.0: This argument had no effect.
try_cast : bool, default None. Try to cast the result back to the input type (if possible).
Deprecated since version 1.3.0: Manually cast back if necessary.
Returns
Same type as caller or None if inplace=True.
See also
DataFrame.where()Return an object of same shape as self.
Notes
The mask method is an application of the if-then idiom. For each
element in the calling DataFrame, if cond is False the
element is used; otherwise the corresponding element from the DataFrame
other is used. If the axis of other does not align with axis of
cond Series/DataFrame, the misaligned index positions will be filled with
True.
The signature for DataFrame.where() differs from
numpy.where(). Roughly df1.where(m, df2) is equivalent to
np.where(m, df1, df2).
For further details and examples see the mask documentation in
indexing.
The dtype of the object takes precedence. The fill value is casted to
the object’s dtype, if this can be done losslessly.
Examples
>>> s = pd.Series(range(5))
>>> s.where(s > 0)
0 NaN
1 1.0
2 2.0
3 3.0
4 4.0
dtype: float64
>>> s.mask(s > 0)
0 0.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
>>> s = pd.Series(range(5))
>>> t = pd.Series([True, False])
>>> s.where(t, 99)
0 0
1 99
2 99
3 99
4 99
dtype: int64
>>> s.mask(t, 99)
0 99
1 1
2 99
3 99
4 99
dtype: int64
>>> s.where(s > 1, 10)
0 10
1 10
2 2
3 3
4 4
dtype: int64
>>> s.mask(s > 1, 10)
0 0
1 1
2 10
3 10
4 10
dtype: int64
>>> df = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B'])
>>> df
A B
0 0 1
1 2 3
2 4 5
3 6 7
4 8 9
>>> m = df % 3 == 0
>>> df.where(m, -df)
A B
0 0 -1
1 -2 3
2 -4 -5
3 6 -7
4 -8 9
>>> df.where(m, -df) == np.where(m, df, -df)
A B
0 True True
1 True True
2 True True
3 True True
4 True True
>>> df.where(m, -df) == df.mask(~m, -df)
A B
0 True True
1 True True
2 True True
3 True True
4 True True
| 339
| 733
|
Masking dataframe text column to a new column in pandas dataframe
I have the pandas dataframe below and I would like to mask the ProductId column with a new column, assigning each id to a new numeric value. How can I do that?
Thanks
import pandas as pd
df=pd.DataFrame({'ProductId':['AXX11','CS22','AXX11','FV34','FV34','DF23','CS22'],'Sales':
[10,34,23,45,23,54,65]})
df
Desired outcome below:
ProductId Mask_ProductId Sales
AXX1 20 10
CS22 21 34
AXX1 20 23
FV34 8 45
FV34 8 23
DF23 12 54
CS22 21 65
Please help thank you
|
63,512,742
|
How to find a component of one column in another column?
|
<p>I'm stuck trying to figure out why I am unable to locate something in a pandas data frame. This is where I am stuck:</p>
<pre class="lang-py prettyprint-override"><code>area_codes = "area_codes.csv"
contacts = 'leads.csv'
df_contacts = pd.read_csv(contacts, header=0)
df_areas = pd.read_csv(area_codes, header=0)
for i in df_contacts['Phone']:
if type(i) is str:
if str(i[0:3]) in df_areas['Areas']:
print('Found.')
else:
print('Not Found.')
else:
pass
</code></pre>
<p>This line in particular is where my question is:</p>
<pre><code>if str(i[0:3]) in df_areas['Areas']:
</code></pre>
<p>What I am <em>attempting</em> to do is see if the first 3 digits of a phone number <code>str(i[0:3])</code> is in the list of known area codes <code>df_areas['Areas']</code>.</p>
<p>For whatever reason I cannot figure out why every check is coming up as false? I also went as far as doing some list comprehension and check it that way. Example: <code>a = [i for i in df_areas['Areas']]</code> and then loop over this list.</p>
<p>I've made sure to cast the value to a string so they are both the same object type as originally I thought that was the issue. Which brings me here. I'm just lost at this point. I'm new to programming and just really write little scripts like this that I'll use once or twice. It doesn't need to be performant at all, it just needs to work. So, why is this not working? And just to get ahead of it; yes, I checked to see if there were actually matches.</p>
<p>All the phone numbers in the contacts list are 10 digits (or blank lines) with no spaces. Example (fake numbers):</p>
<pre><code>1 2014029520
2 2349212706
3 2394944200
4 5166867073
...
Name: Phone, Length: 4305, dtype: object
</code></pre>
<p>All the area codes in the area code list are 3 digits. Example:</p>
<pre><code>0 201
1 202
2 203
3 204
4 205
...
401 980
402 984
403 985
404 986
405 989
Name: Areas, Length: 406, dtype: int64
</code></pre>
<p>I am casting the values to strings (which I think I'm doing correctly) but I've included the Pandas DF information like the dtype if that helps.</p>
| 63,512,871
| 2020-08-20T20:30:59.797000
| 1
| null | 0
| 92
|
python|pandas
|
<ul>
<li>With <code>codes</code> and <code>numbs</code> starting as integers</li>
<li>Use <code>.astype(str)</code> to cast the columns as <code>str</code> type, and then use <code>.str</code> methods to determine if the first 3 characters of <code>numbers</code> is in a list of <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.unique.html" rel="nofollow noreferrer"><code>.unique</code></a> codes.
<ul>
<li><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.astype.html" rel="nofollow noreferrer"><code>pandas.Series.astype</code></a></li>
<li><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>pandas.Series.isin</code></a></li>
<li><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>pandas.Series.str.contains</code></a></li>
<li><a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/text.html#working-with-text-data" rel="nofollow noreferrer">Pandas: Working with text data</a></li>
<li>If the column of <code>numbers</code> or <code>codes</code> is already a <code>str</code> type, <code>.astype(str)</code> is not needed.</li>
</ul>
</li>
<li><code>codes.codes.astype(str).unique()</code> creates a list of unique <code>codes</code>, where each value is a <code>str</code> type.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
# test data
codes = pd.DataFrame({'codes': [201, 202, 203, 204, 205, 980, 984, 985, 986, 989]})
numbs = pd.DataFrame({'numbers': [2014029520, 2349212706, 2394944200, 5166867073]})
# vectorized comparison
numbs['valid code'] = numbs.numbers.astype(str).str[:3].isin(codes.codes.astype(str).unique())
# display(numbs)
numbers valid code
0 2014029520 True
1 2349212706 False
2 2394944200 False
3 5166867073 False
</code></pre>
<h2>With your function</h2>
<pre class="lang-py prettyprint-override"><code>for i in numbs.numbers:
i = str(i) # convert the number to a string
if i[:3] in codes.codes.astype(str).unique():
print('Found.')
else:
print('Not Found.')
[out]:
Found.
Not Found.
Not Found.
Not Found.
</code></pre>
<h2>If <code>numbs</code> is multiple columns and contains <code>NaN</code>s</h2>
<ul>
<li>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>pandas.DataFrame.apply</code></a> to test multiple columns.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import numpy as np
# test data
codes = pd.DataFrame({'codes': [201, 202, 203, 204, 205, 980, 984, 985, 986, 989]})
numbs = pd.DataFrame({'leads1': [2014029520, 2349212706, 2394944200, 5166867073, np.nan], 'leads2': [2014029520, 2349212706, 2394944200, 5166867073, np.nan]})
# cast the dataframe as str type
codes = codes.astype(str)
numbs = numbs.astype(str)
# use apply to test all columns
valid = numbs.apply(lambda x: x.str[:3].isin(codes.codes.astype(str).unique()))
# display(valid)
leads1 leads2
0 True True
1 False False
2 False False
3 False False
4 False False
</code></pre>
<h2>Loading from CSV and Implementation</h2>
<ul>
<li>Added per question from comment.</li>
<li>Set the column <code>dtype</code> when reading data from the CSV.</li>
</ul>
<pre class="lang-py prettyprint-override"><code># load data from csv
df_contacts = pd.read_csv('leads.csv', dtype={'Phone': str}, header=0)
df_areas = pd.read_csv('area_codes.csv', dtype={'Areas': str}, header=0)
# remove any duplicate values
df_areas = df_areas.drop_duplicates().reset_index(drop=True)
# create a column with True or False
df_contacts['phone_valid_bool'] = df_contacts.Phone.str[:3].isin(df_areas.Areas.to_list())
</code></pre>
| 2020-08-20T20:42:04.343000
| 3
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
With codes and numbs starting as integers
Use .astype(str) to cast the columns as str type, and then use .str methods to determine if the first 3 characters of numbers is in a list of .unique codes.
pandas.Series.astype
pandas.Series.isin
pandas.Series.str.contains
Pandas: Working with text data
If the column of numbers or codes is already a str type, .astype(str) is not needed.
codes.codes.astype(str).unique() creates a list of unique codes, where each value is a str type.
import pandas as pd
# test data
codes = pd.DataFrame({'codes': [201, 202, 203, 204, 205, 980, 984, 985, 986, 989]})
numbs = pd.DataFrame({'numbers': [2014029520, 2349212706, 2394944200, 5166867073]})
# vectorized comparison
numbs['valid code'] = numbs.numbers.astype(str).str[:3].isin(codes.codes.astype(str).unique())
# display(numbs)
numbers valid code
0 2014029520 True
1 2349212706 False
2 2394944200 False
3 5166867073 False
With your function
for i in numbs.numbers:
i = str(i) # convert the number to a string
if i[:3] in codes.codes.astype(str).unique():
print('Found.')
else:
print('Not Found.')
[out]:
Found.
Not Found.
Not Found.
Not Found.
If numbs is multiple columns and contains NaNs
Use pandas.DataFrame.apply to test multiple columns.
import numpy as np
# test data
codes = pd.DataFrame({'codes': [201, 202, 203, 204, 205, 980, 984, 985, 986, 989]})
numbs = pd.DataFrame({'leads1': [2014029520, 2349212706, 2394944200, 5166867073, np.nan], 'leads2': [2014029520, 2349212706, 2394944200, 5166867073, np.nan]})
# cast the dataframe as str type
codes = codes.astype(str)
numbs = numbs.astype(str)
# use apply to test all columns
valid = numbs.apply(lambda x: x.str[:3].isin(codes.codes.astype(str).unique()))
# display(valid)
leads1 leads2
0 True True
1 False False
2 False False
3 False False
4 False False
Loading from CSV and Implementation
Added per question from comment.
Set the column dtype when reading data from the CSV.
# load data from csv
df_contacts = pd.read_csv('leads.csv', dtype={'Phone': str}, header=0)
df_areas = pd.read_csv('area_codes.csv', dtype={'Areas': str}, header=0)
# remove any duplicate values
df_areas = df_areas.drop_duplicates().reset_index(drop=True)
# create a column with True or False
df_contacts['phone_valid_bool'] = df_contacts.Phone.str[:3].isin(df_areas.Areas.to_list())
| 0
| 2,408
|
How to find a component of one column in another column?
I'm stuck trying to figure out why I am unable to locate something in a pandas data frame. This is where I am stuck:
area_codes = "area_codes.csv"
contacts = 'leads.csv'
df_contacts = pd.read_csv(contacts, header=0)
df_areas = pd.read_csv(area_codes, header=0)
for i in df_contacts['Phone']:
if type(i) is str:
if str(i[0:3]) in df_areas['Areas']:
print('Found.')
else:
print('Not Found.')
else:
pass
This line in particular is where my question is:
if str(i[0:3]) in df_areas['Areas']:
What I am attempting to do is see if the first 3 digits of a phone number str(i[0:3]) is in the list of known area codes df_areas['Areas'].
For whatever reason I cannot figure out why every check is coming up as false? I also went as far as doing some list comprehension and check it that way. Example: a = [i for i in df_areas['Areas']] and then loop over this list.
I've made sure to cast the value to a string so they are both the same object type as originally I thought that was the issue. Which brings me here. I'm just lost at this point. I'm new to programming and just really write little scripts like this that I'll use once or twice. It doesn't need to be performant at all, it just needs to work. So, why is this not working? And just to get ahead of it; yes, I checked to see if there were actually matches.
All the phone numbers in the contacts list are 10 digits (or blank lines) with no spaces. Example (fake numbers):
1 2014029520
2 2349212706
3 2394944200
4 5166867073
...
Name: Phone, Length: 4305, dtype: object
All the area codes in the area code list are 3 digits. Example:
0 201
1 202
2 203
3 204
4 205
...
401 980
402 984
403 985
404 986
405 989
Name: Areas, Length: 406, dtype: int64
I am casting the values to strings (which I think I'm doing correctly) but I've included the Pandas DF information like the dtype if that helps.
|
59,829,531
|
How to return value_counts() when grouped by another column in pandas
|
<p>I want to return the values in a value_counts of col2 back to the original dataframe after a pandas groupby based on col1.</p>
<p>i.e. I have...</p>
<pre><code> col1 col2
0 1111 A
1 1111 B
2 1111 B
3 1111 B
4 1111 C
5 2222 A
6 2222 B
7 2222 C
8 2222 C
</code></pre>
<p>and I'd like...</p>
<pre><code> col1 col2 col3
0 1111 A 1
1 1111 B 3
2 1111 B 3
3 1111 B 3
4 1111 C 1
5 2222 A 1
6 2222 B 1
7 2222 C 2
8 2222 C 2
</code></pre>
<p>I can get the values of col3 using a groupby and then passing the col2 value into value_counts, but I'm not sure how to then get this back into the dataframe.</p>
<p>Example:</p>
<pre><code>d1 = {'col1': ['1111', '1111', '1111', '1111', '1111', '2222', '2222', '2222', '2222'],
'col2': ['A', 'B', 'B', 'B', 'C', 'A', 'B', 'C', 'C']}
df1 = pd.DataFrame(data=d1)
d2 = {'col1': ['1111', '1111', '1111', '1111', '1111', '2222', '2222', '2222', '2222'],
'col2': ['A', 'B', 'B', 'B', 'C', 'A', 'B', 'C', 'C'],
'col3': [1, 3, 3, 3, 1, 1, 1, 2, 2]}
df2 = pd.DataFrame(data=d2)
print(df1)
print(df2)
counts = df1.groupby('col1').apply(lambda x: x.col2.value_counts()[x.col2])
print(counts)
</code></pre>
| 59,829,659
| 2020-01-20T19:10:45.297000
| 3
| null | 2
| 1,376
|
python|pandas
|
<p>you can make this with <code>groupby</code> and <code>transform</code>.</p>
<pre><code>df['col3'] = df1.groupby(['col1','col2'])['col2'].transform('count')
print(df)
col1 col2 col3
0 1111 A 1
1 1111 B 3
2 1111 B 3
3 1111 B 3
4 1111 C 1
5 2222 A 1
6 2222 B 1
7 2222 C 2
8 2222 C 2
</code></pre>
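<p>(Added, hedged note: <code>transform('count')</code> counts the non-null values of <code>col2</code> inside each group; if <code>col2</code> could contain NaN and you want to count rows regardless, <code>transform('size')</code> is a possible alternative.)</p>
<pre><code># added sketch: 'size' counts rows per group, NaN included
df['col3'] = df1.groupby(['col1', 'col2'])['col2'].transform('size')
</code></pre>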
| 2020-01-20T19:22:04.187000
| 3
|
https://pandas.pydata.org/docs/getting_started/intro_tutorials/06_calculate_statistics.html
|
How to calculate summary statistics?#
In [1]: import pandas as pd
Data used for this tutorial:
Titanic data
This tutorial uses the Titanic data set, stored as CSV. The data
consists of the following data columns:
PassengerId: Id of every passenger.
Survived: Indication whether passenger survived. 1 for yes and 0 for no.
Pclass: One out of the 3 ticket classes: Class 1, Class 2 and Class 3.
Name: Name of passenger.
Sex: Gender of passenger.
Age: Age of passenger in years.
SibSp: Number of siblings or spouses aboard.
Parch: Number of parents or children aboard.
Ticket: Ticket number of passenger.
Fare: Indicating the fare.
Cabin: Cabin number of passenger.
Embarked: Port of embarkation.
you can make this with groupby and transform.
df['col3'] = df1.groupby(['col1','col2'])['col2'].transform('count')
print(df)
col1 col2 col3
0 1111 A 1
1 1111 B 3
2 1111 B 3
3 1111 B 3
4 1111 C 1
5 2222 A 1
6 2222 B 1
7 2222 C 2
8 2222 C 2
In [2]: titanic = pd.read_csv("data/titanic.csv")
In [3]: titanic.head()
Out[3]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
1 2 1 1 ... 71.2833 C85 C
2 3 1 3 ... 7.9250 NaN S
3 4 1 1 ... 53.1000 C123 S
4 5 0 3 ... 8.0500 NaN S
[5 rows x 12 columns]
How to calculate summary statistics?#
Aggregating statistics#
What is the average age of the Titanic passengers?
In [4]: titanic["Age"].mean()
Out[4]: 29.69911764705882
Different statistics are available and can be applied to columns with
numerical data. Operations in general exclude missing data and operate
across rows by default.
What is the median age and ticket fare price of the Titanic passengers?
In [5]: titanic[["Age", "Fare"]].median()
Out[5]:
Age 28.0000
Fare 14.4542
dtype: float64
The statistic applied to multiple columns of a DataFrame (the selection of two columns
returns a DataFrame, see the subset data tutorial) is calculated for each numeric column.
The aggregating statistic can be calculated for multiple columns at the
same time. Remember the describe function from the first tutorial?
In [6]: titanic[["Age", "Fare"]].describe()
Out[6]:
Age Fare
count 714.000000 891.000000
mean 29.699118 32.204208
std 14.526497 49.693429
min 0.420000 0.000000
25% 20.125000 7.910400
50% 28.000000 14.454200
75% 38.000000 31.000000
max 80.000000 512.329200
Instead of the predefined statistics, specific combinations of
aggregating statistics for given columns can be defined using the
DataFrame.agg() method:
In [7]: titanic.agg(
...: {
...: "Age": ["min", "max", "median", "skew"],
...: "Fare": ["min", "max", "median", "mean"],
...: }
...: )
...:
Out[7]:
Age Fare
min 0.420000 0.000000
max 80.000000 512.329200
median 28.000000 14.454200
skew 0.389108 NaN
mean NaN 32.204208
To user guide: Details about descriptive statistics are provided in the user guide section on descriptive statistics.
Aggregating statistics grouped by category#
What is the average age for male versus female Titanic passengers?
In [8]: titanic[["Sex", "Age"]].groupby("Sex").mean()
Out[8]:
Age
Sex
female 27.915709
male 30.726645
As our interest is the average age for each gender, a subselection on
these two columns is made first: titanic[["Sex", "Age"]]. Next, the
groupby() method is applied on the Sex column to make a group per
category. The average age for each gender is calculated and
returned.
Calculating a given statistic (e.g. mean age) for each category in
a column (e.g. male/female in the Sex column) is a common pattern.
The groupby method is used to support this type of operations. This
fits in the more general split-apply-combine pattern:
Split the data into groups
Apply a function to each group independently
Combine the results into a data structure
The apply and combine steps are typically done together in pandas.
In the previous example, we explicitly selected the 2 columns first. If
not, the mean method is applied to each numerical column by passing numeric_only=True:
In [9]: titanic.groupby("Sex").mean(numeric_only=True)
Out[9]:
PassengerId Survived Pclass ... SibSp Parch Fare
Sex ...
female 431.028662 0.742038 2.159236 ... 0.694268 0.649682 44.479818
male 454.147314 0.188908 2.389948 ... 0.429809 0.235702 25.523893
[2 rows x 7 columns]
It does not make much sense to get the average value of the Pclass.
If we are only interested in the average age for each gender, the
selection of columns (rectangular brackets [] as usual) is supported
on the grouped data as well:
In [10]: titanic.groupby("Sex")["Age"].mean()
Out[10]:
Sex
female 27.915709
male 30.726645
Name: Age, dtype: float64
Note
The Pclass column contains numerical data but actually
represents 3 categories (or factors) with respectively the labels ‘1’,
‘2’ and ‘3’. Calculating statistics on these does not make much sense.
Therefore, pandas provides a Categorical data type to handle this
type of data. More information is provided in the user guide
Categorical data section.
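As an added, hedged illustration (not part of the original tutorial): converting such a column to the categorical dtype is typically a one-liner; the variable name below is just for the example.
# added sketch: treat the ticket class as a categorical variable rather than a number
pclass_cat = titanic["Pclass"].astype("category")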
What is the mean ticket fare price for each of the sex and cabin class combinations?
In [11]: titanic.groupby(["Sex", "Pclass"])["Fare"].mean()
Out[11]:
Sex Pclass
female 1 106.125798
2 21.970121
3 16.118810
male 1 67.226127
2 19.741782
3 12.661633
Name: Fare, dtype: float64
Grouping can be done by multiple columns at the same time. Provide the
column names as a list to the groupby() method.
To user guide: A full description on the split-apply-combine approach is provided in the user guide section on groupby operations.
Count number of records by category#
What is the number of passengers in each of the cabin classes?
In [12]: titanic["Pclass"].value_counts()
Out[12]:
3 491
1 216
2 184
Name: Pclass, dtype: int64
The value_counts() method counts the number of records for each
category in a column.
The function is a shortcut, as it is actually a groupby operation in combination with counting of the number of records
within each group:
In [13]: titanic.groupby("Pclass")["Pclass"].count()
Out[13]:
Pclass
1 216
2 184
3 491
Name: Pclass, dtype: int64
Note
Both size and count can be used in combination with
groupby. Whereas size includes NaN values and just provides
the number of rows (size of the table), count excludes the missing
values. In the value_counts method, use the dropna argument to
include or exclude the NaN values.
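A small added sketch (the tiny DataFrame below is hypothetical, not part of the Titanic data) of the difference between the three:
# added sketch: size vs count vs value_counts on data with a missing value
import numpy as np
demo = pd.DataFrame({"key": ["x", "x", "y"], "val": [1.0, np.nan, 2.0]})
demo.groupby("key")["val"].size()        # rows per group, NaN included -> x: 2, y: 1
demo.groupby("key")["val"].count()       # non-missing values only      -> x: 1, y: 1
demo["key"].value_counts(dropna=False)   # keep NaN categories, if any were present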
To user guide: The user guide has a dedicated section on value_counts, see the page on discretization.
REMEMBER
Aggregation statistics can be calculated on entire columns or rows.
groupby provides the power of the split-apply-combine pattern.
value_counts is a convenient shortcut to count the number of
entries in each category of a variable.
To user guide: A full description on the split-apply-combine approach is provided in the user guide pages about groupby operations.
| 708
| 1,020
|
How to return value_counts() when grouped by another column in pandas
I want to return the values in a value_counts of col2 back to the original dataframe after a pandas groupby based on col1.
i.e. I have...
col1 col2
0 1111 A
1 1111 B
2 1111 B
3 1111 B
4 1111 C
5 2222 A
6 2222 B
7 2222 C
8 2222 C
and I'd like...
col1 col2 col3
0 1111 A 1
1 1111 B 3
2 1111 B 3
3 1111 B 3
4 1111 C 1
5 2222 A 1
6 2222 B 1
7 2222 C 2
8 2222 C 2
I can get the values of col3 using a groupby and then passing the col2 value into value_counts, but I'm not sure how to then get this back into the dataframe.
Example:
d1 = {'col1': ['1111', '1111', '1111', '1111', '1111', '2222', '2222', '2222', '2222'],
'col2': ['A', 'B', 'B', 'B', 'C', 'A', 'B', 'C', 'C']}
df1 = pd.DataFrame(data=d1)
d2 = {'col1': ['1111', '1111', '1111', '1111', '1111', '2222', '2222', '2222', '2222'],
'col2': ['A', 'B', 'B', 'B', 'C', 'A', 'B', 'C', 'C'],
'col3': [1, 3, 3, 3, 1, 1, 1, 2, 2]}
df2 = pd.DataFrame(data=d2)
print(df1)
print(df2)
counts = df1.groupby('col1').apply(lambda x: x.col2.value_counts()[x.col2])
print(counts)
|
65,806,080
|
In Pandas, how to create a unique ID based on the common interrelation of other columns?
|
<p>I have a dataframe with two ID columns. I need to set a unique common interrelated ID with the following condition: if rows share either ID_1 or ID_2, they must have the same common_ID (ID_3).</p>
<p>The dataframe looks like:</p>
<pre><code>df = pd.DataFrame({'ID_1': ['111', '111', '222', '333', '333', '444', '555', '666', '666', '777'],
'ID_2': ['AAA', 'BBB', 'AAA', 'BBB', 'CCC', 'DDD', 'EEE', 'DDD', 'FFF', 'CCC']})
</code></pre>
<p>The desired output should be as follow:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID_1</th>
<th>ID_2</th>
<th>ID_3</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td>AAA</td>
<td>1</td>
</tr>
<tr>
<td>111</td>
<td>BBB</td>
<td>1</td>
</tr>
<tr>
<td>222</td>
<td>AAA</td>
<td>1</td>
</tr>
<tr>
<td>333</td>
<td>BBB</td>
<td>1</td>
</tr>
<tr>
<td>333</td>
<td>CCC</td>
<td>1</td>
</tr>
<tr>
<td>444</td>
<td>DDD</td>
<td>2</td>
</tr>
<tr>
<td>555</td>
<td>EEE</td>
<td>3</td>
</tr>
<tr>
<td>666</td>
<td>DDD</td>
<td>2</td>
</tr>
<tr>
<td>666</td>
<td>FFF</td>
<td>2</td>
</tr>
<tr>
<td>777</td>
<td>CCC</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<pre><code>df_output = pd.DataFrame({'ID_1': ['111', '111', '222', '333', '333', '444', '555', '666', '666', '777'],
'ID_2': ['AAA', 'BBB', 'AAA', 'BBB', 'CCC', 'DDD', 'EEE', 'DDD', 'FFF', 'CCC'],
'ID_3': ['1', '1', '1', '1', '1', '2', '3', '2', '2', '1']})
</code></pre>
<p>to clarify the conditions</p>
<p>In the 1st and 2nd rows ID_1 is the same, so they must have the same ID_3.</p>
<p>The 3rd row has the same ID_2 as 1st row, so its ID_3 must be the same as 1st row = 1.</p>
<p>The 4th row has the same ID_2 as 2nd row, that's why it must be set the same ID_3 as 2nd = 1.</p>
<p>The 5th row has the same ID_1 as 4th, so ID_3 = 1.</p>
<p>The 6th row has a unique combination of ID_1 and ID_2 at this moment, so it's marked as ID_3 = 2.</p>
<p>Then the 7th row = 3.</p>
<p>But 8th has the same ID_2 as 6th, so ID_3 = 2.</p>
<p>and so on</p>
| 65,806,488
| 2021-01-20T08:53:49.430000
| 1
| 2
| 2
| 120
|
python|pandas
|
<p>I think we can use <a href="https://pypi.org/project/networkx/" rel="nofollow noreferrer"><code>networkx</code></a> to solve this:</p>
<pre><code>import networkx as nx
G=nx.Graph()
G.add_edges_from(df[['ID_1','ID_2']].to_numpy().tolist())
cc = list(nx.connected_components(G))
L=[dict.fromkeys(b,a) for a, b in enumerate(cc,1)]
d={k: v for d in L for k, v in d.items()}
out = df.assign(ID_3=df['ID_2'].map(d))
</code></pre>
<hr />
<pre><code>print(out)
ID_1 ID_2 ID_3
0 111 AAA 1
1 111 BBB 1
2 222 AAA 1
3 333 BBB 1
4 333 CCC 1
5 444 DDD 2
6 555 EEE 3
7 666 DDD 2
8 666 FFF 2
9 777 CCC 1
</code></pre>
<p>To see connected components:</p>
<pre><code>print(cc)
[{'111', '777', '222', 'AAA', '333', 'BBB', 'CCC'},
{'DDD', 'FFF', '666', '444'}, {'555', 'EEE'}]
</code></pre>
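<p>(Added, hedged note: every value from both columns becomes a node in the graph, so the mapping <code>d</code> covers <code>ID_1</code> as well; mapping that column instead should give identical labels.)</p>
<pre><code># added sketch: same component labels, keyed on the other ID column
out2 = df.assign(ID_3=df['ID_1'].map(d))
</code></pre>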
| 2021-01-20T09:17:42.873000
| 3
|
https://pandas.pydata.org/docs/getting_started/intro_tutorials/08_combine_dataframes.html
|
How to combine data from multiple tables?#
In [1]: import pandas as pd
Data used for this tutorial:
Air quality Nitrate data
For this tutorial, air quality data about \(NO_2\) is used, made available by
OpenAQ and downloaded using the
py-openaq package.
The air_quality_no2_long.csv data set provides \(NO_2\)
values for the measurement stations FR04014, BETR801 and London
I think we can use networkx to solve this:
import networkx as nx
G=nx.Graph()
G.add_edges_from(df[['ID_1','ID_2']].to_numpy().tolist())
cc = list(nx.connected_components(G))
L=[dict.fromkeys(b,a) for a, b in enumerate(cc,1)]
d={k: v for d in L for k, v in d.items()}
out = df.assign(ID_3=df['ID_2'].map(d))
print(out)
ID_1 ID_2 ID_3
0 111 AAA 1
1 111 BBB 1
2 222 AAA 1
3 333 BBB 1
4 333 CCC 1
5 444 DDD 2
6 555 EEE 3
7 666 DDD 2
8 666 FFF 2
9 777 CCC 1
To see connected components:
print(cc)
[{'111', '777', '222', 'AAA', '333', 'BBB', 'CCC'},
{'DDD', 'FFF', '666', '444'}, {'555', 'EEE'}]
Westminster in respectively Paris, Antwerp and London.
In [2]: air_quality_no2 = pd.read_csv("data/air_quality_no2_long.csv",
...: parse_dates=True)
...:
In [3]: air_quality_no2 = air_quality_no2[["date.utc", "location",
...: "parameter", "value"]]
...:
In [4]: air_quality_no2.head()
Out[4]:
date.utc location parameter value
0 2019-06-21 00:00:00+00:00 FR04014 no2 20.0
1 2019-06-20 23:00:00+00:00 FR04014 no2 21.8
2 2019-06-20 22:00:00+00:00 FR04014 no2 26.5
3 2019-06-20 21:00:00+00:00 FR04014 no2 24.9
4 2019-06-20 20:00:00+00:00 FR04014 no2 21.4
Air quality Particulate matter data
For this tutorial, air quality data about Particulate
matter less than 2.5 micrometers is used, made available by
OpenAQ and downloaded using the
py-openaq package.
The air_quality_pm25_long.csv data set provides \(PM_{25}\)
values for the measurement stations FR04014, BETR801 and London
Westminster in respectively Paris, Antwerp and London.
In [5]: air_quality_pm25 = pd.read_csv("data/air_quality_pm25_long.csv",
...: parse_dates=True)
...:
In [6]: air_quality_pm25 = air_quality_pm25[["date.utc", "location",
...: "parameter", "value"]]
...:
In [7]: air_quality_pm25.head()
Out[7]:
date.utc location parameter value
0 2019-06-18 06:00:00+00:00 BETR801 pm25 18.0
1 2019-06-17 08:00:00+00:00 BETR801 pm25 6.5
2 2019-06-17 07:00:00+00:00 BETR801 pm25 18.5
3 2019-06-17 06:00:00+00:00 BETR801 pm25 16.0
4 2019-06-17 05:00:00+00:00 BETR801 pm25 7.5
How to combine data from multiple tables?#
Concatenating objects#
I want to combine the measurements of \(NO_2\) and \(PM_{25}\), two tables with a similar structure, in a single table.
In [8]: air_quality = pd.concat([air_quality_pm25, air_quality_no2], axis=0)
In [9]: air_quality.head()
Out[9]:
date.utc location parameter value
0 2019-06-18 06:00:00+00:00 BETR801 pm25 18.0
1 2019-06-17 08:00:00+00:00 BETR801 pm25 6.5
2 2019-06-17 07:00:00+00:00 BETR801 pm25 18.5
3 2019-06-17 06:00:00+00:00 BETR801 pm25 16.0
4 2019-06-17 05:00:00+00:00 BETR801 pm25 7.5
The concat() function performs concatenation operations of multiple
tables along one of the axes (row-wise or column-wise).
By default concatenation is along axis 0, so the resulting table combines the rows
of the input tables. Let’s check the shape of the original and the
concatenated tables to verify the operation:
In [10]: print('Shape of the ``air_quality_pm25`` table: ', air_quality_pm25.shape)
Shape of the ``air_quality_pm25`` table: (1110, 4)
In [11]: print('Shape of the ``air_quality_no2`` table: ', air_quality_no2.shape)
Shape of the ``air_quality_no2`` table: (2068, 4)
In [12]: print('Shape of the resulting ``air_quality`` table: ', air_quality.shape)
Shape of the resulting ``air_quality`` table: (3178, 4)
Hence, the resulting table has 3178 = 1110 + 2068 rows.
Note
The axis argument will return in a number of pandas
methods that can be applied along an axis. A DataFrame has two
corresponding axes: the first running vertically downwards across rows
(axis 0), and the second running horizontally across columns (axis 1).
Most operations like concatenation or summary statistics are by default
across rows (axis 0), but can be applied across columns as well.
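As an added sketch (the two tiny frames below are hypothetical, not the air quality data), the same concat call applied across columns instead of rows:
# added sketch: column-wise concatenation with axis=1
left = pd.DataFrame({"a": [1, 2]})
right = pd.DataFrame({"b": [3, 4]})
pd.concat([left, right], axis=1)   # columns a and b end up side by side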
Sorting the table on the datetime information illustrates also the
combination of both tables, with the parameter column defining the
origin of the table (either no2 from table air_quality_no2 or
pm25 from table air_quality_pm25):
In [13]: air_quality = air_quality.sort_values("date.utc")
In [14]: air_quality.head()
Out[14]:
date.utc location parameter value
2067 2019-05-07 01:00:00+00:00 London Westminster no2 23.0
1003 2019-05-07 01:00:00+00:00 FR04014 no2 25.0
100 2019-05-07 01:00:00+00:00 BETR801 pm25 12.5
1098 2019-05-07 01:00:00+00:00 BETR801 no2 50.5
1109 2019-05-07 01:00:00+00:00 London Westminster pm25 8.0
In this specific example, the parameter column provided by the data
ensures that each of the original tables can be identified. This is not
always the case. The concat function provides a convenient solution
with the keys argument, adding an additional (hierarchical) row
index. For example:
In [15]: air_quality_ = pd.concat([air_quality_pm25, air_quality_no2], keys=["PM25", "NO2"])
In [16]: air_quality_.head()
Out[16]:
date.utc location parameter value
PM25 0 2019-06-18 06:00:00+00:00 BETR801 pm25 18.0
1 2019-06-17 08:00:00+00:00 BETR801 pm25 6.5
2 2019-06-17 07:00:00+00:00 BETR801 pm25 18.5
3 2019-06-17 06:00:00+00:00 BETR801 pm25 16.0
4 2019-06-17 05:00:00+00:00 BETR801 pm25 7.5
Note
The existence of multiple row/column indices at the same time
has not been mentioned within these tutorials. Hierarchical indexing
or MultiIndex is an advanced and powerful pandas feature to analyze
higher dimensional data.
Multi-indexing is out of scope for this pandas introduction. For the
moment, remember that the function reset_index can be used to
convert any level of an index to a column, e.g.
air_quality.reset_index(level=0)
To user guide: Feel free to dive into the world of multi-indexing at the user guide section on advanced indexing.
To user guide: More options on table concatenation (row and column
wise) and how concat can be used to define the logic (union or
intersection) of the indexes on the other axes is provided at the section on
object concatenation.
Join tables using a common identifier#
Add the station coordinates, provided by the stations metadata table, to the corresponding rows in the measurements table.
Warning
The air quality measurement station coordinates are stored in a data
file air_quality_stations.csv, downloaded using the
py-openaq package.
In [17]: stations_coord = pd.read_csv("data/air_quality_stations.csv")
In [18]: stations_coord.head()
Out[18]:
location coordinates.latitude coordinates.longitude
0 BELAL01 51.23619 4.38522
1 BELHB23 51.17030 4.34100
2 BELLD01 51.10998 5.00486
3 BELLD02 51.12038 5.02155
4 BELR833 51.32766 4.36226
Note
The stations used in this example (FR04014, BETR801 and London
Westminster) are just three entries enlisted in the metadata table. We
only want to add the coordinates of these three to the measurements
table, each on the corresponding rows of the air_quality table.
In [19]: air_quality.head()
Out[19]:
date.utc location parameter value
2067 2019-05-07 01:00:00+00:00 London Westminster no2 23.0
1003 2019-05-07 01:00:00+00:00 FR04014 no2 25.0
100 2019-05-07 01:00:00+00:00 BETR801 pm25 12.5
1098 2019-05-07 01:00:00+00:00 BETR801 no2 50.5
1109 2019-05-07 01:00:00+00:00 London Westminster pm25 8.0
In [20]: air_quality = pd.merge(air_quality, stations_coord, how="left", on="location")
In [21]: air_quality.head()
Out[21]:
date.utc ... coordinates.longitude
0 2019-05-07 01:00:00+00:00 ... -0.13193
1 2019-05-07 01:00:00+00:00 ... 2.39390
2 2019-05-07 01:00:00+00:00 ... 2.39390
3 2019-05-07 01:00:00+00:00 ... 4.43182
4 2019-05-07 01:00:00+00:00 ... 4.43182
[5 rows x 6 columns]
Using the merge() function, for each of the rows in the
air_quality table, the corresponding coordinates are added from the
air_quality_stations_coord table. Both tables have the column
location in common which is used as a key to combine the
information. By choosing the left join, only the locations available
in the air_quality (left) table, i.e. FR04014, BETR801 and London
Westminster, end up in the resulting table. The merge function
supports multiple join options similar to database-style operations.
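As an added, hedged sketch of those join options (toy tables, not the air quality data):
# added sketch: left join keeps every measurement row, inner join keeps only matching keys
measurements = pd.DataFrame({"location": ["FR04014", "BETR801"], "value": [20.0, 26.5]})
coords = pd.DataFrame({"location": ["FR04014", "BELAL01"], "lat": [48.8, 51.2]})
pd.merge(measurements, coords, how="left", on="location")   # BETR801 gets a NaN latitude
pd.merge(measurements, coords, how="inner", on="location")  # only FR04014 remains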
Add the parameters’ full description and name, provided by the parameters metadata table, to the measurements table.
Warning
The air quality parameters metadata are stored in a data file
air_quality_parameters.csv, downloaded using the
py-openaq package.
In [22]: air_quality_parameters = pd.read_csv("data/air_quality_parameters.csv")
In [23]: air_quality_parameters.head()
Out[23]:
id description name
0 bc Black Carbon BC
1 co Carbon Monoxide CO
2 no2 Nitrogen Dioxide NO2
3 o3 Ozone O3
4 pm10 Particulate matter less than 10 micrometers in... PM10
In [24]: air_quality = pd.merge(air_quality, air_quality_parameters,
....: how='left', left_on='parameter', right_on='id')
....:
In [25]: air_quality.head()
Out[25]:
date.utc ... name
0 2019-05-07 01:00:00+00:00 ... NO2
1 2019-05-07 01:00:00+00:00 ... NO2
2 2019-05-07 01:00:00+00:00 ... NO2
3 2019-05-07 01:00:00+00:00 ... PM2.5
4 2019-05-07 01:00:00+00:00 ... NO2
[5 rows x 9 columns]
Compared to the previous example, there is no common column name.
However, the parameter column in the air_quality table and the
id column in the air_quality_parameters_name both provide the
measured variable in a common format. The left_on and right_on
arguments are used here (instead of just on) to make the link
between the two tables.
To user guide: pandas also supports inner, outer, and right joins.
More information on join/merge of tables is provided in the user guide section on
database style merging of tables. Or have a look at the
comparison with SQL page.
REMEMBER
Multiple tables can be concatenated both column-wise and row-wise using
the concat function.
For database-like merging/joining of tables, use the merge
function.
To user guide: See the user guide for a full description of the various facilities to combine data tables.
| 388
| 1,048
|
In Pandas, how to create a unique ID based on the common interrelation of other columns?
I have a dataframe with two ID columns. I need to set a unique common interrelated ID with the following condition: if rows share either ID_1 or ID_2, they must have the same common_ID (ID_3).
The dataframe looks like:
df = pd.DataFrame({'ID_1': ['111', '111', '222', '333', '333', '444', '555', '666', '666', '777'],
'ID_2': ['AAA', 'BBB', 'AAA', 'BBB', 'CCC', 'DDD', 'EEE', 'DDD', 'FFF', 'CCC']})
The desired output should be as follow:
ID_1
ID_2
ID_3
111
AAA
1
111
BBB
1
222
AAA
1
333
BBB
1
333
CCC
1
444
DDD
2
555
EEE
3
666
DDD
2
666
FFF
2
777
CCC
1
df_output = pd.DataFrame({'ID_1': ['111', '111', '222', '333', '333', '444', '555', '666', '666', '777'],
'ID_2': ['AAA', 'BBB', 'AAA', 'BBB', 'CCC', 'DDD', 'EEE', 'DDD', 'FFF', 'CCC'],
'ID_3': ['1', '1', '1', '1', '1', '2', '3', '2', '2', '1']})
to clarify the conditions
In the 1st and 2nd rows ID_1 is the same, so they must have the same ID_3.
The 3rd row has the same ID_2 as 1st row, so its ID_3 must be the same as 1st row = 1.
The 4th row has the same ID_2 as 2nd row, that's why it must be set the same ID_3 as 2nd = 1.
The 5th row has the same ID_1 as 4th, so ID_3 = 1.
The 6th row has a unique combination of ID_1 and ID_2 at this moment, so it's marked as ID_3 = 2.
Then the 7th row = 3.
But 8th has the same ID_2 as 6th, so ID_3 = 2.
and so on
|
65,383,866
|
Flatten lists of list for each cell in a pandas column
|
<p>I have a DF that looks like this</p>
<pre><code>DF =
index goal features
0 1 [[5.20281045, 5.3353545, 7.343434, ...],[2.33435, 4.2133, ...], ...]]
1 0 [[7.23123213, 1.2323123, 2.232133, ...],[1,45456, 0.2313, 2.23213], ...]]
...
</code></pre>
<p>The features column holds a very large number of values in a list of lists. The number of elements is not the same across rows, so I want to pad with 0 to create a uniform input and also flatten the list of lists into a single list.</p>
<pre><code>DF_Desired
index goal features
0 1 [5.20281045, 5.3353545, 7.343434, ..., 2.33435, 4.2133, ... , ...]
0 0 [7.23123213, 1.2323123, 2.232133, ..., 1,45456, 0.2313, 2.23213, ...]
</code></pre>
<p>Here is my code:</p>
<pre><code># Flatten each Lists
flat_list = []
for sublist in data["features"]:
for item in sublist:
flat_list.append(item)
or
flat_list = list(itertools.chain.from_iterable(data["features"]))
</code></pre>
<p>I (of course) cannot enter flat_list straight into the DF as its length does not match
"ValueError: Length of values (478) does not match length of index (2)"</p>
<pre><code># Make the Lists equal in length:
length = max(map(len, df["features"]))
X = np.array([xi+[0]*(length-len(xi)) for xi in df["features"])
print(X)
</code></pre>
<p>What this should do is flatten each cell of df["features"] into a single list and then add 0s to pad each list where needed. But it just returns:</p>
<pre><code>[[5.20281045, 5.3353545, 7.343434, ...]
[2.33435, 4.2133, ...]
[...]
...
[7.23123213, 1.2323123, 2.232133, ...]
[1,45456, 0.2313, 2.23213 ...]]
</code></pre>
<p>So what exactly did I do wrong?</p>
| 65,384,469
| 2020-12-20T19:21:07.103000
| 2
| null | 0
| 890
|
python|pandas
|
<p>You can sum each list with an empty one to get a flat list:</p>
<pre><code>DF['features'] = DF.features.apply(lambda x: sum(x, []))
</code></pre>
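<p>(Added, hedged note: <code>sum(x, [])</code> re-copies the accumulated list at every step, so for very long lists of lists an <code>itertools.chain</code> based variant may scale better.)</p>
<pre><code># added sketch: flatten each cell without quadratic copying
from itertools import chain
DF['features'] = DF.features.apply(lambda x: list(chain.from_iterable(x)))
</code></pre>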
| 2020-12-20T20:27:21.943000
| 3
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.explode.html
|
pandas.DataFrame.explode#
pandas.DataFrame.explode#
DataFrame.explode(column, ignore_index=False)[source]#
Transform each element of a list-like to a row, replicating index values.
New in version 0.25.0.
Parameters
column : IndexLabel. Column(s) to explode.
For multiple columns, specify a non-empty list with each element
be str or tuple, and all specified columns their list-like data
You can sum each list with an empty one to get a flat list:
DF['features'] = DF.features.apply(lambda x: sum(x, []))
on same row of the frame must have matching length.
New in version 1.3.0: Multi-column explode
ignore_index : bool, default False. If True, the resulting index will be labeled 0, 1, …, n - 1.
New in version 1.1.0.
Returns
DataFrame: Exploded lists to rows of the subset columns;
index will be duplicated for these rows.
Raises
ValueError
If columns of the frame are not unique.
If specified columns to explode is empty list.
If specified columns to explode have not matching count of
elements rowwise in the frame.
See also
DataFrame.unstackPivot a level of the (necessarily hierarchical) index labels.
DataFrame.meltUnpivot a DataFrame from wide format to long format.
Series.explodeExplode a DataFrame from list-like columns to long format.
Notes
This routine will explode list-likes including lists, tuples, sets,
Series, and np.ndarray. The result dtype of the subset rows will
be object. Scalars will be returned unchanged, and empty list-likes will
result in a np.nan for that row. In addition, the ordering of rows in the
output will be non-deterministic when exploding sets.
Reference the user guide for more examples.
Examples
>>> df = pd.DataFrame({'A': [[0, 1, 2], 'foo', [], [3, 4]],
... 'B': 1,
... 'C': [['a', 'b', 'c'], np.nan, [], ['d', 'e']]})
>>> df
A B C
0 [0, 1, 2] 1 [a, b, c]
1 foo 1 NaN
2 [] 1 []
3 [3, 4] 1 [d, e]
Single-column explode.
>>> df.explode('A')
A B C
0 0 1 [a, b, c]
0 1 1 [a, b, c]
0 2 1 [a, b, c]
1 foo 1 NaN
2 NaN 1 []
3 3 1 [d, e]
3 4 1 [d, e]
Multi-column explode.
>>> df.explode(list('AC'))
A B C
0 0 1 a
0 1 1 b
0 2 1 c
1 foo 1 NaN
2 NaN 1 NaN
3 3 1 d
3 4 1 e
| 390
| 506
|
Flatten lists of list for each cell in a pandas column
I have a DF that looks like this
DF =
index goal features
0 1 [[5.20281045, 5.3353545, 7.343434, ...],[2.33435, 4.2133, ...], ...]]
1 0 [[7.23123213, 1.2323123, 2.232133, ...],[1,45456, 0.2313, 2.23213], ...]]
...
The features column holds a very large number of values in a list of lists. The number of elements is not the same across rows, so I want to pad with 0 to create a uniform input and also flatten the list of lists into a single list.
DF_Desired
index goal features
0 1 [5.20281045, 5.3353545, 7.343434, ..., 2.33435, 4.2133, ... , ...]
0 0 [7.23123213, 1.2323123, 2.232133, ..., 1,45456, 0.2313, 2.23213, ...]
Here is my code:
# Flatten each Lists
flat_list = []
for sublist in data["features"]:
for item in sublist:
flat_list.append(item)
or
flat_list = list(itertools.chain.from_iterable(data["features"]))
I (of course) cannot enter flat_list straight into the DF as its length does not match
"ValueError: Length of values (478) does not match length of index (2)"
# Make the Lists equal in length:
length = max(map(len, df["features"]))
X = np.array([xi+[0]*(length-len(xi)) for xi in df["features"])
print(X)
What this should do is flatten each cell of df["features"] into a single list and then add 0s to pad each list where needed. But it just returns:
[[5.20281045, 5.3353545, 7.343434, ...]
[2.33435, 4.2133, ...]
[...]
...
[7.23123213, 1.2323123, 2.232133, ...]
[1,45456, 0.2313, 2.23213 ...]]
So what exactly did I do wrong?
|
63,938,911
|
Concat sequence number to each row in a group using Pandas and R
|
<p>I have a data frame as shown below (both R and Python versions are given below)</p>
<pre><code>df = pd.DataFrame({'person_id': [11,11,11,12,12,12,12,13,13,13,13,13,14,14,14]})
df['enc_id'] = [1134567890,1134567890,1134567890,3456789210,3456789210,3456789210,3456789210,5643271890,5643271890,5643271890,5643271890,5643271890,2468013579,2468013579,2468013579]
person_id <- c(11,11,11,12,12,12,12,13,13,13,13,13,14,14,14)
enc_id <- c(1134567890,1134567890,1134567890,3456789210,3456789210,3456789210,3456789210,5643271890,5643271890,5643271890,5643271890,5643271890,2468013579,2468013579,2468013579)
df <- data.frame(person_id, enc_id)
</code></pre>
<p>I would like to concat a sequence number to <code>enc_id</code> for each person</p>
<p>I wrote something like below in Python</p>
<pre><code>df['new_enc_id'] = df['enc_id'].map(str) + (df.groupby('person_id').cumcount()+1).map(str)
</code></pre>
<p>Can you help me with the below questions?</p>
<ol>
<li><p>How can I do this in R?</p>
</li>
<li><p>Any elegant way to do this in Python?</p>
</li>
</ol>
<p>I expect my output to be as shown below. You can see that the <code>sequence number</code> is concatenated for each group and <code>not added</code>.</p>
<p><a href="https://i.stack.imgur.com/vBfKq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vBfKq.png" alt="enter image description here" /></a></p>
| 63,939,056
| 2020-09-17T13:17:51.113000
| 4
| null | 2
| 128
|
python|pandas
|
<p>In R</p>
<pre><code>df = df %>% group_by(person_id) %>% dplyr::mutate(new_enc_id = paste0(enc_id,row_number()) )
</code></pre>
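<p>(Added, hedged sketch for question 2: in Python, a slightly tidier variant of the line in the question uses <code>astype(str)</code> instead of <code>map(str)</code>.)</p>
<pre><code># added sketch: concatenate the per-person sequence number as a string
df['new_enc_id'] = df['enc_id'].astype(str) + (df.groupby('person_id').cumcount() + 1).astype(str)
</code></pre>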
| 2020-09-17T13:24:25.860000
| 3
|
https://pandas.pydata.org/docs/dev/user_guide/merging.html
|
Merge, join, concatenate and compare#
Merge, join, concatenate and compare#
pandas provides various facilities for easily combining together Series or
DataFrame with various kinds of set logic for the indexes
and relational algebra functionality in the case of join / merge-type
operations.
In addition, pandas also provides utilities to compare two Series or DataFrame
and summarize their differences.
Concatenating objects#
The concat() function (in the main pandas namespace) does all of
the heavy lifting of performing concatenation operations along an axis while
performing optional set logic (union or intersection) of the indexes (if any) on
the other axes. Note that I say “if any” because there is only a single possible
axis of concatenation for Series.
Before diving into all of the details of concat and what it can do, here is
a simple example:
In [1]: df1 = pd.DataFrame(
...: {
...: "A": ["A0", "A1", "A2", "A3"],
...: "B": ["B0", "B1", "B2", "B3"],
...: "C": ["C0", "C1", "C2", "C3"],
...: "D": ["D0", "D1", "D2", "D3"],
In R
library(dplyr)
df = df %>% group_by(person_id) %>% dplyr::mutate(new_enc_id = paste0(enc_id, row_number()))
...: },
...: index=[0, 1, 2, 3],
...: )
...:
In [2]: df2 = pd.DataFrame(
...: {
...: "A": ["A4", "A5", "A6", "A7"],
...: "B": ["B4", "B5", "B6", "B7"],
...: "C": ["C4", "C5", "C6", "C7"],
...: "D": ["D4", "D5", "D6", "D7"],
...: },
...: index=[4, 5, 6, 7],
...: )
...:
In [3]: df3 = pd.DataFrame(
...: {
...: "A": ["A8", "A9", "A10", "A11"],
...: "B": ["B8", "B9", "B10", "B11"],
...: "C": ["C8", "C9", "C10", "C11"],
...: "D": ["D8", "D9", "D10", "D11"],
...: },
...: index=[8, 9, 10, 11],
...: )
...:
In [4]: frames = [df1, df2, df3]
In [5]: result = pd.concat(frames)
Like its sibling function on ndarrays, numpy.concatenate, pandas.concat
takes a list or dict of homogeneously-typed objects and concatenates them with
some configurable handling of “what to do with the other axes”:
pd.concat(
objs,
axis=0,
join="outer",
ignore_index=False,
keys=None,
levels=None,
names=None,
verify_integrity=False,
copy=True,
)
objs : a sequence or mapping of Series or DataFrame objects. If a
dict is passed, the sorted keys will be used as the keys argument, unless
it is passed, in which case the values will be selected (see below). Any None
objects will be dropped silently unless they are all None in which case a
ValueError will be raised.
axis : {0, 1, …}, default 0. The axis to concatenate along.
join : {‘inner’, ‘outer’}, default ‘outer’. How to handle indexes on
other axis(es). Outer for union and inner for intersection.
ignore_index : boolean, default False. If True, do not use the index
values on the concatenation axis. The resulting axis will be labeled 0, …,
n - 1. This is useful if you are concatenating objects where the
concatenation axis does not have meaningful indexing information. Note
the index values on the other axes are still respected in the join.
keys : sequence, default None. Construct hierarchical index using the
passed keys as the outermost level. If multiple levels passed, should
contain tuples.
levels : list of sequences, default None. Specific levels (unique values)
to use for constructing a MultiIndex. Otherwise they will be inferred from the
keys.
names : list, default None. Names for the levels in the resulting
hierarchical index.
verify_integrity : boolean, default False. Check whether the new
concatenated axis contains duplicates. This can be very expensive relative
to the actual data concatenation. A short sketch of this check follows this list.
copy : boolean, default True. If False, do not copy data unnecessarily.
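A minimal sketch of the verify_integrity check (two throwaway frames that share the index value 0):
a = pd.DataFrame({"v": [1]}, index=[0])
b = pd.DataFrame({"v": [2]}, index=[0])
try:
    pd.concat([a, b], verify_integrity=True)
except ValueError as err:
    print(err)  # reports the overlapping index values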
Without a little bit of context many of these arguments don’t make much sense.
Let’s revisit the above example. Suppose we wanted to associate specific keys
with each of the pieces of the chopped up DataFrame. We can do this using the
keys argument:
In [6]: result = pd.concat(frames, keys=["x", "y", "z"])
As you can see (if you’ve read the rest of the documentation), the resulting
object’s index has a hierarchical index. This
means that we can now select out each chunk by key:
In [7]: result.loc["y"]
Out[7]:
A B C D
4 A4 B4 C4 D4
5 A5 B5 C5 D5
6 A6 B6 C6 D6
7 A7 B7 C7 D7
It’s not a stretch to see how this can be very useful. More detail on this
functionality below.
Note
It is worth noting that concat() makes a full copy of the data, and that constantly
reusing this function can create a significant performance hit. If you need
to use the operation over several datasets, use a list comprehension.
frames = [ process_your_file(f) for f in files ]
result = pd.concat(frames)
Note
When concatenating DataFrames with named axes, pandas will attempt to preserve
these index/column names whenever possible. In the case where all inputs share a
common name, this name will be assigned to the result. When the input names do
not all agree, the result will be unnamed. The same is true for MultiIndex,
but the logic is applied separately on a level-by-level basis.
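A minimal sketch of this name-preservation rule (two throwaway frames whose indexes share the name "id", plus one that does not):
a = pd.DataFrame({"v": [1, 2]}, index=pd.Index([0, 1], name="id"))
b = pd.DataFrame({"v": [3, 4]}, index=pd.Index([2, 3], name="id"))
pd.concat([a, b]).index.name  # "id" -- the common name is kept
c = pd.DataFrame({"v": [5]}, index=pd.Index([4], name="other"))
pd.concat([a, c]).index.name  # None -- the names disagree, so the result is unnamed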
Set logic on the other axes#
When gluing together multiple DataFrames, you have a choice of how to handle
the other axes (other than the one being concatenated). This can be done in
the following two ways:
Take the union of them all, join='outer'. This is the default
option as it results in zero information loss.
Take the intersection, join='inner'.
Here is an example of each of these methods. First, the default join='outer'
behavior:
In [8]: df4 = pd.DataFrame(
...: {
...: "B": ["B2", "B3", "B6", "B7"],
...: "D": ["D2", "D3", "D6", "D7"],
...: "F": ["F2", "F3", "F6", "F7"],
...: },
...: index=[2, 3, 6, 7],
...: )
...:
In [9]: result = pd.concat([df1, df4], axis=1)
Here is the same thing with join='inner':
In [10]: result = pd.concat([df1, df4], axis=1, join="inner")
Lastly, suppose we just wanted to reuse the exact index from the original
DataFrame:
In [11]: result = pd.concat([df1, df4], axis=1).reindex(df1.index)
Similarly, we could index before the concatenation:
In [12]: pd.concat([df1, df4.reindex(df1.index)], axis=1)
Out[12]:
A B C D B D F
0 A0 B0 C0 D0 NaN NaN NaN
1 A1 B1 C1 D1 NaN NaN NaN
2 A2 B2 C2 D2 B2 D2 F2
3 A3 B3 C3 D3 B3 D3 F3
Ignoring indexes on the concatenation axis#
For DataFrame objects which don’t have a meaningful index, you may wish
to append them and ignore the fact that they may have overlapping indexes. To
do this, use the ignore_index argument:
In [13]: result = pd.concat([df1, df4], ignore_index=True, sort=False)
Concatenating with mixed ndims#
You can concatenate a mix of Series and DataFrame objects. The
Series will be transformed to DataFrame with the column name as
the name of the Series.
In [14]: s1 = pd.Series(["X0", "X1", "X2", "X3"], name="X")
In [15]: result = pd.concat([df1, s1], axis=1)
Note
Since we’re concatenating a Series to a DataFrame, we could have
achieved the same result with DataFrame.assign(). To concatenate an
arbitrary number of pandas objects (DataFrame or Series), use
concat.
If unnamed Series are passed they will be numbered consecutively.
In [16]: s2 = pd.Series(["_0", "_1", "_2", "_3"])
In [17]: result = pd.concat([df1, s2, s2, s2], axis=1)
Passing ignore_index=True will drop all name references.
In [18]: result = pd.concat([df1, s1], axis=1, ignore_index=True)
More concatenating with group keys#
A fairly common use of the keys argument is to override the column names
when creating a new DataFrame based on existing Series.
Notice how the default behaviour consists of letting the resulting DataFrame
inherit the parent Series’ name, when these existed.
In [19]: s3 = pd.Series([0, 1, 2, 3], name="foo")
In [20]: s4 = pd.Series([0, 1, 2, 3])
In [21]: s5 = pd.Series([0, 1, 4, 5])
In [22]: pd.concat([s3, s4, s5], axis=1)
Out[22]:
foo 0 1
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Through the keys argument we can override the existing column names.
In [23]: pd.concat([s3, s4, s5], axis=1, keys=["red", "blue", "yellow"])
Out[23]:
red blue yellow
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Let’s consider a variation of the very first example presented:
In [24]: result = pd.concat(frames, keys=["x", "y", "z"])
You can also pass a dict to concat in which case the dict keys will be used
for the keys argument (unless other keys are specified):
In [25]: pieces = {"x": df1, "y": df2, "z": df3}
In [26]: result = pd.concat(pieces)
In [27]: result = pd.concat(pieces, keys=["z", "y"])
The MultiIndex created has levels that are constructed from the passed keys and
the index of the DataFrame pieces:
In [28]: result.index.levels
Out[28]: FrozenList([['z', 'y'], [4, 5, 6, 7, 8, 9, 10, 11]])
If you wish to specify other levels (as will occasionally be the case), you can
do so using the levels argument:
In [29]: result = pd.concat(
....: pieces, keys=["x", "y", "z"], levels=[["z", "y", "x", "w"]], names=["group_key"]
....: )
....:
In [30]: result.index.levels
Out[30]: FrozenList([['z', 'y', 'x', 'w'], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]])
This is fairly esoteric, but it is actually necessary for implementing things
like GroupBy where the order of a categorical variable is meaningful.
Appending rows to a DataFrame#
If you have a series that you want to append as a single row to a DataFrame, you can convert the row into a
DataFrame and use concat
In [31]: s2 = pd.Series(["X0", "X1", "X2", "X3"], index=["A", "B", "C", "D"])
In [32]: result = pd.concat([df1, s2.to_frame().T], ignore_index=True)
You should use ignore_index with this method to instruct DataFrame to
discard its index. If you wish to preserve the index, you should construct an
appropriately-indexed DataFrame and append or concatenate those objects.
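A minimal sketch of that alternative (reusing s2 from above and assuming you want the new row labeled "new_row"):
row = s2.rename("new_row").to_frame().T  # one-row DataFrame labeled "new_row"
result = pd.concat([df1, row])           # keeps df1's index and appends "new_row"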
Database-style DataFrame or named Series joining/merging#
pandas has full-featured, high performance in-memory join operations
idiomatically very similar to relational databases like SQL. These methods
perform significantly better (in some cases well over an order of magnitude
better) than other open source implementations (like base::merge.data.frame
in R). The reason for this is careful algorithmic design and the internal layout
of the data in DataFrame.
See the cookbook for some advanced strategies.
Users who are familiar with SQL but new to pandas might be interested in a
comparison with SQL.
pandas provides a single function, merge(), as the entry point for
all standard database join operations between DataFrame or named Series objects:
pd.merge(
left,
right,
how="inner",
on=None,
left_on=None,
right_on=None,
left_index=False,
right_index=False,
sort=True,
suffixes=("_x", "_y"),
copy=True,
indicator=False,
validate=None,
)
left: A DataFrame or named Series object.
right: Another DataFrame or named Series object.
on: Column or index level names to join on. Must be found in both the left
and right DataFrame and/or Series objects. If not passed and left_index and
right_index are False, the intersection of the columns in the
DataFrames and/or Series will be inferred to be the join keys.
left_on: Columns or index levels from the left DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
right_on: Columns or index levels from the right DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
left_index: If True, use the index (row labels) from the left
DataFrame or Series as its join key(s). In the case of a DataFrame or Series with a MultiIndex
(hierarchical), the number of levels must match the number of join keys
from the right DataFrame or Series.
right_index: Same usage as left_index for the right DataFrame or Series
how: One of 'left', 'right', 'outer', 'inner', 'cross'. Defaults
to inner. See below for more detailed description of each method.
sort: Sort the result DataFrame by the join keys in lexicographical
order. Defaults to True, setting to False will improve performance
substantially in many cases.
suffixes: A tuple of string suffixes to apply to overlapping
columns. Defaults to ('_x', '_y').
copy: Always copy data (default True) from the passed DataFrame or named Series
objects, even when reindexing is not necessary. Cannot be avoided in many
cases but may improve performance / memory usage. The cases where copying
can be avoided are somewhat pathological but this option is provided
nonetheless.
indicator: Add a column to the output DataFrame called _merge
with information on the source of each row. _merge is Categorical-type
and takes on a value of left_only for observations whose merge key
only appears in 'left' DataFrame or Series, right_only for observations whose
merge key only appears in 'right' DataFrame or Series, and both if the
observation’s merge key is found in both.
validate : string, default None.
If specified, checks if merge is of specified type.
“one_to_one” or “1:1”: checks if merge keys are unique in both
left and right datasets.
“one_to_many” or “1:m”: checks if merge keys are unique in left
dataset.
“many_to_one” or “m:1”: checks if merge keys are unique in right
dataset.
“many_to_many” or “m:m”: allowed, but does not result in checks.
Note
Support for specifying index levels as the on, left_on, and
right_on parameters was added in version 0.23.0.
Support for merging named Series objects was added in version 0.24.0.
The return type will be the same as left. If left is a DataFrame or named Series
and right is a subclass of DataFrame, the return type will still be DataFrame.
merge is a function in the pandas namespace, and it is also available as a
DataFrame instance method merge(), with the calling
DataFrame being implicitly considered the left object in the join.
The related join() method, uses merge internally for the
index-on-index (by default) and column(s)-on-index join. If you are joining on
index only, you may wish to use DataFrame.join to save yourself some typing.
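A minimal sketch of that equivalence (two tiny throwaway frames):
l = pd.DataFrame({"key": ["K0", "K1"], "A": [1, 2]})
r = pd.DataFrame({"key": ["K0", "K1"], "B": [3, 4]})
pd.merge(l, r, on="key").equals(l.merge(r, on="key"))  # True -- l is the left frame in both calls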
Brief primer on merge methods (relational algebra)#
Experienced users of relational databases like SQL will be familiar with the
terminology used to describe join operations between two SQL-table like
structures (DataFrame objects). There are several cases to consider which
are very important to understand:
one-to-one joins: for example when joining two DataFrame objects on
their indexes (which must contain unique values).
many-to-one joins: for example when joining an index (unique) to one or
more columns in a different DataFrame.
many-to-many joins: joining columns on columns.
Note
When joining columns on columns (potentially a many-to-many join), any
indexes on the passed DataFrame objects will be discarded.
It is worth spending some time understanding the result of the many-to-many
join case. In SQL / standard relational algebra, if a key combination appears
more than once in both tables, the resulting table will have the Cartesian
product of the associated data. Here is a very basic example with one unique
key combination:
In [33]: left = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [34]: right = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [35]: result = pd.merge(left, right, on="key")
Here is a more complicated example with multiple join keys. Only the keys
appearing in left and right are present (the intersection), since
how='inner' by default.
In [36]: left = pd.DataFrame(
....: {
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [37]: right = pd.DataFrame(
....: {
....: "key1": ["K0", "K1", "K1", "K2"],
....: "key2": ["K0", "K0", "K0", "K0"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [38]: result = pd.merge(left, right, on=["key1", "key2"])
The how argument to merge specifies how to determine which keys are to
be included in the resulting table. If a key combination does not appear in
either the left or right tables, the values in the joined table will be
NA. Here is a summary of the how options and their SQL equivalent names:
Merge method   SQL Join Name     Description
left           LEFT OUTER JOIN   Use keys from left frame only
right          RIGHT OUTER JOIN  Use keys from right frame only
outer          FULL OUTER JOIN   Use union of keys from both frames
inner          INNER JOIN        Use intersection of keys from both frames
cross          CROSS JOIN        Create the cartesian product of rows of both frames
In [39]: result = pd.merge(left, right, how="left", on=["key1", "key2"])
In [40]: result = pd.merge(left, right, how="right", on=["key1", "key2"])
In [41]: result = pd.merge(left, right, how="outer", on=["key1", "key2"])
In [42]: result = pd.merge(left, right, how="inner", on=["key1", "key2"])
In [43]: result = pd.merge(left, right, how="cross")
You can merge a multi-indexed Series and a DataFrame, if the names of
the MultiIndex correspond to the columns from the DataFrame. Transform
the Series to a DataFrame using Series.reset_index() before merging,
as shown in the following example.
In [44]: df = pd.DataFrame({"Let": ["A", "B", "C"], "Num": [1, 2, 3]})
In [45]: df
Out[45]:
Let Num
0 A 1
1 B 2
2 C 3
In [46]: ser = pd.Series(
....: ["a", "b", "c", "d", "e", "f"],
....: index=pd.MultiIndex.from_arrays(
....: [["A", "B", "C"] * 2, [1, 2, 3, 4, 5, 6]], names=["Let", "Num"]
....: ),
....: )
....:
In [47]: ser
Out[47]:
Let Num
A 1 a
B 2 b
C 3 c
A 4 d
B 5 e
C 6 f
dtype: object
In [48]: pd.merge(df, ser.reset_index(), on=["Let", "Num"])
Out[48]:
Let Num 0
0 A 1 a
1 B 2 b
2 C 3 c
Here is another example with duplicate join keys in DataFrames:
In [49]: left = pd.DataFrame({"A": [1, 2], "B": [2, 2]})
In [50]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [51]: result = pd.merge(left, right, on="B", how="outer")
Warning
Joining / merging on duplicate keys can cause a returned frame that is the multiplication of the row dimensions, which may result in memory overflow. It is the user's responsibility to manage duplicate values in keys before joining large DataFrames.
Checking for duplicate keys#
Users can use the validate argument to automatically check whether there
are unexpected duplicates in their merge keys. Key uniqueness is checked before
merge operations and so should protect against memory overflows. Checking key
uniqueness is also a good way to ensure user data structures are as expected.
In the following example, there are duplicate values of B in the right
DataFrame. As this is not a one-to-one merge – as specified in the
validate argument – an exception will be raised.
In [52]: left = pd.DataFrame({"A": [1, 2], "B": [1, 2]})
In [53]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [53]: result = pd.merge(left, right, on="B", how="outer", validate="one_to_one")
...
MergeError: Merge keys are not unique in right dataset; not a one-to-one merge
If the user is aware of the duplicates in the right DataFrame but wants to
ensure there are no duplicates in the left DataFrame, one can use the
validate='one_to_many' argument instead, which will not raise an exception.
In [54]: pd.merge(left, right, on="B", how="outer", validate="one_to_many")
Out[54]:
A_x B A_y
0 1 1 NaN
1 2 2 4.0
2 2 2 5.0
3 2 2 6.0
The merge indicator#
merge() accepts the argument indicator. If True, a
Categorical-type column called _merge will be added to the output object
that takes on values:
Observation Origin                _merge value
Merge key only in 'left' frame    left_only
Merge key only in 'right' frame   right_only
Merge key in both frames          both
In [55]: df1 = pd.DataFrame({"col1": [0, 1], "col_left": ["a", "b"]})
In [56]: df2 = pd.DataFrame({"col1": [1, 2, 2], "col_right": [2, 2, 2]})
In [57]: pd.merge(df1, df2, on="col1", how="outer", indicator=True)
Out[57]:
col1 col_left col_right _merge
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
The indicator argument will also accept string arguments, in which case the indicator function will use the value of the passed string as the name for the indicator column.
In [58]: pd.merge(df1, df2, on="col1", how="outer", indicator="indicator_column")
Out[58]:
col1 col_left col_right indicator_column
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
Merge dtypes#
Merging will preserve the dtype of the join keys.
In [59]: left = pd.DataFrame({"key": [1], "v1": [10]})
In [60]: left
Out[60]:
key v1
0 1 10
In [61]: right = pd.DataFrame({"key": [1, 2], "v1": [20, 30]})
In [62]: right
Out[62]:
key v1
0 1 20
1 2 30
We are able to preserve the join keys:
In [63]: pd.merge(left, right, how="outer")
Out[63]:
key v1
0 1 10
1 1 20
2 2 30
In [64]: pd.merge(left, right, how="outer").dtypes
Out[64]:
key int64
v1 int64
dtype: object
Of course if you have missing values that are introduced, then the
resulting dtype will be upcast.
In [65]: pd.merge(left, right, how="outer", on="key")
Out[65]:
key v1_x v1_y
0 1 10.0 20
1 2 NaN 30
In [66]: pd.merge(left, right, how="outer", on="key").dtypes
Out[66]:
key int64
v1_x float64
v1_y int64
dtype: object
Merging will preserve category dtypes of the mergands. See also the section on categoricals.
The left frame.
In [67]: from pandas.api.types import CategoricalDtype
In [68]: X = pd.Series(np.random.choice(["foo", "bar"], size=(10,)))
In [69]: X = X.astype(CategoricalDtype(categories=["foo", "bar"]))
In [70]: left = pd.DataFrame(
....: {"X": X, "Y": np.random.choice(["one", "two", "three"], size=(10,))}
....: )
....:
In [71]: left
Out[71]:
X Y
0 bar one
1 foo one
2 foo three
3 bar three
4 foo one
5 bar one
6 bar three
7 bar three
8 bar three
9 foo three
In [72]: left.dtypes
Out[72]:
X category
Y object
dtype: object
The right frame.
In [73]: right = pd.DataFrame(
....: {
....: "X": pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar"])),
....: "Z": [1, 2],
....: }
....: )
....:
In [74]: right
Out[74]:
X Z
0 foo 1
1 bar 2
In [75]: right.dtypes
Out[75]:
X category
Z int64
dtype: object
The merged result:
In [76]: result = pd.merge(left, right, how="outer")
In [77]: result
Out[77]:
X Y Z
0 bar one 2
1 bar three 2
2 bar one 2
3 bar three 2
4 bar three 2
5 bar three 2
6 foo one 1
7 foo three 1
8 foo one 1
9 foo three 1
In [78]: result.dtypes
Out[78]:
X category
Y object
Z int64
dtype: object
Note
The category dtypes must be exactly the same, meaning the same categories and the ordered attribute.
Otherwise the result will coerce to the categories’ dtype.
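A minimal sketch of that coercion (string categories, one side carrying an extra category):
x1 = pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar"]))
x2 = pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar", "baz"]))
pd.merge(pd.DataFrame({"X": x1}), pd.DataFrame({"X": x2, "Z": [1, 2]}), on="X").dtypes
# X falls back to object because the two category dtypes are not identical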
Note
Merging on category dtypes that are the same can be quite performant compared to object dtype merging.
Joining on index#
DataFrame.join() is a convenient method for combining the columns of two
potentially differently-indexed DataFrames into a single result
DataFrame. Here is a very basic example:
In [79]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=["K0", "K1", "K2"]
....: )
....:
In [80]: right = pd.DataFrame(
....: {"C": ["C0", "C2", "C3"], "D": ["D0", "D2", "D3"]}, index=["K0", "K2", "K3"]
....: )
....:
In [81]: result = left.join(right)
In [82]: result = left.join(right, how="outer")
The same as above, but with how='inner'.
In [83]: result = left.join(right, how="inner")
The data alignment here is on the indexes (row labels). This same behavior can
be achieved using merge plus additional arguments instructing it to use the
indexes:
In [84]: result = pd.merge(left, right, left_index=True, right_index=True, how="outer")
In [85]: result = pd.merge(left, right, left_index=True, right_index=True, how="inner")
Joining key columns on an index#
join() takes an optional on argument which may be a column
or multiple column names, which specifies that the passed DataFrame is to be
aligned on that column in the DataFrame. These two function calls are
completely equivalent:
left.join(right, on=key_or_keys)
pd.merge(
left, right, left_on=key_or_keys, right_index=True, how="left", sort=False
)
Obviously you can choose whichever form you find more convenient. For
many-to-one joins (where one of the DataFrame’s is already indexed by the
join key), using join may be more convenient. Here is a simple example:
In [86]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [87]: right = pd.DataFrame({"C": ["C0", "C1"], "D": ["D0", "D1"]}, index=["K0", "K1"])
In [88]: result = left.join(right, on="key")
In [89]: result = pd.merge(
....: left, right, left_on="key", right_index=True, how="left", sort=False
....: )
....:
To join on multiple keys, the passed DataFrame must have a MultiIndex:
In [90]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [91]: index = pd.MultiIndex.from_tuples(
....: [("K0", "K0"), ("K1", "K0"), ("K2", "K0"), ("K2", "K1")]
....: )
....:
In [92]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=index
....: )
....:
Now this can be joined by passing the two key column names:
In [93]: result = left.join(right, on=["key1", "key2"])
The default for DataFrame.join is to perform a left join (essentially a
“VLOOKUP” operation, for Excel users), which uses only the keys found in the
calling DataFrame. Other join types, for example inner join, can be just as
easily performed:
In [94]: result = left.join(right, on=["key1", "key2"], how="inner")
As you can see, this drops any rows where there was no match.
Joining a single Index to a MultiIndex#
You can join a singly-indexed DataFrame with a level of a MultiIndexed DataFrame.
The level will match on the name of the index of the singly-indexed frame against
a level name of the MultiIndexed frame.
In [95]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]},
....: index=pd.Index(["K0", "K1", "K2"], name="key"),
....: )
....:
In [96]: index = pd.MultiIndex.from_tuples(
....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")],
....: names=["key", "Y"],
....: )
....:
In [97]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]},
....: index=index,
....: )
....:
In [98]: result = left.join(right, how="inner")
This is equivalent but less verbose and more memory efficient / faster than this.
In [99]: result = pd.merge(
....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
....: ).set_index(["key","Y"])
....:
Joining with two MultiIndexes#
This is supported in a limited way, provided that the index for the right
argument is completely used in the join, and is a subset of the indices in
the left argument, as in this example:
In [100]: leftindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy"), [1, 2]], names=["abc", "xy", "num"]
.....: )
.....:
In [101]: left = pd.DataFrame({"v1": range(12)}, index=leftindex)
In [102]: left
Out[102]:
v1
abc xy num
a x 1 0
2 1
y 1 2
2 3
b x 1 4
2 5
y 1 6
2 7
c x 1 8
2 9
y 1 10
2 11
In [103]: rightindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy")], names=["abc", "xy"]
.....: )
.....:
In [104]: right = pd.DataFrame({"v2": [100 * i for i in range(1, 7)]}, index=rightindex)
In [105]: right
Out[105]:
v2
abc xy
a x 100
y 200
b x 300
y 400
c x 500
y 600
In [106]: left.join(right, on=["abc", "xy"], how="inner")
Out[106]:
v1 v2
abc xy num
a x 1 0 100
2 1 100
y 1 2 200
2 3 200
b x 1 4 300
2 5 300
y 1 6 400
2 7 400
c x 1 8 500
2 9 500
y 1 10 600
2 11 600
If that condition is not satisfied, a join with two multi-indexes can be
done using the following code.
In [107]: leftindex = pd.MultiIndex.from_tuples(
.....: [("K0", "X0"), ("K0", "X1"), ("K1", "X2")], names=["key", "X"]
.....: )
.....:
In [108]: left = pd.DataFrame(
.....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=leftindex
.....: )
.....:
In [109]: rightindex = pd.MultiIndex.from_tuples(
.....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")], names=["key", "Y"]
.....: )
.....:
In [110]: right = pd.DataFrame(
.....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=rightindex
.....: )
.....:
In [111]: result = pd.merge(
.....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
.....: ).set_index(["key", "X", "Y"])
.....:
Merging on a combination of columns and index levels#
Strings passed as the on, left_on, and right_on parameters
may refer to either column names or index level names. This enables merging
DataFrame instances on a combination of index levels and columns without
resetting indexes.
In [112]: left_index = pd.Index(["K0", "K0", "K1", "K2"], name="key1")
In [113]: left = pd.DataFrame(
.....: {
.....: "A": ["A0", "A1", "A2", "A3"],
.....: "B": ["B0", "B1", "B2", "B3"],
.....: "key2": ["K0", "K1", "K0", "K1"],
.....: },
.....: index=left_index,
.....: )
.....:
In [114]: right_index = pd.Index(["K0", "K1", "K2", "K2"], name="key1")
In [115]: right = pd.DataFrame(
.....: {
.....: "C": ["C0", "C1", "C2", "C3"],
.....: "D": ["D0", "D1", "D2", "D3"],
.....: "key2": ["K0", "K0", "K0", "K1"],
.....: },
.....: index=right_index,
.....: )
.....:
In [116]: result = left.merge(right, on=["key1", "key2"])
Note
When DataFrames are merged on a string that matches an index level in both
frames, the index level is preserved as an index level in the resulting
DataFrame.
Note
When DataFrames are merged using only some of the levels of a MultiIndex,
the extra levels will be dropped from the resulting merge. In order to
preserve those levels, use reset_index on those level names to move
those levels to columns prior to doing the merge.
Note
If a string matches both a column name and an index level name, then a
warning is issued and the column takes precedence. This will result in an
ambiguity error in a future version.
Overlapping value columns#
The merge suffixes argument takes a tuple or list of strings to append to
overlapping column names in the input DataFrames to disambiguate the result
columns:
In [117]: left = pd.DataFrame({"k": ["K0", "K1", "K2"], "v": [1, 2, 3]})
In [118]: right = pd.DataFrame({"k": ["K0", "K0", "K3"], "v": [4, 5, 6]})
In [119]: result = pd.merge(left, right, on="k")
In [120]: result = pd.merge(left, right, on="k", suffixes=("_l", "_r"))
DataFrame.join() has lsuffix and rsuffix arguments which behave
similarly.
In [121]: left = left.set_index("k")
In [122]: right = right.set_index("k")
In [123]: result = left.join(right, lsuffix="_l", rsuffix="_r")
Joining multiple DataFrames#
A list or tuple of DataFrames can also be passed to join()
to join them together on their indexes.
In [124]: right2 = pd.DataFrame({"v": [7, 8, 9]}, index=["K1", "K1", "K2"])
In [125]: result = left.join([right, right2])
Merging together values within Series or DataFrame columns#
Another fairly common situation is to have two like-indexed (or similarly
indexed) Series or DataFrame objects and wanting to “patch” values in
one object from values for matching indices in the other. Here is an example:
In [126]: df1 = pd.DataFrame(
.....: [[np.nan, 3.0, 5.0], [-4.6, np.nan, np.nan], [np.nan, 7.0, np.nan]]
.....: )
.....:
In [127]: df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5.0, 1.6, 4]], index=[1, 2])
For this, use the combine_first() method:
In [128]: result = df1.combine_first(df2)
Note that this method only takes values from the right DataFrame if they are
missing in the left DataFrame. A related method, update(),
alters non-NA values in place:
In [129]: df1.update(df2)
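A minimal sketch of the difference between the two (a throwaway one-column frame):
a = pd.DataFrame({"x": [1.0, np.nan]})
b = pd.DataFrame({"x": [9.0, 2.0]})
a.combine_first(b)  # x becomes [1.0, 2.0]: only the missing value is filled from b
c = a.copy()
c.update(b)         # x becomes [9.0, 2.0]: every non-NA value from b overwrites c in place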
Timeseries friendly merging#
Merging ordered data#
A merge_ordered() function allows combining time series and other
ordered data. In particular it has an optional fill_method keyword to
fill/interpolate missing data:
In [130]: left = pd.DataFrame(
.....: {"k": ["K0", "K1", "K1", "K2"], "lv": [1, 2, 3, 4], "s": ["a", "b", "c", "d"]}
.....: )
.....:
In [131]: right = pd.DataFrame({"k": ["K1", "K2", "K4"], "rv": [1, 2, 3]})
In [132]: pd.merge_ordered(left, right, fill_method="ffill", left_by="s")
Out[132]:
k lv s rv
0 K0 1.0 a NaN
1 K1 1.0 a 1.0
2 K2 1.0 a 2.0
3 K4 1.0 a 3.0
4 K1 2.0 b 1.0
5 K2 2.0 b 2.0
6 K4 2.0 b 3.0
7 K1 3.0 c 1.0
8 K2 3.0 c 2.0
9 K4 3.0 c 3.0
10 K1 NaN d 1.0
11 K2 4.0 d 2.0
12 K4 4.0 d 3.0
Merging asof#
A merge_asof() is similar to an ordered left-join except that we match on
nearest key rather than equal keys. For each row in the left DataFrame,
we select the last row in the right DataFrame whose on key is less
than the left’s key. Both DataFrames must be sorted by the key.
Optionally an asof merge can perform a group-wise merge. This matches the
by key equally, in addition to the nearest match on the on key.
For example, we might have trades and quotes and we want to asof
merge them.
In [133]: trades = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.038",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: ]
.....: ),
.....: "ticker": ["MSFT", "MSFT", "GOOG", "GOOG", "AAPL"],
.....: "price": [51.95, 51.95, 720.77, 720.92, 98.00],
.....: "quantity": [75, 155, 100, 100, 100],
.....: },
.....: columns=["time", "ticker", "price", "quantity"],
.....: )
.....:
In [134]: quotes = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.030",
.....: "20160525 13:30:00.041",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.049",
.....: "20160525 13:30:00.072",
.....: "20160525 13:30:00.075",
.....: ]
.....: ),
.....: "ticker": ["GOOG", "MSFT", "MSFT", "MSFT", "GOOG", "AAPL", "GOOG", "MSFT"],
.....: "bid": [720.50, 51.95, 51.97, 51.99, 720.50, 97.99, 720.50, 52.01],
.....: "ask": [720.93, 51.96, 51.98, 52.00, 720.93, 98.01, 720.88, 52.03],
.....: },
.....: columns=["time", "ticker", "bid", "ask"],
.....: )
.....:
In [135]: trades
Out[135]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
In [136]: quotes
Out[136]:
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
By default we are taking the asof of the quotes.
In [137]: pd.merge_asof(trades, quotes, on="time", by="ticker")
Out[137]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 2ms between the quote time and the trade time.
In [138]: pd.merge_asof(trades, quotes, on="time", by="ticker", tolerance=pd.Timedelta("2ms"))
Out[138]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 10ms between the quote time and the trade time and we
exclude exact matches on time. Note that though we exclude the exact matches
(of the quotes), prior quotes do propagate to that point in time.
In [139]: pd.merge_asof(
.....: trades,
.....: quotes,
.....: on="time",
.....: by="ticker",
.....: tolerance=pd.Timedelta("10ms"),
.....: allow_exact_matches=False,
.....: )
.....:
Out[139]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN
3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
Comparing objects#
The DataFrame.compare() and Series.compare() methods allow you to
compare two DataFrame or Series, respectively, and summarize their differences.
This feature was added in V1.1.0.
For example, you might want to compare two DataFrame and stack their differences
side by side.
In [140]: df = pd.DataFrame(
.....: {
.....: "col1": ["a", "a", "b", "b", "a"],
.....: "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
.....: "col3": [1.0, 2.0, 3.0, 4.0, 5.0],
.....: },
.....: columns=["col1", "col2", "col3"],
.....: )
.....:
In [141]: df
Out[141]:
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
In [142]: df2 = df.copy()
In [143]: df2.loc[0, "col1"] = "c"
In [144]: df2.loc[2, "col3"] = 4.0
In [145]: df2
Out[145]:
col1 col2 col3
0 c 1.0 1.0
1 a 2.0 2.0
2 b 3.0 4.0
3 b NaN 4.0
4 a 5.0 5.0
In [146]: df.compare(df2)
Out[146]:
col1 col3
self other self other
0 a c NaN NaN
2 NaN NaN 3.0 4.0
By default, if two corresponding values are equal, they will be shown as NaN.
Furthermore, if all values in an entire row / column are equal, the row / column will be
omitted from the result. The remaining differences will be aligned on columns.
If you wish, you may choose to stack the differences on rows.
In [147]: df.compare(df2, align_axis=0)
Out[147]:
col1 col3
0 self a NaN
other c NaN
2 self NaN 3.0
other NaN 4.0
If you wish to keep all original rows and columns, set keep_shape argument
to True.
In [148]: df.compare(df2, keep_shape=True)
Out[148]:
col1 col2 col3
self other self other self other
0 a c NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3.0 4.0
3 NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN
You may also keep all the original values even if they are equal.
In [149]: df.compare(df2, keep_shape=True, keep_equal=True)
Out[149]:
col1 col2 col3
self other self other self other
0 a c 1.0 1.0 1.0 1.0
1 a a 2.0 2.0 2.0 2.0
2 b b 3.0 3.0 3.0 4.0
3 b b NaN NaN 4.0 4.0
4 a a 5.0 5.0 5.0 5.0
| 1,091
| 1,189
|
Concat sequence number to each row in a group using Pandas and R
I have a data frame like as shown below (Both R and Python data frame codes are given below)
df = pd.DataFrame({'person_id': [11,11,11,12,12,12,12,13,13,13,13,13,14,14,14]})
df['enc_id'] = [1134567890,1134567890,1134567890,3456789210,3456789210,3456789210,3456789210,5643271890,5643271890,5643271890,5643271890,5643271890,2468013579,2468013579,2468013579]
person_id <- c(11,11,11,12,12,12,12,13,13,13,13,13,14,14,14)
enc_id <- c(1134567890,1134567890,1134567890,3456789210,3456789210,3456789210,3456789210,5643271890,5643271890,5643271890,5643271890,5643271890,2468013579,2468013579,2468013579)
df <- data.frame(person_id, enc_id)
I would like to concat a sequence number to enc_id for each person
I wrote something like below in Python
df['new_enc_id'] = df['enc_id'].map(str) + (df.groupby('person_id').cumcount()+1).map(str)
Can you help me with the below questions?
How can I do this in R?
Any elegant way to do this in Python?
I expect my output to be like as shown below. You can see that sequence number is concatenated for each group and not added.
|
62,246,698
|
How to calculate the difference between 2 consecutive dataframes using pandas
|
<p>I am fairly new to using pandas and I have the following dataframe:</p>
<pre><code>Date
2019-06-01 195.585770
2019-07-01 210.527466
2019-08-01 206.278168
2019-09-01 222.169479
2019-10-01 246.760193
2019-11-01 265.101562
2019-12-01 292.163818
2020-01-01 307.943604
2020-02-01 271.976532
2020-03-01 253.603500
2020-04-01 293.006836
2020-05-01 317.081665
2020-06-01 331.500000
2020-06-05 331.500000
Name: AAPL, dtype: float64
</code></pre>
<p>How can I quickly calculate the difference between 2 dates in days? In the end I want to calculate the average monthly increase percentage-wise.
The result should be that the difference is alternately 30 and 31 days. There must be a quick command to calculate the difference between two consecutive dates but I can't seem to find it.</p>
| 62,246,758
| 2020-06-07T14:20:24.027000
| 2
| null | 1
| 154
|
python|pandas
|
<p>We can do <code>pct_change</code> and <code>mean</code>:</p>
<pre><code>df['AAPL'].pct_change().mean()
</code></pre>
<p>Or in case your series:</p>
<pre><code>s.pct_change().mean()
</code></pre>
<hr>
<p>If you want to find out the daily percentage change:</p>
<pre><code>s.pct_change()/s.index.to_series().diff().dt.days
</code></pre>
<p>Output:</p>
<pre><code>Date
2019-06-01 NaN
2019-07-01 0.002546
2019-08-01 -0.000651
2019-09-01 0.002485
2019-10-01 0.003689
2019-11-01 0.002398
2019-12-01 0.003403
2020-01-01 0.001742
2020-02-01 -0.003768
2020-03-01 -0.002329
2020-04-01 0.005012
2020-05-01 0.002739
2020-06-01 0.001467
2020-06-05 0.000000
dtype: float64
</code></pre>
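<p>And if you only want the gap in days between consecutive dates (the 30/31-day differences mentioned in the question), assuming the index is a DatetimeIndex, a small follow-up sketch:</p>
<pre><code>s.index.to_series().diff().dt.days
</code></pre>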
| 2020-06-07T14:25:10.153000
| 3
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.diff.html
|
pandas.DataFrame.diff#
pandas.DataFrame.diff#
DataFrame.diff(periods=1, axis=0)[source]#
First discrete difference of element.
Calculates the difference of a DataFrame element compared with another
element in the DataFrame (default is element in previous row).
Parameters
periodsint, default 1Periods to shift for calculating difference, accepts negative
values.
axis{0 or ‘index’, 1 or ‘columns’}, default 0Take difference over rows (0) or columns (1).
Returns
DataFrameFirst differences of the Series.
See also
DataFrame.pct_changePercent change over given number of periods.
DataFrame.shiftShift index by desired number of periods with an optional time freq.
Series.diffFirst discrete difference of object.
We can do pct_change and mean:
df['AAPL'].pct_change().mean()
Or in case your series:
s.pct_change().mean()
If you want to find out the daily percentage change:
s.pct_change()/s.index.to_series().diff().dt.days
Output:
Date
2019-06-01 NaN
2019-07-01 0.002546
2019-08-01 -0.000651
2019-09-01 0.002485
2019-10-01 0.003689
2019-11-01 0.002398
2019-12-01 0.003403
2020-01-01 0.001742
2020-02-01 -0.003768
2020-03-01 -0.002329
2020-04-01 0.005012
2020-05-01 0.002739
2020-06-01 0.001467
2020-06-05 0.000000
dtype: float64
Notes
For boolean dtypes, this uses operator.xor() rather than
operator.sub().
The result is calculated according to current dtype in DataFrame,
however dtype of the result is always float64.
Examples
Difference with previous row
>>> df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6],
... 'b': [1, 1, 2, 3, 5, 8],
... 'c': [1, 4, 9, 16, 25, 36]})
>>> df
a b c
0 1 1 1
1 2 1 4
2 3 2 9
3 4 3 16
4 5 5 25
5 6 8 36
>>> df.diff()
a b c
0 NaN NaN NaN
1 1.0 0.0 3.0
2 1.0 1.0 5.0
3 1.0 1.0 7.0
4 1.0 2.0 9.0
5 1.0 3.0 11.0
Difference with previous column
>>> df.diff(axis=1)
a b c
0 NaN 0 0
1 NaN -1 3
2 NaN -1 7
3 NaN -1 13
4 NaN 0 20
5 NaN 2 28
Difference with 3rd previous row
>>> df.diff(periods=3)
a b c
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 3.0 2.0 15.0
4 3.0 4.0 21.0
5 3.0 6.0 27.0
Difference with following row
>>> df.diff(periods=-1)
a b c
0 -1.0 0.0 -3.0
1 -1.0 -1.0 -5.0
2 -1.0 -1.0 -7.0
3 -1.0 -2.0 -9.0
4 -1.0 -3.0 -11.0
5 NaN NaN NaN
Overflow in input dtype
>>> df = pd.DataFrame({'a': [1, 0]}, dtype=np.uint8)
>>> df.diff()
a
0 NaN
1 255.0
| 729
| 1,294
|
How to calculate the difference between 2 consecutive dataframes using pandas
I am fairly new to using pandas and I have the following dataframe:
Date
2019-06-01 195.585770
2019-07-01 210.527466
2019-08-01 206.278168
2019-09-01 222.169479
2019-10-01 246.760193
2019-11-01 265.101562
2019-12-01 292.163818
2020-01-01 307.943604
2020-02-01 271.976532
2020-03-01 253.603500
2020-04-01 293.006836
2020-05-01 317.081665
2020-06-01 331.500000
2020-06-05 331.500000
Name: AAPL, dtype: float64
How can I quickly calculate the difference between 2 dates in days? In the end I want to calculate the average monthly increase percentage-wise.
The result should be that the difference is alternately 30 and 31 days. There must be a quick command to calculate the difference between two consecutive dates but I can't seem to find it.
|
69,194,912
|
Fill sequential date between start & end date from two different column of pandas data frame
|
<p>I'm using jupyterlab version 3.1.9. I have a pandas dataframe <code>df</code>. df contains a start & end date. I would like to create a new data frame df1 from df so that it will have all the dates between the start & end date while all other columns remain the same. My sample <code>df</code> data looks like</p>
<pre><code>ProductId StartDate EndDate
1 2020-05-21 2020-05-22
2 2020-04-16 2020-04-18
3 2020-07-25 2020-07-26
4 2020-09-16 2020-09-20
</code></pre>
<p>My new data frame df1 will look like</p>
<pre><code>ProductId Date
1 2020-05-21
1 2020-05-22
2 2020-04-16
2 2020-04-17
2 2020-04-18
3 2020-07-25
3 2020-07-26
4 2020-09-16
4 2020-09-17
4 2020-09-18
4 2020-09-19
4 2020-09-20
</code></pre>
<p>Can you suggest how to do this in Python?</p>
| 69,195,025
| 2021-09-15T14:20:34.250000
| 2
| 1
| 1
| 193
|
python|pandas
|
<p>Create the list of dates, then <code>explode</code> it</p>
<pre><code>df['new'] = [pd.date_range(x, y ) for x, y in zip(df.StartDate, df.EndDate)]
out = df.explode('new')
Out[37]:
ProductId StartDate EndDate new
0 1 2020-05-21 2020-05-22 2020-05-21
0 1 2020-05-21 2020-05-22 2020-05-22
1 2 2020-04-16 2020-04-18 2020-04-16
1 2 2020-04-16 2020-04-18 2020-04-17
1 2 2020-04-16 2020-04-18 2020-04-18
2 3 2020-07-25 2020-07-26 2020-07-25
2 3 2020-07-25 2020-07-26 2020-07-26
3 4 2020-09-16 2020-09-20 2020-09-16
3 4 2020-09-16 2020-09-20 2020-09-17
3 4 2020-09-16 2020-09-20 2020-09-18
3 4 2020-09-16 2020-09-20 2020-09-19
3 4 2020-09-16 2020-09-20 2020-09-20
</code></pre>
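<p>To match the desired two-column output exactly, a possible follow-up on the frame above is to drop the original date columns and rename the exploded one:</p>
<pre><code>df1 = out.drop(columns=['StartDate', 'EndDate']).rename(columns={'new': 'Date'})
</code></pre>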
| 2021-09-15T14:26:50.747000
| 3
|
https://pandas.pydata.org/docs/reference/api/pandas.date_range.html
|
pandas.date_range#
pandas.date_range#
pandas.date_range(start=None, end=None, periods=None, freq=None, tz=None, normalize=False, name=None, closed=_NoDefault.no_default, inclusive=None, **kwargs)[source]#
Return a fixed frequency DatetimeIndex.
Returns the range of equally spaced time points (where the difference between any
two adjacent points is specified by the given frequency) such that they all
satisfy start <[=] x <[=] end, where the first one and the last one are, resp.,
the first and last time points in that range that fall on the boundary of freq
Create the list of dates, then explode it
df['new'] = [pd.date_range(x, y ) for x, y in zip(df.StartDate, df.EndDate)]
out = df.explode('new')
Out[37]:
ProductId StartDate EndDate new
0 1 2020-05-21 2020-05-22 2020-05-21
0 1 2020-05-21 2020-05-22 2020-05-22
1 2 2020-04-16 2020-04-18 2020-04-16
1 2 2020-04-16 2020-04-18 2020-04-17
1 2 2020-04-16 2020-04-18 2020-04-18
2 3 2020-07-25 2020-07-26 2020-07-25
2 3 2020-07-25 2020-07-26 2020-07-26
3 4 2020-09-16 2020-09-20 2020-09-16
3 4 2020-09-16 2020-09-20 2020-09-17
3 4 2020-09-16 2020-09-20 2020-09-18
3 4 2020-09-16 2020-09-20 2020-09-19
3 4 2020-09-16 2020-09-20 2020-09-20
(if given as a frequency string) or that are valid for freq (if given as a
pandas.tseries.offsets.DateOffset). (If exactly one of start,
end, or freq is not specified, this missing parameter can be computed
given periods, the number of timesteps in the range. See the note below.)
Parameters
startstr or datetime-like, optionalLeft bound for generating dates.
endstr or datetime-like, optionalRight bound for generating dates.
periodsint, optionalNumber of periods to generate.
freqstr or DateOffset, default ‘D’Frequency strings can have multiples, e.g. ‘5H’. See
here for a list of
frequency aliases.
tzstr or tzinfo, optionalTime zone name for returning localized DatetimeIndex, for example
‘Asia/Hong_Kong’. By default, the resulting DatetimeIndex is
timezone-naive.
normalizebool, default FalseNormalize start/end dates to midnight before generating date range.
namestr, default NoneName of the resulting DatetimeIndex.
closed{None, ‘left’, ‘right’}, optionalMake the interval closed with respect to the given frequency to
the ‘left’, ‘right’, or both sides (None, the default).
Deprecated since version 1.4.0: Argument closed has been deprecated to standardize boundary inputs.
Use inclusive instead, to set each bound as closed or open.
inclusive{“both”, “neither”, “left”, “right”}, default “both”Include boundaries; Whether to set each bound as closed or open.
New in version 1.4.0.
**kwargsFor compatibility. Has no effect on the result.
Returns
rngDatetimeIndex
See also
DatetimeIndexAn immutable container for datetimes.
timedelta_rangeReturn a fixed frequency TimedeltaIndex.
period_rangeReturn a fixed frequency PeriodIndex.
interval_rangeReturn a fixed frequency IntervalIndex.
Notes
Of the four parameters start, end, periods, and freq,
exactly three must be specified. If freq is omitted, the resulting
DatetimeIndex will have periods linearly spaced elements between
start and end (closed on both sides).
To learn more about the frequency strings, please see this link.
Examples
Specifying the values
The next four examples generate the same DatetimeIndex, but vary
the combination of start, end and periods.
Specify start and end, with the default daily frequency.
>>> pd.date_range(start='1/1/2018', end='1/08/2018')
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05', '2018-01-06', '2018-01-07', '2018-01-08'],
dtype='datetime64[ns]', freq='D')
Specify start and periods, the number of periods (days).
>>> pd.date_range(start='1/1/2018', periods=8)
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05', '2018-01-06', '2018-01-07', '2018-01-08'],
dtype='datetime64[ns]', freq='D')
Specify end and periods, the number of periods (days).
>>> pd.date_range(end='1/1/2018', periods=8)
DatetimeIndex(['2017-12-25', '2017-12-26', '2017-12-27', '2017-12-28',
'2017-12-29', '2017-12-30', '2017-12-31', '2018-01-01'],
dtype='datetime64[ns]', freq='D')
Specify start, end, and periods; the frequency is generated
automatically (linearly spaced).
>>> pd.date_range(start='2018-04-24', end='2018-04-27', periods=3)
DatetimeIndex(['2018-04-24 00:00:00', '2018-04-25 12:00:00',
'2018-04-27 00:00:00'],
dtype='datetime64[ns]', freq=None)
Other Parameters
Changed the freq (frequency) to 'M' (month end frequency).
>>> pd.date_range(start='1/1/2018', periods=5, freq='M')
DatetimeIndex(['2018-01-31', '2018-02-28', '2018-03-31', '2018-04-30',
'2018-05-31'],
dtype='datetime64[ns]', freq='M')
Multiples are allowed
>>> pd.date_range(start='1/1/2018', periods=5, freq='3M')
DatetimeIndex(['2018-01-31', '2018-04-30', '2018-07-31', '2018-10-31',
'2019-01-31'],
dtype='datetime64[ns]', freq='3M')
freq can also be specified as an Offset object.
>>> pd.date_range(start='1/1/2018', periods=5, freq=pd.offsets.MonthEnd(3))
DatetimeIndex(['2018-01-31', '2018-04-30', '2018-07-31', '2018-10-31',
'2019-01-31'],
dtype='datetime64[ns]', freq='3M')
Specify tz to set the timezone.
>>> pd.date_range(start='1/1/2018', periods=5, tz='Asia/Tokyo')
DatetimeIndex(['2018-01-01 00:00:00+09:00', '2018-01-02 00:00:00+09:00',
'2018-01-03 00:00:00+09:00', '2018-01-04 00:00:00+09:00',
'2018-01-05 00:00:00+09:00'],
dtype='datetime64[ns, Asia/Tokyo]', freq='D')
inclusive controls whether to include start and end that are on the
boundary. The default, “both”, includes boundary points on either end.
>>> pd.date_range(start='2017-01-01', end='2017-01-04', inclusive="both")
DatetimeIndex(['2017-01-01', '2017-01-02', '2017-01-03', '2017-01-04'],
dtype='datetime64[ns]', freq='D')
Use inclusive='left' to exclude end if it falls on the boundary.
>>> pd.date_range(start='2017-01-01', end='2017-01-04', inclusive='left')
DatetimeIndex(['2017-01-01', '2017-01-02', '2017-01-03'],
dtype='datetime64[ns]', freq='D')
Use inclusive='right' to exclude start if it falls on the boundary, and
similarly inclusive='neither' will exclude both start and end.
>>> pd.date_range(start='2017-01-01', end='2017-01-04', inclusive='right')
DatetimeIndex(['2017-01-02', '2017-01-03', '2017-01-04'],
dtype='datetime64[ns]', freq='D')
| 566
| 1,315
|
Fill sequential date between start & end date from two different column of pandas data frame
I'm using jupyterlab version 3.1.9. I have a pandas dataframe df. df contains a start & end date. I would like to create a new data frame df1 from df so that it will have all the dates between the start & end date while all other columns remain the same. My sample df data looks like
ProductId StartDate EndDate
1 2020-05-21 2020-05-22
2 2020-04-16 2020-04-18
3 2020-07-25 2020-07-26
4 2020-09-16 2020-09-20
My new data frame df1 will look like
ProductId Date
1 2020-05-21
1 2020-05-22
2 2020-04-16
2 2020-04-17
2 2020-04-18
3 2020-07-25
3 2020-07-26
4 2020-09-16
4 2020-09-17
4 2020-09-18
4 2020-09-19
4 2020-09-20
Can you suggest how to do this in Python?
|
63,298,234
|
How to find first occurrence for each id based on datetime column with pandas?
|
<p>I have seen a lot of similar questions but didn't quite find an answer to my specific problem. Let's say I have a df:</p>
<pre><code> sample_id tested_at test_value
1 2020-07-21 5
1 2020-07-22 4
1 2020-07-23 6
2 2020-07-26 6
2 2020-07-28 5
3 2020-07-22 4
3 2020-07-27 4
3 2020-07-30 6
</code></pre>
<p>The df is already sorted in ascending order by the <code>tested_at</code> column. I now need to add another column <code>first_test</code> which would indicate the first test value for each <code>sample_id</code> on every line, regardless of whether it is the highest or not. The output should be:</p>
<pre><code> sample_id tested_at test_value first_test
1 2020-07-21 5 5
1 2020-07-22 4 5
1 2020-07-23 6 5
2 2020-07-26 6 6
2 2020-07-28 5 6
3 2020-07-22 4 4
3 2020-07-27 4 4
3 2020-07-30 6 4
</code></pre>
<p>The df is also quite big, so a faster way would be very much appreciated.</p>
| 63,298,365
| 2020-08-07T08:37:27.287000
| 1
| 1
| 4
| 2,253
|
python|pandas
|
<p>You can use pandas' <code>groupby</code> to group by sample ID, and then use the <code>transform</code> method to get the first value per sample ID. Note that this takes the first value by row number, not the first value by date, so make sure the rows are ordered by date.</p>
<pre><code>df = pd.DataFrame(
[
[1, "2020-07-21", 5],
[1, "2020-07-22", 4],
[1, "2020-07-23", 6],
[2, "2020-07-26", 6],
[2, "2020-07-28", 5],
[3, "2020-07-22", 4],
[3, "2020-07-27", 4],
[3, "2020-07-30", 6],
],
columns=["sample_id", "tested_at", "test_value"],
)
df["first_test"] = df.groupby("sample_id")["test_value"].transform("first")
</code></pre>
<p>Which results in:</p>
<pre><code> sample_id tested_at test_value first_test
0 1 2020-07-21 5 5
1 1 2020-07-22 4 5
2 1 2020-07-23 6 5
3 2 2020-07-26 6 6
4 2 2020-07-28 5 6
5 3 2020-07-22 4 4
6 3 2020-07-27 4 4
7 3 2020-07-30 6 4
</code></pre>
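<p>A hedged variation: if the rows might not already be ordered by date, sort first so that "first" really is the earliest test per sample:</p>
<pre><code>df = df.sort_values(["sample_id", "tested_at"])
df["first_test"] = df.groupby("sample_id")["test_value"].transform("first")
</code></pre>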
| 2020-08-07T08:45:29.360000
| 3
|
https://pandas.pydata.org/docs/reference/api/pandas.Index.drop_duplicates.html
|
pandas.Index.drop_duplicates#
pandas.Index.drop_duplicates#
Index.drop_duplicates(*, keep='first')[source]#
Return Index with duplicate values removed.
You can use pandas' groupby to group by sample ID, and then use the transform method to get the first value per sample ID. Note that this takes the first value by row number, not the first value by date, so make sure the rows are ordered by date.
df = pd.DataFrame(
[
[1, "2020-07-21", 5],
[1, "2020-07-22", 4],
[1, "2020-07-23", 6],
[2, "2020-07-26", 6],
[2, "2020-07-28", 5],
[3, "2020-07-22", 4],
[3, "2020-07-27", 4],
[3, "2020-07-30", 6],
],
columns=["sample_id", "tested_at", "test_value"],
)
df["first_test"] = df.groupby("sample_id")["test_value"].transform("first")
Which results in:
sample_id tested_at test_value first_test
0 1 2020-07-21 5 5
1 1 2020-07-22 4 5
2 1 2020-07-23 6 5
3 2 2020-07-26 6 6
4 2 2020-07-28 5 6
5 3 2020-07-22 4 4
6 3 2020-07-27 4 4
7 3 2020-07-30 6 4
Parameters
keep{‘first’, ‘last’, False}, default ‘first’
‘first’ : Drop duplicates except for the first occurrence.
‘last’ : Drop duplicates except for the last occurrence.
False : Drop all duplicates.
Returns
deduplicatedIndex
See also
Series.drop_duplicatesEquivalent method on Series.
DataFrame.drop_duplicatesEquivalent method on DataFrame.
Index.duplicatedRelated method on Index, indicating duplicate Index values.
Examples
Generate an pandas.Index with duplicate values.
>>> idx = pd.Index(['lama', 'cow', 'lama', 'beetle', 'lama', 'hippo'])
The keep parameter controls which duplicate values are removed.
The value ‘first’ keeps the first occurrence for each
set of duplicated entries. The default value of keep is ‘first’.
>>> idx.drop_duplicates(keep='first')
Index(['lama', 'cow', 'beetle', 'hippo'], dtype='object')
The value ‘last’ keeps the last occurrence for each set of duplicated
entries.
>>> idx.drop_duplicates(keep='last')
Index(['cow', 'beetle', 'lama', 'hippo'], dtype='object')
The value False discards all sets of duplicated entries.
>>> idx.drop_duplicates(keep=False)
Index(['cow', 'beetle', 'hippo'], dtype='object')
| 156
| 1,212
|
How to find first occurrence for each id based on datetime column with pandas?
I have seen a lot of similar questions but didn't quite find an answer to my specific problem. Let's say I have a df:
sample_id tested_at test_value
1 2020-07-21 5
1 2020-07-22 4
1 2020-07-23 6
2 2020-07-26 6
2 2020-07-28 5
3 2020-07-22 4
3 2020-07-27 4
3 2020-07-30 6
The df is already sorted in ascending order by the tested_at column. I now need to add another column first_test which would indicate the first test value for each sample_id on every line, regardless of whether it is the highest or not. The output should be:
sample_id tested_at test_value first_test
1 2020-07-21 5 5
1 2020-07-22 4 5
1 2020-07-23 6 5
2 2020-07-26 6 6
2 2020-07-28 5 6
3 2020-07-22 4 4
3 2020-07-27 4 4
3 2020-07-30 6 4
The df is also quite big, so a faster way would be very much appreciated.
|
60,818,048
|
How to explode the column value without duplicating the other columns values in panda dataframe?
|
<p>I have df like this:</p>
<pre><code>id ColumnA ColumnB ColumnC
1 Audi_BMW_VW BMW_Audi VW
2 VW Audi Audi_BMW_VW
</code></pre>
<p>I want to explode the columns on the "_" separator. For example, for "ColumnA" I can do this:</p>
<pre><code>df.assign(ColumnA=df['ColumnA'].str.split('_')).explode('ColumnA')
</code></pre>
<p>but when I use a similar expression for ColumnB it repeats the values of ColumnA, whereas I really want only the id to be duplicated. <strong>The desired output would be something like this:</strong></p>
<pre><code>id ColumnA ColumnB ColumnC
1 Audi BMW VW
1 BMW Audi
1 VW
2 VW Audi Audi
2 BMW
2 VW
</code></pre>
| 60,818,210
| 2020-03-23T16:53:36.590000
| 3
| null | 3
| 527
|
python|pandas
|
<p>Lots of reshaping. The key point is to stack then call <code>Series.str.split</code> on a single Series with the <code>id</code> as the Index.</p>
<pre><code>(df.set_index('id') # keep 'id' bound to cells in the row
.stack() # to a single Series
.str.split('_', expand=True) # split into separate cells on '_'
.unstack(-1).stack(0) # original column labels back to columns
.reset_index(-1, drop=True) # remove split number label
)
</code></pre>
<hr>
<pre><code> ColumnA ColumnB ColumnC
id
1 Audi BMW VW
1 BMW Audi None
1 VW None None
2 VW Audi Audi
2 None None BMW
2 None None VW
</code></pre>
| 2020-03-23T17:03:09.330000
| 4
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.explode.html
|
pandas.DataFrame.explode#
pandas.DataFrame.explode#
DataFrame.explode(column, ignore_index=False)[source]#
Transform each element of a list-like to a row, replicating index values.
Lots of reshaping. The key point is to stack then call Series.str.split on a single Series with the id as the Index.
(df.set_index('id') # keep 'id' bound to cells in the row
.stack() # to a single Series
.str.split('_', expand=True) # split into separate cells on '_'
.unstack(-1).stack(0) # original column labels back to columns
.reset_index(-1, drop=True) # remove split number label
)
ColumnA ColumnB ColumnC
id
1 Audi BMW VW
1 BMW Audi None
1 VW None None
2 VW Audi Audi
2 None None BMW
2 None None VW
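If blank cells are preferred over None, as in the asker's desired output, one hedged follow-up (not in the original answer) is to fill the gaps at the end of the chain:
result = (df.set_index('id')
            .stack()
            .str.split('_', expand=True)
            .unstack(-1).stack(0)
            .reset_index(-1, drop=True)
            .fillna(''))             # replace the None placeholders with empty strings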
New in version 0.25.0.
Parameters
column : IndexLabel. Column(s) to explode.
For multiple columns, specify a non-empty list in which each element
is a str or tuple, and the list-like data in all specified columns
must have matching lengths on the same row of the frame.
New in version 1.3.0: Multi-column explode
ignore_index : bool, default False. If True, the resulting index will be labeled 0, 1, …, n - 1.
New in version 1.1.0.
Returns
DataFrame : Exploded lists to rows of the subset columns;
the index will be duplicated for these rows.
Raises
ValueError
If columns of the frame are not unique.
If specified columns to explode is empty list.
If the specified columns to explode do not have a matching count of
elements rowwise in the frame.
See also
DataFrame.unstack : Pivot a level of the (necessarily hierarchical) index labels.
DataFrame.melt : Unpivot a DataFrame from wide format to long format.
Series.explode : Explode a Series from list-like entries to long format.
Notes
This routine will explode list-likes including lists, tuples, sets,
Series, and np.ndarray. The result dtype of the subset rows will
be object. Scalars will be returned unchanged, and empty list-likes will
result in a np.nan for that row. In addition, the ordering of rows in the
output will be non-deterministic when exploding sets.
Reference the user guide for more examples.
Examples
>>> df = pd.DataFrame({'A': [[0, 1, 2], 'foo', [], [3, 4]],
... 'B': 1,
... 'C': [['a', 'b', 'c'], np.nan, [], ['d', 'e']]})
>>> df
A B C
0 [0, 1, 2] 1 [a, b, c]
1 foo 1 NaN
2 [] 1 []
3 [3, 4] 1 [d, e]
Single-column explode.
>>> df.explode('A')
A B C
0 0 1 [a, b, c]
0 1 1 [a, b, c]
0 2 1 [a, b, c]
1 foo 1 NaN
2 NaN 1 []
3 3 1 [d, e]
3 4 1 [d, e]
Multi-column explode.
>>> df.explode(list('AC'))
A B C
0 0 1 a
0 1 1 b
0 2 1 c
1 foo 1 NaN
2 NaN 1 NaN
3 3 1 d
3 4 1 e
| 185
| 846
|
How to explode a column's values without duplicating the other columns' values in a pandas dataframe?
I have df like this:
id ColumnA ColumnB ColumnC
1 Audi_BMW_VW BMW_Audi VW
2 VW Audi Audi_BMW_VW
I want to explode the columns on the "_" separator. For example, for "ColumnA" I can do this:
df.assign(ColumnA=df['ColumnA'].str.split('_')).explode('ColumnA')
but when I use a similar expression for ColumnB it repeats the values of ColumnA, whereas I really want only the id to be duplicated. The desired output would be something like this:
id ColumnA ColumnB ColumnC
1 Audi BMW VW
1 BMW Audi
1 VW
2 VW Audi Audi
2 BMW
2 VW
|
66,379,865
|
Pandas return separate column value in current index if two separate columns match
|
<p>Say I have the following data frame:</p>
<pre><code> A B C
0 n1 n2 n4
1 n2 n3 n5
2 n3 n1 n6
</code></pre>
<p>I have been trying to:</p>
<ol>
<li>Loop through <code>Column A</code> to find a matching value in <code>Column B</code></li>
<li>If there is a match in <code>Column B</code> I want to grab the value in <code>Column C</code> <em>for the current index</em> and create a <code>Column D</code> with that value.</li>
<li>Given the example data frame above, below would be the solution I'm trying to achieve.</li>
</ol>
<pre><code> A B C D
0 n1 n2 n4 n6
1 n2 n3 n5 n4
2 n3 n1 n6 n5
</code></pre>
<p>I've seen lots of answers for Excel utilizing MATCH and INDEX, but I can't find anything to help me solve this problem in pandas. Any help would be appreciated.</p>
| 66,379,876
| 2021-02-26T03:51:47.340000
| 2
| null | 2
| 36
|
python|pandas
|
<p>Use <code>map</code> with <code>set_index</code>:</p>
<pre><code>df['D'] = df['A'].map(df.set_index('B')['C'])
</code></pre>
<p>Output:</p>
<pre><code> A B C D
0 n1 n2 n4 n6
1 n2 n3 n5 n4
2 n3 n1 n6 n5
</code></pre>
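<p>One hedged follow-up (not part of the original answer): any value in <code>A</code> that never appears in <code>B</code> maps to <code>NaN</code>, and the lookup assumes the values in <code>B</code> are unique. A small sketch with a purely illustrative fill value:</p>
<pre><code>df['D'] = df['A'].map(df.set_index('B')['C']).fillna('no match')  # 'no match' is illustrative
</code></pre>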
| 2021-02-26T03:52:46.470000
| 4
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
Use map with set_index:
df['D'] = df['A'].map(df.set_index('B')['C'])
Output:
A B C D
0 n1 n2 n4 n6
1 n2 n3 n5 n4
2 n3 n1 n6 n5
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures are generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
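As a minimal added sketch (not part of the original guide) of the three steps on a toy frame:
import pandas as pd

df = pd.DataFrame({"team": ["red", "blue", "red"], "points": [3, 1, 2]})
grouped = df.groupby("team")        # split: rows are partitioned by team
totals = grouped["points"].sum()    # apply: an aggregation is computed per group
print(totals)                       # combine: the per-group results come back as one Series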
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping (a small sketch of this case follows the list).
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
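A small added sketch of the dict / Series case (the labels below are made up): the mapping translates axis labels into group names.
import pandas as pd

s = pd.Series([1, 2, 3, 4], index=["a", "b", "c", "d"])
mapping = {"a": "vowel", "b": "consonant", "c": "consonant", "d": "consonant"}
print(s.groupby(mapping).sum())     # index labels are looked up in the dict to form the groups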
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of dropna argument is True which means NA are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
When as_index=True (the default), aggregation functions will not return the grouped
columns as regular columns; the grouped columns become the index of the returned object.
Passing as_index=False will return the grouped columns as regular columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function
Description
mean()
Compute mean of groups
sum()
Compute sum of group values
size()
Compute group sizes
count()
Compute count of group
std()
Standard deviation of groups
var()
Compute variance of groups
sem()
Standard error of the mean of groups
describe()
Generates descriptive statistics
first()
Compute first of group values
last()
Compute last of group values
nth()
Take nth value, or a subset if n is a list
min()
Compute min of group values
max()
Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work,
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
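A hedged sketch of that pattern, reusing the animals frame from above (the 0.9 quantile and the output column name are just illustrative):
import functools

def quantile(x, q):
    return x.quantile(q)                     # x is the group's Series; q is the extra argument

q90 = functools.partial(quantile, q=0.9)     # pre-bind the extra argument
animals.groupby("kind").agg(weight_q90=("weight", q90))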
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, but returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed objects where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it. It can depend on the passed function and
exactly what you are grouping. Thus the grouped column(s) may be included in
the output as well as set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
apply on a Series can operate on a returned value from the applied function,
that is itself a series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
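A hedged sketch of that signature, assuming numba is installed (the frame and function below are made up for illustration):
import numpy as np
import pandas as pd

def group_mean(values, index):      # signature must start with values, index
    return np.mean(values)          # values arrives as a NumPy array for each group

df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1.0, 2.0, 3.0]})
df.groupby("key")["val"].agg(group_mean, engine="numba")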
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only interesting over one column (here colname), it may be filtered
before applying the aggregation function.
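For instance (an added illustration using the frame above), both spellings below give the same numbers, but the first aggregates only the column of interest:
df.groupby("A")["C"].std()                      # aggregate just C
df.groupby("A").std(numeric_only=True)["C"]     # aggregate every numeric column, then select C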
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a “nuisance” column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The dtype of the grouped result will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
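A small added illustration: rows whose key is NA simply drop out of the result (unless dropna=False is passed, as shown earlier).
import numpy as np
import pandas as pd

s = pd.Series([1, 2, 3])
s.groupby(["a", np.nan, "a"]).sum()     # only the "a" group appears; the NaN key is excluded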
Grouping with ordered factors#
Categorical variables represented as instance of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group; in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
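As a brief sketch of that claim (not part of the original guide; it assumes matplotlib is installed and reuses the df and "g" column created above), forwarding return_type changes what each value in the result holds:
# Ask boxplot for the raw matplotlib artists instead of the default axes objects.
# Each value in the result is then a dict of artists, one entry per group ("A", "B").
bp = df.groupby("g").boxplot(return_type="dict")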
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
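Because .pipe forwards any extra positional and keyword arguments to that function, the helper can also be parameterized. A minimal sketch (the group_mean helper and its column argument are illustrative, not part of the original example):
def group_mean(gb, column):
    # gb is the GroupBy object handed over by .pipe; column picks the field to average
    return gb[column].mean()

df.groupby(["Store", "Product"]).pipe(group_mean, "Revenue")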
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resample to work on indices that are non-datetimelike, the following procedure can be used.
In the following examples, df.index // 5 returns an integer array of bin labels which is used to determine which group each row falls into for the groupby operation.
Note
The example below shows how we can downsample by consolidating samples into fewer samples. By using df.index // 5, we aggregate the samples into bins. By applying the std() function, we condense the information contained in many samples into a small subset of values, namely their standard deviation, thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 601
| 752
|
Pandas return separate column value in current index if two separate columns match
Say I have the following data frame:
A B C
0 n1 n2 n4
1 n2 n3 n5
2 n3 n1 n6
I have been trying to:
Loop through Column A to find a matching value in Column B
If there is a match in Column B I want to grab the value in Column C for the current index and create a Column D with that value.
Given the example data frame above, below would be the solution I'm trying to achieve.
A B C D
0 n1 n2 n4 n6
1 n2 n3 n5 n4
2 n3 n1 n6 n5
I've seen lots of answers for Excel utilizing MATCH and INDEX, but I can't find anything to help me solve this problem in pandas. Any help would be appreciated.
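One possible approach (a sketch, not a verified accepted answer) is to build a lookup from column B to column C and map column A through it:
import pandas as pd

df = pd.DataFrame({"A": ["n1", "n2", "n3"],
                   "B": ["n2", "n3", "n1"],
                   "C": ["n4", "n5", "n6"]})

# For every value in A, find the row whose B equals it and take that row's C.
df["D"] = df["A"].map(df.set_index("B")["C"])
# df["D"] is now ["n6", "n4", "n5"], matching the desired output above.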
|
60,339,803
|
Count frequency of each word contained in column string values
|
<p>For example, I have a dataframe like this:</p>
<pre><code>data = {'id': [1,1,1,2,2],
'value': ['red','red and blue','yellow','oak','oak wood']
}
df = pd.DataFrame (data, columns = ['id','value'])
</code></pre>
<p>I want :</p>
<pre><code>id value count
1 red 2
1 blue 1
1 yellow 1
2 oak 2
2 wood 1
</code></pre>
<p>Many thanks!</p>
| 60,339,839
| 2020-02-21T13:39:43.503000
| 1
| null | 2
| 47
|
python|pandas
|
<p>Solution for pandas 0.25+ with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer"><code>DataFrame.explode</code></a> by lists created by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>Series.str.split</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.size.html" rel="nofollow noreferrer"><code>GroupBy.size</code></a>:</p>
<pre><code>df1 = (df.assign(value = df['value'].str.split())
.explode('value')
.groupby(['id','value'], sort=False)
.size()
.reset_index(name='count'))
print (df1)
id value count
0 1 red 2
1 1 and 1
2 1 blue 1
3 1 yellow 1
4 2 oak 2
5 2 wood 1
</code></pre>
<p>For lower pandas versions use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>Series.str.split</code></a> and <code>expand=True</code> for <code>DataFrame</code>, reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>DataFrame.stack</code></a>, create columns from <code>MultiIndex Series</code> ands use same solution like above:</p>
<pre><code>df1 = (df.set_index('id')['value']
.str.split(expand=True)
.stack()
.reset_index(name='value')
.groupby(['id','value'], sort=False)
.size()
.reset_index(name='count')
)
print (df1)
id value count
0 1 red 2
1 1 and 1
2 1 blue 1
3 1 yellow 1
4 2 oak 2
5 2 wood 1
</code></pre>
| 2020-02-21T13:42:03.370000
| 4
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.str.count.html
|
pandas.Series.str.count#
pandas.Series.str.count#
Series.str.count(pat, flags=0)[source]#
Solution for pandas 0.25+ with DataFrame.explode by lists created by Series.str.split and GroupBy.size:
df1 = (df.assign(value = df['value'].str.split())
.explode('value')
.groupby(['id','value'], sort=False)
.size()
.reset_index(name='count'))
print (df1)
id value count
0 1 red 2
1 1 and 1
2 1 blue 1
3 1 yellow 1
4 2 oak 2
5 2 wood 1
For lower pandas versions use DataFrame.set_index with Series.str.split and expand=True for DataFrame, reshape by DataFrame.stack, create columns from MultiIndex Series ands use same solution like above:
df1 = (df.set_index('id')['value']
.str.split(expand=True)
.stack()
.reset_index(name='value')
.groupby(['id','value'], sort=False)
.size()
.reset_index(name='count')
)
print (df1)
id value count
0 1 red 2
1 1 and 1
2 1 blue 1
3 1 yellow 1
4 2 oak 2
5 2 wood 1
Count occurrences of pattern in each string of the Series/Index.
This function is used to count the number of times a particular regex
pattern is repeated in each of the string elements of the
Series.
Parameters
pat (str): Valid regular expression.
flags (int, default 0, meaning no flags): Flags for the re module. For a complete list, see here.
**kwargs: For compatibility with other string methods. Not used.
Returns
Series or Index: Same type as the calling object containing the integer counts.
See also
re: Standard library module for regular expressions.
str.count: Standard library version, without regular expression support.
Notes
Some characters need to be escaped when passing in pat.
eg. '$' has a special meaning in regex and must be escaped when
finding this literal character.
Examples
>>> s = pd.Series(['A', 'B', 'Aaba', 'Baca', np.nan, 'CABA', 'cat'])
>>> s.str.count('a')
0 0.0
1 0.0
2 2.0
3 2.0
4 NaN
5 0.0
6 1.0
dtype: float64
Escape '$' to find the literal dollar sign.
>>> s = pd.Series(['$', 'B', 'Aab$', '$$ca', 'C$B$', 'cat'])
>>> s.str.count('\\$')
0 1
1 0
2 1
3 2
4 2
5 0
dtype: int64
This is also available on Index
>>> pd.Index(['A', 'A', 'Aaba', 'cat']).str.count('a')
Int64Index([0, 0, 2, 1], dtype='int64')
| 94
| 1,130
|
Count frequency of each word contained in column string values
For example, I have a dataframe like this:
data = {'id': [1,1,1,2,2],
'value': ['red','red and blue','yellow','oak','oak wood']
}
df = pd.DataFrame (data, columns = ['id','value'])
I want :
id value count
1 red 2
1 blue 1
1 yellow 1
2 oak 2
2 wood 1
Many thanks!
|
62,874,419
|
How to return the highest value from multiple columns to a new column in a pandas df
|
<p>Apologies for the opaque question name (not sure how to word it). I have the following dataframe:</p>
<pre><code>import pandas as pd
import numpy as np
data = [['tom', 1,1,6,4],
['tom', 1,2,2,3],
['tom', 1,2,3,1],
['tom', 2,3,2,7],
['jim', 1,4,3,6],
['jim', 2,6,5,3]]
df = pd.DataFrame(data, columns = ['Name', 'Day','A','B','C'])
df = df.groupby(by=['Name','Day']).agg('sum').reset_index()
df
</code></pre>
<p><a href="https://i.stack.imgur.com/F7gnJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F7gnJ.png" alt="enter image description here" /></a></p>
<p>I would like to add another column that returns text according to which column of <code>A,B,C</code> is the highest:</p>
<p>For example I would like <code>Apple</code> if <code>A</code> is highest, <code>Banana</code> if <code>B</code> is highest, and <code>Carrot</code> if <code>C</code> is highest. So in the example above the values for the 4 columns should be:</p>
<pre><code>New Col
Carrot
Apple
Banana
Carrot
</code></pre>
<p>Any help would be much appreciated! Thanks</p>
| 62,874,481
| 2020-07-13T10:59:34.937000
| 2
| null | 3
| 815
|
python|pandas
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.idxmax.html" rel="nofollow noreferrer"><code>DataFrame.idxmax</code></a> along <code>axis=1</code> with <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a>:</p>
<pre><code>dct = {'A': 'Apple', 'B': 'Banana', 'C': 'Carrot'}
df['New col'] = df[['A', 'B', 'C']].idxmax(axis=1).map(dct)
</code></pre>
<p>Result:</p>
<pre><code> Name Day A B C New col
0 jim 1 4 3 6 Carrot
1 jim 2 6 5 3 Apple
2 tom 1 5 11 8 Banana
3 tom 2 3 2 7 Carrot
</code></pre>
| 2020-07-13T11:02:48.047000
| 4
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.sort_values.html
|
pandas.DataFrame.sort_values#
pandas.DataFrame.sort_values#
DataFrame.sort_values(by, *, axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last', ignore_index=False, key=None)[source]#
Use DataFrame.idxmax along axis=1 with Series.map:
dct = {'A': 'Apple', 'B': 'Banana', 'C': 'Carrot'}
df['New col'] = df[['A', 'B', 'C']].idxmax(axis=1).map(dct)
Result:
Name Day A B C New col
0 jim 1 4 3 6 Carrot
1 jim 2 6 5 3 Apple
2 tom 1 5 11 8 Banana
3 tom 2 3 2 7 Carrot
Sort by the values along either axis.
Parameters
by (str or list of str): Name or list of names to sort by. If axis is 0 or ‘index’ then by may contain index levels and/or column labels. If axis is 1 or ‘columns’ then by may contain column levels and/or index labels.
axis ({0 or ‘index’, 1 or ‘columns’}, default 0): Axis to be sorted.
ascending (bool or list of bool, default True): Sort ascending vs. descending. Specify list for multiple sort orders. If this is a list of bools, must match the length of the by.
inplace (bool, default False): If True, perform operation in-place.
kind ({‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, default ‘quicksort’): Choice of sorting algorithm. See also numpy.sort() for more information. mergesort and stable are the only stable algorithms. For DataFrames, this option is only applied when sorting on a single column or label.
na_position ({‘first’, ‘last’}, default ‘last’): Puts NaNs at the beginning if first; last puts NaNs at the end.
ignore_index (bool, default False): If True, the resulting axis will be labeled 0, 1, …, n - 1. New in version 1.0.0.
key (callable, optional): Apply the key function to the values before sorting. This is similar to the key argument in the builtin sorted() function, with the notable difference that this key function should be vectorized. It should expect a Series and return a Series with the same shape as the input. It will be applied to each column in by independently. New in version 1.1.0.
Returns
DataFrame or None: DataFrame with sorted values or None if inplace=True.
See also
DataFrame.sort_index: Sort a DataFrame by the index.
Series.sort_values: Similar method for a Series.
Examples
>>> df = pd.DataFrame({
... 'col1': ['A', 'A', 'B', np.nan, 'D', 'C'],
... 'col2': [2, 1, 9, 8, 7, 4],
... 'col3': [0, 1, 9, 4, 2, 3],
... 'col4': ['a', 'B', 'c', 'D', 'e', 'F']
... })
>>> df
col1 col2 col3 col4
0 A 2 0 a
1 A 1 1 B
2 B 9 9 c
3 NaN 8 4 D
4 D 7 2 e
5 C 4 3 F
Sort by col1
>>> df.sort_values(by=['col1'])
col1 col2 col3 col4
0 A 2 0 a
1 A 1 1 B
2 B 9 9 c
5 C 4 3 F
4 D 7 2 e
3 NaN 8 4 D
Sort by multiple columns
>>> df.sort_values(by=['col1', 'col2'])
col1 col2 col3 col4
1 A 1 1 B
0 A 2 0 a
2 B 9 9 c
5 C 4 3 F
4 D 7 2 e
3 NaN 8 4 D
Sort Descending
>>> df.sort_values(by='col1', ascending=False)
col1 col2 col3 col4
4 D 7 2 e
5 C 4 3 F
2 B 9 9 c
0 A 2 0 a
1 A 1 1 B
3 NaN 8 4 D
Putting NAs first
>>> df.sort_values(by='col1', ascending=False, na_position='first')
col1 col2 col3 col4
3 NaN 8 4 D
4 D 7 2 e
5 C 4 3 F
2 B 9 9 c
0 A 2 0 a
1 A 1 1 B
Sorting with a key function
>>> df.sort_values(by='col4', key=lambda col: col.str.lower())
col1 col2 col3 col4
0 A 2 0 a
1 A 1 1 B
2 B 9 9 c
3 NaN 8 4 D
4 D 7 2 e
5 C 4 3 F
Natural sort with the key argument, using the natsort package (https://github.com/SethMMorton/natsort).
>>> df = pd.DataFrame({
... "time": ['0hr', '128hr', '72hr', '48hr', '96hr'],
... "value": [10, 20, 30, 40, 50]
... })
>>> df
time value
0 0hr 10
1 128hr 20
2 72hr 30
3 48hr 40
4 96hr 50
>>> from natsort import index_natsorted
>>> df.sort_values(
... by="time",
... key=lambda x: np.argsort(index_natsorted(df["time"]))
... )
time value
0 0hr 10
3 48hr 40
2 72hr 30
4 96hr 50
1 128hr 20
| 209
| 530
|
How to return the highest value from multiple columns to a new column in a pandas df
Apologies for the opaque question name (not sure how to word it). I have the following dataframe:
import pandas as pd
import numpy as np
data = [['tom', 1,1,6,4],
['tom', 1,2,2,3],
['tom', 1,2,3,1],
['tom', 2,3,2,7],
['jim', 1,4,3,6],
['jim', 2,6,5,3]]
df = pd.DataFrame(data, columns = ['Name', 'Day','A','B','C'])
df = df.groupby(by=['Name','Day']).agg('sum').reset_index()
df
I would like to add another column that returns text according to which column of A,B,C is the highest:
For example I would like Apple if A is highest, Banana if B is highest, and Carrot if C is highest. So in the example above the values for the 4 columns should be:
New Col
Carrot
Apple
Banana
Carrot
Any help would be much appreciated! Thanks
|
68,531,888
|
How to make separate rows in Pandas by column value?
|
<p>I have a df like this:</p>
<pre><code> name total
bob 10
</code></pre>
<p>What I need is this: What is the best way to achieve this?</p>
<pre><code> name total
bob 1
bob 1
bob 1
bob 1
bob 1
bob 1
bob 1
bob 1
bob 1
bob 1
</code></pre>
| 68,532,012
| 2021-07-26T14:42:47.553000
| 2
| null | -2
| 58
|
python|pandas
|
<p>From your <code>DataFrame</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> import pandas as pd
>>> df = pd.DataFrame({'name': ['bob'],
... 'total': [10]},
... index = [0])
>>> df
name total
0 bob 10
</code></pre>
<p>We can use the <code>repeat</code> function on the value from <code>total</code> like so:</p>
<pre class="lang-py prettyprint-override"><code>>>> df = df.loc[df.index.repeat(df.total)].reset_index(drop=True)
>>> df
name total
0 bob 10
1 bob 10
2 bob 10
3 bob 10
4 bob 10
5 bob 10
6 bob 10
7 bob 10
8 bob 10
9 bob 10
</code></pre>
<p>And set <code>total</code> to one to get the expected result:</p>
<pre class="lang-py prettyprint-override"><code>>>> df['total'] = 1
>>> df
name total
0 bob 1
1 bob 1
2 bob 1
3 bob 1
4 bob 1
5 bob 1
6 bob 1
7 bob 1
8 bob 1
9 bob 1
</code></pre>
| 2021-07-26T14:49:52.703000
| 4
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
Group by: split-apply-combine#
From your DataFrame:
>>> import pandas as pd
>>> df = pd.DataFrame({'name': ['bob'],
... 'total': [10]},
... index = [0])
>>> df
name total
0 bob 10
We can use the repeat function on the value from total like so:
>>> df = df.loc[df.index.repeat(df.total)].reset_index(drop=True)
>>> df
name total
0 bob 10
1 bob 10
2 bob 10
3 bob 10
4 bob 10
5 bob 10
6 bob 10
7 bob 10
8 bob 10
9 bob 10
And set total to one to get the expected result:
>>> df['total'] = 1
>>> df
name total
0 bob 1
1 bob 1
2 bob 1
3 bob 1
4 bob 1
5 bob 1
6 bob 1
7 bob 1
8 bob 1
9 bob 1
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures is generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
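As a rough sketch of how the SQL above maps onto groupby/agg (the toy SomeTable data below is invented purely for illustration; named aggregation requires pandas 0.25 or later):
import pandas as pd

some_table = pd.DataFrame(
    {
        "Column1": ["x", "x", "y"],
        "Column2": ["a", "b", "a"],
        "Column3": [1.0, 2.0, 3.0],
        "Column4": [10, 20, 30],
    }
)

# GROUP BY Column1, Column2 with mean(Column3) and sum(Column4)
some_table.groupby(["Column1", "Column2"]).agg(
    Column3=("Column3", "mean"),
    Column4=("Column4", "sum"),
)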
See the cookbook for some advanced strategies.
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping (a short sketch follows this list).
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
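A small sketch of the dict option from the list above (the speed_class labels are invented for illustration and rely on the animal DataFrame just defined):
# Map index labels (the animal names) onto made-up group names.
speed_class = {
    "falcon": "fast",
    "parrot": "fast",
    "lion": "fast",
    "leopard": "fast",
    "monkey": "slow",
}
df.groupby(speed_class)["max_speed"].mean()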
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of dropna argument is True which means NA are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function        Description
mean()          Compute mean of groups
sum()           Compute sum of group values
size()          Compute group sizes
count()         Compute count of group
std()           Standard deviation of groups
var()           Compute variance of groups
sem()           Standard error of the mean of groups
describe()      Generates descriptive statistics
first()         Compute first of group values
last()          Compute last of group values
nth()           Take nth value, or a subset if n is a list
min()           Compute min of group values
max()           Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work;
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
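A minimal sketch of that suggestion (reusing the animals frame from above; the 0.9 quantile level and the height_q90 output name are chosen only for illustration):
import functools

import numpy as np

# Pre-bind the extra argument so the callable only receives the group values.
q90 = functools.partial(np.quantile, q=0.9)

animals.groupby("kind").agg(height_q90=("height", q90))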
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
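A small sketch of that dtype behaviour (the df_mixed frame and the deliberately mixed return types are invented for illustration): group "a" returns floats, group "b" returns the original integers, and the combined result is upcast to a common float64 dtype.
import pandas as pd

df_mixed = pd.DataFrame({"key": ["a", "a", "b", "b"], "val": [1, 2, 3, 4]})

# Group "a" returns float values, group "b" returns the original ints;
# the combined transform result gets the common dtype float64.
df_mixed.groupby("key")["val"].transform(
    lambda x: x.astype(float) if x.iloc[0] == 1 else x
)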
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return
like-indexed objects where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it and on what you are grouping. Depending on
which path is taken, the grouped column(s) may be included in the output and
may also be set as the index.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
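The example above stops short of invoking the function. As a minimal sketch of the missing call (continuing the session above; output omitted because it depends on the random data), applying f reshapes the grouped Series into a two-column DataFrame:
demeaned = grouped.apply(f)  # DataFrame with 'original' and 'demeaned' columns, one row per input row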
apply on a Series can operate on a returned value from the applied function
that is itself a Series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
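As a hedged sketch of what such a user-defined function can look like (assuming the optional numba dependency is installed; the DataFrame and names below are illustrative only, not part of the running example):
import pandas as pd

df_num = pd.DataFrame({"key": [1, 1, 2], "val": [1.0, 2.0, 3.0]})

def group_mean(values, index):
    # values receives the group's data as a NumPy array, index the group's index
    return values.mean()

df_num.groupby("key")["val"].aggregate(group_mean, engine="numba")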
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only of interest for one column (here colname), that column can be selected
before applying the aggregation function.
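For instance, with the running example, selecting the column of interest before aggregating looks like:
df.groupby("A")["C"].std()  # computes the std for column C only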
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a "nuisance" column. Such columns are excluded from
aggregation functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The dtype of the returned grouping index will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
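A minimal sketch of this behaviour, together with the dropna=False option that keeps an explicit NA group (available since pandas 1.1; illustrative data):
df_na = pd.DataFrame({"key": ["a", np.nan, "a"], "val": [1, 2, 3]})
df_na.groupby("key").sum()                # the NaN-keyed row is silently dropped
df_na.groupby("key", dropna=False).sum()  # keep an explicit NaN group instead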
Grouping with ordered factors#
Categorical variables represented as instances of pandas' Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group; in this case,
the values in column 1 where the group is "B" are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
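As a hedged sketch of steering that return type (assuming matplotlib is available; return_type accepts 'axes', 'dict', or 'both'):
axes = df.groupby("g").boxplot(return_type="axes")  # each entry of the result is then a plain Axes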
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resampling to work on indices that are not datetime-like, the following procedure can be utilized.
In the following examples, df.index // 5 returns an integer array of bin labels which is used to determine what gets selected for the groupby operation.
Note
The example below shows how we can downsample by consolidating samples into fewer samples. Here, by using df.index // 5, we are aggregating the samples in bins. By applying the std() function, we condense the information contained in many samples into a small subset of values (their standard deviation), thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 64
| 723
|
How to make separate rows in Pandas by column value?
I have a df like this:
name total
bob 10
What I need is this:
name total
bob 1
bob 1
bob 1
bob 1
bob 1
bob 1
bob 1
bob 1
bob 1
bob 1
What is the best way to achieve this?
|
59,723,329
|
Pandas - max value in a cell and save rows corresponded to it
|
<p>I have this problem, where I want to find a highest value in each segment.
Each segment means time, so all the rows corresponding to time, as you can see most of time step is five minute and for each step I need to find highest value in the 4th column, during that I need to save the whole row.
So far I came up with this:</p>
<pre><code>import pandas as pd
import numpy as np
data = pd.read_csv(f'/home/20170116.csv', header=None, sep=';',
usecols=[0, 1, 2, 3, 4, 5], names=['Time', 'degree', 'f1', 'p1', 'Intensity', 'Distance'])
for i in range(1, 5473, 19):
print(data.iloc[:i])
</code></pre>
<p>My data looks like this:</p>
<pre><code>00:00 0 7.44077320746235 0.453540438929378 317900000 67
00:00 10 7.39076196898179 0.487011284672025 341400000 67
00:00 20 7.37075747358957 0.506065836725554 328000000 65
00:00 30 7.34075073050124 0.495374317737197 321000000 65
00:00 40 7.33074848280513 0.473928991378983 379500000 70
00:00 50 7.33074848280513 0.429714866376765 344100000 70
00:00 60 7.34075073050124 0.378940997444553 461400000 77
00:00 70 7.37075747358957 0.330831053566623 402800000 77
00:00 80 7.43077095976624 0.28999520431443 353100000 77
00:00 90 7.50078669363902 0.256630783010184 312400000 77
00:00 -90 7.51078894133513 0.257848411262383 114700000 52
00:00 -80 7.59080692290402 0.226286016578661 92620000 48
00:00 -70 7.71083389525736 0.199411631799538 81620000 48
00:00 -60 7.81085637221848 0.178324045166602 217100000 77
00:00 -50 7.87086985839514 0.17447741754611 212400000 77
00:00 -40 7.8308608676107 0.209620778938056 276100000 78
00:00 -30 7.73083839064958 0.272603273214342 359100000 78
00:00 -20 7.61081141829625 0.341747195487005 361600000 75
00:00 -10 7.51078894133513 0.401902182098869 260500000 65
</code></pre>
<p>So above one segment is presented time increases every 5 minutes so I have 288 segments and each has 19 rows. And I need to find max value in the 4th column <code>p1</code> and save the whole row to another file for example.</p>
| 59,723,441
| 2020-01-13T19:57:44.667000
| 2
| null | 1
| 576
|
python|pandas
|
<p>Does this work:</p>
<pre><code>df.loc[df.groupby('Time')['p1'].idxmax()]
</code></pre>
<p>Output:</p>
<pre><code> Time degree f1 p1 Intensity Distance
1 00:00 20 7.370757 0.506066 328000000 65
</code></pre>
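<p>If you also want to save the selected rows to another file, as mentioned in the question, a small follow-up sketch (the output path here is hypothetical):</p>
<pre><code>best = df.loc[df.groupby('Time')['p1'].idxmax()]
best.to_csv('max_p1_per_segment.csv', sep=';', index=False)
</code></pre>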
| 2020-01-13T20:06:23.300000
| 4
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.mask.html
|
pandas.DataFrame.mask#
pandas.DataFrame.mask#
DataFrame.mask(cond, other=nan, *, inplace=False, axis=None, level=None, errors='raise', try_cast=_NoDefault.no_default)[source]#
Replace values where the condition is True.
Parameters
condbool Series/DataFrame, array-like, or callableWhere cond is False, keep the original value. Where
True, replace with corresponding value from other.
If cond is callable, it is computed on the Series/DataFrame and
should return boolean Series/DataFrame or array. The callable must
not change input Series/DataFrame (though pandas doesn’t check it).
otherscalar, Series/DataFrame, or callableEntries where cond is True are replaced with
corresponding value from other.
If other is callable, it is computed on the Series/DataFrame and
should return scalar or Series/DataFrame. The callable must not
change input Series/DataFrame (though pandas doesn’t check it).
inplacebool, default FalseWhether to perform the operation in place on the data.
axisint, default NoneAlignment axis if needed. For Series this parameter is
Does this work:
df.loc[df.groupby('Time')['p1'].idxmax()]
Output:
Time degree f1 p1 Intensity Distance
1 00:00 20 7.370757 0.506066 328000000 65
unused and defaults to 0.
levelint, default NoneAlignment level if needed.
errorsstr, {‘raise’, ‘ignore’}, default ‘raise’Note that currently this parameter won’t affect
the results and will always coerce to a suitable dtype.
‘raise’ : allow exceptions to be raised.
‘ignore’ : suppress exceptions. On error return original object.
Deprecated since version 1.5.0: This argument had no effect.
try_castbool, default NoneTry to cast the result back to the input type (if possible).
Deprecated since version 1.3.0: Manually cast back if necessary.
Returns
Same type as caller or None if inplace=True.
See also
DataFrame.where()Return an object of same shape as self.
Notes
The mask method is an application of the if-then idiom. For each
element in the calling DataFrame, if cond is False the
element is used; otherwise the corresponding element from the DataFrame
other is used. If the axis of other does not align with axis of
cond Series/DataFrame, the misaligned index positions will be filled with
True.
The signature for DataFrame.where() differs from
numpy.where(). Roughly df1.where(m, df2) is equivalent to
np.where(m, df1, df2).
For further details and examples see the mask documentation in
indexing.
The dtype of the object takes precedence. The fill value is casted to
the object’s dtype, if this can be done losslessly.
Examples
>>> s = pd.Series(range(5))
>>> s.where(s > 0)
0 NaN
1 1.0
2 2.0
3 3.0
4 4.0
dtype: float64
>>> s.mask(s > 0)
0 0.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
>>> s = pd.Series(range(5))
>>> t = pd.Series([True, False])
>>> s.where(t, 99)
0 0
1 99
2 99
3 99
4 99
dtype: int64
>>> s.mask(t, 99)
0 99
1 1
2 99
3 99
4 99
dtype: int64
>>> s.where(s > 1, 10)
0 10
1 10
2 2
3 3
4 4
dtype: int64
>>> s.mask(s > 1, 10)
0 0
1 1
2 10
3 10
4 10
dtype: int64
>>> df = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B'])
>>> df
A B
0 0 1
1 2 3
2 4 5
3 6 7
4 8 9
>>> m = df % 3 == 0
>>> df.where(m, -df)
A B
0 0 -1
1 -2 3
2 -4 -5
3 6 -7
4 -8 9
>>> df.where(m, -df) == np.where(m, df, -df)
A B
0 True True
1 True True
2 True True
3 True True
4 True True
>>> df.where(m, -df) == df.mask(~m, -df)
A B
0 True True
1 True True
2 True True
3 True True
4 True True
| 1,061
| 1,244
|
Pandas - max value in a cell and save rows corresponded to it
I have this problem, where I want to find a highest value in each segment.
Each segment means time, so all the rows corresponding to time, as you can see most of time step is five minute and for each step I need to find highest value in the 4th column, during that I need to save the whole row.
So far I came up with this:
import pandas as pd
import numpy as np
data = pd.read_csv(f'/home/20170116.csv', header=None, sep=';',
usecols=[0, 1, 2, 3, 4, 5], names=['Time', 'degree', 'f1', 'p1', 'Intensity', 'Distance'])
for i in range(1, 5473, 19):
print(data.iloc[:i])
My data looks like this:
00:00 0 7.44077320746235 0.453540438929378 317900000 67
00:00 10 7.39076196898179 0.487011284672025 341400000 67
00:00 20 7.37075747358957 0.506065836725554 328000000 65
00:00 30 7.34075073050124 0.495374317737197 321000000 65
00:00 40 7.33074848280513 0.473928991378983 379500000 70
00:00 50 7.33074848280513 0.429714866376765 344100000 70
00:00 60 7.34075073050124 0.378940997444553 461400000 77
00:00 70 7.37075747358957 0.330831053566623 402800000 77
00:00 80 7.43077095976624 0.28999520431443 353100000 77
00:00 90 7.50078669363902 0.256630783010184 312400000 77
00:00 -90 7.51078894133513 0.257848411262383 114700000 52
00:00 -80 7.59080692290402 0.226286016578661 92620000 48
00:00 -70 7.71083389525736 0.199411631799538 81620000 48
00:00 -60 7.81085637221848 0.178324045166602 217100000 77
00:00 -50 7.87086985839514 0.17447741754611 212400000 77
00:00 -40 7.8308608676107 0.209620778938056 276100000 78
00:00 -30 7.73083839064958 0.272603273214342 359100000 78
00:00 -20 7.61081141829625 0.341747195487005 361600000 75
00:00 -10 7.51078894133513 0.401902182098869 260500000 65
So above one segment is presented time increases every 5 minutes so I have 288 segments and each has 19 rows. And I need to find max value in the 4th column p1 and save the whole row to another file for example.
|
62,749,685
|
Loop pandas data frame
|
<p>i have below data frame and want to do loop:</p>
<pre><code>df = name
a
b
c
d
</code></pre>
<p>i have tried below code:</p>
<pre><code>for index, row in df.iterrows():
for line in df['name']:
print(index, line)
</code></pre>
<p>but the result i want is a dataframe as below:</p>
<pre><code>df = name name1
a a
a b
a c
a d
b a
b b
b c
b d
etc.
</code></pre>
<p>is there any possible way to do it? i know its a stupid question but im new to python</p>
| 62,749,744
| 2020-07-06T05:19:17.657000
| 2
| null | 3
| 65
|
python|pandas
|
<p>One way using <code>pandas.DataFrame.explode</code>:</p>
<pre><code>df["name1"] = [df["name"] for _ in df["name"]]
df.explode("name1")
</code></pre>
<p>Output:</p>
<pre><code> name name1
0 a a
0 a b
0 a c
0 a d
1 b a
1 b b
1 b c
1 b d
2 c a
2 c b
2 c c
2 c d
3 d a
3 d b
3 d c
3 d d
</code></pre>
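<p>If you prefer a clean 0..n-1 index, like in the desired output, a small follow-up sketch (run after the assignment above):</p>
<pre><code>df.explode("name1").reset_index(drop=True)
</code></pre>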
| 2020-07-06T05:25:40.667000
| 4
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.iterrows.html
|
pandas.DataFrame.iterrows#
One way using pandas.DataFrame.explode:
df["name1"] = [df["name"] for _ in df["name"]]
df.explode("name1")
Output:
name name1
0 a a
0 a b
0 a c
0 a d
1 b a
1 b b
1 b c
1 b d
2 c a
2 c b
2 c c
2 c d
3 d a
3 d b
3 d c
3 d d
pandas.DataFrame.iterrows#
DataFrame.iterrows()[source]#
Iterate over DataFrame rows as (index, Series) pairs.
Yields
indexlabel or tuple of labelThe index of the row. A tuple for a MultiIndex.
dataSeriesThe data of the row as a Series.
See also
DataFrame.itertuplesIterate over DataFrame rows as namedtuples of the values.
DataFrame.itemsIterate over (column name, Series) pairs.
Notes
Because iterrows returns a Series for each row,
it does not preserve dtypes across the rows (dtypes are
preserved across columns for DataFrames). For example,
>>> df = pd.DataFrame([[1, 1.5]], columns=['int', 'float'])
>>> row = next(df.iterrows())[1]
>>> row
int 1.0
float 1.5
Name: 0, dtype: float64
>>> print(row['int'].dtype)
float64
>>> print(df['int'].dtype)
int64
To preserve dtypes while iterating over the rows, it is better
to use itertuples() which returns namedtuples of the values
and which is generally faster than iterrows.
You should never modify something you are iterating over.
This is not guaranteed to work in all cases. Depending on the
data types, the iterator returns a copy and not a view, and writing
to it will have no effect.
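A minimal sketch of the itertuples() alternative, reusing the small df defined just above:
for row in df.itertuples():
    # attribute access; the int value stays an int and the float stays a float
    print(row.int, row.float)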
| 29
| 366
|
Loop pandas data frame
i have below data frame and want to do loop:
df = name
a
b
c
d
i have tried below code:
for index, row in df.iterrows():
for line in df['name']:
print(index, line)
but the result i want is a dataframe as below:
df = name name1
a a
a b
a c
a d
b a
b b
b c
b d
etc.
is there any possible way to do it? i know its a stupid question but im new to python
|
67,610,717
|
How to remove rows with more than one value in a cell in Pandas
|
<p>I have a data frame that looks like below:</p>
<pre><code> receiver_id sender_id
a,b,d c
a,d b
b a
a b
</code></pre>
<p>I would like to remove rows containing more than one <code>receiver_id</code>. So the final data frame should only have row 3 and 4. How should I go about doing that?</p>
<p>Desired output:</p>
<pre><code> receiver_id sender_id
b a
a b
</code></pre>
| 67,610,756
| 2021-05-19T20:50:57.267000
| 3
| 1
| 1
| 337
|
python|pandas
|
<p>You can boolean slice the data frame by looking for a comma, assuming the multiple values are a single string and not a list.</p>
<pre><code>df = df[~df.receiver_id.str.contains(',')].reset_index(drop=True)
</code></pre>
| 2021-05-19T20:53:31.227000
| 4
|
https://pandas.pydata.org/docs/user_guide/reshaping.html
|
Reshaping and pivot tables#
Reshaping and pivot tables#
Reshaping by pivoting DataFrame objects#
Data is often stored in so-called “stacked” or “record” format:
In [1]: import pandas._testing as tm
In [2]: def unpivot(frame):
...: N, K = frame.shape
...: data = {
...: "value": frame.to_numpy().ravel("F"),
...: "variable": np.asarray(frame.columns).repeat(N),
...: "date": np.tile(np.asarray(frame.index), K),
...: }
...: return pd.DataFrame(data, columns=["date", "variable", "value"])
...:
In [3]: df = unpivot(tm.makeTimeDataFrame(3))
You can boolean slice the data frame by looking for a comma, assuming the multiple values are a single string and not a list.
df = df[~df.receiver_id.str.contains(',')].reset_index(drop=True)
In [4]: df
Out[4]:
date variable value
0 2000-01-03 A 0.469112
1 2000-01-04 A -0.282863
2 2000-01-05 A -1.509059
3 2000-01-03 B -1.135632
4 2000-01-04 B 1.212112
5 2000-01-05 B -0.173215
6 2000-01-03 C 0.119209
7 2000-01-04 C -1.044236
8 2000-01-05 C -0.861849
9 2000-01-03 D -2.104569
10 2000-01-04 D -0.494929
11 2000-01-05 D 1.071804
To select out everything for variable A we could do:
In [5]: filtered = df[df["variable"] == "A"]
In [6]: filtered
Out[6]:
date variable value
0 2000-01-03 A 0.469112
1 2000-01-04 A -0.282863
2 2000-01-05 A -1.509059
But suppose we wish to do time series operations with the variables. A better
representation would be where the columns are the unique variables and an
index of dates identifies individual observations. To reshape the data into
this form, we use the DataFrame.pivot() method (also implemented as a
top level function pivot()):
In [7]: pivoted = df.pivot(index="date", columns="variable", values="value")
In [8]: pivoted
Out[8]:
variable A B C D
date
2000-01-03 0.469112 -1.135632 0.119209 -2.104569
2000-01-04 -0.282863 1.212112 -1.044236 -0.494929
2000-01-05 -1.509059 -0.173215 -0.861849 1.071804
If the values argument is omitted, and the input DataFrame has more than
one column of values which are not used as column or index inputs to pivot(),
then the resulting “pivoted” DataFrame will have hierarchical columns whose topmost level indicates the respective value
column:
In [9]: df["value2"] = df["value"] * 2
In [10]: pivoted = df.pivot(index="date", columns="variable")
In [11]: pivoted
Out[11]:
value ... value2
variable A B C ... B C D
date ...
2000-01-03 0.469112 -1.135632 0.119209 ... -2.271265 0.238417 -4.209138
2000-01-04 -0.282863 1.212112 -1.044236 ... 2.424224 -2.088472 -0.989859
2000-01-05 -1.509059 -0.173215 -0.861849 ... -0.346429 -1.723698 2.143608
[3 rows x 8 columns]
You can then select subsets from the pivoted DataFrame:
In [12]: pivoted["value2"]
Out[12]:
variable A B C D
date
2000-01-03 0.938225 -2.271265 0.238417 -4.209138
2000-01-04 -0.565727 2.424224 -2.088472 -0.989859
2000-01-05 -3.018117 -0.346429 -1.723698 2.143608
Note that this returns a view on the underlying data in the case where the data
are homogeneously-typed.
Note
pivot() will error with a ValueError: Index contains duplicate
entries, cannot reshape if the index/column pair is not unique. In this
case, consider using pivot_table() which is a generalization
of pivot that can handle duplicate values for one index/column pair.
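As a brief sketch of that fallback (illustrative data; pivot_table() aggregates the duplicated index/column pair, by default with the mean):
dup = pd.DataFrame({"date": ["d1", "d1"], "variable": ["A", "A"], "value": [1.0, 3.0]})
dup.pivot_table(index="date", columns="variable", values="value")  # (d1, A) becomes 2.0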
Reshaping by stacking and unstacking#
Closely related to the pivot() method are the related
stack() and unstack() methods available on
Series and DataFrame. These methods are designed to work together with
MultiIndex objects (see the section on hierarchical indexing). Here are essentially what these methods do:
stack(): “pivot” a level of the (possibly hierarchical) column labels,
returning a DataFrame with an index with a new inner-most level of row
labels.
unstack(): (inverse operation of stack()) “pivot” a level of the
(possibly hierarchical) row index to the column axis, producing a reshaped
DataFrame with a new inner-most level of column labels.
The clearest way to explain is by example. Let’s take a prior example data set
from the hierarchical indexing section:
In [13]: tuples = list(
....: zip(
....: *[
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....: )
....: )
....:
In [14]: index = pd.MultiIndex.from_tuples(tuples, names=["first", "second"])
In [15]: df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=["A", "B"])
In [16]: df2 = df[:4]
In [17]: df2
Out[17]:
A B
first second
bar one 0.721555 -0.706771
two -1.039575 0.271860
baz one -0.424972 0.567020
two 0.276232 -1.087401
The stack() function “compresses” a level in the DataFrame columns to
produce either:
A Series, in the case of a simple column Index.
A DataFrame, in the case of a MultiIndex in the columns.
If the columns have a MultiIndex, you can choose which level to stack. The
stacked level becomes the new lowest level in a MultiIndex on the columns:
In [18]: stacked = df2.stack()
In [19]: stacked
Out[19]:
first second
bar one A 0.721555
B -0.706771
two A -1.039575
B 0.271860
baz one A -0.424972
B 0.567020
two A 0.276232
B -1.087401
dtype: float64
With a “stacked” DataFrame or Series (having a MultiIndex as the
index), the inverse operation of stack() is unstack(), which by default
unstacks the last level:
In [20]: stacked.unstack()
Out[20]:
A B
first second
bar one 0.721555 -0.706771
two -1.039575 0.271860
baz one -0.424972 0.567020
two 0.276232 -1.087401
In [21]: stacked.unstack(1)
Out[21]:
second one two
first
bar A 0.721555 -1.039575
B -0.706771 0.271860
baz A -0.424972 0.276232
B 0.567020 -1.087401
In [22]: stacked.unstack(0)
Out[22]:
first bar baz
second
one A 0.721555 -0.424972
B -0.706771 0.567020
two A -1.039575 0.276232
B 0.271860 -1.087401
If the indexes have names, you can use the level names instead of specifying
the level numbers:
In [23]: stacked.unstack("second")
Out[23]:
second one two
first
bar A 0.721555 -1.039575
B -0.706771 0.271860
baz A -0.424972 0.276232
B 0.567020 -1.087401
Notice that the stack() and unstack() methods implicitly sort the index
levels involved. Hence a call to stack() and then unstack(), or vice versa,
will result in a sorted copy of the original DataFrame or Series:
In [24]: index = pd.MultiIndex.from_product([[2, 1], ["a", "b"]])
In [25]: df = pd.DataFrame(np.random.randn(4), index=index, columns=["A"])
In [26]: df
Out[26]:
A
2 a -0.370647
b -1.157892
1 a -1.344312
b 0.844885
In [27]: all(df.unstack().stack() == df.sort_index())
Out[27]: True
The above code will raise a TypeError if the call to sort_index() is
removed.
Multiple levels#
You may also stack or unstack more than one level at a time by passing a list
of levels, in which case the end result is as if each level in the list were
processed individually.
In [28]: columns = pd.MultiIndex.from_tuples(
....: [
....: ("A", "cat", "long"),
....: ("B", "cat", "long"),
....: ("A", "dog", "short"),
....: ("B", "dog", "short"),
....: ],
....: names=["exp", "animal", "hair_length"],
....: )
....:
In [29]: df = pd.DataFrame(np.random.randn(4, 4), columns=columns)
In [30]: df
Out[30]:
exp A B A B
animal cat cat dog dog
hair_length long long short short
0 1.075770 -0.109050 1.643563 -1.469388
1 0.357021 -0.674600 -1.776904 -0.968914
2 -1.294524 0.413738 0.276662 -0.472035
3 -0.013960 -0.362543 -0.006154 -0.923061
In [31]: df.stack(level=["animal", "hair_length"])
Out[31]:
exp A B
animal hair_length
0 cat long 1.075770 -0.109050
dog short 1.643563 -1.469388
1 cat long 0.357021 -0.674600
dog short -1.776904 -0.968914
2 cat long -1.294524 0.413738
dog short 0.276662 -0.472035
3 cat long -0.013960 -0.362543
dog short -0.006154 -0.923061
The list of levels can contain either level names or level numbers (but
not a mixture of the two).
# df.stack(level=['animal', 'hair_length'])
# from above is equivalent to:
In [32]: df.stack(level=[1, 2])
Out[32]:
exp A B
animal hair_length
0 cat long 1.075770 -0.109050
dog short 1.643563 -1.469388
1 cat long 0.357021 -0.674600
dog short -1.776904 -0.968914
2 cat long -1.294524 0.413738
dog short 0.276662 -0.472035
3 cat long -0.013960 -0.362543
dog short -0.006154 -0.923061
Missing data#
These functions are intelligent about handling missing data and do not expect
each subgroup within the hierarchical index to have the same set of labels.
They also can handle the index being unsorted (but you can make it sorted by
calling sort_index(), of course). Here is a more complex example:
In [33]: columns = pd.MultiIndex.from_tuples(
....: [
....: ("A", "cat"),
....: ("B", "dog"),
....: ("B", "cat"),
....: ("A", "dog"),
....: ],
....: names=["exp", "animal"],
....: )
....:
In [34]: index = pd.MultiIndex.from_product(
....: [("bar", "baz", "foo", "qux"), ("one", "two")], names=["first", "second"]
....: )
....:
In [35]: df = pd.DataFrame(np.random.randn(8, 4), index=index, columns=columns)
In [36]: df2 = df.iloc[[0, 1, 2, 4, 5, 7]]
In [37]: df2
Out[37]:
exp A B A
animal cat dog cat dog
first second
bar one 0.895717 0.805244 -1.206412 2.565646
two 1.431256 1.340309 -1.170299 -0.226169
baz one 0.410835 0.813850 0.132003 -0.827317
foo one -1.413681 1.607920 1.024180 0.569605
two 0.875906 -2.211372 0.974466 -2.006747
qux two -1.226825 0.769804 -1.281247 -0.727707
As mentioned above, stack() can be called with a level argument to select
which level in the columns to stack:
In [38]: df2.stack("exp")
Out[38]:
animal cat dog
first second exp
bar one A 0.895717 2.565646
B -1.206412 0.805244
two A 1.431256 -0.226169
B -1.170299 1.340309
baz one A 0.410835 -0.827317
B 0.132003 0.813850
foo one A -1.413681 0.569605
B 1.024180 1.607920
two A 0.875906 -2.006747
B 0.974466 -2.211372
qux two A -1.226825 -0.727707
B -1.281247 0.769804
In [39]: df2.stack("animal")
Out[39]:
exp A B
first second animal
bar one cat 0.895717 -1.206412
dog 2.565646 0.805244
two cat 1.431256 -1.170299
dog -0.226169 1.340309
baz one cat 0.410835 0.132003
dog -0.827317 0.813850
foo one cat -1.413681 1.024180
dog 0.569605 1.607920
two cat 0.875906 0.974466
dog -2.006747 -2.211372
qux two cat -1.226825 -1.281247
dog -0.727707 0.769804
Unstacking can result in missing values if subgroups do not have the same
set of labels. By default, missing values will be replaced with the default
fill value for that data type, NaN for float, NaT for datetimelike,
etc. For integer types, by default the data will be converted to float and missing
values will be set to NaN.
In [40]: df3 = df.iloc[[0, 1, 4, 7], [1, 2]]
In [41]: df3
Out[41]:
exp B
animal dog cat
first second
bar one 0.805244 -1.206412
two 1.340309 -1.170299
foo one 1.607920 1.024180
qux two 0.769804 -1.281247
In [42]: df3.unstack()
Out[42]:
exp B
animal dog cat
second one two one two
first
bar 0.805244 1.340309 -1.206412 -1.170299
foo 1.607920 NaN 1.024180 NaN
qux NaN 0.769804 NaN -1.281247
Alternatively, unstack takes an optional fill_value argument, for specifying
the value of missing data.
In [43]: df3.unstack(fill_value=-1e9)
Out[43]:
exp B
animal dog cat
second one two one two
first
bar 8.052440e-01 1.340309e+00 -1.206412e+00 -1.170299e+00
foo 1.607920e+00 -1.000000e+09 1.024180e+00 -1.000000e+09
qux -1.000000e+09 7.698036e-01 -1.000000e+09 -1.281247e+00
With a MultiIndex#
Unstacking when the columns are a MultiIndex is also careful about doing
the right thing:
In [44]: df[:3].unstack(0)
Out[44]:
exp A B ... A
animal cat dog ... cat dog
first bar baz bar ... baz bar baz
second ...
one 0.895717 0.410835 0.805244 ... 0.132003 2.565646 -0.827317
two 1.431256 NaN 1.340309 ... NaN -0.226169 NaN
[2 rows x 8 columns]
In [45]: df2.unstack(1)
Out[45]:
exp A B ... A
animal cat dog ... cat dog
second one two one ... two one two
first ...
bar 0.895717 1.431256 0.805244 ... -1.170299 2.565646 -0.226169
baz 0.410835 NaN 0.813850 ... NaN -0.827317 NaN
foo -1.413681 0.875906 1.607920 ... 0.974466 0.569605 -2.006747
qux NaN -1.226825 NaN ... -1.281247 NaN -0.727707
[4 rows x 8 columns]
Reshaping by melt#
The top-level melt() function and the corresponding DataFrame.melt()
are useful to massage a DataFrame into a format where one or more columns
are identifier variables, while all other columns, considered measured
variables, are “unpivoted” to the row axis, leaving just two non-identifier
columns, “variable” and “value”. The names of those columns can be customized
by supplying the var_name and value_name parameters.
For instance,
In [46]: cheese = pd.DataFrame(
....: {
....: "first": ["John", "Mary"],
....: "last": ["Doe", "Bo"],
....: "height": [5.5, 6.0],
....: "weight": [130, 150],
....: }
....: )
....:
In [47]: cheese
Out[47]:
first last height weight
0 John Doe 5.5 130
1 Mary Bo 6.0 150
In [48]: cheese.melt(id_vars=["first", "last"])
Out[48]:
first last variable value
0 John Doe height 5.5
1 Mary Bo height 6.0
2 John Doe weight 130.0
3 Mary Bo weight 150.0
In [49]: cheese.melt(id_vars=["first", "last"], var_name="quantity")
Out[49]:
first last quantity value
0 John Doe height 5.5
1 Mary Bo height 6.0
2 John Doe weight 130.0
3 Mary Bo weight 150.0
When transforming a DataFrame using melt(), the index will be ignored. The original index values can be kept around by setting the ignore_index parameter to False (default is True). Note, however, that the index values will then be duplicated for each unpivoted column.
New in version 1.1.0.
In [50]: index = pd.MultiIndex.from_tuples([("person", "A"), ("person", "B")])
In [51]: cheese = pd.DataFrame(
....: {
....: "first": ["John", "Mary"],
....: "last": ["Doe", "Bo"],
....: "height": [5.5, 6.0],
....: "weight": [130, 150],
....: },
....: index=index,
....: )
....:
In [52]: cheese
Out[52]:
first last height weight
person A John Doe 5.5 130
B Mary Bo 6.0 150
In [53]: cheese.melt(id_vars=["first", "last"])
Out[53]:
first last variable value
0 John Doe height 5.5
1 Mary Bo height 6.0
2 John Doe weight 130.0
3 Mary Bo weight 150.0
In [54]: cheese.melt(id_vars=["first", "last"], ignore_index=False)
Out[54]:
first last variable value
person A John Doe height 5.5
B Mary Bo height 6.0
A John Doe weight 130.0
B Mary Bo weight 150.0
Another way to transform is to use the wide_to_long() panel data
convenience function. It is less flexible than melt(), but more
user-friendly.
In [55]: dft = pd.DataFrame(
....: {
....: "A1970": {0: "a", 1: "b", 2: "c"},
....: "A1980": {0: "d", 1: "e", 2: "f"},
....: "B1970": {0: 2.5, 1: 1.2, 2: 0.7},
....: "B1980": {0: 3.2, 1: 1.3, 2: 0.1},
....: "X": dict(zip(range(3), np.random.randn(3))),
....: }
....: )
....:
In [56]: dft["id"] = dft.index
In [57]: dft
Out[57]:
A1970 A1980 B1970 B1980 X id
0 a d 2.5 3.2 -0.121306 0
1 b e 1.2 1.3 -0.097883 1
2 c f 0.7 0.1 0.695775 2
In [58]: pd.wide_to_long(dft, ["A", "B"], i="id", j="year")
Out[58]:
X A B
id year
0 1970 -0.121306 a 2.5
1 1970 -0.097883 b 1.2
2 1970 0.695775 c 0.7
0 1980 -0.121306 d 3.2
1 1980 -0.097883 e 1.3
2 1980 0.695775 f 0.1
Combining with stats and GroupBy#
It should be no shock that combining pivot() / stack() / unstack() with
GroupBy and the basic Series and DataFrame statistical functions can produce
some very expressive and fast data manipulations.
In [59]: df
Out[59]:
exp A B A
animal cat dog cat dog
first second
bar one 0.895717 0.805244 -1.206412 2.565646
two 1.431256 1.340309 -1.170299 -0.226169
baz one 0.410835 0.813850 0.132003 -0.827317
two -0.076467 -1.187678 1.130127 -1.436737
foo one -1.413681 1.607920 1.024180 0.569605
two 0.875906 -2.211372 0.974466 -2.006747
qux one -0.410001 -0.078638 0.545952 -1.219217
two -1.226825 0.769804 -1.281247 -0.727707
In [60]: df.stack().mean(1).unstack()
Out[60]:
animal cat dog
first second
bar one -0.155347 1.685445
two 0.130479 0.557070
baz one 0.271419 -0.006733
two 0.526830 -1.312207
foo one -0.194750 1.088763
two 0.925186 -2.109060
qux one 0.067976 -0.648927
two -1.254036 0.021048
# same result, another way
In [61]: df.groupby(level=1, axis=1).mean()
Out[61]:
animal cat dog
first second
bar one -0.155347 1.685445
two 0.130479 0.557070
baz one 0.271419 -0.006733
two 0.526830 -1.312207
foo one -0.194750 1.088763
two 0.925186 -2.109060
qux one 0.067976 -0.648927
two -1.254036 0.021048
In [62]: df.stack().groupby(level=1).mean()
Out[62]:
exp A B
second
one 0.071448 0.455513
two -0.424186 -0.204486
In [63]: df.mean().unstack(0)
Out[63]:
exp A B
animal
cat 0.060843 0.018596
dog -0.413580 0.232430
Pivot tables#
While pivot() provides general purpose pivoting with various
data types (strings, numerics, etc.), pandas also provides pivot_table()
for pivoting with aggregation of numeric data.
The function pivot_table() can be used to create spreadsheet-style
pivot tables. See the cookbook for some advanced
strategies.
It takes a number of arguments:
data: a DataFrame object.
values: a column or a list of columns to aggregate.
index: a column, Grouper, array which has the same length as data, or list of them.
Keys to group by on the pivot table index. If an array is passed, it is used in the same manner as column values.
columns: a column, Grouper, array which has the same length as data, or list of them.
Keys to group by on the pivot table column. If an array is passed, it is used in the same manner as column values.
aggfunc: function to use for aggregation, defaulting to numpy.mean.
Consider a data set like this:
In [64]: import datetime
In [65]: df = pd.DataFrame(
....: {
....: "A": ["one", "one", "two", "three"] * 6,
....: "B": ["A", "B", "C"] * 8,
....: "C": ["foo", "foo", "foo", "bar", "bar", "bar"] * 4,
....: "D": np.random.randn(24),
....: "E": np.random.randn(24),
....: "F": [datetime.datetime(2013, i, 1) for i in range(1, 13)]
....: + [datetime.datetime(2013, i, 15) for i in range(1, 13)],
....: }
....: )
....:
In [66]: df
Out[66]:
A B C D E F
0 one A foo 0.341734 -0.317441 2013-01-01
1 one B foo 0.959726 -1.236269 2013-02-01
2 two C foo -1.110336 0.896171 2013-03-01
3 three A bar -0.619976 -0.487602 2013-04-01
4 one B bar 0.149748 -0.082240 2013-05-01
.. ... .. ... ... ... ...
19 three B foo 0.690579 -2.213588 2013-08-15
20 one C foo 0.995761 1.063327 2013-09-15
21 one A bar 2.396780 1.266143 2013-10-15
22 two B bar 0.014871 0.299368 2013-11-15
23 three C bar 3.357427 -0.863838 2013-12-15
[24 rows x 6 columns]
We can produce pivot tables from this data very easily:
In [67]: pd.pivot_table(df, values="D", index=["A", "B"], columns=["C"])
Out[67]:
C bar foo
A B
one A 1.120915 -0.514058
B -0.338421 0.002759
C -0.538846 0.699535
three A -1.181568 NaN
B NaN 0.433512
C 0.588783 NaN
two A NaN 1.000985
B 0.158248 NaN
C NaN 0.176180
In [68]: pd.pivot_table(df, values="D", index=["B"], columns=["A", "C"], aggfunc=np.sum)
Out[68]:
A one three two
C bar foo bar foo bar foo
B
A 2.241830 -1.028115 -2.363137 NaN NaN 2.001971
B -0.676843 0.005518 NaN 0.867024 0.316495 NaN
C -1.077692 1.399070 1.177566 NaN NaN 0.352360
In [69]: pd.pivot_table(
....: df, values=["D", "E"],
....: index=["B"],
....: columns=["A", "C"],
....: aggfunc=np.sum,
....: )
....:
Out[69]:
D ... E
A one three ... three two
C bar foo bar ... foo bar foo
B ...
A 2.241830 -1.028115 -2.363137 ... NaN NaN 0.128491
B -0.676843 0.005518 NaN ... -2.128743 -0.194294 NaN
C -1.077692 1.399070 1.177566 ... NaN NaN 0.872482
[3 rows x 12 columns]
The result object is a DataFrame having potentially hierarchical indexes on the
rows and columns. If the values column name is not given, the pivot table
will include all of the data in an additional level of hierarchy in the columns:
In [70]: pd.pivot_table(df[["A", "B", "C", "D", "E"]], index=["A", "B"], columns=["C"])
Out[70]:
D E
C bar foo bar foo
A B
one A 1.120915 -0.514058 1.393057 -0.021605
B -0.338421 0.002759 0.684140 -0.551692
C -0.538846 0.699535 -0.988442 0.747859
three A -1.181568 NaN 0.961289 NaN
B NaN 0.433512 NaN -1.064372
C 0.588783 NaN -0.131830 NaN
two A NaN 1.000985 NaN 0.064245
B 0.158248 NaN -0.097147 NaN
C NaN 0.176180 NaN 0.436241
Also, you can use Grouper for index and columns keywords. For detail of Grouper, see Grouping with a Grouper specification.
In [71]: pd.pivot_table(df, values="D", index=pd.Grouper(freq="M", key="F"), columns="C")
Out[71]:
C bar foo
F
2013-01-31 NaN -0.514058
2013-02-28 NaN 0.002759
2013-03-31 NaN 0.176180
2013-04-30 -1.181568 NaN
2013-05-31 -0.338421 NaN
2013-06-30 -0.538846 NaN
2013-07-31 NaN 1.000985
2013-08-31 NaN 0.433512
2013-09-30 NaN 0.699535
2013-10-31 1.120915 NaN
2013-11-30 0.158248 NaN
2013-12-31 0.588783 NaN
You can render a nice output of the table omitting the missing values by
calling to_string() if you wish:
In [72]: table = pd.pivot_table(df, index=["A", "B"], columns=["C"], values=["D", "E"])
In [73]: print(table.to_string(na_rep=""))
D E
C bar foo bar foo
A B
one A 1.120915 -0.514058 1.393057 -0.021605
B -0.338421 0.002759 0.684140 -0.551692
C -0.538846 0.699535 -0.988442 0.747859
three A -1.181568 0.961289
B 0.433512 -1.064372
C 0.588783 -0.131830
two A 1.000985 0.064245
B 0.158248 -0.097147
C 0.176180 0.436241
Note that pivot_table() is also available as an instance method on DataFrame, i.e. DataFrame.pivot_table().
Adding margins#
If you pass margins=True to pivot_table(), special All columns and
rows will be added with partial group aggregates across the categories on the
rows and columns:
In [74]: table = df.pivot_table(
....: index=["A", "B"],
....: columns="C",
....: values=["D", "E"],
....: margins=True,
....: aggfunc=np.std
....: )
....:
In [75]: table
Out[75]:
D E
C bar foo All bar foo All
A B
one A 1.804346 1.210272 1.569879 0.179483 0.418374 0.858005
B 0.690376 1.353355 0.898998 1.083825 0.968138 1.101401
C 0.273641 0.418926 0.771139 1.689271 0.446140 1.422136
three A 0.794212 NaN 0.794212 2.049040 NaN 2.049040
B NaN 0.363548 0.363548 NaN 1.625237 1.625237
C 3.915454 NaN 3.915454 1.035215 NaN 1.035215
two A NaN 0.442998 0.442998 NaN 0.447104 0.447104
B 0.202765 NaN 0.202765 0.560757 NaN 0.560757
C NaN 1.819408 1.819408 NaN 0.650439 0.650439
All 1.556686 0.952552 1.246608 1.250924 0.899904 1.059389
Additionally, you can call DataFrame.stack() to display a pivoted DataFrame
as having a multi-level index:
In [76]: table.stack()
Out[76]:
D E
A B C
one A All 1.569879 0.858005
bar 1.804346 0.179483
foo 1.210272 0.418374
B All 0.898998 1.101401
bar 0.690376 1.083825
... ... ...
two C All 1.819408 0.650439
foo 1.819408 0.650439
All All 1.246608 1.059389
bar 1.556686 1.250924
foo 0.952552 0.899904
[24 rows x 2 columns]
Cross tabulations#
Use crosstab() to compute a cross-tabulation of two (or more)
factors. By default crosstab() computes a frequency table of the factors
unless an array of values and an aggregation function are passed.
It takes a number of arguments
index: array-like, values to group by in the rows.
columns: array-like, values to group by in the columns.
values: array-like, optional, array of values to aggregate according to
the factors.
aggfunc: function, optional. If no values array is passed, computes a
frequency table.
rownames: sequence, default None, must match number of row arrays passed.
colnames: sequence, default None, if passed, must match number of column
arrays passed.
margins: boolean, default False, Add row/column margins (subtotals)
normalize: boolean, {‘all’, ‘index’, ‘columns’}, or {0,1}, default False.
Normalize by dividing all values by the sum of values.
Any Series passed will have their name attributes used unless row or column
names for the cross-tabulation are specified.
For example:
In [77]: foo, bar, dull, shiny, one, two = "foo", "bar", "dull", "shiny", "one", "two"
In [78]: a = np.array([foo, foo, bar, bar, foo, foo], dtype=object)
In [79]: b = np.array([one, one, two, one, two, one], dtype=object)
In [80]: c = np.array([dull, dull, shiny, dull, dull, shiny], dtype=object)
In [81]: pd.crosstab(a, [b, c], rownames=["a"], colnames=["b", "c"])
Out[81]:
b one two
c dull shiny dull shiny
a
bar 1 0 0 1
foo 2 1 1 0
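Returning to the note above about name attributes, a small sketch with hypothetical Series names shows them being picked up as the row and column labels:
s1 = pd.Series(["x", "y", "x"], name="left")
s2 = pd.Series(["u", "u", "v"], name="right")
pd.crosstab(s1, s2)  # the row axis is labelled "left", the column axis "right"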
If crosstab() receives only two Series, it will provide a frequency table.
In [82]: df = pd.DataFrame(
....: {"A": [1, 2, 2, 2, 2], "B": [3, 3, 4, 4, 4], "C": [1, 1, np.nan, 1, 1]}
....: )
....:
In [83]: df
Out[83]:
A B C
0 1 3 1.0
1 2 3 1.0
2 2 4 NaN
3 2 4 1.0
4 2 4 1.0
In [84]: pd.crosstab(df["A"], df["B"])
Out[84]:
B 3 4
A
1 1 0
2 1 3
crosstab() can also be applied
to Categorical data.
In [85]: foo = pd.Categorical(["a", "b"], categories=["a", "b", "c"])
In [86]: bar = pd.Categorical(["d", "e"], categories=["d", "e", "f"])
In [87]: pd.crosstab(foo, bar)
Out[87]:
col_0 d e
row_0
a 1 0
b 0 1
If you want to include all of the data categories, even if the actual data does
not contain any instances of a particular category, you should set dropna=False.
For example:
In [88]: pd.crosstab(foo, bar, dropna=False)
Out[88]:
col_0 d e f
row_0
a 1 0 0
b 0 1 0
c 0 0 0
Normalization#
Frequency tables can also be normalized to show percentages rather than counts
using the normalize argument:
In [89]: pd.crosstab(df["A"], df["B"], normalize=True)
Out[89]:
B 3 4
A
1 0.2 0.0
2 0.2 0.6
normalize can also normalize values within each row or within each column:
In [90]: pd.crosstab(df["A"], df["B"], normalize="columns")
Out[90]:
B 3 4
A
1 0.5 0.0
2 0.5 1.0
crosstab() can also be passed a third Series and an aggregation function
(aggfunc) that will be applied to the values of the third Series within
each group defined by the first two Series:
In [91]: pd.crosstab(df["A"], df["B"], values=df["C"], aggfunc=np.sum)
Out[91]:
B 3 4
A
1 1.0 NaN
2 1.0 2.0
Adding margins#
Finally, one can also add margins or normalize this output.
In [92]: pd.crosstab(
....: df["A"], df["B"], values=df["C"], aggfunc=np.sum, normalize=True, margins=True
....: )
....:
Out[92]:
B 3 4 All
A
1 0.25 0.0 0.25
2 0.25 0.5 0.75
All 0.50 0.5 1.00
Tiling#
The cut() function computes groupings for the values of the input
array and is often used to transform continuous variables to discrete or
categorical variables:
In [93]: ages = np.array([10, 15, 13, 12, 23, 25, 28, 59, 60])
In [94]: pd.cut(ages, bins=3)
Out[94]:
[(9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (26.667, 43.333], (43.333, 60.0], (43.333, 60.0]]
Categories (3, interval[float64, right]): [(9.95, 26.667] < (26.667, 43.333] < (43.333, 60.0]]
If the bins keyword is an integer, then equal-width bins are formed.
Alternatively we can specify custom bin-edges:
In [95]: c = pd.cut(ages, bins=[0, 18, 35, 70])
In [96]: c
Out[96]:
[(0, 18], (0, 18], (0, 18], (0, 18], (18, 35], (18, 35], (18, 35], (35, 70], (35, 70]]
Categories (3, interval[int64, right]): [(0, 18] < (18, 35] < (35, 70]]
If the bins keyword is an IntervalIndex, then these will be
used to bin the passed data:
pd.cut([25, 20, 50], bins=c.categories)
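A rough sketch of the IntervalIndex case, reusing bin edges from an earlier cut (the input values here are made up for illustration):
import pandas as pd

ages = [10, 15, 13, 12, 23, 25, 28, 59, 60]
c = pd.cut(ages, bins=[0, 18, 35, 70])
# c.categories is an IntervalIndex; new values are assigned to the matching
# interval, and values that fall outside every interval become NaN.
binned = pd.cut([25, 20, 50], bins=c.categories)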
Computing indicator / dummy variables#
To convert a categorical variable into a “dummy” or “indicator” DataFrame,
for example a column in a DataFrame (a Series) which has k distinct
values, you can derive a DataFrame containing k columns of 1s and 0s using
get_dummies():
In [97]: df = pd.DataFrame({"key": list("bbacab"), "data1": range(6)})
In [98]: pd.get_dummies(df["key"])
Out[98]:
a b c
0 0 1 0
1 0 1 0
2 1 0 0
3 0 0 1
4 1 0 0
5 0 1 0
Sometimes it’s useful to prefix the column names, for example when merging the result
with the original DataFrame:
In [99]: dummies = pd.get_dummies(df["key"], prefix="key")
In [100]: dummies
Out[100]:
key_a key_b key_c
0 0 1 0
1 0 1 0
2 1 0 0
3 0 0 1
4 1 0 0
5 0 1 0
In [101]: df[["data1"]].join(dummies)
Out[101]:
data1 key_a key_b key_c
0 0 0 1 0
1 1 0 1 0
2 2 1 0 0
3 3 0 0 1
4 4 1 0 0
5 5 0 1 0
This function is often used along with discretization functions like cut():
In [102]: values = np.random.randn(10)
In [103]: values
Out[103]:
array([ 0.4082, -1.0481, -0.0257, -0.9884, 0.0941, 1.2627, 1.29 ,
0.0824, -0.0558, 0.5366])
In [104]: bins = [0, 0.2, 0.4, 0.6, 0.8, 1]
In [105]: pd.get_dummies(pd.cut(values, bins))
Out[105]:
(0.0, 0.2] (0.2, 0.4] (0.4, 0.6] (0.6, 0.8] (0.8, 1.0]
0 0 0 1 0 0
1 0 0 0 0 0
2 0 0 0 0 0
3 0 0 0 0 0
4 1 0 0 0 0
5 0 0 0 0 0
6 0 0 0 0 0
7 1 0 0 0 0
8 0 0 0 0 0
9 0 0 1 0 0
See also Series.str.get_dummies.
get_dummies() also accepts a DataFrame. By default all categorical
variables (categorical in the statistical sense, those with object or
categorical dtype) are encoded as dummy variables.
In [106]: df = pd.DataFrame({"A": ["a", "b", "a"], "B": ["c", "c", "b"], "C": [1, 2, 3]})
In [107]: pd.get_dummies(df)
Out[107]:
C A_a A_b B_b B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
All non-object columns are included untouched in the output. You can control
the columns that are encoded with the columns keyword.
In [108]: pd.get_dummies(df, columns=["A"])
Out[108]:
B C A_a A_b
0 c 1 1 0
1 c 2 0 1
2 b 3 1 0
Notice that the B column is still included in the output; it just hasn’t
been encoded. You can drop B before calling get_dummies if you don’t
want to include it in the output.
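A minimal sketch of that approach (the frame mirrors the one above):
import pandas as pd

df = pd.DataFrame({"A": ["a", "b", "a"], "B": ["c", "c", "b"], "C": [1, 2, 3]})
# Drop the column you don't want encoded, then encode the remaining columns.
pd.get_dummies(df.drop(columns=["B"]))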
As with the Series version, you can pass values for the prefix and
prefix_sep. By default the column name is used as the prefix, and _ as
the prefix separator. You can specify prefix and prefix_sep in 3 ways:
string: Use the same value for prefix or prefix_sep for each column
to be encoded.
list: Must be the same length as the number of columns being encoded.
dict: Mapping column name to prefix.
In [109]: simple = pd.get_dummies(df, prefix="new_prefix")
In [110]: simple
Out[110]:
C new_prefix_a new_prefix_b new_prefix_b new_prefix_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
In [111]: from_list = pd.get_dummies(df, prefix=["from_A", "from_B"])
In [112]: from_list
Out[112]:
C from_A_a from_A_b from_B_b from_B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
In [113]: from_dict = pd.get_dummies(df, prefix={"B": "from_B", "A": "from_A"})
In [114]: from_dict
Out[114]:
C from_A_a from_A_b from_B_b from_B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
Sometimes it will be useful to only keep k-1 levels of a categorical
variable to avoid collinearity when feeding the result to statistical models.
You can switch to this mode by turning on drop_first.
In [115]: s = pd.Series(list("abcaa"))
In [116]: pd.get_dummies(s)
Out[116]:
a b c
0 1 0 0
1 0 1 0
2 0 0 1
3 1 0 0
4 1 0 0
In [117]: pd.get_dummies(s, drop_first=True)
Out[117]:
b c
0 0 0
1 1 0
2 0 1
3 0 0
4 0 0
When a column contains only one level, it will be omitted in the result.
In [118]: df = pd.DataFrame({"A": list("aaaaa"), "B": list("ababc")})
In [119]: pd.get_dummies(df)
Out[119]:
A_a B_a B_b B_c
0 1 1 0 0
1 1 0 1 0
2 1 1 0 0
3 1 0 1 0
4 1 0 0 1
In [120]: pd.get_dummies(df, drop_first=True)
Out[120]:
B_b B_c
0 0 0
1 1 0
2 0 0
3 1 0
4 0 1
By default new columns will have np.uint8 dtype.
To choose another dtype, use the dtype argument:
In [121]: df = pd.DataFrame({"A": list("abc"), "B": [1.1, 2.2, 3.3]})
In [122]: pd.get_dummies(df, dtype=bool).dtypes
Out[122]:
B float64
A_a bool
A_b bool
A_c bool
dtype: object
New in version 1.5.0.
To convert a “dummy” or “indicator” DataFrame into a categorical DataFrame,
for example k columns of a DataFrame containing 1s and 0s, you can derive a
DataFrame which has k distinct values using
from_dummies():
In [123]: df = pd.DataFrame({"prefix_a": [0, 1, 0], "prefix_b": [1, 0, 1]})
In [124]: df
Out[124]:
prefix_a prefix_b
0 0 1
1 1 0
2 0 1
In [125]: pd.from_dummies(df, sep="_")
Out[125]:
prefix
0 b
1 a
2 b
Dummy-coded data only requires k - 1 categories to be included; in this case
the k-th category is the default category, implied by not being assigned any of
the other k - 1 categories. The default category can be passed via default_category.
In [126]: df = pd.DataFrame({"prefix_a": [0, 1, 0]})
In [127]: df
Out[127]:
prefix_a
0 0
1 1
2 0
In [128]: pd.from_dummies(df, sep="_", default_category="b")
Out[128]:
prefix
0 b
1 a
2 b
Factorizing values#
To encode 1-d values as an enumerated type use factorize():
In [129]: x = pd.Series(["A", "A", np.nan, "B", 3.14, np.inf])
In [130]: x
Out[130]:
0 A
1 A
2 NaN
3 B
4 3.14
5 inf
dtype: object
In [131]: labels, uniques = pd.factorize(x)
In [132]: labels
Out[132]: array([ 0, 0, -1, 1, 2, 3])
In [133]: uniques
Out[133]: Index(['A', 'B', 3.14, inf], dtype='object')
Note that factorize() is similar to numpy.unique, but differs in its
handling of NaN:
Note
The following numpy.unique will fail under Python 3 with a TypeError
because of an ordering bug. See also
here.
In [134]: ser = pd.Series(['A', 'A', np.nan, 'B', 3.14, np.inf])
In [135]: pd.factorize(ser, sort=True)
Out[135]: (array([ 2, 2, -1, 3, 0, 1]), Index([3.14, inf, 'A', 'B'], dtype='object'))
In [136]: np.unique(ser, return_inverse=True)[::-1]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[136], line 1
----> 1 np.unique(ser, return_inverse=True)[::-1]
File <__array_function__ internals>:180, in unique(*args, **kwargs)
File ~/micromamba/envs/test/lib/python3.8/site-packages/numpy/lib/arraysetops.py:274, in unique(ar, return_index, return_inverse, return_counts, axis, equal_nan)
272 ar = np.asanyarray(ar)
273 if axis is None:
--> 274 ret = _unique1d(ar, return_index, return_inverse, return_counts,
275 equal_nan=equal_nan)
276 return _unpack_tuple(ret)
278 # axis was specified and not None
File ~/micromamba/envs/test/lib/python3.8/site-packages/numpy/lib/arraysetops.py:333, in _unique1d(ar, return_index, return_inverse, return_counts, equal_nan)
330 optional_indices = return_index or return_inverse
332 if optional_indices:
--> 333 perm = ar.argsort(kind='mergesort' if return_index else 'quicksort')
334 aux = ar[perm]
335 else:
TypeError: '<' not supported between instances of 'float' and 'str'
Note
If you just want to handle one column as a categorical variable (like R’s factor),
you can use df["cat_col"] = pd.Categorical(df["col"]) or
df["cat_col"] = df["col"].astype("category"). For full docs on Categorical,
see the Categorical introduction and the
API documentation.
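A short sketch of the single-column case described in the note (the column names here are assumptions for illustration):
import pandas as pd

df = pd.DataFrame({"col": ["a", "b", "a", "c"]})
# Either form converts just this one column to a categorical dtype.
df["cat_col"] = df["col"].astype("category")
df["cat_col2"] = pd.Categorical(df["col"])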
Examples#
In this section, we will review frequently asked questions and examples. The
column names and relevant column values are named to correspond with how this
DataFrame will be pivoted in the answers below.
In [137]: np.random.seed([3, 1415])
In [138]: n = 20
In [139]: cols = np.array(["key", "row", "item", "col"])
In [140]: df = cols + pd.DataFrame(
.....: (np.random.randint(5, size=(n, 4)) // [2, 1, 2, 1]).astype(str)
.....: )
.....:
In [141]: df.columns = cols
In [142]: df = df.join(pd.DataFrame(np.random.rand(n, 2).round(2)).add_prefix("val"))
In [143]: df
Out[143]:
key row item col val0 val1
0 key0 row3 item1 col3 0.81 0.04
1 key1 row2 item1 col2 0.44 0.07
2 key1 row0 item1 col0 0.77 0.01
3 key0 row4 item0 col2 0.15 0.59
4 key1 row0 item2 col1 0.81 0.64
.. ... ... ... ... ... ...
15 key0 row3 item1 col1 0.31 0.23
16 key0 row0 item2 col3 0.86 0.01
17 key0 row4 item0 col3 0.64 0.21
18 key2 row2 item2 col0 0.13 0.45
19 key0 row2 item0 col4 0.37 0.70
[20 rows x 6 columns]
Pivoting with single aggregations#
Suppose we wanted to pivot df such that the col values are columns,
row values are the index, and the mean of val0 are the values. In
particular, the resulting DataFrame should look like:
col col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65
row2 0.13 NaN 0.395 0.500 0.25
row3 NaN 0.310 NaN 0.545 NaN
row4 NaN 0.100 0.395 0.760 0.24
This solution uses pivot_table(). Also note that
aggfunc='mean' is the default. It is included here to be explicit.
In [144]: df.pivot_table(values="val0", index="row", columns="col", aggfunc="mean")
Out[144]:
col col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65
row2 0.13 NaN 0.395 0.500 0.25
row3 NaN 0.310 NaN 0.545 NaN
row4 NaN 0.100 0.395 0.760 0.24
Note that we can also replace the missing values by using the fill_value
parameter.
In [145]: df.pivot_table(
.....: values="val0",
.....: index="row",
.....: columns="col",
.....: aggfunc="mean",
.....: fill_value=0,
.....: )
.....:
Out[145]:
col col0 col1 col2 col3 col4
row
row0 0.77 0.605 0.000 0.860 0.65
row2 0.13 0.000 0.395 0.500 0.25
row3 0.00 0.310 0.000 0.545 0.00
row4 0.00 0.100 0.395 0.760 0.24
Note that we can pass in other aggregation functions as well. For example,
we can also pass in sum.
In [146]: df.pivot_table(
.....: values="val0",
.....: index="row",
.....: columns="col",
.....: aggfunc="sum",
.....: fill_value=0,
.....: )
.....:
Out[146]:
col col0 col1 col2 col3 col4
row
row0 0.77 1.21 0.00 0.86 0.65
row2 0.13 0.00 0.79 0.50 0.50
row3 0.00 0.31 0.00 1.09 0.00
row4 0.00 0.10 0.79 1.52 0.24
Another aggregation we can do is calculate the frequency with which the columns
and rows occur together, a.k.a. a “cross tabulation”. To do this, we can pass
size to the aggfunc parameter.
In [147]: df.pivot_table(index="row", columns="col", fill_value=0, aggfunc="size")
Out[147]:
col col0 col1 col2 col3 col4
row
row0 1 2 0 1 1
row2 1 0 2 1 2
row3 0 1 0 2 0
row4 0 1 2 2 1
Pivoting with multiple aggregations#
We can also perform multiple aggregations. For example, to perform both a
sum and mean, we can pass in a list to the aggfunc argument.
In [148]: df.pivot_table(
.....: values="val0",
.....: index="row",
.....: columns="col",
.....: aggfunc=["mean", "sum"],
.....: )
.....:
Out[148]:
mean sum
col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65 0.77 1.21 NaN 0.86 0.65
row2 0.13 NaN 0.395 0.500 0.25 0.13 NaN 0.79 0.50 0.50
row3 NaN 0.310 NaN 0.545 NaN NaN 0.31 NaN 1.09 NaN
row4 NaN 0.100 0.395 0.760 0.24 NaN 0.10 0.79 1.52 0.24
Note that to aggregate over multiple value columns, we can pass in a list to the
values parameter.
In [149]: df.pivot_table(
.....: values=["val0", "val1"],
.....: index="row",
.....: columns="col",
.....: aggfunc=["mean"],
.....: )
.....:
Out[149]:
mean
val0 val1
col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65 0.01 0.745 NaN 0.010 0.02
row2 0.13 NaN 0.395 0.500 0.25 0.45 NaN 0.34 0.440 0.79
row3 NaN 0.310 NaN 0.545 NaN NaN 0.230 NaN 0.075 NaN
row4 NaN 0.100 0.395 0.760 0.24 NaN 0.070 0.42 0.300 0.46
Note that to subdivide over multiple columns, we can pass in a list to the
columns parameter.
In [150]: df.pivot_table(
.....: values=["val0"],
.....: index="row",
.....: columns=["item", "col"],
.....: aggfunc=["mean"],
.....: )
.....:
Out[150]:
mean
val0
item item0 item1 item2
col col2 col3 col4 col0 col1 col2 col3 col4 col0 col1 col3 col4
row
row0 NaN NaN NaN 0.77 NaN NaN NaN NaN NaN 0.605 0.86 0.65
row2 0.35 NaN 0.37 NaN NaN 0.44 NaN NaN 0.13 NaN 0.50 0.13
row3 NaN NaN NaN NaN 0.31 NaN 0.81 NaN NaN NaN 0.28 NaN
row4 0.15 0.64 NaN NaN 0.10 0.64 0.88 0.24 NaN NaN NaN NaN
Exploding a list-like column#
New in version 0.25.0.
Sometimes the values in a column are list-like.
In [151]: keys = ["panda1", "panda2", "panda3"]
In [152]: values = [["eats", "shoots"], ["shoots", "leaves"], ["eats", "leaves"]]
In [153]: df = pd.DataFrame({"keys": keys, "values": values})
In [154]: df
Out[154]:
keys values
0 panda1 [eats, shoots]
1 panda2 [shoots, leaves]
2 panda3 [eats, leaves]
We can ‘explode’ the values column, transforming each list-like to a separate row, by using explode(). This will replicate the index values from the original row:
In [155]: df["values"].explode()
Out[155]:
0 eats
0 shoots
1 shoots
1 leaves
2 eats
2 leaves
Name: values, dtype: object
You can also explode the column in the DataFrame.
In [156]: df.explode("values")
Out[156]:
keys values
0 panda1 eats
0 panda1 shoots
1 panda2 shoots
1 panda2 leaves
2 panda3 eats
2 panda3 leaves
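If the replicated index values are not wanted, one common follow-up (a sketch, not part of the example above) is to reset the index after exploding:
import pandas as pd

df = pd.DataFrame(
    {"keys": ["panda1", "panda2"], "values": [["eats", "shoots"], ["shoots", "leaves"]]}
)
# reset_index(drop=True) yields a fresh 0..n-1 index after the explode.
df.explode("values").reset_index(drop=True)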
Series.explode() will replace empty lists with np.nan and preserve scalar entries. The dtype of the resulting Series is always object.
In [157]: s = pd.Series([[1, 2, 3], "foo", [], ["a", "b"]])
In [158]: s
Out[158]:
0 [1, 2, 3]
1 foo
2 []
3 [a, b]
dtype: object
In [159]: s.explode()
Out[159]:
0 1
0 2
0 3
1 foo
2 NaN
3 a
3 b
dtype: object
Here is a typical use case. You have comma-separated strings in a column and want to expand this.
In [160]: df = pd.DataFrame([{"var1": "a,b,c", "var2": 1}, {"var1": "d,e,f", "var2": 2}])
In [161]: df
Out[161]:
var1 var2
0 a,b,c 1
1 d,e,f 2
Creating a long-form DataFrame is now straightforward using explode and chained operations:
In [162]: df.assign(var1=df.var1.str.split(",")).explode("var1")
Out[162]:
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
| 610
| 802
|
How to remove rows with more than one value in a cell in Pandas
I have a data frame that looks like below:
receiver_id sender_id
a,b,d c
a,d b
b a
a b
I would like to remove rows containing more than one receiver_id. So the final data frame should only have row 3 and 4. How should I go about doing that?
Desired output:
receiver_id sender_id
b a
a b
|
65,896,011
|
Remove the characters after 64 characters of column names in pandas
|
I have seen so many ways to remove special characters from column names, and those worked for my example. However, now, I want to remove all extra characters in all columns that are longer than 64 characters in length. Is there an easier way I can do it?
For example:
>> df.columns
Index['hi', 'happy_tree_family_is_most_amazing_awesome_fantastic_series_even_in_2021_01_25_and_I_want_to_watch_it_again_ahhahahahahaha']
after work:
>> df.columns ## 2nd column name only contains 64 character in length ##
Index['hi', 'happy_tree_family_is_most_amazing_awesome_fantastic_series_even_']
A million thanks!
| 65,896,031
| 2021-01-26T04:42:58.493000
| 2
| 1
| 0
| 85
|
python|pandas
|
Try with
df.columns = df.columns.str[:64]
| 2021-01-26T04:46:28.380000
| 4
|
https://pandas.pydata.org/docs/user_guide/io.html
|
IO tools (text, CSV, HDF5, …)#
The pandas I/O API is a set of top level reader functions accessed like
pandas.read_csv() that generally return a pandas object. The corresponding
writer functions are object methods that are accessed like
DataFrame.to_csv(). Below is a table containing available readers and
writers.
Format Type | Data Description | Reader | Writer
text | CSV | read_csv | to_csv
text | Fixed-Width Text File | read_fwf |
text | JSON | read_json | to_json
text | HTML | read_html | to_html
text | LaTeX | | Styler.to_latex
text | XML | read_xml | to_xml
text | Local clipboard | read_clipboard | to_clipboard
binary | MS Excel | read_excel | to_excel
binary | OpenDocument | read_excel |
binary | HDF5 Format | read_hdf | to_hdf
binary | Feather Format | read_feather | to_feather
binary | Parquet Format | read_parquet | to_parquet
binary | ORC Format | read_orc | to_orc
binary | Stata | read_stata | to_stata
binary | SAS | read_sas |
binary | SPSS | read_spss |
binary | Python Pickle Format | read_pickle | to_pickle
SQL | SQL | read_sql | to_sql
SQL | Google BigQuery | read_gbq | to_gbq
Here is an informal performance comparison for some of these IO methods.
Try with
df.columns = df.columns.str[:64]
Note
For examples that use the StringIO class, make sure you import it
with from io import StringIO for Python 3.
CSV & text files#
The workhorse function for reading text files (a.k.a. flat files) is
read_csv(). See the cookbook for some advanced strategies.
Parsing options#
read_csv() accepts the following common arguments:
Basic#
filepath_or_buffer : various
Either a path to a file (a str, pathlib.Path,
or py._path.local.LocalPath), URL (including http, ftp, and S3
locations), or any object with a read() method (such as an open file or
StringIO).
sep : str, defaults to ',' for read_csv(), \t for read_table()
Delimiter to use. If sep is None, the C engine cannot automatically detect
the separator, but the Python parsing engine can, meaning the latter will be
used and automatically detect the separator by Python’s builtin sniffer tool,
csv.Sniffer. In addition, separators longer than 1 character and
different from '\s+' will be interpreted as regular expressions and
will also force the use of the Python parsing engine. Note that regex
delimiters are prone to ignoring quoted data. Regex example: '\\r\\t'. A short
sketch of this behaviour appears at the end of this subsection.
delimiter : str, default None
Alternative argument name for sep.
delim_whitespace : boolean, default False
Specifies whether or not whitespace (e.g. ' ' or '\t')
will be used as the delimiter. Equivalent to setting sep='\s+'.
If this option is set to True, nothing should be passed in for the
delimiter parameter.
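A minimal sketch of the regex-separator behaviour described under sep above (the data is made up for illustration):
import pandas as pd
from io import StringIO

data = "a||b||c\n1||2||3"
# A separator longer than one character is treated as a regular expression;
# passing engine="python" explicitly avoids the fallback warning.
pd.read_csv(StringIO(data), sep=r"\|\|", engine="python")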
Column and index locations and names#
header : int or list of ints, default 'infer'
Row number(s) to use as the column names, and the start of the
data. Default behavior is to infer the column names: if no names are
passed the behavior is identical to header=0 and column names
are inferred from the first line of the file, if column names are
passed explicitly then the behavior is identical to
header=None. Explicitly pass header=0 to be able to replace
existing names.
The header can be a list of ints that specify row locations
for a MultiIndex on the columns e.g. [0,1,3]. Intervening rows
that are not specified will be skipped (e.g. 2 in this example is
skipped). Note that this parameter ignores commented lines and empty
lines if skip_blank_lines=True, so header=0 denotes the first
line of data rather than the first line of the file.
names : array-like, default None
List of column names to use. If file contains no header row, then you should
explicitly pass header=None. Duplicates in this list are not allowed.
index_col : int, str, sequence of int / str, or False, optional, default None
Column(s) to use as the row labels of the DataFrame, either given as
string name or column index. If a sequence of int / str is given, a
MultiIndex is used.
Note
index_col=False can be used to force pandas to not use the first
column as the index, e.g. when you have a malformed file with delimiters at
the end of each line.
The default value of None instructs pandas to guess. If the number of
fields in the column header row is equal to the number of fields in the body
of the data file, then a default index is used. If it is larger, then
the first columns are used as index so that the remaining number of fields in
the body are equal to the number of fields in the header.
The first row after the header is used to determine the number of columns,
which will go into the index. If the subsequent rows contain less columns
than the first row, they are filled with NaN.
This can be avoided through usecols. This ensures that the columns are
taken as is and the trailing data are ignored.
usecols : list-like or callable, default None
Return a subset of the columns. If list-like, all elements must either
be positional (i.e. integer indices into the document columns) or strings
that correspond to column names provided either by the user in names or
inferred from the document header row(s). If names are given, the document
header row(s) are not taken into account. For example, a valid list-like
usecols parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].
Element order is ignored, so usecols=[0, 1] is the same as [1, 0]. To
instantiate a DataFrame from data with element order preserved use
pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']] for columns
in ['foo', 'bar'] order or
pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']] for
['bar', 'foo'] order.
If callable, the callable function will be evaluated against the column names,
returning names where the callable function evaluates to True:
In [1]: import pandas as pd
In [2]: from io import StringIO
In [3]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
In [4]: pd.read_csv(StringIO(data))
Out[4]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [5]: pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ["COL1", "COL3"])
Out[5]:
col1 col3
0 a 1
1 a 2
2 c 3
Using this parameter results in much faster parsing time and lower memory usage
when using the c engine. The Python engine loads the data first before deciding
which columns to drop.
squeeze : boolean, default False
If the parsed data only contains one column then return a Series.
Deprecated since version 1.4.0: Append .squeeze("columns") to the call to read_csv to squeeze
the data.
prefix : str, default None
Prefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, …
Deprecated since version 1.4.0: Use a list comprehension on the DataFrame’s columns after calling read_csv.
In [6]: data = "col1,col2,col3\na,b,1"
In [7]: df = pd.read_csv(StringIO(data))
In [8]: df.columns = [f"pre_{col}" for col in df.columns]
In [9]: df
Out[9]:
pre_col1 pre_col2 pre_col3
0 a b 1
mangle_dupe_cols : boolean, default True
Duplicate columns will be specified as ‘X’, ‘X.1’…’X.N’, rather than ‘X’…’X’.
Passing in False will cause data to be overwritten if there are duplicate
names in the columns.
Deprecated since version 1.5.0: The argument was never implemented, and a new argument where the
renaming pattern can be specified will be added instead.
General parsing configuration#
dtype : Type name or dict of column -> type, default None
Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32, 'c': 'Int64'}
Use str or object together with suitable na_values settings to preserve
and not interpret dtype. If converters are specified, they will be applied INSTEAD
of dtype conversion.
New in version 1.5.0: Support for defaultdict was added. Specify a defaultdict as input where
the default determines the dtype of the columns which are not explicitly
listed.
engine : {'c', 'python', 'pyarrow'}
Parser engine to use. The C and pyarrow engines are faster, while the python engine
is currently more feature-complete. Multithreading is currently only supported by
the pyarrow engine.
New in version 1.4.0: The “pyarrow” engine was added as an experimental engine, and some features
are unsupported, or may not work correctly, with this engine.
converters : dict, default None
Dict of functions for converting values in certain columns. Keys can either be
integers or column labels.
true_values : list, default None
Values to consider as True.
false_values : list, default None
Values to consider as False.
skipinitialspace : boolean, default False
Skip spaces after delimiter.
skiprows : list-like or integer, default None
Line numbers to skip (0-indexed) or number of lines to skip (int) at the start
of the file.
If callable, the callable function will be evaluated against the row
indices, returning True if the row should be skipped and False otherwise:
In [10]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
In [11]: pd.read_csv(StringIO(data))
Out[11]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [12]: pd.read_csv(StringIO(data), skiprows=lambda x: x % 2 != 0)
Out[12]:
col1 col2 col3
0 a b 2
skipfooter : int, default 0
Number of lines at bottom of file to skip (unsupported with engine=’c’).
nrows : int, default None
Number of rows of file to read. Useful for reading pieces of large files.
low_memory : boolean, default True
Internally process the file in chunks, resulting in lower memory use
while parsing, but possibly mixed type inference. To ensure no mixed
types either set False, or specify the type with the dtype parameter.
Note that the entire file is read into a single DataFrame regardless,
use the chunksize or iterator parameter to return the data in chunks.
(Only valid with C parser)
memory_map : boolean, default False
If a filepath is provided for filepath_or_buffer, map the file object
directly onto memory and access the data directly from there. Using this
option can improve performance because there is no longer any I/O overhead.
NA and missing data handling#
na_values : scalar, str, list-like, or dict, default None
Additional strings to recognize as NA/NaN. If dict passed, specific per-column
NA values. See na values const below
for a list of the values interpreted as NaN by default.
keep_default_na : boolean, default True
Whether or not to include the default NaN values when parsing the data.
Depending on whether na_values is passed in, the behavior is as follows:
If keep_default_na is True, and na_values are specified, na_values
is appended to the default NaN values used for parsing.
If keep_default_na is True, and na_values are not specified, only
the default NaN values are used for parsing.
If keep_default_na is False, and na_values are specified, only
the NaN values specified in na_values are used for parsing.
If keep_default_na is False, and na_values are not specified, no
strings will be parsed as NaN.
Note that if na_filter is passed in as False, the keep_default_na and
na_values parameters will be ignored.
na_filter : boolean, default True
Detect missing value markers (empty strings and the value of na_values). In
data without any NAs, passing na_filter=False can improve the performance
of reading a large file.
verbose : boolean, default False
Indicate number of NA values placed in non-numeric columns.
skip_blank_lines : boolean, default True
If True, skip over blank lines rather than interpreting as NaN values.
Datetime handling#
parse_dates : boolean or list of ints or names or list of lists or dict, default False.
If True -> try parsing the index.
If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date
column.
If [[1, 3]] -> combine columns 1 and 3 and parse as a single date
column.
If {'foo': [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’.
Note
A fast-path exists for iso8601-formatted dates.
infer_datetime_format : boolean, default False
If True and parse_dates is enabled for a column, attempt to infer the
datetime format to speed up the processing.
keep_date_col : boolean, default False
If True and parse_dates specifies combining multiple columns then keep the
original columns.
date_parser : function, default None
Function to use for converting a sequence of string columns to an array of
datetime instances. The default uses dateutil.parser.parser to do the
conversion. pandas will try to call date_parser in three different ways,
advancing to the next if an exception occurs: 1) Pass one or more arrays (as
defined by parse_dates) as arguments; 2) concatenate (row-wise) the string
values from the columns defined by parse_dates into a single array and pass
that; and 3) call date_parser once for each row using one or more strings
(corresponding to the columns defined by parse_dates) as arguments.
dayfirst : boolean, default False
DD/MM format dates, international and European format.
cache_dates : boolean, default True
If True, use a cache of unique, converted dates to apply the datetime
conversion. May produce significant speed-up when parsing duplicate
date strings, especially ones with timezone offsets.
New in version 0.25.0.
Iteration#
iterator : boolean, default False
Return TextFileReader object for iteration or getting chunks with
get_chunk().
chunksize : int, default None
Return TextFileReader object for iteration. See iterating and chunking below.
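A small sketch of chunked reading (the data and chunk size are arbitrary):
import pandas as pd
from io import StringIO

data = "a,b\n1,2\n3,4\n5,6\n7,8"
# With chunksize, read_csv returns an iterator of DataFrames rather than a single frame.
for chunk in pd.read_csv(StringIO(data), chunksize=2):
    print(chunk.shape)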
Quoting, compression, and file format#
compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', 'zstd', None, dict}, default 'infer'
For on-the-fly decompression of on-disk data. If ‘infer’, then use gzip,
bz2, zip, xz, or zstandard if filepath_or_buffer is path-like ending in ‘.gz’, ‘.bz2’,
‘.zip’, ‘.xz’, ‘.zst’, respectively, and no decompression otherwise. If using ‘zip’,
the ZIP file must contain only one data file to be read in.
Set to None for no decompression. Can also be a dict with key 'method'
set to one of {'zip', 'gzip', 'bz2', 'zstd'} and other key-value pairs are
forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, or zstandard.ZstdDecompressor.
As an example, the following could be passed for faster compression and to
create a reproducible gzip archive:
compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}.
Changed in version 1.1.0: dict option extended to support gzip and bz2.
Changed in version 1.2.0: Previous versions forwarded dict entries for ‘gzip’ to gzip.open.
thousands : str, default None
Thousands separator.
decimal : str, default '.'
Character to recognize as decimal point. E.g. use ',' for European data.
float_precision : string, default None
Specifies which converter the C engine should use for floating-point values.
The options are None for the ordinary converter, high for the
high-precision converter, and round_trip for the round-trip converter.
lineterminator : str (length 1), default None
Character to break file into lines. Only valid with C parser.
quotechar : str (length 1)
The character used to denote the start and end of a quoted item. Quoted items
can include the delimiter and it will be ignored.
quoting : int or csv.QUOTE_* instance, default 0
Control field quoting behavior per csv.QUOTE_* constants. Use one of
QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or
QUOTE_NONE (3).
doublequote : boolean, default True
When quotechar is specified and quoting is not QUOTE_NONE,
indicate whether or not to interpret two consecutive quotechar elements
inside a field as a single quotechar element.
escapechar : str (length 1), default None
One-character string used to escape delimiter when quoting is QUOTE_NONE.
comment : str, default None
Indicates remainder of line should not be parsed. If found at the beginning of
a line, the line will be ignored altogether. This parameter must be a single
character. Like empty lines (as long as skip_blank_lines=True), fully
commented lines are ignored by the parameter header but not by skiprows.
For example, if comment='#', parsing ‘#empty\na,b,c\n1,2,3’ with
header=0 will result in ‘a,b,c’ being treated as the header.
encoding : str, default None
Encoding to use for UTF when reading/writing (e.g. 'utf-8'). List of
Python standard encodings.
dialect : str or csv.Dialect instance, default None
If provided, this parameter will override values (default or not) for the
following parameters: delimiter, doublequote, escapechar,
skipinitialspace, quotechar, and quoting. If it is necessary to
override values, a ParserWarning will be issued. See csv.Dialect
documentation for more details.
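A minimal sketch using one of the dialects the csv module registers by default (here 'excel-tab', which supplies a tab delimiter):
import pandas as pd
from io import StringIO

data = "a\tb\n1\t2"
# The dialect's delimiter, quoting, etc. replace the read_csv defaults;
# pandas may emit a ParserWarning noting the override.
pd.read_csv(StringIO(data), dialect="excel-tab")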
Error handling#
error_bad_lines : boolean, optional, default None
Lines with too many fields (e.g. a csv line with too many commas) will by
default cause an exception to be raised, and no DataFrame will be
returned. If False, then these “bad lines” will be dropped from the
DataFrame that is returned. See bad lines
below.
Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon
encountering a bad line.
warn_bad_lines : boolean, optional, default None
If error_bad_lines is False, and warn_bad_lines is True, a warning for
each “bad line” will be output.
Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon
encountering a bad line.
on_bad_lines : (‘error’, ‘warn’, ‘skip’), default ‘error’
Specifies what to do upon encountering a bad line (a line with too many fields).
Allowed values are:
‘error’, raise a ParserError when a bad line is encountered.
‘warn’, print a warning when a bad line is encountered and skip that line.
‘skip’, skip bad lines without raising or warning when they are encountered.
New in version 1.3.0.
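A short sketch of the 'skip' option (the bad line here is fabricated):
import pandas as pd
from io import StringIO

data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"
# The row with four fields is dropped instead of raising a ParserError.
pd.read_csv(StringIO(data), on_bad_lines="skip")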
Specifying column data types#
You can indicate the data type for the whole DataFrame or individual
columns:
In [13]: import numpy as np
In [14]: data = "a,b,c,d\n1,2,3,4\n5,6,7,8\n9,10,11"
In [15]: print(data)
a,b,c,d
1,2,3,4
5,6,7,8
9,10,11
In [16]: df = pd.read_csv(StringIO(data), dtype=object)
In [17]: df
Out[17]:
a b c d
0 1 2 3 4
1 5 6 7 8
2 9 10 11 NaN
In [18]: df["a"][0]
Out[18]: '1'
In [19]: df = pd.read_csv(StringIO(data), dtype={"b": object, "c": np.float64, "d": "Int64"})
In [20]: df.dtypes
Out[20]:
a int64
b object
c float64
d Int64
dtype: object
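As noted in the parsing options above, dtype also accepts a defaultdict on pandas 1.5.0 and later; a minimal sketch (the chosen dtypes are arbitrary):
from collections import defaultdict
import pandas as pd
from io import StringIO

data = "a,b,c,d\n1,2,3,4\n5,6,7,8\n9,10,11"
# Columns not listed explicitly ("b", "c", "d") fall back to the default dtype.
dtypes = defaultdict(lambda: "float64", a="Int64")
pd.read_csv(StringIO(data), dtype=dtypes).dtypes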
Fortunately, pandas offers more than one way to ensure that your column(s)
contain only one dtype. If you’re unfamiliar with these concepts, you can
see here to learn more about dtypes, and
here to learn more about object conversion in
pandas.
For instance, you can use the converters argument
of read_csv():
In [21]: data = "col_1\n1\n2\n'A'\n4.22"
In [22]: df = pd.read_csv(StringIO(data), converters={"col_1": str})
In [23]: df
Out[23]:
col_1
0 1
1 2
2 'A'
3 4.22
In [24]: df["col_1"].apply(type).value_counts()
Out[24]:
<class 'str'> 4
Name: col_1, dtype: int64
Or you can use the to_numeric() function to coerce the
dtypes after reading in the data,
In [25]: df2 = pd.read_csv(StringIO(data))
In [26]: df2["col_1"] = pd.to_numeric(df2["col_1"], errors="coerce")
In [27]: df2
Out[27]:
col_1
0 1.00
1 2.00
2 NaN
3 4.22
In [28]: df2["col_1"].apply(type).value_counts()
Out[28]:
<class 'float'> 4
Name: col_1, dtype: int64
which will convert all valid parsing to floats, leaving the invalid parsing
as NaN.
Ultimately, how you deal with reading in columns containing mixed dtypes
depends on your specific needs. In the case above, if you wanted to NaN out
the data anomalies, then to_numeric() is probably your best option.
However, if you wanted for all the data to be coerced, no matter the type, then
using the converters argument of read_csv() would certainly be
worth trying.
Note
In some cases, reading in abnormal data with columns containing mixed dtypes
will result in an inconsistent dataset. If you rely on pandas to infer the
dtypes of your columns, the parsing engine will go and infer the dtypes for
different chunks of the data, rather than the whole dataset at once. Consequently,
you can end up with column(s) with mixed dtypes. For example,
In [29]: col_1 = list(range(500000)) + ["a", "b"] + list(range(500000))
In [30]: df = pd.DataFrame({"col_1": col_1})
In [31]: df.to_csv("foo.csv")
In [32]: mixed_df = pd.read_csv("foo.csv")
In [33]: mixed_df["col_1"].apply(type).value_counts()
Out[33]:
<class 'int'> 737858
<class 'str'> 262144
Name: col_1, dtype: int64
In [34]: mixed_df["col_1"].dtype
Out[34]: dtype('O')
will result with mixed_df containing an int dtype for certain chunks
of the column, and str for others due to the mixed dtypes from the
data that was read in. It is important to note that the overall column will be
marked with a dtype of object, which is used for columns with mixed dtypes.
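One way to avoid this, following the low_memory notes earlier, is to pin the dtype up front; a sketch reusing the "foo.csv" file written above:
# Reading col_1 as strings sidesteps the per-chunk type inference.
mixed_df = pd.read_csv("foo.csv", dtype={"col_1": str})
mixed_df["col_1"].apply(type).value_counts()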
Specifying categorical dtype#
Categorical columns can be parsed directly by specifying dtype='category' or
dtype=CategoricalDtype(categories, ordered).
In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
In [36]: pd.read_csv(StringIO(data))
Out[36]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [37]: pd.read_csv(StringIO(data)).dtypes
Out[37]:
col1 object
col2 object
col3 int64
dtype: object
In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
Out[38]:
col1 category
col2 category
col3 category
dtype: object
Individual columns can be parsed as a Categorical using a dict
specification:
In [39]: pd.read_csv(StringIO(data), dtype={"col1": "category"}).dtypes
Out[39]:
col1 category
col2 object
col3 int64
dtype: object
Specifying dtype='category' will result in an unordered Categorical
whose categories are the unique values observed in the data. For more
control on the categories and order, create a
CategoricalDtype ahead of time, and pass that for
that column’s dtype.
In [40]: from pandas.api.types import CategoricalDtype
In [41]: dtype = CategoricalDtype(["d", "c", "b", "a"], ordered=True)
In [42]: pd.read_csv(StringIO(data), dtype={"col1": dtype}).dtypes
Out[42]:
col1 category
col2 object
col3 int64
dtype: object
When using dtype=CategoricalDtype, “unexpected” values outside of
dtype.categories are treated as missing values.
In [43]: dtype = CategoricalDtype(["a", "b", "d"]) # No 'c'
In [44]: pd.read_csv(StringIO(data), dtype={"col1": dtype}).col1
Out[44]:
0 a
1 a
2 NaN
Name: col1, dtype: category
Categories (3, object): ['a', 'b', 'd']
This matches the behavior of Categorical.set_categories().
Note
With dtype='category', the resulting categories will always be parsed
as strings (object dtype). If the categories are numeric they can be
converted using the to_numeric() function, or as appropriate, another
converter such as to_datetime().
When dtype is a CategoricalDtype with homogeneous categories (
all numeric, all datetimes, etc.), the conversion is done automatically.
In [45]: df = pd.read_csv(StringIO(data), dtype="category")
In [46]: df.dtypes
Out[46]:
col1 category
col2 category
col3 category
dtype: object
In [47]: df["col3"]
Out[47]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, object): ['1', '2', '3']
In [48]: new_categories = pd.to_numeric(df["col3"].cat.categories)
In [49]: df["col3"] = df["col3"].cat.rename_categories(new_categories)
In [50]: df["col3"]
Out[50]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, int64): [1, 2, 3]
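By contrast, a sketch of the automatic conversion mentioned in the note above: with numeric categories declared ahead of time, no manual renaming should be needed (this mirrors the documented behaviour rather than output captured from a session):
import pandas as pd
from io import StringIO
from pandas.api.types import CategoricalDtype

data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
# All-numeric categories, so the parsed values are converted to match them.
dtype = CategoricalDtype([1, 2, 3])
pd.read_csv(StringIO(data), dtype={"col3": dtype})["col3"]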
Naming and using columns#
Handling column names#
A file may or may not have a header row. pandas assumes the first row should be
used as the column names:
In [51]: data = "a,b,c\n1,2,3\n4,5,6\n7,8,9"
In [52]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [53]: pd.read_csv(StringIO(data))
Out[53]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
By specifying the names argument in conjunction with header you can
indicate other names to use and whether or not to throw away the header row (if
any):
In [54]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [55]: pd.read_csv(StringIO(data), names=["foo", "bar", "baz"], header=0)
Out[55]:
foo bar baz
0 1 2 3
1 4 5 6
2 7 8 9
In [56]: pd.read_csv(StringIO(data), names=["foo", "bar", "baz"], header=None)
Out[56]:
foo bar baz
0 a b c
1 1 2 3
2 4 5 6
3 7 8 9
If the header is in a row other than the first, pass the row number to
header. This will skip the preceding rows:
In [57]: data = "skip this skip it\na,b,c\n1,2,3\n4,5,6\n7,8,9"
In [58]: pd.read_csv(StringIO(data), header=1)
Out[58]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
Note
Default behavior is to infer the column names: if no names are
passed the behavior is identical to header=0 and column names
are inferred from the first non-blank line of the file, if column
names are passed explicitly then the behavior is identical to
header=None.
Duplicate names parsing#
Deprecated since version 1.5.0: mangle_dupe_cols was never implemented, and a new argument where the
renaming pattern can be specified will be added instead.
If the file or header contains duplicate names, pandas will by default
distinguish between them so as to prevent overwriting data:
In [59]: data = "a,b,a\n0,1,2\n3,4,5"
In [60]: pd.read_csv(StringIO(data))
Out[60]:
a b a.1
0 0 1 2
1 3 4 5
There is no more duplicate data because mangle_dupe_cols=True by default,
which modifies a series of duplicate columns ‘X’, …, ‘X’ to become
‘X’, ‘X.1’, …, ‘X.N’.
Filtering columns (usecols)#
The usecols argument allows you to select any subset of the columns in a
file, either using the column names, position numbers or a callable:
In [61]: data = "a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz"
In [62]: pd.read_csv(StringIO(data))
Out[62]:
a b c d
0 1 2 3 foo
1 4 5 6 bar
2 7 8 9 baz
In [63]: pd.read_csv(StringIO(data), usecols=["b", "d"])
Out[63]:
b d
0 2 foo
1 5 bar
2 8 baz
In [64]: pd.read_csv(StringIO(data), usecols=[0, 2, 3])
Out[64]:
a c d
0 1 3 foo
1 4 6 bar
2 7 9 baz
In [65]: pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ["A", "C"])
Out[65]:
a c
0 1 3
1 4 6
2 7 9
The usecols argument can also be used to specify which columns not to
use in the final result:
In [66]: pd.read_csv(StringIO(data), usecols=lambda x: x not in ["a", "c"])
Out[66]:
b d
0 2 foo
1 5 bar
2 8 baz
In this case, the callable is specifying that we exclude the “a” and “c”
columns from the output.
Comments and empty lines#
Ignoring line comments and empty lines#
If the comment parameter is specified, then completely commented lines will
be ignored. By default, completely blank lines will be ignored as well.
In [67]: data = "\na,b,c\n \n# commented line\n1,2,3\n\n4,5,6"
In [68]: print(data)
a,b,c
# commented line
1,2,3
4,5,6
In [69]: pd.read_csv(StringIO(data), comment="#")
Out[69]:
a b c
0 1 2 3
1 4 5 6
If skip_blank_lines=False, then read_csv will not ignore blank lines:
In [70]: data = "a,b,c\n\n1,2,3\n\n\n4,5,6"
In [71]: pd.read_csv(StringIO(data), skip_blank_lines=False)
Out[71]:
a b c
0 NaN NaN NaN
1 1.0 2.0 3.0
2 NaN NaN NaN
3 NaN NaN NaN
4 4.0 5.0 6.0
Warning
The presence of ignored lines might create ambiguities involving line numbers;
the parameter header uses row numbers (ignoring commented/empty
lines), while skiprows uses line numbers (including commented/empty lines):
In [72]: data = "#comment\na,b,c\nA,B,C\n1,2,3"
In [73]: pd.read_csv(StringIO(data), comment="#", header=1)
Out[73]:
A B C
0 1 2 3
In [74]: data = "A,B,C\n#comment\na,b,c\n1,2,3"
In [75]: pd.read_csv(StringIO(data), comment="#", skiprows=2)
Out[75]:
a b c
0 1 2 3
If both header and skiprows are specified, header will be
relative to the end of skiprows. For example:
In [76]: data = (
....: "# empty\n"
....: "# second empty line\n"
....: "# third emptyline\n"
....: "X,Y,Z\n"
....: "1,2,3\n"
....: "A,B,C\n"
....: "1,2.,4.\n"
....: "5.,NaN,10.0\n"
....: )
....:
In [77]: print(data)
# empty
# second empty line
# third emptyline
X,Y,Z
1,2,3
A,B,C
1,2.,4.
5.,NaN,10.0
In [78]: pd.read_csv(StringIO(data), comment="#", skiprows=4, header=1)
Out[78]:
A B C
0 1.0 2.0 4.0
1 5.0 NaN 10.0
Comments#
Sometimes comments or meta data may be included in a file:
In [79]: print(open("tmp.csv").read())
ID,level,category
Patient1,123000,x # really unpleasant
Patient2,23000,y # wouldn't take his medicine
Patient3,1234018,z # awesome
By default, the parser includes the comments in the output:
In [80]: df = pd.read_csv("tmp.csv")
In [81]: df
Out[81]:
ID level category
0 Patient1 123000 x # really unpleasant
1 Patient2 23000 y # wouldn't take his medicine
2 Patient3 1234018 z # awesome
We can suppress the comments using the comment keyword:
In [82]: df = pd.read_csv("tmp.csv", comment="#")
In [83]: df
Out[83]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
Dealing with Unicode data#
The encoding argument should be used for encoded unicode data, which will
result in byte strings being decoded to unicode in the result:
In [84]: from io import BytesIO
In [85]: data = b"word,length\n" b"Tr\xc3\xa4umen,7\n" b"Gr\xc3\xbc\xc3\x9fe,5"
In [86]: data = data.decode("utf8").encode("latin-1")
In [87]: df = pd.read_csv(BytesIO(data), encoding="latin-1")
In [88]: df
Out[88]:
word length
0 Träumen 7
1 Grüße 5
In [89]: df["word"][1]
Out[89]: 'Grüße'
Some formats which encode all characters as multiple bytes, like UTF-16, won’t
parse correctly at all without specifying the encoding. Full list of Python
standard encodings.
Index columns and trailing delimiters#
If a file has one more column of data than the number of column names, the
first column will be used as the DataFrame’s row names:
In [90]: data = "a,b,c\n4,apple,bat,5.7\n8,orange,cow,10"
In [91]: pd.read_csv(StringIO(data))
Out[91]:
a b c
4 apple bat 5.7
8 orange cow 10.0
In [92]: data = "index,a,b,c\n4,apple,bat,5.7\n8,orange,cow,10"
In [93]: pd.read_csv(StringIO(data), index_col=0)
Out[93]:
a b c
index
4 apple bat 5.7
8 orange cow 10.0
Ordinarily, you can achieve this behavior using the index_col option.
There are some exception cases when a file has been prepared with delimiters at
the end of each data line, confusing the parser. To explicitly disable the
index column inference and discard the last column, pass index_col=False:
In [94]: data = "a,b,c\n4,apple,bat,\n8,orange,cow,"
In [95]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,
In [96]: pd.read_csv(StringIO(data))
Out[96]:
a b c
4 apple bat NaN
8 orange cow NaN
In [97]: pd.read_csv(StringIO(data), index_col=False)
Out[97]:
a b c
0 4 apple bat
1 8 orange cow
If a subset of data is being parsed using the usecols option, the
index_col specification is based on that subset, not the original data.
In [98]: data = "a,b,c\n4,apple,bat,\n8,orange,cow,"
In [99]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,
In [100]: pd.read_csv(StringIO(data), usecols=["b", "c"])
Out[100]:
b c
4 bat NaN
8 cow NaN
In [101]: pd.read_csv(StringIO(data), usecols=["b", "c"], index_col=0)
Out[101]:
b c
4 bat NaN
8 cow NaN
Date Handling#
Specifying date columns#
To better facilitate working with datetime data, read_csv()
uses the keyword arguments parse_dates and date_parser
to allow users to specify a variety of columns and date/time formats to turn the
input text data into datetime objects.
The simplest case is to just pass in parse_dates=True:
In [102]: with open("foo.csv", mode="w") as f:
.....: f.write("date,A,B,C\n20090101,a,1,2\n20090102,b,3,4\n20090103,c,4,5")
.....:
# Use a column as an index, and parse it as dates.
In [103]: df = pd.read_csv("foo.csv", index_col=0, parse_dates=True)
In [104]: df
Out[104]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
# These are Python datetime objects
In [105]: df.index
Out[105]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype='datetime64[ns]', name='date', freq=None)
It is often the case that we may want to store date and time data separately,
or store various date fields separately. The parse_dates keyword can be
used to specify a combination of columns to parse the dates and/or times from.
You can specify a list of column lists to parse_dates, the resulting date
columns will be prepended to the output (so as to not affect the existing column
order) and the new column names will be the concatenation of the component
column names:
In [106]: data = (
.....: "KORD,19990127, 19:00:00, 18:56:00, 0.8100\n"
.....: "KORD,19990127, 20:00:00, 19:56:00, 0.0100\n"
.....: "KORD,19990127, 21:00:00, 20:56:00, -0.5900\n"
.....: "KORD,19990127, 21:00:00, 21:18:00, -0.9900\n"
.....: "KORD,19990127, 22:00:00, 21:56:00, -0.5900\n"
.....: "KORD,19990127, 23:00:00, 22:56:00, -0.5900"
.....: )
.....:
In [107]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [108]: df = pd.read_csv("tmp.csv", header=None, parse_dates=[[1, 2], [1, 3]])
In [109]: df
Out[109]:
1_2 1_3 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
By default the parser removes the component date columns, but you can choose
to retain them via the keep_date_col keyword:
In [110]: df = pd.read_csv(
.....: "tmp.csv", header=None, parse_dates=[[1, 2], [1, 3]], keep_date_col=True
.....: )
.....:
In [111]: df
Out[111]:
1_2 1_3 0 ... 2 3 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD ... 19:00:00 18:56:00 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD ... 20:00:00 19:56:00 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD ... 21:00:00 20:56:00 -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD ... 21:00:00 21:18:00 -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD ... 22:00:00 21:56:00 -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD ... 23:00:00 22:56:00 -0.59
[6 rows x 7 columns]
Note that if you wish to combine multiple columns into a single date column, a
nested list must be used. In other words, parse_dates=[1, 2] indicates that
the second and third columns should each be parsed as separate date columns
while parse_dates=[[1, 2]] means the two columns should be parsed into a
single column.
You can also use a dict to specify custom names for the combined columns:
In [112]: date_spec = {"nominal": [1, 2], "actual": [1, 3]}
In [113]: df = pd.read_csv("tmp.csv", header=None, parse_dates=date_spec)
In [114]: df
Out[114]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
It is important to remember that if multiple text columns are to be parsed into
a single date column, then a new column is prepended to the data. The index_col
specification is based off of this new set of columns rather than the original
data columns:
In [115]: date_spec = {"nominal": [1, 2], "actual": [1, 3]}
In [116]: df = pd.read_csv(
.....: "tmp.csv", header=None, parse_dates=date_spec, index_col=0
.....: ) # index is the nominal column
.....:
In [117]: df
Out[117]:
actual 0 4
nominal
1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
Note
If a column or index contains an unparsable date, the entire column or
index will be returned unaltered as an object data type. For non-standard
datetime parsing, use to_datetime() after pd.read_csv.
Note
read_csv has a fast_path for parsing datetime strings in iso8601 format,
e.g “2000-01-01T00:01:02+00:00” and similar variations. If you can arrange
for your data to store datetimes in this format, load times will be
significantly faster; speed-ups of ~20x have been observed.
Date parsing functions#
Finally, the parser allows you to specify a custom date_parser function to
take full advantage of the flexibility of the date parsing API:
In [118]: df = pd.read_csv(
.....: "tmp.csv", header=None, parse_dates=date_spec, date_parser=pd.to_datetime
.....: )
.....:
In [119]: df
Out[119]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
pandas will try to call the date_parser function in three different ways. If
an exception is raised, the next one is tried:
date_parser is first called with one or more arrays as arguments,
as defined using parse_dates (e.g., date_parser(['2013', '2013'], ['1', '2'])).
If #1 fails, date_parser is called with all the columns
concatenated row-wise into a single array (e.g., date_parser(['2013 1', '2013 2'])).
If #2 fails, date_parser is called once for each row with one or more strings
(corresponding to the columns defined by parse_dates) as arguments.
Note that performance-wise, you should try these methods of parsing dates in order:
Try to infer the format using infer_datetime_format=True (see section below).
If you know the format, use pd.to_datetime():
date_parser=lambda x: pd.to_datetime(x, format=...).
If you have a really non-standard format, use a custom date_parser function.
For optimal performance, this should be vectorized, i.e., it should accept arrays
as arguments.
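A small sketch of option 2, a known format passed through pd.to_datetime (the data and format are assumptions for illustration):
import pandas as pd
from io import StringIO

data = "date,value\n27/01/1999,0.81\n28/01/1999,0.01"
df = pd.read_csv(
    StringIO(data),
    parse_dates=["date"],
    date_parser=lambda col: pd.to_datetime(col, format="%d/%m/%Y"),
)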
Parsing a CSV with mixed timezones#
pandas cannot natively represent a column or index with mixed timezones. If your CSV
file contains columns with a mixture of timezones, the default result will be
an object-dtype column with strings, even with parse_dates.
In [120]: content = """\
.....: a
.....: 2000-01-01T00:00:00+05:00
.....: 2000-01-01T00:00:00+06:00"""
.....:
In [121]: df = pd.read_csv(StringIO(content), parse_dates=["a"])
In [122]: df["a"]
Out[122]:
0 2000-01-01 00:00:00+05:00
1 2000-01-01 00:00:00+06:00
Name: a, dtype: object
To parse the mixed-timezone values as a datetime column, pass a partially-applied
to_datetime() with utc=True as the date_parser.
In [123]: df = pd.read_csv(
.....: StringIO(content),
.....: parse_dates=["a"],
.....: date_parser=lambda col: pd.to_datetime(col, utc=True),
.....: )
.....:
In [124]: df["a"]
Out[124]:
0 1999-12-31 19:00:00+00:00
1 1999-12-31 18:00:00+00:00
Name: a, dtype: datetime64[ns, UTC]
Inferring datetime format#
If you have parse_dates enabled for some or all of your columns, and your
datetime strings are all formatted the same way, you may get a large speed
up by setting infer_datetime_format=True. If set, pandas will attempt
to guess the format of your datetime strings, and then use a faster means
of parsing the strings. 5-10x parsing speeds have been observed. pandas
will fallback to the usual parsing if either the format cannot be guessed
or the format that was guessed cannot properly parse the entire column
of strings. So in general, infer_datetime_format should not have any
negative consequences if enabled.
Here are some examples of datetime strings that can be guessed (All
representing December 30th, 2011 at 00:00:00):
“20111230”
“2011/12/30”
“20111230 00:00:00”
“12/30/2011 00:00:00”
“30/Dec/2011 00:00:00”
“30/December/2011 00:00:00”
Note that infer_datetime_format is sensitive to dayfirst. With
dayfirst=True, it will guess “01/12/2011” to be December 1st. With
dayfirst=False (default) it will guess “01/12/2011” to be January 12th.
# Try to infer the format for the index column
In [125]: df = pd.read_csv(
.....: "foo.csv",
.....: index_col=0,
.....: parse_dates=True,
.....: infer_datetime_format=True,
.....: )
.....:
In [126]: df
Out[126]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
International date formats#
While US date formats tend to be MM/DD/YYYY, many international formats use
DD/MM/YYYY instead. For convenience, a dayfirst keyword is provided:
In [127]: data = "date,value,cat\n1/6/2000,5,a\n2/6/2000,10,b\n3/6/2000,15,c"
In [128]: print(data)
date,value,cat
1/6/2000,5,a
2/6/2000,10,b
3/6/2000,15,c
In [129]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [130]: pd.read_csv("tmp.csv", parse_dates=[0])
Out[130]:
date value cat
0 2000-01-06 5 a
1 2000-02-06 10 b
2 2000-03-06 15 c
In [131]: pd.read_csv("tmp.csv", dayfirst=True, parse_dates=[0])
Out[131]:
date value cat
0 2000-06-01 5 a
1 2000-06-02 10 b
2 2000-06-03 15 c
Writing CSVs to binary file objects#
New in version 1.2.0.
df.to_csv(..., mode="wb") allows writing a CSV to a file object
opened in binary mode. In most cases, it is not necessary to specify
mode as pandas will auto-detect whether the file object is
opened in text or binary mode.
In [132]: import io
In [133]: data = pd.DataFrame([0, 1, 2])
In [134]: buffer = io.BytesIO()
In [135]: data.to_csv(buffer, encoding="utf-8", compression="gzip")
Specifying method for floating-point conversion#
The parameter float_precision can be specified in order to use
a specific floating-point converter during parsing with the C engine.
The options are the ordinary converter, the high-precision converter, and
the round-trip converter (which is guaranteed to round-trip values after
writing to a file). For example:
In [136]: val = "0.3066101993807095471566981359501369297504425048828125"
In [137]: data = "a,b,c\n1,2,{0}".format(val)
In [138]: abs(
.....: pd.read_csv(
.....: StringIO(data),
.....: engine="c",
.....: float_precision=None,
.....: )["c"][0] - float(val)
.....: )
.....:
Out[138]: 5.551115123125783e-17
In [139]: abs(
.....: pd.read_csv(
.....: StringIO(data),
.....: engine="c",
.....: float_precision="high",
.....: )["c"][0] - float(val)
.....: )
.....:
Out[139]: 5.551115123125783e-17
In [140]: abs(
.....: pd.read_csv(StringIO(data), engine="c", float_precision="round_trip")["c"][0]
.....: - float(val)
.....: )
.....:
Out[140]: 0.0
Thousand separators#
For large numbers that have been written with a thousands separator, you can
set the thousands keyword to a string of length 1 so that integers will be parsed
correctly:
By default, numbers with a thousands separator will be parsed as strings:
In [141]: data = (
.....: "ID|level|category\n"
.....: "Patient1|123,000|x\n"
.....: "Patient2|23,000|y\n"
.....: "Patient3|1,234,018|z"
.....: )
.....:
In [142]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [143]: df = pd.read_csv("tmp.csv", sep="|")
In [144]: df
Out[144]:
ID level category
0 Patient1 123,000 x
1 Patient2 23,000 y
2 Patient3 1,234,018 z
In [145]: df.level.dtype
Out[145]: dtype('O')
The thousands keyword allows integers to be parsed correctly:
In [146]: df = pd.read_csv("tmp.csv", sep="|", thousands=",")
In [147]: df
Out[147]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
In [148]: df.level.dtype
Out[148]: dtype('int64')
NA values#
To control which values are parsed as missing values (which are signified by
NaN), specify a string in na_values. If you specify a list of strings,
then all values in it are considered to be missing values. If you specify a
number (a float, like 5.0 or an integer like 5), the
corresponding equivalent values will also imply a missing value (in this case
effectively [5.0, 5] are recognized as NaN).
To completely override the default values that are recognized as missing, specify keep_default_na=False.
The default NaN recognized values are ['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', 'N/A',
'n/a', 'NA', '<NA>', '#NA', 'NULL', 'null', 'NaN', '-NaN', 'nan', '-nan', ''].
Let us consider some examples:
pd.read_csv("path_to_file.csv", na_values=[5])
In the example above 5 and 5.0 will be recognized as NaN, in
addition to the defaults. A string will first be interpreted as a numerical
5, then as a NaN.
pd.read_csv("path_to_file.csv", keep_default_na=False, na_values=[""])
Above, only an empty field will be recognized as NaN.
pd.read_csv("path_to_file.csv", keep_default_na=False, na_values=["NA", "0"])
Above, both NA and 0 as strings are NaN.
pd.read_csv("path_to_file.csv", na_values=["Nope"])
The default values, in addition to the string "Nope" are recognized as
NaN.
Infinity#
inf like values will be parsed as np.inf (positive infinity), and -inf as -np.inf (negative infinity).
These will ignore the case of the value, meaning Inf will also be parsed as np.inf.
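For example, a minimal sketch using an in-memory buffer:
from io import StringIO

import pandas as pd

data = "a\ninf\n-Inf\nINF"
s = pd.read_csv(StringIO(data))["a"]
# s should have dtype float64, with values inf, -inf, inf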
Returning Series#
Using the squeeze keyword, the parser will return output with a single column
as a Series:
Deprecated since version 1.4.0: Users should append .squeeze("columns") to the DataFrame returned by
read_csv instead.
In [149]: data = "level\nPatient1,123000\nPatient2,23000\nPatient3,1234018"
In [150]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [151]: print(open("tmp.csv").read())
level
Patient1,123000
Patient2,23000
Patient3,1234018
In [152]: output = pd.read_csv("tmp.csv", squeeze=True)
In [153]: output
Out[153]:
Patient1 123000
Patient2 23000
Patient3 1234018
Name: level, dtype: int64
In [154]: type(output)
Out[154]: pandas.core.series.Series
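Following the deprecation note above, the same result can be obtained without the keyword by squeezing the returned DataFrame; a minimal sketch reusing the tmp.csv file written above:
output = pd.read_csv("tmp.csv").squeeze("columns")
# output should again be a Series named "level"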
Boolean values#
The common values True, False, TRUE, and FALSE are all
recognized as boolean. Occasionally you might want to recognize other values
as being boolean. To do this, use the true_values and false_values
options as follows:
In [155]: data = "a,b,c\n1,Yes,2\n3,No,4"
In [156]: print(data)
a,b,c
1,Yes,2
3,No,4
In [157]: pd.read_csv(StringIO(data))
Out[157]:
a b c
0 1 Yes 2
1 3 No 4
In [158]: pd.read_csv(StringIO(data), true_values=["Yes"], false_values=["No"])
Out[158]:
a b c
0 1 True 2
1 3 False 4
Handling “bad” lines#
Some files may have malformed lines with too few fields or too many. Lines with
too few fields will have NA values filled in the trailing fields. Lines with
too many fields will raise an error by default:
In [159]: data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"
In [160]: pd.read_csv(StringIO(data))
---------------------------------------------------------------------------
ParserError Traceback (most recent call last)
Cell In[160], line 1
----> 1 pd.read_csv(StringIO(data))
File ~/work/pandas/pandas/pandas/util/_decorators.py:211, in deprecate_kwarg.<locals>._deprecate_kwarg.<locals>.wrapper(*args, **kwargs)
209 else:
210 kwargs[new_arg_name] = new_arg_value
--> 211 return func(*args, **kwargs)
File ~/work/pandas/pandas/pandas/util/_decorators.py:331, in deprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper(*args, **kwargs)
325 if len(args) > num_allow_args:
326 warnings.warn(
327 msg.format(arguments=_format_argument_list(allow_args)),
328 FutureWarning,
329 stacklevel=find_stack_level(),
330 )
--> 331 return func(*args, **kwargs)
File ~/work/pandas/pandas/pandas/io/parsers/readers.py:950, in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, encoding_errors, dialect, error_bad_lines, warn_bad_lines, on_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options)
935 kwds_defaults = _refine_defaults_read(
936 dialect,
937 delimiter,
(...)
946 defaults={"delimiter": ","},
947 )
948 kwds.update(kwds_defaults)
--> 950 return _read(filepath_or_buffer, kwds)
File ~/work/pandas/pandas/pandas/io/parsers/readers.py:611, in _read(filepath_or_buffer, kwds)
608 return parser
610 with parser:
--> 611 return parser.read(nrows)
File ~/work/pandas/pandas/pandas/io/parsers/readers.py:1778, in TextFileReader.read(self, nrows)
1771 nrows = validate_integer("nrows", nrows)
1772 try:
1773 # error: "ParserBase" has no attribute "read"
1774 (
1775 index,
1776 columns,
1777 col_dict,
-> 1778 ) = self._engine.read( # type: ignore[attr-defined]
1779 nrows
1780 )
1781 except Exception:
1782 self.close()
File ~/work/pandas/pandas/pandas/io/parsers/c_parser_wrapper.py:230, in CParserWrapper.read(self, nrows)
228 try:
229 if self.low_memory:
--> 230 chunks = self._reader.read_low_memory(nrows)
231 # destructive to chunks
232 data = _concatenate_chunks(chunks)
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:808, in pandas._libs.parsers.TextReader.read_low_memory()
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:866, in pandas._libs.parsers.TextReader._read_rows()
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:852, in pandas._libs.parsers.TextReader._tokenize_rows()
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:1973, in pandas._libs.parsers.raise_parser_error()
ParserError: Error tokenizing data. C error: Expected 3 fields in line 3, saw 4
You can elect to skip bad lines:
In [29]: pd.read_csv(StringIO(data), on_bad_lines="warn")
Skipping line 3: expected 3 fields, saw 4
Out[29]:
a b c
0 1 2 3
1 8 9 10
Or pass a callable function to handle the bad line if engine="python".
The bad line will be a list of strings that was split by the sep:
In [29]: external_list = []
In [30]: def bad_lines_func(line):
...: external_list.append(line)
...: return line[-3:]
In [31]: pd.read_csv(StringIO(data), on_bad_lines=bad_lines_func, engine="python")
Out[31]:
a b c
0 1 2 3
1 5 6 7
2 8 9 10
In [32]: external_list
Out[32]: [4, 5, 6, 7]
New in version 1.4.0.
You can also use the usecols parameter to eliminate extraneous column
data that appear in some lines but not others:
In [33]: pd.read_csv(StringIO(data), usecols=[0, 1, 2])
Out[33]:
a b c
0 1 2 3
1 4 5 6
2 8 9 10
In case you want to keep all data including the lines with too many fields, you can
specify a sufficient number of names. This ensures that lines with not enough
fields are filled with NaN.
In [34]: pd.read_csv(StringIO(data), names=['a', 'b', 'c', 'd'])
Out[34]:
a b c d
0 1 2 3 NaN
1 4 5 6 7
2 8 9 10 NaN
Dialect#
The dialect keyword gives greater flexibility in specifying the file format.
By default it uses the Excel dialect but you can specify either the dialect name
or a csv.Dialect instance.
Suppose you had data with unenclosed quotes:
In [161]: data = "label1,label2,label3\n" 'index1,"a,c,e\n' "index2,b,d,f"
In [162]: print(data)
label1,label2,label3
index1,"a,c,e
index2,b,d,f
By default, read_csv uses the Excel dialect and treats the double quote as
the quote character, which causes it to fail when it finds a newline before it
finds the closing double quote.
We can get around this using dialect:
In [163]: import csv
In [164]: dia = csv.excel()
In [165]: dia.quoting = csv.QUOTE_NONE
In [166]: pd.read_csv(StringIO(data), dialect=dia)
Out[166]:
label1 label2 label3
index1 "a c e
index2 b d f
All of the dialect options can be specified separately by keyword arguments:
In [167]: data = "a,b,c~1,2,3~4,5,6"
In [168]: pd.read_csv(StringIO(data), lineterminator="~")
Out[168]:
a b c
0 1 2 3
1 4 5 6
Another common dialect option is skipinitialspace, to skip any whitespace
after a delimiter:
In [169]: data = "a, b, c\n1, 2, 3\n4, 5, 6"
In [170]: print(data)
a, b, c
1, 2, 3
4, 5, 6
In [171]: pd.read_csv(StringIO(data), skipinitialspace=True)
Out[171]:
a b c
0 1 2 3
1 4 5 6
The parsers make every attempt to “do the right thing” and not be fragile. Type
inference is a pretty big deal. If a column can be coerced to integer dtype
without altering the contents, the parser will do so. Any non-numeric
columns will come through as object dtype as with the rest of pandas objects.
Quoting and Escape Characters#
Quotes (and other escape characters) in embedded fields can be handled in any
number of ways. One way is to use backslashes; to properly parse this data, you
should pass the escapechar option:
In [172]: data = 'a,b\n"hello, \\"Bob\\", nice to see you",5'
In [173]: print(data)
a,b
"hello, \"Bob\", nice to see you",5
In [174]: pd.read_csv(StringIO(data), escapechar="\\")
Out[174]:
a b
0 hello, "Bob", nice to see you 5
Files with fixed width columns#
While read_csv() reads delimited data, the read_fwf() function works
with data files that have known and fixed column widths. The function parameters
to read_fwf are largely the same as read_csv with two extra parameters, and
a different usage of the delimiter parameter:
colspecs: A list of pairs (tuples) giving the extents of the
fixed-width fields of each line as half-open intervals (i.e., [from, to[ ).
String value ‘infer’ can be used to instruct the parser to try detecting
the column specifications from the first 100 rows of the data. Default
behavior, if not specified, is to infer.
widths: A list of field widths which can be used instead of ‘colspecs’
if the intervals are contiguous.
delimiter: Characters to consider as filler characters in the fixed-width file.
Can be used to specify the filler character of the fields
if it is not spaces (e.g., ‘~’).
Consider a typical fixed-width data file:
In [175]: data1 = (
.....: "id8141 360.242940 149.910199 11950.7\n"
.....: "id1594 444.953632 166.985655 11788.4\n"
.....: "id1849 364.136849 183.628767 11806.2\n"
.....: "id1230 413.836124 184.375703 11916.8\n"
.....: "id1948 502.953953 173.237159 12468.3"
.....: )
.....:
In [176]: with open("bar.csv", "w") as f:
.....: f.write(data1)
.....:
In order to parse this file into a DataFrame, we simply need to supply the
column specifications to the read_fwf function along with the file name:
# Column specifications are a list of half-intervals
In [177]: colspecs = [(0, 6), (8, 20), (21, 33), (34, 43)]
In [178]: df = pd.read_fwf("bar.csv", colspecs=colspecs, header=None, index_col=0)
In [179]: df
Out[179]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
Note how the parser automatically picks column names X.<column number> when the
header=None argument is specified. Alternatively, you can supply just the
column widths for contiguous columns:
# Widths are a list of integers
In [180]: widths = [6, 14, 13, 10]
In [181]: df = pd.read_fwf("bar.csv", widths=widths, header=None)
In [182]: df
Out[182]:
0 1 2 3
0 id8141 360.242940 149.910199 11950.7
1 id1594 444.953632 166.985655 11788.4
2 id1849 364.136849 183.628767 11806.2
3 id1230 413.836124 184.375703 11916.8
4 id1948 502.953953 173.237159 12468.3
The parser will take care of extra white spaces around the columns
so it’s ok to have extra separation between the columns in the file.
By default, read_fwf will try to infer the file’s colspecs by using the
first 100 rows of the file. It can do it only in cases when the columns are
aligned and correctly separated by the provided delimiter (default delimiter
is whitespace).
In [183]: df = pd.read_fwf("bar.csv", header=None, index_col=0)
In [184]: df
Out[184]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
read_fwf supports the dtype parameter for specifying the types of
parsed columns to be different from the inferred type.
In [185]: pd.read_fwf("bar.csv", header=None, index_col=0).dtypes
Out[185]:
1 float64
2 float64
3 float64
dtype: object
In [186]: pd.read_fwf("bar.csv", header=None, dtype={2: "object"}).dtypes
Out[186]:
0 object
1 float64
2 object
3 float64
dtype: object
Indexes#
Files with an “implicit” index column#
Consider a file with one less entry in the header than the number of data
columns:
In [187]: data = "A,B,C\n20090101,a,1,2\n20090102,b,3,4\n20090103,c,4,5"
In [188]: print(data)
A,B,C
20090101,a,1,2
20090102,b,3,4
20090103,c,4,5
In [189]: with open("foo.csv", "w") as f:
.....: f.write(data)
.....:
In this special case, read_csv assumes that the first column is to be used
as the index of the DataFrame:
In [190]: pd.read_csv("foo.csv")
Out[190]:
A B C
20090101 a 1 2
20090102 b 3 4
20090103 c 4 5
Note that the dates weren’t automatically parsed. In that case you would need
to do as before:
In [191]: df = pd.read_csv("foo.csv", parse_dates=True)
In [192]: df.index
Out[192]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype='datetime64[ns]', freq=None)
Reading an index with a MultiIndex#
Suppose you have data indexed by two columns:
In [193]: data = 'year,indiv,zit,xit\n1977,"A",1.2,.6\n1977,"B",1.5,.5'
In [194]: print(data)
year,indiv,zit,xit
1977,"A",1.2,.6
1977,"B",1.5,.5
In [195]: with open("mindex_ex.csv", mode="w") as f:
.....: f.write(data)
.....:
The index_col argument to read_csv can take a list of
column numbers to turn multiple columns into a MultiIndex for the index of the
returned object:
In [196]: df = pd.read_csv("mindex_ex.csv", index_col=[0, 1])
In [197]: df
Out[197]:
zit xit
year indiv
1977 A 1.2 0.6
B 1.5 0.5
In [198]: df.loc[1977]
Out[198]:
zit xit
indiv
A 1.2 0.6
B 1.5 0.5
Reading columns with a MultiIndex#
By specifying a list of row locations for the header argument, you
can read in a MultiIndex for the columns. Specifying non-consecutive
rows will skip the intervening rows.
In [199]: from pandas._testing import makeCustomDataframe as mkdf
In [200]: df = mkdf(5, 3, r_idx_nlevels=2, c_idx_nlevels=4)
In [201]: df.to_csv("mi.csv")
In [202]: print(open("mi.csv").read())
C0,,C_l0_g0,C_l0_g1,C_l0_g2
C1,,C_l1_g0,C_l1_g1,C_l1_g2
C2,,C_l2_g0,C_l2_g1,C_l2_g2
C3,,C_l3_g0,C_l3_g1,C_l3_g2
R0,R1,,,
R_l0_g0,R_l1_g0,R0C0,R0C1,R0C2
R_l0_g1,R_l1_g1,R1C0,R1C1,R1C2
R_l0_g2,R_l1_g2,R2C0,R2C1,R2C2
R_l0_g3,R_l1_g3,R3C0,R3C1,R3C2
R_l0_g4,R_l1_g4,R4C0,R4C1,R4C2
In [203]: pd.read_csv("mi.csv", header=[0, 1, 2, 3], index_col=[0, 1])
Out[203]:
C0 C_l0_g0 C_l0_g1 C_l0_g2
C1 C_l1_g0 C_l1_g1 C_l1_g2
C2 C_l2_g0 C_l2_g1 C_l2_g2
C3 C_l3_g0 C_l3_g1 C_l3_g2
R0 R1
R_l0_g0 R_l1_g0 R0C0 R0C1 R0C2
R_l0_g1 R_l1_g1 R1C0 R1C1 R1C2
R_l0_g2 R_l1_g2 R2C0 R2C1 R2C2
R_l0_g3 R_l1_g3 R3C0 R3C1 R3C2
R_l0_g4 R_l1_g4 R4C0 R4C1 R4C2
read_csv is also able to interpret a more common format
of multi-column indices.
In [204]: data = ",a,a,a,b,c,c\n,q,r,s,t,u,v\none,1,2,3,4,5,6\ntwo,7,8,9,10,11,12"
In [205]: print(data)
,a,a,a,b,c,c
,q,r,s,t,u,v
one,1,2,3,4,5,6
two,7,8,9,10,11,12
In [206]: with open("mi2.csv", "w") as fh:
.....: fh.write(data)
.....:
In [207]: pd.read_csv("mi2.csv", header=[0, 1], index_col=0)
Out[207]:
a b c
q r s t u v
one 1 2 3 4 5 6
two 7 8 9 10 11 12
Note
If an index_col is not specified (e.g. you don’t have an index, or wrote it
with df.to_csv(..., index=False), then any names on the columns index will
be lost.
Automatically “sniffing” the delimiter#
read_csv is capable of inferring delimited (not necessarily
comma-separated) files, as pandas uses the csv.Sniffer
class of the csv module. For this, you have to specify sep=None.
In [208]: df = pd.DataFrame(np.random.randn(10, 4))
In [209]: df.to_csv("tmp.csv", sep="|")
In [210]: df.to_csv("tmp2.csv", sep=":")
In [211]: pd.read_csv("tmp2.csv", sep=None, engine="python")
Out[211]:
Unnamed: 0 0 1 2 3
0 0 0.469112 -0.282863 -1.509059 -1.135632
1 1 1.212112 -0.173215 0.119209 -1.044236
2 2 -0.861849 -2.104569 -0.494929 1.071804
3 3 0.721555 -0.706771 -1.039575 0.271860
4 4 -0.424972 0.567020 0.276232 -1.087401
5 5 -0.673690 0.113648 -1.478427 0.524988
6 6 0.404705 0.577046 -1.715002 -1.039268
7 7 -0.370647 -1.157892 -1.344312 0.844885
8 8 1.075770 -0.109050 1.643563 -1.469388
9 9 0.357021 -0.674600 -1.776904 -0.968914
Reading multiple files to create a single DataFrame#
It’s best to use concat() to combine multiple files.
See the cookbook for an example.
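A minimal sketch (the data_*.csv file pattern is a hypothetical example and assumes the files share the same columns):
import glob

import pandas as pd

files = sorted(glob.glob("data_*.csv"))  # hypothetical file names
df = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)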
Iterating through files chunk by chunk#
Suppose you wish to iterate through a (potentially very large) file lazily
rather than reading the entire file into memory, such as the following:
In [212]: df = pd.DataFrame(np.random.randn(10, 4))
In [213]: df.to_csv("tmp.csv", sep="|")
In [214]: table = pd.read_csv("tmp.csv", sep="|")
In [215]: table
Out[215]:
Unnamed: 0 0 1 2 3
0 0 -1.294524 0.413738 0.276662 -0.472035
1 1 -0.013960 -0.362543 -0.006154 -0.923061
2 2 0.895717 0.805244 -1.206412 2.565646
3 3 1.431256 1.340309 -1.170299 -0.226169
4 4 0.410835 0.813850 0.132003 -0.827317
5 5 -0.076467 -1.187678 1.130127 -1.436737
6 6 -1.413681 1.607920 1.024180 0.569605
7 7 0.875906 -2.211372 0.974466 -2.006747
8 8 -0.410001 -0.078638 0.545952 -1.219217
9 9 -1.226825 0.769804 -1.281247 -0.727707
By specifying a chunksize to read_csv, the return
value will be an iterable object of type TextFileReader:
In [216]: with pd.read_csv("tmp.csv", sep="|", chunksize=4) as reader:
.....: reader
.....: for chunk in reader:
.....: print(chunk)
.....:
Unnamed: 0 0 1 2 3
0 0 -1.294524 0.413738 0.276662 -0.472035
1 1 -0.013960 -0.362543 -0.006154 -0.923061
2 2 0.895717 0.805244 -1.206412 2.565646
3 3 1.431256 1.340309 -1.170299 -0.226169
Unnamed: 0 0 1 2 3
4 4 0.410835 0.813850 0.132003 -0.827317
5 5 -0.076467 -1.187678 1.130127 -1.436737
6 6 -1.413681 1.607920 1.024180 0.569605
7 7 0.875906 -2.211372 0.974466 -2.006747
Unnamed: 0 0 1 2 3
8 8 -0.410001 -0.078638 0.545952 -1.219217
9 9 -1.226825 0.769804 -1.281247 -0.727707
Changed in version 1.2: read_csv/json/sas return a context-manager when iterating through a file.
Specifying iterator=True will also return the TextFileReader object:
In [217]: with pd.read_csv("tmp.csv", sep="|", iterator=True) as reader:
.....: reader.get_chunk(5)
.....:
Specifying the parser engine#
pandas currently supports three engines: the C engine, the python engine, and an experimental
pyarrow engine (requires the pyarrow package). In general, the pyarrow engine is fastest
on larger workloads and is equivalent in speed to the C engine on most other workloads.
The python engine tends to be slower than the pyarrow and C engines on most workloads. However,
the pyarrow engine is much less robust than the C engine, and the C engine in turn lacks a few
features that are only available in the Python engine.
Where possible, pandas uses the C parser (specified as engine='c'), but it may fall
back to Python if C-unsupported options are specified.
Currently, options unsupported by the C and pyarrow engines include:
sep other than a single character (e.g. regex separators)
skipfooter
sep=None with delim_whitespace=False
Specifying any of the above options will produce a ParserWarning unless the
python engine is selected explicitly using engine='python'.
Options that are unsupported by the pyarrow engine which are not covered by the list above include:
float_precision
chunksize
comment
nrows
thousands
memory_map
dialect
warn_bad_lines
error_bad_lines
on_bad_lines
delim_whitespace
quoting
lineterminator
converters
decimal
iterator
dayfirst
infer_datetime_format
verbose
skipinitialspace
low_memory
Specifying these options with engine='pyarrow' will raise a ValueError.
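A minimal sketch of selecting an engine explicitly (the pyarrow line is commented out because it requires the optional pyarrow package):
from io import StringIO

import pandas as pd

data = "a,b,c\n1,2,3\n4,5,6"

df_c = pd.read_csv(StringIO(data), engine="c")
df_py = pd.read_csv(StringIO(data), engine="python")
# df_arrow = pd.read_csv(StringIO(data), engine="pyarrow")  # requires pyarrow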
Reading/writing remote files#
You can pass in a URL to read or write remote files to many of pandas’ IO
functions - the following example shows reading a CSV file:
df = pd.read_csv("https://download.bls.gov/pub/time.series/cu/cu.item", sep="\t")
New in version 1.3.0.
A custom header can be sent alongside HTTP(s) requests by passing a dictionary
of header key value mappings to the storage_options keyword argument as shown below:
headers = {"User-Agent": "pandas"}
df = pd.read_csv(
"https://download.bls.gov/pub/time.series/cu/cu.item",
sep="\t",
storage_options=headers
)
All URLs which are not local files or HTTP(s) are handled by
fsspec, if installed, and its various filesystem implementations
(including Amazon S3, Google Cloud, SSH, FTP, webHDFS…).
Some of these implementations will require additional packages to be
installed, for example
S3 URLs require the s3fs library:
df = pd.read_json("s3://pandas-test/adatafile.json")
When dealing with remote storage systems, you might need
extra configuration with environment variables or config files in
special locations. For example, to access data in your S3 bucket,
you will need to define credentials in one of the several ways listed in
the S3Fs documentation. The same is true
for several of the storage backends, and you should follow the links
at fsimpl1 for implementations built into fsspec and fsimpl2
for those not included in the main fsspec
distribution.
You can also pass parameters directly to the backend driver. For example,
if you do not have S3 credentials, you can still access public data by
specifying an anonymous connection, such as
New in version 1.2.0.
pd.read_csv(
"s3://ncei-wcsd-archive/data/processed/SH1305/18kHz/SaKe2013"
"-D20130523-T080854_to_SaKe2013-D20130523-T085643.csv",
storage_options={"anon": True},
)
fsspec also allows complex URLs, for accessing data in compressed
archives, local caching of files, and more. To locally cache the above
example, you would modify the call to
pd.read_csv(
"simplecache::s3://ncei-wcsd-archive/data/processed/SH1305/18kHz/"
"SaKe2013-D20130523-T080854_to_SaKe2013-D20130523-T085643.csv",
storage_options={"s3": {"anon": True}},
)
where we specify that the “anon” parameter is meant for the “s3” part of
the implementation, not to the caching implementation. Note that this caches to a temporary
directory for the duration of the session only, but you can also specify
a permanent store.
Writing out data#
Writing to CSV format#
The Series and DataFrame objects have an instance method to_csv which
allows storing the contents of the object as a comma-separated-values file. The
function takes a number of arguments. Only the first is required.
path_or_buf: A string path to the file to write or a file object. If a file object, it must be opened with newline=''
sep : Field delimiter for the output file (default “,”)
na_rep: A string representation of a missing value (default ‘’)
float_format: Format string for floating point numbers
columns: Columns to write (default None)
header: Whether to write out the column names (default True)
index: whether to write row (index) names (default True)
index_label: Column label(s) for index column(s) if desired. If None
(default), and header and index are True, then the index names are
used. (A sequence should be given if the DataFrame uses MultiIndex).
mode : Python write mode, default ‘w’
encoding: a string representing the encoding to use in the output file
(defaults to 'utf-8')
lineterminator: Character sequence denoting line end (default os.linesep)
quoting: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL). Note that if you have set a float_format then floats are converted to strings and csv.QUOTE_NONNUMERIC will treat them as non-numeric
quotechar: Character used to quote fields (default ‘”’)
doublequote: Control quoting of quotechar in fields (default True)
escapechar: Character used to escape sep and quotechar when
appropriate (default None)
chunksize: Number of rows to write at a time
date_format: Format string for datetime objects
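A minimal sketch combining a few of the options above (the file name out.csv is illustrative):
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3.14159, None]})
df.to_csv("out.csv", index=False, na_rep="NA", float_format="%.2f")
# out.csv should contain something like:
# a,b
# 1,3.14
# 2,NA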
Writing a formatted string#
The DataFrame object has an instance method to_string which allows control
over the string representation of the object. All arguments are optional:
buf default None, for example a StringIO object
columns default None, which columns to write
col_space default None, minimum width of each column.
na_rep default NaN, representation of NA value
formatters default None, a dictionary (by column) of functions each of
which takes a single argument and returns a formatted string
float_format default None, a function which takes a single (float)
argument and returns a formatted string; to be applied to floats in the
DataFrame.
sparsify default True, set to False for a DataFrame with a hierarchical
index to print every MultiIndex key at each row.
index_names default True, will print the names of the indices
index default True, will print the index (i.e., row labels)
header default True, will print the column labels
justify default left, will print column headers left- or
right-justified
The Series object also has a to_string method, but with only the buf,
na_rep, float_format arguments. There is also a length argument
which, if set to True, will additionally output the length of the Series.
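A minimal sketch:
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3.14159, None]})
print(df.to_string(na_rep="-", float_format="{:.2f}".format))
# floats are rendered with two decimals; missing values are shown as '-'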
JSON#
Read and write JSON format files and strings.
Writing JSON#
A Series or DataFrame can be converted to a valid JSON string. Use to_json
with optional parameters:
path_or_buf : the pathname or buffer to write the output
This can be None in which case a JSON string is returned
orient :
Series:
default is index
allowed values are {split, records, index}
DataFrame:
default is columns
allowed values are {split, records, index, columns, values, table}
The format of the JSON string:
split : dict like {index -> [index], columns -> [columns], data -> [values]}
records : list like [{column -> value}, … , {column -> value}]
index : dict like {index -> {column -> value}}
columns : dict like {column -> {index -> value}}
values : just the values array
table : adhering to the JSON Table Schema
date_format : string, type of date conversion, ‘epoch’ for timestamp, ‘iso’ for ISO8601.
double_precision : The number of decimal places to use when encoding floating point values, default 10.
force_ascii : force encoded string to be ASCII, default True.
date_unit : The time unit to encode to, governs timestamp and ISO8601 precision. One of ‘s’, ‘ms’, ‘us’ or ‘ns’ for seconds, milliseconds, microseconds and nanoseconds respectively. Default ‘ms’.
default_handler : The handler to call if an object cannot otherwise be converted to a suitable format for JSON. Takes a single argument, which is the object to convert, and returns a serializable object.
lines : If records orient, then will write each record per line as json.
Note NaN’s, NaT’s and None will be converted to null and datetime objects will be converted based on the date_format and date_unit parameters.
In [218]: dfj = pd.DataFrame(np.random.randn(5, 2), columns=list("AB"))
In [219]: json = dfj.to_json()
In [220]: json
Out[220]: '{"A":{"0":-0.1213062281,"1":0.6957746499,"2":0.9597255933,"3":-0.6199759194,"4":-0.7323393705},"B":{"0":-0.0978826728,"1":0.3417343559,"2":-1.1103361029,"3":0.1497483186,"4":0.6877383895}}'
Orient options#
There are a number of different options for the format of the resulting JSON
file / string. Consider the following DataFrame and Series:
In [221]: dfjo = pd.DataFrame(
.....: dict(A=range(1, 4), B=range(4, 7), C=range(7, 10)),
.....: columns=list("ABC"),
.....: index=list("xyz"),
.....: )
.....:
In [222]: dfjo
Out[222]:
A B C
x 1 4 7
y 2 5 8
z 3 6 9
In [223]: sjo = pd.Series(dict(x=15, y=16, z=17), name="D")
In [224]: sjo
Out[224]:
x 15
y 16
z 17
Name: D, dtype: int64
Column oriented (the default for DataFrame) serializes the data as
nested JSON objects with column labels acting as the primary index:
In [225]: dfjo.to_json(orient="columns")
Out[225]: '{"A":{"x":1,"y":2,"z":3},"B":{"x":4,"y":5,"z":6},"C":{"x":7,"y":8,"z":9}}'
# Not available for Series
Index oriented (the default for Series) similar to column oriented
but the index labels are now primary:
In [226]: dfjo.to_json(orient="index")
Out[226]: '{"x":{"A":1,"B":4,"C":7},"y":{"A":2,"B":5,"C":8},"z":{"A":3,"B":6,"C":9}}'
In [227]: sjo.to_json(orient="index")
Out[227]: '{"x":15,"y":16,"z":17}'
Record oriented serializes the data to a JSON array of column -> value records,
index labels are not included. This is useful for passing DataFrame data to plotting
libraries, for example the JavaScript library d3.js:
In [228]: dfjo.to_json(orient="records")
Out[228]: '[{"A":1,"B":4,"C":7},{"A":2,"B":5,"C":8},{"A":3,"B":6,"C":9}]'
In [229]: sjo.to_json(orient="records")
Out[229]: '[15,16,17]'
Value oriented is a bare-bones option which serializes to nested JSON arrays of
values only, column and index labels are not included:
In [230]: dfjo.to_json(orient="values")
Out[230]: '[[1,4,7],[2,5,8],[3,6,9]]'
# Not available for Series
Split oriented serializes to a JSON object containing separate entries for
values, index and columns. Name is also included for Series:
In [231]: dfjo.to_json(orient="split")
Out[231]: '{"columns":["A","B","C"],"index":["x","y","z"],"data":[[1,4,7],[2,5,8],[3,6,9]]}'
In [232]: sjo.to_json(orient="split")
Out[232]: '{"name":"D","index":["x","y","z"],"data":[15,16,17]}'
Table oriented serializes to the JSON Table Schema, allowing for the
preservation of metadata including but not limited to dtypes and index names.
Note
Any orient option that encodes to a JSON object will not preserve the ordering of
index and column labels during round-trip serialization. If you wish to preserve
label ordering use the split option as it uses ordered containers.
Date handling#
Writing in ISO date format:
In [233]: dfd = pd.DataFrame(np.random.randn(5, 2), columns=list("AB"))
In [234]: dfd["date"] = pd.Timestamp("20130101")
In [235]: dfd = dfd.sort_index(axis=1, ascending=False)
In [236]: json = dfd.to_json(date_format="iso")
In [237]: json
Out[237]: '{"date":{"0":"2013-01-01T00:00:00.000","1":"2013-01-01T00:00:00.000","2":"2013-01-01T00:00:00.000","3":"2013-01-01T00:00:00.000","4":"2013-01-01T00:00:00.000"},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'
Writing in ISO date format, with microseconds:
In [238]: json = dfd.to_json(date_format="iso", date_unit="us")
In [239]: json
Out[239]: '{"date":{"0":"2013-01-01T00:00:00.000000","1":"2013-01-01T00:00:00.000000","2":"2013-01-01T00:00:00.000000","3":"2013-01-01T00:00:00.000000","4":"2013-01-01T00:00:00.000000"},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'
Epoch timestamps, in seconds:
In [240]: json = dfd.to_json(date_format="epoch", date_unit="s")
In [241]: json
Out[241]: '{"date":{"0":1356998400,"1":1356998400,"2":1356998400,"3":1356998400,"4":1356998400},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'
Writing to a file, with a date index and a date column:
In [242]: dfj2 = dfj.copy()
In [243]: dfj2["date"] = pd.Timestamp("20130101")
In [244]: dfj2["ints"] = list(range(5))
In [245]: dfj2["bools"] = True
In [246]: dfj2.index = pd.date_range("20130101", periods=5)
In [247]: dfj2.to_json("test.json")
In [248]: with open("test.json") as fh:
.....: print(fh.read())
.....:
{"A":{"1356998400000":-0.1213062281,"1357084800000":0.6957746499,"1357171200000":0.9597255933,"1357257600000":-0.6199759194,"1357344000000":-0.7323393705},"B":{"1356998400000":-0.0978826728,"1357084800000":0.3417343559,"1357171200000":-1.1103361029,"1357257600000":0.1497483186,"1357344000000":0.6877383895},"date":{"1356998400000":1356998400000,"1357084800000":1356998400000,"1357171200000":1356998400000,"1357257600000":1356998400000,"1357344000000":1356998400000},"ints":{"1356998400000":0,"1357084800000":1,"1357171200000":2,"1357257600000":3,"1357344000000":4},"bools":{"1356998400000":true,"1357084800000":true,"1357171200000":true,"1357257600000":true,"1357344000000":true}}
Fallback behavior#
If the JSON serializer cannot handle the container contents directly it will
fall back in the following manner:
if the dtype is unsupported (e.g. np.complex_) then the default_handler, if provided, will be called
for each value, otherwise an exception is raised.
if an object is unsupported it will attempt the following:
check if the object has defined a toDict method and call it.
A toDict method should return a dict which will then be JSON serialized.
invoke the default_handler if one was provided.
convert the object to a dict by traversing its contents. However this will often fail
with an OverflowError or give unexpected results.
In general the best approach for unsupported objects or dtypes is to provide a default_handler.
For example:
>>> DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json() # raises
RuntimeError: Unhandled numpy dtype 15
can be dealt with by specifying a simple default_handler:
In [249]: pd.DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json(default_handler=str)
Out[249]: '{"0":{"0":"(1+0j)","1":"(2+0j)","2":"(1+2j)"}}'
Reading JSON#
Reading a JSON string to pandas object can take a number of parameters.
The parser will try to parse a DataFrame if typ is not supplied or
is None. To explicitly force Series parsing, pass typ=series
filepath_or_buffer : a VALID JSON string or file handle / StringIO. The string could be
a URL. Valid URL schemes include http, ftp, S3, and file. For file URLs, a host
is expected. For instance, a local file could be
file://localhost/path/to/table.json
typ : type of object to recover (series or frame), default ‘frame’
orient :
Series :
default is index
allowed values are {split, records, index}
DataFrame
default is columns
allowed values are {split, records, index, columns, values, table}
The format of the JSON string:
split : dict like {index -> [index], columns -> [columns], data -> [values]}
records : list like [{column -> value}, … , {column -> value}]
index : dict like {index -> {column -> value}}
columns : dict like {column -> {index -> value}}
values : just the values array
table : adhering to the JSON Table Schema
dtype : if True, infer dtypes, if a dict of column to dtype, then use those, if False, then don’t infer dtypes at all, default is True, apply only to the data.
convert_axes : boolean, try to convert the axes to the proper dtypes, default is True
convert_dates : a list of columns to parse for dates; If True, then try to parse date-like columns, default is True.
keep_default_dates : boolean, default True. If parsing dates, then parse the default date-like columns.
numpy : direct decoding to NumPy arrays. default is False;
Supports numeric data only, although labels may be non-numeric. Also note that the JSON ordering MUST be the same for each term if numpy=True.
precise_float : boolean, default False. Set to enable usage of higher precision (strtod) function when decoding string to double values. Default (False) is to use fast but less precise builtin functionality.
date_unit : string, the timestamp unit to detect if converting dates. Default
None. By default the timestamp precision will be detected, if this is not desired
then pass one of ‘s’, ‘ms’, ‘us’ or ‘ns’ to force timestamp precision to
seconds, milliseconds, microseconds or nanoseconds respectively.
lines : reads file as one json object per line.
encoding : The encoding to use to decode py3 bytes.
chunksize : when used in combination with lines=True, return a JsonReader which reads in chunksize lines per iteration.
The parser will raise one of ValueError/TypeError/AssertionError if the JSON is not parseable.
If a non-default orient was used when encoding to JSON be sure to pass the same
option here so that decoding produces sensible results, see Orient Options for an
overview.
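For example, a minimal round-trip sketch with the split orient:
import pandas as pd

df = pd.DataFrame({"a": [1, 2]}, index=["x", "y"])
json_split = df.to_json(orient="split")
df2 = pd.read_json(json_split, orient="split")
# df2 should match df, including the string index labels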
Data conversion#
The default of convert_axes=True, dtype=True, and convert_dates=True
will try to parse the axes, and all of the data into appropriate types,
including dates. If you need to override specific dtypes, pass a dict to
dtype. convert_axes should only be set to False if you need to
preserve string-like numbers (e.g. ‘1’, ‘2’) in an axes.
Note
Large integer values may be converted to dates if convert_dates=True and the data and / or column labels appear ‘date-like’. The exact threshold depends on the date_unit specified. ‘date-like’ means that the column label meets one of the following criteria:
it ends with '_at'
it ends with '_time'
it begins with 'timestamp'
it is 'modified'
it is 'date'
Warning
When reading JSON data, automatic coercing into dtypes has some quirks:
an index can be reconstructed in a different order from serialization, that is, the returned order is not guaranteed to be the same as before serialization
a column that was float data will be converted to integer if it can be done safely, e.g. a column of 1.
bool columns will be converted to integer on reconstruction
Thus there are times where you may want to specify specific dtypes via the dtype keyword argument.
Reading from a JSON string:
In [250]: pd.read_json(json)
Out[250]:
date B A
0 2013-01-01 0.403310 0.176444
1 2013-01-01 0.301624 -0.154951
2 2013-01-01 -1.369849 -2.179861
3 2013-01-01 1.462696 -0.954208
4 2013-01-01 -0.826591 -1.743161
Reading from a file:
In [251]: pd.read_json("test.json")
Out[251]:
A B date ints bools
2013-01-01 -0.121306 -0.097883 2013-01-01 0 True
2013-01-02 0.695775 0.341734 2013-01-01 1 True
2013-01-03 0.959726 -1.110336 2013-01-01 2 True
2013-01-04 -0.619976 0.149748 2013-01-01 3 True
2013-01-05 -0.732339 0.687738 2013-01-01 4 True
Don’t convert any data (but still convert axes and dates):
In [252]: pd.read_json("test.json", dtype=object).dtypes
Out[252]:
A object
B object
date object
ints object
bools object
dtype: object
Specify dtypes for conversion:
In [253]: pd.read_json("test.json", dtype={"A": "float32", "bools": "int8"}).dtypes
Out[253]:
A float32
B float64
date datetime64[ns]
ints int64
bools int8
dtype: object
Preserve string indices:
In [254]: si = pd.DataFrame(
.....: np.zeros((4, 4)), columns=list(range(4)), index=[str(i) for i in range(4)]
.....: )
.....:
In [255]: si
Out[255]:
0 1 2 3
0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0
In [256]: si.index
Out[256]: Index(['0', '1', '2', '3'], dtype='object')
In [257]: si.columns
Out[257]: Int64Index([0, 1, 2, 3], dtype='int64')
In [258]: json = si.to_json()
In [259]: sij = pd.read_json(json, convert_axes=False)
In [260]: sij
Out[260]:
0 1 2 3
0 0 0 0 0
1 0 0 0 0
2 0 0 0 0
3 0 0 0 0
In [261]: sij.index
Out[261]: Index(['0', '1', '2', '3'], dtype='object')
In [262]: sij.columns
Out[262]: Index(['0', '1', '2', '3'], dtype='object')
Dates written in nanoseconds need to be read back in nanoseconds:
In [263]: json = dfj2.to_json(date_unit="ns")
# Try to parse timestamps as milliseconds -> Won't Work
In [264]: dfju = pd.read_json(json, date_unit="ms")
In [265]: dfju
Out[265]:
A B date ints bools
1356998400000000000 -0.121306 -0.097883 1356998400000000000 0 True
1357084800000000000 0.695775 0.341734 1356998400000000000 1 True
1357171200000000000 0.959726 -1.110336 1356998400000000000 2 True
1357257600000000000 -0.619976 0.149748 1356998400000000000 3 True
1357344000000000000 -0.732339 0.687738 1356998400000000000 4 True
# Let pandas detect the correct precision
In [266]: dfju = pd.read_json(json)
In [267]: dfju
Out[267]:
A B date ints bools
2013-01-01 -0.121306 -0.097883 2013-01-01 0 True
2013-01-02 0.695775 0.341734 2013-01-01 1 True
2013-01-03 0.959726 -1.110336 2013-01-01 2 True
2013-01-04 -0.619976 0.149748 2013-01-01 3 True
2013-01-05 -0.732339 0.687738 2013-01-01 4 True
# Or specify that all timestamps are in nanoseconds
In [268]: dfju = pd.read_json(json, date_unit="ns")
In [269]: dfju
Out[269]:
A B date ints bools
2013-01-01 -0.121306 -0.097883 2013-01-01 0 True
2013-01-02 0.695775 0.341734 2013-01-01 1 True
2013-01-03 0.959726 -1.110336 2013-01-01 2 True
2013-01-04 -0.619976 0.149748 2013-01-01 3 True
2013-01-05 -0.732339 0.687738 2013-01-01 4 True
The Numpy parameter#
Note
This parameter has been deprecated as of version 1.0.0 and will raise a FutureWarning.
This supports numeric data only. Index and columns labels may be non-numeric, e.g. strings, dates etc.
If numpy=True is passed to read_json an attempt will be made to sniff
an appropriate dtype during deserialization and to subsequently decode directly
to NumPy arrays, bypassing the need for intermediate Python objects.
This can provide speedups if you are deserialising a large amount of numeric
data:
In [270]: randfloats = np.random.uniform(-100, 1000, 10000)
In [271]: randfloats.shape = (1000, 10)
In [272]: dffloats = pd.DataFrame(randfloats, columns=list("ABCDEFGHIJ"))
In [273]: jsonfloats = dffloats.to_json()
In [274]: %timeit pd.read_json(jsonfloats)
7.91 ms +- 77.3 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [275]: %timeit pd.read_json(jsonfloats, numpy=True)
5.71 ms +- 333 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
The speedup is less noticeable for smaller datasets:
In [276]: jsonfloats = dffloats.head(100).to_json()
In [277]: %timeit pd.read_json(jsonfloats)
4.46 ms +- 25.9 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [278]: %timeit pd.read_json(jsonfloats, numpy=True)
4.09 ms +- 32.3 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
Warning
Direct NumPy decoding makes a number of assumptions and may fail or produce
unexpected output if these assumptions are not satisfied:
data is numeric.
data is uniform. The dtype is sniffed from the first value decoded.
A ValueError may be raised, or incorrect output may be produced
if this condition is not satisfied.
labels are ordered. Labels are only read from the first container, it is assumed
that each subsequent row / column has been encoded in the same order. This should be satisfied if the
data was encoded using to_json but may not be the case if the JSON
is from another source.
Normalization#
pandas provides a utility function to take a dict or list of dicts and normalize this semi-structured data
into a flat table.
In [279]: data = [
.....: {"id": 1, "name": {"first": "Coleen", "last": "Volk"}},
.....: {"name": {"given": "Mark", "family": "Regner"}},
.....: {"id": 2, "name": "Faye Raker"},
.....: ]
.....:
In [280]: pd.json_normalize(data)
Out[280]:
id name.first name.last name.given name.family name
0 1.0 Coleen Volk NaN NaN NaN
1 NaN NaN NaN Mark Regner NaN
2 2.0 NaN NaN NaN NaN Faye Raker
In [281]: data = [
.....: {
.....: "state": "Florida",
.....: "shortname": "FL",
.....: "info": {"governor": "Rick Scott"},
.....: "county": [
.....: {"name": "Dade", "population": 12345},
.....: {"name": "Broward", "population": 40000},
.....: {"name": "Palm Beach", "population": 60000},
.....: ],
.....: },
.....: {
.....: "state": "Ohio",
.....: "shortname": "OH",
.....: "info": {"governor": "John Kasich"},
.....: "county": [
.....: {"name": "Summit", "population": 1234},
.....: {"name": "Cuyahoga", "population": 1337},
.....: ],
.....: },
.....: ]
.....:
In [282]: pd.json_normalize(data, "county", ["state", "shortname", ["info", "governor"]])
Out[282]:
name population state shortname info.governor
0 Dade 12345 Florida FL Rick Scott
1 Broward 40000 Florida FL Rick Scott
2 Palm Beach 60000 Florida FL Rick Scott
3 Summit 1234 Ohio OH John Kasich
4 Cuyahoga 1337 Ohio OH John Kasich
The max_level parameter provides more control over which level to end normalization.
With max_level=1 the following snippet normalizes until 1st nesting level of the provided dict.
In [283]: data = [
.....: {
.....: "CreatedBy": {"Name": "User001"},
.....: "Lookup": {
.....: "TextField": "Some text",
.....: "UserField": {"Id": "ID001", "Name": "Name001"},
.....: },
.....: "Image": {"a": "b"},
.....: }
.....: ]
.....:
In [284]: pd.json_normalize(data, max_level=1)
Out[284]:
CreatedBy.Name Lookup.TextField Lookup.UserField Image.a
0 User001 Some text {'Id': 'ID001', 'Name': 'Name001'} b
Line delimited json#
pandas is able to read and write line-delimited json files that are common in data processing pipelines
using Hadoop or Spark.
For line-delimited json files, pandas can also return an iterator which reads in chunksize lines at a time. This can be useful for large files or to read from a stream.
In [285]: jsonl = """
.....: {"a": 1, "b": 2}
.....: {"a": 3, "b": 4}
.....: """
.....:
In [286]: df = pd.read_json(jsonl, lines=True)
In [287]: df
Out[287]:
a b
0 1 2
1 3 4
In [288]: df.to_json(orient="records", lines=True)
Out[288]: '{"a":1,"b":2}\n{"a":3,"b":4}\n'
# reader is an iterator that returns ``chunksize`` lines each iteration
In [289]: with pd.read_json(StringIO(jsonl), lines=True, chunksize=1) as reader:
.....: reader
.....: for chunk in reader:
.....: print(chunk)
.....:
Empty DataFrame
Columns: []
Index: []
a b
0 1 2
a b
1 3 4
Table schema#
Table Schema is a spec for describing tabular datasets as a JSON
object. The JSON includes information on the field names, types, and
other attributes. You can use the orient table to build
a JSON string with two fields, schema and data.
In [290]: df = pd.DataFrame(
.....: {
.....: "A": [1, 2, 3],
.....: "B": ["a", "b", "c"],
.....: "C": pd.date_range("2016-01-01", freq="d", periods=3),
.....: },
.....: index=pd.Index(range(3), name="idx"),
.....: )
.....:
In [291]: df
Out[291]:
A B C
idx
0 1 a 2016-01-01
1 2 b 2016-01-02
2 3 c 2016-01-03
In [292]: df.to_json(orient="table", date_format="iso")
Out[292]: '{"schema":{"fields":[{"name":"idx","type":"integer"},{"name":"A","type":"integer"},{"name":"B","type":"string"},{"name":"C","type":"datetime"}],"primaryKey":["idx"],"pandas_version":"1.4.0"},"data":[{"idx":0,"A":1,"B":"a","C":"2016-01-01T00:00:00.000"},{"idx":1,"A":2,"B":"b","C":"2016-01-02T00:00:00.000"},{"idx":2,"A":3,"B":"c","C":"2016-01-03T00:00:00.000"}]}'
The schema field contains the fields key, which itself contains
a list of column name to type pairs, including the Index or MultiIndex
(see below for a list of types).
The schema field also contains a primaryKey field if the (Multi)index
is unique.
The second field, data, contains the serialized data with the records
orient.
The index is included, and any datetimes are ISO 8601 formatted, as required
by the Table Schema spec.
The full list of types supported are described in the Table Schema
spec. This table shows the mapping from pandas types:
pandas type        Table Schema type
int64              integer
float64            number
bool               boolean
datetime64[ns]     datetime
timedelta64[ns]    duration
categorical        any
object             str
A few notes on the generated table schema:
The schema object contains a pandas_version field. This contains
the version of pandas’ dialect of the schema, and will be incremented
with each revision.
All dates are converted to UTC when serializing; even timezone-naive values
are treated as UTC with an offset of 0.
In [293]: from pandas.io.json import build_table_schema
In [294]: s = pd.Series(pd.date_range("2016", periods=4))
In [295]: build_table_schema(s)
Out[295]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime'}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
datetimes with a timezone (before serializing), include an additional field
tz with the time zone name (e.g. 'US/Central').
In [296]: s_tz = pd.Series(pd.date_range("2016", periods=12, tz="US/Central"))
In [297]: build_table_schema(s_tz)
Out[297]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime', 'tz': 'US/Central'}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
Periods are converted to timestamps before serialization, and so have the
same behavior of being converted to UTC. In addition, periods will contain
an additional field freq with the period’s frequency, e.g. 'A-DEC'.
In [298]: s_per = pd.Series(1, index=pd.period_range("2016", freq="A-DEC", periods=4))
In [299]: build_table_schema(s_per)
Out[299]:
{'fields': [{'name': 'index', 'type': 'datetime', 'freq': 'A-DEC'},
{'name': 'values', 'type': 'integer'}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
Categoricals use the any type and an enum constraint listing
the set of possible values. Additionally, an ordered field is included:
In [300]: s_cat = pd.Series(pd.Categorical(["a", "b", "a"]))
In [301]: build_table_schema(s_cat)
Out[301]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values',
'type': 'any',
'constraints': {'enum': ['a', 'b']},
'ordered': False}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
A primaryKey field, containing an array of labels, is included
if the index is unique:
In [302]: s_dupe = pd.Series([1, 2], index=[1, 1])
In [303]: build_table_schema(s_dupe)
Out[303]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'pandas_version': '1.4.0'}
The primaryKey behavior is the same with MultiIndexes, but in this
case the primaryKey is an array:
In [304]: s_multi = pd.Series(1, index=pd.MultiIndex.from_product([("a", "b"), (0, 1)]))
In [305]: build_table_schema(s_multi)
Out[305]:
{'fields': [{'name': 'level_0', 'type': 'string'},
{'name': 'level_1', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'primaryKey': FrozenList(['level_0', 'level_1']),
'pandas_version': '1.4.0'}
The default naming roughly follows these rules:
For Series, the object.name is used. If that is None, then the
name is values
For DataFrames, the stringified version of the column name is used
For Index (not MultiIndex), index.name is used, with a
fallback to index if that is None.
For MultiIndex, mi.names is used. If any level has no name,
then level_<i> is used.
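For example, a named Series should use its own name for the data field (a minimal sketch):
import pandas as pd

from pandas.io.json import build_table_schema

s_named = pd.Series([1, 2], name="population")
build_table_schema(s_named)
# expect a field named "population" rather than "values"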
read_json also accepts orient='table' as an argument. This allows for
the preservation of metadata such as dtypes and index names in a
round-trippable manner.
In [306]: df = pd.DataFrame(
.....: {
.....: "foo": [1, 2, 3, 4],
.....: "bar": ["a", "b", "c", "d"],
.....: "baz": pd.date_range("2018-01-01", freq="d", periods=4),
.....: "qux": pd.Categorical(["a", "b", "c", "c"]),
.....: },
.....: index=pd.Index(range(4), name="idx"),
.....: )
.....:
In [307]: df
Out[307]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
In [308]: df.dtypes
Out[308]:
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object
In [309]: df.to_json("test.json", orient="table")
In [310]: new_df = pd.read_json("test.json", orient="table")
In [311]: new_df
Out[311]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
In [312]: new_df.dtypes
Out[312]:
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object
Please note that the literal string ‘index’ as the name of an Index
is not round-trippable, nor are any names beginning with 'level_' within a
MultiIndex. These are used by default in DataFrame.to_json() to
indicate missing values and the subsequent read cannot distinguish the intent.
In [313]: df.index.name = "index"
In [314]: df.to_json("test.json", orient="table")
In [315]: new_df = pd.read_json("test.json", orient="table")
In [316]: print(new_df.index.name)
None
When using orient='table' along with user-defined ExtensionArray,
the generated schema will contain an additional extDtype key in the respective
fields element. This extra key is not standard but does enable JSON roundtrips
for extension types (e.g. read_json(df.to_json(orient="table"), orient="table")).
The extDtype key carries the name of the extension; if you have properly registered
the ExtensionDtype, pandas will use that name to perform a lookup into the registry
and re-convert the serialized data into your custom dtype.
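A minimal sketch using a built-in registered extension dtype (nullable Int64), which should round-trip the same way:
import pandas as pd

df_ext = pd.DataFrame({"a": pd.array([1, 2, None], dtype="Int64")})
round_tripped = pd.read_json(df_ext.to_json(orient="table"), orient="table")
# round_tripped["a"].dtype should again be the nullable Int64 dtype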
HTML#
Reading HTML content#
Warning
We highly encourage you to read the HTML Table Parsing gotchas
below regarding the issues surrounding the BeautifulSoup4/html5lib/lxml parsers.
The top-level read_html() function can accept an HTML
string/file/URL and will parse HTML tables into list of pandas DataFrames.
Let’s look at a few examples.
Note
read_html returns a list of DataFrame objects, even if there is
only a single table contained in the HTML content.
Read a URL with no options:
In [320]: "https://www.fdic.gov/resources/resolutions/bank-failures/failed-bank-list"
In [321]: pd.read_html(url)
Out[321]:
[ Bank NameBank CityCity StateSt ... Acquiring InstitutionAI Closing DateClosing FundFund
0 Almena State Bank Almena KS ... Equity Bank October 23, 2020 10538
1 First City Bank of Florida Fort Walton Beach FL ... United Fidelity Bank, fsb October 16, 2020 10537
2 The First State Bank Barboursville WV ... MVB Bank, Inc. April 3, 2020 10536
3 Ericson State Bank Ericson NE ... Farmers and Merchants Bank February 14, 2020 10535
4 City National Bank of New Jersey Newark NJ ... Industrial Bank November 1, 2019 10534
.. ... ... ... ... ... ... ...
558 Superior Bank, FSB Hinsdale IL ... Superior Federal, FSB July 27, 2001 6004
559 Malta National Bank Malta OH ... North Valley Bank May 3, 2001 4648
560 First Alliance Bank & Trust Co. Manchester NH ... Southern New Hampshire Bank & Trust February 2, 2001 4647
561 National State Bank of Metropolis Metropolis IL ... Banterra Bank of Marion December 14, 2000 4646
562 Bank of Honolulu Honolulu HI ... Bank of the Orient October 13, 2000 4645
[563 rows x 7 columns]]
Note
The data from the above URL changes every Monday so the resulting data above may be slightly different.
You can also pass HTML content directly to read_html as a string:
In [317]: html_str = """
.....: <table>
.....: <tr>
.....: <th>A</th>
.....: <th colspan="1">B</th>
.....: <th rowspan="1">C</th>
.....: </tr>
.....: <tr>
.....: <td>a</td>
.....: <td>b</td>
.....: <td>c</td>
.....: </tr>
.....: </table>
.....: """
.....:
In [318]: with open("tmp.html", "w") as f:
.....: f.write(html_str)
.....:
In [319]: df = pd.read_html("tmp.html")
In [320]: df[0]
Out[320]:
A B C
0 a b c
You can even pass in an instance of StringIO if you so desire:
In [321]: dfs = pd.read_html(StringIO(html_str))
In [322]: dfs[0]
Out[322]:
A B C
0 a b c
Note
The following examples are not run by the IPython evaluator due to the fact
that having so many network-accessing functions slows down the documentation
build. If you spot an error or an example that doesn’t run, please do not
hesitate to report it over on pandas GitHub issues page.
Read a URL and match a table that contains specific text:
match = "Metcalf Bank"
df_list = pd.read_html(url, match=match)
Specify a header row (by default <th> or <td> elements located within a
<thead> are used to form the column index; if multiple rows are contained within
<thead>, then a MultiIndex is created); if specified, the header row is taken
from the data minus the parsed header elements (<th> elements).
dfs = pd.read_html(url, header=0)
Specify an index column:
dfs = pd.read_html(url, index_col=0)
Specify a number of rows to skip:
dfs = pd.read_html(url, skiprows=0)
Specify a number of rows to skip using a list (range works
as well):
dfs = pd.read_html(url, skiprows=range(2))
Specify an HTML attribute:
dfs1 = pd.read_html(url, attrs={"id": "table"})
dfs2 = pd.read_html(url, attrs={"class": "sortable"})
print(np.array_equal(dfs1[0], dfs2[0])) # Should be True
Specify values that should be converted to NaN:
dfs = pd.read_html(url, na_values=["No Acquirer"])
Specify whether to keep the default set of NaN values:
dfs = pd.read_html(url, keep_default_na=False)
Specify converters for columns. This is useful for numerical text data that has
leading zeros. By default columns that are numerical are cast to numeric
types and the leading zeros are lost. To avoid this, we can convert these
columns to strings.
url_mcc = "https://en.wikipedia.org/wiki/Mobile_country_code"
dfs = pd.read_html(
url_mcc,
match="Telekom Albania",
header=0,
converters={"MNC": str},
)
Use some combination of the above:
dfs = pd.read_html(url, match="Metcalf Bank", index_col=0)
Read in pandas to_html output (with some loss of floating point precision):
df = pd.DataFrame(np.random.randn(2, 2))
s = df.to_html(float_format="{0:.40g}".format)
dfin = pd.read_html(s, index_col=0)
The lxml backend will raise an error on a failed parse if that is the only
parser you provide. If you only have a single parser you can provide just a
string, but it is considered good practice to pass a list with one string if,
for example, the function expects a sequence of strings. You may use:
dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor=["lxml"])
Or you could pass flavor='lxml' without a list:
dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor="lxml")
However, if you have bs4 and html5lib installed and pass None or ['lxml',
'bs4'] then the parse will most likely succeed. Note that as soon as a parse
succeeds, the function will return.
dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor=["lxml", "bs4"])
Links can be extracted from cells along with the text using extract_links="all".
In [323]: html_table = """
.....: <table>
.....: <tr>
.....: <th>GitHub</th>
.....: </tr>
.....: <tr>
.....: <td><a href="https://github.com/pandas-dev/pandas">pandas</a></td>
.....: </tr>
.....: </table>
.....: """
.....:
In [324]: df = pd.read_html(
.....: html_table,
.....: extract_links="all"
.....: )[0]
.....:
In [325]: df
Out[325]:
(GitHub, None)
0 (pandas, https://github.com/pandas-dev/pandas)
In [326]: df[("GitHub", None)]
Out[326]:
0 (pandas, https://github.com/pandas-dev/pandas)
Name: (GitHub, None), dtype: object
In [327]: df[("GitHub", None)].str[1]
Out[327]:
0 https://github.com/pandas-dev/pandas
Name: (GitHub, None), dtype: object
New in version 1.5.0.
Writing to HTML files#
DataFrame objects have an instance method to_html which renders the
contents of the DataFrame as an HTML table. The function arguments are as
in the method to_string described above.
Note
Not all of the possible options for DataFrame.to_html are shown here for
brevity’s sake. See to_html() for the
full set of options.
Note
In an HTML-rendering supported environment like a Jupyter Notebook, display(HTML(...))
will render the raw HTML into the environment.
In [328]: from IPython.display import display, HTML
In [329]: df = pd.DataFrame(np.random.randn(2, 2))
In [330]: df
Out[330]:
0 1
0 0.070319 1.773907
1 0.253908 0.414581
In [331]: html = df.to_html()
In [332]: print(html) # raw html
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.070319</td>
<td>1.773907</td>
</tr>
<tr>
<th>1</th>
<td>0.253908</td>
<td>0.414581</td>
</tr>
</tbody>
</table>
In [333]: display(HTML(html))
<IPython.core.display.HTML object>
The columns argument will limit the columns shown:
In [334]: html = df.to_html(columns=[0])
In [335]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.070319</td>
</tr>
<tr>
<th>1</th>
<td>0.253908</td>
</tr>
</tbody>
</table>
In [336]: display(HTML(html))
<IPython.core.display.HTML object>
float_format takes a Python callable to control the precision of floating
point values:
In [337]: html = df.to_html(float_format="{0:.10f}".format)
In [338]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.0703192665</td>
<td>1.7739074228</td>
</tr>
<tr>
<th>1</th>
<td>0.2539083433</td>
<td>0.4145805920</td>
</tr>
</tbody>
</table>
In [339]: display(HTML(html))
<IPython.core.display.HTML object>
bold_rows will make the row labels bold by default, but you can turn that
off:
In [340]: html = df.to_html(bold_rows=False)
In [341]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0.070319</td>
<td>1.773907</td>
</tr>
<tr>
<td>1</td>
<td>0.253908</td>
<td>0.414581</td>
</tr>
</tbody>
</table>
In [342]: display(HTML(html))
<IPython.core.display.HTML object>
The classes argument provides the ability to give the resulting HTML
table CSS classes. Note that these classes are appended to the existing
'dataframe' class.
In [343]: print(df.to_html(classes=["awesome_table_class", "even_more_awesome_class"]))
<table border="1" class="dataframe awesome_table_class even_more_awesome_class">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.070319</td>
<td>1.773907</td>
</tr>
<tr>
<th>1</th>
<td>0.253908</td>
<td>0.414581</td>
</tr>
</tbody>
</table>
The render_links argument provides the ability to add hyperlinks to cells
that contain URLs.
In [344]: url_df = pd.DataFrame(
.....: {
.....: "name": ["Python", "pandas"],
.....: "url": ["https://www.python.org/", "https://pandas.pydata.org"],
.....: }
.....: )
.....:
In [345]: html = url_df.to_html(render_links=True)
In [346]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>url</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Python</td>
<td><a href="https://www.python.org/" target="_blank">https://www.python.org/</a></td>
</tr>
<tr>
<th>1</th>
<td>pandas</td>
<td><a href="https://pandas.pydata.org" target="_blank">https://pandas.pydata.org</a></td>
</tr>
</tbody>
</table>
In [347]: display(HTML(html))
<IPython.core.display.HTML object>
Finally, the escape argument allows you to control whether the
“<”, “>” and “&” characters are escaped in the resulting HTML (by default it is
True). So to get the HTML without escaped characters pass escape=False.
In [348]: df = pd.DataFrame({"a": list("&<>"), "b": np.random.randn(3)})
Escaped:
In [349]: html = df.to_html()
In [350]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&</td>
<td>0.842321</td>
</tr>
<tr>
<th>1</th>
<td><</td>
<td>0.211337</td>
</tr>
<tr>
<th>2</th>
<td>></td>
<td>-1.055427</td>
</tr>
</tbody>
</table>
In [351]: display(HTML(html))
<IPython.core.display.HTML object>
Not escaped:
In [352]: html = df.to_html(escape=False)
In [353]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&</td>
<td>0.842321</td>
</tr>
<tr>
<th>1</th>
<td><</td>
<td>0.211337</td>
</tr>
<tr>
<th>2</th>
<td>></td>
<td>-1.055427</td>
</tr>
</tbody>
</table>
In [354]: display(HTML(html))
<IPython.core.display.HTML object>
Note
Some browsers may not show a difference in the rendering of the previous two
HTML tables.
HTML Table Parsing Gotchas#
There are some versioning issues surrounding the libraries that are used to
parse HTML tables in the top-level pandas io function read_html.
Issues with lxml
Benefits
lxml is very fast.
lxml requires Cython to install correctly.
Drawbacks
lxml does not make any guarantees about the results of its parse
unless it is given strictly valid markup.
In light of the above, we have chosen to allow you, the user, to use the
lxml backend, but this backend will use html5lib if lxml
fails to parse.
It is therefore highly recommended that you install both
BeautifulSoup4 and html5lib, so that you will still get a valid
result (provided everything else is valid) even if lxml fails.
Issues with BeautifulSoup4 using lxml as a backend
The above issues hold here as well since BeautifulSoup4 is essentially
just a wrapper around a parser backend.
Issues with BeautifulSoup4 using html5lib as a backend
Benefits
html5lib is far more lenient than lxml and consequently deals
with real-life markup in a much saner way rather than just, e.g.,
dropping an element without notifying you.
html5lib generates valid HTML5 markup from invalid markup
automatically. This is extremely important for parsing HTML tables,
since it guarantees a valid document. However, that does NOT mean that
it is “correct”, since the process of fixing markup does not have a
single definition.
html5lib is pure Python and requires no additional build steps beyond
its own installation.
Drawbacks
The biggest drawback to using html5lib is that it is slow as
molasses. However consider the fact that many tables on the web are not
big enough for the parsing algorithm runtime to matter. It is more
likely that the bottleneck will be in the process of reading the raw
text from the URL over the web, i.e., IO (input-output). For very large
tables, this might not be true.
LaTeX#
New in version 1.3.0.
Currently there are no methods to read from LaTeX, only output methods.
Writing to LaTeX files#
Note
DataFrame and Styler objects currently have a to_latex method. We recommend
using the Styler.to_latex() method
over DataFrame.to_latex() due to the former’s greater flexibility with
conditional styling, and the latter’s possible future deprecation.
Review the documentation for Styler.to_latex,
which gives examples of conditional styling and explains the operation of its keyword
arguments.
For simple application the following pattern is sufficient.
In [355]: df = pd.DataFrame([[1, 2], [3, 4]], index=["a", "b"], columns=["c", "d"])
In [356]: print(df.style.to_latex())
\begin{tabular}{lrr}
& c & d \\
a & 1 & 2 \\
b & 3 & 4 \\
\end{tabular}
To format values before output, chain the Styler.format
method.
In [357]: print(df.style.format("€ {}").to_latex())
\begin{tabular}{lrr}
& c & d \\
a & € 1 & € 2 \\
b & € 3 & € 4 \\
\end{tabular}
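As a hypothetical sketch of the conditional styling mentioned above (using the LaTeX-flavored props form described in the Styler.to_latex documentation), the maximum of each column can be set in bold before export:
# Sketch only: bold the per-column maximum, something DataFrame.to_latex
# cannot express directly.
styled = df.style.highlight_max(axis=0, props="bfseries: ;")
print(styled.to_latex())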
XML#
Reading XML#
New in version 1.3.0.
The top-level read_xml() function can accept an XML
string/file/URL and will parse nodes and attributes into a pandas DataFrame.
Note
Since there is no standard XML structure and design types can vary in
many ways, read_xml works best with flatter, shallower versions. If
an XML document is deeply nested, use the stylesheet feature to
transform the XML into a flatter version.
Let’s look at a few examples.
Read an XML string:
In [358]: xml = """<?xml version="1.0" encoding="UTF-8"?>
.....: <bookstore>
.....: <book category="cooking">
.....: <title lang="en">Everyday Italian</title>
.....: <author>Giada De Laurentiis</author>
.....: <year>2005</year>
.....: <price>30.00</price>
.....: </book>
.....: <book category="children">
.....: <title lang="en">Harry Potter</title>
.....: <author>J K. Rowling</author>
.....: <year>2005</year>
.....: <price>29.99</price>
.....: </book>
.....: <book category="web">
.....: <title lang="en">Learning XML</title>
.....: <author>Erik T. Ray</author>
.....: <year>2003</year>
.....: <price>39.95</price>
.....: </book>
.....: </bookstore>"""
.....:
In [359]: df = pd.read_xml(xml)
In [360]: df
Out[360]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
Read a URL with no options:
In [361]: df = pd.read_xml("https://www.w3schools.com/xml/books.xml")
In [362]: df
Out[362]:
category title author year price cover
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00 None
1 children Harry Potter J K. Rowling 2005 29.99 None
2 web XQuery Kick Start Vaidyanathan Nagarajan 2003 49.99 None
3 web Learning XML Erik T. Ray 2003 39.95 paperback
Read in the content of the “books.xml” file and pass it to read_xml
as a string:
In [363]: file_path = "books.xml"
In [364]: with open(file_path, "w") as f:
.....: f.write(xml)
.....:
In [365]: with open(file_path, "r") as f:
.....: df = pd.read_xml(f.read())
.....:
In [366]: df
Out[366]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
Read in the content of the “books.xml” as instance of StringIO or
BytesIO and pass it to read_xml:
In [367]: with open(file_path, "r") as f:
.....: sio = StringIO(f.read())
.....:
In [368]: df = pd.read_xml(sio)
In [369]: df
Out[369]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
In [370]: with open(file_path, "rb") as f:
.....: bio = BytesIO(f.read())
.....:
In [371]: df = pd.read_xml(bio)
In [372]: df
Out[372]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
You can even read XML from AWS S3 buckets such as NIH NCBI PMC Article Datasets providing
Biomedical and Life Science Journals:
In [373]: df = pd.read_xml(
.....: "s3://pmc-oa-opendata/oa_comm/xml/all/PMC1236943.xml",
.....: xpath=".//journal-meta",
.....: )
.....:
In [374]: df
Out[374]:
journal-id journal-title issn publisher
0 Cardiovasc Ultrasound Cardiovascular Ultrasound 1476-7120 NaN
With lxml as the default parser, you access the full-featured XML library
that extends Python’s ElementTree API. One powerful tool is the ability to query
nodes selectively or conditionally with more expressive XPath:
In [375]: df = pd.read_xml(file_path, xpath="//book[year=2005]")
In [376]: df
Out[376]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
Specify only elements or only attributes to parse:
In [377]: df = pd.read_xml(file_path, elems_only=True)
In [378]: df
Out[378]:
title author year price
0 Everyday Italian Giada De Laurentiis 2005 30.00
1 Harry Potter J K. Rowling 2005 29.99
2 Learning XML Erik T. Ray 2003 39.95
In [379]: df = pd.read_xml(file_path, attrs_only=True)
In [380]: df
Out[380]:
category
0 cooking
1 children
2 web
XML documents can have namespaces with prefixes and default namespaces without
prefixes, both of which are denoted with a special attribute xmlns. In order
to parse by node under a namespace context, xpath must reference a prefix.
For example, the XML below contains a namespace with prefix doc and URI at
https://example.com. In order to parse doc:row nodes,
namespaces must be used.
In [381]: xml = """<?xml version='1.0' encoding='utf-8'?>
.....: <doc:data xmlns:doc="https://example.com">
.....: <doc:row>
.....: <doc:shape>square</doc:shape>
.....: <doc:degrees>360</doc:degrees>
.....: <doc:sides>4.0</doc:sides>
.....: </doc:row>
.....: <doc:row>
.....: <doc:shape>circle</doc:shape>
.....: <doc:degrees>360</doc:degrees>
.....: <doc:sides/>
.....: </doc:row>
.....: <doc:row>
.....: <doc:shape>triangle</doc:shape>
.....: <doc:degrees>180</doc:degrees>
.....: <doc:sides>3.0</doc:sides>
.....: </doc:row>
.....: </doc:data>"""
.....:
In [382]: df = pd.read_xml(xml,
.....: xpath="//doc:row",
.....: namespaces={"doc": "https://example.com"})
.....:
In [383]: df
Out[383]:
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0
Similarly, an XML document can have a default namespace without a prefix. Failing
to assign a temporary prefix will return no nodes and raise a ValueError.
But assigning any temporary name to the correct URI allows parsing by nodes.
In [384]: xml = """<?xml version='1.0' encoding='utf-8'?>
.....: <data xmlns="https://example.com">
.....: <row>
.....: <shape>square</shape>
.....: <degrees>360</degrees>
.....: <sides>4.0</sides>
.....: </row>
.....: <row>
.....: <shape>circle</shape>
.....: <degrees>360</degrees>
.....: <sides/>
.....: </row>
.....: <row>
.....: <shape>triangle</shape>
.....: <degrees>180</degrees>
.....: <sides>3.0</sides>
.....: </row>
.....: </data>"""
.....:
In [385]: df = pd.read_xml(xml,
.....: xpath="//pandas:row",
.....: namespaces={"pandas": "https://example.com"})
.....:
In [386]: df
Out[386]:
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0
However, if the XPath does not reference node names, such as the generic /*, then
namespaces is not required.
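As a hypothetical sketch using the default-namespace document above, a purely positional XPath can select the row nodes without passing namespaces (the exact expression depends on the document structure):
# Sketch only: select the children of the root element by position, not by name.
df = pd.read_xml(xml, xpath="/*/*")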
With lxml as parser, you can flatten nested XML documents with an XSLT
script which also can be string/file/URL types. As background, XSLT is
a special-purpose language written in a special XML file that can transform
original XML documents into other XML, HTML, even text (CSV, JSON, etc.)
using an XSLT processor.
For example, consider this somewhat nested structure of Chicago “L” Rides
where station and rides elements encapsulate data in their own sections.
With the below XSLT, lxml can transform the original nested document into a flatter
output (as shown below for demonstration) for easier parsing into a DataFrame:
In [387]: xml = """<?xml version='1.0' encoding='utf-8'?>
.....: <response>
.....: <row>
.....: <station id="40850" name="Library"/>
.....: <month>2020-09-01T00:00:00</month>
.....: <rides>
.....: <avg_weekday_rides>864.2</avg_weekday_rides>
.....: <avg_saturday_rides>534</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>417.2</avg_sunday_holiday_rides>
.....: </rides>
.....: </row>
.....: <row>
.....: <station id="41700" name="Washington/Wabash"/>
.....: <month>2020-09-01T00:00:00</month>
.....: <rides>
.....: <avg_weekday_rides>2707.4</avg_weekday_rides>
.....: <avg_saturday_rides>1909.8</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1438.6</avg_sunday_holiday_rides>
.....: </rides>
.....: </row>
.....: <row>
.....: <station id="40380" name="Clark/Lake"/>
.....: <month>2020-09-01T00:00:00</month>
.....: <rides>
.....: <avg_weekday_rides>2949.6</avg_weekday_rides>
.....: <avg_saturday_rides>1657</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1453.8</avg_sunday_holiday_rides>
.....: </rides>
.....: </row>
.....: </response>"""
.....:
In [388]: xsl = """<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
.....: <xsl:output method="xml" omit-xml-declaration="no" indent="yes"/>
.....: <xsl:strip-space elements="*"/>
.....: <xsl:template match="/response">
.....: <xsl:copy>
.....: <xsl:apply-templates select="row"/>
.....: </xsl:copy>
.....: </xsl:template>
.....: <xsl:template match="row">
.....: <xsl:copy>
.....: <station_id><xsl:value-of select="station/@id"/></station_id>
.....: <station_name><xsl:value-of select="station/@name"/></station_name>
.....: <xsl:copy-of select="month|rides/*"/>
.....: </xsl:copy>
.....: </xsl:template>
.....: </xsl:stylesheet>"""
.....:
In [389]: output = """<?xml version='1.0' encoding='utf-8'?>
.....: <response>
.....: <row>
.....: <station_id>40850</station_id>
.....: <station_name>Library</station_name>
.....: <month>2020-09-01T00:00:00</month>
.....: <avg_weekday_rides>864.2</avg_weekday_rides>
.....: <avg_saturday_rides>534</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>417.2</avg_sunday_holiday_rides>
.....: </row>
.....: <row>
.....: <station_id>41700</station_id>
.....: <station_name>Washington/Wabash</station_name>
.....: <month>2020-09-01T00:00:00</month>
.....: <avg_weekday_rides>2707.4</avg_weekday_rides>
.....: <avg_saturday_rides>1909.8</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1438.6</avg_sunday_holiday_rides>
.....: </row>
.....: <row>
.....: <station_id>40380</station_id>
.....: <station_name>Clark/Lake</station_name>
.....: <month>2020-09-01T00:00:00</month>
.....: <avg_weekday_rides>2949.6</avg_weekday_rides>
.....: <avg_saturday_rides>1657</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1453.8</avg_sunday_holiday_rides>
.....: </row>
.....: </response>"""
.....:
In [390]: df = pd.read_xml(xml, stylesheet=xsl)
In [391]: df
Out[391]:
station_id station_name ... avg_saturday_rides avg_sunday_holiday_rides
0 40850 Library ... 534.0 417.2
1 41700 Washington/Wabash ... 1909.8 1438.6
2 40380 Clark/Lake ... 1657.0 1453.8
[3 rows x 6 columns]
For very large XML files that can range from hundreds of megabytes to gigabytes, pandas.read_xml()
supports parsing such sizeable files using lxml’s iterparse and etree’s iterparse,
which are memory-efficient methods to iterate through an XML tree and extract specific elements
and attributes without holding the entire tree in memory.
New in version 1.5.0.
To use this feature, you must pass a physical XML file path into read_xml and use the iterparse argument.
Files should not be compressed or point to online sources but should be stored on local disk. Also, iterparse should be
a dictionary where the key is the repeating node in the document (which becomes the rows) and the value is a list of
any element or attribute that is a descendant (i.e., child, grandchild) of the repeating node. Since XPath is not
used in this method, descendants do not need to share the same relationship with one another. Below is an example
of reading in Wikipedia’s very large (12 GB+) latest article data dump.
In [1]: df = pd.read_xml(
... "/path/to/downloaded/enwikisource-latest-pages-articles.xml",
... iterparse = {"page": ["title", "ns", "id"]}
... )
... df
Out[2]:
title ns id
0 Gettysburg Address 0 21450
1 Main Page 0 42950
2 Declaration by United Nations 0 8435
3 Constitution of the United States of America 0 8435
4 Declaration of Independence (Israel) 0 17858
... ... ... ...
3578760 Page:Black cat 1897 07 v2 n10.pdf/17 104 219649
3578761 Page:Black cat 1897 07 v2 n10.pdf/43 104 219649
3578762 Page:Black cat 1897 07 v2 n10.pdf/44 104 219649
3578763 The History of Tom Jones, a Foundling/Book IX 0 12084291
3578764 Page:Shakespeare of Stratford (1926) Yale.djvu/91 104 21450
[3578765 rows x 3 columns]
Writing XML#
New in version 1.3.0.
DataFrame objects have an instance method to_xml which renders the
contents of the DataFrame as an XML document.
Note
This method does not support special properties of XML including DTD,
CData, XSD schemas, processing instructions, comments, and others.
Only namespaces at the root level are supported. However, stylesheet
allows design changes after the initial output.
Let’s look at a few examples.
Write an XML without options:
In [392]: geom_df = pd.DataFrame(
.....: {
.....: "shape": ["square", "circle", "triangle"],
.....: "degrees": [360, 360, 180],
.....: "sides": [4, np.nan, 3],
.....: }
.....: )
.....:
In [393]: print(geom_df.to_xml())
<?xml version='1.0' encoding='utf-8'?>
<data>
<row>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</row>
<row>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
Write an XML with new root and row name:
In [394]: print(geom_df.to_xml(root_name="geometry", row_name="objects"))
<?xml version='1.0' encoding='utf-8'?>
<geometry>
<objects>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</objects>
<objects>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</objects>
<objects>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</objects>
</geometry>
Write an attribute-centric XML:
In [395]: print(geom_df.to_xml(attr_cols=geom_df.columns.tolist()))
<?xml version='1.0' encoding='utf-8'?>
<data>
<row index="0" shape="square" degrees="360" sides="4.0"/>
<row index="1" shape="circle" degrees="360"/>
<row index="2" shape="triangle" degrees="180" sides="3.0"/>
</data>
Write a mix of elements and attributes:
In [396]: print(
.....: geom_df.to_xml(
.....: index=False,
.....: attr_cols=['shape'],
.....: elem_cols=['degrees', 'sides'])
.....: )
.....:
<?xml version='1.0' encoding='utf-8'?>
<data>
<row shape="square">
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row shape="circle">
<degrees>360</degrees>
<sides/>
</row>
<row shape="triangle">
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
Any DataFrames with hierarchical columns will be flattened for XML element names
with levels delimited by underscores:
In [397]: ext_geom_df = pd.DataFrame(
.....: {
.....: "type": ["polygon", "other", "polygon"],
.....: "shape": ["square", "circle", "triangle"],
.....: "degrees": [360, 360, 180],
.....: "sides": [4, np.nan, 3],
.....: }
.....: )
.....:
In [398]: pvt_df = ext_geom_df.pivot_table(index='shape',
.....: columns='type',
.....: values=['degrees', 'sides'],
.....: aggfunc='sum')
.....:
In [399]: pvt_df
Out[399]:
degrees sides
type other polygon other polygon
shape
circle 360.0 NaN 0.0 NaN
square NaN 360.0 NaN 4.0
triangle NaN 180.0 NaN 3.0
In [400]: print(pvt_df.to_xml())
<?xml version='1.0' encoding='utf-8'?>
<data>
<row>
<shape>circle</shape>
<degrees_other>360.0</degrees_other>
<degrees_polygon/>
<sides_other>0.0</sides_other>
<sides_polygon/>
</row>
<row>
<shape>square</shape>
<degrees_other/>
<degrees_polygon>360.0</degrees_polygon>
<sides_other/>
<sides_polygon>4.0</sides_polygon>
</row>
<row>
<shape>triangle</shape>
<degrees_other/>
<degrees_polygon>180.0</degrees_polygon>
<sides_other/>
<sides_polygon>3.0</sides_polygon>
</row>
</data>
Write an XML with default namespace:
In [401]: print(geom_df.to_xml(namespaces={"": "https://example.com"}))
<?xml version='1.0' encoding='utf-8'?>
<data xmlns="https://example.com">
<row>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</row>
<row>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
Write an XML with namespace prefix:
In [402]: print(
.....: geom_df.to_xml(namespaces={"doc": "https://example.com"},
.....: prefix="doc")
.....: )
.....:
<?xml version='1.0' encoding='utf-8'?>
<doc:data xmlns:doc="https://example.com">
<doc:row>
<doc:index>0</doc:index>
<doc:shape>square</doc:shape>
<doc:degrees>360</doc:degrees>
<doc:sides>4.0</doc:sides>
</doc:row>
<doc:row>
<doc:index>1</doc:index>
<doc:shape>circle</doc:shape>
<doc:degrees>360</doc:degrees>
<doc:sides/>
</doc:row>
<doc:row>
<doc:index>2</doc:index>
<doc:shape>triangle</doc:shape>
<doc:degrees>180</doc:degrees>
<doc:sides>3.0</doc:sides>
</doc:row>
</doc:data>
Write an XML without declaration or pretty print:
In [403]: print(
.....: geom_df.to_xml(xml_declaration=False,
.....: pretty_print=False)
.....: )
.....:
<data><row><index>0</index><shape>square</shape><degrees>360</degrees><sides>4.0</sides></row><row><index>1</index><shape>circle</shape><degrees>360</degrees><sides/></row><row><index>2</index><shape>triangle</shape><degrees>180</degrees><sides>3.0</sides></row></data>
Write an XML and transform with stylesheet:
In [404]: xsl = """<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
.....: <xsl:output method="xml" omit-xml-declaration="no" indent="yes"/>
.....: <xsl:strip-space elements="*"/>
.....: <xsl:template match="/data">
.....: <geometry>
.....: <xsl:apply-templates select="row"/>
.....: </geometry>
.....: </xsl:template>
.....: <xsl:template match="row">
.....: <object index="{index}">
.....: <xsl:if test="shape!='circle'">
.....: <xsl:attribute name="type">polygon</xsl:attribute>
.....: </xsl:if>
.....: <xsl:copy-of select="shape"/>
.....: <property>
.....: <xsl:copy-of select="degrees|sides"/>
.....: </property>
.....: </object>
.....: </xsl:template>
.....: </xsl:stylesheet>"""
.....:
In [405]: print(geom_df.to_xml(stylesheet=xsl))
<?xml version="1.0"?>
<geometry>
<object index="0" type="polygon">
<shape>square</shape>
<property>
<degrees>360</degrees>
<sides>4.0</sides>
</property>
</object>
<object index="1">
<shape>circle</shape>
<property>
<degrees>360</degrees>
<sides/>
</property>
</object>
<object index="2" type="polygon">
<shape>triangle</shape>
<property>
<degrees>180</degrees>
<sides>3.0</sides>
</property>
</object>
</geometry>
XML Final Notes#
All XML documents adhere to W3C specifications. Both etree and lxml
parsers will fail to parse any markup document that is not well-formed or
does not follow XML syntax rules. Do be aware HTML is not an XML document unless it
follows XHTML specs. However, other popular markup types including KML, XAML,
RSS, MusicML, MathML are compliant XML schemas.
For the above reason, if your application builds XML prior to pandas operations,
use appropriate DOM libraries like etree and lxml to build the necessary
document rather than string concatenation or regex adjustments. Always remember
that XML is a special text file with markup rules.
With very large XML files (several hundred MBs to GBs), XPath and XSLT
can become memory-intensive operations. Be sure to have enough available
RAM for reading and writing to large XML files (roughly about 5 times the
size of text).
Because XSLT is a programming language, use it with caution since such scripts
can pose a security risk in your environment and can run large or infinite
recursive operations. Always test scripts on small fragments before full run.
The etree parser supports all functionality of both read_xml and
to_xml except for complex XPath and any XSLT. Though limited in features,
etree is still a reliable and capable parser and tree builder. Its
performance may trail lxml to a certain degree for larger files but
relatively unnoticeable on small to medium size files.
Excel files#
The read_excel() method can read Excel 2007+ (.xlsx) files
using the openpyxl Python module. Excel 2003 (.xls) files
can be read using xlrd. Binary Excel (.xlsb)
files can be read using pyxlsb.
The to_excel() instance method is used for
saving a DataFrame to Excel. Generally the semantics are
similar to working with csv data.
See the cookbook for some advanced strategies.
Warning
The xlwt package for writing old-style .xls
excel files is no longer maintained.
The xlrd package is now only for reading
old-style .xls files.
Before pandas 1.3.0, the default argument engine=None to read_excel()
would result in using the xlrd engine in many cases, including new
Excel 2007+ (.xlsx) files. pandas will now default to using the
openpyxl engine.
It is strongly encouraged to install openpyxl to read Excel 2007+
(.xlsx) files.
Please do not report issues when using xlrd to read .xlsx files.
This is no longer supported; switch to using openpyxl instead.
Attempting to use the xlwt engine will raise a FutureWarning
unless the option io.excel.xls.writer is set to "xlwt".
While this option is now deprecated and will also raise a FutureWarning,
it can be globally set and the warning suppressed. Users are recommended to
write .xlsx files using the openpyxl engine instead.
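As a minimal sketch (assuming df is an existing DataFrame and you genuinely still need .xls output), the deprecated option can be set globally and the FutureWarning raised when setting it silenced:
import warnings

# Sketch only: opt back into the deprecated xlwt writer and suppress the warning
# emitted when the option is set.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", FutureWarning)
    pd.set_option("io.excel.xls.writer", "xlwt")
df.to_excel("path_to_file.xls", sheet_name="Sheet1")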
Reading Excel files#
In the most basic use-case, read_excel takes a path to an Excel
file, and the sheet_name indicating which sheet to parse.
# Returns a DataFrame
pd.read_excel("path_to_file.xls", sheet_name="Sheet1")
ExcelFile class#
To facilitate working with multiple sheets from the same file, the ExcelFile
class can be used to wrap the file and can be passed into read_excel.
There will be a performance benefit for reading multiple sheets as the file is
read into memory only once.
xlsx = pd.ExcelFile("path_to_file.xls")
df = pd.read_excel(xlsx, "Sheet1")
The ExcelFile class can also be used as a context manager.
with pd.ExcelFile("path_to_file.xls") as xls:
df1 = pd.read_excel(xls, "Sheet1")
df2 = pd.read_excel(xls, "Sheet2")
The sheet_names property will generate
a list of the sheet names in the file.
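For example, reusing the hypothetical file path from above:
with pd.ExcelFile("path_to_file.xls") as xls:
    print(xls.sheet_names)  # e.g. ['Sheet1', 'Sheet2']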
The primary use-case for an ExcelFile is parsing multiple sheets with
different parameters:
data = {}
# For when Sheet1's format differs from Sheet2
with pd.ExcelFile("path_to_file.xls") as xls:
data["Sheet1"] = pd.read_excel(xls, "Sheet1", index_col=None, na_values=["NA"])
data["Sheet2"] = pd.read_excel(xls, "Sheet2", index_col=1)
Note that if the same parsing parameters are used for all sheets, a list
of sheet names can simply be passed to read_excel with no loss in performance.
# using the ExcelFile class
data = {}
with pd.ExcelFile("path_to_file.xls") as xls:
data["Sheet1"] = pd.read_excel(xls, "Sheet1", index_col=None, na_values=["NA"])
data["Sheet2"] = pd.read_excel(xls, "Sheet2", index_col=None, na_values=["NA"])
# equivalent using the read_excel function
data = pd.read_excel(
"path_to_file.xls", ["Sheet1", "Sheet2"], index_col=None, na_values=["NA"]
)
ExcelFile can also be called with a xlrd.book.Book object
as a parameter. This allows the user to control how the excel file is read.
For example, sheets can be loaded on demand by calling xlrd.open_workbook()
with on_demand=True.
import xlrd
xlrd_book = xlrd.open_workbook("path_to_file.xls", on_demand=True)
with pd.ExcelFile(xlrd_book) as xls:
df1 = pd.read_excel(xls, "Sheet1")
df2 = pd.read_excel(xls, "Sheet2")
Specifying sheets#
Note
The second argument is sheet_name, not to be confused with ExcelFile.sheet_names.
Note
An ExcelFile’s attribute sheet_names provides access to a list of sheets.
The argument sheet_name allows specifying the sheet or sheets to read.
The default value for sheet_name is 0, indicating to read the first sheet.
Pass a string to refer to the name of a particular sheet in the workbook.
Pass an integer to refer to the index of a sheet. Indices follow Python
convention, beginning at 0.
Pass a list of either strings or integers to return a dictionary of the specified sheets.
Pass None to return a dictionary of all available sheets.
# Returns a DataFrame
pd.read_excel("path_to_file.xls", "Sheet1", index_col=None, na_values=["NA"])
Using the sheet index:
# Returns a DataFrame
pd.read_excel("path_to_file.xls", 0, index_col=None, na_values=["NA"])
Using all default values:
# Returns a DataFrame
pd.read_excel("path_to_file.xls")
Using None to get all sheets:
# Returns a dictionary of DataFrames
pd.read_excel("path_to_file.xls", sheet_name=None)
Using a list to get multiple sheets:
# Returns the 1st and 4th sheet, as a dictionary of DataFrames.
pd.read_excel("path_to_file.xls", sheet_name=["Sheet1", 3])
read_excel can read more than one sheet, by setting sheet_name to either
a list of sheet names, a list of sheet positions, or None to read all sheets.
Sheets can be specified by sheet index or sheet name, using an integer or string,
respectively.
Reading a MultiIndex#
read_excel can read a MultiIndex index, by passing a list of columns to index_col
and a MultiIndex column by passing a list of rows to header. If either the index
or columns have serialized level names those will be read in as well by specifying
the rows/columns that make up the levels.
For example, to read in a MultiIndex index without names:
In [406]: df = pd.DataFrame(
.....: {"a": [1, 2, 3, 4], "b": [5, 6, 7, 8]},
.....: index=pd.MultiIndex.from_product([["a", "b"], ["c", "d"]]),
.....: )
.....:
In [407]: df.to_excel("path_to_file.xlsx")
In [408]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1])
In [409]: df
Out[409]:
a b
a c 1 5
d 2 6
b c 3 7
d 4 8
If the index has level names, they will be parsed as well, using the same
parameters.
In [410]: df.index = df.index.set_names(["lvl1", "lvl2"])
In [411]: df.to_excel("path_to_file.xlsx")
In [412]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1])
In [413]: df
Out[413]:
a b
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
If the source file has both MultiIndex index and columns, lists specifying each
should be passed to index_col and header:
In [414]: df.columns = pd.MultiIndex.from_product([["a"], ["b", "d"]], names=["c1", "c2"])
In [415]: df.to_excel("path_to_file.xlsx")
In [416]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1], header=[0, 1])
In [417]: df
Out[417]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
Missing values in columns specified in index_col will be forward filled to
allow roundtripping with to_excel for merged_cells=True. To avoid forward
filling the missing values, use set_index after reading the data instead of
index_col.
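As a minimal sketch (with hypothetical columns lvl1 and lvl2 holding the index labels, and no MultiIndex columns in the file):
# Sketch only: read without index_col, then build the MultiIndex afterwards so
# the missing label cells are not forward filled.
df = pd.read_excel("path_to_file.xlsx")
df = df.set_index(["lvl1", "lvl2"])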
Parsing specific columns#
It is often the case that users will insert columns to do temporary computations
in Excel and you may not want to read in those columns. read_excel takes
a usecols keyword to allow you to specify a subset of columns to parse.
Changed in version 1.0.0.
Passing in an integer for usecols will no longer work. Please pass in a list
of ints from 0 to usecols inclusive instead.
You can specify a comma-delimited set of Excel columns and ranges as a string:
pd.read_excel("path_to_file.xls", "Sheet1", usecols="A,C:E")
If usecols is a list of integers, then it is assumed to be the file column
indices to be parsed.
pd.read_excel("path_to_file.xls", "Sheet1", usecols=[0, 2, 3])
Element order is ignored, so usecols=[0, 1] is the same as [1, 0].
If usecols is a list of strings, it is assumed that each string corresponds
to a column name provided either by the user in names or inferred from the
document header row(s). Those strings define which columns will be parsed:
pd.read_excel("path_to_file.xls", "Sheet1", usecols=["foo", "bar"])
Element order is ignored, so usecols=['baz', 'joe'] is the same as ['joe', 'baz'].
If usecols is callable, the callable function will be evaluated against
the column names, returning names where the callable function evaluates to True.
pd.read_excel("path_to_file.xls", "Sheet1", usecols=lambda x: x.isalpha())
Parsing dates#
Datetime-like values are normally automatically converted to the appropriate
dtype when reading the excel file. But if you have a column of strings that
look like dates (but are not actually formatted as dates in excel), you can
use the parse_dates keyword to parse those strings to datetimes:
pd.read_excel("path_to_file.xls", "Sheet1", parse_dates=["date_strings"])
Cell converters#
It is possible to transform the contents of Excel cells via the converters
option. For instance, to convert a column to boolean:
pd.read_excel("path_to_file.xls", "Sheet1", converters={"MyBools": bool})
This option handles missing values and treats exceptions in the converters
as missing data. Transformations are applied cell by cell rather than to the
column as a whole, so the array dtype is not guaranteed. For instance, a
column of integers with missing values cannot be transformed to an array
with integer dtype, because NaN is strictly a float. You can manually mask
missing data to recover integer dtype:
def cfun(x):
return int(x) if x else -1
pd.read_excel("path_to_file.xls", "Sheet1", converters={"MyInts": cfun})
Dtype specifications#
As an alternative to converters, the type for an entire column can
be specified using the dtype keyword, which takes a dictionary
mapping column names to types. To interpret data with
no type inference, use the type str or object.
pd.read_excel("path_to_file.xls", dtype={"MyInts": "int64", "MyText": str})
Writing Excel files#
Writing Excel files to disk#
To write a DataFrame object to a sheet of an Excel file, you can use the
to_excel instance method. The arguments are largely the same as to_csv
described above, the first argument being the name of the excel file, and the
optional second argument the name of the sheet to which the DataFrame should be
written. For example:
df.to_excel("path_to_file.xlsx", sheet_name="Sheet1")
Files with a .xls extension will be written using xlwt and those with a
.xlsx extension will be written using xlsxwriter (if available) or
openpyxl.
The DataFrame will be written in a way that tries to mimic the REPL output.
The index_label will be placed in the second
row instead of the first. You can place it in the first row by setting the
merge_cells option in to_excel() to False:
df.to_excel("path_to_file.xlsx", index_label="label", merge_cells=False)
In order to write separate DataFrames to separate sheets in a single Excel file,
one can pass an ExcelWriter.
with pd.ExcelWriter("path_to_file.xlsx") as writer:
df1.to_excel(writer, sheet_name="Sheet1")
df2.to_excel(writer, sheet_name="Sheet2")
Writing Excel files to memory#
pandas supports writing Excel files to buffer-like objects such as StringIO or
BytesIO using ExcelWriter.
from io import BytesIO
bio = BytesIO()
# By setting the 'engine' in the ExcelWriter constructor.
writer = pd.ExcelWriter(bio, engine="xlsxwriter")
df.to_excel(writer, sheet_name="Sheet1")
# Save the workbook
writer.save()
# Seek to the beginning and read to copy the workbook to a variable in memory
bio.seek(0)
workbook = bio.read()
Note
engine is optional but recommended. Setting the engine determines
the version of workbook produced. Setting engine='xlwt' will produce an
Excel 2003-format workbook (xls). Using either 'openpyxl' or
'xlsxwriter' will produce an Excel 2007-format workbook (xlsx). If
omitted, an Excel 2007-formatted workbook is produced.
Excel writer engines#
Deprecated since version 1.2.0: As the xlwt package is no longer
maintained, the xlwt engine will be removed from a future version
of pandas. This is the only engine in pandas that supports writing to
.xls files.
pandas chooses an Excel writer via two methods:
the engine keyword argument
the filename extension (via the default specified in config options)
By default, pandas uses the XlsxWriter for .xlsx, openpyxl
for .xlsm, and xlwt for .xls files. If you have multiple
engines installed, you can set the default engine through setting the
config options io.excel.xlsx.writer and
io.excel.xls.writer. pandas will fall back on openpyxl for .xlsx
files if Xlsxwriter is not available.
To specify which writer you want to use, you can pass an engine keyword
argument to to_excel and to ExcelWriter. The built-in engines are:
openpyxl: version 2.4 or higher is required
xlsxwriter
xlwt
# By setting the 'engine' in the DataFrame 'to_excel()' methods.
df.to_excel("path_to_file.xlsx", sheet_name="Sheet1", engine="xlsxwriter")
# By setting the 'engine' in the ExcelWriter constructor.
writer = pd.ExcelWriter("path_to_file.xlsx", engine="xlsxwriter")
# Or via pandas configuration.
from pandas import options # noqa: E402
options.io.excel.xlsx.writer = "xlsxwriter"
df.to_excel("path_to_file.xlsx", sheet_name="Sheet1")
Style and formatting#
The look and feel of Excel worksheets created from pandas can be modified using the following parameters on the DataFrame’s to_excel method.
float_format : Format string for floating point numbers (default None).
freeze_panes : A tuple of two integers representing the bottommost row and rightmost column to freeze. Each of these parameters is one-based, so (1, 1) will freeze the first row and first column (default None).
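For example, to write two-decimal floats and freeze the header row and first column:
df.to_excel(
    "path_to_file.xlsx",
    sheet_name="Sheet1",
    float_format="%.2f",
    freeze_panes=(1, 1),
)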
Using the Xlsxwriter engine provides many options for controlling the
format of an Excel worksheet created with the to_excel method. Excellent examples can be found in the
Xlsxwriter documentation here: https://xlsxwriter.readthedocs.io/working_with_pandas.html
OpenDocument Spreadsheets#
New in version 0.25.
The read_excel() method can also read OpenDocument spreadsheets
using the odfpy module. The semantics and features for reading
OpenDocument spreadsheets match what can be done for Excel files using
engine='odf'.
# Returns a DataFrame
pd.read_excel("path_to_file.ods", engine="odf")
Note
Currently pandas only supports reading OpenDocument spreadsheets. Writing
is not implemented.
Binary Excel (.xlsb) files#
New in version 1.0.0.
The read_excel() method can also read binary Excel files
using the pyxlsb module. The semantics and features for reading
binary Excel files mostly match what can be done for Excel files using
engine='pyxlsb'. pyxlsb does not recognize datetime types
in files and will return floats instead.
# Returns a DataFrame
pd.read_excel("path_to_file.xlsb", engine="pyxlsb")
Note
Currently pandas only supports reading binary Excel files. Writing
is not implemented.
Clipboard#
A handy way to grab data is to use the read_clipboard() method,
which takes the contents of the clipboard buffer and passes them to the
read_csv method. For instance, you can copy the following text to the
clipboard (CTRL-C on many operating systems):
A B C
x 1 4 p
y 2 5 q
z 3 6 r
And then import the data directly to a DataFrame by calling:
>>> clipdf = pd.read_clipboard()
>>> clipdf
A B C
x 1 4 p
y 2 5 q
z 3 6 r
The to_clipboard method can be used to write the contents of a DataFrame to
the clipboard. Following which you can paste the clipboard contents into other
applications (CTRL-V on many operating systems). Here we illustrate writing a
DataFrame into clipboard and reading it back.
>>> df = pd.DataFrame(
... {"A": [1, 2, 3], "B": [4, 5, 6], "C": ["p", "q", "r"]}, index=["x", "y", "z"]
... )
>>> df
A B C
x 1 4 p
y 2 5 q
z 3 6 r
>>> df.to_clipboard()
>>> pd.read_clipboard()
A B C
x 1 4 p
y 2 5 q
z 3 6 r
We can see that we got the same content back, which we had earlier written to the clipboard.
Note
You may need to install xclip or xsel (with PyQt5, PyQt4 or qtpy) on Linux to use these methods.
Pickling#
All pandas objects are equipped with to_pickle methods which use Python’s
pickle module to save data structures to disk using the pickle format.
In [418]: df
Out[418]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
In [419]: df.to_pickle("foo.pkl")
The read_pickle function in the pandas namespace can be used to load
any pickled pandas object (or any other pickled object) from file:
In [420]: pd.read_pickle("foo.pkl")
Out[420]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
Warning
Loading pickled data received from untrusted sources can be unsafe.
See: https://docs.python.org/3/library/pickle.html
Warning
read_pickle() is only guaranteed backwards compatible back to pandas version 0.20.3
Compressed pickle files#
read_pickle(), DataFrame.to_pickle() and Series.to_pickle() can read
and write compressed pickle files. The compression types of gzip, bz2, xz, zstd are supported for reading and writing.
The zip file format only supports reading and must contain only one data file
to be read.
The compression type can be an explicit parameter or be inferred from the file extension.
If ‘infer’, then use gzip, bz2, zip, xz, zstd if filename ends in '.gz', '.bz2', '.zip',
'.xz', or '.zst', respectively.
The compression parameter can also be a dict in order to pass options to the
compression protocol. It must have a 'method' key set to the name
of the compression protocol, which must be one of
{'zip', 'gzip', 'bz2', 'xz', 'zstd'}. All other key-value pairs are passed to
the underlying compression library.
In [421]: df = pd.DataFrame(
.....: {
.....: "A": np.random.randn(1000),
.....: "B": "foo",
.....: "C": pd.date_range("20130101", periods=1000, freq="s"),
.....: }
.....: )
.....:
In [422]: df
Out[422]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
Using an explicit compression type:
In [423]: df.to_pickle("data.pkl.compress", compression="gzip")
In [424]: rt = pd.read_pickle("data.pkl.compress", compression="gzip")
In [425]: rt
Out[425]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
Inferring compression type from the extension:
In [426]: df.to_pickle("data.pkl.xz", compression="infer")
In [427]: rt = pd.read_pickle("data.pkl.xz", compression="infer")
In [428]: rt
Out[428]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
The default is to ‘infer’:
In [429]: df.to_pickle("data.pkl.gz")
In [430]: rt = pd.read_pickle("data.pkl.gz")
In [431]: rt
Out[431]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
In [432]: df["A"].to_pickle("s1.pkl.bz2")
In [433]: rt = pd.read_pickle("s1.pkl.bz2")
In [434]: rt
Out[434]:
0 -0.828876
1 -0.110383
2 2.357598
3 -1.620073
4 0.440903
...
995 -1.177365
996 1.236988
997 0.743946
998 -0.533097
999 -0.140850
Name: A, Length: 1000, dtype: float64
Passing options to the compression protocol in order to speed up compression:
In [435]: df.to_pickle("data.pkl.gz", compression={"method": "gzip", "compresslevel": 1})
msgpack#
pandas support for msgpack has been removed in version 1.0.0. It is
recommended to use pickle instead.
Alternatively, you can also use the Arrow IPC serialization format for on-the-wire
transmission of pandas objects. For documentation on pyarrow, see
here.
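A minimal sketch of that route (assuming pyarrow is installed; this uses the pyarrow API directly, not a pandas method):
import pyarrow as pa

# Serialize a DataFrame to the Arrow IPC stream format and read it back.
table = pa.Table.from_pandas(df)
sink = pa.BufferOutputStream()
with pa.ipc.new_stream(sink, table.schema) as writer:
    writer.write_table(table)
buf = sink.getvalue()
roundtrip = pa.ipc.open_stream(buf).read_all().to_pandas()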
HDF5 (PyTables)#
HDFStore is a dict-like object which reads and writes pandas objects using
the high performance HDF5 format via the excellent PyTables library. See the cookbook
for some advanced strategies.
Warning
pandas uses PyTables for reading and writing HDF5 files, which allows
serializing object-dtype data with pickle. Loading pickled data received from
untrusted sources can be unsafe.
See: https://docs.python.org/3/library/pickle.html for more.
In [436]: store = pd.HDFStore("store.h5")
In [437]: print(store)
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
Objects can be written to the file just like adding key-value pairs to a
dict:
In [438]: index = pd.date_range("1/1/2000", periods=8)
In [439]: s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])
In [440]: df = pd.DataFrame(np.random.randn(8, 3), index=index, columns=["A", "B", "C"])
# store.put('s', s) is an equivalent method
In [441]: store["s"] = s
In [442]: store["df"] = df
In [443]: store
Out[443]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In a current or later Python session, you can retrieve stored objects:
# store.get('df') is an equivalent method
In [444]: store["df"]
Out[444]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
# dotted (attribute) access provides get as well
In [445]: store.df
Out[445]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Deletion of the object specified by the key:
# store.remove('df') is an equivalent method
In [446]: del store["df"]
In [447]: store
Out[447]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
Closing a Store and using a context manager:
In [448]: store.close()
In [449]: store
Out[449]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In [450]: store.is_open
Out[450]: False
# Working with, and automatically closing the store using a context manager
In [451]: with pd.HDFStore("store.h5") as store:
.....: store.keys()
.....:
Read/write API#
HDFStore supports a top-level API using read_hdf for reading and to_hdf for writing,
similar to how read_csv and to_csv work.
In [452]: df_tl = pd.DataFrame({"A": list(range(5)), "B": list(range(5))})
In [453]: df_tl.to_hdf("store_tl.h5", "table", append=True)
In [454]: pd.read_hdf("store_tl.h5", "table", where=["index>2"])
Out[454]:
A B
3 3 3
4 4 4
HDFStore will by default not drop rows that are all missing. This behavior can be changed by setting dropna=True.
In [455]: df_with_missing = pd.DataFrame(
.....: {
.....: "col1": [0, np.nan, 2],
.....: "col2": [1, np.nan, np.nan],
.....: }
.....: )
.....:
In [456]: df_with_missing
Out[456]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN
In [457]: df_with_missing.to_hdf("file.h5", "df_with_missing", format="table", mode="w")
In [458]: pd.read_hdf("file.h5", "df_with_missing")
Out[458]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN
In [459]: df_with_missing.to_hdf(
.....: "file.h5", "df_with_missing", format="table", mode="w", dropna=True
.....: )
.....:
In [460]: pd.read_hdf("file.h5", "df_with_missing")
Out[460]:
col1 col2
0 0.0 1.0
2 2.0 NaN
Fixed format#
The examples above show storing using put, which writes the HDF5 to PyTables in a fixed array format, called
the fixed format. These types of stores are not appendable once written (though you can simply
remove them and rewrite). Nor are they queryable; they must be
retrieved in their entirety. They also do not support dataframes with non-unique column names.
The fixed format stores offer very fast writing and slightly faster reading than table stores.
This format is specified by default when using put or to_hdf or by format='fixed' or format='f'.
Warning
A fixed format will raise a TypeError if you try to retrieve using a where:
>>> pd.DataFrame(np.random.randn(10, 2)).to_hdf("test_fixed.h5", "df")
>>> pd.read_hdf("test_fixed.h5", "df", where="index>5")
TypeError: cannot pass a where specification when reading a fixed format.
this store must be selected in its entirety
Table format#
HDFStore supports another PyTables format on disk, the table
format. Conceptually a table is shaped very much like a DataFrame,
with rows and columns. A table may be appended to in the same or
other sessions. In addition, delete and query type operations are
supported. This format is specified by format='table' or format='t'
to append or put or to_hdf.
This format can also be set as an option, pd.set_option('io.hdf.default_format', 'table'), to
make put/append/to_hdf store in the table format by default.
In [461]: store = pd.HDFStore("store.h5")
In [462]: df1 = df[0:4]
In [463]: df2 = df[4:]
# append data (creates a table automatically)
In [464]: store.append("df", df1)
In [465]: store.append("df", df2)
In [466]: store
Out[466]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
# select the entire object
In [467]: store.select("df")
Out[467]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
# the type of stored data
In [468]: store.root.df._v_attrs.pandas_type
Out[468]: 'frame_table'
Note
You can also create a table by passing format='table' or format='t' to a put operation.
Hierarchical keys#
Keys to a store can be specified as a string. These can be in a
hierarchical path-name like format (e.g. foo/bar/bah), which will
generate a hierarchy of sub-stores (or Groups in PyTables
parlance). Keys can be specified without the leading ‘/’ and are always
absolute (e.g. ‘foo’ refers to ‘/foo’). Removal operations can remove
everything in the sub-store and below, so be careful.
In [469]: store.put("foo/bar/bah", df)
In [470]: store.append("food/orange", df)
In [471]: store.append("food/apple", df)
In [472]: store
Out[472]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
# a list of keys are returned
In [473]: store.keys()
Out[473]: ['/df', '/food/apple', '/food/orange', '/foo/bar/bah']
# remove all nodes under this level
In [474]: store.remove("food")
In [475]: store
Out[475]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
You can walk through the group hierarchy using the walk method which
will yield a tuple for each group key along with the relative keys of its contents.
In [476]: for (path, subgroups, subkeys) in store.walk():
.....: for subgroup in subgroups:
.....: print("GROUP: {}/{}".format(path, subgroup))
.....: for subkey in subkeys:
.....: key = "/".join([path, subkey])
.....: print("KEY: {}".format(key))
.....: print(store.get(key))
.....:
GROUP: /foo
KEY: /df
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
GROUP: /foo/bar
KEY: /foo/bar/bah
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Warning
Hierarchical keys cannot be retrieved as dotted (attribute) access as described above for items stored under the root node.
In [8]: store.foo.bar.bah
AttributeError: 'HDFStore' object has no attribute 'foo'
# you can directly access the actual PyTables node but using the root node
In [9]: store.root.foo.bar.bah
Out[9]:
/foo/bar/bah (Group) ''
children := ['block0_items' (Array), 'block0_values' (Array), 'axis0' (Array), 'axis1' (Array)]
Instead, use explicit string based keys:
In [477]: store["foo/bar/bah"]
Out[477]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Storing types#
Storing mixed types in a table#
Storing mixed-dtype data is supported. Strings are stored as a
fixed-width using the maximum size of the appended column. Subsequent attempts
at appending longer strings will raise a ValueError.
Passing min_itemsize={`values`: size} as a parameter to append
will set a larger minimum for the string columns. Storing floats,
strings, ints, bools, datetime64 are currently supported. For string
columns, passing nan_rep = 'nan' to append will change the default
nan representation on disk (which converts to/from np.nan); this
defaults to nan.
In [478]: df_mixed = pd.DataFrame(
.....: {
.....: "A": np.random.randn(8),
.....: "B": np.random.randn(8),
.....: "C": np.array(np.random.randn(8), dtype="float32"),
.....: "string": "string",
.....: "int": 1,
.....: "bool": True,
.....: "datetime64": pd.Timestamp("20010102"),
.....: },
.....: index=list(range(8)),
.....: )
.....:
In [479]: df_mixed.loc[df_mixed.index[3:5], ["A", "B", "string", "datetime64"]] = np.nan
In [480]: store.append("df_mixed", df_mixed, min_itemsize={"values": 50})
In [481]: df_mixed1 = store.select("df_mixed")
In [482]: df_mixed1
Out[482]:
A B C string int bool datetime64
0 1.778161 -0.898283 -0.263043 string 1 True 2001-01-02
1 -0.913867 -0.218499 -0.639244 string 1 True 2001-01-02
2 -0.030004 1.408028 -0.866305 string 1 True 2001-01-02
3 NaN NaN -0.225250 NaN 1 True NaT
4 NaN NaN -0.890978 NaN 1 True NaT
5 0.081323 0.520995 -0.553839 string 1 True 2001-01-02
6 -0.268494 0.620028 -2.762875 string 1 True 2001-01-02
7 0.168016 0.159416 -1.244763 string 1 True 2001-01-02
In [483]: df_mixed1.dtypes.value_counts()
Out[483]:
float64 2
float32 1
object 1
int64 1
bool 1
datetime64[ns] 1
dtype: int64
# we have provided a minimum string column size
In [484]: store.root.df_mixed.table
Out[484]:
/df_mixed/table (Table(8,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(2,), dflt=0.0, pos=1),
"values_block_1": Float32Col(shape=(1,), dflt=0.0, pos=2),
"values_block_2": StringCol(itemsize=50, shape=(1,), dflt=b'', pos=3),
"values_block_3": Int64Col(shape=(1,), dflt=0, pos=4),
"values_block_4": BoolCol(shape=(1,), dflt=False, pos=5),
"values_block_5": Int64Col(shape=(1,), dflt=0, pos=6)}
byteorder := 'little'
chunkshape := (689,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False}
Storing MultiIndex DataFrames#
Storing MultiIndex DataFrames as tables is very similar to
storing/selecting from homogeneous index DataFrames.
In [485]: index = pd.MultiIndex(
.....: levels=[["foo", "bar", "baz", "qux"], ["one", "two", "three"]],
.....: codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
.....: names=["foo", "bar"],
.....: )
.....:
In [486]: df_mi = pd.DataFrame(np.random.randn(10, 3), index=index, columns=["A", "B", "C"])
In [487]: df_mi
Out[487]:
A B C
foo bar
foo one -1.280289 0.692545 -0.536722
two 1.005707 0.296917 0.139796
three -1.083889 0.811865 1.648435
bar one -0.164377 -0.402227 1.618922
two -1.424723 -0.023232 0.948196
baz two 0.183573 0.145277 0.308146
three -1.043530 -0.708145 1.430905
qux one -0.850136 0.813949 1.508891
two -1.556154 0.187597 1.176488
three -1.246093 -0.002726 -0.444249
In [488]: store.append("df_mi", df_mi)
In [489]: store.select("df_mi")
Out[489]:
A B C
foo bar
foo one -1.280289 0.692545 -0.536722
two 1.005707 0.296917 0.139796
three -1.083889 0.811865 1.648435
bar one -0.164377 -0.402227 1.618922
two -1.424723 -0.023232 0.948196
baz two 0.183573 0.145277 0.308146
three -1.043530 -0.708145 1.430905
qux one -0.850136 0.813949 1.508891
two -1.556154 0.187597 1.176488
three -1.246093 -0.002726 -0.444249
# the levels are automatically included as data columns
In [490]: store.select("df_mi", "foo=bar")
Out[490]:
A B C
foo bar
bar one -0.164377 -0.402227 1.618922
two -1.424723 -0.023232 0.948196
Note
The index keyword is reserved and cannot be used as a level name.
Querying#
Querying a table#
select and delete operations have an optional criterion that can
be specified to select/delete only a subset of the data. This allows one
to have a very large on-disk table and retrieve only a portion of the
data.
A query is specified using the Term class under the hood, as a boolean expression.
index and columns are supported indexers of DataFrames.
If data_columns are specified, these can be used as additional indexers.
A level name in a MultiIndex, with default name level_0, level_1, … if not provided, can also be used as an indexer.
Valid comparison operators are:
=, ==, !=, >, >=, <, <=
Valid boolean expressions are combined with:
| : or
& : and
( and ) : for grouping
These rules are similar to how boolean expressions are used in pandas for indexing.
Note
= will be automatically expanded to the comparison operator ==
~ is the not operator, but can only be used in very limited
circumstances
If a list/tuple of expressions is passed they will be combined via &
The following are valid expressions:
'index >= date'
"columns = ['A', 'D']"
"columns in ['A', 'D']"
'columns = A'
'columns == A'
"~(columns = ['A', 'B'])"
'index > df.index[3] & string = "bar"'
'(index > df.index[3] & index <= df.index[6]) | string = "bar"'
"ts >= Timestamp('2012-02-01')"
"major_axis>=20130101"
The indexers are on the left-hand side of the sub-expression:
columns, major_axis, ts
The right-hand side of the sub-expression (after a comparison operator) can be:
functions that will be evaluated, e.g. Timestamp('2012-02-01')
strings, e.g. "bar"
date-like, e.g. 20130101, or "20130101"
lists, e.g. "['A', 'B']"
variables that are defined in the local names space, e.g. date
Note
Passing a string to a query by interpolating it into the query
expression is not recommended. Simply assign the string of interest to a
variable and use that variable in an expression. For example, do this
string = "HolyMoly'"
store.select("df", "index == string")
instead of this
string = "HolyMoly'"
store.select('df', f'index == {string}')
The latter will not work and will raise a SyntaxError. Note that
there’s a single quote followed by a double quote in the string
variable.
If you must interpolate, use the '%r' format specifier
store.select("df", "index == %r" % string)
which will quote string.
Here are some examples:
In [491]: dfq = pd.DataFrame(
.....: np.random.randn(10, 4),
.....: columns=list("ABCD"),
.....: index=pd.date_range("20130101", periods=10),
.....: )
.....:
In [492]: store.append("dfq", dfq, format="table", data_columns=True)
Use boolean expressions, with in-line function evaluation.
In [493]: store.select("dfq", "index>pd.Timestamp('20130104') & columns=['A', 'B']")
Out[493]:
A B
2013-01-05 1.366810 1.073372
2013-01-06 2.119746 -2.628174
2013-01-07 0.337920 -0.634027
2013-01-08 1.053434 1.109090
2013-01-09 -0.772942 -0.269415
2013-01-10 0.048562 -0.285920
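As noted above, a list of expressions is combined via &; the following hedged sketch is equivalent to the query just shown:
store.select("dfq", ["index>pd.Timestamp('20130104')", "columns=['A', 'B']"])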
Use inline column reference.
In [494]: store.select("dfq", where="A>0 or C>0")
Out[494]:
A B C D
2013-01-01 0.856838 1.491776 0.001283 0.701816
2013-01-02 -1.097917 0.102588 0.661740 0.443531
2013-01-03 0.559313 -0.459055 -1.222598 -0.455304
2013-01-05 1.366810 1.073372 -0.994957 0.755314
2013-01-06 2.119746 -2.628174 -0.089460 -0.133636
2013-01-07 0.337920 -0.634027 0.421107 0.604303
2013-01-08 1.053434 1.109090 -0.367891 -0.846206
2013-01-10 0.048562 -0.285920 1.334100 0.194462
The columns keyword can be supplied to select a list of columns to be
returned; this is equivalent to passing
'columns=list_of_columns_to_filter':
In [495]: store.select("df", "columns=['A', 'B']")
Out[495]:
A B
2000-01-01 -0.398501 -0.677311
2000-01-02 -1.167564 -0.593353
2000-01-03 -0.131959 0.089012
2000-01-04 0.169405 -1.358046
2000-01-05 0.492195 0.076693
2000-01-06 -0.285283 -1.210529
2000-01-07 0.941577 -0.342447
2000-01-08 0.052607 2.093214
start and stop parameters can be specified to limit the total search
space. These are in terms of the total number of rows in a table.
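For example, a hedged sketch restricting the search to the first five rows of the stored df table:
store.select("df", "columns=['A', 'B']", start=0, stop=5)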
Note
select will raise a ValueError if the query expression has an unknown
variable reference. Usually this means that you are trying to select on a column
that is not a data_column.
select will raise a SyntaxError if the query expression is not valid.
Query timedelta64[ns]#
You can store and query using the timedelta64[ns] type. Terms can be
specified in the format: <float>(<unit>), where float may be signed (and fractional), and unit can be
D,s,ms,us,ns for the timedelta. Here’s an example:
In [496]: from datetime import timedelta
In [497]: dftd = pd.DataFrame(
.....: {
.....: "A": pd.Timestamp("20130101"),
.....: "B": [
.....: pd.Timestamp("20130101") + timedelta(days=i, seconds=10)
.....: for i in range(10)
.....: ],
.....: }
.....: )
.....:
In [498]: dftd["C"] = dftd["A"] - dftd["B"]
In [499]: dftd
Out[499]:
A B C
0 2013-01-01 2013-01-01 00:00:10 -1 days +23:59:50
1 2013-01-01 2013-01-02 00:00:10 -2 days +23:59:50
2 2013-01-01 2013-01-03 00:00:10 -3 days +23:59:50
3 2013-01-01 2013-01-04 00:00:10 -4 days +23:59:50
4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50
5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50
6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50
7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50
8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50
9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50
In [500]: store.append("dftd", dftd, data_columns=True)
In [501]: store.select("dftd", "C<'-3.5D'")
Out[501]:
A B C
4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50
5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50
6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50
7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50
8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50
9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50
Query MultiIndex#
Selecting from a MultiIndex can be achieved by using the name of the level.
In [502]: df_mi.index.names
Out[502]: FrozenList(['foo', 'bar'])
In [503]: store.select("df_mi", "foo=baz and bar=two")
Out[503]:
A B C
foo bar
baz two 0.183573 0.145277 0.308146
If the MultiIndex levels names are None, the levels are automatically made available via
the level_n keyword with n the level of the MultiIndex you want to select from.
In [504]: index = pd.MultiIndex(
.....: levels=[["foo", "bar", "baz", "qux"], ["one", "two", "three"]],
.....: codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
.....: )
.....:
In [505]: df_mi_2 = pd.DataFrame(np.random.randn(10, 3), index=index, columns=["A", "B", "C"])
In [506]: df_mi_2
Out[506]:
A B C
foo one -0.646538 1.210676 -0.315409
two 1.528366 0.376542 0.174490
three 1.247943 -0.742283 0.710400
bar one 0.434128 -1.246384 1.139595
two 1.388668 -0.413554 -0.666287
baz two 0.010150 -0.163820 -0.115305
three 0.216467 0.633720 0.473945
qux one -0.155446 1.287082 0.320201
two -1.256989 0.874920 0.765944
three 0.025557 -0.729782 -0.127439
In [507]: store.append("df_mi_2", df_mi_2)
# the levels are automatically included as data columns with keyword level_n
In [508]: store.select("df_mi_2", "level_0=foo and level_1=two")
Out[508]:
A B C
foo two 1.528366 0.376542 0.17449
Indexing#
You can create/modify an index for a table with create_table_index
after data is already in the table (after an append/put
operation). Creating a table index is highly encouraged. This will
speed your queries a great deal when you use a select with the
indexed dimension as the where.
Note
Indexes are automagically created on the indexables
and any data columns you specify. This behavior can be turned off by passing
index=False to append.
# we have automagically already created an index (in the first section)
In [509]: i = store.root.df.table.cols.index.index
In [510]: i.optlevel, i.kind
Out[510]: (6, 'medium')
# change an index by passing new parameters
In [511]: store.create_table_index("df", optlevel=9, kind="full")
In [512]: i = store.root.df.table.cols.index.index
In [513]: i.optlevel, i.kind
Out[513]: (9, 'full')
Oftentimes when appending large amounts of data to a store, it is useful to turn off index creation for each append, then recreate at the end.
In [514]: df_1 = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
In [515]: df_2 = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
In [516]: st = pd.HDFStore("appends.h5", mode="w")
In [517]: st.append("df", df_1, data_columns=["B"], index=False)
In [518]: st.append("df", df_2, data_columns=["B"], index=False)
In [519]: st.get_storer("df").table
Out[519]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)
Then create the index when finished appending.
In [520]: st.create_table_index("df", columns=["B"], optlevel=9, kind="full")
In [521]: st.get_storer("df").table
Out[521]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)
autoindex := True
colindexes := {
"B": Index(9, fullshuffle, zlib(1)).is_csi=True}
In [522]: st.close()
See here for how to create a completely-sorted-index (CSI) on an existing store.
Query via data columns#
You can designate (and index) certain columns that you want to be able
to perform queries on (other than the indexable columns, which you can
always query). For instance say you want to perform this common
operation, on-disk, and return just the frame that matches this
query. You can specify data_columns = True to force all columns to
be data_columns.
In [523]: df_dc = df.copy()
In [524]: df_dc["string"] = "foo"
In [525]: df_dc.loc[df_dc.index[4:6], "string"] = np.nan
In [526]: df_dc.loc[df_dc.index[7:9], "string"] = "bar"
In [527]: df_dc["string2"] = "cool"
In [528]: df_dc.loc[df_dc.index[1:3], ["B", "C"]] = 1.0
In [529]: df_dc
Out[529]:
A B C string string2
2000-01-01 -0.398501 -0.677311 -0.874991 foo cool
2000-01-02 -1.167564 1.000000 1.000000 foo cool
2000-01-03 -0.131959 1.000000 1.000000 foo cool
2000-01-04 0.169405 -1.358046 -0.105563 foo cool
2000-01-05 0.492195 0.076693 0.213685 NaN cool
2000-01-06 -0.285283 -1.210529 -1.408386 NaN cool
2000-01-07 0.941577 -0.342447 0.222031 foo cool
2000-01-08 0.052607 2.093214 1.064908 bar cool
# on-disk operations
In [530]: store.append("df_dc", df_dc, data_columns=["B", "C", "string", "string2"])
In [531]: store.select("df_dc", where="B > 0")
Out[531]:
A B C string string2
2000-01-02 -1.167564 1.000000 1.000000 foo cool
2000-01-03 -0.131959 1.000000 1.000000 foo cool
2000-01-05 0.492195 0.076693 0.213685 NaN cool
2000-01-08 0.052607 2.093214 1.064908 bar cool
# getting creative
In [532]: store.select("df_dc", "B > 0 & C > 0 & string == foo")
Out[532]:
A B C string string2
2000-01-02 -1.167564 1.0 1.0 foo cool
2000-01-03 -0.131959 1.0 1.0 foo cool
# this is in-memory version of this type of selection
In [533]: df_dc[(df_dc.B > 0) & (df_dc.C > 0) & (df_dc.string == "foo")]
Out[533]:
A B C string string2
2000-01-02 -1.167564 1.0 1.0 foo cool
2000-01-03 -0.131959 1.0 1.0 foo cool
# we have automagically created this index and the B/C/string/string2
# columns are stored separately as ``PyTables`` columns
In [534]: store.root.df_dc.table
Out[534]:
/df_dc/table (Table(8,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2),
"C": Float64Col(shape=(), dflt=0.0, pos=3),
"string": StringCol(itemsize=3, shape=(), dflt=b'', pos=4),
"string2": StringCol(itemsize=4, shape=(), dflt=b'', pos=5)}
byteorder := 'little'
chunkshape := (1680,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"B": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"C": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"string": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"string2": Index(6, mediumshuffle, zlib(1)).is_csi=False}
There is some performance degradation by making lots of columns into
data columns, so it is up to the user to designate these. In addition,
you cannot change data columns (nor indexables) after the first
append/put operation (Of course you can simply read in the data and
create a new table!).
Iterator#
You can pass iterator=True or chunksize=number_in_a_chunk
to select and select_as_multiple to return an iterator on the results.
The default is 50,000 rows returned in a chunk.
In [535]: for df in store.select("df", chunksize=3):
.....: print(df)
.....:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
A B C
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
A B C
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Note
You can also use the iterator with read_hdf which will open, then
automatically close the store when finished iterating.
for df in pd.read_hdf("store.h5", "df", chunksize=3):
print(df)
Note that the chunksize keyword applies to the source rows. So if you
are doing a query, then the chunksize will subdivide the total rows in the table
with the query applied, returning an iterator over potentially unequal sized chunks.
Here is a recipe for generating a query and using it to create equal sized return
chunks.
In [536]: dfeq = pd.DataFrame({"number": np.arange(1, 11)})
In [537]: dfeq
Out[537]:
number
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 10
In [538]: store.append("dfeq", dfeq, data_columns=["number"])
In [539]: def chunks(l, n):
.....: return [l[i: i + n] for i in range(0, len(l), n)]
.....:
In [540]: evens = [2, 4, 6, 8, 10]
In [541]: coordinates = store.select_as_coordinates("dfeq", "number=evens")
In [542]: for c in chunks(coordinates, 2):
.....: print(store.select("dfeq", where=c))
.....:
number
1 2
3 4
number
5 6
7 8
number
9 10
Advanced queries#
Select a single column#
To retrieve a single indexable or data column, use the
method select_column. This will, for example, enable you to get the index
very quickly. These return a Series of the result, indexed by the row number.
These do not currently accept the where selector.
In [543]: store.select_column("df_dc", "index")
Out[543]:
0 2000-01-01
1 2000-01-02
2 2000-01-03
3 2000-01-04
4 2000-01-05
5 2000-01-06
6 2000-01-07
7 2000-01-08
Name: index, dtype: datetime64[ns]
In [544]: store.select_column("df_dc", "string")
Out[544]:
0 foo
1 foo
2 foo
3 foo
4 NaN
5 NaN
6 foo
7 bar
Name: string, dtype: object
Selecting coordinates#
Sometimes you want to get the coordinates (a.k.a. the index locations) of your query. This returns an
Int64Index of the resulting locations. These coordinates can also be passed to subsequent
where operations.
In [545]: df_coord = pd.DataFrame(
.....: np.random.randn(1000, 2), index=pd.date_range("20000101", periods=1000)
.....: )
.....:
In [546]: store.append("df_coord", df_coord)
In [547]: c = store.select_as_coordinates("df_coord", "index > 20020101")
In [548]: c
Out[548]:
Int64Index([732, 733, 734, 735, 736, 737, 738, 739, 740, 741,
...
990, 991, 992, 993, 994, 995, 996, 997, 998, 999],
dtype='int64', length=268)
In [549]: store.select("df_coord", where=c)
Out[549]:
0 1
2002-01-02 0.009035 0.921784
2002-01-03 -1.476563 -1.376375
2002-01-04 1.266731 2.173681
2002-01-05 0.147621 0.616468
2002-01-06 0.008611 2.136001
... ... ...
2002-09-22 0.781169 -0.791687
2002-09-23 -0.764810 -2.000933
2002-09-24 -0.345662 0.393915
2002-09-25 -0.116661 0.834638
2002-09-26 -1.341780 0.686366
[268 rows x 2 columns]
Selecting using a where mask#
Sometimes your query can involve creating a list of rows to select. Usually this mask would
be a resulting index from an indexing operation. This example selects the rows of
a DatetimeIndex whose month is 5.
In [550]: df_mask = pd.DataFrame(
.....: np.random.randn(1000, 2), index=pd.date_range("20000101", periods=1000)
.....: )
.....:
In [551]: store.append("df_mask", df_mask)
In [552]: c = store.select_column("df_mask", "index")
In [553]: where = c[pd.DatetimeIndex(c).month == 5].index
In [554]: store.select("df_mask", where=where)
Out[554]:
0 1
2000-05-01 -0.386742 -0.977433
2000-05-02 -0.228819 0.471671
2000-05-03 0.337307 1.840494
2000-05-04 0.050249 0.307149
2000-05-05 -0.802947 -0.946730
... ... ...
2002-05-27 1.605281 1.741415
2002-05-28 -0.804450 -0.715040
2002-05-29 -0.874851 0.037178
2002-05-30 -0.161167 -1.294944
2002-05-31 -0.258463 -0.731969
[93 rows x 2 columns]
Storer object#
If you want to inspect the stored object, retrieve via
get_storer. You could use this programmatically to say get the number
of rows in an object.
In [555]: store.get_storer("df_dc").nrows
Out[555]: 8
Multiple table queries#
The methods append_to_multiple and
select_as_multiple can perform appending/selecting from
multiple tables at once. The idea is to have one table (call it the
selector table) in which you index most/all of the columns, and perform your
queries on. The other table(s) are data tables with an index matching the
selector table’s index. You can then perform a very fast query
on the selector table, yet get lots of data back. This method is similar to
having a very wide table, but enables more efficient queries.
The append_to_multiple method splits a given single DataFrame
into multiple tables according to d, a dictionary that maps the
table names to a list of ‘columns’ you want in that table. If None
is used in place of a list, that table will have the remaining
unspecified columns of the given DataFrame. The argument selector
defines which table is the selector table (which you can make queries from).
The argument dropna will drop rows from the input DataFrame to ensure
tables are synchronized. This means that if a row for one of the tables
being written to is entirely np.NaN, that row will be dropped from all tables.
If dropna is False, THE USER IS RESPONSIBLE FOR SYNCHRONIZING THE TABLES.
Remember that entirely np.nan rows are not written to the HDFStore, so if
you choose to call dropna=False, some tables may have more rows than others,
and therefore select_as_multiple may not work or it may return unexpected
results.
In [556]: df_mt = pd.DataFrame(
.....: np.random.randn(8, 6),
.....: index=pd.date_range("1/1/2000", periods=8),
.....: columns=["A", "B", "C", "D", "E", "F"],
.....: )
.....:
In [557]: df_mt["foo"] = "bar"
In [558]: df_mt.loc[df_mt.index[1], ("A", "B")] = np.nan
# you can also create the tables individually
In [559]: store.append_to_multiple(
.....: {"df1_mt": ["A", "B"], "df2_mt": None}, df_mt, selector="df1_mt"
.....: )
.....:
In [560]: store
Out[560]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
# individual tables were created
In [561]: store.select("df1_mt")
Out[561]:
A B
2000-01-01 0.079529 -1.459471
2000-01-02 NaN NaN
2000-01-03 -0.423113 2.314361
2000-01-04 0.756744 -0.792372
2000-01-05 -0.184971 0.170852
2000-01-06 0.678830 0.633974
2000-01-07 0.034973 0.974369
2000-01-08 -2.110103 0.243062
In [562]: store.select("df2_mt")
Out[562]:
C D E F foo
2000-01-01 -0.596306 -0.910022 -1.057072 -0.864360 bar
2000-01-02 0.477849 0.283128 -2.045700 -0.338206 bar
2000-01-03 -0.033100 -0.965461 -0.001079 -0.351689 bar
2000-01-04 -0.513555 -1.484776 -0.796280 -0.182321 bar
2000-01-05 -0.872407 -1.751515 0.934334 0.938818 bar
2000-01-06 -1.398256 1.347142 -0.029520 0.082738 bar
2000-01-07 -0.755544 0.380786 -1.634116 1.293610 bar
2000-01-08 1.453064 0.500558 -0.574475 0.694324 bar
# as a multiple
In [563]: store.select_as_multiple(
.....: ["df1_mt", "df2_mt"],
.....: where=["A>0", "B>0"],
.....: selector="df1_mt",
.....: )
.....:
Out[563]:
A B C D E F foo
2000-01-06 0.678830 0.633974 -1.398256 1.347142 -0.029520 0.082738 bar
2000-01-07 0.034973 0.974369 -0.755544 0.380786 -1.634116 1.293610 bar
Delete from a table#
You can delete from a table selectively by specifying a where. In
deleting rows, it is important to understand that PyTables deletes
rows by erasing the rows, then moving the following data. Thus
deleting can potentially be a very expensive operation depending on the
orientation of your data. To get optimal performance, it’s
worthwhile to have the dimension you are deleting be the first of the
indexables.
Data is ordered (on the disk) in terms of the indexables. Here’s a
simple use case. You store panel-type data, with dates in the
major_axis and ids in the minor_axis. The data is then
interleaved like this:
date_1
id_1
id_2
.
id_n
date_2
id_1
.
id_n
It should be clear that a delete operation on the major_axis will be
fairly quick, as one chunk is removed, then the following data moved. On
the other hand a delete operation on the minor_axis will be very
expensive. In this case it would almost certainly be faster to rewrite
the table using a where that selects all but the missing data.
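A minimal sketch of a selective delete, reusing the dfq table stored above (remove returns the number of rows deleted):
store.remove("dfq", "index > pd.Timestamp('20130105')")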
Warning
Please note that HDF5 DOES NOT RECLAIM SPACE in the h5 files
automatically. Thus, repeatedly deleting (or removing nodes) and adding
again, WILL TEND TO INCREASE THE FILE SIZE.
To repack and clean the file, use ptrepack.
Notes & caveats#
Compression#
PyTables allows the stored data to be compressed. This applies to
all kinds of stores, not just tables. Two parameters are used to
control compression: complevel and complib.
complevel specifies if and how hard data is to be compressed.
complevel=0 and complevel=None disable compression and
0<complevel<10 enables compression.
complib specifies which compression library to use.
If nothing is specified the default library zlib is used. A
compression library usually optimizes for either good compression rates
or speed and the results will depend on the type of data. Which type of
compression to choose depends on your specific needs and data. The list
of supported compression libraries:
zlib: The default compression library.
A classic in terms of compression, achieves good compression
rates but is somewhat slow.
lzo: Fast
compression and decompression.
bzip2: Good compression rates.
blosc: Fast compression and
decompression.
Support for alternative blosc compressors:
blosc:blosclz This is the
default compressor for blosc
blosc:lz4:
A compact, very popular and fast compressor.
blosc:lz4hc:
A tweaked version of LZ4, produces better
compression ratios at the expense of speed.
blosc:snappy:
A popular compressor used in many places.
blosc:zlib: A classic;
somewhat slower than the previous ones, but
achieving better compression ratios.
blosc:zstd: An
extremely well balanced codec; it provides the best
compression ratios among the others above, and at
reasonably fast speed.
If complib is defined as something other than the listed libraries a
ValueError exception is issued.
Note
If the library specified with the complib option is missing on your platform,
compression defaults to zlib without further ado.
Enable compression for all objects within the file:
store_compressed = pd.HDFStore(
"store_compressed.h5", complevel=9, complib="blosc:blosclz"
)
Or on-the-fly compression (this only applies to tables) in stores where compression is not enabled:
store.append("df", df, complib="zlib", complevel=5)
ptrepack#
PyTables offers better write performance when tables are compressed after
they are written, as opposed to turning on compression at the very
beginning. You can use the supplied PyTables utility
ptrepack. In addition, ptrepack can change compression levels
after the fact.
ptrepack --chunkshape=auto --propindexes --complevel=9 --complib=blosc in.h5 out.h5
Furthermore ptrepack in.h5 out.h5 will repack the file to allow
you to reuse previously deleted space. Alternatively, one can simply
remove the file and write again, or use the copy method.
Caveats#
Warning
HDFStore is not-threadsafe for writing. The underlying
PyTables only supports concurrent reads (via threading or
processes). If you need reading and writing at the same time, you
need to serialize these operations in a single thread in a single
process. You will corrupt your data otherwise. See (GH2397) for more information.
If you use locks to manage write access between multiple processes, you
may want to use fsync() before releasing write locks. For
convenience you can use store.flush(fsync=True) to do this for you.
Once a table is created, its columns (DataFrame)
are fixed; only exactly the same columns can be appended
Be aware that timezones (e.g., pytz.timezone('US/Eastern'))
are not necessarily equal across timezone versions. So if data is
localized to a specific timezone in the HDFStore using one version
of a timezone library and that data is updated with another version, the data
will be converted to UTC since these timezones are not considered
equal. Either use the same version of timezone library or use tz_convert with
the updated timezone definition.
Warning
PyTables will show a NaturalNameWarning if a column name
cannot be used as an attribute selector.
Natural identifiers contain only letters, numbers, and underscores,
and may not begin with a number.
Other identifiers cannot be used in a where clause
and are generally a bad idea.
DataTypes#
HDFStore will map an object dtype to the PyTables underlying
dtype. This means the following types are known to work:
Type                                                   Represents missing values
floating : float64, float32, float16                   np.nan
integer : int64, int32, int8, uint64, uint32, uint8
boolean
datetime64[ns]                                         NaT
timedelta64[ns]                                        NaT
categorical : see the section below
object : strings                                       np.nan
unicode columns are not supported, and WILL FAIL.
Categorical data#
You can write data that contains category dtypes to a HDFStore.
Queries work the same as if it was an object array. However, the category dtyped data is
stored in a more efficient manner.
In [564]: dfcat = pd.DataFrame(
.....: {"A": pd.Series(list("aabbcdba")).astype("category"), "B": np.random.randn(8)}
.....: )
.....:
In [565]: dfcat
Out[565]:
A B
0 a -1.608059
1 a 0.851060
2 b -0.736931
3 b 0.003538
4 c -1.422611
5 d 2.060901
6 b 0.993899
7 a -1.371768
In [566]: dfcat.dtypes
Out[566]:
A category
B float64
dtype: object
In [567]: cstore = pd.HDFStore("cats.h5", mode="w")
In [568]: cstore.append("dfcat", dfcat, format="table", data_columns=["A"])
In [569]: result = cstore.select("dfcat", where="A in ['b', 'c']")
In [570]: result
Out[570]:
A B
2 b -0.736931
3 b 0.003538
4 c -1.422611
6 b 0.993899
In [571]: result.dtypes
Out[571]:
A category
B float64
dtype: object
String columns#
min_itemsize
The underlying implementation of HDFStore uses a fixed column width (itemsize) for string columns.
A string column itemsize is calculated as the maximum of the
length of data (for that column) that is passed to the HDFStore, in the first append. Subsequent appends
may introduce a string for a column larger than the column can hold; an Exception will be raised (otherwise you
could have a silent truncation of these columns, leading to loss of information). In the future we may relax this and
allow a user-specified truncation to occur.
Pass min_itemsize on the first table creation to a priori specify the minimum length of a particular string column.
min_itemsize can be an integer, or a dict mapping a column name to an integer. You can pass values as a key to
allow all indexables or data_columns to have this min_itemsize.
Passing a min_itemsize dict will cause all passed columns to be created as data_columns automatically.
Note
If you are not passing any data_columns, then the min_itemsize will be the maximum of the length of any string passed
In [572]: dfs = pd.DataFrame({"A": "foo", "B": "bar"}, index=list(range(5)))
In [573]: dfs
Out[573]:
A B
0 foo bar
1 foo bar
2 foo bar
3 foo bar
4 foo bar
# A and B have a size of 30
In [574]: store.append("dfs", dfs, min_itemsize=30)
In [575]: store.get_storer("dfs").table
Out[575]:
/dfs/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=30, shape=(2,), dflt=b'', pos=1)}
byteorder := 'little'
chunkshape := (963,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False}
# A is created as a data_column with a size of 30
# B size is calculated
In [576]: store.append("dfs2", dfs, min_itemsize={"A": 30})
In [577]: store.get_storer("dfs2").table
Out[577]:
/dfs2/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=3, shape=(1,), dflt=b'', pos=1),
"A": StringCol(itemsize=30, shape=(), dflt=b'', pos=2)}
byteorder := 'little'
chunkshape := (1598,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"A": Index(6, mediumshuffle, zlib(1)).is_csi=False}
nan_rep
String columns will serialize a np.nan (a missing value) with the nan_rep string representation. This defaults to the string value nan.
You could inadvertently turn an actual nan value into a missing value.
In [578]: dfss = pd.DataFrame({"A": ["foo", "bar", "nan"]})
In [579]: dfss
Out[579]:
A
0 foo
1 bar
2 nan
In [580]: store.append("dfss", dfss)
In [581]: store.select("dfss")
Out[581]:
A
0 foo
1 bar
2 NaN
# here you need to specify a different nan rep
In [582]: store.append("dfss2", dfss, nan_rep="_nan_")
In [583]: store.select("dfss2")
Out[583]:
A
0 foo
1 bar
2 nan
External compatibility#
HDFStore writes table format objects in specific formats suitable for
producing loss-less round trips to pandas objects. For external
compatibility, HDFStore can read native PyTables format
tables.
It is possible to write an HDFStore object that can easily be imported into R using the
rhdf5 library (Package website). Create a table format store like this:
In [584]: df_for_r = pd.DataFrame(
.....: {
.....: "first": np.random.rand(100),
.....: "second": np.random.rand(100),
.....: "class": np.random.randint(0, 2, (100,)),
.....: },
.....: index=range(100),
.....: )
.....:
In [585]: df_for_r.head()
Out[585]:
first second class
0 0.013480 0.504941 0
1 0.690984 0.898188 1
2 0.510113 0.618748 1
3 0.357698 0.004972 0
4 0.451658 0.012065 1
In [586]: store_export = pd.HDFStore("export.h5")
In [587]: store_export.append("df_for_r", df_for_r, data_columns=df_dc.columns)
In [588]: store_export
Out[588]:
<class 'pandas.io.pytables.HDFStore'>
File path: export.h5
In R this file can be read into a data.frame object using the rhdf5
library. The following example function reads the corresponding column names
and data values from the values and assembles them into a data.frame:
# Load values and column names for all datasets from corresponding nodes and
# insert them into one data.frame object.
library(rhdf5)
loadhdf5data <- function(h5File) {
listing <- h5ls(h5File)
# Find all data nodes, values are stored in *_values and corresponding column
# titles in *_items
data_nodes <- grep("_values", listing$name)
name_nodes <- grep("_items", listing$name)
data_paths = paste(listing$group[data_nodes], listing$name[data_nodes], sep = "/")
name_paths = paste(listing$group[name_nodes], listing$name[name_nodes], sep = "/")
columns = list()
for (idx in seq(data_paths)) {
# NOTE: matrices returned by h5read have to be transposed to obtain
# required Fortran order!
data <- data.frame(t(h5read(h5File, data_paths[idx])))
names <- t(h5read(h5File, name_paths[idx]))
entry <- data.frame(data)
colnames(entry) <- names
columns <- append(columns, entry)
}
data <- data.frame(columns)
return(data)
}
Now you can import the DataFrame into R:
> data = loadhdf5data("export.h5")
> head(data)
first second class
1 0.4170220047 0.3266449 0
2 0.7203244934 0.5270581 0
3 0.0001143748 0.8859421 1
4 0.3023325726 0.3572698 1
5 0.1467558908 0.9085352 1
6 0.0923385948 0.6233601 1
Note
The R function lists the entire HDF5 file’s contents and assembles the
data.frame object from all matching nodes, so use this only as a
starting point if you have stored multiple DataFrame objects to a
single HDF5 file.
Performance#
The tables format comes with a writing performance penalty as compared to
fixed stores. The benefit is the ability to append/delete and
query (potentially very large amounts of data). Write times are
generally longer as compared with regular stores. Query times can
be quite fast, especially on an indexed axis.
You can pass chunksize=<int> to append, specifying the
write chunksize (default is 50000). This will significantly lower
your memory usage on writing.
You can pass expectedrows=<int> to the first append,
to set the TOTAL number of rows that PyTables will expect.
This will optimize read/write performance (see the sketch after this list).
Duplicate rows can be written to tables, but are filtered out in
selection (with the last items being selected; thus a table is
unique on major, minor pairs)
A PerformanceWarning will be raised if you are attempting to
store types that will be pickled by PyTables (rather than stored as
endemic types). See
Here
for more information and some solutions.
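A hedged sketch of the chunksize and expectedrows keywords mentioned in the list above (the key name and row count are illustrative):
store.append("df_big", df, chunksize=10000, expectedrows=1000000)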
Feather#
Feather provides binary columnar serialization for data frames. It is designed to make reading and writing data
frames efficient, and to make sharing data across data analysis languages easy.
Feather is designed to faithfully serialize and de-serialize DataFrames, supporting all of the pandas
dtypes, including extension dtypes such as categorical and datetime with tz.
Several caveats:
The format will NOT write an Index, or MultiIndex for the
DataFrame and will raise an error if a non-default one is provided. You
can .reset_index() to store the index or .reset_index(drop=True) to
ignore it.
Duplicate column names and non-string column names are not supported
Actual Python objects in object dtype columns are not supported. These will
raise a helpful error message on an attempt at serialization.
See the Full Documentation.
In [589]: df = pd.DataFrame(
.....: {
.....: "a": list("abc"),
.....: "b": list(range(1, 4)),
.....: "c": np.arange(3, 6).astype("u1"),
.....: "d": np.arange(4.0, 7.0, dtype="float64"),
.....: "e": [True, False, True],
.....: "f": pd.Categorical(list("abc")),
.....: "g": pd.date_range("20130101", periods=3),
.....: "h": pd.date_range("20130101", periods=3, tz="US/Eastern"),
.....: "i": pd.date_range("20130101", periods=3, freq="ns"),
.....: }
.....: )
.....:
In [590]: df
Out[590]:
a b c ... g h i
0 a 1 3 ... 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00.000000000
1 b 2 4 ... 2013-01-02 2013-01-02 00:00:00-05:00 2013-01-01 00:00:00.000000001
2 c 3 5 ... 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-01 00:00:00.000000002
[3 rows x 9 columns]
In [591]: df.dtypes
Out[591]:
a object
b int64
c uint8
d float64
e bool
f category
g datetime64[ns]
h datetime64[ns, US/Eastern]
i datetime64[ns]
dtype: object
Write to a feather file.
In [592]: df.to_feather("example.feather")
Read from a feather file.
In [593]: result = pd.read_feather("example.feather")
In [594]: result
Out[594]:
a b c ... g h i
0 a 1 3 ... 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00.000000000
1 b 2 4 ... 2013-01-02 2013-01-02 00:00:00-05:00 2013-01-01 00:00:00.000000001
2 c 3 5 ... 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-01 00:00:00.000000002
[3 rows x 9 columns]
# we preserve dtypes
In [595]: result.dtypes
Out[595]:
a object
b int64
c uint8
d float64
e bool
f category
g datetime64[ns]
h datetime64[ns, US/Eastern]
i datetime64[ns]
dtype: object
Parquet#
Apache Parquet provides a partitioned binary columnar serialization for data frames. It is designed to
make reading and writing data frames efficient, and to make sharing data across data analysis
languages easy. Parquet can use a variety of compression techniques to shrink the file size as much as possible
while still maintaining good read performance.
Parquet is designed to faithfully serialize and de-serialize DataFrame s, supporting all of the pandas
dtypes, including extension dtypes such as datetime with tz.
Several caveats.
Duplicate column names and non-string column names are not supported.
The pyarrow engine always writes the index to the output, but fastparquet only writes non-default
indexes. This extra column can cause problems for non-pandas consumers that are not expecting it. You can
force including or omitting indexes with the index argument, regardless of the underlying engine.
Index level names, if specified, must be strings.
In the pyarrow engine, categorical dtypes for non-string types can be serialized to parquet, but will de-serialize as their primitive dtype.
The pyarrow engine preserves the ordered flag of categorical dtypes with string types. fastparquet does not preserve the ordered flag.
Non supported types include Interval and actual Python object types. These will raise a helpful error message
on an attempt at serialization. Period type is supported with pyarrow >= 0.16.0.
The pyarrow engine preserves extension data types such as the nullable integer and string data
type (requiring pyarrow >= 0.16.0, and requiring the extension type to implement the needed protocols,
see the extension types documentation).
You can specify an engine to direct the serialization. This can be one of pyarrow, or fastparquet, or auto.
If the engine is NOT specified, then the pd.options.io.parquet.engine option is checked; if this is also auto,
then pyarrow is tried first, falling back to fastparquet.
See the documentation for pyarrow and fastparquet.
Note
These engines are very similar and should read/write nearly identical parquet format files.
pyarrow>=8.0.0 supports timedelta data, fastparquet>=0.1.4 supports timezone aware datetimes.
These libraries differ in their underlying dependencies (fastparquet uses numba, while pyarrow uses a C library).
In [596]: df = pd.DataFrame(
.....: {
.....: "a": list("abc"),
.....: "b": list(range(1, 4)),
.....: "c": np.arange(3, 6).astype("u1"),
.....: "d": np.arange(4.0, 7.0, dtype="float64"),
.....: "e": [True, False, True],
.....: "f": pd.date_range("20130101", periods=3),
.....: "g": pd.date_range("20130101", periods=3, tz="US/Eastern"),
.....: "h": pd.Categorical(list("abc")),
.....: "i": pd.Categorical(list("abc"), ordered=True),
.....: }
.....: )
.....:
In [597]: df
Out[597]:
a b c d e f g h i
0 a 1 3 4.0 True 2013-01-01 2013-01-01 00:00:00-05:00 a a
1 b 2 4 5.0 False 2013-01-02 2013-01-02 00:00:00-05:00 b b
2 c 3 5 6.0 True 2013-01-03 2013-01-03 00:00:00-05:00 c c
In [598]: df.dtypes
Out[598]:
a object
b int64
c uint8
d float64
e bool
f datetime64[ns]
g datetime64[ns, US/Eastern]
h category
i category
dtype: object
Write to a parquet file.
In [599]: df.to_parquet("example_pa.parquet", engine="pyarrow")
In [600]: df.to_parquet("example_fp.parquet", engine="fastparquet")
Read from a parquet file.
In [601]: result = pd.read_parquet("example_fp.parquet", engine="fastparquet")
In [602]: result = pd.read_parquet("example_pa.parquet", engine="pyarrow")
In [603]: result.dtypes
Out[603]:
a object
b int64
c uint8
d float64
e bool
f datetime64[ns]
g datetime64[ns, US/Eastern]
h category
i category
dtype: object
Read only certain columns of a parquet file.
In [604]: result = pd.read_parquet(
.....: "example_fp.parquet",
.....: engine="fastparquet",
.....: columns=["a", "b"],
.....: )
.....:
In [605]: result = pd.read_parquet(
.....: "example_pa.parquet",
.....: engine="pyarrow",
.....: columns=["a", "b"],
.....: )
.....:
In [606]: result.dtypes
Out[606]:
a object
b int64
dtype: object
Handling indexes#
Serializing a DataFrame to parquet may include the implicit index as one or
more columns in the output file. Thus, this code:
In [607]: df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
In [608]: df.to_parquet("test.parquet", engine="pyarrow")
creates a parquet file with three columns if you use pyarrow for serialization:
a, b, and __index_level_0__. If you’re using fastparquet, the
index may or may not
be written to the file.
This unexpected extra column causes some databases like Amazon Redshift to reject
the file, because that column doesn’t exist in the target table.
If you want to omit a dataframe’s indexes when writing, pass index=False to
to_parquet():
In [609]: df.to_parquet("test.parquet", index=False)
This creates a parquet file with just the two expected columns, a and b.
If your DataFrame has a custom index, you won’t get it back when you load
this file into a DataFrame.
Passing index=True will always write the index, even if that’s not the
underlying engine’s default behavior.
Partitioning Parquet files#
Parquet supports partitioning of data based on the values of one or more columns.
In [610]: df = pd.DataFrame({"a": [0, 0, 1, 1], "b": [0, 1, 0, 1]})
In [611]: df.to_parquet(path="test", engine="pyarrow", partition_cols=["a"], compression=None)
The path specifies the parent directory to which data will be saved.
The partition_cols are the column names by which the dataset will be partitioned.
Columns are partitioned in the order they are given. The partition splits are
determined by the unique values in the partition columns.
The above example creates a partitioned dataset that may look like:
test
├── a=0
│ ├── 0bac803e32dc42ae83fddfd029cbdebc.parquet
│ └── ...
└── a=1
├── e6ab24a4f45147b49b54a662f0c412a3.parquet
└── ...
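Reading the partitioned dataset back is a matter of pointing read_parquet at the parent directory. In this hedged sketch pyarrow discovers the partitions from the directory structure; the exact dtype of the recovered partition column a depends on the pyarrow version:
result = pd.read_parquet("test", engine="pyarrow")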
ORC#
New in version 1.0.0.
Similar to the parquet format, the ORC Format is a binary columnar serialization
for data frames. It is designed to make reading data frames efficient. pandas provides both the reader and the writer for the
ORC format, read_orc() and to_orc(). This requires the pyarrow library.
Warning
It is highly recommended to install pyarrow using conda, due to some issues that can occur with pyarrow.
to_orc() requires pyarrow>=7.0.0.
read_orc() and to_orc() are not supported on Windows yet, you can find valid environments on install optional dependencies.
For supported dtypes please refer to supported ORC features in Arrow.
Currently timezones in datetime columns are not preserved when a dataframe is converted into ORC files.
In [612]: df = pd.DataFrame(
.....: {
.....: "a": list("abc"),
.....: "b": list(range(1, 4)),
.....: "c": np.arange(4.0, 7.0, dtype="float64"),
.....: "d": [True, False, True],
.....: "e": pd.date_range("20130101", periods=3),
.....: }
.....: )
.....:
In [613]: df
Out[613]:
a b c d e
0 a 1 4.0 True 2013-01-01
1 b 2 5.0 False 2013-01-02
2 c 3 6.0 True 2013-01-03
In [614]: df.dtypes
Out[614]:
a object
b int64
c float64
d bool
e datetime64[ns]
dtype: object
Write to an orc file.
In [615]: df.to_orc("example_pa.orc", engine="pyarrow")
Read from an orc file.
In [616]: result = pd.read_orc("example_pa.orc")
In [617]: result.dtypes
Out[617]:
a object
b int64
c float64
d bool
e datetime64[ns]
dtype: object
Read only certain columns of an orc file.
In [618]: result = pd.read_orc(
.....: "example_pa.orc",
.....: columns=["a", "b"],
.....: )
.....:
In [619]: result.dtypes
Out[619]:
a object
b int64
dtype: object
SQL queries#
The pandas.io.sql module provides a collection of query wrappers to both
facilitate data retrieval and to reduce dependency on DB-specific API. Database abstraction
is provided by SQLAlchemy if installed. In addition you will need a driver library for
your database. Examples of such drivers are psycopg2
for PostgreSQL or pymysql for MySQL.
For SQLite this is
included in Python’s standard library by default.
You can find an overview of supported drivers for each SQL dialect in the
SQLAlchemy docs.
If SQLAlchemy is not installed, a fallback is only provided for sqlite (and
for mysql for backwards compatibility, but this is deprecated and will be
removed in a future version).
This mode requires a Python database adapter which respects the Python
DB-API.
See also some cookbook examples for some advanced strategies.
The key functions are:
read_sql_table(table_name, con[, schema, ...])
Read SQL database table into a DataFrame.
read_sql_query(sql, con[, index_col, ...])
Read SQL query into a DataFrame.
read_sql(sql, con[, index_col, ...])
Read SQL query or database table into a DataFrame.
DataFrame.to_sql(name, con[, schema, ...])
Write records stored in a DataFrame to a SQL database.
Note
The function read_sql() is a convenience wrapper around
read_sql_table() and read_sql_query() (and for
backward compatibility) and will delegate to the specific function depending on
the provided input (database table name or sql query).
Table names do not need to be quoted if they have special characters.
In the following example, we use the SQLite SQL database
engine. You can use a temporary SQLite database where data are stored in
“memory”.
To connect with SQLAlchemy you use the create_engine() function to create an engine
object from database URI. You only need to create the engine once per database you are
connecting to.
For more information on create_engine() and the URI formatting, see the examples
below and the SQLAlchemy documentation
In [620]: from sqlalchemy import create_engine
# Create your engine.
In [621]: engine = create_engine("sqlite:///:memory:")
If you want to manage your own connections you can pass one of those instead. The example below opens a
connection to the database using a Python context manager that automatically closes the connection after
the block has completed.
See the SQLAlchemy docs
for an explanation of how the database connection is handled.
with engine.connect() as conn, conn.begin():
data = pd.read_sql_table("data", conn)
Warning
When you open a connection to a database you are also responsible for closing it.
Side effects of leaving a connection open may include locking the database or
other breaking behaviour.
Writing DataFrames#
Assuming the following data is in a DataFrame data, we can insert it into
the database using to_sql().
id    Date          Col_1    Col_2    Col_3
26    2012-10-18    X         25.7    True
42    2012-10-19    Y        -12.4    False
63    2012-10-20    Z         5.73    True
In [622]: import datetime
In [623]: c = ["id", "Date", "Col_1", "Col_2", "Col_3"]
In [624]: d = [
.....: (26, datetime.datetime(2010, 10, 18), "X", 27.5, True),
.....: (42, datetime.datetime(2010, 10, 19), "Y", -12.5, False),
.....: (63, datetime.datetime(2010, 10, 20), "Z", 5.73, True),
.....: ]
.....:
In [625]: data = pd.DataFrame(d, columns=c)
In [626]: data
Out[626]:
id Date Col_1 Col_2 Col_3
0 26 2010-10-18 X 27.50 True
1 42 2010-10-19 Y -12.50 False
2 63 2010-10-20 Z 5.73 True
In [627]: data.to_sql("data", engine)
Out[627]: 3
With some databases, writing large DataFrames can result in errors due to
packet size limitations being exceeded. This can be avoided by setting the
chunksize parameter when calling to_sql. For example, the following
writes data to the database in batches of 1000 rows at a time:
In [628]: data.to_sql("data_chunked", engine, chunksize=1000)
Out[628]: 3
SQL data types#
to_sql() will try to map your data to an appropriate
SQL data type based on the dtype of the data. When you have columns of dtype
object, pandas will try to infer the data type.
You can always override the default type by specifying the desired SQL type of
any of the columns by using the dtype argument. This argument needs a
dictionary mapping column names to SQLAlchemy types (or strings for the sqlite3
fallback mode).
For example, specifying to use the sqlalchemy String type instead of the
default Text type for string columns:
In [629]: from sqlalchemy.types import String
In [630]: data.to_sql("data_dtype", engine, dtype={"Col_1": String})
Out[630]: 3
Note
Due to the limited support for timedelta’s in the different database
flavors, columns with type timedelta64 will be written as integer
values as nanoseconds to the database and a warning will be raised.
Note
Columns of category dtype will be converted to the dense representation
as you would get with np.asarray(categorical) (e.g. for string categories
this gives an array of strings).
Because of this, reading the database table back in does not generate
a categorical.
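A minimal sketch of this round trip, reusing the in-memory engine (the table name is illustrative):
df_cat = pd.DataFrame({"col": pd.Categorical(["a", "b", "a"])})
df_cat.to_sql("data_categorical", engine, index=False)
pd.read_sql_table("data_categorical", engine)["col"].dtype  # object, not category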
Datetime data types#
Using SQLAlchemy, to_sql() is capable of writing
datetime data that is timezone naive or timezone aware. However, the resulting
data stored in the database ultimately depends on the supported data type
for datetime data of the database system being used.
The following table lists supported data types for datetime data for some
common databases. Other database dialects may have different data types for
datetime data.
Database      SQL Datetime Types                        Timezone Support
SQLite        TEXT                                      No
MySQL         TIMESTAMP or DATETIME                     No
PostgreSQL    TIMESTAMP or TIMESTAMP WITH TIME ZONE     Yes
When writing timezone aware data to databases that do not support timezones,
the data will be written as timezone naive timestamps that are in local time
with respect to the timezone.
read_sql_table() is also capable of reading datetime data that is
timezone aware or naive. When reading TIMESTAMP WITH TIME ZONE types, pandas
will convert the data to UTC.
Insertion method#
The parameter method controls the SQL insertion clause used.
Possible values are:
None: Uses standard SQL INSERT clause (one per row).
'multi': Pass multiple values in a single INSERT clause.
It uses a special SQL syntax not supported by all backends.
This usually provides better performance for analytic databases
like Presto and Redshift, but has worse performance for
traditional SQL backend if the table contains many columns.
For more information check the SQLAlchemy documentation.
callable with signature (pd_table, conn, keys, data_iter):
This can be used to implement a more performant insertion method based on
specific backend dialect features.
Example of a callable using PostgreSQL COPY clause:
# Alternative to_sql() *method* for DBs that support COPY FROM
import csv
from io import StringIO
def psql_insert_copy(table, conn, keys, data_iter):
"""
Execute SQL statement inserting data
Parameters
----------
table : pandas.io.sql.SQLTable
conn : sqlalchemy.engine.Engine or sqlalchemy.engine.Connection
keys : list of str
Column names
data_iter : Iterable that iterates the values to be inserted
"""
# gets a DBAPI connection that can provide a cursor
dbapi_conn = conn.connection
with dbapi_conn.cursor() as cur:
s_buf = StringIO()
writer = csv.writer(s_buf)
writer.writerows(data_iter)
s_buf.seek(0)
columns = ', '.join(['"{}"'.format(k) for k in keys])
if table.schema:
table_name = '{}.{}'.format(table.schema, table.name)
else:
table_name = table.name
sql = 'COPY {} ({}) FROM STDIN WITH CSV'.format(
table_name, columns)
cur.copy_expert(sql=sql, file=s_buf)
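A hedged usage sketch, assuming a PostgreSQL engine object named pg_engine (not created in this document) and reusing the data frame from above:
data.to_sql("data_copy", pg_engine, index=False, method=psql_insert_copy)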
Reading tables#
read_sql_table() will read a database table given the
table name and optionally a subset of columns to read.
Note
In order to use read_sql_table(), you must have the
SQLAlchemy optional dependency installed.
In [631]: pd.read_sql_table("data", engine)
Out[631]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 X 27.50 True
1 1 42 2010-10-19 Y -12.50 False
2 2 63 2010-10-20 Z 5.73 True
Note
Note that pandas infers column dtypes from query outputs, and not by looking
up data types in the physical database schema. For example, assume userid
is an integer column in a table. Then, intuitively, select userid ... will
return integer-valued series, while select cast(userid as text) ... will
return object-valued (str) series. Accordingly, if the query output is empty,
then all resulting columns will be returned as object-valued (since they are
most general). If you foresee that your query will sometimes generate an empty
result, you may want to explicitly typecast afterwards to ensure dtype
integrity.
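For example, a small sketch (the query and userid column here are hypothetical) that restores the intended dtype after an empty result:
res = pd.read_sql_query("SELECT userid FROM data WHERE 1 = 0", engine)
# an empty result comes back as object dtype; cast explicitly
res["userid"] = res["userid"].astype("int64")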
You can also specify the name of the column as the DataFrame index,
and specify a subset of columns to be read.
In [632]: pd.read_sql_table("data", engine, index_col="id")
Out[632]:
index Date Col_1 Col_2 Col_3
id
26 0 2010-10-18 X 27.50 True
42 1 2010-10-19 Y -12.50 False
63 2 2010-10-20 Z 5.73 True
In [633]: pd.read_sql_table("data", engine, columns=["Col_1", "Col_2"])
Out[633]:
Col_1 Col_2
0 X 27.50
1 Y -12.50
2 Z 5.73
And you can explicitly force columns to be parsed as dates:
In [634]: pd.read_sql_table("data", engine, parse_dates=["Date"])
Out[634]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 X 27.50 True
1 1 42 2010-10-19 Y -12.50 False
2 2 63 2010-10-20 Z 5.73 True
If needed you can explicitly specify a format string, or a dict of arguments
to pass to pandas.to_datetime():
pd.read_sql_table("data", engine, parse_dates={"Date": "%Y-%m-%d"})
pd.read_sql_table(
"data",
engine,
parse_dates={"Date": {"format": "%Y-%m-%d %H:%M:%S"}},
)
You can check if a table exists using has_table()
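A minimal sketch, assuming the engine and the data table from the examples above:
from pandas.io import sql
sql.has_table("data", engine)  # True if the table exists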
Schema support#
Reading from and writing to different schemas is supported through the schema
keyword in the read_sql_table() and to_sql()
functions. Note however that this depends on the database flavor (sqlite does not
have schemas). For example:
df.to_sql("table", engine, schema="other_schema")
pd.read_sql_table("table", engine, schema="other_schema")
Querying#
You can query using raw SQL in the read_sql_query() function.
In this case you must use the SQL variant appropriate for your database.
When using SQLAlchemy, you can also pass SQLAlchemy Expression language constructs,
which are database-agnostic.
In [635]: pd.read_sql_query("SELECT * FROM data", engine)
Out[635]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 00:00:00.000000 X 27.50 1
1 1 42 2010-10-19 00:00:00.000000 Y -12.50 0
2 2 63 2010-10-20 00:00:00.000000 Z 5.73 1
Of course, you can specify a more “complex” query.
In [636]: pd.read_sql_query("SELECT id, Col_1, Col_2 FROM data WHERE id = 42;", engine)
Out[636]:
id Col_1 Col_2
0 42 Y -12.5
The read_sql_query() function supports a chunksize argument.
Specifying this will return an iterator through chunks of the query result:
In [637]: df = pd.DataFrame(np.random.randn(20, 3), columns=list("abc"))
In [638]: df.to_sql("data_chunks", engine, index=False)
Out[638]: 20
In [639]: for chunk in pd.read_sql_query("SELECT * FROM data_chunks", engine, chunksize=5):
.....: print(chunk)
.....:
a b c
0 0.070470 0.901320 0.937577
1 0.295770 1.420548 -0.005283
2 -1.518598 -0.730065 0.226497
3 -2.061465 0.632115 0.853619
4 2.719155 0.139018 0.214557
a b c
0 -1.538924 -0.366973 -0.748801
1 -0.478137 -1.559153 -3.097759
2 -2.320335 -0.221090 0.119763
3 0.608228 1.064810 -0.780506
4 -2.736887 0.143539 1.170191
a b c
0 -1.573076 0.075792 -1.722223
1 -0.774650 0.803627 0.221665
2 0.584637 0.147264 1.057825
3 -0.284136 0.912395 1.552808
4 0.189376 -0.109830 0.539341
a b c
0 0.592591 -0.155407 -1.356475
1 0.833837 1.524249 1.606722
2 -0.029487 -0.051359 1.700152
3 0.921484 -0.926347 0.979818
4 0.182380 -0.186376 0.049820
You can also run a plain query without creating a DataFrame with
execute(). This is useful for queries that don’t return values,
such as INSERT. This is functionally equivalent to calling execute on the
SQLAlchemy engine or db connection object. Again, you must use the SQL syntax
variant appropriate for your database.
from pandas.io import sql
sql.execute("SELECT * FROM table_name", engine)
sql.execute(
"INSERT INTO table_name VALUES(?, ?, ?)", engine, params=[("id", 1, 12.2, True)]
)
Engine connection examples#
To connect with SQLAlchemy you use the create_engine() function to create an engine
object from database URI. You only need to create the engine once per database you are
connecting to.
from sqlalchemy import create_engine
engine = create_engine("postgresql://scott:[email protected]:5432/mydatabase")
engine = create_engine("mysql+mysqldb://scott:[email protected]/foo")
engine = create_engine("oracle://scott:[email protected]:1521/sidname")
engine = create_engine("mssql+pyodbc://mydsn")
# sqlite://<nohostname>/<path>
# where <path> is relative:
engine = create_engine("sqlite:///foo.db")
# or absolute, starting with a slash:
engine = create_engine("sqlite:////absolute/path/to/foo.db")
For more information see the examples in the SQLAlchemy documentation.
Advanced SQLAlchemy queries#
You can use SQLAlchemy constructs to describe your query.
Use sqlalchemy.text() to specify query parameters in a backend-neutral way
In [640]: import sqlalchemy as sa
In [641]: pd.read_sql(
.....: sa.text("SELECT * FROM data where Col_1=:col1"), engine, params={"col1": "X"}
.....: )
.....:
Out[641]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 00:00:00.000000 X 27.5 1
If you have an SQLAlchemy description of your database you can express where conditions using SQLAlchemy expressions
In [642]: metadata = sa.MetaData()
In [643]: data_table = sa.Table(
.....: "data",
.....: metadata,
.....: sa.Column("index", sa.Integer),
.....: sa.Column("Date", sa.DateTime),
.....: sa.Column("Col_1", sa.String),
.....: sa.Column("Col_2", sa.Float),
.....: sa.Column("Col_3", sa.Boolean),
.....: )
.....:
In [644]: pd.read_sql(sa.select([data_table]).where(data_table.c.Col_3 is True), engine)
Out[644]:
Empty DataFrame
Columns: [index, Date, Col_1, Col_2, Col_3]
Index: []
You can combine SQLAlchemy expressions with parameters passed to read_sql() using sqlalchemy.bindparam()
In [645]: import datetime as dt
In [646]: expr = sa.select([data_table]).where(data_table.c.Date > sa.bindparam("date"))
In [647]: pd.read_sql(expr, engine, params={"date": dt.datetime(2010, 10, 18)})
Out[647]:
index Date Col_1 Col_2 Col_3
0 1 2010-10-19 Y -12.50 False
1 2 2010-10-20 Z 5.73 True
Sqlite fallback#
The use of sqlite is supported without using SQLAlchemy.
This mode requires a Python database adapter which respects the Python
DB-API.
You can create connections like so:
import sqlite3
con = sqlite3.connect(":memory:")
And then issue the following queries:
data.to_sql("data", con)
pd.read_sql_query("SELECT * FROM data", con)
Google BigQuery#
Warning
Starting in 0.20.0, pandas has split off Google BigQuery support into the
separate package pandas-gbq. You can pip install pandas-gbq to get it.
The pandas-gbq package provides functionality to read/write from Google BigQuery.
pandas integrates with this external package. If pandas-gbq is installed, you can
use the pandas methods pd.read_gbq and DataFrame.to_gbq, which will call the
respective functions from pandas-gbq.
Full documentation can be found here.
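A hedged sketch of the round trip (the project, dataset and table names below are placeholders, and pandas-gbq must be installed):
df = pd.read_gbq("SELECT * FROM my_dataset.my_table", project_id="my-project")
df.to_gbq("my_dataset.my_table_copy", project_id="my-project", if_exists="replace")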
Stata format#
Writing to stata format#
The method to_stata() will write a DataFrame
into a .dta file. The format version of this file is always 115 (Stata 12).
In [648]: df = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
In [649]: df.to_stata("stata.dta")
Stata data files have limited data type support; only strings with
244 or fewer characters, int8, int16, int32, float32
and float64 can be stored in .dta files. Additionally,
Stata reserves certain values to represent missing data. Exporting a
non-missing value that is outside of the permitted range in Stata for
a particular data type will retype the variable to the next larger
size. For example, int8 values are restricted to lie between -127
and 100 in Stata, and so variables with values above 100 will trigger
a conversion to int16. nan values in floating points data
types are stored as the basic missing data type (. in Stata).
Note
It is not possible to export missing data values for integer data types.
The Stata writer gracefully handles other data types including int64,
bool, uint8, uint16, uint32 by casting to
the smallest supported type that can represent the data. For example, data
with a type of uint8 will be cast to int8 if all values are less than
100 (the upper bound for non-missing int8 data in Stata), or, if values are
outside of this range, the variable is cast to int16.
Warning
Conversion from int64 to float64 may result in a loss of precision
if int64 values are larger than 2**53.
Warning
StataWriter and
to_stata() only support fixed width
strings containing up to 244 characters, a limitation imposed by the version
115 dta file format. Attempting to write Stata dta files with strings
longer than 244 characters raises a ValueError.
Reading from Stata format#
The top-level function read_stata will read a dta file and return
either a DataFrame or a StataReader that can
be used to read the file incrementally.
In [650]: pd.read_stata("stata.dta")
Out[650]:
index A B
0 0 -1.690072 0.405144
1 1 -1.511309 -1.531396
2 2 0.572698 -1.106845
3 3 -1.185859 0.174564
4 4 0.603797 -1.796129
5 5 -0.791679 1.173795
6 6 -0.277710 1.859988
7 7 -0.258413 1.251808
8 8 1.443262 0.441553
9 9 1.168163 -2.054946
Specifying a chunksize yields a
StataReader instance that can be used to
read chunksize lines from the file at a time. The StataReader
object can be used as an iterator.
In [651]: with pd.read_stata("stata.dta", chunksize=3) as reader:
.....: for df in reader:
.....: print(df.shape)
.....:
(3, 3)
(3, 3)
(3, 3)
(1, 3)
For more fine-grained control, use iterator=True and specify
chunksize with each call to
read().
In [652]: with pd.read_stata("stata.dta", iterator=True) as reader:
.....: chunk1 = reader.read(5)
.....: chunk2 = reader.read(5)
.....:
Currently the index is retrieved as a column.
The parameter convert_categoricals indicates whether value labels should be
read and used to create a Categorical variable from them. Value labels can
also be retrieved by the function value_labels, which requires read()
to be called before use.
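For example, a minimal sketch (assuming a hypothetical labeled.dta file that contains value labels):
with pd.read_stata("labeled.dta", iterator=True) as reader:
    data = reader.read()            # read() must run before value_labels()
    labels = reader.value_labels()  # dict mapping variable names to their value labels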
The parameter convert_missing indicates whether missing value
representations in Stata should be preserved. If False (the default),
missing values are represented as np.nan. If True, missing values are
represented using StataMissingValue objects, and columns containing missing
values will have object data type.
Note
read_stata() and
StataReader support .dta formats 113-115
(Stata 10-12), 117 (Stata 13), and 118 (Stata 14).
Note
Setting preserve_dtypes=False will upcast to the standard pandas data types:
int64 for all integer types and float64 for floating point data. By default,
the Stata data types are preserved when importing.
Categorical data#
Categorical data can be exported to Stata data files as value labeled data.
The exported data consists of the underlying category codes as integer data values
and the categories as value labels. Stata does not have an explicit equivalent
to a Categorical and information about whether the variable is ordered
is lost when exporting.
Warning
Stata only supports string value labels, and so str is called on the
categories when exporting data. Exporting Categorical variables with
non-string categories produces a warning, and can result a loss of
information if the str representations of the categories are not unique.
Labeled data can similarly be imported from Stata data files as Categorical
variables using the keyword argument convert_categoricals (True by default).
The keyword argument order_categoricals (True by default) determines
whether imported Categorical variables are ordered.
Note
When importing categorical data, the values of the variables in the Stata
data file are not preserved since Categorical variables always
use integer data types between -1 and n-1 where n is the number
of categories. If the original values in the Stata data file are required,
these can be imported by setting convert_categoricals=False, which will
import original data (but not the variable labels). The original values can
be matched to the imported categorical data since there is a simple mapping
between the original Stata data values and the category codes of imported
Categorical variables: missing values are assigned code -1, and the
smallest original value is assigned 0, the second smallest is assigned
1 and so on until the largest original value is assigned the code n-1.
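A hedged sketch of recovering the original values (the labeled.dta file and col column are hypothetical):
cat = pd.read_stata("labeled.dta")                              # labeled -> Categorical
raw = pd.read_stata("labeled.dta", convert_categoricals=False)  # original integer values
# cat["col"].cat.codes follows the mapping above (missing -> -1, smallest value -> 0, ...),
# so the codes can be matched back against raw["col"].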
Note
Stata supports partially labeled series. These series have value labels for
some but not all data values. Importing a partially labeled series will produce
a Categorical with string categories for the values that are labeled and
numeric categories for values with no label.
SAS formats#
The top-level function read_sas() can read (but not write) SAS
XPORT (.xpt) and (since v0.18.0) SAS7BDAT (.sas7bdat) format files.
SAS files only contain two value types: ASCII text and floating point
values (usually 8 bytes but sometimes truncated). For xport files,
there is no automatic type conversion to integers, dates, or
categoricals. For SAS7BDAT files, the format codes may allow date
variables to be automatically converted to dates. By default the
whole file is read and returned as a DataFrame.
Specify a chunksize or use iterator=True to obtain reader
objects (XportReader or SAS7BDATReader) for incrementally
reading the file. The reader objects also have attributes that
contain additional information about the file and its variables.
Read a SAS7BDAT file:
df = pd.read_sas("sas_data.sas7bdat")
Obtain an iterator and read an XPORT file 100,000 lines at a time:
def do_something(chunk):
pass
with pd.read_sas("sas_xport.xpt", chunksize=100000) as rdr:
for chunk in rdr:
do_something(chunk)
The specification for the xport file format is available from the SAS
web site.
No official documentation is available for the SAS7BDAT format.
SPSS formats#
New in version 0.25.0.
The top-level function read_spss() can read (but not write) SPSS
SAV (.sav) and ZSAV (.zsav) format files.
SPSS files contain column names. By default the
whole file is read, categorical columns are converted into pd.Categorical,
and a DataFrame with all columns is returned.
Specify the usecols parameter to obtain a subset of columns. Specify convert_categoricals=False
to avoid converting categorical columns into pd.Categorical.
Read an SPSS file:
df = pd.read_spss("spss_data.sav")
Extract a subset of columns contained in usecols from an SPSS file and
avoid converting categorical columns into pd.Categorical:
df = pd.read_spss(
"spss_data.sav",
usecols=["foo", "bar"],
convert_categoricals=False,
)
More information about the SAV and ZSAV file formats is available here.
Other file formats#
pandas itself only supports IO with a limited set of file formats that map
cleanly to its tabular data model. For reading and writing other file formats
into and from pandas, we recommend these packages from the broader community.
netCDF#
xarray provides data structures inspired by the pandas DataFrame for working
with multi-dimensional datasets, with a focus on the netCDF file format and
easy conversion to and from pandas.
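A minimal round-trip sketch, assuming xarray and a netCDF backend (e.g. netCDF4) are installed:
import xarray as xr
ds = xr.Dataset.from_dataframe(df)               # DataFrame -> xarray Dataset
ds.to_netcdf("data.nc")                          # write netCDF
df2 = xr.open_dataset("data.nc").to_dataframe()  # and back to a DataFrame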
Performance considerations#
This is an informal comparison of various IO methods, using pandas
0.24.2. Timings are machine dependent and small differences should be
ignored.
In [1]: sz = 1000000
In [2]: df = pd.DataFrame({'A': np.random.randn(sz), 'B': [1] * sz})
In [3]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 2 columns):
A 1000000 non-null float64
B 1000000 non-null int64
dtypes: float64(1), int64(1)
memory usage: 15.3 MB
The following test functions will be used below to compare the performance of several IO methods:
import numpy as np
import os
import sqlite3
sz = 1000000
np.random.seed(42)
df = pd.DataFrame({"A": np.random.randn(sz), "B": [1] * sz})
def test_sql_write(df):
if os.path.exists("test.sql"):
os.remove("test.sql")
sql_db = sqlite3.connect("test.sql")
df.to_sql(name="test_table", con=sql_db)
sql_db.close()
def test_sql_read():
sql_db = sqlite3.connect("test.sql")
pd.read_sql_query("select * from test_table", sql_db)
sql_db.close()
def test_hdf_fixed_write(df):
df.to_hdf("test_fixed.hdf", "test", mode="w")
def test_hdf_fixed_read():
pd.read_hdf("test_fixed.hdf", "test")
def test_hdf_fixed_write_compress(df):
df.to_hdf("test_fixed_compress.hdf", "test", mode="w", complib="blosc")
def test_hdf_fixed_read_compress():
pd.read_hdf("test_fixed_compress.hdf", "test")
def test_hdf_table_write(df):
df.to_hdf("test_table.hdf", "test", mode="w", format="table")
def test_hdf_table_read():
pd.read_hdf("test_table.hdf", "test")
def test_hdf_table_write_compress(df):
df.to_hdf(
"test_table_compress.hdf", "test", mode="w", complib="blosc", format="table"
)
def test_hdf_table_read_compress():
pd.read_hdf("test_table_compress.hdf", "test")
def test_csv_write(df):
df.to_csv("test.csv", mode="w")
def test_csv_read():
pd.read_csv("test.csv", index_col=0)
def test_feather_write(df):
df.to_feather("test.feather")
def test_feather_read():
pd.read_feather("test.feather")
def test_pickle_write(df):
df.to_pickle("test.pkl")
def test_pickle_read():
pd.read_pickle("test.pkl")
def test_pickle_write_compress(df):
df.to_pickle("test.pkl.compress", compression="xz")
def test_pickle_read_compress():
pd.read_pickle("test.pkl.compress", compression="xz")
def test_parquet_write(df):
df.to_parquet("test.parquet")
def test_parquet_read():
pd.read_parquet("test.parquet")
When writing, the top three functions in terms of speed are test_feather_write, test_hdf_fixed_write and test_hdf_fixed_write_compress.
In [4]: %timeit test_sql_write(df)
3.29 s ± 43.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [5]: %timeit test_hdf_fixed_write(df)
19.4 ms ± 560 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [6]: %timeit test_hdf_fixed_write_compress(df)
19.6 ms ± 308 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [7]: %timeit test_hdf_table_write(df)
449 ms ± 5.61 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [8]: %timeit test_hdf_table_write_compress(df)
448 ms ± 11.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [9]: %timeit test_csv_write(df)
3.66 s ± 26.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [10]: %timeit test_feather_write(df)
9.75 ms ± 117 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [11]: %timeit test_pickle_write(df)
30.1 ms ± 229 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [12]: %timeit test_pickle_write_compress(df)
4.29 s ± 15.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [13]: %timeit test_parquet_write(df)
67.6 ms ± 706 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
When reading, the top three functions in terms of speed are test_feather_read, test_pickle_read and
test_hdf_fixed_read.
In [14]: %timeit test_sql_read()
1.77 s ± 17.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [15]: %timeit test_hdf_fixed_read()
19.4 ms ± 436 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [16]: %timeit test_hdf_fixed_read_compress()
19.5 ms ± 222 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [17]: %timeit test_hdf_table_read()
38.6 ms ± 857 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [18]: %timeit test_hdf_table_read_compress()
38.8 ms ± 1.49 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [19]: %timeit test_csv_read()
452 ms ± 9.04 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [20]: %timeit test_feather_read()
12.4 ms ± 99.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [21]: %timeit test_pickle_read()
18.4 ms ± 191 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [22]: %timeit test_pickle_read_compress()
915 ms ± 7.48 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [23]: %timeit test_parquet_read()
24.4 ms ± 146 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
The files test.pkl.compress, test.parquet and test.feather took the least space on disk (in bytes).
29519500 Oct 10 06:45 test.csv
16000248 Oct 10 06:45 test.feather
8281983 Oct 10 06:49 test.parquet
16000857 Oct 10 06:47 test.pkl
7552144 Oct 10 06:48 test.pkl.compress
34816000 Oct 10 06:42 test.sql
24009288 Oct 10 06:43 test_fixed.hdf
24009288 Oct 10 06:43 test_fixed_compress.hdf
24458940 Oct 10 06:44 test_table.hdf
24458940 Oct 10 06:44 test_table_compress.hdf
| 1,133
| 1,175
|
Remove the characters after 64 characters of column names in pandas
I have seen so many ways to remove special characters from column names, and those worked for my example. However, now, I want to remove all extra characters in all columns that are longer than 64 characters in length. Is there an easier way I can do it?
For example:
>> df.columns
Index['hi', 'happy_tree_family_is_most_amazing_awesome_fantastic_series_even_in_2021_01_25_and_I_want_to_watch_it_again_ahhahahahahaha']
after work:
>> df.columns ## 2nd column name only contains 64 character in length ##
Index['hi', 'happy_tree_family_is_most_amazing_awesome_fantastic_series_even_']
A million thanks!
|
64,101,141
|
Change multiple column names in pandas dataframe (not all colmn names) at the same time using index numbers
|
<p>I have successfully changed a single column name in the dataframe using this:</p>
<pre><code>df.columns=['new_name' if x=='old_name' else x for x in df.columns]
</code></pre>
<p>However i have lots of columns to update (but not all 240 of them) and I don't want to have to write it out for each single change if i can help it.</p>
<p>I have tried to follow the advice from @StefanK in this thread:</p>
<p><a href="https://stackoverflow.com/questions/38101009/changing-multiple-column-names-but-not-all-of-them-pandas-python/47795975#47795975">Changing multiple column names but not all of them - Pandas Python</a></p>
<p>my code:</p>
<pre><code>df.columns=[[4,18,181,182,187,188,189,190,203,204]]=['Brand','Reason','Chat_helpful','Chat_expertise','Answered_questions','Recommend_chat','Alternate_help','Customer_comments','Agent_category','Agent_outcome']
</code></pre>
<p>but i am getting an error message:</p>
<pre><code>File "<ipython-input-17-2808488b712d>", line 3
df.columns=[[4,18,181,182,187,188,189,190,203,204]]=['Brand','Reason','Chat_helpful','Chat_expertise','Answered_questions','Recommend_chat','Alternate_help','Customer_comments','Agent_category','Agent_outcome']
^
SyntaxError: can't assign to literal
</code></pre>
<p>So having googled the error and read many more S.O. questions here it looks to me like it is trying to read the numbers as integers instead of an index? I'm not certain here though.</p>
<p>So how do i fix it so it looks at the numbers as the index?! The column names I am replacing are at least 10 words long each so I'm keen not to have to type them all out! my only ideas are to use iloc somehow but i'm going into new territory here!</p>
<p>really appreciate some help please</p>
| 64,101,256
| 2020-09-28T11:16:40.750000
| 2
| null | 1
| 114
|
python|pandas
|
<p>Remove the '=' after df.columns in your code and use this instead:</p>
<pre><code>df.columns.values[[4,18,181,182,187,188,189,190,203,204]]=['Brand','Reason','Chat_helpful','Chat_expertise','Answered_questions','Recommend_chat','Alternate_help','Customer_comments','Agent_category','Agent_outcome']
</code></pre>
| 2020-09-28T11:23:48.090000
| 4
|
https://pandas.pydata.org/docs/user_guide/reshaping.html
|
Reshaping and pivot tables#
Reshaping and pivot tables#
Reshaping by pivoting DataFrame objects#
Data is often stored in so-called “stacked” or “record” format:
In [1]: import pandas._testing as tm
In [2]: def unpivot(frame):
...: N, K = frame.shape
...: data = {
...: "value": frame.to_numpy().ravel("F"),
...: "variable": np.asarray(frame.columns).repeat(N),
...: "date": np.tile(np.asarray(frame.index), K),
...: }
...: return pd.DataFrame(data, columns=["date", "variable", "value"])
...:
In [3]: df = unpivot(tm.makeTimeDataFrame(3))
In [4]: df
Out[4]:
date variable value
0 2000-01-03 A 0.469112
1 2000-01-04 A -0.282863
2 2000-01-05 A -1.509059
3 2000-01-03 B -1.135632
4 2000-01-04 B 1.212112
5 2000-01-05 B -0.173215
Remove the '=' after df.columns in your code and use this instead:
df.columns.values[[4,18,181,182,187,188,189,190,203,204]]=['Brand','Reason','Chat_helpful','Chat_expertise','Answered_questions','Recommend_chat','Alternate_help','Customer_comments','Agent_category','Agent_outcome']
6 2000-01-03 C 0.119209
7 2000-01-04 C -1.044236
8 2000-01-05 C -0.861849
9 2000-01-03 D -2.104569
10 2000-01-04 D -0.494929
11 2000-01-05 D 1.071804
To select out everything for variable A we could do:
In [5]: filtered = df[df["variable"] == "A"]
In [6]: filtered
Out[6]:
date variable value
0 2000-01-03 A 0.469112
1 2000-01-04 A -0.282863
2 2000-01-05 A -1.509059
But suppose we wish to do time series operations with the variables. A better
representation would be where the columns are the unique variables and an
index of dates identifies individual observations. To reshape the data into
this form, we use the DataFrame.pivot() method (also implemented as a
top level function pivot()):
In [7]: pivoted = df.pivot(index="date", columns="variable", values="value")
In [8]: pivoted
Out[8]:
variable A B C D
date
2000-01-03 0.469112 -1.135632 0.119209 -2.104569
2000-01-04 -0.282863 1.212112 -1.044236 -0.494929
2000-01-05 -1.509059 -0.173215 -0.861849 1.071804
If the values argument is omitted, and the input DataFrame has more than
one column of values which are not used as column or index inputs to pivot(),
then the resulting “pivoted” DataFrame will have hierarchical columns whose topmost level indicates the respective value
column:
In [9]: df["value2"] = df["value"] * 2
In [10]: pivoted = df.pivot(index="date", columns="variable")
In [11]: pivoted
Out[11]:
value ... value2
variable A B C ... B C D
date ...
2000-01-03 0.469112 -1.135632 0.119209 ... -2.271265 0.238417 -4.209138
2000-01-04 -0.282863 1.212112 -1.044236 ... 2.424224 -2.088472 -0.989859
2000-01-05 -1.509059 -0.173215 -0.861849 ... -0.346429 -1.723698 2.143608
[3 rows x 8 columns]
You can then select subsets from the pivoted DataFrame:
In [12]: pivoted["value2"]
Out[12]:
variable A B C D
date
2000-01-03 0.938225 -2.271265 0.238417 -4.209138
2000-01-04 -0.565727 2.424224 -2.088472 -0.989859
2000-01-05 -3.018117 -0.346429 -1.723698 2.143608
Note that this returns a view on the underlying data in the case where the data
are homogeneously-typed.
Note
pivot() will error with a ValueError: Index contains duplicate
entries, cannot reshape if the index/column pair is not unique. In this
case, consider using pivot_table() which is a generalization
of pivot that can handle duplicate values for one index/column pair.
Reshaping by stacking and unstacking#
Closely related to the pivot() method are the related
stack() and unstack() methods available on
Series and DataFrame. These methods are designed to work together with
MultiIndex objects (see the section on hierarchical indexing). Here are essentially what these methods do:
stack(): “pivot” a level of the (possibly hierarchical) column labels,
returning a DataFrame with an index with a new inner-most level of row
labels.
unstack(): (inverse operation of stack()) “pivot” a level of the
(possibly hierarchical) row index to the column axis, producing a reshaped
DataFrame with a new inner-most level of column labels.
The clearest way to explain is by example. Let’s take a prior example data set
from the hierarchical indexing section:
In [13]: tuples = list(
....: zip(
....: *[
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....: )
....: )
....:
In [14]: index = pd.MultiIndex.from_tuples(tuples, names=["first", "second"])
In [15]: df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=["A", "B"])
In [16]: df2 = df[:4]
In [17]: df2
Out[17]:
A B
first second
bar one 0.721555 -0.706771
two -1.039575 0.271860
baz one -0.424972 0.567020
two 0.276232 -1.087401
The stack() function “compresses” a level in the DataFrame columns to
produce either:
A Series, in the case of a simple column Index.
A DataFrame, in the case of a MultiIndex in the columns.
If the columns have a MultiIndex, you can choose which level to stack. The
stacked level becomes the new lowest level in a MultiIndex on the columns:
In [18]: stacked = df2.stack()
In [19]: stacked
Out[19]:
first second
bar one A 0.721555
B -0.706771
two A -1.039575
B 0.271860
baz one A -0.424972
B 0.567020
two A 0.276232
B -1.087401
dtype: float64
With a “stacked” DataFrame or Series (having a MultiIndex as the
index), the inverse operation of stack() is unstack(), which by default
unstacks the last level:
In [20]: stacked.unstack()
Out[20]:
A B
first second
bar one 0.721555 -0.706771
two -1.039575 0.271860
baz one -0.424972 0.567020
two 0.276232 -1.087401
In [21]: stacked.unstack(1)
Out[21]:
second one two
first
bar A 0.721555 -1.039575
B -0.706771 0.271860
baz A -0.424972 0.276232
B 0.567020 -1.087401
In [22]: stacked.unstack(0)
Out[22]:
first bar baz
second
one A 0.721555 -0.424972
B -0.706771 0.567020
two A -1.039575 0.276232
B 0.271860 -1.087401
If the indexes have names, you can use the level names instead of specifying
the level numbers:
In [23]: stacked.unstack("second")
Out[23]:
second one two
first
bar A 0.721555 -1.039575
B -0.706771 0.271860
baz A -0.424972 0.276232
B 0.567020 -1.087401
Notice that the stack() and unstack() methods implicitly sort the index
levels involved. Hence a call to stack() and then unstack(), or vice versa,
will result in a sorted copy of the original DataFrame or Series:
In [24]: index = pd.MultiIndex.from_product([[2, 1], ["a", "b"]])
In [25]: df = pd.DataFrame(np.random.randn(4), index=index, columns=["A"])
In [26]: df
Out[26]:
A
2 a -0.370647
b -1.157892
1 a -1.344312
b 0.844885
In [27]: all(df.unstack().stack() == df.sort_index())
Out[27]: True
The above code will raise a TypeError if the call to sort_index() is
removed.
Multiple levels#
You may also stack or unstack more than one level at a time by passing a list
of levels, in which case the end result is as if each level in the list were
processed individually.
In [28]: columns = pd.MultiIndex.from_tuples(
....: [
....: ("A", "cat", "long"),
....: ("B", "cat", "long"),
....: ("A", "dog", "short"),
....: ("B", "dog", "short"),
....: ],
....: names=["exp", "animal", "hair_length"],
....: )
....:
In [29]: df = pd.DataFrame(np.random.randn(4, 4), columns=columns)
In [30]: df
Out[30]:
exp A B A B
animal cat cat dog dog
hair_length long long short short
0 1.075770 -0.109050 1.643563 -1.469388
1 0.357021 -0.674600 -1.776904 -0.968914
2 -1.294524 0.413738 0.276662 -0.472035
3 -0.013960 -0.362543 -0.006154 -0.923061
In [31]: df.stack(level=["animal", "hair_length"])
Out[31]:
exp A B
animal hair_length
0 cat long 1.075770 -0.109050
dog short 1.643563 -1.469388
1 cat long 0.357021 -0.674600
dog short -1.776904 -0.968914
2 cat long -1.294524 0.413738
dog short 0.276662 -0.472035
3 cat long -0.013960 -0.362543
dog short -0.006154 -0.923061
The list of levels can contain either level names or level numbers (but
not a mixture of the two).
# df.stack(level=['animal', 'hair_length'])
# from above is equivalent to:
In [32]: df.stack(level=[1, 2])
Out[32]:
exp A B
animal hair_length
0 cat long 1.075770 -0.109050
dog short 1.643563 -1.469388
1 cat long 0.357021 -0.674600
dog short -1.776904 -0.968914
2 cat long -1.294524 0.413738
dog short 0.276662 -0.472035
3 cat long -0.013960 -0.362543
dog short -0.006154 -0.923061
Missing data#
These functions are intelligent about handling missing data and do not expect
each subgroup within the hierarchical index to have the same set of labels.
They also can handle the index being unsorted (but you can make it sorted by
calling sort_index(), of course). Here is a more complex example:
In [33]: columns = pd.MultiIndex.from_tuples(
....: [
....: ("A", "cat"),
....: ("B", "dog"),
....: ("B", "cat"),
....: ("A", "dog"),
....: ],
....: names=["exp", "animal"],
....: )
....:
In [34]: index = pd.MultiIndex.from_product(
....: [("bar", "baz", "foo", "qux"), ("one", "two")], names=["first", "second"]
....: )
....:
In [35]: df = pd.DataFrame(np.random.randn(8, 4), index=index, columns=columns)
In [36]: df2 = df.iloc[[0, 1, 2, 4, 5, 7]]
In [37]: df2
Out[37]:
exp A B A
animal cat dog cat dog
first second
bar one 0.895717 0.805244 -1.206412 2.565646
two 1.431256 1.340309 -1.170299 -0.226169
baz one 0.410835 0.813850 0.132003 -0.827317
foo one -1.413681 1.607920 1.024180 0.569605
two 0.875906 -2.211372 0.974466 -2.006747
qux two -1.226825 0.769804 -1.281247 -0.727707
As mentioned above, stack() can be called with a level argument to select
which level in the columns to stack:
In [38]: df2.stack("exp")
Out[38]:
animal cat dog
first second exp
bar one A 0.895717 2.565646
B -1.206412 0.805244
two A 1.431256 -0.226169
B -1.170299 1.340309
baz one A 0.410835 -0.827317
B 0.132003 0.813850
foo one A -1.413681 0.569605
B 1.024180 1.607920
two A 0.875906 -2.006747
B 0.974466 -2.211372
qux two A -1.226825 -0.727707
B -1.281247 0.769804
In [39]: df2.stack("animal")
Out[39]:
exp A B
first second animal
bar one cat 0.895717 -1.206412
dog 2.565646 0.805244
two cat 1.431256 -1.170299
dog -0.226169 1.340309
baz one cat 0.410835 0.132003
dog -0.827317 0.813850
foo one cat -1.413681 1.024180
dog 0.569605 1.607920
two cat 0.875906 0.974466
dog -2.006747 -2.211372
qux two cat -1.226825 -1.281247
dog -0.727707 0.769804
Unstacking can result in missing values if subgroups do not have the same
set of labels. By default, missing values will be replaced with the default
fill value for that data type, NaN for float, NaT for datetimelike,
etc. For integer types, by default data will be converted to float and missing
values will be set to NaN.
In [40]: df3 = df.iloc[[0, 1, 4, 7], [1, 2]]
In [41]: df3
Out[41]:
exp B
animal dog cat
first second
bar one 0.805244 -1.206412
two 1.340309 -1.170299
foo one 1.607920 1.024180
qux two 0.769804 -1.281247
In [42]: df3.unstack()
Out[42]:
exp B
animal dog cat
second one two one two
first
bar 0.805244 1.340309 -1.206412 -1.170299
foo 1.607920 NaN 1.024180 NaN
qux NaN 0.769804 NaN -1.281247
Alternatively, unstack takes an optional fill_value argument, for specifying
the value of missing data.
In [43]: df3.unstack(fill_value=-1e9)
Out[43]:
exp B
animal dog cat
second one two one two
first
bar 8.052440e-01 1.340309e+00 -1.206412e+00 -1.170299e+00
foo 1.607920e+00 -1.000000e+09 1.024180e+00 -1.000000e+09
qux -1.000000e+09 7.698036e-01 -1.000000e+09 -1.281247e+00
With a MultiIndex#
Unstacking when the columns are a MultiIndex is also careful about doing
the right thing:
In [44]: df[:3].unstack(0)
Out[44]:
exp A B ... A
animal cat dog ... cat dog
first bar baz bar ... baz bar baz
second ...
one 0.895717 0.410835 0.805244 ... 0.132003 2.565646 -0.827317
two 1.431256 NaN 1.340309 ... NaN -0.226169 NaN
[2 rows x 8 columns]
In [45]: df2.unstack(1)
Out[45]:
exp A B ... A
animal cat dog ... cat dog
second one two one ... two one two
first ...
bar 0.895717 1.431256 0.805244 ... -1.170299 2.565646 -0.226169
baz 0.410835 NaN 0.813850 ... NaN -0.827317 NaN
foo -1.413681 0.875906 1.607920 ... 0.974466 0.569605 -2.006747
qux NaN -1.226825 NaN ... -1.281247 NaN -0.727707
[4 rows x 8 columns]
Reshaping by melt#
The top-level melt() function and the corresponding DataFrame.melt()
are useful to massage a DataFrame into a format where one or more columns
are identifier variables, while all other columns, considered measured
variables, are “unpivoted” to the row axis, leaving just two non-identifier
columns, “variable” and “value”. The names of those columns can be customized
by supplying the var_name and value_name parameters.
For instance,
In [46]: cheese = pd.DataFrame(
....: {
....: "first": ["John", "Mary"],
....: "last": ["Doe", "Bo"],
....: "height": [5.5, 6.0],
....: "weight": [130, 150],
....: }
....: )
....:
In [47]: cheese
Out[47]:
first last height weight
0 John Doe 5.5 130
1 Mary Bo 6.0 150
In [48]: cheese.melt(id_vars=["first", "last"])
Out[48]:
first last variable value
0 John Doe height 5.5
1 Mary Bo height 6.0
2 John Doe weight 130.0
3 Mary Bo weight 150.0
In [49]: cheese.melt(id_vars=["first", "last"], var_name="quantity")
Out[49]:
first last quantity value
0 John Doe height 5.5
1 Mary Bo height 6.0
2 John Doe weight 130.0
3 Mary Bo weight 150.0
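The value column can be renamed in the same way through value_name; a small sketch reusing the cheese frame above:
cheese.melt(id_vars=["first", "last"], var_name="quantity", value_name="amount")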
When transforming a DataFrame using melt(), the index will be ignored. The original index values can be kept around by setting the ignore_index parameter to False (default is True). This will however duplicate them.
New in version 1.1.0.
In [50]: index = pd.MultiIndex.from_tuples([("person", "A"), ("person", "B")])
In [51]: cheese = pd.DataFrame(
....: {
....: "first": ["John", "Mary"],
....: "last": ["Doe", "Bo"],
....: "height": [5.5, 6.0],
....: "weight": [130, 150],
....: },
....: index=index,
....: )
....:
In [52]: cheese
Out[52]:
first last height weight
person A John Doe 5.5 130
B Mary Bo 6.0 150
In [53]: cheese.melt(id_vars=["first", "last"])
Out[53]:
first last variable value
0 John Doe height 5.5
1 Mary Bo height 6.0
2 John Doe weight 130.0
3 Mary Bo weight 150.0
In [54]: cheese.melt(id_vars=["first", "last"], ignore_index=False)
Out[54]:
first last variable value
person A John Doe height 5.5
B Mary Bo height 6.0
A John Doe weight 130.0
B Mary Bo weight 150.0
Another way to transform is to use the wide_to_long() panel data
convenience function. It is less flexible than melt(), but more
user-friendly.
In [55]: dft = pd.DataFrame(
....: {
....: "A1970": {0: "a", 1: "b", 2: "c"},
....: "A1980": {0: "d", 1: "e", 2: "f"},
....: "B1970": {0: 2.5, 1: 1.2, 2: 0.7},
....: "B1980": {0: 3.2, 1: 1.3, 2: 0.1},
....: "X": dict(zip(range(3), np.random.randn(3))),
....: }
....: )
....:
In [56]: dft["id"] = dft.index
In [57]: dft
Out[57]:
A1970 A1980 B1970 B1980 X id
0 a d 2.5 3.2 -0.121306 0
1 b e 1.2 1.3 -0.097883 1
2 c f 0.7 0.1 0.695775 2
In [58]: pd.wide_to_long(dft, ["A", "B"], i="id", j="year")
Out[58]:
X A B
id year
0 1970 -0.121306 a 2.5
1 1970 -0.097883 b 1.2
2 1970 0.695775 c 0.7
0 1980 -0.121306 d 3.2
1 1980 -0.097883 e 1.3
2 1980 0.695775 f 0.1
Combining with stats and GroupBy#
It should be no shock that combining pivot() / stack() / unstack() with
GroupBy and the basic Series and DataFrame statistical functions can produce
some very expressive and fast data manipulations.
In [59]: df
Out[59]:
exp A B A
animal cat dog cat dog
first second
bar one 0.895717 0.805244 -1.206412 2.565646
two 1.431256 1.340309 -1.170299 -0.226169
baz one 0.410835 0.813850 0.132003 -0.827317
two -0.076467 -1.187678 1.130127 -1.436737
foo one -1.413681 1.607920 1.024180 0.569605
two 0.875906 -2.211372 0.974466 -2.006747
qux one -0.410001 -0.078638 0.545952 -1.219217
two -1.226825 0.769804 -1.281247 -0.727707
In [60]: df.stack().mean(1).unstack()
Out[60]:
animal cat dog
first second
bar one -0.155347 1.685445
two 0.130479 0.557070
baz one 0.271419 -0.006733
two 0.526830 -1.312207
foo one -0.194750 1.088763
two 0.925186 -2.109060
qux one 0.067976 -0.648927
two -1.254036 0.021048
# same result, another way
In [61]: df.groupby(level=1, axis=1).mean()
Out[61]:
animal cat dog
first second
bar one -0.155347 1.685445
two 0.130479 0.557070
baz one 0.271419 -0.006733
two 0.526830 -1.312207
foo one -0.194750 1.088763
two 0.925186 -2.109060
qux one 0.067976 -0.648927
two -1.254036 0.021048
In [62]: df.stack().groupby(level=1).mean()
Out[62]:
exp A B
second
one 0.071448 0.455513
two -0.424186 -0.204486
In [63]: df.mean().unstack(0)
Out[63]:
exp A B
animal
cat 0.060843 0.018596
dog -0.413580 0.232430
Pivot tables#
While pivot() provides general purpose pivoting with various
data types (strings, numerics, etc.), pandas also provides pivot_table()
for pivoting with aggregation of numeric data.
The function pivot_table() can be used to create spreadsheet-style
pivot tables. See the cookbook for some advanced
strategies.
It takes a number of arguments:
data: a DataFrame object.
values: a column or a list of columns to aggregate.
index: a column, Grouper, array which has the same length as data, or list of them.
Keys to group by on the pivot table index. If an array is passed, it is used in the same manner as column values.
columns: a column, Grouper, array which has the same length as data, or list of them.
Keys to group by on the pivot table column. If an array is passed, it is used in the same manner as column values.
aggfunc: function to use for aggregation, defaulting to numpy.mean.
Consider a data set like this:
In [64]: import datetime
In [65]: df = pd.DataFrame(
....: {
....: "A": ["one", "one", "two", "three"] * 6,
....: "B": ["A", "B", "C"] * 8,
....: "C": ["foo", "foo", "foo", "bar", "bar", "bar"] * 4,
....: "D": np.random.randn(24),
....: "E": np.random.randn(24),
....: "F": [datetime.datetime(2013, i, 1) for i in range(1, 13)]
....: + [datetime.datetime(2013, i, 15) for i in range(1, 13)],
....: }
....: )
....:
In [66]: df
Out[66]:
A B C D E F
0 one A foo 0.341734 -0.317441 2013-01-01
1 one B foo 0.959726 -1.236269 2013-02-01
2 two C foo -1.110336 0.896171 2013-03-01
3 three A bar -0.619976 -0.487602 2013-04-01
4 one B bar 0.149748 -0.082240 2013-05-01
.. ... .. ... ... ... ...
19 three B foo 0.690579 -2.213588 2013-08-15
20 one C foo 0.995761 1.063327 2013-09-15
21 one A bar 2.396780 1.266143 2013-10-15
22 two B bar 0.014871 0.299368 2013-11-15
23 three C bar 3.357427 -0.863838 2013-12-15
[24 rows x 6 columns]
We can produce pivot tables from this data very easily:
In [67]: pd.pivot_table(df, values="D", index=["A", "B"], columns=["C"])
Out[67]:
C bar foo
A B
one A 1.120915 -0.514058
B -0.338421 0.002759
C -0.538846 0.699535
three A -1.181568 NaN
B NaN 0.433512
C 0.588783 NaN
two A NaN 1.000985
B 0.158248 NaN
C NaN 0.176180
In [68]: pd.pivot_table(df, values="D", index=["B"], columns=["A", "C"], aggfunc=np.sum)
Out[68]:
A one three two
C bar foo bar foo bar foo
B
A 2.241830 -1.028115 -2.363137 NaN NaN 2.001971
B -0.676843 0.005518 NaN 0.867024 0.316495 NaN
C -1.077692 1.399070 1.177566 NaN NaN 0.352360
In [69]: pd.pivot_table(
....: df, values=["D", "E"],
....: index=["B"],
....: columns=["A", "C"],
....: aggfunc=np.sum,
....: )
....:
Out[69]:
D ... E
A one three ... three two
C bar foo bar ... foo bar foo
B ...
A 2.241830 -1.028115 -2.363137 ... NaN NaN 0.128491
B -0.676843 0.005518 NaN ... -2.128743 -0.194294 NaN
C -1.077692 1.399070 1.177566 ... NaN NaN 0.872482
[3 rows x 12 columns]
The result object is a DataFrame having potentially hierarchical indexes on the
rows and columns. If the values column name is not given, the pivot table
will include all of the data in an additional level of hierarchy in the columns:
In [70]: pd.pivot_table(df[["A", "B", "C", "D", "E"]], index=["A", "B"], columns=["C"])
Out[70]:
D E
C bar foo bar foo
A B
one A 1.120915 -0.514058 1.393057 -0.021605
B -0.338421 0.002759 0.684140 -0.551692
C -0.538846 0.699535 -0.988442 0.747859
three A -1.181568 NaN 0.961289 NaN
B NaN 0.433512 NaN -1.064372
C 0.588783 NaN -0.131830 NaN
two A NaN 1.000985 NaN 0.064245
B 0.158248 NaN -0.097147 NaN
C NaN 0.176180 NaN 0.436241
Also, you can use Grouper for index and columns keywords. For detail of Grouper, see Grouping with a Grouper specification.
In [71]: pd.pivot_table(df, values="D", index=pd.Grouper(freq="M", key="F"), columns="C")
Out[71]:
C bar foo
F
2013-01-31 NaN -0.514058
2013-02-28 NaN 0.002759
2013-03-31 NaN 0.176180
2013-04-30 -1.181568 NaN
2013-05-31 -0.338421 NaN
2013-06-30 -0.538846 NaN
2013-07-31 NaN 1.000985
2013-08-31 NaN 0.433512
2013-09-30 NaN 0.699535
2013-10-31 1.120915 NaN
2013-11-30 0.158248 NaN
2013-12-31 0.588783 NaN
You can render a nice output of the table omitting the missing values by
calling to_string() if you wish:
In [72]: table = pd.pivot_table(df, index=["A", "B"], columns=["C"], values=["D", "E"])
In [73]: print(table.to_string(na_rep=""))
D E
C bar foo bar foo
A B
one A 1.120915 -0.514058 1.393057 -0.021605
B -0.338421 0.002759 0.684140 -0.551692
C -0.538846 0.699535 -0.988442 0.747859
three A -1.181568 0.961289
B 0.433512 -1.064372
C 0.588783 -0.131830
two A 1.000985 0.064245
B 0.158248 -0.097147
C 0.176180 0.436241
Note that pivot_table() is also available as an instance method on DataFrame,i.e. DataFrame.pivot_table().
Adding margins#
If you pass margins=True to pivot_table(), special All columns and
rows will be added with partial group aggregates across the categories on the
rows and columns:
In [74]: table = df.pivot_table(
....: index=["A", "B"],
....: columns="C",
....: values=["D", "E"],
....: margins=True,
....: aggfunc=np.std
....: )
....:
In [75]: table
Out[75]:
D E
C bar foo All bar foo All
A B
one A 1.804346 1.210272 1.569879 0.179483 0.418374 0.858005
B 0.690376 1.353355 0.898998 1.083825 0.968138 1.101401
C 0.273641 0.418926 0.771139 1.689271 0.446140 1.422136
three A 0.794212 NaN 0.794212 2.049040 NaN 2.049040
B NaN 0.363548 0.363548 NaN 1.625237 1.625237
C 3.915454 NaN 3.915454 1.035215 NaN 1.035215
two A NaN 0.442998 0.442998 NaN 0.447104 0.447104
B 0.202765 NaN 0.202765 0.560757 NaN 0.560757
C NaN 1.819408 1.819408 NaN 0.650439 0.650439
All 1.556686 0.952552 1.246608 1.250924 0.899904 1.059389
Additionally, you can call DataFrame.stack() to display a pivoted DataFrame
as having a multi-level index:
In [76]: table.stack()
Out[76]:
D E
A B C
one A All 1.569879 0.858005
bar 1.804346 0.179483
foo 1.210272 0.418374
B All 0.898998 1.101401
bar 0.690376 1.083825
... ... ...
two C All 1.819408 0.650439
foo 1.819408 0.650439
All All 1.246608 1.059389
bar 1.556686 1.250924
foo 0.952552 0.899904
[24 rows x 2 columns]
Cross tabulations#
Use crosstab() to compute a cross-tabulation of two (or more)
factors. By default crosstab() computes a frequency table of the factors
unless an array of values and an aggregation function are passed.
It takes a number of arguments:
index: array-like, values to group by in the rows.
columns: array-like, values to group by in the columns.
values: array-like, optional, array of values to aggregate according to
the factors.
aggfunc: function, optional, If no values array is passed, computes a
frequency table.
rownames: sequence, default None, must match number of row arrays passed.
colnames: sequence, default None, if passed, must match number of column
arrays passed.
margins: boolean, default False, Add row/column margins (subtotals)
normalize: boolean, {‘all’, ‘index’, ‘columns’}, or {0,1}, default False.
Normalize by dividing all values by the sum of values.
Any Series passed will have its name attribute used unless row or column
names for the cross-tabulation are specified.
For example:
In [77]: foo, bar, dull, shiny, one, two = "foo", "bar", "dull", "shiny", "one", "two"
In [78]: a = np.array([foo, foo, bar, bar, foo, foo], dtype=object)
In [79]: b = np.array([one, one, two, one, two, one], dtype=object)
In [80]: c = np.array([dull, dull, shiny, dull, dull, shiny], dtype=object)
In [81]: pd.crosstab(a, [b, c], rownames=["a"], colnames=["b", "c"])
Out[81]:
b one two
c dull shiny dull shiny
a
bar 1 0 0 1
foo 2 1 1 0
If crosstab() receives only two Series, it will provide a frequency table.
In [82]: df = pd.DataFrame(
....: {"A": [1, 2, 2, 2, 2], "B": [3, 3, 4, 4, 4], "C": [1, 1, np.nan, 1, 1]}
....: )
....:
In [83]: df
Out[83]:
A B C
0 1 3 1.0
1 2 3 1.0
2 2 4 NaN
3 2 4 1.0
4 2 4 1.0
In [84]: pd.crosstab(df["A"], df["B"])
Out[84]:
B 3 4
A
1 1 0
2 1 3
crosstab() can also be applied
to Categorical data.
In [85]: foo = pd.Categorical(["a", "b"], categories=["a", "b", "c"])
In [86]: bar = pd.Categorical(["d", "e"], categories=["d", "e", "f"])
In [87]: pd.crosstab(foo, bar)
Out[87]:
col_0 d e
row_0
a 1 0
b 0 1
If you want to include all data categories even if the actual data does
not contain any instances of a particular category, you should set dropna=False.
For example:
In [88]: pd.crosstab(foo, bar, dropna=False)
Out[88]:
col_0 d e f
row_0
a 1 0 0
b 0 1 0
c 0 0 0
Normalization#
Frequency tables can also be normalized to show percentages rather than counts
using the normalize argument:
In [89]: pd.crosstab(df["A"], df["B"], normalize=True)
Out[89]:
B 3 4
A
1 0.2 0.0
2 0.2 0.6
normalize can also normalize values within each row or within each column:
In [90]: pd.crosstab(df["A"], df["B"], normalize="columns")
Out[90]:
B 3 4
A
1 0.5 0.0
2 0.5 1.0
crosstab() can also be passed a third Series and an aggregation function
(aggfunc) that will be applied to the values of the third Series within
each group defined by the first two Series:
In [91]: pd.crosstab(df["A"], df["B"], values=df["C"], aggfunc=np.sum)
Out[91]:
B 3 4
A
1 1.0 NaN
2 1.0 2.0
Adding margins#
Finally, one can also add margins or normalize this output.
In [92]: pd.crosstab(
....: df["A"], df["B"], values=df["C"], aggfunc=np.sum, normalize=True, margins=True
....: )
....:
Out[92]:
B 3 4 All
A
1 0.25 0.0 0.25
2 0.25 0.5 0.75
All 0.50 0.5 1.00
Tiling#
The cut() function computes groupings for the values of the input
array and is often used to transform continuous variables to discrete or
categorical variables:
In [93]: ages = np.array([10, 15, 13, 12, 23, 25, 28, 59, 60])
In [94]: pd.cut(ages, bins=3)
Out[94]:
[(9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (26.667, 43.333], (43.333, 60.0], (43.333, 60.0]]
Categories (3, interval[float64, right]): [(9.95, 26.667] < (26.667, 43.333] < (43.333, 60.0]]
If the bins keyword is an integer, then equal-width bins are formed.
Alternatively we can specify custom bin-edges:
In [95]: c = pd.cut(ages, bins=[0, 18, 35, 70])
In [96]: c
Out[96]:
[(0, 18], (0, 18], (0, 18], (0, 18], (18, 35], (18, 35], (18, 35], (35, 70], (35, 70]]
Categories (3, interval[int64, right]): [(0, 18] < (18, 35] < (35, 70]]
If the bins keyword is an IntervalIndex, then these will be
used to bin the passed data:
pd.cut([25, 20, 50], bins=c.categories)
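Bins can also be given human-readable labels via the labels argument; a small sketch reusing the ages array above:
pd.cut(ages, bins=[0, 18, 35, 70], labels=["child", "youth", "adult"])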
Computing indicator / dummy variables#
To convert a categorical variable into a “dummy” or “indicator” DataFrame,
for example a column in a DataFrame (a Series) which has k distinct
values, use get_dummies() to derive a DataFrame containing k columns
of 1s and 0s:
In [97]: df = pd.DataFrame({"key": list("bbacab"), "data1": range(6)})
In [98]: pd.get_dummies(df["key"])
Out[98]:
a b c
0 0 1 0
1 0 1 0
2 1 0 0
3 0 0 1
4 1 0 0
5 0 1 0
Sometimes it’s useful to prefix the column names, for example when merging the result
with the original DataFrame:
In [99]: dummies = pd.get_dummies(df["key"], prefix="key")
In [100]: dummies
Out[100]:
key_a key_b key_c
0 0 1 0
1 0 1 0
2 1 0 0
3 0 0 1
4 1 0 0
5 0 1 0
In [101]: df[["data1"]].join(dummies)
Out[101]:
data1 key_a key_b key_c
0 0 0 1 0
1 1 0 1 0
2 2 1 0 0
3 3 0 0 1
4 4 1 0 0
5 5 0 1 0
This function is often used along with discretization functions like cut():
In [102]: values = np.random.randn(10)
In [103]: values
Out[103]:
array([ 0.4082, -1.0481, -0.0257, -0.9884, 0.0941, 1.2627, 1.29 ,
0.0824, -0.0558, 0.5366])
In [104]: bins = [0, 0.2, 0.4, 0.6, 0.8, 1]
In [105]: pd.get_dummies(pd.cut(values, bins))
Out[105]:
(0.0, 0.2] (0.2, 0.4] (0.4, 0.6] (0.6, 0.8] (0.8, 1.0]
0 0 0 1 0 0
1 0 0 0 0 0
2 0 0 0 0 0
3 0 0 0 0 0
4 1 0 0 0 0
5 0 0 0 0 0
6 0 0 0 0 0
7 1 0 0 0 0
8 0 0 0 0 0
9 0 0 1 0 0
See also Series.str.get_dummies.
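For reference, Series.str.get_dummies() splits delimiter-separated strings before encoding; a small sketch:
pd.Series(["a|b", "a", "b|c"]).str.get_dummies(sep="|")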
get_dummies() also accepts a DataFrame. By default all categorical
variables (categorical in the statistical sense, those with object or
categorical dtype) are encoded as dummy variables.
In [106]: df = pd.DataFrame({"A": ["a", "b", "a"], "B": ["c", "c", "b"], "C": [1, 2, 3]})
In [107]: pd.get_dummies(df)
Out[107]:
C A_a A_b B_b B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
All non-object columns are included untouched in the output. You can control
the columns that are encoded with the columns keyword.
In [108]: pd.get_dummies(df, columns=["A"])
Out[108]:
B C A_a A_b
0 c 1 1 0
1 c 2 0 1
2 b 3 1 0
Notice that the B column is still included in the output; it just hasn’t
been encoded. You can drop B before calling get_dummies if you don’t
want to include it in the output.
As with the Series version, you can pass values for the prefix and
prefix_sep. By default the column name is used as the prefix, and _ as
the prefix separator. You can specify prefix and prefix_sep in 3 ways:
string: Use the same value for prefix or prefix_sep for each column
to be encoded.
list: Must be the same length as the number of columns being encoded.
dict: Mapping column name to prefix.
In [109]: simple = pd.get_dummies(df, prefix="new_prefix")
In [110]: simple
Out[110]:
C new_prefix_a new_prefix_b new_prefix_b new_prefix_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
In [111]: from_list = pd.get_dummies(df, prefix=["from_A", "from_B"])
In [112]: from_list
Out[112]:
C from_A_a from_A_b from_B_b from_B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
In [113]: from_dict = pd.get_dummies(df, prefix={"B": "from_B", "A": "from_A"})
In [114]: from_dict
Out[114]:
C from_A_a from_A_b from_B_b from_B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
Sometimes it will be useful to only keep k-1 levels of a categorical
variable to avoid collinearity when feeding the result to statistical models.
You can switch to this mode by turning on drop_first.
In [115]: s = pd.Series(list("abcaa"))
In [116]: pd.get_dummies(s)
Out[116]:
a b c
0 1 0 0
1 0 1 0
2 0 0 1
3 1 0 0
4 1 0 0
In [117]: pd.get_dummies(s, drop_first=True)
Out[117]:
b c
0 0 0
1 1 0
2 0 1
3 0 0
4 0 0
When a column contains only one level, it will be omitted in the result.
In [118]: df = pd.DataFrame({"A": list("aaaaa"), "B": list("ababc")})
In [119]: pd.get_dummies(df)
Out[119]:
A_a B_a B_b B_c
0 1 1 0 0
1 1 0 1 0
2 1 1 0 0
3 1 0 1 0
4 1 0 0 1
In [120]: pd.get_dummies(df, drop_first=True)
Out[120]:
B_b B_c
0 0 0
1 1 0
2 0 0
3 1 0
4 0 1
By default new columns will have np.uint8 dtype.
To choose another dtype, use the dtype argument:
In [121]: df = pd.DataFrame({"A": list("abc"), "B": [1.1, 2.2, 3.3]})
In [122]: pd.get_dummies(df, dtype=bool).dtypes
Out[122]:
B float64
A_a bool
A_b bool
A_c bool
dtype: object
New in version 1.5.0.
To convert a “dummy” or “indicator” DataFrame into a categorical DataFrame,
for example k columns of a DataFrame containing 1s and 0s, use
from_dummies() to derive a DataFrame which has k distinct values:
In [123]: df = pd.DataFrame({"prefix_a": [0, 1, 0], "prefix_b": [1, 0, 1]})
In [124]: df
Out[124]:
prefix_a prefix_b
0 0 1
1 1 0
2 0 1
In [125]: pd.from_dummies(df, sep="_")
Out[125]:
prefix
0 b
1 a
2 b
Dummy coded data only requires k - 1 categories to be included; in this case
the k-th category is the default category, implied by not being assigned any of
the other k - 1 categories. The default category can be passed via default_category.
In [126]: df = pd.DataFrame({"prefix_a": [0, 1, 0]})
In [127]: df
Out[127]:
prefix_a
0 0
1 1
2 0
In [128]: pd.from_dummies(df, sep="_", default_category="b")
Out[128]:
prefix
0 b
1 a
2 b
Factorizing values#
To encode 1-d values as an enumerated type use factorize():
In [129]: x = pd.Series(["A", "A", np.nan, "B", 3.14, np.inf])
In [130]: x
Out[130]:
0 A
1 A
2 NaN
3 B
4 3.14
5 inf
dtype: object
In [131]: labels, uniques = pd.factorize(x)
In [132]: labels
Out[132]: array([ 0, 0, -1, 1, 2, 3])
In [133]: uniques
Out[133]: Index(['A', 'B', 3.14, inf], dtype='object')
Note that factorize() is similar to numpy.unique, but differs in its
handling of NaN:
Note
The following numpy.unique will fail under Python 3 with a TypeError
because of an ordering bug. See also
here.
In [134]: ser = pd.Series(['A', 'A', np.nan, 'B', 3.14, np.inf])
In [135]: pd.factorize(ser, sort=True)
Out[135]: (array([ 2, 2, -1, 3, 0, 1]), Index([3.14, inf, 'A', 'B'], dtype='object'))
In [136]: np.unique(ser, return_inverse=True)[::-1]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[136], line 1
----> 1 np.unique(ser, return_inverse=True)[::-1]
File <__array_function__ internals>:180, in unique(*args, **kwargs)
File ~/micromamba/envs/test/lib/python3.8/site-packages/numpy/lib/arraysetops.py:274, in unique(ar, return_index, return_inverse, return_counts, axis, equal_nan)
272 ar = np.asanyarray(ar)
273 if axis is None:
--> 274 ret = _unique1d(ar, return_index, return_inverse, return_counts,
275 equal_nan=equal_nan)
276 return _unpack_tuple(ret)
278 # axis was specified and not None
File ~/micromamba/envs/test/lib/python3.8/site-packages/numpy/lib/arraysetops.py:333, in _unique1d(ar, return_index, return_inverse, return_counts, equal_nan)
330 optional_indices = return_index or return_inverse
332 if optional_indices:
--> 333 perm = ar.argsort(kind='mergesort' if return_index else 'quicksort')
334 aux = ar[perm]
335 else:
TypeError: '<' not supported between instances of 'float' and 'str'
Note
If you just want to handle one column as a categorical variable (like R’s factor),
you can use df["cat_col"] = pd.Categorical(df["col"]) or
df["cat_col"] = df["col"].astype("category"). For full docs on Categorical,
see the Categorical introduction and the
API documentation.
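For instance, a minimal sketch of the astype("category") route from the note above (the frame and its values are made up for illustration):

```python
import pandas as pd

# Treat a single column as a categorical variable, as described in the note above.
df = pd.DataFrame({"col": ["low", "high", "low", "medium"]})
df["cat_col"] = df["col"].astype("category")

print(df["cat_col"].dtype)               # category
print(df["cat_col"].cat.categories)      # Index(['high', 'low', 'medium'], dtype='object')
print(df["cat_col"].cat.codes.tolist())  # per-row integer codes, here [1, 0, 1, 2]
```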
Examples#
In this section, we will review frequently asked questions and examples. The
column names and relevant column values are named to correspond with how this
DataFrame will be pivoted in the answers below.
In [137]: np.random.seed([3, 1415])
In [138]: n = 20
In [139]: cols = np.array(["key", "row", "item", "col"])
In [140]: df = cols + pd.DataFrame(
.....: (np.random.randint(5, size=(n, 4)) // [2, 1, 2, 1]).astype(str)
.....: )
.....:
In [141]: df.columns = cols
In [142]: df = df.join(pd.DataFrame(np.random.rand(n, 2).round(2)).add_prefix("val"))
In [143]: df
Out[143]:
key row item col val0 val1
0 key0 row3 item1 col3 0.81 0.04
1 key1 row2 item1 col2 0.44 0.07
2 key1 row0 item1 col0 0.77 0.01
3 key0 row4 item0 col2 0.15 0.59
4 key1 row0 item2 col1 0.81 0.64
.. ... ... ... ... ... ...
15 key0 row3 item1 col1 0.31 0.23
16 key0 row0 item2 col3 0.86 0.01
17 key0 row4 item0 col3 0.64 0.21
18 key2 row2 item2 col0 0.13 0.45
19 key0 row2 item0 col4 0.37 0.70
[20 rows x 6 columns]
Pivoting with single aggregations#
Suppose we wanted to pivot df such that the col values are columns,
row values are the index, and the mean of val0 are the values. In
particular, the resulting DataFrame should look like:
col col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65
row2 0.13 NaN 0.395 0.500 0.25
row3 NaN 0.310 NaN 0.545 NaN
row4 NaN 0.100 0.395 0.760 0.24
This solution uses pivot_table(). Also note that
aggfunc='mean' is the default. It is included here to be explicit.
In [144]: df.pivot_table(values="val0", index="row", columns="col", aggfunc="mean")
Out[144]:
col col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65
row2 0.13 NaN 0.395 0.500 0.25
row3 NaN 0.310 NaN 0.545 NaN
row4 NaN 0.100 0.395 0.760 0.24
Note that we can also replace the missing values by using the fill_value
parameter.
In [145]: df.pivot_table(
.....: values="val0",
.....: index="row",
.....: columns="col",
.....: aggfunc="mean",
.....: fill_value=0,
.....: )
.....:
Out[145]:
col col0 col1 col2 col3 col4
row
row0 0.77 0.605 0.000 0.860 0.65
row2 0.13 0.000 0.395 0.500 0.25
row3 0.00 0.310 0.000 0.545 0.00
row4 0.00 0.100 0.395 0.760 0.24
Also note that we can pass in other aggregation functions as well. For example,
we can also pass in sum.
In [146]: df.pivot_table(
.....: values="val0",
.....: index="row",
.....: columns="col",
.....: aggfunc="sum",
.....: fill_value=0,
.....: )
.....:
Out[146]:
col col0 col1 col2 col3 col4
row
row0 0.77 1.21 0.00 0.86 0.65
row2 0.13 0.00 0.79 0.50 0.50
row3 0.00 0.31 0.00 1.09 0.00
row4 0.00 0.10 0.79 1.52 0.24
Another aggregation we can do is to calculate the frequency with which the columns
and rows occur together, a.k.a. “cross tabulation”. To do this, we can pass
size to the aggfunc parameter.
In [147]: df.pivot_table(index="row", columns="col", fill_value=0, aggfunc="size")
Out[147]:
col col0 col1 col2 col3 col4
row
row0 1 2 0 1 1
row2 1 0 2 1 2
row3 0 1 0 2 0
row4 0 1 2 2 1
Pivoting with multiple aggregations#
We can also perform multiple aggregations. For example, to perform both a
sum and mean, we can pass in a list to the aggfunc argument.
In [148]: df.pivot_table(
.....: values="val0",
.....: index="row",
.....: columns="col",
.....: aggfunc=["mean", "sum"],
.....: )
.....:
Out[148]:
mean sum
col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65 0.77 1.21 NaN 0.86 0.65
row2 0.13 NaN 0.395 0.500 0.25 0.13 NaN 0.79 0.50 0.50
row3 NaN 0.310 NaN 0.545 NaN NaN 0.31 NaN 1.09 NaN
row4 NaN 0.100 0.395 0.760 0.24 NaN 0.10 0.79 1.52 0.24
Note that to aggregate over multiple value columns, we can pass in a list to the
values parameter.
In [149]: df.pivot_table(
.....: values=["val0", "val1"],
.....: index="row",
.....: columns="col",
.....: aggfunc=["mean"],
.....: )
.....:
Out[149]:
mean
val0 val1
col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65 0.01 0.745 NaN 0.010 0.02
row2 0.13 NaN 0.395 0.500 0.25 0.45 NaN 0.34 0.440 0.79
row3 NaN 0.310 NaN 0.545 NaN NaN 0.230 NaN 0.075 NaN
row4 NaN 0.100 0.395 0.760 0.24 NaN 0.070 0.42 0.300 0.46
Note that to subdivide over multiple columns, we can pass in a list to the
columns parameter.
In [150]: df.pivot_table(
.....: values=["val0"],
.....: index="row",
.....: columns=["item", "col"],
.....: aggfunc=["mean"],
.....: )
.....:
Out[150]:
mean
val0
item item0 item1 item2
col col2 col3 col4 col0 col1 col2 col3 col4 col0 col1 col3 col4
row
row0 NaN NaN NaN 0.77 NaN NaN NaN NaN NaN 0.605 0.86 0.65
row2 0.35 NaN 0.37 NaN NaN 0.44 NaN NaN 0.13 NaN 0.50 0.13
row3 NaN NaN NaN NaN 0.31 NaN 0.81 NaN NaN NaN 0.28 NaN
row4 0.15 0.64 NaN NaN 0.10 0.64 0.88 0.24 NaN NaN NaN NaN
Exploding a list-like column#
New in version 0.25.0.
Sometimes the values in a column are list-like.
In [151]: keys = ["panda1", "panda2", "panda3"]
In [152]: values = [["eats", "shoots"], ["shoots", "leaves"], ["eats", "leaves"]]
In [153]: df = pd.DataFrame({"keys": keys, "values": values})
In [154]: df
Out[154]:
keys values
0 panda1 [eats, shoots]
1 panda2 [shoots, leaves]
2 panda3 [eats, leaves]
We can ‘explode’ the values column, transforming each list-like to a separate row, by using explode(). This will replicate the index values from the original row:
In [155]: df["values"].explode()
Out[155]:
0 eats
0 shoots
1 shoots
1 leaves
2 eats
2 leaves
Name: values, dtype: object
You can also explode the column in the DataFrame.
In [156]: df.explode("values")
Out[156]:
keys values
0 panda1 eats
0 panda1 shoots
1 panda2 shoots
1 panda2 leaves
2 panda3 eats
2 panda3 leaves
Series.explode() will replace empty lists with np.nan and preserve scalar entries. The dtype of the resulting Series is always object.
In [157]: s = pd.Series([[1, 2, 3], "foo", [], ["a", "b"]])
In [158]: s
Out[158]:
0 [1, 2, 3]
1 foo
2 []
3 [a, b]
dtype: object
In [159]: s.explode()
Out[159]:
0 1
0 2
0 3
1 foo
2 NaN
3 a
3 b
dtype: object
Here is a typical use case. You have comma-separated strings in a column and want to expand this.
In [160]: df = pd.DataFrame([{"var1": "a,b,c", "var2": 1}, {"var1": "d,e,f", "var2": 2}])
In [161]: df
Out[161]:
var1 var2
0 a,b,c 1
1 d,e,f 2
Creating a long-form DataFrame is now straightforward using explode and chained operations:
In [162]: df.assign(var1=df.var1.str.split(",")).explode("var1")
Out[162]:
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
| 862
| 1,146
|
Change multiple column names in pandas dataframe (not all column names) at the same time using index numbers
I have successfully changed a single column name in the dataframe using this:
df.columns=['new_name' if x=='old_name' else x for x in df.columns]
However I have lots of columns to update (but not all 240 of them) and I don't want to have to write it out for every single change if I can help it.
I have tried to follow the advice from @StefanK in this thread:
Changing multiple column names but not all of them - Pandas Python
my code:
df.columns=[[4,18,181,182,187,188,189,190,203,204]]=['Brand','Reason','Chat_helpful','Chat_expertise','Answered_questions','Recommend_chat','Alternate_help','Customer_comments','Agent_category','Agent_outcome']
but i am getting an error message:
File "<ipython-input-17-2808488b712d>", line 3
df.columns=[[4,18,181,182,187,188,189,190,203,204]]=['Brand','Reason','Chat_helpful','Chat_expertise','Answered_questions','Recommend_chat','Alternate_help','Customer_comments','Agent_category','Agent_outcome']
^
SyntaxError: can't assign to literal
So having googled the error and read many more S.O. questions here, it looks to me like it is trying to read the numbers as integers instead of as an index? I'm not certain here though.
So how do I fix it so it looks at the numbers as the index?! The column names I am replacing are at least 10 words long each, so I'm keen not to have to type them all out! My only ideas are to use iloc somehow, but I'm going into new territory here!
Really appreciate some help, please.
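One way to attack this, sketched on a small made-up frame rather than the asker's real 240-column one (the positions and replacement names below are illustrative): build a plain Python list of the column names, overwrite the entries at the chosen integer positions, and assign the list back to df.columns. This avoids the double assignment that triggers the SyntaxError above.

```python
import pandas as pd

# Illustrative frame; the real use case has roughly 240 columns.
df = pd.DataFrame([[1, 2, 3, 4]], columns=["w", "x", "y", "z"])

positions = [1, 3]                # integer positions of the columns to rename
new_names = ["Brand", "Reason"]   # replacement names, same length as positions

cols = df.columns.tolist()        # work on a plain list, then assign back once
for pos, name in zip(positions, new_names):
    cols[pos] = name
df.columns = cols

print(df.columns.tolist())        # ['w', 'Brand', 'y', 'Reason']
```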
|
66,960,269
|
Drop duplicates based on condition
|
<p>I have the following pandas dataframe:</p>
<pre><code>df = pd.DataFrame([[5, 10],[8, 40],[8, 50],[10, 390], [10, 395], [10, 405], [11, 390], [11, 395], [11, 405], [13, 390], [13, 395], [13, 405]], columns=['index', 'so_id'])
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>so_id</th>
</tr>
</thead>
<tbody>
<tr>
<td>5</td>
<td>10</td>
</tr>
<tr>
<td>8</td>
<td>40</td>
</tr>
<tr>
<td>8</td>
<td>50</td>
</tr>
<tr>
<td>10</td>
<td>390</td>
</tr>
<tr>
<td>10</td>
<td>395</td>
</tr>
<tr>
<td>10</td>
<td>405</td>
</tr>
<tr>
<td>11</td>
<td>390</td>
</tr>
<tr>
<td>11</td>
<td>395</td>
</tr>
<tr>
<td>11</td>
<td>405</td>
</tr>
<tr>
<td>13</td>
<td>390</td>
</tr>
<tr>
<td>13</td>
<td>395</td>
</tr>
<tr>
<td>13</td>
<td>405</td>
</tr>
</tbody>
</table>
</div>
<p>The desired output would be the following:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>so_id</th>
</tr>
</thead>
<tbody>
<tr>
<td>5</td>
<td>10</td>
</tr>
<tr>
<td>8</td>
<td>40</td>
</tr>
<tr>
<td>10</td>
<td>390</td>
</tr>
<tr>
<td>11</td>
<td>395</td>
</tr>
<tr>
<td>13</td>
<td>405</td>
</tr>
</tbody>
</table>
</div>
<p>Basically my goal is to drop duplicates on the column 'index' while keeping a <strong>different</strong> ascending value for the column 'so_id'.</p>
<p>The key point is that I don't want a simple drop_duplicates on the variable 'index' since this would get me the same 'so_id' after the drop_duplicates. I want drop_duplicates on 'index' and at the same time get the different values of the column 'so_id'.</p>
| 66,960,494
| 2021-04-05T21:54:51.127000
| 2
| null | 1
| 126
|
python|pandas
|
<p>If your values are sorted, you can do:</p>
<pre><code>seen = set()
def fn(x):
for val in x:
if val in seen:
continue
seen.add(val)
return val
df = df.groupby("index")["so_id"].apply(fn).reset_index()
print(df)
</code></pre>
<p>Prints:</p>
<pre><code> index so_id
0 5 10
1 8 40
2 10 390
3 11 395
4 13 405
</code></pre>
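If it helps, the same greedy idea can be wrapped in a function so the seen set is not module-level state that leaks between reruns; this is a sketch building on the answer above, not part of it:

```python
import pandas as pd

def first_unused_per_group(df, group_col, value_col):
    """For each group, keep the first value not already taken by an earlier group."""
    seen = set()

    def pick(values):
        for val in values:
            if val not in seen:
                seen.add(val)
                return val
        return None  # every candidate in this group is already taken

    # Groups are processed in sorted key order, exactly as in the answer above.
    return df.groupby(group_col)[value_col].apply(pick).reset_index()

df = pd.DataFrame(
    [[5, 10], [8, 40], [8, 50], [10, 390], [10, 395], [10, 405],
     [11, 390], [11, 395], [11, 405], [13, 390], [13, 395], [13, 405]],
    columns=["index", "so_id"],
)
print(first_unused_per_group(df, "index", "so_id"))  # 10 -> 390, 11 -> 395, 13 -> 405
```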
| 2021-04-05T22:20:43.410000
| 4
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.replace.html
|
pandas.Series.replace#
pandas.Series.replace#
Series.replace(to_replace=None, value=_NoDefault.no_default, *, inplace=False, limit=None, regex=False, method=_NoDefault.no_default)[source]#
Replace values given in to_replace with value.
Values of the Series are replaced with other values dynamically.
This differs from updating with .loc or .iloc, which require
you to specify a location to update with some value.
Parameters
to_replacestr, regex, list, dict, Series, int, float, or NoneHow to find the values that will be replaced.
numeric, str or regex:
numeric: numeric values equal to to_replace will be
replaced with value
str: string exactly matching to_replace will be replaced
with value
regex: regexs matching to_replace will be replaced with
If your values are sorted, you can do:
seen = set()
def fn(x):
for val in x:
if val in seen:
continue
seen.add(val)
return val
df = df.groupby("index")["so_id"].apply(fn).reset_index()
print(df)
Prints:
index so_id
0 5 10
1 8 40
2 10 390
3 11 395
4 13 405
value
list of str, regex, or numeric:
First, if to_replace and value are both lists, they
must be the same length.
Second, if regex=True then all of the strings in both
lists will be interpreted as regexs otherwise they will match
directly. This doesn’t matter much for value since there
are only a few possible substitution regexes you can use.
str, regex and numeric rules apply as above.
dict:
Dicts can be used to specify different replacement values
for different existing values. For example,
{'a': 'b', 'y': 'z'} replaces the value ‘a’ with ‘b’ and
‘y’ with ‘z’. To use a dict in this way, the optional value
parameter should not be given.
For a DataFrame a dict can specify that different values
should be replaced in different columns. For example,
{'a': 1, 'b': 'z'} looks for the value 1 in column ‘a’
and the value ‘z’ in column ‘b’ and replaces these values
with whatever is specified in value. The value parameter
should not be None in this case. You can treat this as a
special case of passing two lists except that you are
specifying the column to search in.
For a DataFrame nested dictionaries, e.g.,
{'a': {'b': np.nan}}, are read as follows: look in column
‘a’ for the value ‘b’ and replace it with NaN. The optional value
parameter should not be specified to use a nested dict in this
way. You can nest regular expressions as well. Note that
column names (the top-level dictionary keys in a nested
dictionary) cannot be regular expressions.
None:
This means that the regex argument must be a string,
compiled regular expression, or list, dict, ndarray or
Series of such elements. If value is also None then
this must be a nested dictionary or Series.
See the examples section for examples of each of these.
valuescalar, dict, list, str, regex, default NoneValue to replace any values matching to_replace with.
For a DataFrame a dict of values can be used to specify which
value to use for each column (columns not in the dict will not be
filled). Regular expressions, strings and lists or dicts of such
objects are also allowed.
inplacebool, default FalseIf True, performs operation inplace and returns None.
limitint, default NoneMaximum size gap to forward or backward fill.
regexbool or same types as to_replace, default FalseWhether to interpret to_replace and/or value as regular
expressions. If this is True then to_replace must be a
string. Alternatively, this could be a regular expression or a
list, dict, or array of regular expressions in which case
to_replace must be None.
method{‘pad’, ‘ffill’, ‘bfill’}The method to use when for replacement, when to_replace is a
scalar, list or tuple and value is None.
Changed in version 0.23.0: Added to DataFrame.
Returns
SeriesObject after replacement.
Raises
AssertionError
If regex is not a bool and to_replace is not
None.
TypeError
If to_replace is not a scalar, array-like, dict, or None
If to_replace is a dict and value is not a list,
dict, ndarray, or Series
If to_replace is None and regex is not compilable
into a regular expression or is a list, dict, ndarray, or
Series.
When replacing multiple bool or datetime64 objects and
the arguments to to_replace does not match the type of the
value being replaced
ValueError
If a list or an ndarray is passed to to_replace and
value but they are not the same length.
See also
Series.fillnaFill NA values.
Series.whereReplace values based on boolean condition.
Series.str.replaceSimple string replacement.
Notes
Regex substitution is performed under the hood with re.sub. The
rules for substitution for re.sub are the same.
Regular expressions will only substitute on strings, meaning you
cannot provide, for example, a regular expression matching floating
point numbers and expect the columns in your frame that have a
numeric dtype to be matched. However, if those floating point
numbers are strings, then you can do this.
This method has a lot of options. You are encouraged to experiment
and play with this method to gain intuition about how it works.
When dict is used as the to_replace value, it is like
key(s) in the dict are the to_replace part and
value(s) in the dict are the value parameter.
Examples
Scalar `to_replace` and `value`
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.replace(1, 5)
0 5
1 2
2 3
3 4
4 5
dtype: int64
>>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4],
... 'B': [5, 6, 7, 8, 9],
... 'C': ['a', 'b', 'c', 'd', 'e']})
>>> df.replace(0, 5)
A B C
0 5 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
List-like `to_replace`
>>> df.replace([0, 1, 2, 3], 4)
A B C
0 4 5 a
1 4 6 b
2 4 7 c
3 4 8 d
4 4 9 e
>>> df.replace([0, 1, 2, 3], [4, 3, 2, 1])
A B C
0 4 5 a
1 3 6 b
2 2 7 c
3 1 8 d
4 4 9 e
>>> s.replace([1, 2], method='bfill')
0 3
1 3
2 3
3 4
4 5
dtype: int64
dict-like `to_replace`
>>> df.replace({0: 10, 1: 100})
A B C
0 10 5 a
1 100 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace({'A': 0, 'B': 5}, 100)
A B C
0 100 100 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace({'A': {0: 100, 4: 400}})
A B C
0 100 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 400 9 e
Regular expression `to_replace`
>>> df = pd.DataFrame({'A': ['bat', 'foo', 'bait'],
... 'B': ['abc', 'bar', 'xyz']})
>>> df.replace(to_replace=r'^ba.$', value='new', regex=True)
A B
0 new abc
1 foo new
2 bait xyz
>>> df.replace({'A': r'^ba.$'}, {'A': 'new'}, regex=True)
A B
0 new abc
1 foo bar
2 bait xyz
>>> df.replace(regex=r'^ba.$', value='new')
A B
0 new abc
1 foo new
2 bait xyz
>>> df.replace(regex={r'^ba.$': 'new', 'foo': 'xyz'})
A B
0 new abc
1 xyz new
2 bait xyz
>>> df.replace(regex=[r'^ba.$', 'foo'], value='new')
A B
0 new abc
1 new new
2 bait xyz
Compare the behavior of s.replace({'a': None}) and
s.replace('a', None) to understand the peculiarities
of the to_replace parameter:
>>> s = pd.Series([10, 'a', 'a', 'b', 'a'])
When one uses a dict as the to_replace value, it is like the
value(s) in the dict are equal to the value parameter.
s.replace({'a': None}) is equivalent to
s.replace(to_replace={'a': None}, value=None, method=None):
>>> s.replace({'a': None})
0 10
1 None
2 None
3 b
4 None
dtype: object
When value is not explicitly passed and to_replace is a scalar, list
or tuple, replace uses the method parameter (default ‘pad’) to do the
replacement. So this is why the ‘a’ values are being replaced by 10
in rows 1 and 2 and ‘b’ in row 4 in this case.
>>> s.replace('a')
0 10
1 10
2 10
3 b
4 b
dtype: object
On the other hand, if None is explicitly passed for value, it will
be respected:
>>> s.replace('a', None)
0 10
1 None
2 None
3 b
4 None
dtype: object
Changed in version 1.4.0: Previously the explicit None was silently ignored.
| 761
| 1,105
|
Drop duplicates based on condition
I have the following pandas dataframe:
df = pd.DataFrame([[5, 10],[8, 40],[8, 50],[10, 390], [10, 395], [10, 405], [11, 390], [11, 395], [11, 405], [13, 390], [13, 395], [13, 405]], columns=['index', 'so_id'])
index  so_id
5      10
8      40
8      50
10     390
10     395
10     405
11     390
11     395
11     405
13     390
13     395
13     405
The desired output would be the following:
index  so_id
5      10
8      40
10     390
11     395
13     405
Basically my goal is to drop duplicates on the column 'index' while keeping a different ascending value for the column 'so_id'.
The key point is that I don't want a simple drop_duplicates on the variable 'index' since this would get me the same 'so_id' after the drop_duplicates. I want drop_duplicates on 'index' and at the same time get the different values of the column 'so_id'.
|
67,914,330
|
Cleaning column names in pandas
|
<p>I have a Dataframe I receive from a crawler that I am importing into a database for long-term storage.</p>
<p>The problem I am running into is that a large number of the various dataframes have uppercase characters and whitespace in their column names.</p>
<p>I have a fix for it but I was wondering if it can be done any cleaner than this:</p>
<pre><code>def clean_columns(dataframe):
for column in dataframe:
dataframe.rename(columns = {column : column.lower().replace(" ", "_")},
inplace = 1)
return dataframe
</code></pre>
<p>print(dataframe.columns)</p>
<p><em>Index(['Daily Foo', 'Weekly Bar'])</em></p>
<pre><code>dataframe = clean_columns(dataframe)
print(dataframe.columns)
</code></pre>
<p><em>Index(['daily_foo', 'weekly_bar'])</em></p>
| 67,914,441
| 2021-06-10T03:33:22.843000
| 1
| null | 0
| 1,435
|
python|pandas
|
<p>You can try via <code>columns</code> attribute:</p>
<pre><code>df.columns=df.columns.str.lower().str.replace(' ','_')
</code></pre>
<p><strong>OR</strong></p>
<p>via <code>rename()</code> method:</p>
<pre><code>df=df.rename(columns=lambda x:x.lower().replace(' ','_'))
</code></pre>
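A quick sketch of the first one-liner applied to the column names from the question (the single-row data is invented, and regex=False is added only to make the literal-space replacement explicit):

```python
import pandas as pd

df = pd.DataFrame({"Daily Foo": [1], "Weekly Bar": [2]})
df.columns = df.columns.str.lower().str.replace(" ", "_", regex=False)
print(df.columns.tolist())  # ['daily_foo', 'weekly_bar']
```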
| 2021-06-10T03:45:49.780000
| 4
|
https://pandas.pydata.org/docs/user_guide/text.html
|
Working with text data#
Working with text data#
Text data types#
New in version 1.0.0.
There are two ways to store text data in pandas:
object -dtype NumPy array.
StringDtype extension type.
We recommend using StringDtype to store text data.
Prior to pandas 1.0, object dtype was the only option. This was unfortunate
for many reasons:
You can accidentally store a mixture of strings and non-strings in an
object dtype array. It’s better to have a dedicated dtype.
object dtype breaks dtype-specific operations like DataFrame.select_dtypes().
There isn’t a clear way to select just text while excluding non-text
but still object-dtype columns.
When reading code, the contents of an object dtype array is less clear
You can try via columns attribute:
df.columns=df.columns.str.lower().str.replace(' ','_')
OR
via rename() method:
df=df.rename(columns=lambda x:x.lower().replace(' ','_'))
than 'string'.
Currently, the performance of object dtype arrays of strings and
arrays.StringArray are about the same. We expect future enhancements
to significantly increase the performance and lower the memory overhead of
StringArray.
Warning
StringArray is currently considered experimental. The implementation
and parts of the API may change without warning.
For backwards-compatibility, object dtype remains the default type we
infer a list of strings to
In [1]: pd.Series(["a", "b", "c"])
Out[1]:
0 a
1 b
2 c
dtype: object
To explicitly request string dtype, specify the dtype
In [2]: pd.Series(["a", "b", "c"], dtype="string")
Out[2]:
0 a
1 b
2 c
dtype: string
In [3]: pd.Series(["a", "b", "c"], dtype=pd.StringDtype())
Out[3]:
0 a
1 b
2 c
dtype: string
Or astype after the Series or DataFrame is created
In [4]: s = pd.Series(["a", "b", "c"])
In [5]: s
Out[5]:
0 a
1 b
2 c
dtype: object
In [6]: s.astype("string")
Out[6]:
0 a
1 b
2 c
dtype: string
Changed in version 1.1.0.
You can also use StringDtype/"string" as the dtype on non-string data and
it will be converted to string dtype:
In [7]: s = pd.Series(["a", 2, np.nan], dtype="string")
In [8]: s
Out[8]:
0 a
1 2
2 <NA>
dtype: string
In [9]: type(s[1])
Out[9]: str
or convert from existing pandas data:
In [10]: s1 = pd.Series([1, 2, np.nan], dtype="Int64")
In [11]: s1
Out[11]:
0 1
1 2
2 <NA>
dtype: Int64
In [12]: s2 = s1.astype("string")
In [13]: s2
Out[13]:
0 1
1 2
2 <NA>
dtype: string
In [14]: type(s2[0])
Out[14]: str
Behavior differences#
These are places where the behavior of StringDtype objects differ from
object dtype
For StringDtype, string accessor methods
that return numeric output will always return a nullable integer dtype,
rather than either int or float dtype, depending on the presence of NA values.
Methods returning boolean output will return a nullable boolean dtype.
In [15]: s = pd.Series(["a", None, "b"], dtype="string")
In [16]: s
Out[16]:
0 a
1 <NA>
2 b
dtype: string
In [17]: s.str.count("a")
Out[17]:
0 1
1 <NA>
2 0
dtype: Int64
In [18]: s.dropna().str.count("a")
Out[18]:
0 1
2 0
dtype: Int64
Both outputs are Int64 dtype. Compare that with object-dtype
In [19]: s2 = pd.Series(["a", None, "b"], dtype="object")
In [20]: s2.str.count("a")
Out[20]:
0 1.0
1 NaN
2 0.0
dtype: float64
In [21]: s2.dropna().str.count("a")
Out[21]:
0 1
2 0
dtype: int64
When NA values are present, the output dtype is float64. Similarly for
methods returning boolean values.
In [22]: s.str.isdigit()
Out[22]:
0 False
1 <NA>
2 False
dtype: boolean
In [23]: s.str.match("a")
Out[23]:
0 True
1 <NA>
2 False
dtype: boolean
Some string methods, like Series.str.decode() are not available
on StringArray because StringArray only holds strings, not
bytes.
In comparison operations, arrays.StringArray and Series backed
by a StringArray will return an object with BooleanDtype,
rather than a bool dtype object. Missing values in a StringArray
will propagate in comparison operations, rather than always comparing
unequal like numpy.nan.
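A small sketch of that comparison behavior, with an invented three-element Series:

```python
import pandas as pd

s = pd.Series(["a", None, "c"], dtype="string")

# The comparison returns BooleanDtype, and the missing value propagates as <NA>
# instead of simply comparing unequal.
print(s == "a")
# 0     True
# 1     <NA>
# 2    False
# dtype: boolean
```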
Everything else that follows in the rest of this document applies equally to
string and object dtype.
String methods#
Series and Index are equipped with a set of string processing methods
that make it easy to operate on each element of the array. Perhaps most
importantly, these methods exclude missing/NA values automatically. These are
accessed via the str attribute and generally have names matching
the equivalent (scalar) built-in string methods:
In [24]: s = pd.Series(
....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
....: )
....:
In [25]: s.str.lower()
Out[25]:
0 a
1 b
2 c
3 aaba
4 baca
5 <NA>
6 caba
7 dog
8 cat
dtype: string
In [26]: s.str.upper()
Out[26]:
0 A
1 B
2 C
3 AABA
4 BACA
5 <NA>
6 CABA
7 DOG
8 CAT
dtype: string
In [27]: s.str.len()
Out[27]:
0 1
1 1
2 1
3 4
4 4
5 <NA>
6 4
7 3
8 3
dtype: Int64
In [28]: idx = pd.Index([" jack", "jill ", " jesse ", "frank"])
In [29]: idx.str.strip()
Out[29]: Index(['jack', 'jill', 'jesse', 'frank'], dtype='object')
In [30]: idx.str.lstrip()
Out[30]: Index(['jack', 'jill ', 'jesse ', 'frank'], dtype='object')
In [31]: idx.str.rstrip()
Out[31]: Index([' jack', 'jill', ' jesse', 'frank'], dtype='object')
The string methods on Index are especially useful for cleaning up or
transforming DataFrame columns. For instance, you may have columns with
leading or trailing whitespace:
In [32]: df = pd.DataFrame(
....: np.random.randn(3, 2), columns=[" Column A ", " Column B "], index=range(3)
....: )
....:
In [33]: df
Out[33]:
Column A Column B
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215
Since df.columns is an Index object, we can use the .str accessor
In [34]: df.columns.str.strip()
Out[34]: Index(['Column A', 'Column B'], dtype='object')
In [35]: df.columns.str.lower()
Out[35]: Index([' column a ', ' column b '], dtype='object')
These string methods can then be used to clean up the columns as needed.
Here we are removing leading and trailing whitespaces, lower casing all names,
and replacing any remaining whitespaces with underscores:
In [36]: df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")
In [37]: df
Out[37]:
column_a column_b
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215
Note
If you have a Series where lots of elements are repeated
(i.e. the number of unique elements in the Series is a lot smaller than the length of the
Series), it can be faster to convert the original Series to one of type
category and then use .str.<method> or .dt.<property> on that.
The performance difference comes from the fact that, for Series of type category, the
string operations are done on the .categories and not on each element of the
Series.
Please note that a Series of type category with string .categories has
some limitations in comparison to Series of type string (e.g. you can’t add strings to
each other: s + " " + s won’t work if s is a Series of type category). Also,
.str methods which operate on elements of type list are not available on such a
Series.
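A minimal sketch of that pattern; the strings and row count are invented purely to show the conversion:

```python
import pandas as pd

# Many rows but only two distinct strings, so category dtype pays off.
s = pd.Series(["foo bar", "baz qux"] * 500_000)
s_cat = s.astype("category")

# The string operation is performed on the two categories,
# not on every one of the million elements.
print(s_cat.str.upper().head())
```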
Warning
Before v.0.25.0, the .str-accessor did only the most rudimentary type checks. Starting with
v.0.25.0, the type of the Series is inferred and the allowed types (i.e. strings) are enforced more rigorously.
Generally speaking, the .str accessor is intended to work only on strings. With very few
exceptions, other uses are not supported, and may be disabled at a later point.
Splitting and replacing strings#
Methods like split return a Series of lists:
In [38]: s2 = pd.Series(["a_b_c", "c_d_e", np.nan, "f_g_h"], dtype="string")
In [39]: s2.str.split("_")
Out[39]:
0 [a, b, c]
1 [c, d, e]
2 <NA>
3 [f, g, h]
dtype: object
Elements in the split lists can be accessed using get or [] notation:
In [40]: s2.str.split("_").str.get(1)
Out[40]:
0 b
1 d
2 <NA>
3 g
dtype: object
In [41]: s2.str.split("_").str[1]
Out[41]:
0 b
1 d
2 <NA>
3 g
dtype: object
It is easy to expand this to return a DataFrame using expand.
In [42]: s2.str.split("_", expand=True)
Out[42]:
0 1 2
0 a b c
1 c d e
2 <NA> <NA> <NA>
3 f g h
When original Series has StringDtype, the output columns will all
be StringDtype as well.
It is also possible to limit the number of splits:
In [43]: s2.str.split("_", expand=True, n=1)
Out[43]:
0 1
0 a b_c
1 c d_e
2 <NA> <NA>
3 f g_h
rsplit is similar to split except it works in the reverse direction,
i.e., from the end of the string to the beginning of the string:
In [44]: s2.str.rsplit("_", expand=True, n=1)
Out[44]:
0 1
0 a_b c
1 c_d e
2 <NA> <NA>
3 f_g h
replace optionally uses regular expressions:
In [45]: s3 = pd.Series(
....: ["A", "B", "C", "Aaba", "Baca", "", np.nan, "CABA", "dog", "cat"],
....: dtype="string",
....: )
....:
In [46]: s3
Out[46]:
0 A
1 B
2 C
3 Aaba
4 Baca
5
6 <NA>
7 CABA
8 dog
9 cat
dtype: string
In [47]: s3.str.replace("^.a|dog", "XX-XX ", case=False, regex=True)
Out[47]:
0 A
1 B
2 C
3 XX-XX ba
4 XX-XX ca
5
6 <NA>
7 XX-XX BA
8 XX-XX
9 XX-XX t
dtype: string
Warning
Some caution must be taken when dealing with regular expressions! The current behavior
is to treat single character patterns as literal strings, even when regex is set
to True. This behavior is deprecated and will be removed in a future version so
that the regex keyword is always respected.
Changed in version 1.2.0.
If you want literal replacement of a string (equivalent to str.replace()), you
can set the optional regex parameter to False, rather than escaping each
character. In this case both pat and repl must be strings:
In [48]: dollars = pd.Series(["12", "-$10", "$10,000"], dtype="string")
# These lines are equivalent
In [49]: dollars.str.replace(r"-\$", "-", regex=True)
Out[49]:
0 12
1 -10
2 $10,000
dtype: string
In [50]: dollars.str.replace("-$", "-", regex=False)
Out[50]:
0 12
1 -10
2 $10,000
dtype: string
The replace method can also take a callable as replacement. It is called
on every pat using re.sub(). The callable should expect one
positional argument (a regex object) and return a string.
# Reverse every lowercase alphabetic word
In [51]: pat = r"[a-z]+"
In [52]: def repl(m):
....: return m.group(0)[::-1]
....:
In [53]: pd.Series(["foo 123", "bar baz", np.nan], dtype="string").str.replace(
....: pat, repl, regex=True
....: )
....:
Out[53]:
0 oof 123
1 rab zab
2 <NA>
dtype: string
# Using regex groups
In [54]: pat = r"(?P<one>\w+) (?P<two>\w+) (?P<three>\w+)"
In [55]: def repl(m):
....: return m.group("two").swapcase()
....:
In [56]: pd.Series(["Foo Bar Baz", np.nan], dtype="string").str.replace(
....: pat, repl, regex=True
....: )
....:
Out[56]:
0 bAR
1 <NA>
dtype: string
The replace method also accepts a compiled regular expression object
from re.compile() as a pattern. All flags should be included in the
compiled regular expression object.
In [57]: import re
In [58]: regex_pat = re.compile(r"^.a|dog", flags=re.IGNORECASE)
In [59]: s3.str.replace(regex_pat, "XX-XX ", regex=True)
Out[59]:
0 A
1 B
2 C
3 XX-XX ba
4 XX-XX ca
5
6 <NA>
7 XX-XX BA
8 XX-XX
9 XX-XX t
dtype: string
Including a flags argument when calling replace with a compiled
regular expression object will raise a ValueError.
In [60]: s3.str.replace(regex_pat, 'XX-XX ', flags=re.IGNORECASE)
---------------------------------------------------------------------------
ValueError: case and flags cannot be set when pat is a compiled regex
removeprefix and removesuffix have the same effect as str.removeprefix and str.removesuffix added in Python 3.9
(https://docs.python.org/3/library/stdtypes.html#str.removeprefix):
New in version 1.4.0.
In [61]: s = pd.Series(["str_foo", "str_bar", "no_prefix"])
In [62]: s.str.removeprefix("str_")
Out[62]:
0 foo
1 bar
2 no_prefix
dtype: object
In [63]: s = pd.Series(["foo_str", "bar_str", "no_suffix"])
In [64]: s.str.removesuffix("_str")
Out[64]:
0 foo
1 bar
2 no_suffix
dtype: object
Concatenation#
There are several ways to concatenate a Series or Index, either with itself or others, all based on cat(),
resp. Index.str.cat.
Concatenating a single Series into a string#
The content of a Series (or Index) can be concatenated:
In [65]: s = pd.Series(["a", "b", "c", "d"], dtype="string")
In [66]: s.str.cat(sep=",")
Out[66]: 'a,b,c,d'
If not specified, the keyword sep for the separator defaults to the empty string, sep='':
In [67]: s.str.cat()
Out[67]: 'abcd'
By default, missing values are ignored. Using na_rep, they can be given a representation:
In [68]: t = pd.Series(["a", "b", np.nan, "d"], dtype="string")
In [69]: t.str.cat(sep=",")
Out[69]: 'a,b,d'
In [70]: t.str.cat(sep=",", na_rep="-")
Out[70]: 'a,b,-,d'
Concatenating a Series and something list-like into a Series#
The first argument to cat() can be a list-like object, provided that it matches the length of the calling Series (or Index).
In [71]: s.str.cat(["A", "B", "C", "D"])
Out[71]:
0 aA
1 bB
2 cC
3 dD
dtype: string
Missing values on either side will result in missing values in the result as well, unless na_rep is specified:
In [72]: s.str.cat(t)
Out[72]:
0 aa
1 bb
2 <NA>
3 dd
dtype: string
In [73]: s.str.cat(t, na_rep="-")
Out[73]:
0 aa
1 bb
2 c-
3 dd
dtype: string
Concatenating a Series and something array-like into a Series#
The parameter others can also be two-dimensional. In this case, the number of rows must match the lengths of the calling Series (or Index).
In [74]: d = pd.concat([t, s], axis=1)
In [75]: s
Out[75]:
0 a
1 b
2 c
3 d
dtype: string
In [76]: d
Out[76]:
0 1
0 a a
1 b b
2 <NA> c
3 d d
In [77]: s.str.cat(d, na_rep="-")
Out[77]:
0 aaa
1 bbb
2 c-c
3 ddd
dtype: string
Concatenating a Series and an indexed object into a Series, with alignment#
For concatenation with a Series or DataFrame, it is possible to align the indexes before concatenation by setting
the join-keyword.
In [78]: u = pd.Series(["b", "d", "a", "c"], index=[1, 3, 0, 2], dtype="string")
In [79]: s
Out[79]:
0 a
1 b
2 c
3 d
dtype: string
In [80]: u
Out[80]:
1 b
3 d
0 a
2 c
dtype: string
In [81]: s.str.cat(u)
Out[81]:
0 aa
1 bb
2 cc
3 dd
dtype: string
In [82]: s.str.cat(u, join="left")
Out[82]:
0 aa
1 bb
2 cc
3 dd
dtype: string
Warning
If the join keyword is not passed, the method cat() will currently fall back to the behavior before version 0.23.0 (i.e. no alignment),
but a FutureWarning will be raised if any of the involved indexes differ, since this default will change to join='left' in a future version.
The usual options are available for join (one of 'left', 'outer', 'inner', 'right').
In particular, alignment also means that the different lengths do not need to coincide anymore.
In [83]: v = pd.Series(["z", "a", "b", "d", "e"], index=[-1, 0, 1, 3, 4], dtype="string")
In [84]: s
Out[84]:
0 a
1 b
2 c
3 d
dtype: string
In [85]: v
Out[85]:
-1 z
0 a
1 b
3 d
4 e
dtype: string
In [86]: s.str.cat(v, join="left", na_rep="-")
Out[86]:
0 aa
1 bb
2 c-
3 dd
dtype: string
In [87]: s.str.cat(v, join="outer", na_rep="-")
Out[87]:
-1 -z
0 aa
1 bb
2 c-
3 dd
4 -e
dtype: string
The same alignment can be used when others is a DataFrame:
In [88]: f = d.loc[[3, 2, 1, 0], :]
In [89]: s
Out[89]:
0 a
1 b
2 c
3 d
dtype: string
In [90]: f
Out[90]:
0 1
3 d d
2 <NA> c
1 b b
0 a a
In [91]: s.str.cat(f, join="left", na_rep="-")
Out[91]:
0 aaa
1 bbb
2 c-c
3 ddd
dtype: string
Concatenating a Series and many objects into a Series#
Several array-like items (specifically: Series, Index, and 1-dimensional variants of np.ndarray)
can be combined in a list-like container (including iterators, dict-views, etc.).
In [92]: s
Out[92]:
0 a
1 b
2 c
3 d
dtype: string
In [93]: u
Out[93]:
1 b
3 d
0 a
2 c
dtype: string
In [94]: s.str.cat([u, u.to_numpy()], join="left")
Out[94]:
0 aab
1 bbd
2 cca
3 ddc
dtype: string
All elements without an index (e.g. np.ndarray) within the passed list-like must match in length to the calling Series (or Index),
but Series and Index may have arbitrary length (as long as alignment is not disabled with join=None):
In [95]: v
Out[95]:
-1 z
0 a
1 b
3 d
4 e
dtype: string
In [96]: s.str.cat([v, u, u.to_numpy()], join="outer", na_rep="-")
Out[96]:
-1 -z--
0 aaab
1 bbbd
2 c-ca
3 dddc
4 -e--
dtype: string
If using join='right' on a list-like of others that contains different indexes,
the union of these indexes will be used as the basis for the final concatenation:
In [97]: u.loc[[3]]
Out[97]:
3 d
dtype: string
In [98]: v.loc[[-1, 0]]
Out[98]:
-1 z
0 a
dtype: string
In [99]: s.str.cat([u.loc[[3]], v.loc[[-1, 0]]], join="right", na_rep="-")
Out[99]:
3 dd-
-1 --z
0 a-a
dtype: string
Indexing with .str#
You can use [] notation to directly index by position locations. If you index past the end
of the string, the result will be a NaN.
In [100]: s = pd.Series(
.....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
.....: )
.....:
In [101]: s.str[0]
Out[101]:
0 A
1 B
2 C
3 A
4 B
5 <NA>
6 C
7 d
8 c
dtype: string
In [102]: s.str[1]
Out[102]:
0 <NA>
1 <NA>
2 <NA>
3 a
4 a
5 <NA>
6 A
7 o
8 a
dtype: string
Extracting substrings#
Extract first match in each subject (extract)#
Warning
Before version 0.23, argument expand of the extract method defaulted to
False. When expand=False, expand returns a Series, Index, or
DataFrame, depending on the subject and regular expression
pattern. When expand=True, it always returns a DataFrame,
which is more consistent and less confusing from the perspective of a user.
expand=True has been the default since version 0.23.0.
The extract method accepts a regular expression with at least one
capture group.
Extracting a regular expression with more than one group returns a
DataFrame with one column per group.
In [103]: pd.Series(
.....: ["a1", "b2", "c3"],
.....: dtype="string",
.....: ).str.extract(r"([ab])(\d)", expand=False)
.....:
Out[103]:
0 1
0 a 1
1 b 2
2 <NA> <NA>
Elements that do not match return a row filled with NaN. Thus, a
Series of messy strings can be “converted” into a like-indexed Series
or DataFrame of cleaned-up or more useful strings, without
necessitating get() to access tuples or re.match objects. The
dtype of the result is always object, even if no match is found and
the result only contains NaN.
Named groups like
In [104]: pd.Series(["a1", "b2", "c3"], dtype="string").str.extract(
.....: r"(?P<letter>[ab])(?P<digit>\d)", expand=False
.....: )
.....:
Out[104]:
letter digit
0 a 1
1 b 2
2 <NA> <NA>
and optional groups like
In [105]: pd.Series(
.....: ["a1", "b2", "3"],
.....: dtype="string",
.....: ).str.extract(r"([ab])?(\d)", expand=False)
.....:
Out[105]:
0 1
0 a 1
1 b 2
2 <NA> 3
can also be used. Note that any capture group names in the regular
expression will be used for column names; otherwise capture group
numbers will be used.
Extracting a regular expression with one group returns a DataFrame
with one column if expand=True.
In [106]: pd.Series(["a1", "b2", "c3"], dtype="string").str.extract(r"[ab](\d)", expand=True)
Out[106]:
0
0 1
1 2
2 <NA>
It returns a Series if expand=False.
In [107]: pd.Series(["a1", "b2", "c3"], dtype="string").str.extract(r"[ab](\d)", expand=False)
Out[107]:
0 1
1 2
2 <NA>
dtype: string
Calling on an Index with a regex with exactly one capture group
returns a DataFrame with one column if expand=True.
In [108]: s = pd.Series(["a1", "b2", "c3"], ["A11", "B22", "C33"], dtype="string")
In [109]: s
Out[109]:
A11 a1
B22 b2
C33 c3
dtype: string
In [110]: s.index.str.extract("(?P<letter>[a-zA-Z])", expand=True)
Out[110]:
letter
0 A
1 B
2 C
It returns an Index if expand=False.
In [111]: s.index.str.extract("(?P<letter>[a-zA-Z])", expand=False)
Out[111]: Index(['A', 'B', 'C'], dtype='object', name='letter')
Calling on an Index with a regex with more than one capture group
returns a DataFrame if expand=True.
In [112]: s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=True)
Out[112]:
letter 1
0 A 11
1 B 22
2 C 33
It raises ValueError if expand=False.
>>> s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=False)
ValueError: only one regex group is supported with Index
The table below summarizes the behavior of extract(expand=False)
(input subject in first column, number of groups in regex in
first row)
          1 group     >1 group
Index     Index       ValueError
Series    Series      DataFrame
Extract all matches in each subject (extractall)#
Unlike extract (which returns only the first match),
In [113]: s = pd.Series(["a1a2", "b1", "c1"], index=["A", "B", "C"], dtype="string")
In [114]: s
Out[114]:
A a1a2
B b1
C c1
dtype: string
In [115]: two_groups = "(?P<letter>[a-z])(?P<digit>[0-9])"
In [116]: s.str.extract(two_groups, expand=True)
Out[116]:
letter digit
A a 1
B b 1
C c 1
the extractall method returns every match. The result of
extractall is always a DataFrame with a MultiIndex on its
rows. The last level of the MultiIndex is named match and
indicates the order in the subject.
In [117]: s.str.extractall(two_groups)
Out[117]:
letter digit
match
A 0 a 1
1 a 2
B 0 b 1
C 0 c 1
When each subject string in the Series has exactly one match,
In [118]: s = pd.Series(["a3", "b3", "c2"], dtype="string")
In [119]: s
Out[119]:
0 a3
1 b3
2 c2
dtype: string
then extractall(pat).xs(0, level='match') gives the same result as
extract(pat).
In [120]: extract_result = s.str.extract(two_groups, expand=True)
In [121]: extract_result
Out[121]:
letter digit
0 a 3
1 b 3
2 c 2
In [122]: extractall_result = s.str.extractall(two_groups)
In [123]: extractall_result
Out[123]:
letter digit
match
0 0 a 3
1 0 b 3
2 0 c 2
In [124]: extractall_result.xs(0, level="match")
Out[124]:
letter digit
0 a 3
1 b 3
2 c 2
Index also supports .str.extractall. It returns a DataFrame which has the
same result as a Series.str.extractall with a default index (starts from 0).
In [125]: pd.Index(["a1a2", "b1", "c1"]).str.extractall(two_groups)
Out[125]:
letter digit
match
0 0 a 1
1 a 2
1 0 b 1
2 0 c 1
In [126]: pd.Series(["a1a2", "b1", "c1"], dtype="string").str.extractall(two_groups)
Out[126]:
letter digit
match
0 0 a 1
1 a 2
1 0 b 1
2 0 c 1
Testing for strings that match or contain a pattern#
You can check whether elements contain a pattern:
In [127]: pattern = r"[0-9][a-z]"
In [128]: pd.Series(
.....: ["1", "2", "3a", "3b", "03c", "4dx"],
.....: dtype="string",
.....: ).str.contains(pattern)
.....:
Out[128]:
0 False
1 False
2 True
3 True
4 True
5 True
dtype: boolean
Or whether elements match a pattern:
In [129]: pd.Series(
.....: ["1", "2", "3a", "3b", "03c", "4dx"],
.....: dtype="string",
.....: ).str.match(pattern)
.....:
Out[129]:
0 False
1 False
2 True
3 True
4 False
5 True
dtype: boolean
New in version 1.1.0.
In [130]: pd.Series(
.....: ["1", "2", "3a", "3b", "03c", "4dx"],
.....: dtype="string",
.....: ).str.fullmatch(pattern)
.....:
Out[130]:
0 False
1 False
2 True
3 True
4 False
5 False
dtype: boolean
Note
The distinction between match, fullmatch, and contains is strictness:
fullmatch tests whether the entire string matches the regular expression;
match tests whether there is a match of the regular expression that begins
at the first character of the string; and contains tests whether there is
a match of the regular expression at any position within the string.
The corresponding functions in the re package for these three match modes are
re.fullmatch,
re.match, and
re.search,
respectively.
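A compact side-by-side sketch of the three modes on an invented Series:

```python
import pandas as pd

s = pd.Series(["3a", "3a4", "x3a"], dtype="string")
pattern = r"[0-9][a-z]"

print(s.str.fullmatch(pattern).tolist())  # [True, False, False]  whole string must match
print(s.str.match(pattern).tolist())      # [True, True, False]   match must begin at the start
print(s.str.contains(pattern).tolist())   # [True, True, True]    match anywhere in the string
```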
Methods like match, fullmatch, contains, startswith, and
endswith take an extra na argument so missing values can be considered
True or False:
In [131]: s4 = pd.Series(
.....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
.....: )
.....:
In [132]: s4.str.contains("A", na=False)
Out[132]:
0 True
1 False
2 False
3 True
4 False
5 False
6 True
7 False
8 False
dtype: boolean
Creating indicator variables#
You can extract dummy variables from string columns.
For example if they are separated by a '|':
In [133]: s = pd.Series(["a", "a|b", np.nan, "a|c"], dtype="string")
In [134]: s.str.get_dummies(sep="|")
Out[134]:
a b c
0 1 0 0
1 1 1 0
2 0 0 0
3 1 0 1
String Index also supports get_dummies which returns a MultiIndex.
In [135]: idx = pd.Index(["a", "a|b", np.nan, "a|c"])
In [136]: idx.str.get_dummies(sep="|")
Out[136]:
MultiIndex([(1, 0, 0),
(1, 1, 0),
(0, 0, 0),
(1, 0, 1)],
names=['a', 'b', 'c'])
See also get_dummies().
Method summary#
Method             Description
cat()              Concatenate strings
split()            Split strings on delimiter
rsplit()           Split strings on delimiter working from the end of the string
get()              Index into each element (retrieve i-th element)
join()             Join strings in each element of the Series with passed separator
get_dummies()      Split strings on the delimiter returning DataFrame of dummy variables
contains()         Return boolean array if each string contains pattern/regex
replace()          Replace occurrences of pattern/regex/string with some other string or the return value of a callable given the occurrence
removeprefix()     Remove prefix from string, i.e. only remove if string starts with prefix.
removesuffix()     Remove suffix from string, i.e. only remove if string ends with suffix.
repeat()           Duplicate values (s.str.repeat(3) equivalent to x * 3)
pad()              Add whitespace to left, right, or both sides of strings
center()           Equivalent to str.center
ljust()            Equivalent to str.ljust
rjust()            Equivalent to str.rjust
zfill()            Equivalent to str.zfill
wrap()             Split long strings into lines with length less than a given width
slice()            Slice each string in the Series
slice_replace()    Replace slice in each string with passed value
count()            Count occurrences of pattern
startswith()       Equivalent to str.startswith(pat) for each element
endswith()         Equivalent to str.endswith(pat) for each element
findall()          Compute list of all occurrences of pattern/regex for each string
match()            Call re.match on each element, returning matched groups as list
extract()          Call re.search on each element, returning DataFrame with one row for each element and one column for each regex capture group
extractall()       Call re.findall on each element, returning DataFrame with one row for each match and one column for each regex capture group
len()              Compute string lengths
strip()            Equivalent to str.strip
rstrip()           Equivalent to str.rstrip
lstrip()           Equivalent to str.lstrip
partition()        Equivalent to str.partition
rpartition()       Equivalent to str.rpartition
lower()            Equivalent to str.lower
casefold()         Equivalent to str.casefold
upper()            Equivalent to str.upper
find()             Equivalent to str.find
rfind()            Equivalent to str.rfind
index()            Equivalent to str.index
rindex()           Equivalent to str.rindex
capitalize()       Equivalent to str.capitalize
swapcase()         Equivalent to str.swapcase
normalize()        Return Unicode normal form. Equivalent to unicodedata.normalize
translate()        Equivalent to str.translate
isalnum()          Equivalent to str.isalnum
isalpha()          Equivalent to str.isalpha
isdigit()          Equivalent to str.isdigit
isspace()          Equivalent to str.isspace
islower()          Equivalent to str.islower
isupper()          Equivalent to str.isupper
istitle()          Equivalent to str.istitle
isnumeric()        Equivalent to str.isnumeric
isdecimal()        Equivalent to str.isdecimal
| 723
| 896
|
Cleaning column names in pandas
I have a Dataframe I receive from a crawler that I am importing into a database for long-term storage.
The problem I am running into is that a large number of the various dataframes have uppercase characters and whitespace in their column names.
I have a fix for it but I was wondering if it can be done any cleaner than this:
def clean_columns(dataframe):
for column in dataframe:
dataframe.rename(columns = {column : column.lower().replace(" ", "_")},
inplace = 1)
return dataframe
print(dataframe.columns)
Index(['Daily Foo', 'Weekly Bar'])
dataframe = clean_columns(dataframe)
print(dataframe.columns)
Index(['daily_foo', 'weekly_bar'])
|
68,080,572
|
Python: how to drop columns if they contain all negative values?
|
<p>I have a dataframe that looks like the following</p>
<pre><code>df
A B C D E
0 -1 -3 0 5 -2
1 3 -2 -1 -4 -5
2 0 -4 -3 -2 -1
</code></pre>
<p>I want to drop the columns that contain all negative values and save them in a second dataframe. In this way I would like to have</p>
<pre><code>df
A C D
0 -1 0 5
1 3 -1 -4
2 0 -3 -2
df2
B E
0 -3 -2
1 -2 -5
2 -4 -1
</code></pre>
| 68,080,599
| 2021-06-22T09:00:40.800000
| 1
| 1
| 1
| 185
|
python|pandas
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.lt.html" rel="nofollow noreferrer"><code>DataFrame.lt</code></a> for less like <code>0</code> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.all.html" rel="nofollow noreferrer"><code>DataFrame.all</code></a> for test all <code>True</code>s, then filter in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>, here <code>:</code> means get all rows and columns by mask:</p>
<pre><code>m = df.lt(0).all()
df1 = df.loc[:, ~m]
df2 = df.loc[:, m]
</code></pre>
<p>Or invert logic for test at least one <code>True</code>s by greater or equal value:</p>
<pre><code>m = df.ge(0).any()
df1 = df.loc[:, m]
df2 = df.loc[:, ~m]
</code></pre>
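As a quick check, here is the mask approach applied to the frame from the question (a sketch; only the resulting column lists are printed):

```python
import pandas as pd

df = pd.DataFrame(
    {"A": [-1, 3, 0], "B": [-3, -2, -4], "C": [0, -1, -3],
     "D": [5, -4, -2], "E": [-2, -5, -1]}
)

m = df.lt(0).all()      # True for columns where every value is negative
df2 = df.loc[:, m]      # the all-negative columns
df1 = df.loc[:, ~m]     # everything else

print(df1.columns.tolist())  # ['A', 'C', 'D']
print(df2.columns.tolist())  # ['B', 'E']
```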
| 2021-06-22T09:02:22.107000
| 4
|
https://pandas.pydata.org/docs/user_guide/visualization.html
|
Chart visualization#
Chart visualization#
Use DataFrame.lt for less like 0 and DataFrame.all for test all Trues, then filter in DataFrame.loc, here : means get all rows and columns by mask:
m = df.lt(0).all()
df1 = df.loc[:, ~m]
df2 = df.loc[:, m]
Or invert logic for test at least one Trues by greater or equal value:
m = df.ge(0).any()
df1 = df.loc[:, m]
df2 = df.loc[:, ~m]
Note
The examples below assume that you’re using Jupyter.
This section demonstrates visualization through charting. For information on
visualization of tabular data please see the section on Table Visualization.
We use the standard convention for referencing the matplotlib API:
In [1]: import matplotlib.pyplot as plt
In [2]: plt.close("all")
We provide the basics in pandas to easily create decent looking plots.
See the ecosystem section for visualization
libraries that go beyond the basics documented here.
Note
All calls to np.random are seeded with 123456.
Basic plotting: plot#
We will demonstrate the basics, see the cookbook for
some advanced strategies.
The plot method on Series and DataFrame is just a simple wrapper around
plt.plot():
In [3]: ts = pd.Series(np.random.randn(1000), index=pd.date_range("1/1/2000", periods=1000))
In [4]: ts = ts.cumsum()
In [5]: ts.plot();
If the index consists of dates, it calls gcf().autofmt_xdate()
to try to format the x-axis nicely as per above.
On DataFrame, plot() is a convenience to plot all of the columns with labels:
In [6]: df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=list("ABCD"))
In [7]: df = df.cumsum()
In [8]: plt.figure();
In [9]: df.plot();
You can plot one column versus another using the x and y keywords in
plot():
In [10]: df3 = pd.DataFrame(np.random.randn(1000, 2), columns=["B", "C"]).cumsum()
In [11]: df3["A"] = pd.Series(list(range(len(df))))
In [12]: df3.plot(x="A", y="B");
Note
For more formatting and styling options, see
formatting below.
Other plots#
Plotting methods allow for a handful of plot styles other than the
default line plot. These methods can be provided as the kind
keyword argument to plot(), and include:
‘bar’ or ‘barh’ for bar plots
‘hist’ for histogram
‘box’ for boxplot
‘kde’ or ‘density’ for density plots
‘area’ for area plots
‘scatter’ for scatter plots
‘hexbin’ for hexagonal bin plots
‘pie’ for pie plots
For example, a bar plot can be created the following way:
In [13]: plt.figure();
In [14]: df.iloc[5].plot(kind="bar");
You can also create these other plots using the methods DataFrame.plot.<kind> instead of providing the kind keyword argument. This makes it easier to discover plot methods and the specific arguments they use:
In [15]: df = pd.DataFrame()
In [16]: df.plot.<TAB> # noqa: E225, E999
df.plot.area df.plot.barh df.plot.density df.plot.hist df.plot.line df.plot.scatter
df.plot.bar df.plot.box df.plot.hexbin df.plot.kde df.plot.pie
In addition to these kind s, there are the DataFrame.hist(),
and DataFrame.boxplot() methods, which use a separate interface.
Finally, there are several plotting functions in pandas.plotting
that take a Series or DataFrame as an argument. These
include:
Scatter Matrix
Andrews Curves
Parallel Coordinates
Lag Plot
Autocorrelation Plot
Bootstrap Plot
RadViz
Plots may also be adorned with errorbars
or tables.
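For instance, a small sketch of one of the pandas.plotting helpers listed above (random data, purely illustrative):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix

df = pd.DataFrame(np.random.randn(200, 3), columns=["a", "b", "c"])
scatter_matrix(df, alpha=0.5, figsize=(6, 6), diagonal="kde")
plt.show()
```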
Bar plots#
For labeled, non-time series data, you may wish to produce a bar plot:
In [17]: plt.figure();
In [18]: df.iloc[5].plot.bar();
In [19]: plt.axhline(0, color="k");
Calling a DataFrame’s plot.bar() method produces a multiple
bar plot:
In [20]: df2 = pd.DataFrame(np.random.rand(10, 4), columns=["a", "b", "c", "d"])
In [21]: df2.plot.bar();
To produce a stacked bar plot, pass stacked=True:
In [22]: df2.plot.bar(stacked=True);
To get horizontal bar plots, use the barh method:
In [23]: df2.plot.barh(stacked=True);
Histograms#
Histograms can be drawn by using the DataFrame.plot.hist() and Series.plot.hist() methods.
In [24]: df4 = pd.DataFrame(
....: {
....: "a": np.random.randn(1000) + 1,
....: "b": np.random.randn(1000),
....: "c": np.random.randn(1000) - 1,
....: },
....: columns=["a", "b", "c"],
....: )
....:
In [25]: plt.figure();
In [26]: df4.plot.hist(alpha=0.5);
A histogram can be stacked using stacked=True. Bin size can be changed
using the bins keyword.
In [27]: plt.figure();
In [28]: df4.plot.hist(stacked=True, bins=20);
You can pass other keywords supported by matplotlib hist. For example,
horizontal and cumulative histograms can be drawn by
orientation='horizontal' and cumulative=True.
In [29]: plt.figure();
In [30]: df4["a"].plot.hist(orientation="horizontal", cumulative=True);
See the hist method and the
matplotlib hist documentation for more.
The existing interface DataFrame.hist to plot histograms can still be used.
In [31]: plt.figure();
In [32]: df["A"].diff().hist();
DataFrame.hist() plots the histograms of the columns on multiple
subplots:
In [33]: plt.figure();
In [34]: df.diff().hist(color="k", alpha=0.5, bins=50);
The by keyword can be specified to plot grouped histograms:
In [35]: data = pd.Series(np.random.randn(1000))
In [36]: data.hist(by=np.random.randint(0, 4, 1000), figsize=(6, 4));
In addition, the by keyword can also be specified in DataFrame.plot.hist().
Changed in version 1.4.0.
In [37]: data = pd.DataFrame(
....: {
....: "a": np.random.choice(["x", "y", "z"], 1000),
....: "b": np.random.choice(["e", "f", "g"], 1000),
....: "c": np.random.randn(1000),
....: "d": np.random.randn(1000) - 1,
....: },
....: )
....:
In [38]: data.plot.hist(by=["a", "b"], figsize=(10, 5));
Box plots#
Boxplot can be drawn calling Series.plot.box() and DataFrame.plot.box(),
or DataFrame.boxplot() to visualize the distribution of values within each column.
For instance, here is a boxplot representing five trials of 10 observations of
a uniform random variable on [0,1).
In [39]: df = pd.DataFrame(np.random.rand(10, 5), columns=["A", "B", "C", "D", "E"])
In [40]: df.plot.box();
Boxplot can be colorized by passing the color keyword. You can pass a dict
whose keys are boxes, whiskers, medians and caps.
If some keys are missing in the dict, default colors are used
for the corresponding artists. Also, boxplot has a sym keyword to specify the flier style.
When you pass another type of argument via the color keyword, it will be passed
directly to matplotlib to colorize all the boxes, whiskers, medians and caps.
The colors are applied to every box to be drawn. If you want
more complicated colorization, you can get each drawn artist by passing
return_type.
In [41]: color = {
....: "boxes": "DarkGreen",
....: "whiskers": "DarkOrange",
....: "medians": "DarkBlue",
....: "caps": "Gray",
....: }
....:
In [42]: df.plot.box(color=color, sym="r+");
Also, you can pass other keywords supported by matplotlib boxplot.
For example, horizontal and custom-positioned boxplot can be drawn by
vert=False and positions keywords.
In [43]: df.plot.box(vert=False, positions=[1, 4, 5, 6, 8]);
See the boxplot method and the
matplotlib boxplot documentation for more.
The existing interface DataFrame.boxplot to plot boxplots can still be used.
In [44]: df = pd.DataFrame(np.random.rand(10, 5))
In [45]: plt.figure();
In [46]: bp = df.boxplot()
You can create a stratified boxplot using the by keyword argument to create
groupings. For instance,
In [47]: df = pd.DataFrame(np.random.rand(10, 2), columns=["Col1", "Col2"])
In [48]: df["X"] = pd.Series(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
In [49]: plt.figure();
In [50]: bp = df.boxplot(by="X")
You can also pass a subset of columns to plot, as well as group by multiple
columns:
In [51]: df = pd.DataFrame(np.random.rand(10, 3), columns=["Col1", "Col2", "Col3"])
In [52]: df["X"] = pd.Series(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
In [53]: df["Y"] = pd.Series(["A", "B", "A", "B", "A", "B", "A", "B", "A", "B"])
In [54]: plt.figure();
In [55]: bp = df.boxplot(column=["Col1", "Col2"], by=["X", "Y"])
You could also create groupings with DataFrame.plot.box(), for instance:
Changed in version 1.4.0.
In [56]: df = pd.DataFrame(np.random.rand(10, 3), columns=["Col1", "Col2", "Col3"])
In [57]: df["X"] = pd.Series(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
In [58]: plt.figure();
In [59]: bp = df.plot.box(column=["Col1", "Col2"], by="X")
In boxplot, the return type can be controlled by the return_type keyword. The valid choices are {"axes", "dict", "both", None}.
Faceting, created by DataFrame.boxplot with the by
keyword, will affect the output type as well:
return_type
Faceted
Output type
None
No
axes
None
Yes
2-D ndarray of axes
'axes'
No
axes
'axes'
Yes
Series of axes
'dict'
No
dict of artists
'dict'
Yes
Series of dicts of artists
'both'
No
namedtuple
'both'
Yes
Series of namedtuples
Groupby.boxplot always returns a Series of return_type.
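The table above can be put to use directly. Here is a minimal sketch (not from the original docs; df_rt is a hypothetical frame) of grabbing the drawn artists via return_type="both" and restyling the medians afterwards:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df_rt = pd.DataFrame(np.random.rand(10, 5), columns=list("ABCDE"))
bp = df_rt.boxplot(return_type="both")   # namedtuple of (ax, lines)
for line in bp.lines["medians"]:         # "lines" is a dict of matplotlib artists
    line.set_color("red")
plt.show()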
In [60]: np.random.seed(1234)
In [61]: df_box = pd.DataFrame(np.random.randn(50, 2))
In [62]: df_box["g"] = np.random.choice(["A", "B"], size=50)
In [63]: df_box.loc[df_box["g"] == "B", 1] += 3
In [64]: bp = df_box.boxplot(by="g")
The subplots above are split by the numeric columns first, then the value of
the g column. Below the subplots are first split by the value of g,
then by the numeric columns.
In [65]: bp = df_box.groupby("g").boxplot()
Area plot#
You can create area plots with Series.plot.area() and DataFrame.plot.area().
Area plots are stacked by default. To produce a stacked area plot, each column must contain either all positive or all negative values.
When the input data contains NaN, it will be automatically filled with 0. If you want to drop or fill with different values, use dataframe.dropna() or dataframe.fillna() before calling plot.
In [66]: df = pd.DataFrame(np.random.rand(10, 4), columns=["a", "b", "c", "d"])
In [67]: df.plot.area();
To produce an unstacked plot, pass stacked=False. Alpha value is set to 0.5 unless otherwise specified:
In [68]: df.plot.area(stacked=False);
Scatter plot#
A scatter plot can be drawn using the DataFrame.plot.scatter() method.
Scatter plots require numeric columns for the x and y axes.
These can be specified by the x and y keywords.
In [69]: df = pd.DataFrame(np.random.rand(50, 4), columns=["a", "b", "c", "d"])
In [70]: df["species"] = pd.Categorical(
....: ["setosa"] * 20 + ["versicolor"] * 20 + ["virginica"] * 10
....: )
....:
In [71]: df.plot.scatter(x="a", y="b");
To plot multiple column groups on a single axes, repeat the plot method while specifying the target ax.
It is recommended to specify the color and label keywords to distinguish each group.
In [72]: ax = df.plot.scatter(x="a", y="b", color="DarkBlue", label="Group 1")
In [73]: df.plot.scatter(x="c", y="d", color="DarkGreen", label="Group 2", ax=ax);
The keyword c may be given as the name of a column to provide colors for
each point:
In [74]: df.plot.scatter(x="a", y="b", c="c", s=50);
If a categorical column is passed to c, then a discrete colorbar will be produced:
New in version 1.3.0.
In [75]: df.plot.scatter(x="a", y="b", c="species", cmap="viridis", s=50);
You can pass other keywords supported by matplotlib
scatter. The example below shows a
bubble chart using a column of the DataFrame as the bubble size.
In [76]: df.plot.scatter(x="a", y="b", s=df["c"] * 200);
See the scatter method and the
matplotlib scatter documentation for more.
Hexagonal bin plot#
You can create hexagonal bin plots with DataFrame.plot.hexbin().
Hexbin plots can be a useful alternative to scatter plots if your data are
too dense to plot each point individually.
In [77]: df = pd.DataFrame(np.random.randn(1000, 2), columns=["a", "b"])
In [78]: df["b"] = df["b"] + np.arange(1000)
In [79]: df.plot.hexbin(x="a", y="b", gridsize=25);
A useful keyword argument is gridsize; it controls the number of hexagons
in the x-direction, and defaults to 100. A larger gridsize means more, smaller
bins.
By default, a histogram of the counts around each (x, y) point is computed.
You can specify alternative aggregations by passing values to the C and
reduce_C_function arguments. C specifies the value at each (x, y) point
and reduce_C_function is a function of one argument that reduces all the
values in a bin to a single number (e.g. mean, max, sum, std). In this
example the positions are given by columns a and b, while the value is
given by column z. The bins are aggregated with NumPy’s max function.
In [80]: df = pd.DataFrame(np.random.randn(1000, 2), columns=["a", "b"])
In [81]: df["b"] = df["b"] + np.arange(1000)
In [82]: df["z"] = np.random.uniform(0, 3, 1000)
In [83]: df.plot.hexbin(x="a", y="b", C="z", reduce_C_function=np.max, gridsize=25);
See the hexbin method and the
matplotlib hexbin documentation for more.
Pie plot#
You can create a pie plot with DataFrame.plot.pie() or Series.plot.pie().
If your data includes any NaN, they will be automatically filled with 0.
A ValueError will be raised if there are any negative values in your data.
In [84]: series = pd.Series(3 * np.random.rand(4), index=["a", "b", "c", "d"], name="series")
In [85]: series.plot.pie(figsize=(6, 6));
For pie plots it’s best to use square figures, i.e. a figure aspect ratio 1.
You can create the figure with equal width and height, or force the aspect ratio
to be equal after plotting by calling ax.set_aspect('equal') on the returned
axes object.
Note that a pie plot with a DataFrame requires that you either specify a
target column with the y argument or pass subplots=True. When y is
specified, a pie plot of the selected column will be drawn. If subplots=True is
specified, pie plots for each column are drawn as subplots. A legend will be
drawn in each pie plot by default; specify legend=False to hide it.
In [86]: df = pd.DataFrame(
....: 3 * np.random.rand(4, 2), index=["a", "b", "c", "d"], columns=["x", "y"]
....: )
....:
In [87]: df.plot.pie(subplots=True, figsize=(8, 4));
You can use the labels and colors keywords to specify the labels and colors of each wedge.
Warning
Most pandas plots use the label and color arguments (note the lack of “s” on those).
To be consistent with matplotlib.pyplot.pie() you must use labels and colors.
If you want to hide wedge labels, specify labels=None.
If fontsize is specified, the value will be applied to wedge labels.
Also, other keywords supported by matplotlib.pyplot.pie() can be used.
In [88]: series.plot.pie(
....: labels=["AA", "BB", "CC", "DD"],
....: colors=["r", "g", "b", "c"],
....: autopct="%.2f",
....: fontsize=20,
....: figsize=(6, 6),
....: );
....:
If you pass values whose sum total is less than 1.0 they will be rescaled so that they sum to 1.
In [89]: series = pd.Series([0.1] * 4, index=["a", "b", "c", "d"], name="series2")
In [90]: series.plot.pie(figsize=(6, 6));
See the matplotlib pie documentation for more.
Plotting with missing data#
pandas tries to be pragmatic about plotting DataFrames or Series
that contain missing data. Missing values are dropped, left out, or filled
depending on the plot type.
Plot Type        NaN Handling
Line             Leave gaps at NaNs
Line (stacked)   Fill 0's
Bar              Fill 0's
Scatter          Drop NaNs
Histogram        Drop NaNs (column-wise)
Box              Drop NaNs (column-wise)
Area             Fill 0's
KDE              Drop NaNs (column-wise)
Hexbin           Drop NaNs
Pie              Fill 0's
If any of these defaults are not what you want, or if you want to be
explicit about how missing values are handled, consider using
fillna() or dropna()
before plotting.
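For instance, a minimal sketch (with a hypothetical df_na frame) of handling NaN explicitly before plotting:

import numpy as np
import pandas as pd

df_na = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [4.0, 5.0, np.nan]})
df_na.fillna(0).plot.area()   # treat missing values as 0
df_na.dropna().plot.line()    # or drop incomplete rows entirely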
Plotting tools#
These functions can be imported from pandas.plotting
and take a Series or DataFrame as an argument.
Scatter matrix plot#
You can create a scatter plot matrix using the
scatter_matrix method in pandas.plotting:
In [91]: from pandas.plotting import scatter_matrix
In [92]: df = pd.DataFrame(np.random.randn(1000, 4), columns=["a", "b", "c", "d"])
In [93]: scatter_matrix(df, alpha=0.2, figsize=(6, 6), diagonal="kde");
Density plot#
You can create density plots using the Series.plot.kde() and DataFrame.plot.kde() methods.
In [94]: ser = pd.Series(np.random.randn(1000))
In [95]: ser.plot.kde();
Andrews curves#
Andrews curves allow one to plot multivariate data as a large number
of curves that are created using the attributes of samples as coefficients
for Fourier series, see the Wikipedia entry
for more information. By coloring these curves differently for each class
it is possible to visualize data clustering. Curves belonging to samples
of the same class will usually be closer together and form larger structures.
Note: The “Iris” dataset is available here.
In [96]: from pandas.plotting import andrews_curves
In [97]: data = pd.read_csv("data/iris.data")
In [98]: plt.figure();
In [99]: andrews_curves(data, "Name");
Parallel coordinates#
Parallel coordinates is a plotting technique for plotting multivariate data,
see the Wikipedia entry
for an introduction.
Parallel coordinates allows one to see clusters in data and to estimate other statistics visually.
Using parallel coordinates points are represented as connected line segments.
Each vertical line represents one attribute. One set of connected line segments
represents one data point. Points that tend to cluster will appear closer together.
In [100]: from pandas.plotting import parallel_coordinates
In [101]: data = pd.read_csv("data/iris.data")
In [102]: plt.figure();
In [103]: parallel_coordinates(data, "Name");
Lag plot#
Lag plots are used to check if a data set or time series is random. Random
data should not exhibit any structure in the lag plot. Non-random structure
implies that the underlying data are not random. The lag argument may
be passed, and when lag=1 the plot is essentially data[:-1] vs.
data[1:].
In [104]: from pandas.plotting import lag_plot
In [105]: plt.figure();
In [106]: spacing = np.linspace(-99 * np.pi, 99 * np.pi, num=1000)
In [107]: data = pd.Series(0.1 * np.random.rand(1000) + 0.9 * np.sin(spacing))
In [108]: lag_plot(data);
Autocorrelation plot#
Autocorrelation plots are often used for checking randomness in time series.
This is done by computing autocorrelations for data values at varying time lags.
If time series is random, such autocorrelations should be near zero for any and
all time-lag separations. If time series is non-random then one or more of the
autocorrelations will be significantly non-zero. The horizontal lines displayed
in the plot correspond to 95% and 99% confidence bands. The dashed line is the
99% confidence band. See the
Wikipedia entry for more about
autocorrelation plots.
In [109]: from pandas.plotting import autocorrelation_plot
In [110]: plt.figure();
In [111]: spacing = np.linspace(-9 * np.pi, 9 * np.pi, num=1000)
In [112]: data = pd.Series(0.7 * np.random.rand(1000) + 0.3 * np.sin(spacing))
In [113]: autocorrelation_plot(data);
Bootstrap plot#
Bootstrap plots are used to visually assess the uncertainty of a statistic, such
as mean, median, midrange, etc. A random subset of a specified size is selected
from a data set, the statistic in question is computed for this subset and the
process is repeated a specified number of times. Resulting plots and histograms
are what constitutes the bootstrap plot.
In [114]: from pandas.plotting import bootstrap_plot
In [115]: data = pd.Series(np.random.rand(1000))
In [116]: bootstrap_plot(data, size=50, samples=500, color="grey");
RadViz#
RadViz is a way of visualizing multi-variate data. It is based on a simple
spring tension minimization algorithm. Basically you set up a bunch of points in
a plane. In our case they are equally spaced on a unit circle. Each point
represents a single attribute. You then pretend that each sample in the data set
is attached to each of these points by a spring, the stiffness of which is
proportional to the numerical value of that attribute (values are normalized to
the unit interval). The point in the plane where our sample settles (where the
forces acting on it are in equilibrium) is where a dot representing
our sample is drawn. Depending on which class the sample belongs to, it is
colored differently.
See the R package Radviz
for more information.
Note: The “Iris” dataset is available here.
In [117]: from pandas.plotting import radviz
In [118]: data = pd.read_csv("data/iris.data")
In [119]: plt.figure();
In [120]: radviz(data, "Name");
Plot formatting#
Setting the plot style#
From version 1.5 and up, matplotlib offers a range of pre-configured plotting styles. Setting the
style can be used to easily give plots the general look that you want.
Setting the style is as easy as calling matplotlib.style.use(my_plot_style) before
creating your plot. For example you could write matplotlib.style.use('ggplot') for ggplot-style
plots.
You can see the various available style names at matplotlib.style.available and it’s very
easy to try them out.
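A minimal sketch of listing the bundled styles and activating one (the available names depend on your matplotlib version):

import matplotlib.style

print(matplotlib.style.available)   # names vary with the installed matplotlib
matplotlib.style.use("ggplot")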
General plot style arguments#
Most plotting methods have a set of keyword arguments that control the
layout and formatting of the returned plot:
In [121]: plt.figure();
In [122]: ts.plot(style="k--", label="Series");
For each kind of plot (e.g. line, bar, scatter), any additional keyword
arguments are passed along to the corresponding matplotlib function
(ax.plot(),
ax.bar(),
ax.scatter()). These can be used
to control additional styling, beyond what pandas provides.
Controlling the legend#
You may set the legend argument to False to hide the legend, which is
shown by default.
In [123]: df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=list("ABCD"))
In [124]: df = df.cumsum()
In [125]: df.plot(legend=False);
Controlling the labels#
New in version 1.1.0.
You may set the xlabel and ylabel arguments to give the plot custom labels
for the x and y axes. By default, pandas will pick up the index name as xlabel, while
leaving ylabel empty.
In [126]: df.plot();
In [127]: df.plot(xlabel="new x", ylabel="new y");
Scales#
You may pass logy to get a log-scale Y axis.
In [128]: ts = pd.Series(np.random.randn(1000), index=pd.date_range("1/1/2000", periods=1000))
In [129]: ts = np.exp(ts.cumsum())
In [130]: ts.plot(logy=True);
See also the logx and loglog keyword arguments.
Plotting on a secondary y-axis#
To plot data on a secondary y-axis, use the secondary_y keyword:
In [131]: df["A"].plot();
In [132]: df["B"].plot(secondary_y=True, style="g");
To plot some columns in a DataFrame, give the column names to the secondary_y
keyword:
In [133]: plt.figure();
In [134]: ax = df.plot(secondary_y=["A", "B"])
In [135]: ax.set_ylabel("CD scale");
In [136]: ax.right_ax.set_ylabel("AB scale");
Note that the columns plotted on the secondary y-axis are automatically marked
with “(right)” in the legend. To turn off the automatic marking, use the
mark_right=False keyword:
In [137]: plt.figure();
In [138]: df.plot(secondary_y=["A", "B"], mark_right=False);
Custom formatters for timeseries plots#
Changed in version 1.0.0.
pandas provides custom formatters for timeseries plots. These change the
formatting of the axis labels for dates and times. By default,
the custom formatters are applied only to plots created by pandas with
DataFrame.plot() or Series.plot(). To have them apply to all
plots, including those made by matplotlib, set the option
pd.options.plotting.matplotlib.register_converters = True or use
pandas.plotting.register_matplotlib_converters().
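A minimal sketch of opting in for all plots (both forms below are equivalent):

import pandas as pd
from pandas.plotting import register_matplotlib_converters

pd.options.plotting.matplotlib.register_converters = True
# or, equivalently:
register_matplotlib_converters()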
Suppressing tick resolution adjustment#
pandas includes automatic tick resolution adjustment for regular frequency
time-series data. For limited cases where pandas cannot infer the frequency
information (e.g., in an externally created twinx), you can choose to
suppress this behavior for alignment purposes.
Here is the default behavior; notice how the x-axis tick labeling is performed:
In [139]: plt.figure();
In [140]: df["A"].plot();
Using the x_compat parameter, you can suppress this behavior:
In [141]: plt.figure();
In [142]: df["A"].plot(x_compat=True);
If you have more than one plot that needs to be suppressed, the use method
in pandas.plotting.plot_params can be used in a with statement:
In [143]: plt.figure();
In [144]: with pd.plotting.plot_params.use("x_compat", True):
.....: df["A"].plot(color="r")
.....: df["B"].plot(color="g")
.....: df["C"].plot(color="b")
.....:
Automatic date tick adjustment#
TimedeltaIndex now uses the native matplotlib
tick locator methods; it is useful to call matplotlib's automatic
date tick adjustment for figures whose tick labels overlap.
See the autofmt_xdate method and the
matplotlib documentation for more.
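A minimal sketch (reusing the df defined above, which has a DatetimeIndex):

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
df["A"].plot(ax=ax)    # assumes the date-indexed df from above
fig.autofmt_xdate()    # rotate and right-align overlapping date tick labels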
Subplots#
Each Series in a DataFrame can be plotted on a different axis
with the subplots keyword:
In [145]: df.plot(subplots=True, figsize=(6, 6));
Using layout and targeting multiple axes#
The layout of subplots can be specified by the layout keyword. It can accept
(rows, columns). The layout keyword can be used in
hist and boxplot also. If the input is invalid, a ValueError will be raised.
The number of axes that can be contained in the rows x columns specified by layout must be
at least the number of required subplots. If layout can contain more axes than required,
blank axes are not drawn. Similar to a NumPy array’s reshape method, you
can use -1 for one dimension to automatically calculate the number of rows
or columns needed, given the other.
In [146]: df.plot(subplots=True, layout=(2, 3), figsize=(6, 6), sharex=False);
The above example is identical to using:
In [147]: df.plot(subplots=True, layout=(2, -1), figsize=(6, 6), sharex=False);
The required number of columns (3) is inferred from the number of series to plot
and the given number of rows (2).
You can pass multiple axes created beforehand as list-like via ax keyword.
This allows more complicated layouts.
The number of axes passed must match the number of subplots being drawn.
When multiple axes are passed via the ax keyword, the layout, sharex and sharey keywords
don't affect the output. You should explicitly pass sharex=False and sharey=False,
otherwise you will see a warning.
In [148]: fig, axes = plt.subplots(4, 4, figsize=(9, 9))
In [149]: plt.subplots_adjust(wspace=0.5, hspace=0.5)
In [150]: target1 = [axes[0][0], axes[1][1], axes[2][2], axes[3][3]]
In [151]: target2 = [axes[3][0], axes[2][1], axes[1][2], axes[0][3]]
In [152]: df.plot(subplots=True, ax=target1, legend=False, sharex=False, sharey=False);
In [153]: (-df).plot(subplots=True, ax=target2, legend=False, sharex=False, sharey=False);
Another option is passing an ax argument to Series.plot() to plot on a particular axis:
In [154]: fig, axes = plt.subplots(nrows=2, ncols=2)
In [155]: plt.subplots_adjust(wspace=0.2, hspace=0.5)
In [156]: df["A"].plot(ax=axes[0, 0]);
In [157]: axes[0, 0].set_title("A");
In [158]: df["B"].plot(ax=axes[0, 1]);
In [159]: axes[0, 1].set_title("B");
In [160]: df["C"].plot(ax=axes[1, 0]);
In [161]: axes[1, 0].set_title("C");
In [162]: df["D"].plot(ax=axes[1, 1]);
In [163]: axes[1, 1].set_title("D");
Plotting with error bars#
Plotting with error bars is supported in DataFrame.plot() and Series.plot().
Horizontal and vertical error bars can be supplied to the xerr and yerr keyword arguments to plot(). The error values can be specified using a variety of formats:
As a DataFrame or dict of errors with column names matching the columns attribute of the plotting DataFrame or matching the name attribute of the Series.
As a str indicating which column of the plotting DataFrame contains the error values (see the short sketch after this list).
As raw values (list, tuple, or np.ndarray). Must be the same length as the plotting DataFrame/Series.
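A minimal sketch of the column-name form mentioned above (df_err and its columns are hypothetical):

import pandas as pd

df_err = pd.DataFrame({"mean": [3.0, 5.0, 4.0], "std": [0.5, 1.0, 0.7]})
df_err.plot.bar(y="mean", yerr="std", capsize=4)   # "std" names an error column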
Here is an example of one way to easily plot group means with standard deviations from the raw data.
# Generate the data
In [164]: ix3 = pd.MultiIndex.from_arrays(
.....: [
.....: ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"],
.....: ["foo", "foo", "foo", "bar", "bar", "foo", "foo", "bar", "bar", "bar"],
.....: ],
.....: names=["letter", "word"],
.....: )
.....:
In [165]: df3 = pd.DataFrame(
.....: {
.....: "data1": [9, 3, 2, 4, 3, 2, 4, 6, 3, 2],
.....: "data2": [9, 6, 5, 7, 5, 4, 5, 6, 5, 1],
.....: },
.....: index=ix3,
.....: )
.....:
# Group by index labels and take the means and standard deviations
# for each group
In [166]: gp3 = df3.groupby(level=("letter", "word"))
In [167]: means = gp3.mean()
In [168]: errors = gp3.std()
In [169]: means
Out[169]:
data1 data2
letter word
a bar 3.500000 6.000000
foo 4.666667 6.666667
b bar 3.666667 4.000000
foo 3.000000 4.500000
In [170]: errors
Out[170]:
data1 data2
letter word
a bar 0.707107 1.414214
foo 3.785939 2.081666
b bar 2.081666 2.645751
foo 1.414214 0.707107
# Plot
In [171]: fig, ax = plt.subplots()
In [172]: means.plot.bar(yerr=errors, ax=ax, capsize=4, rot=0);
Asymmetrical error bars are also supported; however, raw error values must be provided in this case. For an N-length Series, a 2xN array should be provided indicating lower and upper (or left and right) errors. For an MxN DataFrame, asymmetrical errors should be in an Mx2xN array.
Here is an example of one way to plot the min/max range using asymmetrical error bars.
In [173]: mins = gp3.min()
In [174]: maxs = gp3.max()
# errors should be positive, and defined in the order of lower, upper
In [175]: errors = [[means[c] - mins[c], maxs[c] - means[c]] for c in df3.columns]
# Plot
In [176]: fig, ax = plt.subplots()
In [177]: means.plot.bar(yerr=errors, ax=ax, capsize=4, rot=0);
Plotting tables#
Plotting with matplotlib table is now supported in DataFrame.plot() and Series.plot() with a table keyword. The table keyword can accept bool, DataFrame or Series. The simple way to draw a table is to specify table=True. Data will be transposed to meet matplotlib’s default layout.
In [178]: fig, ax = plt.subplots(1, 1, figsize=(7, 6.5))
In [179]: df = pd.DataFrame(np.random.rand(5, 3), columns=["a", "b", "c"])
In [180]: ax.xaxis.tick_top() # Display x-axis ticks on top.
In [181]: df.plot(table=True, ax=ax);
You can also pass a different DataFrame or Series to the
table keyword. The data will be drawn as displayed by the print method
(not transposed automatically). If required, it should be transposed manually
as seen in the example below.
In [182]: fig, ax = plt.subplots(1, 1, figsize=(7, 6.75))
In [183]: ax.xaxis.tick_top() # Display x-axis ticks on top.
In [184]: df.plot(table=np.round(df.T, 2), ax=ax);
There also exists a helper function pandas.plotting.table, which creates a
table from DataFrame or Series, and adds it to an
matplotlib.Axes instance. This function can accept keywords which the
matplotlib table has.
In [185]: from pandas.plotting import table
In [186]: fig, ax = plt.subplots(1, 1)
In [187]: table(ax, np.round(df.describe(), 2), loc="upper right", colWidths=[0.2, 0.2, 0.2]);
In [188]: df.plot(ax=ax, ylim=(0, 2), legend=None);
Note: You can get table instances on the axes using the axes.tables property for further decoration. See the matplotlib table documentation for more.
Colormaps#
A potential issue when plotting a large number of columns is that it can be
difficult to distinguish some series due to repetition in the default colors. To
remedy this, DataFrame plotting supports the use of the colormap argument,
which accepts either a Matplotlib colormap
or a string that is a name of a colormap registered with Matplotlib. A
visualization of the default matplotlib colormaps is available here.
As matplotlib does not directly support colormaps for line-based plots, the
colors are selected based on an even spacing determined by the number of columns
in the DataFrame. There is no consideration made for background color, so some
colormaps will produce lines that are not easily visible.
To use the cubehelix colormap, we can pass colormap='cubehelix'.
In [189]: df = pd.DataFrame(np.random.randn(1000, 10), index=ts.index)
In [190]: df = df.cumsum()
In [191]: plt.figure();
In [192]: df.plot(colormap="cubehelix");
Alternatively, we can pass the colormap itself:
In [193]: from matplotlib import cm
In [194]: plt.figure();
In [195]: df.plot(colormap=cm.cubehelix);
Colormaps can also be used in other plot types, like bar charts:
In [196]: dd = pd.DataFrame(np.random.randn(10, 10)).applymap(abs)
In [197]: dd = dd.cumsum()
In [198]: plt.figure();
In [199]: dd.plot.bar(colormap="Greens");
Parallel coordinates charts:
In [200]: plt.figure();
In [201]: parallel_coordinates(data, "Name", colormap="gist_rainbow");
Andrews curves charts:
In [202]: plt.figure();
In [203]: andrews_curves(data, "Name", colormap="winter");
Plotting directly with Matplotlib#
In some situations it may still be preferable or necessary to prepare plots
directly with matplotlib, for instance when a certain type of plot or
customization is not (yet) supported by pandas. Series and DataFrame
objects behave like arrays and can therefore be passed directly to
matplotlib functions without explicit casts.
pandas also automatically registers formatters and locators that recognize date
indices, thereby extending date and time support to practically all plot types
available in matplotlib. Although this formatting does not provide the same
level of refinement you would get when plotting via pandas, it can be faster
when plotting a large number of points.
In [204]: price = pd.Series(
.....: np.random.randn(150).cumsum(),
.....: index=pd.date_range("2000-1-1", periods=150, freq="B"),
.....: )
.....:
In [205]: ma = price.rolling(20).mean()
In [206]: mstd = price.rolling(20).std()
In [207]: plt.figure();
In [208]: plt.plot(price.index, price, "k");
In [209]: plt.plot(ma.index, ma, "b");
In [210]: plt.fill_between(mstd.index, ma - 2 * mstd, ma + 2 * mstd, color="b", alpha=0.2);
Plotting backends#
Starting in version 0.25, pandas can be extended with third-party plotting backends. The
main idea is to let users select a plotting backend different from the default
one, which is based on Matplotlib.
This can be done by passing ‘backend.module’ as the backend argument of the plot
function. For example:
>>> Series([1, 2, 3]).plot(backend="backend.module")
Alternatively, you can set this option globally, so you don't need to specify
the keyword in each plot call. For example:
>>> pd.set_option("plotting.backend", "backend.module")
>>> pd.Series([1, 2, 3]).plot()
Or:
>>> pd.options.plotting.backend = "backend.module"
>>> pd.Series([1, 2, 3]).plot()
This would be more or less equivalent to:
>>> import backend.module
>>> backend.module.plot(pd.Series([1, 2, 3]))
The backend module can then use other visualization tools (Bokeh, Altair, hvplot,…)
to generate the plots. Some libraries implementing a backend for pandas are listed
on the ecosystem Visualization page.
Developers guide can be found at
https://pandas.pydata.org/docs/dev/development/extending.html#plotting-backends
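A small sketch (not from the docs) of checking which backend option is currently active:

import pandas as pd

print(pd.get_option("plotting.backend"))   # 'matplotlib' unless changed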
| 44
| 382
|
Python: how to drop columns if contain all negative values?
I have a dataframe that looks like the following
df
A B C D E
0 -1 -3 0 5 -2
1 3 -2 -1 -4 -5
2 0 -4 -3 -2 -1
I want to drop the columns that contain all negative values and save them in a second dataframe. In this way I would like to have
df
A C D
0 -1 0 5
1 3 -1 -4
2 0 -3 -2
df2
B E
0 -3 -2
1 -2 -5
2 -4 -1
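A minimal sketch of one way to do this (not part of the original record; the frame below reproduces the question's data):

import pandas as pd

df = pd.DataFrame({"A": [-1, 3, 0], "B": [-3, -2, -4], "C": [0, -1, -3],
                   "D": [5, -4, -2], "E": [-2, -5, -1]})
all_negative = (df < 0).all()    # True for columns where every value is negative
df2 = df.loc[:, all_negative]    # the all-negative columns
df = df.loc[:, ~all_negative]    # keep the rest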
|
64,776,923
|
Checking if column in dataframe contains any item from list of strings
|
<p>My goal is to check my dataframe column, and if that column contains items from a list of strings (matches in ex), then I want to create a new dataframe with all of those items that match.</p>
<p>With my current code I'm able to grab a list of the columns that match, however, It takes it as a list and I want to create a new dataframe with the previous information I had.</p>
<p>Here is my current code - Rather than resulting to a list I want the entire dataframe information I previously had</p>
<pre><code>matches = ['beat saber', 'half life', 'walking dead', 'population one']
checking = []
for x in hot_quest1['all_text']:
if any(z in x for z in matches):
checking.append(x)
</code></pre>
<p><a href="https://i.stack.imgur.com/Ro6HY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ro6HY.png" alt="enter image description here" /></a></p>
| 64,777,090
| 2020-11-10T21:20:26.817000
| 1
| null | 2
| 1,767
|
python|pandas
|
<p>Pandas generally allows you to filter data frames without resorting to <code>for</code> loops.</p>
<p>This is one approach that should work:</p>
<pre class="lang-py prettyprint-override"><code>matches = ['beat saber', 'half life', 'walking dead', 'population one']
# matches_regex is a regular expression meaning any of your strings:
# "beat saber|half life|walking dead|population one"
matches_regex = "|".join(matches)
# matches_bools will be a series of booleans indicating whether there was a match
# for each item in the series
matches_bools = hot_quest1.all_text.str.contains(matches_regex, regex=True)
# You can then use that series of booleans to derive a new data frame
# containing only matching rows
matched_rows = hot_quest1[matches_bools]
</code></pre>
<p>Here's the documentation for the <code>str.contains</code> method.
<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html</a></p>
| 2020-11-10T21:34:58.143000
| 4
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.isin.html
|
pandas.Series.isin#
pandas.Series.isin#
Series.isin(values)[source]#
Whether elements in Series are contained in values.
Return a boolean Series showing whether each element in the Series
matches an element in the passed sequence of values exactly.
Parameters
valuesset or list-likeThe sequence of values to test. Passing in a single string will
Pandas generally allows you to filter data frames without resorting to for loops.
This is one approach that should work:
matches = ['beat saber', 'half life', 'walking dead', 'population one']
# matches_regex is a regular expression meaning any of your strings:
# "beat saber|half life|walking dead|population one"
matches_regex = "|".join(matches)
# matches_bools will be a series of booleans indicating whether there was a match
# for each item in the series
matches_bools = hot_quest1.all_text.str.contains(matches_regex, regex=True)
# You can then use that series of booleans to derive a new data frame
# containing only matching rows
matched_rows = hot_quest1[matches_bools]
Here's the documentation for the str.contains method.
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html
raise a TypeError. Instead, turn a single string into a
list of one element.
Returns
SeriesSeries of booleans indicating if each element is in values.
Raises
TypeError
If values is a string
See also
DataFrame.isinEquivalent method on DataFrame.
Examples
>>> s = pd.Series(['lama', 'cow', 'lama', 'beetle', 'lama',
... 'hippo'], name='animal')
>>> s.isin(['cow', 'lama'])
0 True
1 True
2 True
3 False
4 True
5 False
Name: animal, dtype: bool
To invert the boolean values, use the ~ operator:
>>> ~s.isin(['cow', 'lama'])
0 False
1 False
2 False
3 True
4 False
5 True
Name: animal, dtype: bool
Passing a single string as s.isin('lama') will raise an error. Use
a list of one element instead:
>>> s.isin(['lama'])
0 True
1 False
2 True
3 False
4 True
5 False
Name: animal, dtype: bool
Strings and integers are distinct and are therefore not comparable:
>>> pd.Series([1]).isin(['1'])
0 False
dtype: bool
>>> pd.Series([1.1]).isin(['1.1'])
0 False
dtype: bool
| 352
| 1,182
|
Checking if column in dataframe contains any item from list of strings
My goal is to check my dataframe column, and if that column contains items from a list of strings (matches in ex), then I want to create a new dataframe with all of those items that match.
With my current code I'm able to grab a list of the columns that match, however, It takes it as a list and I want to create a new dataframe with the previous information I had.
Here is my current code - Rather than resulting to a list I want the entire dataframe information I previously had
matches = ['beat saber', 'half life', 'walking dead', 'population one']
checking = []
for x in hot_quest1['all_text']:
if any(z in x for z in matches):
checking.append(x)
|
61,487,840
|
efficiently check if values in one column belong to the threshold defined by two other columns
|
<p>The goal of this question is efficiently improve the execution time of the problem I will now detail:</p>
<p>I have a df like this one:</p>
<pre><code>df
| | min | max | value |
|---|------|-------|-------|
| 0 | 1.0 | 10.0 | 15 |
| 1 | 50.0 | 100.0 | 20 |
| 2 | 30.0 | 50.0 | 40 |
| 3 | 10.0 | 90.0 | 91 |
| 4 | NaN | NaN | 1000 |
</code></pre>
<p>And what I want to check is if the values of the value column are within the threshold defined by the min and max columns.</p>
<p>If min and max columns are equal to Nan then we consider that the value in column value is within the threshold.</p>
<p>To solve this I have created the following code:</p>
<pre><code>In[1]:
def boundary(row):
if row['value'] <= row['min'] or row['value'] >= row['max']:
return 'out of range'
else:
return 'ok'
</code></pre>
<pre><code>In[2]:
%%timeit
df["boundary"] = df.apply(lambda row: boundary(row), axis=1)
</code></pre>
<pre><code>Out[2]:
959 µs ± 21.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
| | min | max | value | boundary |
| - | ---- | ----- | ----- | ------------ |
| 0 | 1.0 | 10.0 | 15 | out of range |
| 1 | 50.0 | 100.0 | 20 | out of range |
| 2 | 30.0 | 50.0 | 40 | ok |
| 3 | 10.0 | 90.0 | 91 | out of range |
| 4 | NaN | NaN | 1000 | ok |
</code></pre>
<p>My question is, is there a less expensive way to solve this problem?</p>
| 61,488,035
| 2020-04-28T18:55:41.527000
| 1
| null | 0
| 235
|
python|pandas
|
<p>Try using:</p>
<pre><code>df['boundary'] = ((df['min'] < df['value']) & (df['value'] < df['max'])) | (df['min'].isna() | df['max'].isna())
</code></pre>
<p>Timings:</p>
<pre><code>771 µs ± 5.82 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
</code></pre>
<p>Versus:</p>
<pre><code>df["boundary"] = df.apply(lambda row: boundary(row), axis=1)
999 µs ± 11.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
</code></pre>
<p>You don't need to loop nor apply here because pandas will automatically line up that data on index to compare and will do this vectorized.</p>
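As a side note (not part of the original answer), the same strict-bounds check can be written with Series.between; the inclusive="neither" argument requires pandas 1.3 or newer:

within = df["value"].between(df["min"], df["max"], inclusive="neither")
df["boundary"] = within | df[["min", "max"]].isna().any(axis=1)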
| 2020-04-28T19:07:31.303000
| 4
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.replace.html
|
pandas.Series.replace#
pandas.Series.replace#
Series.replace(to_replace=None, value=_NoDefault.no_default, *, inplace=False, limit=None, regex=False, method=_NoDefault.no_default)[source]#
Replace values given in to_replace with value.
Values of the Series are replaced with other values dynamically.
This differs from updating with .loc or .iloc, which require
you to specify a location to update with some value.
Parameters
to_replacestr, regex, list, dict, Series, int, float, or NoneHow to find the values that will be replaced.
numeric, str or regex:
numeric: numeric values equal to to_replace will be
replaced with value
str: string exactly matching to_replace will be replaced
with value
Try using:
df['boundary'] = ((df['min'] < df['value']) & (df['value'] < df['max'])) | (df['min'].isna() | df['max'].isna())
Timings:
771 µs ± 5.82 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Versus:
df["boundary"] = df.apply(lambda row: boundary(row), axis=1)
999 µs ± 11.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
You don't need to loop nor apply here because pandas will automatically line up that data on index to compare and will do this vectorized.
regex: regexs matching to_replace will be replaced with
value
list of str, regex, or numeric:
First, if to_replace and value are both lists, they
must be the same length.
Second, if regex=True then all of the strings in both
lists will be interpreted as regexs otherwise they will match
directly. This doesn’t matter much for value since there
are only a few possible substitution regexes you can use.
str, regex and numeric rules apply as above.
dict:
Dicts can be used to specify different replacement values
for different existing values. For example,
{'a': 'b', 'y': 'z'} replaces the value ‘a’ with ‘b’ and
‘y’ with ‘z’. To use a dict in this way, the optional value
parameter should not be given.
For a DataFrame a dict can specify that different values
should be replaced in different columns. For example,
{'a': 1, 'b': 'z'} looks for the value 1 in column ‘a’
and the value ‘z’ in column ‘b’ and replaces these values
with whatever is specified in value. The value parameter
should not be None in this case. You can treat this as a
special case of passing two lists except that you are
specifying the column to search in.
For a DataFrame nested dictionaries, e.g.,
{'a': {'b': np.nan}}, are read as follows: look in column
‘a’ for the value ‘b’ and replace it with NaN. The optional value
parameter should not be specified to use a nested dict in this
way. You can nest regular expressions as well. Note that
column names (the top-level dictionary keys in a nested
dictionary) cannot be regular expressions.
None:
This means that the regex argument must be a string,
compiled regular expression, or list, dict, ndarray or
Series of such elements. If value is also None then
this must be a nested dictionary or Series.
See the examples section for examples of each of these.
valuescalar, dict, list, str, regex, default NoneValue to replace any values matching to_replace with.
For a DataFrame a dict of values can be used to specify which
value to use for each column (columns not in the dict will not be
filled). Regular expressions, strings and lists or dicts of such
objects are also allowed.
inplacebool, default FalseIf True, performs operation inplace and returns None.
limitint, default NoneMaximum size gap to forward or backward fill.
regexbool or same types as to_replace, default FalseWhether to interpret to_replace and/or value as regular
expressions. If this is True then to_replace must be a
string. Alternatively, this could be a regular expression or a
list, dict, or array of regular expressions in which case
to_replace must be None.
method{‘pad’, ‘ffill’, ‘bfill’}The method to use when for replacement, when to_replace is a
scalar, list or tuple and value is None.
Changed in version 0.23.0: Added to DataFrame.
Returns
SeriesObject after replacement.
Raises
AssertionError
If regex is not a bool and to_replace is not
None.
TypeError
If to_replace is not a scalar, array-like, dict, or None
If to_replace is a dict and value is not a list,
dict, ndarray, or Series
If to_replace is None and regex is not compilable
into a regular expression or is a list, dict, ndarray, or
Series.
When replacing multiple bool or datetime64 objects and
the arguments to to_replace does not match the type of the
value being replaced
ValueError
If a list or an ndarray is passed to to_replace and
value but they are not the same length.
See also
Series.fillnaFill NA values.
Series.whereReplace values based on boolean condition.
Series.str.replaceSimple string replacement.
Notes
Regex substitution is performed under the hood with re.sub. The
rules for substitution for re.sub are the same.
Regular expressions will only substitute on strings, meaning you
cannot provide, for example, a regular expression matching floating
point numbers and expect the columns in your frame that have a
numeric dtype to be matched. However, if those floating point
numbers are strings, then you can do this.
This method has a lot of options. You are encouraged to experiment
and play with this method to gain intuition about how it works.
When dict is used as the to_replace value, it is like
key(s) in the dict are the to_replace part and
value(s) in the dict are the value parameter.
Examples
Scalar `to_replace` and `value`
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.replace(1, 5)
0 5
1 2
2 3
3 4
4 5
dtype: int64
>>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4],
... 'B': [5, 6, 7, 8, 9],
... 'C': ['a', 'b', 'c', 'd', 'e']})
>>> df.replace(0, 5)
A B C
0 5 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
List-like `to_replace`
>>> df.replace([0, 1, 2, 3], 4)
A B C
0 4 5 a
1 4 6 b
2 4 7 c
3 4 8 d
4 4 9 e
>>> df.replace([0, 1, 2, 3], [4, 3, 2, 1])
A B C
0 4 5 a
1 3 6 b
2 2 7 c
3 1 8 d
4 4 9 e
>>> s.replace([1, 2], method='bfill')
0 3
1 3
2 3
3 4
4 5
dtype: int64
dict-like `to_replace`
>>> df.replace({0: 10, 1: 100})
A B C
0 10 5 a
1 100 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace({'A': 0, 'B': 5}, 100)
A B C
0 100 100 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace({'A': {0: 100, 4: 400}})
A B C
0 100 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 400 9 e
Regular expression `to_replace`
>>> df = pd.DataFrame({'A': ['bat', 'foo', 'bait'],
... 'B': ['abc', 'bar', 'xyz']})
>>> df.replace(to_replace=r'^ba.$', value='new', regex=True)
A B
0 new abc
1 foo new
2 bait xyz
>>> df.replace({'A': r'^ba.$'}, {'A': 'new'}, regex=True)
A B
0 new abc
1 foo bar
2 bait xyz
>>> df.replace(regex=r'^ba.$', value='new')
A B
0 new abc
1 foo new
2 bait xyz
>>> df.replace(regex={r'^ba.$': 'new', 'foo': 'xyz'})
A B
0 new abc
1 xyz new
2 bait xyz
>>> df.replace(regex=[r'^ba.$', 'foo'], value='new')
A B
0 new abc
1 new new
2 bait xyz
Compare the behavior of s.replace({'a': None}) and
s.replace('a', None) to understand the peculiarities
of the to_replace parameter:
>>> s = pd.Series([10, 'a', 'a', 'b', 'a'])
When one uses a dict as the to_replace value, it is like the
value(s) in the dict are equal to the value parameter.
s.replace({'a': None}) is equivalent to
s.replace(to_replace={'a': None}, value=None, method=None):
>>> s.replace({'a': None})
0 10
1 None
2 None
3 b
4 None
dtype: object
When value is not explicitly passed and to_replace is a scalar, list
or tuple, replace uses the method parameter (default ‘pad’) to do the
replacement. So this is why the ‘a’ values are being replaced by 10
in rows 1 and 2 and ‘b’ in row 4 in this case.
>>> s.replace('a')
0 10
1 10
2 10
3 b
4 b
dtype: object
On the other hand, if None is explicitly passed for value, it will
be respected:
>>> s.replace('a', None)
0 10
1 None
2 None
3 b
4 None
dtype: object
Changed in version 1.4.0: Previously the explicit None was silently ignored.
| 705
| 1,191
|
efficiently check if values in one column belong to the threshold defined by two other columns
The goal of this question is efficiently improve the execution time of the problem I will now detail:
I have a df like this one:
df
| | min | max | value |
|---|------|-------|-------|
| 0 | 1.0 | 10.0 | 15 |
| 1 | 50.0 | 100.0 | 20 |
| 2 | 30.0 | 50.0 | 40 |
| 3 | 10.0 | 90.0 | 91 |
| 4 | NaN | NaN | 1000 |
And what I want to check is if the values of the value column are within the threshold defined by the min and max columns.
If min and max columns are equal to Nan then we consider that the value in column value is within the threshold.
To solve this I have created the following code:
In[1]:
def boundary(row):
if row['value'] <= row['min'] or row['value'] >= row['max']:
return 'out of range'
else:
return 'ok'
In[2]:
%%timeit
df["boundary"] = df.apply(lambda row: boundary(row), axis=1)
Out[2]:
959 µs ± 21.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
| | min | max | value | boundary |
| - | ---- | ----- | ----- | ------------ |
| 0 | 1.0 | 10.0 | 15 | out of range |
| 1 | 50.0 | 100.0 | 20 | out of range |
| 2 | 30.0 | 50.0 | 40 | ok |
| 3 | 10.0 | 90.0 | 91 | out of range |
| 4 | NaN | NaN | 1000 | ok |
My question is, is there a less expensive way to solve this problem?
|
60,345,858
|
Split a column in df by another column value
|
<p>In python, I have the following df (headers in first row):</p>
<pre><code>FullName FirstName
'MichaelJordan' 'Michael'
'KobeBryant' 'Kobe'
'LeBronJames' 'LeBron'
</code></pre>
<p>I am trying to split each record in "FullName" based on the value in "FirstName" but am not having luck... </p>
<p>This is what I tried:</p>
<pre><code>df['Names'] = df['FullName'].str.split(df['FirstName'])
</code></pre>
<p>Which produces error:</p>
<pre><code>'Series' objects are mutable, thus they cannot be hashed
</code></pre>
<p>Desired output:</p>
<pre><code>print(df['Names'])
['Michael', 'Jordan']
['Kobe', 'Bryant']
['LeBron', 'James']
</code></pre>
| 60,345,955
| 2020-02-21T20:31:02.223000
| 4
| null | 2
| 75
|
python|pandas
|
<h3><code>str.replace</code></h3>
<pre><code>lastnames = [full.replace(first, '') for full, first in zip(df.FullName, df.FirstName)]
df.assign(LastName=lastnames)
FullName FirstName LastName
0 MichaelJordan Michael Jordan
1 KobeBryant Kobe Bryant
2 LeBronJames LeBron James
</code></pre>
<hr>
<p>Same exact idea but using <code>map</code></p>
<pre><code>df.assign(LastName=[*map(lambda a, b: a.replace(b, ''), df.FullName, df.FirstName)])
FullName FirstName LastName
0 MichaelJordan Michael Jordan
1 KobeBryant Kobe Bryant
2 LeBronJames LeBron James
</code></pre>
| 2020-02-21T20:40:22.217000
| 5
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.divide.html
|
pandas.DataFrame.divide#
pandas.DataFrame.divide#
DataFrame.divide(other, axis='columns', level=None, fill_value=None)[source]#
Get Floating division of dataframe and other, element-wise (binary operator truediv).
Equivalent to dataframe / other, but with support to substitute a fill_value
for missing data in one of the inputs. With reverse version, rtruediv.
Among flexible wrappers (add, sub, mul, div, mod, pow) to
arithmetic operators: +, -, *, /, //, %, **.
Parameters
otherscalar, sequence, Series, dict or DataFrameAny single or multiple element data structure, or list-like object.
str.replace
lastnames = [full.replace(first, '') for full, first in zip(df.FullName, df.FirstName)]
df.assign(LastName=lastnames)
FullName FirstName LastName
0 MichaelJordan Michael Jordan
1 KobeBryant Kobe Bryant
2 LeBronJames LeBron James
Same exact idea but using map
df.assign(LastName=[*map(lambda a, b: a.replace(b, ''), df.FullName, df.FirstName)])
FullName FirstName LastName
0 MichaelJordan Michael Jordan
1 KobeBryant Kobe Bryant
2 LeBronJames LeBron James
axis{0 or ‘index’, 1 or ‘columns’}Whether to compare by the index (0 or ‘index’) or columns.
(1 or ‘columns’). For Series input, axis to match Series index on.
levelint or labelBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valuefloat or None, default NoneFill existing missing (NaN) values, and any new element needed for
successful DataFrame alignment, with this value before computation.
If data in both corresponding DataFrame locations is missing
the result will be missing.
Returns
DataFrameResult of the arithmetic operation.
See also
DataFrame.addAdd DataFrames.
DataFrame.subSubtract DataFrames.
DataFrame.mulMultiply DataFrames.
DataFrame.divDivide DataFrames (float division).
DataFrame.truedivDivide DataFrames (float division).
DataFrame.floordivDivide DataFrames (integer division).
DataFrame.modCalculate modulo (remainder after division).
DataFrame.powCalculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4],
... 'degrees': [360, 180, 360]},
... index=['circle', 'triangle', 'rectangle'])
>>> df
angles degrees
circle 0 360
triangle 3 180
rectangle 4 360
Add a scalar with operator version which return the same
results.
>>> df + 1
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
>>> df.add(1)
angles degrees
circle 1 361
triangle 4 181
rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10)
angles degrees
circle 0.0 36.0
triangle 0.3 18.0
rectangle 0.4 36.0
>>> df.rdiv(10)
angles degrees
circle inf 0.027778
triangle 3.333333 0.055556
rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2]
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub([1, 2], axis='columns')
angles degrees
circle -1 358
triangle 2 178
rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
... axis='index')
angles degrees
circle -1 359
triangle 2 179
rectangle 3 359
Multiply a dictionary by axis.
>>> df.mul({'angles': 0, 'degrees': 2})
angles degrees
circle 0 720
triangle 0 360
rectangle 0 720
>>> df.mul({'circle': 0, 'triangle': 2, 'rectangle': 3}, axis='index')
angles degrees
circle 0 0
triangle 6 360
rectangle 12 1080
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]},
... index=['circle', 'triangle', 'rectangle'])
>>> other
angles
circle 0
triangle 3
rectangle 4
>>> df * other
angles degrees
circle 0 NaN
triangle 9 NaN
rectangle 16 NaN
>>> df.mul(other, fill_value=0)
angles degrees
circle 0 0.0
triangle 9 0.0
rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6],
... 'degrees': [360, 180, 360, 360, 540, 720]},
... index=[['A', 'A', 'A', 'B', 'B', 'B'],
... ['circle', 'triangle', 'rectangle',
... 'square', 'pentagon', 'hexagon']])
>>> df_multindex
angles degrees
A circle 0 360
triangle 3 180
rectangle 4 360
B square 4 360
pentagon 5 540
hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0)
angles degrees
A circle NaN 1.0
triangle 1.0 1.0
rectangle 1.0 1.0
B square 0.0 0.0
pentagon 0.0 0.0
hexagon 0.0 0.0
| 598
| 1,135
|
Split a column in df by another column value
In python, I have the following df (headers in first row):
FullName FirstName
'MichaelJordan' 'Michael'
'KobeBryant' 'Kobe'
'LeBronJames' 'LeBron'
I am trying to split each record in "FullName" based on the value in "FirstName" but am not having luck...
This is what I tried:
df['Names'] = df['FullName'].str.split(df['FirstName'])
Which produces error:
'Series' objects are mutable, thus they cannot be hashed
Desired output:
print(df['Names'])
['Michael', 'Jordan']
['Kobe', 'Bryant']
['LeBron', 'James']
|
62,252,506
|
Drop rows that have same values as column names in Pandas
|
<p>I want drop rows that have same values as column names in Pandas.
I was thinking about making an nested array of my dataframe and looping trough that array and checking if nested array is the same as my df.columns. But maybe there is some faster way?</p>
<pre><code>df = pd.DataFrame({"ColA":[1,3,"ColA",1],
"ColB":[5,1,"ColB",2],
"ColC":[1,5,"ColC",2]})
print(df)
ColA ColB ColC
0 1 5 1
1 3 1 5
2 ColA ColB ColC
3 1 2 2
</code></pre>
<p>And my result should look like:</p>
<pre><code> ColA ColB ColC
0 1 5 1
1 3 1 5
3 1 2 2
</code></pre>
<p>Row 2 should be removed</p>
| 62,252,550
| 2020-06-07T22:29:28.537000
| 1
| null | 0
| 87
|
python|pandas
|
<p>You can pass <code>eq</code> with <code>any</code> (any cell in the row equals its column name) or <code>all</code> (every cell in the row equals its column name):</p>
<pre><code>df[~df.eq(df.columns).any(1)]
ColA ColB ColC
0 1 5 1
1 3 1 5
3 1 2 2
</code></pre>
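A small variation (not part of the original answer) using the keyword form of the axis argument, since positional .any(1) is deprecated in recent pandas:

mask = df.eq(df.columns).any(axis=1)   # rows where any cell equals its column name
df = df[~mask]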
| 2020-06-07T22:34:33.233000
| 5
|
https://pandas.pydata.org/docs/dev/user_guide/merging.html
|
Merge, join, concatenate and compare#
Merge, join, concatenate and compare#
pandas provides various facilities for easily combining together Series or
DataFrame with various kinds of set logic for the indexes
and relational algebra functionality in the case of join / merge-type
operations.
In addition, pandas also provides utilities to compare two Series or DataFrame
and summarize their differences.
Concatenating objects#
The concat() function (in the main pandas namespace) does all of
the heavy lifting of performing concatenation operations along an axis while
You can pass eq with any (any cell in the row equals its column name) or all (every cell in the row equals its column name):
df[~df.eq(df.columns).any(1)]
ColA ColB ColC
0 1 5 1
1 3 1 5
3 1 2 2
performing optional set logic (union or intersection) of the indexes (if any) on
the other axes. Note that I say “if any” because there is only a single possible
axis of concatenation for Series.
Before diving into all of the details of concat and what it can do, here is
a simple example:
In [1]: df1 = pd.DataFrame(
...: {
...: "A": ["A0", "A1", "A2", "A3"],
...: "B": ["B0", "B1", "B2", "B3"],
...: "C": ["C0", "C1", "C2", "C3"],
...: "D": ["D0", "D1", "D2", "D3"],
...: },
...: index=[0, 1, 2, 3],
...: )
...:
In [2]: df2 = pd.DataFrame(
...: {
...: "A": ["A4", "A5", "A6", "A7"],
...: "B": ["B4", "B5", "B6", "B7"],
...: "C": ["C4", "C5", "C6", "C7"],
...: "D": ["D4", "D5", "D6", "D7"],
...: },
...: index=[4, 5, 6, 7],
...: )
...:
In [3]: df3 = pd.DataFrame(
...: {
...: "A": ["A8", "A9", "A10", "A11"],
...: "B": ["B8", "B9", "B10", "B11"],
...: "C": ["C8", "C9", "C10", "C11"],
...: "D": ["D8", "D9", "D10", "D11"],
...: },
...: index=[8, 9, 10, 11],
...: )
...:
In [4]: frames = [df1, df2, df3]
In [5]: result = pd.concat(frames)
Like its sibling function on ndarrays, numpy.concatenate, pandas.concat
takes a list or dict of homogeneously-typed objects and concatenates them with
some configurable handling of “what to do with the other axes”:
pd.concat(
objs,
axis=0,
join="outer",
ignore_index=False,
keys=None,
levels=None,
names=None,
verify_integrity=False,
copy=True,
)
objs : a sequence or mapping of Series or DataFrame objects. If a
dict is passed, the sorted keys will be used as the keys argument, unless
it is passed, in which case the values will be selected (see below). Any None
objects will be dropped silently unless they are all None in which case a
ValueError will be raised.
axis : {0, 1, …}, default 0. The axis to concatenate along.
join : {‘inner’, ‘outer’}, default ‘outer’. How to handle indexes on
other axis(es). Outer for union and inner for intersection.
ignore_index : boolean, default False. If True, do not use the index
values on the concatenation axis. The resulting axis will be labeled 0, …,
n - 1. This is useful if you are concatenating objects where the
concatenation axis does not have meaningful indexing information. Note
the index values on the other axes are still respected in the join.
keys : sequence, default None. Construct hierarchical index using the
passed keys as the outermost level. If multiple levels passed, should
contain tuples.
levels : list of sequences, default None. Specific levels (unique values)
to use for constructing a MultiIndex. Otherwise they will be inferred from the
keys.
names : list, default None. Names for the levels in the resulting
hierarchical index.
verify_integrity : boolean, default False. Check whether the new
concatenated axis contains duplicates. This can be very expensive relative
to the actual data concatenation.
copy : boolean, default True. If False, do not copy data unnecessarily.
Without a little bit of context many of these arguments don’t make much sense.
Let’s revisit the above example. Suppose we wanted to associate specific keys
with each of the pieces of the chopped up DataFrame. We can do this using the
keys argument:
In [6]: result = pd.concat(frames, keys=["x", "y", "z"])
As you can see (if you’ve read the rest of the documentation), the resulting
object’s index is hierarchical. This
means that we can now select out each chunk by key:
In [7]: result.loc["y"]
Out[7]:
A B C D
4 A4 B4 C4 D4
5 A5 B5 C5 D5
6 A6 B6 C6 D6
7 A7 B7 C7 D7
It’s not a stretch to see how this can be very useful. More detail on this
functionality below.
Note
It is worth noting that concat() makes a full copy of the data, and that constantly
reusing this function can create a significant performance hit. If you need
to use the operation over several datasets, use a list comprehension.
frames = [ process_your_file(f) for f in files ]
result = pd.concat(frames)
Note
When concatenating DataFrames with named axes, pandas will attempt to preserve
these index/column names whenever possible. In the case where all inputs share a
common name, this name will be assigned to the result. When the input names do
not all agree, the result will be unnamed. The same is true for MultiIndex,
but the logic is applied separately on a level-by-level basis.
Set logic on the other axes#
When gluing together multiple DataFrames, you have a choice of how to handle
the other axes (other than the one being concatenated). This can be done in
the following two ways:
Take the union of them all, join='outer'. This is the default
option as it results in zero information loss.
Take the intersection, join='inner'.
Here is an example of each of these methods. First, the default join='outer'
behavior:
In [8]: df4 = pd.DataFrame(
...: {
...: "B": ["B2", "B3", "B6", "B7"],
...: "D": ["D2", "D3", "D6", "D7"],
...: "F": ["F2", "F3", "F6", "F7"],
...: },
...: index=[2, 3, 6, 7],
...: )
...:
In [9]: result = pd.concat([df1, df4], axis=1)
Here is the same thing with join='inner':
In [10]: result = pd.concat([df1, df4], axis=1, join="inner")
Lastly, suppose we just wanted to reuse the exact index from the original
DataFrame:
In [11]: result = pd.concat([df1, df4], axis=1).reindex(df1.index)
Similarly, we could index before the concatenation:
In [12]: pd.concat([df1, df4.reindex(df1.index)], axis=1)
Out[12]:
A B C D B D F
0 A0 B0 C0 D0 NaN NaN NaN
1 A1 B1 C1 D1 NaN NaN NaN
2 A2 B2 C2 D2 B2 D2 F2
3 A3 B3 C3 D3 B3 D3 F3
Ignoring indexes on the concatenation axis#
For DataFrame objects which don’t have a meaningful index, you may wish
to append them and ignore the fact that they may have overlapping indexes. To
do this, use the ignore_index argument:
In [13]: result = pd.concat([df1, df4], ignore_index=True, sort=False)
Concatenating with mixed ndims#
You can concatenate a mix of Series and DataFrame objects. The
Series will be transformed to DataFrame with the column name as
the name of the Series.
In [14]: s1 = pd.Series(["X0", "X1", "X2", "X3"], name="X")
In [15]: result = pd.concat([df1, s1], axis=1)
Note
Since we’re concatenating a Series to a DataFrame, we could have
achieved the same result with DataFrame.assign(). To concatenate an
arbitrary number of pandas objects (DataFrame or Series), use
concat.
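As a small sketch of that equivalence using the objects defined above (assuming the indexes align, which they do here):
via_concat = pd.concat([df1, s1], axis=1)
via_assign = df1.assign(X=s1)    # the column name comes from the keyword, not the Series name
via_concat.equals(via_assign)    # True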
If unnamed Series are passed they will be numbered consecutively.
In [16]: s2 = pd.Series(["_0", "_1", "_2", "_3"])
In [17]: result = pd.concat([df1, s2, s2, s2], axis=1)
Passing ignore_index=True will drop all name references.
In [18]: result = pd.concat([df1, s1], axis=1, ignore_index=True)
More concatenating with group keys#
A fairly common use of the keys argument is to override the column names
when creating a new DataFrame based on existing Series.
Notice how the default behaviour is to let the resulting DataFrame
inherit the parent Series’ name, when one exists.
In [19]: s3 = pd.Series([0, 1, 2, 3], name="foo")
In [20]: s4 = pd.Series([0, 1, 2, 3])
In [21]: s5 = pd.Series([0, 1, 4, 5])
In [22]: pd.concat([s3, s4, s5], axis=1)
Out[22]:
foo 0 1
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Through the keys argument we can override the existing column names.
In [23]: pd.concat([s3, s4, s5], axis=1, keys=["red", "blue", "yellow"])
Out[23]:
red blue yellow
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Let’s consider a variation of the very first example presented:
In [24]: result = pd.concat(frames, keys=["x", "y", "z"])
You can also pass a dict to concat in which case the dict keys will be used
for the keys argument (unless other keys are specified):
In [25]: pieces = {"x": df1, "y": df2, "z": df3}
In [26]: result = pd.concat(pieces)
In [27]: result = pd.concat(pieces, keys=["z", "y"])
The MultiIndex created has levels that are constructed from the passed keys and
the index of the DataFrame pieces:
In [28]: result.index.levels
Out[28]: FrozenList([['z', 'y'], [4, 5, 6, 7, 8, 9, 10, 11]])
If you wish to specify other levels (as will occasionally be the case), you can
do so using the levels argument:
In [29]: result = pd.concat(
....: pieces, keys=["x", "y", "z"], levels=[["z", "y", "x", "w"]], names=["group_key"]
....: )
....:
In [30]: result.index.levels
Out[30]: FrozenList([['z', 'y', 'x', 'w'], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]])
This is fairly esoteric, but it is actually necessary for implementing things
like GroupBy where the order of a categorical variable is meaningful.
Appending rows to a DataFrame#
If you have a series that you want to append as a single row to a DataFrame, you can convert the row into a
DataFrame and use concat
In [31]: s2 = pd.Series(["X0", "X1", "X2", "X3"], index=["A", "B", "C", "D"])
In [32]: result = pd.concat([df1, s2.to_frame().T], ignore_index=True)
You should use ignore_index with this method to instruct DataFrame to
discard its index. If you wish to preserve the index, you should construct an
appropriately-indexed DataFrame and append or concatenate those objects.
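One way to keep a meaningful row label, sketched here with an invented label "r4", is to name the Series before transposing it, since the Series name becomes the row label:
s_row = pd.Series(["X0", "X1", "X2", "X3"], index=["A", "B", "C", "D"], name="r4")
pd.concat([df1, s_row.to_frame().T])    # the appended row keeps the label "r4"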
Database-style DataFrame or named Series joining/merging#
pandas has full-featured, high performance in-memory join operations
idiomatically very similar to relational databases like SQL. These methods
perform significantly better (in some cases well over an order of magnitude
better) than other open source implementations (like base::merge.data.frame
in R). The reason for this is careful algorithmic design and the internal layout
of the data in DataFrame.
See the cookbook for some advanced strategies.
Users who are familiar with SQL but new to pandas might be interested in a
comparison with SQL.
pandas provides a single function, merge(), as the entry point for
all standard database join operations between DataFrame or named Series objects:
pd.merge(
left,
right,
how="inner",
on=None,
left_on=None,
right_on=None,
left_index=False,
right_index=False,
sort=True,
suffixes=("_x", "_y"),
copy=True,
indicator=False,
validate=None,
)
left: A DataFrame or named Series object.
right: Another DataFrame or named Series object.
on: Column or index level names to join on. Must be found in both the left
and right DataFrame and/or Series objects. If not passed and left_index and
right_index are False, the intersection of the columns in the
DataFrames and/or Series will be inferred to be the join keys.
left_on: Columns or index levels from the left DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
right_on: Columns or index levels from the right DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
left_index: If True, use the index (row labels) from the left
DataFrame or Series as its join key(s). In the case of a DataFrame or Series with a MultiIndex
(hierarchical), the number of levels must match the number of join keys
from the right DataFrame or Series.
right_index: Same usage as left_index for the right DataFrame or Series
how: One of 'left', 'right', 'outer', 'inner', 'cross'. Defaults
to inner. See below for more detailed description of each method.
sort: Sort the result DataFrame by the join keys in lexicographical
order. Defaults to True, setting to False will improve performance
substantially in many cases.
suffixes: A tuple of string suffixes to apply to overlapping
columns. Defaults to ('_x', '_y').
copy: Always copy data (default True) from the passed DataFrame or named Series
objects, even when reindexing is not necessary. Cannot be avoided in many
cases but may improve performance / memory usage. The cases where copying
can be avoided are somewhat pathological but this option is provided
nonetheless.
indicator: Add a column to the output DataFrame called _merge
with information on the source of each row. _merge is Categorical-type
and takes on a value of left_only for observations whose merge key
only appears in 'left' DataFrame or Series, right_only for observations whose
merge key only appears in 'right' DataFrame or Series, and both if the
observation’s merge key is found in both.
validate : string, default None.
If specified, checks if merge is of specified type.
“one_to_one” or “1:1”: checks if merge keys are unique in both
left and right datasets.
“one_to_many” or “1:m”: checks if merge keys are unique in left
dataset.
“many_to_one” or “m:1”: checks if merge keys are unique in right
dataset.
“many_to_many” or “m:m”: allowed, but does not result in checks.
Note
Support for specifying index levels as the on, left_on, and
right_on parameters was added in version 0.23.0.
Support for merging named Series objects was added in version 0.24.0.
The return type will be the same as left. If left is a DataFrame or named Series
and right is a subclass of DataFrame, the return type will still be DataFrame.
merge is a function in the pandas namespace, and it is also available as a
DataFrame instance method merge(), with the calling
DataFrame being implicitly considered the left object in the join.
The related join() method, uses merge internally for the
index-on-index (by default) and column(s)-on-index join. If you are joining on
index only, you may wish to use DataFrame.join to save yourself some typing.
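Loosely speaking, and assuming two frames that share an index and have no overlapping column names, the following spellings give the same result (a sketch, not a definitive equivalence):
pd.merge(left, right, left_index=True, right_index=True, how="inner")
left.merge(right, left_index=True, right_index=True, how="inner")    # merge as an instance method
left.join(right, how="inner")                                        # shortest form for index-on-index joins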
Brief primer on merge methods (relational algebra)#
Experienced users of relational databases like SQL will be familiar with the
terminology used to describe join operations between two SQL-table like
structures (DataFrame objects). There are several cases to consider which
are very important to understand:
one-to-one joins: for example when joining two DataFrame objects on
their indexes (which must contain unique values).
many-to-one joins: for example when joining an index (unique) to one or
more columns in a different DataFrame.
many-to-many joins: joining columns on columns.
Note
When joining columns on columns (potentially a many-to-many join), any
indexes on the passed DataFrame objects will be discarded.
It is worth spending some time understanding the result of the many-to-many
join case. In SQL / standard relational algebra, if a key combination appears
more than once in both tables, the resulting table will have the Cartesian
product of the associated data. Here is a very basic example with one unique
key combination:
In [33]: left = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [34]: right = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [35]: result = pd.merge(left, right, on="key")
Here is a more complicated example with multiple join keys. Only the keys
appearing in left and right are present (the intersection), since
how='inner' by default.
In [36]: left = pd.DataFrame(
....: {
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [37]: right = pd.DataFrame(
....: {
....: "key1": ["K0", "K1", "K1", "K2"],
....: "key2": ["K0", "K0", "K0", "K0"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [38]: result = pd.merge(left, right, on=["key1", "key2"])
The how argument to merge specifies how to determine which keys are to
be included in the resulting table. If a key combination does not appear in
either the left or right tables, the values in the joined table will be
NA. Here is a summary of the how options and their SQL equivalent names:
Merge method
SQL Join Name
Description
left
LEFT OUTER JOIN
Use keys from left frame only
right
RIGHT OUTER JOIN
Use keys from right frame only
outer
FULL OUTER JOIN
Use union of keys from both frames
inner
INNER JOIN
Use intersection of keys from both frames
cross
CROSS JOIN
Create the cartesian product of rows of both frames
In [39]: result = pd.merge(left, right, how="left", on=["key1", "key2"])
In [40]: result = pd.merge(left, right, how="right", on=["key1", "key2"])
In [41]: result = pd.merge(left, right, how="outer", on=["key1", "key2"])
In [42]: result = pd.merge(left, right, how="inner", on=["key1", "key2"])
In [43]: result = pd.merge(left, right, how="cross")
You can merge a multi-indexed Series and a DataFrame, if the names of
the MultiIndex correspond to the columns from the DataFrame. Transform
the Series to a DataFrame using Series.reset_index() before merging,
as shown in the following example.
In [44]: df = pd.DataFrame({"Let": ["A", "B", "C"], "Num": [1, 2, 3]})
In [45]: df
Out[45]:
Let Num
0 A 1
1 B 2
2 C 3
In [46]: ser = pd.Series(
....: ["a", "b", "c", "d", "e", "f"],
....: index=pd.MultiIndex.from_arrays(
....: [["A", "B", "C"] * 2, [1, 2, 3, 4, 5, 6]], names=["Let", "Num"]
....: ),
....: )
....:
In [47]: ser
Out[47]:
Let Num
A 1 a
B 2 b
C 3 c
A 4 d
B 5 e
C 6 f
dtype: object
In [48]: pd.merge(df, ser.reset_index(), on=["Let", "Num"])
Out[48]:
Let Num 0
0 A 1 a
1 B 2 b
2 C 3 c
Here is another example with duplicate join keys in DataFrames:
In [49]: left = pd.DataFrame({"A": [1, 2], "B": [2, 2]})
In [50]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [51]: result = pd.merge(left, right, on="B", how="outer")
Warning
Joining / merging on duplicate keys can cause a returned frame that is the multiplication of the row dimensions, which may result in memory overflow. It is the user’s responsibility to manage duplicate values in keys before joining large DataFrames.
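A tiny sketch of the blow-up, with invented values: two matching rows on the left and three on the right produce 2 x 3 = 6 rows.
dup_left = pd.DataFrame({"key": ["K0", "K0"], "lv": [1, 2]})
dup_right = pd.DataFrame({"key": ["K0", "K0", "K0"], "rv": [10, 20, 30]})
len(pd.merge(dup_left, dup_right, on="key"))    # 6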
Checking for duplicate keys#
Users can use the validate argument to automatically check whether there
are unexpected duplicates in their merge keys. Key uniqueness is checked before
merge operations and so should protect against memory overflows. Checking key
uniqueness is also a good way to ensure user data structures are as expected.
In the following example, there are duplicate values of B in the right
DataFrame. As this is not a one-to-one merge – as specified in the
validate argument – an exception will be raised.
In [52]: left = pd.DataFrame({"A": [1, 2], "B": [1, 2]})
In [53]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [53]: result = pd.merge(left, right, on="B", how="outer", validate="one_to_one")
...
MergeError: Merge keys are not unique in right dataset; not a one-to-one merge
If the user is aware of the duplicates in the right DataFrame but wants to
ensure there are no duplicates in the left DataFrame, one can use the
validate='one_to_many' argument instead, which will not raise an exception.
In [54]: pd.merge(left, right, on="B", how="outer", validate="one_to_many")
Out[54]:
A_x B A_y
0 1 1 NaN
1 2 2 4.0
2 2 2 5.0
3 2 2 6.0
The merge indicator#
merge() accepts the argument indicator. If True, a
Categorical-type column called _merge will be added to the output object
that takes on values:
Observation Origin
_merge value
Merge key only in 'left' frame
left_only
Merge key only in 'right' frame
right_only
Merge key in both frames
both
In [55]: df1 = pd.DataFrame({"col1": [0, 1], "col_left": ["a", "b"]})
In [56]: df2 = pd.DataFrame({"col1": [1, 2, 2], "col_right": [2, 2, 2]})
In [57]: pd.merge(df1, df2, on="col1", how="outer", indicator=True)
Out[57]:
col1 col_left col_right _merge
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
The indicator argument will also accept string arguments, in which case the indicator function will use the value of the passed string as the name for the indicator column.
In [58]: pd.merge(df1, df2, on="col1", how="outer", indicator="indicator_column")
Out[58]:
col1 col_left col_right indicator_column
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
Merge dtypes#
Merging will preserve the dtype of the join keys.
In [59]: left = pd.DataFrame({"key": [1], "v1": [10]})
In [60]: left
Out[60]:
key v1
0 1 10
In [61]: right = pd.DataFrame({"key": [1, 2], "v1": [20, 30]})
In [62]: right
Out[62]:
key v1
0 1 20
1 2 30
We are able to preserve the join keys:
In [63]: pd.merge(left, right, how="outer")
Out[63]:
key v1
0 1 10
1 1 20
2 2 30
In [64]: pd.merge(left, right, how="outer").dtypes
Out[64]:
key int64
v1 int64
dtype: object
Of course if you have missing values that are introduced, then the
resulting dtype will be upcast.
In [65]: pd.merge(left, right, how="outer", on="key")
Out[65]:
key v1_x v1_y
0 1 10.0 20
1 2 NaN 30
In [66]: pd.merge(left, right, how="outer", on="key").dtypes
Out[66]:
key int64
v1_x float64
v1_y int64
dtype: object
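If the upcast to float is undesirable, one possible workaround (a sketch; behaviour may depend on your pandas version) is to cast the value columns to the nullable Int64 extension dtype first, so missing entries become <NA> rather than forcing a float conversion:
left_n = left.astype({"v1": "Int64"})
right_n = right.astype({"v1": "Int64"})
pd.merge(left_n, right_n, how="outer", on="key").dtypes    # v1_x and v1_y stay Int64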
Merging will preserve category dtypes of the mergands. See also the section on categoricals.
The left frame.
In [67]: from pandas.api.types import CategoricalDtype
In [68]: X = pd.Series(np.random.choice(["foo", "bar"], size=(10,)))
In [69]: X = X.astype(CategoricalDtype(categories=["foo", "bar"]))
In [70]: left = pd.DataFrame(
....: {"X": X, "Y": np.random.choice(["one", "two", "three"], size=(10,))}
....: )
....:
In [71]: left
Out[71]:
X Y
0 bar one
1 foo one
2 foo three
3 bar three
4 foo one
5 bar one
6 bar three
7 bar three
8 bar three
9 foo three
In [72]: left.dtypes
Out[72]:
X category
Y object
dtype: object
The right frame.
In [73]: right = pd.DataFrame(
....: {
....: "X": pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar"])),
....: "Z": [1, 2],
....: }
....: )
....:
In [74]: right
Out[74]:
X Z
0 foo 1
1 bar 2
In [75]: right.dtypes
Out[75]:
X category
Z int64
dtype: object
The merged result:
In [76]: result = pd.merge(left, right, how="outer")
In [77]: result
Out[77]:
X Y Z
0 bar one 2
1 bar three 2
2 bar one 2
3 bar three 2
4 bar three 2
5 bar three 2
6 foo one 1
7 foo three 1
8 foo one 1
9 foo three 1
In [78]: result.dtypes
Out[78]:
X category
Y object
Z int64
dtype: object
Note
The category dtypes must be exactly the same, meaning the same categories and the ordered attribute.
Otherwise the result will coerce to the categories’ dtype.
Note
Merging on category dtypes that are the same can be quite performant compared to object dtype merging.
Joining on index#
DataFrame.join() is a convenient method for combining the columns of two
potentially differently-indexed DataFrames into a single result
DataFrame. Here is a very basic example:
In [79]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=["K0", "K1", "K2"]
....: )
....:
In [80]: right = pd.DataFrame(
....: {"C": ["C0", "C2", "C3"], "D": ["D0", "D2", "D3"]}, index=["K0", "K2", "K3"]
....: )
....:
In [81]: result = left.join(right)
In [82]: result = left.join(right, how="outer")
The same as above, but with how='inner'.
In [83]: result = left.join(right, how="inner")
The data alignment here is on the indexes (row labels). This same behavior can
be achieved using merge plus additional arguments instructing it to use the
indexes:
In [84]: result = pd.merge(left, right, left_index=True, right_index=True, how="outer")
In [85]: result = pd.merge(left, right, left_index=True, right_index=True, how="inner")
Joining key columns on an index#
join() takes an optional on argument which may be a column
or multiple column names, which specifies that the passed DataFrame is to be
aligned on that column in the DataFrame. These two function calls are
completely equivalent:
left.join(right, on=key_or_keys)
pd.merge(
left, right, left_on=key_or_keys, right_index=True, how="left", sort=False
)
Obviously you can choose whichever form you find more convenient. For
many-to-one joins (where one of the DataFrames is already indexed by the
join key), using join may be more convenient. Here is a simple example:
In [86]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [87]: right = pd.DataFrame({"C": ["C0", "C1"], "D": ["D0", "D1"]}, index=["K0", "K1"])
In [88]: result = left.join(right, on="key")
In [89]: result = pd.merge(
....: left, right, left_on="key", right_index=True, how="left", sort=False
....: )
....:
To join on multiple keys, the passed DataFrame must have a MultiIndex:
In [90]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [91]: index = pd.MultiIndex.from_tuples(
....: [("K0", "K0"), ("K1", "K0"), ("K2", "K0"), ("K2", "K1")]
....: )
....:
In [92]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=index
....: )
....:
Now this can be joined by passing the two key column names:
In [93]: result = left.join(right, on=["key1", "key2"])
The default for DataFrame.join is to perform a left join (essentially a
“VLOOKUP” operation, for Excel users), which uses only the keys found in the
calling DataFrame. Other join types, for example inner join, can be just as
easily performed:
In [94]: result = left.join(right, on=["key1", "key2"], how="inner")
As you can see, this drops any rows where there was no match.
Joining a single Index to a MultiIndex#
You can join a singly-indexed DataFrame with a level of a MultiIndexed DataFrame.
The level will match on the name of the index of the singly-indexed frame against
a level name of the MultiIndexed frame.
In [95]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]},
....: index=pd.Index(["K0", "K1", "K2"], name="key"),
....: )
....:
In [96]: index = pd.MultiIndex.from_tuples(
....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")],
....: names=["key", "Y"],
....: )
....:
In [97]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]},
....: index=index,
....: )
....:
In [98]: result = left.join(right, how="inner")
This is equivalent to, but less verbose and more memory efficient / faster than, the merge shown below.
In [99]: result = pd.merge(
....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
....: ).set_index(["key","Y"])
....:
Joining with two MultiIndexes#
This is supported in a limited way, provided that the index for the right
argument is completely used in the join, and is a subset of the indices in
the left argument, as in this example:
In [100]: leftindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy"), [1, 2]], names=["abc", "xy", "num"]
.....: )
.....:
In [101]: left = pd.DataFrame({"v1": range(12)}, index=leftindex)
In [102]: left
Out[102]:
v1
abc xy num
a x 1 0
2 1
y 1 2
2 3
b x 1 4
2 5
y 1 6
2 7
c x 1 8
2 9
y 1 10
2 11
In [103]: rightindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy")], names=["abc", "xy"]
.....: )
.....:
In [104]: right = pd.DataFrame({"v2": [100 * i for i in range(1, 7)]}, index=rightindex)
In [105]: right
Out[105]:
v2
abc xy
a x 100
y 200
b x 300
y 400
c x 500
y 600
In [106]: left.join(right, on=["abc", "xy"], how="inner")
Out[106]:
v1 v2
abc xy num
a x 1 0 100
2 1 100
y 1 2 200
2 3 200
b x 1 4 300
2 5 300
y 1 6 400
2 7 400
c x 1 8 500
2 9 500
y 1 10 600
2 11 600
If that condition is not satisfied, a join with two multi-indexes can be
done using the following code.
In [107]: leftindex = pd.MultiIndex.from_tuples(
.....: [("K0", "X0"), ("K0", "X1"), ("K1", "X2")], names=["key", "X"]
.....: )
.....:
In [108]: left = pd.DataFrame(
.....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=leftindex
.....: )
.....:
In [109]: rightindex = pd.MultiIndex.from_tuples(
.....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")], names=["key", "Y"]
.....: )
.....:
In [110]: right = pd.DataFrame(
.....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=rightindex
.....: )
.....:
In [111]: result = pd.merge(
.....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
.....: ).set_index(["key", "X", "Y"])
.....:
Merging on a combination of columns and index levels#
Strings passed as the on, left_on, and right_on parameters
may refer to either column names or index level names. This enables merging
DataFrame instances on a combination of index levels and columns without
resetting indexes.
In [112]: left_index = pd.Index(["K0", "K0", "K1", "K2"], name="key1")
In [113]: left = pd.DataFrame(
.....: {
.....: "A": ["A0", "A1", "A2", "A3"],
.....: "B": ["B0", "B1", "B2", "B3"],
.....: "key2": ["K0", "K1", "K0", "K1"],
.....: },
.....: index=left_index,
.....: )
.....:
In [114]: right_index = pd.Index(["K0", "K1", "K2", "K2"], name="key1")
In [115]: right = pd.DataFrame(
.....: {
.....: "C": ["C0", "C1", "C2", "C3"],
.....: "D": ["D0", "D1", "D2", "D3"],
.....: "key2": ["K0", "K0", "K0", "K1"],
.....: },
.....: index=right_index,
.....: )
.....:
In [116]: result = left.merge(right, on=["key1", "key2"])
Note
When DataFrames are merged on a string that matches an index level in both
frames, the index level is preserved as an index level in the resulting
DataFrame.
Note
When DataFrames are merged using only some of the levels of a MultiIndex,
the extra levels will be dropped from the resulting merge. In order to
preserve those levels, use reset_index on those level names to move
those levels to columns prior to doing the merge.
Note
If a string matches both a column name and an index level name, then a
warning is issued and the column takes precedence. This will result in an
ambiguity error in a future version.
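If a frame ever has both a column and an index level called, say, "key1", one way to sidestep the warning (a generic sketch with hypothetical names, not tied to the frames above) is to rename the index level before merging:
df_fixed = df_ambiguous.rename_axis(index={"key1": "key1_level"})
df_fixed.merge(other, on="key1")    # "key1" now unambiguously refers to the column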
Overlapping value columns#
The merge suffixes argument takes a tuple or list of strings to append to
overlapping column names in the input DataFrames to disambiguate the result
columns:
In [117]: left = pd.DataFrame({"k": ["K0", "K1", "K2"], "v": [1, 2, 3]})
In [118]: right = pd.DataFrame({"k": ["K0", "K0", "K3"], "v": [4, 5, 6]})
In [119]: result = pd.merge(left, right, on="k")
In [120]: result = pd.merge(left, right, on="k", suffixes=("_l", "_r"))
DataFrame.join() has lsuffix and rsuffix arguments which behave
similarly.
In [121]: left = left.set_index("k")
In [122]: right = right.set_index("k")
In [123]: result = left.join(right, lsuffix="_l", rsuffix="_r")
Joining multiple DataFrames#
A list or tuple of DataFrames can also be passed to join()
to join them together on their indexes.
In [124]: right2 = pd.DataFrame({"v": [7, 8, 9]}, index=["K1", "K1", "K2"])
In [125]: result = left.join([right, right2])
Merging together values within Series or DataFrame columns#
Another fairly common situation is to have two like-indexed (or similarly
indexed) Series or DataFrame objects and wanting to “patch” values in
one object from values for matching indices in the other. Here is an example:
In [126]: df1 = pd.DataFrame(
.....: [[np.nan, 3.0, 5.0], [-4.6, np.nan, np.nan], [np.nan, 7.0, np.nan]]
.....: )
.....:
In [127]: df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5.0, 1.6, 4]], index=[1, 2])
For this, use the combine_first() method:
In [128]: result = df1.combine_first(df2)
Note that this method only takes values from the right DataFrame if they are
missing in the left DataFrame. A related method, update(),
alters non-NA values in place:
In [129]: df1.update(df2)
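A rough way to remember the difference on the frames above:
filled = df1.combine_first(df2)    # only df1's NaN slots are taken from df2
df1.update(df2)                    # in place: every non-NA value in df2 overwrites df1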
Timeseries friendly merging#
Merging ordered data#
A merge_ordered() function allows combining time series and other
ordered data. In particular it has an optional fill_method keyword to
fill/interpolate missing data:
In [130]: left = pd.DataFrame(
.....: {"k": ["K0", "K1", "K1", "K2"], "lv": [1, 2, 3, 4], "s": ["a", "b", "c", "d"]}
.....: )
.....:
In [131]: right = pd.DataFrame({"k": ["K1", "K2", "K4"], "rv": [1, 2, 3]})
In [132]: pd.merge_ordered(left, right, fill_method="ffill", left_by="s")
Out[132]:
k lv s rv
0 K0 1.0 a NaN
1 K1 1.0 a 1.0
2 K2 1.0 a 2.0
3 K4 1.0 a 3.0
4 K1 2.0 b 1.0
5 K2 2.0 b 2.0
6 K4 2.0 b 3.0
7 K1 3.0 c 1.0
8 K2 3.0 c 2.0
9 K4 3.0 c 3.0
10 K1 NaN d 1.0
11 K2 4.0 d 2.0
12 K4 4.0 d 3.0
Merging asof#
A merge_asof() is similar to an ordered left-join except that we match on
nearest key rather than equal keys. For each row in the left DataFrame,
we select the last row in the right DataFrame whose on key is less
than the left’s key. Both DataFrames must be sorted by the key.
Optionally an asof merge can perform a group-wise merge. This matches the
by key equally, in addition to the nearest match on the on key.
For example, we might have trades and quotes and we want to asof
merge them.
In [133]: trades = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.038",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: ]
.....: ),
.....: "ticker": ["MSFT", "MSFT", "GOOG", "GOOG", "AAPL"],
.....: "price": [51.95, 51.95, 720.77, 720.92, 98.00],
.....: "quantity": [75, 155, 100, 100, 100],
.....: },
.....: columns=["time", "ticker", "price", "quantity"],
.....: )
.....:
In [134]: quotes = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.030",
.....: "20160525 13:30:00.041",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.049",
.....: "20160525 13:30:00.072",
.....: "20160525 13:30:00.075",
.....: ]
.....: ),
.....: "ticker": ["GOOG", "MSFT", "MSFT", "MSFT", "GOOG", "AAPL", "GOOG", "MSFT"],
.....: "bid": [720.50, 51.95, 51.97, 51.99, 720.50, 97.99, 720.50, 52.01],
.....: "ask": [720.93, 51.96, 51.98, 52.00, 720.93, 98.01, 720.88, 52.03],
.....: },
.....: columns=["time", "ticker", "bid", "ask"],
.....: )
.....:
In [135]: trades
Out[135]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
In [136]: quotes
Out[136]:
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
By default we are taking the asof of the quotes.
In [137]: pd.merge_asof(trades, quotes, on="time", by="ticker")
Out[137]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 2ms between the quote time and the trade time.
In [138]: pd.merge_asof(trades, quotes, on="time", by="ticker", tolerance=pd.Timedelta("2ms"))
Out[138]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 10ms between the quote time and the trade time and we
exclude exact matches on time. Note that though we exclude the exact matches
(of the quotes), prior quotes do propagate to that point in time.
In [139]: pd.merge_asof(
.....: trades,
.....: quotes,
.....: on="time",
.....: by="ticker",
.....: tolerance=pd.Timedelta("10ms"),
.....: allow_exact_matches=False,
.....: )
.....:
Out[139]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN
3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
Comparing objects#
The DataFrame.compare() and Series.compare() methods allow you to
compare two DataFrame or Series objects, respectively, and summarize their differences.
This feature was added in version 1.1.0.
For example, you might want to compare two DataFrame and stack their differences
side by side.
In [140]: df = pd.DataFrame(
.....: {
.....: "col1": ["a", "a", "b", "b", "a"],
.....: "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
.....: "col3": [1.0, 2.0, 3.0, 4.0, 5.0],
.....: },
.....: columns=["col1", "col2", "col3"],
.....: )
.....:
In [141]: df
Out[141]:
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
In [142]: df2 = df.copy()
In [143]: df2.loc[0, "col1"] = "c"
In [144]: df2.loc[2, "col3"] = 4.0
In [145]: df2
Out[145]:
col1 col2 col3
0 c 1.0 1.0
1 a 2.0 2.0
2 b 3.0 4.0
3 b NaN 4.0
4 a 5.0 5.0
In [146]: df.compare(df2)
Out[146]:
col1 col3
self other self other
0 a c NaN NaN
2 NaN NaN 3.0 4.0
By default, if two corresponding values are equal, they will be shown as NaN.
Furthermore, if all values in an entire row / column are equal, that row / column will be
omitted from the result. The remaining differences will be aligned on columns.
If you wish, you may choose to stack the differences on rows.
In [147]: df.compare(df2, align_axis=0)
Out[147]:
col1 col3
0 self a NaN
other c NaN
2 self NaN 3.0
other NaN 4.0
If you wish to keep all original rows and columns, set keep_shape argument
to True.
In [148]: df.compare(df2, keep_shape=True)
Out[148]:
col1 col2 col3
self other self other self other
0 a c NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3.0 4.0
3 NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN
You may also keep all the original values even if they are equal.
In [149]: df.compare(df2, keep_shape=True, keep_equal=True)
Out[149]:
col1 col2 col3
self other self other self other
0 a c 1.0 1.0 1.0 1.0
1 a a 2.0 2.0 2.0 2.0
2 b b 3.0 3.0 3.0 4.0
3 b b NaN NaN 4.0 4.0
4 a a 5.0 5.0 5.0 5.0
| 571
| 781
|
Drop rows that have same values as column names in Pandas
I want to drop rows that have the same values as the column names in Pandas.
I was thinking about making a nested array of my dataframe and looping through that array, checking whether each nested array is the same as my df.columns. But maybe there is some faster way?
df = pd.DataFrame({"ColA":[1,3,"ColA",1],
"ColB":[5,1,"ColB",2],
"ColC":[1,5,"ColC",2]})
print(df)
ColA ColB ColC
0 1 5 1
1 3 1 5
2 ColA ColB ColC
3 1 2 2
And my result should look like:
ColA ColB ColC
0 1 5 1
1 3 1 5
3 1 2 2
Row 2 should be removed
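One possible vectorized approach (a sketch, not part of the original post) is to compare each row against the column labels and keep only the rows that do not match everywhere:
mask = df.eq(df.columns).all(axis=1)    # True for rows whose every cell equals its column name
df = df[~mask]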
|
60,850,596
|
Calculate average of every n rows from a csv file
|
<p>I have a csv file that has 25000 rows. I want to put the average of every 30 rows in another csv file.</p>
<p>I've given an example with 9 rows as below and the new csv file has 3 rows <strong>(3, 1, 2)</strong>:</p>
<pre><code>| H |
========
| 1 |---\
| 3 | |--->| 3 |
| 5 |---/
| -1 |---\
| 3 | |--->| 1 |
| 1 |---/
| 0 |---\
| 5 | |--->| 2 |
| 1 |---/
</code></pre>
<p><strong>What I did:</strong></p>
<pre><code>import numpy as np
import pandas as pd
m_path = "file.csv"
m_df = pd.read_csv(m_path, usecols=['Col-01'])
m_arr = np.array([])
temp = m_df.to_numpy()
step = 30
for i in range(1, 25000, step):
    m_arr = np.append(m_arr, np.array([np.average(temp[i:i + step])]))
data = np.array(m_arr)[np.newaxis]
m_df = pd.DataFrame({'Column1': data[0, :]})
m_df.to_csv('AVG.csv')
</code></pre>
<p>This works well but <strong>Is there any other option to do this?</strong></p>
| 60,850,623
| 2020-03-25T14:11:39.410000
| 3
| 1
| 9
| 2,549
|
python|pandas
|
<p>You can use integer division by <code>step</code> for consecutive groups and pass to <code>groupby</code> for aggregate <code>mean</code>:</p>
<pre><code>step = 30
m_df = pd.read_csv(m_path, usecols=['Col-01'])
df = m_df.groupby(m_df.index // step).mean()
</code></pre>
<p>Or:</p>
<pre><code>df = m_df.groupby(np.arange(len(m_df)) // step).mean()
</code></pre>
<p>Sample data:</p>
<pre><code>step = 3
df = m_df.groupby(m_df.index // step).mean()
print (df)
H
0 3
1 1
2 2
</code></pre>
| 2020-03-25T14:12:56.703000
| 8
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
You can use integer division by step for consecutive groups and pass to groupby for aggregate mean:
step = 30
m_df = pd.read_csv(m_path, usecols=['Col-01'])
df = m_df.groupby(m_df.index // step).mean()
Or:
df = m_df.groupby(np.arange(len(m_df)) // step).mean()
Sample data:
step = 3
df = m_df.groupby(m_df.index // step).mean()
print (df)
H
0 3
1 1
2 2
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures is generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified in many different ways (a brief sketch follows this list):
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
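A brief sketch of two of these forms, using invented labels (the dict maps axis labels to group names; the function is called on each label):
s_demo = pd.Series([1, 2, 3], index=["a", "b", "c"])
s_demo.groupby({"a": "first", "b": "first", "c": "second"}).sum()    # dict: label -> group name
s_demo.groupby(lambda label: label.upper()).sum()                    # function applied to each label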
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of the dropna argument is True, which means NA values are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative, much more verbose approach:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function
Description
mean()
Compute mean of groups
sum()
Compute sum of group values
size()
Compute group sizes
count()
Compute count of group
std()
Standard deviation of groups
var()
Compute variance of groups
sem()
Standard error of the mean of groups
describe()
Generates descriptive statistics
first()
Compute first of group values
last()
Compute last of group values
nth()
Take nth value, or a subset if n is a list
min()
Compute min of group values
max()
Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work;
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
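For instance, a hypothetical helper that needs an extra upper argument could be partially applied like this (a sketch using the animals frame above; clipped_mean is not a pandas function):
from functools import partial

def clipped_mean(ser, upper=100.0):
    # hypothetical helper: cap large values before averaging
    return ser.clip(upper=upper).mean()

animals.groupby("kind").agg(capped_weight=("weight", partial(clipped_mean, upper=50.0)))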
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
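As a small sketch of this equivalence, an ordinary Python callable applied per group gives the same sums as the optimized method above:
# The lambda receives each group's column as a regular Series and uses the
# plain Series.sum method, bypassing the Cython fast path.
df.groupby("A")[["C", "D"]].agg(lambda g: g.sum())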
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
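A minimal sketch of that workaround (the small DataFrame here is illustrative, not taken from the examples above):
import pandas as pd
df_t = pd.DataFrame({"key": ["a", "a", "b"], "x": [1.0, 2.0, 3.0], "y": [4.0, 5.0, 6.0]})
# Returning a NumPy array from the function sidesteps the deprecated
# index alignment of DataFrame-shaped results.
df_t.groupby("key")[["x", "y"]].transform(lambda g: (g - g.mean()).to_numpy())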
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed objects where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on the passed function and exactly what you are grouping. Thus the
grouped column(s) may be included in the output and may also be set as
the index.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
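Calling it (a small sketch; the output is omitted here) combines the per-group DataFrames into a single two-column DataFrame:
# Each group yields a DataFrame, so the combined result gains a column dimension.
grouped.apply(f)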
apply on a Series can operate on a returned value from the applied function,
that is itself a series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
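A minimal sketch of such a function (assuming Numba is installed; the DataFrame and names below are illustrative only, not from the examples above):
import numpy as np
import pandas as pd
def group_mean(values, index):
    # values and index arrive as NumPy arrays for each group
    return np.mean(values)
df_nb = pd.DataFrame({"key": ["a", "a", "b"], "val": [1.0, 2.0, 3.0]})
df_nb.groupby("key")["val"].agg(group_mean, engine="numba")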
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only interesting over one column (here colname), it may be filtered
before applying the aggregation function.
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a “nuisance” column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The returned dtype of the grouped will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
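For example (a small sketch): a NaN in the grouping key is simply dropped, so only the non-NA key appears in the result.
# Only the key 1.0 survives; the row grouped under NaN is excluded.
pd.Series([1, 2, 3]).groupby([1.0, np.nan, 1.0]).sum()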
Grouping with ordered factors#
Categorical variables represented as instance of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group, in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resampling to work on indices that are non-datetimelike, the following procedure can be used.
In the following example, df.index // 5 returns an integer array of group labels, which determines which rows fall into each group for the groupby operation.
Note
The example below shows how we can downsample by consolidating samples into fewer ones. By using df.index // 5, we place the samples into bins; applying the std() function then condenses the information in each bin into a single value (its standard deviation), thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 147
| 508
|
Calculate average of every n rows from a csv file
I have a csv file that has 25000 rows. I want to put the average of every 30 rows in another csv file.
I've given an example with 9 rows as below and the new csv file has 3 rows (3, 1, 2):
| H |
========
| 1 |---\
| 3 | |--->| 3 |
| 5 |---/
| -1 |---\
| 3 | |--->| 1 |
| 1 |---/
| 0 |---\
| 5 | |--->| 2 |
| 1 |---/
What I did:
import numpy as np
import pandas as pd
m_path = "file.csv"
m_df = pd.read_csv(m_path, usecols=['Col-01'])
m_arr = np.array([])
temp = m_df.to_numpy()
step = 30
for i in range(1, 25000, step):
    m_arr = np.append(m_arr, np.array([np.average(temp[i:i + step])]))  # accumulate each block's average
data = np.array(m_arr)[np.newaxis]
m_df = pd.DataFrame({'Column1': data[0, :]})
m_df.to_csv('AVG.csv')
This works well, but is there any other option to do this?
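One alternative sketch (assuming the same Col-01 column and output file as above) groups rows by integer division of the positional index, which averages each block of 30 consecutive rows:
import pandas as pd
m_df = pd.read_csv("file.csv", usecols=["Col-01"])
# Every 30 consecutive rows share the same group label 0, 1, 2, ...
avg = m_df.groupby(m_df.index // 30).mean()
avg.to_csv("AVG.csv")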
|
64,079,055
|
Row-wise conditional counting keeping all columns without iterating over dataframe
|
<p>I'm struggling with conditional counting in pandas.</p>
<h1>Problem</h1>
<p>I have a pandas dataframe that has 4 columns (for the sake of this example): "id", "id2", "col1" and "type". The type column can have 3 values, namely "A", "B" and "C". What I'd like to do is, for each row, count the number of type C with the same id and id2. Here is a sample dataframe:</p>
<pre><code> id id2 col1 type
0 "e" "z" 0 "A"
1 "e" "z" 1 "C"
2 "e" "z" 2 "C"
3 "e" "y" 3 "C"
4 "e" "y" 4 "A"
5 "f" "y" 4 "A"
6 "f" "x" 3 "B"
7 "f" "x" 4 "B"
8 "g" "w" 5 "C"
9 "g" "w" 6 "B"
</code></pre>
<p>The code to build the sample dataframe:</p>
<pre class="lang-py prettyprint-override"><code>pd.DataFrame({
"id": ["e", "e", "e", "e", "e", "f", "f", "f", "g", "g"],
"id2": ["z", "z", "z", "y", "y", "x", "x", "x", "w", "w"],
"col1": [ 0 , 1 , 2 , 3 , 4 , 4 , 3 , 4 , 5 , 6 ],
"type": ["A", "C", "C", "C", "A", "A", "B", "B", "C", "B"]
})
</code></pre>
<p>And the desired result :</p>
<pre><code> id id2 col1 type count
0 "e" "z" 0 "A" 2
1 "e" "z" 1 "C" 2
2 "e" "z" 2 "C" 2
3 "e" "y" 3 "C" 1
4 "e" "y" 4 "A" 1
5 "f" "y" 4 "A" 0
6 "f" "x" 3 "B" 0
7 "f" "x" 4 "B" 0
8 "g" "w" 5 "C" 1
9 "g" "w" 6 "B" 1
</code></pre>
<p>I don't really care about what happens to rows with type "C" (e.g. rows 1, 2, 3, 8), so it's not a problem if they don't appear in the resulting dataframe.</p>
<p>I'd like a solution that doesn't rely on iterating "myself" through the dataset (no apply nor for loop) as they are too slow. I'm hoping to find a "pandaic" way of solving the problem.</p>
<p>Note: in the "real" dataset there are 3 columns used to index, type can have 5 different values and 36 data columns should be preserved. But I prefer a scalable solution, not bound to those numbers.</p>
<h1>What I've tried</h1>
<p>I can solve the problem using sqlalchemy and a query. Indeed, results should match the following query :</p>
<pre class="lang-sql prettyprint-override"><code>SELECT a.*, (SELECT COUNT(*)
FROM df b
WHERE
b.id = a.id AND
b.id2 = a.id2 AND
b.type = "C")
FROM df a
</code></pre>
<p>The initial problem can also be reworded as "what's the python code equivalent to this query?".</p>
<p>I can also solve the problem using apply. Both are very slow due to the size of the dataset, although the SQL method is probably slow because it has to build the database first.</p>
<h1>Related posts</h1>
<p>This <a href="https://stackoverflow.com/questions/45752601/how-to-do-a-conditional-count-after-groupby-on-a-pandas-dataframe">post</a> almost solves the problem, but doesn't work with external data column nor with multiple indexing and I couldn't adapt them for my example.</p>
<p>This line is close to what I'm looking for, the only issue is that it only keeps column you grouped by :</p>
<pre class="lang-py prettyprint-override"><code>df.groupby(["id", "id2", "type"]).size().unstack().reset_index()
</code></pre>
<p>If any information is missing, please let me know.
Thank you for taking the time to read my post and sorry for the spelling mistakes!</p>
| 64,082,490
| 2020-09-26T14:51:04.973000
| 1
| null | 1
| 259
|
python|pandas
|
<p>Try this:</p>
<pre><code>answer = df.groupby(['id','id2']).transform(sum)['type'].str.count('C')
pd.concat([df,answer], axis=1)
id id2 col1 type type
0 e z 0 A 2
1 e z 1 C 2
2 e z 2 C 2
3 e y 3 C 1
4 e y 4 A 1
5 f x 4 A 0
6 f x 3 B 0
7 f x 4 B 0
8 g w 5 C 1
9 g w 6 B 1
</code></pre>
<p>You can increase the columns in the groupby to whichever/how many you wish.</p>
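<p>An alternative sketch that avoids relying on string concatenation via <code>sum</code> (same <code>df</code> as in the question; the column name <code>count</code> matches the desired output):</p>
<pre><code># Count rows with type == "C" per (id, id2) group and broadcast back to every row.
df["count"] = df["type"].eq("C").groupby([df["id"], df["id2"]]).transform("sum")
</code></pre>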
| 2020-09-26T21:02:59.180000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.cumsum.html
|
pandas.DataFrame.cumsum#
pandas.DataFrame.cumsum#
DataFrame.cumsum(axis=None, skipna=True, *args, **kwargs)[source]#
Return cumulative sum over a DataFrame or Series axis.
Try this:
answer = df.groupby(['id','id2']).transform(sum)['type'].str.count('C')
pd.concat([df,answer], axis=1)
id id2 col1 type type
0 e z 0 A 2
1 e z 1 C 2
2 e z 2 C 2
3 e y 3 C 1
4 e y 4 A 1
5 f x 4 A 0
6 f x 3 B 0
7 f x 4 B 0
8 g w 5 C 1
9 g w 6 B 1
You can increase the columns in the groupby to whichever/how many you wish.
Returns a DataFrame or Series of the same size containing the cumulative
sum.
Parameters
axis{0 or ‘index’, 1 or ‘columns’}, default 0The index or the name of the axis. 0 is equivalent to None or ‘index’.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values. If an entire row/column is NA, the result
will be NA.
*args, **kwargsAdditional keywords have no effect but might be accepted for
compatibility with NumPy.
Returns
Series or DataFrameReturn cumulative sum of Series or DataFrame.
See also
core.window.expanding.Expanding.sumSimilar functionality but ignores NaN values.
DataFrame.sumReturn the sum over DataFrame axis.
DataFrame.cummaxReturn cumulative maximum over DataFrame axis.
DataFrame.cumminReturn cumulative minimum over DataFrame axis.
DataFrame.cumsumReturn cumulative sum over DataFrame axis.
DataFrame.cumprodReturn cumulative product over DataFrame axis.
Examples
Series
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
By default, NA values are ignored.
>>> s.cumsum()
0 2.0
1 NaN
2 7.0
3 6.0
4 6.0
dtype: float64
To include NA values in the operation, use skipna=False
>>> s.cumsum(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0],
... [3.0, np.nan],
... [1.0, 0.0]],
... columns=list('AB'))
>>> df
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
By default, iterates over rows and finds the sum
in each column. This is equivalent to axis=None or axis='index'.
>>> df.cumsum()
A B
0 2.0 1.0
1 5.0 NaN
2 6.0 1.0
To iterate over columns and find the sum in each row,
use axis=1
>>> df.cumsum(axis=1)
A B
0 2.0 3.0
1 3.0 NaN
2 1.0 1.0
| 176
| 663
|
Row-wise conditional counting keeping all columns without iterating over dataframe
I'm struggling with conditional counting in pandas.
Problem
I have a pandas dataframe that has 4 columns (for the sake of this example): "id", "id2", "col1" and "type". The type column can have 3 values, namely "A", "B" and "C". What I'd like to do is, for each row, count the number of type C with the same id and id2. Here is a sample dataframe:
id id2 col1 type
0 "e" "z" 0 "A"
1 "e" "z" 1 "C"
2 "e" "z" 2 "C"
3 "e" "y" 3 "C"
4 "e" "y" 4 "A"
5 "f" "y" 4 "A"
6 "f" "x" 3 "B"
7 "f" "x" 4 "B"
8 "g" "w" 5 "C"
9 "g" "w" 6 "B"
The code to build the sample dataframe:
pd.DataFrame({
"id": ["e", "e", "e", "e", "e", "f", "f", "f", "g", "g"],
"id2": ["z", "z", "z", "y", "y", "x", "x", "x", "w", "w"],
"col1": [ 0 , 1 , 2 , 3 , 4 , 4 , 3 , 4 , 5 , 6 ],
"type": ["A", "C", "C", "C", "A", "A", "B", "B", "C", "B"]
})
And the desired result :
id id2 col1 type count
0 "e" "z" 0 "A" 2
1 "e" "z" 1 "C" 2
2 "e" "z" 2 "C" 2
3 "e" "y" 3 "C" 1
4 "e" "y" 4 "A" 1
5 "f" "y" 4 "A" 0
6 "f" "x" 3 "B" 0
7 "f" "x" 4 "B" 0
8 "g" "w" 5 "C" 1
9 "g" "w" 6 "B" 1
I don't really care about what happens to rows with type "C" (e.g. rows 1, 2, 3, 8), so it's not a problem if they don't appear in the resulting dataframe.
I'd like a solution that doesn't rely on iterating "myself" through the dataset (no apply nor for loop) as they are too slow. I'm hoping to find a "pandaic" way of solving the problem.
Note: in the "real" dataset there are 3 columns used to index, type can have 5 different values and 36 data columns should be preserved. But I prefer a scalable solution, not bound to those numbers.
What I've tried
I can solve the problem using sqlalchemy and a query. Indeed, results should match the following query :
SELECT a.*, (SELECT COUNT(*)
FROM df b
WHERE
b.id = a.id AND
b.id2 = a.id2 AND
b.type = "C")
FROM df a
The initial problem can also be reworded as "what's the python code equivalent to this query?".
I can also solve the problem using apply. Both are very slow due to the size of the dataset, although the SQL method is probably slow because it has to build the database first.
Related posts
This post almost solves the problem, but doesn't work with external data column nor with multiple indexing and I couldn't adapt them for my example.
This line is close to what I'm looking for, the only issue is that it only keeps column you grouped by :
df.groupby(["id", "id2", "type"]).size().unstack().reset_index()
If any information is missing, please let me know.
Thank you for taking the time to read my post and sorry for the spelling mistakes!
|
69,965,488
|
Trying to create a % column on a Pandas Dataframe, but only getting NaN Values
|
<p>I'm trying to add a percentage column to a dataframe, but when I try to add it to the new column, all I get is NaN values.</p>
<p>To create the column 'percent_clicked' on the 'clicks_pivot' df:</p>
<pre><code>clicks_pivot['percent_clicked'] = (clicks_pivot.user_id / clicks_pivot.user_id.sum()) * 100
</code></pre>
<p>Printing the modified 'clicks_pivot' I get:</p>
<p>utm_source</p>
<p>email 255 NaN</p>
<p>facebook 504 NaN</p>
<p>google 680 NaN</p>
<p>twitter 215 NaN</p>
<p>How can I get the % instead of the NaN values?</p>
| 69,965,592
| 2021-11-14T17:27:16.237000
| 1
| null | 0
| 13
|
python|pandas
|
<p>It works, I tested it. Before using the code, make sure the column in question is not holding strings instead of ints.</p>
<pre><code>df['b'] = (df['a'] / df['a'].sum()) * 100
</code></pre>
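<p>If the column does turn out to hold strings, one hedged option is to coerce it first (the column name <code>user_id</code> is taken from the question):</p>
<pre><code>clicks_pivot['user_id'] = pd.to_numeric(clicks_pivot['user_id'], errors='coerce')
clicks_pivot['percent_clicked'] = (clicks_pivot['user_id'] / clicks_pivot['user_id'].sum()) * 100
</code></pre>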
| 2021-11-14T17:39:51.500000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.reindex.html
|
pandas.DataFrame.reindex#
pandas.DataFrame.reindex#
DataFrame.reindex(labels=None, index=None, columns=None, axis=None, method=None, copy=None, level=None, fill_value=nan, limit=None, tolerance=None)[source]#
Conform Series/DataFrame to new index with optional filling logic.
Places NA/NaN in locations having no value in the previous index. A new object
is produced unless the new index is equivalent to the current one and
copy=False.
Parameters
keywords for axesarray-like, optionalNew labels / index to conform to, should be specified using
keywords. Preferably an Index object to avoid duplicating data.
It works, I tested it. Before using the code, make sure the column in question is not holding strings instead of ints.
df['b'] = (df['a'] / df['a'].sum()) * 100
method{None, ‘backfill’/’bfill’, ‘pad’/’ffill’, ‘nearest’}Method to use for filling holes in reindexed DataFrame.
Please note: this is only applicable to DataFrames/Series with a
monotonically increasing/decreasing index.
None (default): don’t fill gaps
pad / ffill: Propagate last valid observation forward to next
valid.
backfill / bfill: Use next valid observation to fill gap.
nearest: Use nearest valid observations to fill gap.
copybool, default TrueReturn a new object, even if the passed indexes are the same.
levelint or nameBroadcast across a level, matching Index values on the
passed MultiIndex level.
fill_valuescalar, default np.NaNValue to use for missing values. Defaults to NaN, but can be any
“compatible” value.
limitint, default NoneMaximum number of consecutive elements to forward or backward fill.
toleranceoptionalMaximum distance between original and new labels for inexact
matches. The values of the index at the matching locations most
satisfy the equation abs(index[indexer] - target) <= tolerance.
Tolerance may be a scalar value, which applies the same tolerance
to all values, or list-like, which applies variable tolerance per
element. List-like includes list, tuple, array, Series, and must be
the same size as the index and its dtype must exactly match the
index’s type.
Returns
Series/DataFrame with changed index.
See also
DataFrame.set_indexSet row labels.
DataFrame.reset_indexRemove row labels or move them to new columns.
DataFrame.reindex_likeChange to same indices as other DataFrame.
Examples
DataFrame.reindex supports two calling conventions
(index=index_labels, columns=column_labels, ...)
(labels, axis={'index', 'columns'}, ...)
We highly recommend using keyword arguments to clarify your
intent.
Create a dataframe with some fictional data.
>>> index = ['Firefox', 'Chrome', 'Safari', 'IE10', 'Konqueror']
>>> df = pd.DataFrame({'http_status': [200, 200, 404, 404, 301],
... 'response_time': [0.04, 0.02, 0.07, 0.08, 1.0]},
... index=index)
>>> df
http_status response_time
Firefox 200 0.04
Chrome 200 0.02
Safari 404 0.07
IE10 404 0.08
Konqueror 301 1.00
Create a new index and reindex the dataframe. By default
values in the new index that do not have corresponding
records in the dataframe are assigned NaN.
>>> new_index = ['Safari', 'Iceweasel', 'Comodo Dragon', 'IE10',
... 'Chrome']
>>> df.reindex(new_index)
http_status response_time
Safari 404.0 0.07
Iceweasel NaN NaN
Comodo Dragon NaN NaN
IE10 404.0 0.08
Chrome 200.0 0.02
We can fill in the missing values by passing a value to
the keyword fill_value. Because the index is not monotonically
increasing or decreasing, we cannot use arguments to the keyword
method to fill the NaN values.
>>> df.reindex(new_index, fill_value=0)
http_status response_time
Safari 404 0.07
Iceweasel 0 0.00
Comodo Dragon 0 0.00
IE10 404 0.08
Chrome 200 0.02
>>> df.reindex(new_index, fill_value='missing')
http_status response_time
Safari 404 0.07
Iceweasel missing missing
Comodo Dragon missing missing
IE10 404 0.08
Chrome 200 0.02
We can also reindex the columns.
>>> df.reindex(columns=['http_status', 'user_agent'])
http_status user_agent
Firefox 200 NaN
Chrome 200 NaN
Safari 404 NaN
IE10 404 NaN
Konqueror 301 NaN
Or we can use “axis-style” keyword arguments
>>> df.reindex(['http_status', 'user_agent'], axis="columns")
http_status user_agent
Firefox 200 NaN
Chrome 200 NaN
Safari 404 NaN
IE10 404 NaN
Konqueror 301 NaN
To further illustrate the filling functionality in
reindex, we will create a dataframe with a
monotonically increasing index (for example, a sequence
of dates).
>>> date_index = pd.date_range('1/1/2010', periods=6, freq='D')
>>> df2 = pd.DataFrame({"prices": [100, 101, np.nan, 100, 89, 88]},
... index=date_index)
>>> df2
prices
2010-01-01 100.0
2010-01-02 101.0
2010-01-03 NaN
2010-01-04 100.0
2010-01-05 89.0
2010-01-06 88.0
Suppose we decide to expand the dataframe to cover a wider
date range.
>>> date_index2 = pd.date_range('12/29/2009', periods=10, freq='D')
>>> df2.reindex(date_index2)
prices
2009-12-29 NaN
2009-12-30 NaN
2009-12-31 NaN
2010-01-01 100.0
2010-01-02 101.0
2010-01-03 NaN
2010-01-04 100.0
2010-01-05 89.0
2010-01-06 88.0
2010-01-07 NaN
The index entries that did not have a value in the original data frame
(for example, ‘2009-12-29’) are by default filled with NaN.
If desired, we can fill in the missing values using one of several
options.
For example, to back-propagate the last valid value to fill the NaN
values, pass bfill as an argument to the method keyword.
>>> df2.reindex(date_index2, method='bfill')
prices
2009-12-29 100.0
2009-12-30 100.0
2009-12-31 100.0
2010-01-01 100.0
2010-01-02 101.0
2010-01-03 NaN
2010-01-04 100.0
2010-01-05 89.0
2010-01-06 88.0
2010-01-07 NaN
Please note that the NaN value present in the original dataframe
(at index value 2010-01-03) will not be filled by any of the
value propagation schemes. This is because filling while reindexing
does not look at dataframe values, but only compares the original and
desired indexes. If you do want to fill in the NaN values present
in the original dataframe, use the fillna() method.
See the user guide for more.
| 615
| 772
|
Trying to create a % column on a Pandas Dataframe, but only getting NaN Values
I'm trying to add a percentage column to a dataframe, but when I try to add it to the new column, all I get is NaN values.
To create the column 'percent_clicked' on the 'clicks_pivot' df:
clicks_pivot['percent_clicked'] = (clicks_pivot.user_id / clicks_pivot.user_id.sum()) * 100
Printing the modified 'clicks_pivot' I get:
utm_source
email 255 NaN
facebook 504 NaN
google 680 NaN
twitter 215 NaN
How can I get the % instead of the NaN values?
|
64,573,251
|
Is there a way to use the groupby function in pandas so that something could be referenced as 0?
|
<p>So I have this CSV file that I'm using in Pandas, and it contains info on whether a post pulled from online has a certain word in it. So let's say I'm looking at sports; the CSV file basically looks like this:</p>
<pre><code>Date of Post Sport Mentioned
9-22 Basketball
9-22 Hockey
9-22 Football
9-24 Baseball
9-24 Hockey
9-24 Football
</code></pre>
<p>I want it so that when I use groupby('Date of Post').count(), it would show 0 on 9-23, since there's no mention of any sport on that date. Is there a way to do this? I'm pretty certain that pandas sees the first column as being dates, not just a regular string.</p>
| 64,573,352
| 2020-10-28T12:50:51.327000
| 1
| null | 0
| 18
|
python|pandas
|
<p>Use <code>DataFrame.resample</code>:</p>
<pre><code>df['Date of Post'] = pd.to_datetime(df['Date of Post'], format='%m-%d')
df.resample('D', on='Date of Post').size()
Date of Post
1900-09-22 3
1900-09-23 0
1900-09-24 3
Freq: D, dtype: int64
</code></pre>
<p>If you want to add the correct year, use:</p>
<pre><code>df['Date of Post'] = pd.to_datetime('2020-' + df['Date of Post'], format='%Y-%m-%d')
df.resample('D', on='Date of Post').size()
Date of Post
2020-09-22 3
2020-09-23 0
2020-09-24 3
Freq: D, dtype: int64
</code></pre>
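<p>A hedged alternative sketch with the same effect (assuming the 2020 conversion above): group, count, then reindex over the full date range with <code>fill_value=0</code>.</p>
<pre><code>counts = df.groupby('Date of Post')['Sport Mentioned'].count()
counts.reindex(pd.date_range('2020-09-22', '2020-09-24'), fill_value=0)
</code></pre>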
| 2020-10-28T12:58:15.417000
| 0
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
Use DataFrame.resample:
df['Date of Post'] = pd.to_datetime(df['Date of Post'], format='%m-%d')
df.resample('D', on='Date of Post').size()
Date of Post
1900-09-22 3
1900-09-23 0
1900-09-24 3
Freq: D, dtype: int64
If you want to add the correct year, use:
df['Date of Post'] = pd.to_datetime('2020-' + df['Date of Post'], format='%Y-%m-%d')
df.resample('D', on='Date of Post').size()
Date of Post
2020-09-22 3
2020-09-23 0
2020-09-24 3
Freq: D, dtype: int64
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures are generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
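For instance (an illustrative sketch), the rows of the df above can be grouped by whether their positional index label is even or odd:
df.groupby(lambda idx: "even" if idx % 2 == 0 else "odd").size()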
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of dropna argument is True which means NA are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index consists of the group names and whose values are the sizes of each group
(here, because grouped was created with as_index=False, the result is instead a
DataFrame with a size column).
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values in each group. This is similar to the value_counts function, except that it returns only the number of unique values rather than the count of each one.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
When as_index=True (the default), aggregation functions will not return the named
columns that you are grouping over as regular columns; instead, the grouped columns
become the index of the returned object.
Passing as_index=False will return the grouped columns as regular columns in the output.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function
Description
mean()
Compute mean of groups
sum()
Compute sum of group values
size()
Compute group sizes
count()
Compute count of group
std()
Standard deviation of groups
var()
Compute variance of groups
sem()
Standard error of the mean of groups
describe()
Generates descriptive statistics
first()
Compute first of group values
last()
Compute last of group values
nth()
Take nth value, or a subset if n is a list
min()
Compute min of group values
max()
Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work,
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
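As a slightly less trivial sketch (using the df defined above; illustrative only), a peak-to-peak reducer written as a lambda:
df.groupby("A")["C"].agg(lambda ser: ser.max() - ser.min())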
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply the function with functools.partial().
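For example (a sketch; top_fraction_mean is a made-up helper, not part of pandas), an aggregator that needs an extra frac argument can be bound with functools.partial before being used in a named aggregation:
import functools
def top_fraction_mean(ser, frac):
    # mean of the largest `frac` share of values in the group
    n = max(int(len(ser) * frac), 1)
    return ser.nlargest(n).mean()
animals.groupby("kind").agg(
    tallest_mean=("height", functools.partial(top_fraction_mean, frac=0.5)),
)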
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(0, inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
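A minimal sketch of the .to_numpy() workaround mentioned in the deprecation note (using the df with columns A–D from above; illustrative only):
# Returning a plain ndarray from the transformation side-steps index alignment
df.groupby("A")[["C", "D"]].transform(lambda x: (x - x.mean()).to_numpy())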
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions, when applied to a GroupBy object, will automatically transform the
input, returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed objects where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with those arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it. It can depend on the passed function and
exactly what you are grouping. Thus the grouped column(s) may be included in
the output as well as set as the index.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
apply on a Series can operate on a returned value from the applied function that
is itself a Series, and can possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
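A hedged sketch of what such a user-defined function might look like (requires the optional numba dependency; the function and variable names are illustrative):
import numpy as np
import pandas as pd
def grouped_mean(values, index):
    # signature must be (values, index); both arrive as NumPy arrays
    return np.mean(values)
df_numba = pd.DataFrame({"key": ["a", "a", "b"], "val": [1.0, 2.0, 3.0]})
df_numba.groupby("key")["val"].agg(grouped_mean, engine="numba")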
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std(). is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only interesting over one column (here colname), it may be filtered
before applying the aggregation function.
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a “nuisance” column. Such columns are excluded from
aggregation functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The returned dtype of the grouped will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
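A tiny sketch of this behaviour (illustrative data):
import numpy as np
import pandas as pd
s_na = pd.Series([1, 2, 3], index=["a", np.nan, "a"])
s_na.groupby(level=0).sum()   # only the "a" group appears; the NaN key is dropped by default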
Grouping with ordered factors#
Categorical variables represented as instance of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group, in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order to resample to work on indices that are non-datetimelike, the following procedure can be utilized.
In the following examples, df.index // 5 returns an integer array of 0s and 1s, which is used to determine what gets selected for the groupby operation.
Note
The below example shows how we can downsample by consolidation of samples into fewer samples. Here by using df.index // 5, we are aggregating the samples in bins. By applying std() function, we aggregate the information contained in many samples into a small subset of values which is their standard deviation thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 459
| 937
|
Is there a way to use the groupby function in pandas so that something could be referenced as 0?
So I have this CSV file that I'm using in Pandas, and it contains info on if a post it pulled from online has a certain word in it. So let's say I'm looking at sports, the CSV file basically looks like this:
Date of Post Sport Mentioned
9-22 Basketball
9-22 Hockey
9-22 Football
9-24 Baseball
9-24 Hockey
9-24 Football
I want it so that when I use groupby('Date of Post').count(), it would show 0 on 9-23, since there's no mention of any sport on that date. Is there a way to do this? I'm pretty certain that pandas sees the first column as being dates, not just a regular string.
|
60,234,414
|
Combining Multiple Dataframes with Unique Name
|
<p>I have for example 2 data frames with user and their rating for each place such as:</p>
<p><strong>Dataframe 1:</strong></p>
<pre><code>Name Golden Gate
Adam 1
Susan 4
Mike 5
John 4
</code></pre>
<p><strong>Dataframe 2:</strong></p>
<pre><code>Name Botanical Garden
Jenny 1
Susan 4
Leslie 5
John 3
</code></pre>
<p>I want to combine them into a single data frame with the result:</p>
<p><strong>Combined Dataframe:</strong></p>
<pre><code>Name Golden Gate Botanical Garden
Adam 1 NA
Susan 4 4
Mike 5 NA
John 4 3
Jenny NA 1
Leslie NA 5
</code></pre>
<p>How to do that? </p>
<p>Thank you.</p>
| 60,234,428
| 2020-02-14T22:47:18.127000
| 2
| null | -2
| 22
|
python|pandas
|
<p>You need to perform an <code>outer join</code> or a concatenation along an axis:</p>
<pre><code>final_df = df1.merge(df2,how='outer',on='Name')
</code></pre>
<p>Output:</p>
<pre><code> Name Golden Gate Botanical Garden
0 Adam 1.0 NaN
1 Susan 4.0 4.0
2 Mike 5.0 NaN
3 John 4.0 3.0
4 Jenny NaN 1.0
5 Leslie NaN 5.0
</code></pre>
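<p>For reference, a self-contained sketch of the approach above (the data is rebuilt from the question; the <code>concat</code> line shows an alternative, not part of the original answer):</p>
<pre><code>import pandas as pd

df1 = pd.DataFrame({"Name": ["Adam", "Susan", "Mike", "John"],
                    "Golden Gate": [1, 4, 5, 4]})
df2 = pd.DataFrame({"Name": ["Jenny", "Susan", "Leslie", "John"],
                    "Botanical Garden": [1, 4, 5, 3]})

final_df = df1.merge(df2, how="outer", on="Name")
# Alternative: align on Name as the index and concatenate along columns
alt_df = pd.concat([df1.set_index("Name"), df2.set_index("Name")], axis=1)
</code></pre>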
| 2020-02-14T22:49:33.050000
| 0
|
https://pandas.pydata.org/docs/user_guide/merging.html
|
Merge, join, concatenate and compare#
Merge, join, concatenate and compare#
pandas provides various facilities for easily combining together Series or
DataFrame with various kinds of set logic for the indexes
and relational algebra functionality in the case of join / merge-type
operations.
In addition, pandas also provides utilities to compare two Series or DataFrame
and summarize their differences.
Concatenating objects#
The concat() function (in the main pandas namespace) does all of
the heavy lifting of performing concatenation operations along an axis while
performing optional set logic (union or intersection) of the indexes (if any) on
the other axes. Note that I say “if any” because there is only a single possible
axis of concatenation for Series.
You need to perform an outer join or a concatenation along an axis:
final_df = df1.merge(df2,how='outer',on='Name')
Output:
 Name Golden Gate Botanical Garden
0 Adam 1.0 NaN
1 Susan 4.0 4.0
2 Mike 5.0 NaN
3 John 4.0 3.0
4 Jenny NaN 1.0
5 Leslie NaN 5.0
Before diving into all of the details of concat and what it can do, here is
a simple example:
In [1]: df1 = pd.DataFrame(
...: {
...: "A": ["A0", "A1", "A2", "A3"],
...: "B": ["B0", "B1", "B2", "B3"],
...: "C": ["C0", "C1", "C2", "C3"],
...: "D": ["D0", "D1", "D2", "D3"],
...: },
...: index=[0, 1, 2, 3],
...: )
...:
In [2]: df2 = pd.DataFrame(
...: {
...: "A": ["A4", "A5", "A6", "A7"],
...: "B": ["B4", "B5", "B6", "B7"],
...: "C": ["C4", "C5", "C6", "C7"],
...: "D": ["D4", "D5", "D6", "D7"],
...: },
...: index=[4, 5, 6, 7],
...: )
...:
In [3]: df3 = pd.DataFrame(
...: {
...: "A": ["A8", "A9", "A10", "A11"],
...: "B": ["B8", "B9", "B10", "B11"],
...: "C": ["C8", "C9", "C10", "C11"],
...: "D": ["D8", "D9", "D10", "D11"],
...: },
...: index=[8, 9, 10, 11],
...: )
...:
In [4]: frames = [df1, df2, df3]
In [5]: result = pd.concat(frames)
Like its sibling function on ndarrays, numpy.concatenate, pandas.concat
takes a list or dict of homogeneously-typed objects and concatenates them with
some configurable handling of “what to do with the other axes”:
pd.concat(
objs,
axis=0,
join="outer",
ignore_index=False,
keys=None,
levels=None,
names=None,
verify_integrity=False,
copy=True,
)
objs : a sequence or mapping of Series or DataFrame objects. If a
dict is passed, its sorted keys will be used as the keys argument, unless an
explicit keys argument is passed, in which case only those values will be
selected (see below). Any None objects will be dropped silently unless they
are all None, in which case a ValueError will be raised.
axis : {0, 1, …}, default 0. The axis to concatenate along.
join : {‘inner’, ‘outer’}, default ‘outer’. How to handle indexes on
other axis(es). Outer for union and inner for intersection.
ignore_index : boolean, default False. If True, do not use the index
values on the concatenation axis. The resulting axis will be labeled 0, …,
n - 1. This is useful if you are concatenating objects where the
concatenation axis does not have meaningful indexing information. Note
the index values on the other axes are still respected in the join.
keys : sequence, default None. Construct hierarchical index using the
passed keys as the outermost level. If multiple levels passed, should
contain tuples.
levels : list of sequences, default None. Specific levels (unique values)
to use for constructing a MultiIndex. Otherwise they will be inferred from the
keys.
names : list, default None. Names for the levels in the resulting
hierarchical index.
verify_integrity : boolean, default False. Check whether the new
concatenated axis contains duplicates. This can be very expensive relative
to the actual data concatenation.
copy : boolean, default True. If False, do not copy data unnecessarily.
Without a little bit of context many of these arguments don’t make much sense.
Let’s revisit the above example. Suppose we wanted to associate specific keys
with each of the pieces of the chopped up DataFrame. We can do this using the
keys argument:
In [6]: result = pd.concat(frames, keys=["x", "y", "z"])
As you can see (if you’ve read the rest of the documentation), the resulting
object’s index has a hierarchical index. This
means that we can now select out each chunk by key:
In [7]: result.loc["y"]
Out[7]:
A B C D
4 A4 B4 C4 D4
5 A5 B5 C5 D5
6 A6 B6 C6 D6
7 A7 B7 C7 D7
It’s not a stretch to see how this can be very useful. More detail on this
functionality below.
Note
It is worth noting that concat() (and therefore
append()) makes a full copy of the data, and that constantly
reusing this function can create a significant performance hit. If you need
to use the operation over several datasets, use a list comprehension.
frames = [ process_your_file(f) for f in files ]
result = pd.concat(frames)
Note
When concatenating DataFrames with named axes, pandas will attempt to preserve
these index/column names whenever possible. In the case where all inputs share a
common name, this name will be assigned to the result. When the input names do
not all agree, the result will be unnamed. The same is true for MultiIndex,
but the logic is applied separately on a level-by-level basis.
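A minimal sketch of this name preservation (the frames here are made up for illustration and are not part of the guide above):
import pandas as pd

a = pd.DataFrame({"v": [1, 2]}, index=pd.Index(["r0", "r1"], name="id"))
b = pd.DataFrame({"v": [3, 4]}, index=pd.Index(["r2", "r3"], name="id"))
print(pd.concat([a, b]).index.name)   # 'id' -- both inputs agree on the name

b.index.name = "key"                  # the names now disagree
print(pd.concat([a, b]).index.name)   # None -- the result is unnamed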
Set logic on the other axes#
When gluing together multiple DataFrames, you have a choice of how to handle
the other axes (other than the one being concatenated). This can be done in
the following two ways:
Take the union of them all, join='outer'. This is the default
option as it results in zero information loss.
Take the intersection, join='inner'.
Here is an example of each of these methods. First, the default join='outer'
behavior:
In [8]: df4 = pd.DataFrame(
...: {
...: "B": ["B2", "B3", "B6", "B7"],
...: "D": ["D2", "D3", "D6", "D7"],
...: "F": ["F2", "F3", "F6", "F7"],
...: },
...: index=[2, 3, 6, 7],
...: )
...:
In [9]: result = pd.concat([df1, df4], axis=1)
Here is the same thing with join='inner':
In [10]: result = pd.concat([df1, df4], axis=1, join="inner")
Lastly, suppose we just wanted to reuse the exact index from the original
DataFrame:
In [11]: result = pd.concat([df1, df4], axis=1).reindex(df1.index)
Similarly, we could index before the concatenation:
In [12]: pd.concat([df1, df4.reindex(df1.index)], axis=1)
Out[12]:
A B C D B D F
0 A0 B0 C0 D0 NaN NaN NaN
1 A1 B1 C1 D1 NaN NaN NaN
2 A2 B2 C2 D2 B2 D2 F2
3 A3 B3 C3 D3 B3 D3 F3
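Since the rendered outputs of the outer and inner joins above are not reproduced here, the following sketch rebuilds df1 and df4 as defined earlier and only prints the resulting shapes:
import pandas as pd

df1 = pd.DataFrame({c: [f"{c}{i}" for i in range(4)] for c in "ABCD"}, index=[0, 1, 2, 3])
df4 = pd.DataFrame({c: [f"{c}{i}" for i in [2, 3, 6, 7]] for c in "BDF"}, index=[2, 3, 6, 7])

outer = pd.concat([df1, df4], axis=1)                # union of row labels: 0, 1, 2, 3, 6, 7
inner = pd.concat([df1, df4], axis=1, join="inner")  # intersection of row labels: 2, 3
print(outer.shape, inner.shape)                      # (6, 7) (2, 7)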
Ignoring indexes on the concatenation axis#
For DataFrame objects which don’t have a meaningful index, you may wish
to append them and ignore the fact that they may have overlapping indexes. To
do this, use the ignore_index argument:
In [13]: result = pd.concat([df1, df4], ignore_index=True, sort=False)
Concatenating with mixed ndims#
You can concatenate a mix of Series and DataFrame objects. The
Series will be transformed to DataFrame with the column name as
the name of the Series.
In [14]: s1 = pd.Series(["X0", "X1", "X2", "X3"], name="X")
In [15]: result = pd.concat([df1, s1], axis=1)
Note
Since we’re concatenating a Series to a DataFrame, we could have
achieved the same result with DataFrame.assign(). To concatenate an
arbitrary number of pandas objects (DataFrame or Series), use
concat.
If unnamed Series are passed they will be numbered consecutively.
In [16]: s2 = pd.Series(["_0", "_1", "_2", "_3"])
In [17]: result = pd.concat([df1, s2, s2, s2], axis=1)
Passing ignore_index=True will drop all name references.
In [18]: result = pd.concat([df1, s1], axis=1, ignore_index=True)
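A short sketch (with made-up frames, independent of df1/s1 above) of how the resulting column labels are chosen when Series are concatenated to a DataFrame:
import pandas as pd

df = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
named = pd.Series([10, 20], name="X")
unnamed = pd.Series([30, 40])

print(pd.concat([df, named], axis=1).columns.tolist())                     # ['A', 'B', 'X']
print(pd.concat([df, unnamed], axis=1).columns.tolist())                   # ['A', 'B', 0]
print(pd.concat([df, named], axis=1, ignore_index=True).columns.tolist())  # [0, 1, 2]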
More concatenating with group keys#
A fairly common use of the keys argument is to override the column names
when creating a new DataFrame based on existing Series.
Notice how the default behaviour is to let the resulting DataFrame
inherit the parent Series’ name, when one exists.
In [19]: s3 = pd.Series([0, 1, 2, 3], name="foo")
In [20]: s4 = pd.Series([0, 1, 2, 3])
In [21]: s5 = pd.Series([0, 1, 4, 5])
In [22]: pd.concat([s3, s4, s5], axis=1)
Out[22]:
foo 0 1
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Through the keys argument we can override the existing column names.
In [23]: pd.concat([s3, s4, s5], axis=1, keys=["red", "blue", "yellow"])
Out[23]:
red blue yellow
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Let’s consider a variation of the very first example presented:
In [24]: result = pd.concat(frames, keys=["x", "y", "z"])
You can also pass a dict to concat in which case the dict keys will be used
for the keys argument (unless other keys are specified):
In [25]: pieces = {"x": df1, "y": df2, "z": df3}
In [26]: result = pd.concat(pieces)
In [27]: result = pd.concat(pieces, keys=["z", "y"])
The MultiIndex created has levels that are constructed from the passed keys and
the index of the DataFrame pieces:
In [28]: result.index.levels
Out[28]: FrozenList([['z', 'y'], [4, 5, 6, 7, 8, 9, 10, 11]])
If you wish to specify other levels (as will occasionally be the case), you can
do so using the levels argument:
In [29]: result = pd.concat(
....: pieces, keys=["x", "y", "z"], levels=[["z", "y", "x", "w"]], names=["group_key"]
....: )
....:
In [30]: result.index.levels
Out[30]: FrozenList([['z', 'y', 'x', 'w'], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]])
This is fairly esoteric, but it is actually necessary for implementing things
like GroupBy where the order of a categorical variable is meaningful.
Appending rows to a DataFrame#
If you have a series that you want to append as a single row to a DataFrame, you can convert the row into a
DataFrame and use concat
In [31]: s2 = pd.Series(["X0", "X1", "X2", "X3"], index=["A", "B", "C", "D"])
In [32]: result = pd.concat([df1, s2.to_frame().T], ignore_index=True)
You should use ignore_index with this method to instruct DataFrame to
discard its index. If you wish to preserve the index, you should construct an
appropriately-indexed DataFrame and append or concatenate those objects.
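A minimal sketch of the row-append pattern just described, using its own toy frame rather than the guide's df1:
import pandas as pd

df = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
row = pd.Series({"A": 5, "B": 6})

out = pd.concat([df, row.to_frame().T], ignore_index=True)
print(out)
#    A  B
# 0  1  3
# 1  2  4
# 2  5  6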
Database-style DataFrame or named Series joining/merging#
pandas has full-featured, high performance in-memory join operations
idiomatically very similar to relational databases like SQL. These methods
perform significantly better (in some cases well over an order of magnitude
better) than other open source implementations (like base::merge.data.frame
in R). The reason for this is careful algorithmic design and the internal layout
of the data in DataFrame.
See the cookbook for some advanced strategies.
Users who are familiar with SQL but new to pandas might be interested in a
comparison with SQL.
pandas provides a single function, merge(), as the entry point for
all standard database join operations between DataFrame or named Series objects:
pd.merge(
left,
right,
how="inner",
on=None,
left_on=None,
right_on=None,
left_index=False,
right_index=False,
sort=True,
suffixes=("_x", "_y"),
copy=True,
indicator=False,
validate=None,
)
left: A DataFrame or named Series object.
right: Another DataFrame or named Series object.
on: Column or index level names to join on. Must be found in both the left
and right DataFrame and/or Series objects. If not passed and left_index and
right_index are False, the intersection of the columns in the
DataFrames and/or Series will be inferred to be the join keys.
left_on: Columns or index levels from the left DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
right_on: Columns or index levels from the right DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
left_index: If True, use the index (row labels) from the left
DataFrame or Series as its join key(s). In the case of a DataFrame or Series with a MultiIndex
(hierarchical), the number of levels must match the number of join keys
from the right DataFrame or Series.
right_index: Same usage as left_index for the right DataFrame or Series
how: One of 'left', 'right', 'outer', 'inner', 'cross'. Defaults
to inner. See below for more detailed description of each method.
sort: Sort the result DataFrame by the join keys in lexicographical
order. Defaults to True, setting to False will improve performance
substantially in many cases.
suffixes: A tuple of string suffixes to apply to overlapping
columns. Defaults to ('_x', '_y').
copy: Always copy data (default True) from the passed DataFrame or named Series
objects, even when reindexing is not necessary. Cannot be avoided in many
cases but may improve performance / memory usage. The cases where copying
can be avoided are somewhat pathological but this option is provided
nonetheless.
indicator: Add a column to the output DataFrame called _merge
with information on the source of each row. _merge is Categorical-type
and takes on a value of left_only for observations whose merge key
only appears in 'left' DataFrame or Series, right_only for observations whose
merge key only appears in 'right' DataFrame or Series, and both if the
observation’s merge key is found in both.
validate : string, default None.
If specified, checks if merge is of specified type.
“one_to_one” or “1:1”: checks if merge keys are unique in both
left and right datasets.
“one_to_many” or “1:m”: checks if merge keys are unique in left
dataset.
“many_to_one” or “m:1”: checks if merge keys are unique in right
dataset.
“many_to_many” or “m:m”: allowed, but does not result in checks.
Note
Support for specifying index levels as the on, left_on, and
right_on parameters was added in version 0.23.0.
Support for merging named Series objects was added in version 0.24.0.
The return type will be the same as left. If left is a DataFrame or named Series
and right is a subclass of DataFrame, the return type will still be DataFrame.
merge is a function in the pandas namespace, and it is also available as a
DataFrame instance method merge(), with the calling
DataFrame being implicitly considered the left object in the join.
The related join() method, uses merge internally for the
index-on-index (by default) and column(s)-on-index join. If you are joining on
index only, you may wish to use DataFrame.join to save yourself some typing.
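A small sketch (assumed frames) of the equivalence between DataFrame.join and the corresponding merge call on indexes:
import pandas as pd

left = pd.DataFrame({"A": [1, 2]}, index=["K0", "K1"])
right = pd.DataFrame({"B": [3, 4]}, index=["K0", "K2"])

via_join = left.join(right)   # left join on the index by default
via_merge = pd.merge(left, right, left_index=True, right_index=True, how="left")
print(via_join.equals(via_merge))  # True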
Brief primer on merge methods (relational algebra)#
Experienced users of relational databases like SQL will be familiar with the
terminology used to describe join operations between two SQL-table like
structures (DataFrame objects). There are several cases to consider which
are very important to understand:
one-to-one joins: for example when joining two DataFrame objects on
their indexes (which must contain unique values).
many-to-one joins: for example when joining an index (unique) to one or
more columns in a different DataFrame.
many-to-many joins: joining columns on columns.
Note
When joining columns on columns (potentially a many-to-many join), any
indexes on the passed DataFrame objects will be discarded.
It is worth spending some time understanding the result of the many-to-many
join case. In SQL / standard relational algebra, if a key combination appears
more than once in both tables, the resulting table will have the Cartesian
product of the associated data. Here is a very basic example with one unique
key combination:
In [33]: left = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [34]: right = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [35]: result = pd.merge(left, right, on="key")
Here is a more complicated example with multiple join keys. Only the keys
appearing in left and right are present (the intersection), since
how='inner' by default.
In [36]: left = pd.DataFrame(
....: {
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [37]: right = pd.DataFrame(
....: {
....: "key1": ["K0", "K1", "K1", "K2"],
....: "key2": ["K0", "K0", "K0", "K0"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [38]: result = pd.merge(left, right, on=["key1", "key2"])
The how argument to merge specifies how to determine which keys are to
be included in the resulting table. If a key combination does not appear in
either the left or right tables, the values in the joined table will be
NA. Here is a summary of the how options and their SQL equivalent names:
Merge method   SQL Join Name      Description
left           LEFT OUTER JOIN    Use keys from left frame only
right          RIGHT OUTER JOIN   Use keys from right frame only
outer          FULL OUTER JOIN    Use union of keys from both frames
inner          INNER JOIN         Use intersection of keys from both frames
cross          CROSS JOIN         Create the cartesian product of rows of both frames
In [39]: result = pd.merge(left, right, how="left", on=["key1", "key2"])
In [40]: result = pd.merge(left, right, how="right", on=["key1", "key2"])
In [41]: result = pd.merge(left, right, how="outer", on=["key1", "key2"])
In [42]: result = pd.merge(left, right, how="inner", on=["key1", "key2"])
In [43]: result = pd.merge(left, right, how="cross")
You can merge a multi-indexed Series and a DataFrame, if the names of
the MultiIndex correspond to the columns from the DataFrame. Transform
the Series to a DataFrame using Series.reset_index() before merging,
as shown in the following example.
In [44]: df = pd.DataFrame({"Let": ["A", "B", "C"], "Num": [1, 2, 3]})
In [45]: df
Out[45]:
Let Num
0 A 1
1 B 2
2 C 3
In [46]: ser = pd.Series(
....: ["a", "b", "c", "d", "e", "f"],
....: index=pd.MultiIndex.from_arrays(
....: [["A", "B", "C"] * 2, [1, 2, 3, 4, 5, 6]], names=["Let", "Num"]
....: ),
....: )
....:
In [47]: ser
Out[47]:
Let Num
A 1 a
B 2 b
C 3 c
A 4 d
B 5 e
C 6 f
dtype: object
In [48]: pd.merge(df, ser.reset_index(), on=["Let", "Num"])
Out[48]:
Let Num 0
0 A 1 a
1 B 2 b
2 C 3 c
Here is another example with duplicate join keys in DataFrames:
In [49]: left = pd.DataFrame({"A": [1, 2], "B": [2, 2]})
In [50]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [51]: result = pd.merge(left, right, on="B", how="outer")
Warning
Joining / merging on duplicate keys can return a frame whose row count is the product of the row dimensions, which may result in memory overflow. It is the user’s responsibility to manage duplicate values in keys before joining large DataFrames.
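A minimal sketch of the row multiplication this warning describes, using the small frames from In [49]/[50]:
import pandas as pd

left = pd.DataFrame({"A": [1, 2], "B": [2, 2]})
right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})

merged = pd.merge(left, right, on="B", how="outer")
print(len(left), len(right), len(merged))  # 2 3 6 -- every left row pairs with every right row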
Checking for duplicate keys#
Users can use the validate argument to automatically check whether there
are unexpected duplicates in their merge keys. Key uniqueness is checked before
merge operations and so should protect against memory overflows. Checking key
uniqueness is also a good way to ensure user data structures are as expected.
In the following example, there are duplicate values of B in the right
DataFrame. As this is not a one-to-one merge – as specified in the
validate argument – an exception will be raised.
In [52]: left = pd.DataFrame({"A": [1, 2], "B": [1, 2]})
In [53]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [53]: result = pd.merge(left, right, on="B", how="outer", validate="one_to_one")
...
MergeError: Merge keys are not unique in right dataset; not a one-to-one merge
If the user is aware of the duplicates in the right DataFrame but wants to
ensure there are no duplicates in the left DataFrame, one can use the
validate='one_to_many' argument instead, which will not raise an exception.
In [54]: pd.merge(left, right, on="B", how="outer", validate="one_to_many")
Out[54]:
A_x B A_y
0 1 1 NaN
1 2 2 4.0
2 2 2 5.0
3 2 2 6.0
The merge indicator#
merge() accepts the argument indicator. If True, a
Categorical-type column called _merge will be added to the output object
that takes on values:
Observation Origin                 _merge value
Merge key only in 'left' frame     left_only
Merge key only in 'right' frame    right_only
Merge key in both frames           both
In [55]: df1 = pd.DataFrame({"col1": [0, 1], "col_left": ["a", "b"]})
In [56]: df2 = pd.DataFrame({"col1": [1, 2, 2], "col_right": [2, 2, 2]})
In [57]: pd.merge(df1, df2, on="col1", how="outer", indicator=True)
Out[57]:
col1 col_left col_right _merge
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
The indicator argument will also accept string arguments, in which case the indicator function will use the value of the passed string as the name for the indicator column.
In [58]: pd.merge(df1, df2, on="col1", how="outer", indicator="indicator_column")
Out[58]:
col1 col_left col_right indicator_column
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
Merge dtypes#
Merging will preserve the dtype of the join keys.
In [59]: left = pd.DataFrame({"key": [1], "v1": [10]})
In [60]: left
Out[60]:
key v1
0 1 10
In [61]: right = pd.DataFrame({"key": [1, 2], "v1": [20, 30]})
In [62]: right
Out[62]:
key v1
0 1 20
1 2 30
We are able to preserve the join keys:
In [63]: pd.merge(left, right, how="outer")
Out[63]:
key v1
0 1 10
1 1 20
2 2 30
In [64]: pd.merge(left, right, how="outer").dtypes
Out[64]:
key int64
v1 int64
dtype: object
Of course if you have missing values that are introduced, then the
resulting dtype will be upcast.
In [65]: pd.merge(left, right, how="outer", on="key")
Out[65]:
key v1_x v1_y
0 1 10.0 20
1 2 NaN 30
In [66]: pd.merge(left, right, how="outer", on="key").dtypes
Out[66]:
key int64
v1_x float64
v1_y int64
dtype: object
Merging will preserve category dtypes of the mergands. See also the section on categoricals.
The left frame.
In [67]: from pandas.api.types import CategoricalDtype
In [68]: X = pd.Series(np.random.choice(["foo", "bar"], size=(10,)))
In [69]: X = X.astype(CategoricalDtype(categories=["foo", "bar"]))
In [70]: left = pd.DataFrame(
....: {"X": X, "Y": np.random.choice(["one", "two", "three"], size=(10,))}
....: )
....:
In [71]: left
Out[71]:
X Y
0 bar one
1 foo one
2 foo three
3 bar three
4 foo one
5 bar one
6 bar three
7 bar three
8 bar three
9 foo three
In [72]: left.dtypes
Out[72]:
X category
Y object
dtype: object
The right frame.
In [73]: right = pd.DataFrame(
....: {
....: "X": pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar"])),
....: "Z": [1, 2],
....: }
....: )
....:
In [74]: right
Out[74]:
X Z
0 foo 1
1 bar 2
In [75]: right.dtypes
Out[75]:
X category
Z int64
dtype: object
The merged result:
In [76]: result = pd.merge(left, right, how="outer")
In [77]: result
Out[77]:
X Y Z
0 bar one 2
1 bar three 2
2 bar one 2
3 bar three 2
4 bar three 2
5 bar three 2
6 foo one 1
7 foo three 1
8 foo one 1
9 foo three 1
In [78]: result.dtypes
Out[78]:
X category
Y object
Z int64
dtype: object
Note
The category dtypes must be exactly the same, meaning the same categories and the ordered attribute.
Otherwise the result will coerce to the categories’ dtype.
Note
Merging on category dtypes that are the same can be quite performant compared to object dtype merging.
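A hedged sketch (own frames, not the guide's) of giving both key columns an identical CategoricalDtype before merging, as the note above requires:
import pandas as pd
from pandas.api.types import CategoricalDtype

key_dtype = CategoricalDtype(categories=["foo", "bar"])
left = pd.DataFrame({"X": pd.Series(["foo", "bar", "foo"], dtype=key_dtype), "Y": [1, 2, 3]})
right = pd.DataFrame({"X": pd.Series(["foo", "bar"], dtype=key_dtype), "Z": [10, 20]})

merged = pd.merge(left, right, on="X", how="left")
print(merged.dtypes["X"])  # category -- preserved because both sides share the exact same dtype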
Joining on index#
DataFrame.join() is a convenient method for combining the columns of two
potentially differently-indexed DataFrames into a single result
DataFrame. Here is a very basic example:
In [79]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=["K0", "K1", "K2"]
....: )
....:
In [80]: right = pd.DataFrame(
....: {"C": ["C0", "C2", "C3"], "D": ["D0", "D2", "D3"]}, index=["K0", "K2", "K3"]
....: )
....:
In [81]: result = left.join(right)
In [82]: result = left.join(right, how="outer")
The same as above, but with how='inner'.
In [83]: result = left.join(right, how="inner")
The data alignment here is on the indexes (row labels). This same behavior can
be achieved using merge plus additional arguments instructing it to use the
indexes:
In [84]: result = pd.merge(left, right, left_index=True, right_index=True, how="outer")
In [85]: result = pd.merge(left, right, left_index=True, right_index=True, how="inner")
Joining key columns on an index#
join() takes an optional on argument which may be a column
or multiple column names, which specifies that the passed DataFrame is to be
aligned on that column in the DataFrame. These two function calls are
completely equivalent:
left.join(right, on=key_or_keys)
pd.merge(
left, right, left_on=key_or_keys, right_index=True, how="left", sort=False
)
Obviously you can choose whichever form you find more convenient. For
many-to-one joins (where one of the DataFrames is already indexed by the
join key), using join may be more convenient. Here is a simple example:
In [86]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [87]: right = pd.DataFrame({"C": ["C0", "C1"], "D": ["D0", "D1"]}, index=["K0", "K1"])
In [88]: result = left.join(right, on="key")
In [89]: result = pd.merge(
....: left, right, left_on="key", right_index=True, how="left", sort=False
....: )
....:
To join on multiple keys, the passed DataFrame must have a MultiIndex:
In [90]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [91]: index = pd.MultiIndex.from_tuples(
....: [("K0", "K0"), ("K1", "K0"), ("K2", "K0"), ("K2", "K1")]
....: )
....:
In [92]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=index
....: )
....:
Now this can be joined by passing the two key column names:
In [93]: result = left.join(right, on=["key1", "key2"])
The default for DataFrame.join is to perform a left join (essentially a
“VLOOKUP” operation, for Excel users), which uses only the keys found in the
calling DataFrame. Other join types, for example inner join, can be just as
easily performed:
In [94]: result = left.join(right, on=["key1", "key2"], how="inner")
As you can see, this drops any rows where there was no match.
Joining a single Index to a MultiIndex#
You can join a singly-indexed DataFrame with a level of a MultiIndexed DataFrame.
The level will match on the name of the index of the singly-indexed frame against
a level name of the MultiIndexed frame.
In [95]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]},
....: index=pd.Index(["K0", "K1", "K2"], name="key"),
....: )
....:
In [96]: index = pd.MultiIndex.from_tuples(
....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")],
....: names=["key", "Y"],
....: )
....:
In [97]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]},
....: index=index,
....: )
....:
In [98]: result = left.join(right, how="inner")
This is equivalent to, but less verbose and more memory efficient / faster than, the following:
In [99]: result = pd.merge(
....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
....: ).set_index(["key","Y"])
....:
Joining with two MultiIndexes#
This is supported in a limited way, provided that the index for the right
argument is completely used in the join, and is a subset of the indices in
the left argument, as in this example:
In [100]: leftindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy"), [1, 2]], names=["abc", "xy", "num"]
.....: )
.....:
In [101]: left = pd.DataFrame({"v1": range(12)}, index=leftindex)
In [102]: left
Out[102]:
v1
abc xy num
a x 1 0
2 1
y 1 2
2 3
b x 1 4
2 5
y 1 6
2 7
c x 1 8
2 9
y 1 10
2 11
In [103]: rightindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy")], names=["abc", "xy"]
.....: )
.....:
In [104]: right = pd.DataFrame({"v2": [100 * i for i in range(1, 7)]}, index=rightindex)
In [105]: right
Out[105]:
v2
abc xy
a x 100
y 200
b x 300
y 400
c x 500
y 600
In [106]: left.join(right, on=["abc", "xy"], how="inner")
Out[106]:
v1 v2
abc xy num
a x 1 0 100
2 1 100
y 1 2 200
2 3 200
b x 1 4 300
2 5 300
y 1 6 400
2 7 400
c x 1 8 500
2 9 500
y 1 10 600
2 11 600
If that condition is not satisfied, a join with two multi-indexes can be
done using the following code.
In [107]: leftindex = pd.MultiIndex.from_tuples(
.....: [("K0", "X0"), ("K0", "X1"), ("K1", "X2")], names=["key", "X"]
.....: )
.....:
In [108]: left = pd.DataFrame(
.....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=leftindex
.....: )
.....:
In [109]: rightindex = pd.MultiIndex.from_tuples(
.....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")], names=["key", "Y"]
.....: )
.....:
In [110]: right = pd.DataFrame(
.....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=rightindex
.....: )
.....:
In [111]: result = pd.merge(
.....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
.....: ).set_index(["key", "X", "Y"])
.....:
Merging on a combination of columns and index levels#
Strings passed as the on, left_on, and right_on parameters
may refer to either column names or index level names. This enables merging
DataFrame instances on a combination of index levels and columns without
resetting indexes.
In [112]: left_index = pd.Index(["K0", "K0", "K1", "K2"], name="key1")
In [113]: left = pd.DataFrame(
.....: {
.....: "A": ["A0", "A1", "A2", "A3"],
.....: "B": ["B0", "B1", "B2", "B3"],
.....: "key2": ["K0", "K1", "K0", "K1"],
.....: },
.....: index=left_index,
.....: )
.....:
In [114]: right_index = pd.Index(["K0", "K1", "K2", "K2"], name="key1")
In [115]: right = pd.DataFrame(
.....: {
.....: "C": ["C0", "C1", "C2", "C3"],
.....: "D": ["D0", "D1", "D2", "D3"],
.....: "key2": ["K0", "K0", "K0", "K1"],
.....: },
.....: index=right_index,
.....: )
.....:
In [116]: result = left.merge(right, on=["key1", "key2"])
Note
When DataFrames are merged on a string that matches an index level in both
frames, the index level is preserved as an index level in the resulting
DataFrame.
Note
When DataFrames are merged using only some of the levels of a MultiIndex,
the extra levels will be dropped from the resulting merge. In order to
preserve those levels, use reset_index on those level names to move
those levels to columns prior to doing the merge.
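A minimal sketch (assumed frames) of the reset_index workaround described in this note:
import pandas as pd

idx = pd.MultiIndex.from_tuples([("K0", "X0"), ("K1", "X1")], names=["key1", "key2"])
left = pd.DataFrame({"A": [1, 2]}, index=idx)
right = pd.DataFrame({"key1": ["K0", "K1"], "C": [10, 20]})

# Moving the index levels to columns before merging keeps key2 in the result:
out = left.reset_index().merge(right, on="key1").set_index(["key1", "key2"])
print(out)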
Note
If a string matches both a column name and an index level name, then a
warning is issued and the column takes precedence. This will result in an
ambiguity error in a future version.
Overlapping value columns#
The merge suffixes argument takes a tuple or list of strings to append to
overlapping column names in the input DataFrames to disambiguate the result
columns:
In [117]: left = pd.DataFrame({"k": ["K0", "K1", "K2"], "v": [1, 2, 3]})
In [118]: right = pd.DataFrame({"k": ["K0", "K0", "K3"], "v": [4, 5, 6]})
In [119]: result = pd.merge(left, right, on="k")
In [120]: result = pd.merge(left, right, on="k", suffixes=("_l", "_r"))
DataFrame.join() has lsuffix and rsuffix arguments which behave
similarly.
In [121]: left = left.set_index("k")
In [122]: right = right.set_index("k")
In [123]: result = left.join(right, lsuffix="_l", rsuffix="_r")
Joining multiple DataFrames#
A list or tuple of DataFrames can also be passed to join()
to join them together on their indexes.
In [124]: right2 = pd.DataFrame({"v": [7, 8, 9]}, index=["K1", "K1", "K2"])
In [125]: result = left.join([right, right2])
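A minimal sketch (own frames with non-overlapping column names) of joining a list of DataFrames on their indexes in one call:
import pandas as pd

a = pd.DataFrame({"a": [1, 2]}, index=["K0", "K1"])
b = pd.DataFrame({"b": [3, 4]}, index=["K0", "K2"])
c = pd.DataFrame({"c": [5, 6]}, index=["K1", "K2"])

print(a.join([b, c]))  # left join of a against both b and c on the index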
Merging together values within Series or DataFrame columns#
Another fairly common situation is to have two like-indexed (or similarly
indexed) Series or DataFrame objects and wanting to “patch” values in
one object from values for matching indices in the other. Here is an example:
In [126]: df1 = pd.DataFrame(
.....: [[np.nan, 3.0, 5.0], [-4.6, np.nan, np.nan], [np.nan, 7.0, np.nan]]
.....: )
.....:
In [127]: df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5.0, 1.6, 4]], index=[1, 2])
For this, use the combine_first() method:
In [128]: result = df1.combine_first(df2)
Note that this method only takes values from the right DataFrame if they are
missing in the left DataFrame. A related method, update(),
alters non-NA values in place:
In [129]: df1.update(df2)
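A short sketch (assumed frames) contrasting the two methods just mentioned:
import numpy as np
import pandas as pd

a = pd.DataFrame({"x": [1.0, np.nan], "y": [np.nan, 4.0]})
b = pd.DataFrame({"x": [9.0, 2.0], "y": [3.0, 8.0]})

print(a.combine_first(b))  # only the NaN slots in a are filled: x = [1, 2], y = [3, 4]

a.update(b)                # in place: every non-NA value in b overwrites a
print(a)                   # x = [9, 2], y = [3, 8]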
Timeseries friendly merging#
Merging ordered data#
A merge_ordered() function allows combining time series and other
ordered data. In particular it has an optional fill_method keyword to
fill/interpolate missing data:
In [130]: left = pd.DataFrame(
.....: {"k": ["K0", "K1", "K1", "K2"], "lv": [1, 2, 3, 4], "s": ["a", "b", "c", "d"]}
.....: )
.....:
In [131]: right = pd.DataFrame({"k": ["K1", "K2", "K4"], "rv": [1, 2, 3]})
In [132]: pd.merge_ordered(left, right, fill_method="ffill", left_by="s")
Out[132]:
k lv s rv
0 K0 1.0 a NaN
1 K1 1.0 a 1.0
2 K2 1.0 a 2.0
3 K4 1.0 a 3.0
4 K1 2.0 b 1.0
5 K2 2.0 b 2.0
6 K4 2.0 b 3.0
7 K1 3.0 c 1.0
8 K2 3.0 c 2.0
9 K4 3.0 c 3.0
10 K1 NaN d 1.0
11 K2 4.0 d 2.0
12 K4 4.0 d 3.0
Merging asof#
A merge_asof() is similar to an ordered left-join except that we match on
nearest key rather than equal keys. For each row in the left DataFrame,
we select the last row in the right DataFrame whose on key is less
than the left’s key. Both DataFrames must be sorted by the key.
Optionally an asof merge can perform a group-wise merge. This matches the
by key equally, in addition to the nearest match on the on key.
For example; we might have trades and quotes and we want to asof
merge them.
In [133]: trades = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.038",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: ]
.....: ),
.....: "ticker": ["MSFT", "MSFT", "GOOG", "GOOG", "AAPL"],
.....: "price": [51.95, 51.95, 720.77, 720.92, 98.00],
.....: "quantity": [75, 155, 100, 100, 100],
.....: },
.....: columns=["time", "ticker", "price", "quantity"],
.....: )
.....:
In [134]: quotes = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.030",
.....: "20160525 13:30:00.041",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.049",
.....: "20160525 13:30:00.072",
.....: "20160525 13:30:00.075",
.....: ]
.....: ),
.....: "ticker": ["GOOG", "MSFT", "MSFT", "MSFT", "GOOG", "AAPL", "GOOG", "MSFT"],
.....: "bid": [720.50, 51.95, 51.97, 51.99, 720.50, 97.99, 720.50, 52.01],
.....: "ask": [720.93, 51.96, 51.98, 52.00, 720.93, 98.01, 720.88, 52.03],
.....: },
.....: columns=["time", "ticker", "bid", "ask"],
.....: )
.....:
In [135]: trades
Out[135]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
In [136]: quotes
Out[136]:
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
By default we are taking the asof of the quotes.
In [137]: pd.merge_asof(trades, quotes, on="time", by="ticker")
Out[137]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 2ms between the quote time and the trade time.
In [138]: pd.merge_asof(trades, quotes, on="time", by="ticker", tolerance=pd.Timedelta("2ms"))
Out[138]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 10ms between the quote time and the trade time and we
exclude exact matches on time. Note that though we exclude the exact matches
(of the quotes), prior quotes do propagate to that point in time.
In [139]: pd.merge_asof(
.....: trades,
.....: quotes,
.....: on="time",
.....: by="ticker",
.....: tolerance=pd.Timedelta("10ms"),
.....: allow_exact_matches=False,
.....: )
.....:
Out[139]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN
3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
Comparing objects#
The DataFrame.compare() and Series.compare() methods allow you to
compare two DataFrames or two Series, respectively, and summarize their differences.
This feature was added in version 1.1.0.
For example, you might want to compare two DataFrame and stack their differences
side by side.
In [140]: df = pd.DataFrame(
.....: {
.....: "col1": ["a", "a", "b", "b", "a"],
.....: "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
.....: "col3": [1.0, 2.0, 3.0, 4.0, 5.0],
.....: },
.....: columns=["col1", "col2", "col3"],
.....: )
.....:
In [141]: df
Out[141]:
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
In [142]: df2 = df.copy()
In [143]: df2.loc[0, "col1"] = "c"
In [144]: df2.loc[2, "col3"] = 4.0
In [145]: df2
Out[145]:
col1 col2 col3
0 c 1.0 1.0
1 a 2.0 2.0
2 b 3.0 4.0
3 b NaN 4.0
4 a 5.0 5.0
In [146]: df.compare(df2)
Out[146]:
col1 col3
self other self other
0 a c NaN NaN
2 NaN NaN 3.0 4.0
By default, if two corresponding values are equal, they will be shown as NaN.
Furthermore, if all values in an entire row / column are equal, that row / column will be
omitted from the result. The remaining differences will be aligned on columns.
If you wish, you may choose to stack the differences on rows.
In [147]: df.compare(df2, align_axis=0)
Out[147]:
col1 col3
0 self a NaN
other c NaN
2 self NaN 3.0
other NaN 4.0
If you wish to keep all original rows and columns, set keep_shape argument
to True.
In [148]: df.compare(df2, keep_shape=True)
Out[148]:
col1 col2 col3
self other self other self other
0 a c NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3.0 4.0
3 NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN
You may also keep all the original values even if they are equal.
In [149]: df.compare(df2, keep_shape=True, keep_equal=True)
Out[149]:
col1 col2 col3
self other self other self other
0 a c 1.0 1.0 1.0 1.0
1 a a 2.0 2.0 2.0 2.0
2 b b 3.0 3.0 3.0 4.0
3 b b NaN NaN 4.0 4.0
4 a a 5.0 5.0 5.0 5.0
| 889
| 1,301
|
Combining Multiple Dataframes with Unique Name
I have for example 2 data frames with user and their rating for each place such as:
Dataframe 1:
Name Golden Gate
Adam 1
Susan 4
Mike 5
John 4
Dataframe 2:
Name Botanical Garden
Jenny 1
Susan 4
Leslie 5
John 3
I want to combine them into a single data frame with the result:
Combined Dataframe:
Name Golden Gate Botanical Garden
Adam 1 NA
Susan 4 4
Mike 5 NA
John 4 3
Jenny NA 1
Leslie NA 5
How to do that?
Thank you.
|
64,651,554
|
Pandas Selecting By Checking Whether List Element Contains value
|
<p>I have a column in pandas dataframe that corresponds to lists in rows:</p>
<pre><code> tags contestId
20 [graphs, greedy, shortest paths, trees] 1437
27 [binary search, combinatorics] 1436
64 [constructive algorithms, data structures, gre... 1426
81 [binary search, math, number theory, two point... 1423
111 [binary search, brute force, constructive algo... 1419
... ... ...
6444 [math] 11
6449 [dp, implementation] 10
6464 [implementation] 7
6486 [hashing, implementation] 2
6488 [implementation, math] 1
</code></pre>
<p>How can I select all records that have either 'math' or 'trees' in tags list?</p>
| 64,651,668
| 2020-11-02T18:50:47.450000
| 1
| 0
| 0
| 23
|
python|pandas
|
<p>A quick and dirty solution:</p>
<pre><code>ans = df[df["tags"].apply(lambda el: "math" in el or "trees" in el)]
</code></pre>
<h1>Output</h1>
<pre><code>print(ans)
index tags contestId
0 20 [graphs, greedy, shortest paths, trees] 1437
3 81 [binary search, math, number theory, two point] 1423
5 6444 [math] 11
9 6488 [implementation, math] 1
</code></pre>
<h2>Test Data</h2>
<pre><code># in.txt
index tags contestId
20 [graphs, greedy, shortest paths, trees] 1437
27 [binary search, combinatorics] 1436
64 [constructive algorithms, data structures, gre] 1426
81 [binary search, math, number theory, two point] 1423
111 [binary search, brute force, constructive algo] 1419
6444 [math] 11
6449 [dp, implementation] 10
6464 [implementation] 7
6486 [hashing, implementation] 2
6488 [implementation, math] 1
</code></pre>
<p>Code to reconstruct <code>df</code> (please provide such code next time):</p>
<pre><code>df = pd.read_fwf("in.txt")
df["tags"] = df["tags"].apply(lambda s: s[1:-1].split(", "))
</code></pre>
<p>N.B. <code>.isin()</code> and <code>.str.contains()</code> do not help here because each cell holds a Python list rather than a scalar string, so membership has to be checked element-wise (e.g. with <code>apply</code>).</p>
| 2020-11-02T18:58:17.427000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.isin.html
|
A quick and dirty solution:
ans = df[df["tags"].apply(lambda el: "math" in el or "trees" in el)]
Output
print(ans)
index tags contestId
0 20 [graphs, greedy, shortest paths, trees] 1437
3 81 [binary search, math, number theory, two point] 1423
5 6444 [math] 11
9 6488 [implementation, math] 1
Test Data
# in.txt
index tags contestId
20 [graphs, greedy, shortest paths, trees] 1437
27 [binary search, combinatorics] 1436
64 [constructive algorithms, data structures, gre] 1426
81 [binary search, math, number theory, two point] 1423
111 [binary search, brute force, constructive algo] 1419
6444 [math] 11
6449 [dp, implementation] 10
6464 [implementation] 7
6486 [hashing, implementation] 2
6488 [implementation, math] 1
Code to reconstruct df (please provide such code next time):
df = pd.read_fwf("in.txt")
df["tags"] = df["tags"].apply(lambda s: s[1:-1].split(", "))
N.B. .isin() and .str.contains() do not help here because each cell holds a Python list rather than a scalar string, so membership has to be checked element-wise (e.g. with apply).
| 0
| 1,444
|
Pandas Selecting By Checking Whether List Element Contains value
I have a column in pandas dataframe that corresponds to lists in rows:
tags contestId
20 [graphs, greedy, shortest paths, trees] 1437
27 [binary search, combinatorics] 1436
64 [constructive algorithms, data structures, gre... 1426
81 [binary search, math, number theory, two point... 1423
111 [binary search, brute force, constructive algo... 1419
... ... ...
6444 [math] 11
6449 [dp, implementation] 10
6464 [implementation] 7
6486 [hashing, implementation] 2
6488 [implementation, math] 1
How can I select all records that have either 'math' or 'trees' in tags list?
|
63,940,635
|
Group by in pandas for criteria on one column and getting records for other columns as-is
|
<p>So my dataframe looks something like this -</p>
<pre><code>ORD_ID|TIME|VOL|VOL_DSCL|SMBL|EXP
ABC123|2020-05-18 09:01:35|30|10|CHH|2020-05-20
DEF123|2020-05-18 09:04:35|50|20|CHH|2020-06-19
ABC123|2020-05-18 09:06:45|20|10|CHH|2020-05-20
PQR333|2020-05-18 09:13:12|50|10|SSS|2020-06-19
DEF123|2020-05-18 09:24:35|20|20|CHH|2020-06-19
PQR333|2020-05-18 09:26:23|0|0|SSS|2020-06-19
</code></pre>
<p>I want to group by ORD_ID and grab the record which is last in TIME for that ORD_ID (without performing any aggregate function on the other columns), i.e. the desired output is:</p>
<pre><code>ORD_ID|TIME|VOL|VOL_DSCL|SMBL|EXP
ABC123|2020-05-18 09:06:45|20|10|CHH|2020-05-20
DEF123|2020-05-18 09:24:35|20|20|CHH|2020-06-19
PQR333|2020-05-18 09:26:23|0|0|SSS|2020-06-19
</code></pre>
<p>How can this be achieved? (so only the last record in TIME as per each unique ORD_ID )</p>
| 63,940,680
| 2020-09-17T14:48:12.857000
| 1
| null | 0
| 23
|
python|pandas
|
<p>You don't need <code>groupby</code>, <code>drop_duplicates</code> would do:</p>
<pre><code>df.sort_values('TIME').drop_duplicates('ORD_ID',keep='last')
</code></pre>
<p>Or if you really want groupby:</p>
<pre><code>df.groupby('ORD_ID').tail(1)
</code></pre>
<p>Output:</p>
<pre><code> ORD_ID TIME VOL VOL_DSCL SMBL EXP
2 ABC123 2020-05-18 09:06:45 20 10 CHH 2020-05-20
4 DEF123 2020-05-18 09:24:35 20 20 CHH 2020-06-19
5 PQR333 2020-05-18 09:26:23 0 0 SSS 2020-06-19
</code></pre>
| 2020-09-17T14:50:12.410000
| 0
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
You don't need groupby, drop_duplicates would do:
df.sort_values('TIME').drop_duplicates('ORD_ID',keep='last')
Or if you really want groupby:
df.groupby('ORD_ID').tail(1)
Output:
ORD_ID TIME VOL VOL_DSCL SMBL EXP
2 ABC123 2020-05-18 09:06:45 20 10 CHH 2020-05-20
4 DEF123 2020-05-18 09:24:35 20 20 CHH 2020-06-19
5 PQR333 2020-05-18 09:26:23 0 0 SSS 2020-06-19
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures is generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
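A hedged sketch of how the SQL query above might be spelled in pandas (column names and data are invented for illustration):
import pandas as pd

df = pd.DataFrame(
    {
        "Column1": ["a", "a", "b"],
        "Column2": ["x", "x", "y"],
        "Column3": [1.0, 3.0, 5.0],
        "Column4": [10, 20, 30],
    }
)

result = df.groupby(["Column1", "Column2"]).agg(
    mean_col3=("Column3", "mean"),   # mean(Column3) per group
    sum_col4=("Column4", "sum"),     # sum(Column4) per group
)
print(result)  # group (a, x): mean 2.0, sum 30; group (b, y): mean 5.0, sum 30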
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of dropna argument is True which means NA are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and whose values are the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values in each group. This is similar to the value_counts function, except that it counts only the number of unique values rather than how often each value occurs.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function - Description
mean() - Compute mean of groups
sum() - Compute sum of group values
size() - Compute group sizes
count() - Compute count of group
std() - Standard deviation of groups
var() - Compute variance of groups
sem() - Standard error of the mean of groups
describe() - Generates descriptive statistics
first() - Compute first of group values
last() - Compute last of group values
nth() - Take nth value, or a subset if n is a list
min() - Compute min of group values
max() - Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work;
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python identifiers (and so cannot be passed
directly as keyword arguments), construct a dictionary and unpack the keyword arguments:
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply it with functools.partial().
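A minimal sketch of that pattern, reusing the animals frame from above (the series_quantile helper and the 0.75 level are illustrative assumptions, not part of the original guide):

import functools

def series_quantile(x, q):
    # user-defined aggregation that needs an extra argument q
    return x.quantile(q)

# bind q ahead of time so that only (column, aggfunc) pairs reach agg
animals.groupby("kind").agg(
    height_q75=("height", functools.partial(series_quantile, q=0.75)),
)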
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5 and earlier.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return
like-indexed objects where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on what is passed to it and on what you are grouping. Thus the grouped
column(s) may be included in the output and may also be set as the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
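A hedged sketch of the call this sets up (output omitted): because f returns a DataFrame for every group, the result of apply is upcast from a Series to a DataFrame.

grouped.apply(f)  # DataFrame with 'original' and 'demeaned' columns, one row per input row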
apply on a Series can operate on a returned value from the applied function
that is itself a Series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index: the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
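A minimal sketch of such a signature, assuming Numba is installed and reusing the animals frame from above (the function body is an illustrative assumption):

def group_mean(values, index):
    # values: the group's data as a NumPy array; index: the group's index as a NumPy array
    return values.mean()

animals.groupby("kind")["height"].agg(group_mean, engine="numba")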
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only interesting over one column (here colname), it may be filtered
before applying the aggregation function.
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a “nuisance” column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
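For instance, selecting the numeric columns up front produces the same result without relying on the deprecated automatic dropping:

df.groupby("A")[["C", "D"]].std()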
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible grouper values (observed=False) or only those
values that are actually observed (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The dtype of the grouped result's index will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
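A small sketch of the default behaviour (the frame below is illustrative): rows whose key is missing simply do not form a group. Recent pandas versions also accept dropna=False in groupby if you do want to keep them as their own group.

df_na = pd.DataFrame({"key": ["a", np.nan, "a", "b"], "value": [1, 2, 3, 4]})

# the row with the missing key is excluded from the grouping entirely
df_na.groupby("key").sum()
#      value
# key
# a        4
# b        4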
Grouping with ordered factors#
Categorical variables represented as instances of the pandas Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Group by a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group; in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resampling to work on indices that are non-datetime-like, the following procedure can be used.
In the following examples, df.index // 5 returns an integer array which is used to determine which rows are selected for each group in the groupby operation.
Note
The below example shows how we can downsample by consolidating samples into fewer samples. Here, by using df.index // 5, we are aggregating the samples into bins. By applying the std() function, we aggregate the information contained in many samples into a small subset of values, namely their standard deviation, thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 304
| 737
|
Group by in pandas for criteria on one column and getting records for other columns as-is
So my dataframe looks something like this -
ORD_ID|TIME|VOL|VOL_DSCL|SMBL|EXP
ABC123|2020-05-18 09:01:35|30|10|CHH|2020-05-20
DEF123|2020-05-18 09:04:35|50|20|CHH|2020-06-19
ABC123|2020-05-18 09:06:45|20|10|CHH|2020-05-20
PQR333|2020-05-18 09:13:12|50|10|SSS|2020-06-19
DEF123|2020-05-18 09:24:35|20|20|CHH|2020-06-19
PQR333|2020-05-18 09:26:23|0|0|SSS|2020-06-19
I want to group by ORD_ID and grab the record which is last in TIME for that ORD_ID (without performing any aggregate function on the other columns), i.e. the desired output is -
ORD_ID|TIME|VOL|VOL_DSCL|SMBL|EXP
ABC123|2020-05-18 09:06:45|20|10|CHH|2020-05-20
DEF123|2020-05-18 09:24:35|20|20|CHH|2020-06-19
PQR333|2020-05-18 09:26:23|0|0|SSS|2020-06-19
How can this be achieved? (so only the last record in TIME for each unique ORD_ID)
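A minimal sketch of one way to do this, assuming df is the frame above and TIME is parsed as a datetime column: sort by TIME and keep the last row of each ORD_ID, leaving every other column untouched.

latest = df.sort_values("TIME").groupby("ORD_ID").tail(1)
# or, equivalently, pick the row holding each group's maximum TIME:
# latest = df.loc[df.groupby("ORD_ID")["TIME"].idxmax()]

Both forms return the full original rows rather than aggregating the other columns.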
|
70,166,251
|
Filter out rows depending on action that could be performed in different HTML pages
|
<p>We record user interaction on a website with the naming convention <strong>action_cardType</strong>. We have 8 cardType values. For example:</p>
<ul>
<li><code>view_detail_<xxx></code> (e.g. <em>view_detail_role</em>, <em>view_detail_mentor</em>...)</li>
<li><code>explore_more_<xxx></code> (e.g. explore_more_learning)</li>
</ul>
<p>piece of sample data:</p>
<pre><code>module,page,step1,step2,step3
goal,goalLanding,view_page,view_detail_assignment,ExitPage
goal,goalLanding,view_page,view_detail_role,explore_more
goal,goalLanding,view_page,view_detail_mentor,ExitPage
goal,goalLanding,view_page,view_detail_mentoringProgram,view_card_detail
goal,goalLanding,view_page,explore_more_assignment,ExitPage
goal,goalLanding,view_page,explore_more_learning,view_manage_opportunities
goal,goalLanding,view_page,explore_more_connectWithPeople,bookmark
goal,goalLanding,view_page,back_to_opp,view_snack
goal,goalLanding,view_page,join_as_mentee,view_snack
goal,goalLanding,view_page,ExitPage
</code></pre>
<p><strong>Goal</strong>
I want to filter out the rows for which the step2 action couldn't be performed on the goalLanding page.</p>
<p><strong>What I have tried</strong>:
I pre-defined all the actions that exist on the goalLanding page in a list of regex expressions.</p>
<pre><code>List = [r'explore_now(\S+)', r'view_detail(\S+)', 'ExitPage']
</code></pre>
<p>then I tried to use this script to filter out invalid rows:</p>
<pre><code>df = df.loc[df['step2'].isin(List)]
</code></pre>
<p>The expected result after cleaning should be:</p>
<pre><code>module,page,step1,step2,step3
goal,goalLanding,view_page,view_detail_assignment,ExitPage
goal,goalLanding,view_page,view_detail_role,explore_more
goal,goalLanding,view_page,view_detail_mentor,ExitPage
goal,goalLanding,view_page,view_detail_mentoringProgram,view_card_detail
goal,goalLanding,view_page,explore_more_assignment,ExitPage
goal,goalLanding,view_page,explore_more_learning,view_manage_opportunities
goal,goalLanding,view_page,explore_more_connectWithPeople,bookmark
goal,goalLanding,view_page,ExitPage
</code></pre>
<p>But the above approach doesn't work.</p>
<p>Can anyone help? As the data to be cleaned is huge, is there any convenient and straightforward way to achieve this?</p>
<p>Thanks,
Cherie</p>
| 70,170,201
| 2021-11-30T08:21:59.867000
| 1
| null | 1
| 24
|
pandas
|
<p>You can't use isin to match several regexes; what you <strong>CAN</strong> do, however, is compute a mask that combines several regex matches, and use it to filter out your rows.</p>
<p>Assuming you have df as follows</p>
<pre><code>>>> df
module page step1 step2 step3
0 goal goalLanding view_page view_detail_assignment ExitPage
1 goal goalLanding view_page view_detail_role explore_more
2 goal goalLanding view_page view_detail_mentor ExitPage
3 goal goalLanding view_page view_detail_mentoringProgram view_card_detail
4 goal goalLanding view_page explore_more_assignment ExitPage
5 goal goalLanding view_page explore_more_learning view_manage_opportunities
6 goal goalLanding view_page explore_more_connectWithPeople bookmark
7 goal goalLanding view_page back_to_opp view_snack
8 goal goalLanding view_page join_as_mentee view_snack
9 goal goalLanding view_page ExitPage NaN
</code></pre>
<p>You can easily get which rows match <em>view_detail</em> (or anything really) with the following</p>
<pre><code>>>> mask1 = df.step2.str.match(r"view_detail(\S+)")
>>> mask2 = df.step2.str.match(r"explore_more_(\S+)")
>>> mask3 = df.step2.str.match(r"ExitPage")
>>> mask1
0 True
1 True
2 True
3 True
4 False
5 False
6 False
7 False
8 False
9 False
Name: step2, dtype: bool
</code></pre>
<p>Alternatively, if you're just concerned with the start of the string, you can use</p>
<pre><code>mask1 = df.step2.str.startswith("view_detail")
...
</code></pre>
<p>Then, just combine these masks with a logical OR and Voilà!</p>
<pre><code>>>> df = df[mask1|mask2|mask3]
>>> df
module page step1 step2 step3
0 goal goalLanding view_page view_detail_assignment ExitPage
1 goal goalLanding view_page view_detail_role explore_more
2 goal goalLanding view_page view_detail_mentor ExitPage
3 goal goalLanding view_page view_detail_mentoringProgram view_card_detail
4 goal goalLanding view_page explore_more_assignment ExitPage
5 goal goalLanding view_page explore_more_learning view_manage_opportunities
6 goal goalLanding view_page explore_more_connectWithPeople bookmark
9 goal goalLanding view_page ExitPage NaN
</code></pre>
<p><strong>Note</strong>
It is a bit unwieldy to define these masks manually. You can use a list comprehension and numpy's <code>reduce</code> function to automate the process</p>
<pre><code>import numpy as np
regexes = [r"view_detail(\S+)", r"explore_more_(\S+)", r"ExitPage"]
mask = np.logical_or.reduce([df.step2.str.match(regex) for regex in regexes])
df = df[mask]
</code></pre>
| 2021-11-30T13:33:52.977000
| 0
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
You can't use isin to match several regexes; what you CAN do, however, is compute a mask that combines several regex matches, and use it to filter out your rows.
Assuming you have df as follows
>>> df
module page step1 step2 step3
0 goal goalLanding view_page view_detail_assignment ExitPage
1 goal goalLanding view_page view_detail_role explore_more
2 goal goalLanding view_page view_detail_mentor ExitPage
3 goal goalLanding view_page view_detail_mentoringProgram view_card_detail
4 goal goalLanding view_page explore_more_assignment ExitPage
5 goal goalLanding view_page explore_more_learning view_manage_opportunities
6 goal goalLanding view_page explore_more_connectWithPeople bookmark
7 goal goalLanding view_page back_to_opp view_snack
8 goal goalLanding view_page join_as_mentee view_snack
9 goal goalLanding view_page ExitPage NaN
You can easily get which rows match view_detail (or anything really) with the following
>>> mask1 = df.step2.str.match(r"view_detail(\S+)")
>>> mask2 = df.step2.str.match(r"explore_more_(\S+)")
>>> mask3 = df.step2.str.match(r"ExitPage")
>>> mask1
0 True
1 True
2 True
3 True
4 False
5 False
6 False
7 False
8 False
9 False
Name: step2, dtype: bool
Alternatively, if you're just concerned with the start of the string, you can use
mask1 = df.step2.str.startswith("view_detail")
...
Then, just combine these masks with a logical OR and Voilà!
>>> df = df[mask1|mask2|mask3]
>>> df
module page step1 step2 step3
0 goal goalLanding view_page view_detail_assignment ExitPage
1 goal goalLanding view_page view_detail_role explore_more
2 goal goalLanding view_page view_detail_mentor ExitPage
3 goal goalLanding view_page view_detail_mentoringProgram view_card_detail
4 goal goalLanding view_page explore_more_assignment ExitPage
5 goal goalLanding view_page explore_more_learning view_manage_opportunities
6 goal goalLanding view_page explore_more_connectWithPeople bookmark
9 goal goalLanding view_page ExitPage NaN
Note
It is a bit unwieldy to define these masks manually. You can use a list comprehension and numpy's reduce function to automate the process
import numpy as np
regexes = [r"view_detail(\S+)", r"explore_more_(\S+)", r"ExitPage"]
mask = np.logical_or.reduce([df.step2.str.match(regex) for regex in regexes])
df = df[mask]
| 0
| 2,976
|
Filter out rows depending on action that could be performed in different HTML pages
We record user interaction on a website with the naming convention action_cardType. We have 8 cardType values. For example:
view_detail_<xxx> (e.g. view_detail_role, view_detail_mentor...)
explore_more_<xxx> (e.g. explore_more_learning)
piece of sample data:
module,page,step1,step2,step3
goal,goalLanding,view_page,view_detail_assignment,ExitPage
goal,goalLanding,view_page,view_detail_role,explore_more
goal,goalLanding,view_page,view_detail_mentor,ExitPage
goal,goalLanding,view_page,view_detail_mentoringProgram,view_card_detail
goal,goalLanding,view_page,explore_more_assignment,ExitPage
goal,goalLanding,view_page,explore_more_learning,view_manage_opportunities
goal,goalLanding,view_page,explore_more_connectWithPeople,bookmark
goal,goalLanding,view_page,back_to_opp,view_snack
goal,goalLanding,view_page,join_as_mentee,view_snack
goal,goalLanding,view_page,ExitPage
Goal
I want to filter out the rows for which the step2 action couldn't be performed on the goalLanding page.
What I have tried:
I pre-defined all the actions that exist on the goalLanding page in a list of regex expressions.
List = [r'explore_now(\S+)', r'view_detail(\S+)', 'ExitPage']
then I tried to use this script to filter out invalid rows:
df = df.loc[df['step2'].isin(List)]
The expected result after cleaning should be:
module,page,step1,step2,step3
goal,goalLanding,view_page,view_detail_assignment,ExitPage
goal,goalLanding,view_page,view_detail_role,explore_more
goal,goalLanding,view_page,view_detail_mentor,ExitPage
goal,goalLanding,view_page,view_detail_mentoringProgram,view_card_detail
goal,goalLanding,view_page,explore_more_assignment,ExitPage
goal,goalLanding,view_page,explore_more_learning,view_manage_opportunities
goal,goalLanding,view_page,explore_more_connectWithPeople,bookmark
goal,goalLanding,view_page,ExitPage
But the above approach doesn't work.
Can anyone help? As the data to be cleaned is huge, is there any convenient and straightforward way to achieve this?
Thanks,
Cherie
|
66,356,397
|
Is there a pandas function that can read multiple excel sheets but with only sheet1 having a header
|
<p>Here is my code to read multiple sheets.</p>
<pre><code>df = pd.read_excel('excelfile.xls',sheet_name=['Sheet1','Sheet2','Sheet3'])
</code></pre>
<p>But only sheet1 has a header. Sheet2 and sheet3 have no header.</p>
| 66,356,791
| 2021-02-24T18:02:12.613000
| 2
| null | 0
| 26
|
python|pandas
|
<p>You can read the first sheet with a header and the remaining sheets without one. Apply the first sheet's column header to the remaining sheets and concatenate the lot. Since dict values enumerate in insertion order, the sheet read order should be the same. Alternatively, you could sort by sheet name or other criteria.</p>
<pre><code>import pandas as pd
sheets = pd.read_excel('excelfile.xls',sheet_name=['Sheet1'])
columns = sheets["Sheet1"].columns
sheets.update(pd.read_excel('excelfile.xls', header=None,
sheet_name=['Sheet2','Sheet3']))
for sheet in sheets.values():
sheet.columns = columns
df = pd.concat(sheets.values())
print(df)
</code></pre>
| 2021-02-24T18:29:39.927000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.read_excel.html
|
pandas.read_excel#
pandas.read_excel#
pandas.read_excel(io, sheet_name=0, *, header=0, names=None, index_col=None, usecols=None, squeeze=None, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skiprows=None, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, parse_dates=False, date_parser=None, thousands=None, decimal='.', comment=None, skipfooter=0, convert_float=None, mangle_dupe_cols=True, storage_options=None)[source]#
Read an Excel file into a pandas DataFrame.
You can read the first sheet with a header and the remaining sheets without one. Apply the first sheet's column header to the remaining sheets and concatenate the lot. Since dict values enumerate in insertion order, the sheet read order should be the same. Alternatively, you could sort by sheet name or other criteria.
import pandas as pd
sheets = pd.read_excel('excelfile.xls',sheet_name=['Sheet1'])
columns = sheets["Sheet1"].columns
sheets.update(pd.read_excel('excelfile.xls', header=None,
sheet_name=['Sheet2','Sheet3']))
for sheet in sheets.values():
sheet.columns = columns
df = pd.concat(sheets.values())
print(df)
Supports xls, xlsx, xlsm, xlsb, odf, ods and odt file extensions
read from a local filesystem or URL. Supports an option to read
a single sheet or a list of sheets.
Parameters
iostr, bytes, ExcelFile, xlrd.Book, path object, or file-like objectAny valid string path is acceptable. The string could be a URL. Valid
URL schemes include http, ftp, s3, and file. For file URLs, a host is
expected. A local file could be: file://localhost/path/to/table.xlsx.
If you want to pass in a path object, pandas accepts any os.PathLike.
By file-like object, we refer to objects with a read() method,
such as a file handle (e.g. via builtin open function)
or StringIO.
sheet_namestr, int, list, or None, default 0Strings are used for sheet names. Integers are used in zero-indexed
sheet positions (chart sheets do not count as a sheet position).
Lists of strings/integers are used to request multiple sheets.
Specify None to get all worksheets.
Available cases:
Defaults to 0: 1st sheet as a DataFrame
1: 2nd sheet as a DataFrame
"Sheet1": Load sheet with name “Sheet1”
[0, 1, "Sheet5"]: Load first, second and sheet named “Sheet5”
as a dict of DataFrame
None: All worksheets.
headerint, list of int, default 0Row (0-indexed) to use for the column labels of the parsed
DataFrame. If a list of integers is passed those row positions will
be combined into a MultiIndex. Use None if there is no header.
namesarray-like, default NoneList of column names to use. If file contains no header row,
then you should explicitly pass header=None.
index_colint, list of int, default NoneColumn (0-indexed) to use as the row labels of the DataFrame.
Pass None if there is no such column. If a list is passed,
those columns will be combined into a MultiIndex. If a
subset of data is selected with usecols, index_col
is based on the subset.
Missing values will be forward filled to allow roundtripping with
to_excel for merged_cells=True. To avoid forward filling the
missing values use set_index after reading the data instead of
index_col.
usecolsstr, list-like, or callable, default None
If None, then parse all columns.
If str, then indicates comma separated list of Excel column letters
and column ranges (e.g. “A:E” or “A,C,E:F”). Ranges are inclusive of
both sides.
If list of int, then indicates list of column numbers to be parsed
(0-indexed).
If list of string, then indicates list of column names to be parsed.
If callable, then evaluate each column name against it and parse the
column if the callable returns True.
Returns a subset of the columns according to behavior above.
squeezebool, default FalseIf the parsed data only contains one column then return a Series.
Deprecated since version 1.4.0: Append .squeeze("columns") to the call to read_excel to squeeze
the data.
dtypeType name or dict of column -> type, default NoneData type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32}
Use object to preserve data as stored in Excel and not interpret dtype.
If converters are specified, they will be applied INSTEAD
of dtype conversion.
enginestr, default NoneIf io is not a buffer or path, this must be set to identify io.
Supported engines: “xlrd”, “openpyxl”, “odf”, “pyxlsb”.
Engine compatibility :
“xlrd” supports old-style Excel files (.xls).
“openpyxl” supports newer Excel file formats.
“odf” supports OpenDocument file formats (.odf, .ods, .odt).
“pyxlsb” supports Binary Excel files.
Changed in version 1.2.0: The engine xlrd
now only supports old-style .xls files.
When engine=None, the following logic will be
used to determine the engine:
If path_or_buffer is an OpenDocument format (.odf, .ods, .odt),
then odf will be used.
Otherwise if path_or_buffer is an xls format,
xlrd will be used.
Otherwise if path_or_buffer is in xlsb format,
pyxlsb will be used.
New in version 1.3.0.
Otherwise openpyxl will be used.
Changed in version 1.3.0.
convertersdict, default NoneDict of functions for converting values in certain columns. Keys can
either be integers or column labels, values are functions that take one
input argument, the Excel cell content, and return the transformed
content.
true_valueslist, default NoneValues to consider as True.
false_valueslist, default NoneValues to consider as False.
skiprowslist-like, int, or callable, optionalLine numbers to skip (0-indexed) or number of lines to skip (int) at the
start of the file. If callable, the callable function will be evaluated
against the row indices, returning True if the row should be skipped and
False otherwise. An example of a valid callable argument would be lambda
x: x in [0, 2].
nrowsint, default NoneNumber of rows to parse.
na_valuesscalar, str, list-like, or dict, default NoneAdditional strings to recognize as NA/NaN. If dict passed, specific
per-column NA values. By default the following values are interpreted
as NaN: ‘’, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’,
‘1.#IND’, ‘1.#QNAN’, ‘<NA>’, ‘N/A’, ‘NA’, ‘NULL’, ‘NaN’, ‘n/a’,
‘nan’, ‘null’.
keep_default_nabool, default TrueWhether or not to include the default NaN values when parsing the data.
Depending on whether na_values is passed in, the behavior is as follows:
If keep_default_na is True, and na_values are specified, na_values
is appended to the default NaN values used for parsing.
If keep_default_na is True, and na_values are not specified, only
the default NaN values are used for parsing.
If keep_default_na is False, and na_values are specified, only
the NaN values specified na_values are used for parsing.
If keep_default_na is False, and na_values are not specified, no
strings will be parsed as NaN.
Note that if na_filter is passed in as False, the keep_default_na and
na_values parameters will be ignored.
na_filterbool, default TrueDetect missing value markers (empty strings and the value of na_values). In
data without any NAs, passing na_filter=False can improve the performance
of reading a large file.
verbosebool, default FalseIndicate number of NA values placed in non-numeric columns.
parse_datesbool, list-like, or dict, default FalseThe behavior is as follows:
bool. If True -> try parsing the index.
list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3
each as a separate date column.
list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as
a single date column.
dict, e.g. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call
result ‘foo’
If a column or index contains an unparsable date, the entire column or
index will be returned unaltered as an object data type. If you don't want to
parse some cells as date just change their type in Excel to “Text”.
For non-standard datetime parsing, use pd.to_datetime after pd.read_excel.
Note: A fast-path exists for iso8601-formatted dates.
date_parserfunction, optionalFunction to use for converting a sequence of string columns to an array of
datetime instances. The default uses dateutil.parser.parser to do the
conversion. Pandas will try to call date_parser in three different ways,
advancing to the next if an exception occurs: 1) Pass one or more arrays
(as defined by parse_dates) as arguments; 2) concatenate (row-wise) the
string values from the columns defined by parse_dates into a single array
and pass that; and 3) call date_parser once for each row using one or
more strings (corresponding to the columns defined by parse_dates) as
arguments.
thousandsstr, default NoneThousands separator for parsing string columns to numeric. Note that
this parameter is only necessary for columns stored as TEXT in Excel,
any numeric columns will automatically be parsed, regardless of display
format.
decimalstr, default ‘.’Character to recognize as decimal point for parsing string columns to numeric.
Note that this parameter is only necessary for columns stored as TEXT in Excel,
any numeric columns will automatically be parsed, regardless of display
format.(e.g. use ‘,’ for European data).
New in version 1.4.0.
commentstr, default NoneComments out remainder of line. Pass a character or characters to this
argument to indicate comments in the input file. Any data between the
comment string and the end of the current line is ignored.
skipfooterint, default 0Rows at the end to skip (0-indexed).
convert_floatbool, default TrueConvert integral floats to int (i.e., 1.0 –> 1). If False, all numeric
data will be read in as floats: Excel stores all numbers as floats
internally.
Deprecated since version 1.3.0: convert_float will be removed in a future version
mangle_dupe_colsbool, default TrueDuplicate columns will be specified as ‘X’, ‘X.1’, …’X.N’, rather than
‘X’…’X’. Passing in False will cause data to be overwritten if there
are duplicate names in the columns.
Deprecated since version 1.5.0: Not implemented, and a new argument to specify the pattern for the
names of duplicated columns will be added instead
storage_optionsdict, optionalExtra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
New in version 1.2.0.
Returns
DataFrame or dict of DataFramesDataFrame from the passed in Excel file. See notes in sheet_name
argument for more information on when a dict of DataFrames is returned.
See also
DataFrame.to_excelWrite DataFrame to an Excel file.
DataFrame.to_csvWrite DataFrame to a comma-separated values (csv) file.
read_csvRead a comma-separated values (csv) file into DataFrame.
read_fwfRead a table of fixed-width formatted lines into DataFrame.
Examples
The file can be read using the file name as string or an open file object:
>>> pd.read_excel('tmp.xlsx', index_col=0)
Name Value
0 string1 1
1 string2 2
2 #Comment 3
>>> pd.read_excel(open('tmp.xlsx', 'rb'),
... sheet_name='Sheet3')
Unnamed: 0 Name Value
0 0 string1 1
1 1 string2 2
2 2 #Comment 3
Index and header can be specified via the index_col and header arguments
>>> pd.read_excel('tmp.xlsx', index_col=None, header=None)
0 1 2
0 NaN Name Value
1 0.0 string1 1
2 1.0 string2 2
3 2.0 #Comment 3
Column types are inferred but can be explicitly specified
>>> pd.read_excel('tmp.xlsx', index_col=0,
... dtype={'Name': str, 'Value': float})
Name Value
0 string1 1.0
1 string2 2.0
2 #Comment 3.0
True, False, and NA values, and thousands separators have defaults,
but can be explicitly specified, too. Supply the values you would like
as strings or lists of strings!
>>> pd.read_excel('tmp.xlsx', index_col=0,
... na_values=['string1', 'string2'])
Name Value
0 NaN 1
1 NaN 2
2 #Comment 3
Comment lines in the excel input file can be skipped using the comment kwarg
>>> pd.read_excel('tmp.xlsx', index_col=0, comment='#')
Name Value
0 string1 1.0
1 string2 2.0
2 None NaN
| 534
| 1,157
|
Is there a pandas function that can read multiple excel sheets but with only sheet1 having a header
Here is my code to read multiple sheets.
df = pd.read_excel('excelfile.xls',sheet_name=['Sheet1','Sheet2','Sheet3'])
But only sheet1 has a header. Sheet2 and sheet3 have no header.
|
64,971,775
|
How to compare columns with equal values?
|
<p>I have a dataframe which looks as follows:</p>
<pre><code> colA colB
0 2 1
1 4 2
2 3 7
3 8 5
4 7 2
</code></pre>
<p>I have two datasets: one with the customer code and other information, and the other with addresses plus the related customer code.</p>
<p>I merged the two datasets and now I want to return the rows where the values in the columns are the same, but I'm not able to do it.</p>
<p>Can someone help me?</p>
<p>Thanks</p>
| 64,971,811
| 2020-11-23T15:55:05.497000
| 1
| null | 0
| 27
|
pandas
|
<p>You can try:</p>
<pre><code>dfs=df.loc[df['colA']==df['colB']]
</code></pre>
| 2020-11-23T15:57:27.420000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.equals.html
|
pandas.DataFrame.equals#
pandas.DataFrame.equals#
DataFrame.equals(other)[source]#
Test whether two objects contain the same elements.
This function allows two Series or DataFrames to be compared against
You can try:
dfs=df.loc[df['colA']==df['colB']]
each other to see if they have the same shape and elements. NaNs in
the same location are considered equal.
The row/column index do not need to have the same type, as long
as the values are considered equal. Corresponding columns must be of
the same dtype.
Parameters
otherSeries or DataFrameThe other Series or DataFrame to be compared with the first.
Returns
boolTrue if all elements are the same in both objects, False
otherwise.
See also
Series.eqCompare two Series objects of the same length and return a Series where each element is True if the element in each Series is equal, False otherwise.
DataFrame.eqCompare two DataFrame objects of the same shape and return a DataFrame where each element is True if the respective element in each DataFrame is equal, False otherwise.
testing.assert_series_equalRaises an AssertionError if left and right are not equal. Provides an easy interface to ignore inequality in dtypes, indexes and precision among others.
testing.assert_frame_equalLike assert_series_equal, but targets DataFrames.
numpy.array_equalReturn True if two arrays have the same shape and elements, False otherwise.
Examples
>>> df = pd.DataFrame({1: [10], 2: [20]})
>>> df
1 2
0 10 20
DataFrames df and exactly_equal have the same types and values for
their elements and column labels, which will return True.
>>> exactly_equal = pd.DataFrame({1: [10], 2: [20]})
>>> exactly_equal
1 2
0 10 20
>>> df.equals(exactly_equal)
True
DataFrames df and different_column_type have the same element
types and values, but have different types for the column labels,
which will still return True.
>>> different_column_type = pd.DataFrame({1.0: [10], 2.0: [20]})
>>> different_column_type
1.0 2.0
0 10 20
>>> df.equals(different_column_type)
True
DataFrames df and different_data_type have different types for the
same values for their elements, and will return False even though
their column labels are the same values and types.
>>> different_data_type = pd.DataFrame({1: [10.0], 2: [20.0]})
>>> different_data_type
1 2
0 10.0 20.0
>>> df.equals(different_data_type)
False
| 208
| 257
|
How to compare columns with equal values?
I have a dataframe which looks as follows:
colA colB
0 2 1
1 4 2
2 3 7
3 8 5
4 7 2
I have two datasets: one with the customer code and other information, and the other with addresses plus the related customer code.
I merged the two datasets and now I want to return the rows where the values in the columns are the same, but I'm not able to do it.
Can someone help me?
Thanks
|
69,537,816
|
How to delete rows based on two fields?
|
<p>I have a df with lots of ids and dates. I need to delete from this df the rows with id = 4 where date != '2021-01-01'.
This expression, I assume, won't work:</p>
<pre><code>df_2 = df_2[df_2['id'] != 4 & df_2['date'] != '2021-01-01']
</code></pre>
<p>How else can I write the condition?</p>
<p>E.g.</p>
<pre><code>4 2020-01-01
5 2021-05-01
4 2021-01-01
4 2021-09-01
</code></pre>
<p>Should become</p>
<pre><code>5 2021-05-01
4 2021-01-01
</code></pre>
| 69,537,978
| 2021-10-12T09:06:32.263000
| 1
| null | 1
| 27
|
pandas
|
<p>Add parentheses and chain the masks with <code>|</code> for bitwise <code>OR</code>, swapping <code>!=</code> for <code>==</code> in the date condition:</p>
<pre><code>df_2 = df_2[(df_2['id'] != 4) | (df_2['date'] == '2021-01-01')]
print (df_2)
id date
1 5 2021-05-01
2 4 2021-01-01
</code></pre>
<p>Alternatively, your original condition can be kept and the combined mask inverted with <code>~</code>:</p>
<pre><code>df_2 = df_2[ ~((df_2['id'] == 4) & (df_2['date'] != '2021-01-01'))]
</code></pre>
| 2021-10-12T09:19:16.457000
| 0
|
https://pandas.pydata.org/docs/dev/user_guide/merging.html
|
Merge, join, concatenate and compare#
Merge, join, concatenate and compare#
pandas provides various facilities for easily combining together Series or
DataFrame with various kinds of set logic for the indexes
and relational algebra functionality in the case of join / merge-type
Add parentheses and chain the masks with | for bitwise OR, swapping != for == in the date condition:
df_2 = df_2[(df_2['id'] != 4) | (df_2['date'] == '2021-01-01')]
print (df_2)
id date
1 5 2021-05-01
2 4 2021-01-01
Alternatively, your original condition can be kept and the combined mask inverted with ~:
df_2 = df_2[ ~((df_2['id'] == 4) & (df_2['date'] != '2021-01-01'))]
operations.
In addition, pandas also provides utilities to compare two Series or DataFrame
and summarize their differences.
Concatenating objects#
The concat() function (in the main pandas namespace) does all of
the heavy lifting of performing concatenation operations along an axis while
performing optional set logic (union or intersection) of the indexes (if any) on
the other axes. Note that I say “if any” because there is only a single possible
axis of concatenation for Series.
Before diving into all of the details of concat and what it can do, here is
a simple example:
In [1]: df1 = pd.DataFrame(
...: {
...: "A": ["A0", "A1", "A2", "A3"],
...: "B": ["B0", "B1", "B2", "B3"],
...: "C": ["C0", "C1", "C2", "C3"],
...: "D": ["D0", "D1", "D2", "D3"],
...: },
...: index=[0, 1, 2, 3],
...: )
...:
In [2]: df2 = pd.DataFrame(
...: {
...: "A": ["A4", "A5", "A6", "A7"],
...: "B": ["B4", "B5", "B6", "B7"],
...: "C": ["C4", "C5", "C6", "C7"],
...: "D": ["D4", "D5", "D6", "D7"],
...: },
...: index=[4, 5, 6, 7],
...: )
...:
In [3]: df3 = pd.DataFrame(
...: {
...: "A": ["A8", "A9", "A10", "A11"],
...: "B": ["B8", "B9", "B10", "B11"],
...: "C": ["C8", "C9", "C10", "C11"],
...: "D": ["D8", "D9", "D10", "D11"],
...: },
...: index=[8, 9, 10, 11],
...: )
...:
In [4]: frames = [df1, df2, df3]
In [5]: result = pd.concat(frames)
Like its sibling function on ndarrays, numpy.concatenate, pandas.concat
takes a list or dict of homogeneously-typed objects and concatenates them with
some configurable handling of “what to do with the other axes”:
pd.concat(
objs,
axis=0,
join="outer",
ignore_index=False,
keys=None,
levels=None,
names=None,
verify_integrity=False,
copy=True,
)
objs : a sequence or mapping of Series or DataFrame objects. If a
dict is passed, the sorted keys will be used as the keys argument, unless
it is passed, in which case the values will be selected (see below). Any None
objects will be dropped silently unless they are all None in which case a
ValueError will be raised.
axis : {0, 1, …}, default 0. The axis to concatenate along.
join : {‘inner’, ‘outer’}, default ‘outer’. How to handle indexes on
other axis(es). Outer for union and inner for intersection.
ignore_index : boolean, default False. If True, do not use the index
values on the concatenation axis. The resulting axis will be labeled 0, …,
n - 1. This is useful if you are concatenating objects where the
concatenation axis does not have meaningful indexing information. Note
the index values on the other axes are still respected in the join.
keys : sequence, default None. Construct hierarchical index using the
passed keys as the outermost level. If multiple levels passed, should
contain tuples.
levels : list of sequences, default None. Specific levels (unique values)
to use for constructing a MultiIndex. Otherwise they will be inferred from the
keys.
names : list, default None. Names for the levels in the resulting
hierarchical index.
verify_integrity : boolean, default False. Check whether the new
concatenated axis contains duplicates. This can be very expensive relative
to the actual data concatenation.
copy : boolean, default True. If False, do not copy data unnecessarily.
Without a little bit of context many of these arguments don’t make much sense.
Let’s revisit the above example. Suppose we wanted to associate specific keys
with each of the pieces of the chopped up DataFrame. We can do this using the
keys argument:
In [6]: result = pd.concat(frames, keys=["x", "y", "z"])
As you can see (if you’ve read the rest of the documentation), the resulting
object’s index has a hierarchical index. This
means that we can now select out each chunk by key:
In [7]: result.loc["y"]
Out[7]:
A B C D
4 A4 B4 C4 D4
5 A5 B5 C5 D5
6 A6 B6 C6 D6
7 A7 B7 C7 D7
It’s not a stretch to see how this can be very useful. More detail on this
functionality below.
Note
It is worth noting that concat() makes a full copy of the data, and that constantly
reusing this function can create a significant performance hit. If you need
to use the operation over several datasets, use a list comprehension.
frames = [ process_your_file(f) for f in files ]
result = pd.concat(frames)
Note
When concatenating DataFrames with named axes, pandas will attempt to preserve
these index/column names whenever possible. In the case where all inputs share a
common name, this name will be assigned to the result. When the input names do
not all agree, the result will be unnamed. The same is true for MultiIndex,
but the logic is applied separately on a level-by-level basis.
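A minimal sketch of this name-preservation rule (small illustrative frames, not from the original examples):
import pandas as pd

a = pd.DataFrame({"v": [1, 2]}, index=pd.Index(["x", "y"], name="id"))
b = pd.DataFrame({"v": [3, 4]}, index=pd.Index(["p", "q"], name="id"))
c = pd.DataFrame({"v": [5, 6]}, index=pd.Index(["r", "s"], name="other"))

print(pd.concat([a, b]).index.name)   # 'id'  -> both inputs share the name, so it is assigned to the result
print(pd.concat([a, c]).index.name)   # None  -> the names disagree, so the result index is unnamed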
Set logic on the other axes#
When gluing together multiple DataFrames, you have a choice of how to handle
the other axes (other than the one being concatenated). This can be done in
the following two ways:
Take the union of them all, join='outer'. This is the default
option as it results in zero information loss.
Take the intersection, join='inner'.
Here is an example of each of these methods. First, the default join='outer'
behavior:
In [8]: df4 = pd.DataFrame(
...: {
...: "B": ["B2", "B3", "B6", "B7"],
...: "D": ["D2", "D3", "D6", "D7"],
...: "F": ["F2", "F3", "F6", "F7"],
...: },
...: index=[2, 3, 6, 7],
...: )
...:
In [9]: result = pd.concat([df1, df4], axis=1)
Here is the same thing with join='inner':
In [10]: result = pd.concat([df1, df4], axis=1, join="inner")
Lastly, suppose we just wanted to reuse the exact index from the original
DataFrame:
In [11]: result = pd.concat([df1, df4], axis=1).reindex(df1.index)
Similarly, we could index before the concatenation:
In [12]: pd.concat([df1, df4.reindex(df1.index)], axis=1)
Out[12]:
A B C D B D F
0 A0 B0 C0 D0 NaN NaN NaN
1 A1 B1 C1 D1 NaN NaN NaN
2 A2 B2 C2 D2 B2 D2 F2
3 A3 B3 C3 D3 B3 D3 F3
Ignoring indexes on the concatenation axis#
For DataFrame objects which don’t have a meaningful index, you may wish
to append them and ignore the fact that they may have overlapping indexes. To
do this, use the ignore_index argument:
In [13]: result = pd.concat([df1, df4], ignore_index=True, sort=False)
Concatenating with mixed ndims#
You can concatenate a mix of Series and DataFrame objects. The
Series will be transformed to DataFrame with the column name as
the name of the Series.
In [14]: s1 = pd.Series(["X0", "X1", "X2", "X3"], name="X")
In [15]: result = pd.concat([df1, s1], axis=1)
Note
Since we’re concatenating a Series to a DataFrame, we could have
achieved the same result with DataFrame.assign(). To concatenate an
arbitrary number of pandas objects (DataFrame or Series), use
concat.
If unnamed Series are passed they will be numbered consecutively.
In [16]: s2 = pd.Series(["_0", "_1", "_2", "_3"])
In [17]: result = pd.concat([df1, s2, s2, s2], axis=1)
Passing ignore_index=True will drop all name references.
In [18]: result = pd.concat([df1, s1], axis=1, ignore_index=True)
More concatenating with group keys#
A fairly common use of the keys argument is to override the column names
when creating a new DataFrame based on existing Series.
Notice how the default behaviour consists on letting the resulting DataFrame
inherit the parent Series’ name, when these existed.
In [19]: s3 = pd.Series([0, 1, 2, 3], name="foo")
In [20]: s4 = pd.Series([0, 1, 2, 3])
In [21]: s5 = pd.Series([0, 1, 4, 5])
In [22]: pd.concat([s3, s4, s5], axis=1)
Out[22]:
foo 0 1
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Through the keys argument we can override the existing column names.
In [23]: pd.concat([s3, s4, s5], axis=1, keys=["red", "blue", "yellow"])
Out[23]:
red blue yellow
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Let’s consider a variation of the very first example presented:
In [24]: result = pd.concat(frames, keys=["x", "y", "z"])
You can also pass a dict to concat in which case the dict keys will be used
for the keys argument (unless other keys are specified):
In [25]: pieces = {"x": df1, "y": df2, "z": df3}
In [26]: result = pd.concat(pieces)
In [27]: result = pd.concat(pieces, keys=["z", "y"])
The MultiIndex created has levels that are constructed from the passed keys and
the index of the DataFrame pieces:
In [28]: result.index.levels
Out[28]: FrozenList([['z', 'y'], [4, 5, 6, 7, 8, 9, 10, 11]])
If you wish to specify other levels (as will occasionally be the case), you can
do so using the levels argument:
In [29]: result = pd.concat(
....: pieces, keys=["x", "y", "z"], levels=[["z", "y", "x", "w"]], names=["group_key"]
....: )
....:
In [30]: result.index.levels
Out[30]: FrozenList([['z', 'y', 'x', 'w'], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]])
This is fairly esoteric, but it is actually necessary for implementing things
like GroupBy where the order of a categorical variable is meaningful.
Appending rows to a DataFrame#
If you have a series that you want to append as a single row to a DataFrame, you can convert the row into a
DataFrame and use concat
In [31]: s2 = pd.Series(["X0", "X1", "X2", "X3"], index=["A", "B", "C", "D"])
In [32]: result = pd.concat([df1, s2.to_frame().T], ignore_index=True)
You should use ignore_index with this method to instruct DataFrame to
discard its index. If you wish to preserve the index, you should construct an
appropriately-indexed DataFrame and append or concatenate those objects.
Database-style DataFrame or named Series joining/merging#
pandas has full-featured, high performance in-memory join operations
idiomatically very similar to relational databases like SQL. These methods
perform significantly better (in some cases well over an order of magnitude
better) than other open source implementations (like base::merge.data.frame
in R). The reason for this is careful algorithmic design and the internal layout
of the data in DataFrame.
See the cookbook for some advanced strategies.
Users who are familiar with SQL but new to pandas might be interested in a
comparison with SQL.
pandas provides a single function, merge(), as the entry point for
all standard database join operations between DataFrame or named Series objects:
pd.merge(
left,
right,
how="inner",
on=None,
left_on=None,
right_on=None,
left_index=False,
right_index=False,
sort=True,
suffixes=("_x", "_y"),
copy=True,
indicator=False,
validate=None,
)
left: A DataFrame or named Series object.
right: Another DataFrame or named Series object.
on: Column or index level names to join on. Must be found in both the left
and right DataFrame and/or Series objects. If not passed and left_index and
right_index are False, the intersection of the columns in the
DataFrames and/or Series will be inferred to be the join keys.
left_on: Columns or index levels from the left DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
right_on: Columns or index levels from the right DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
left_index: If True, use the index (row labels) from the left
DataFrame or Series as its join key(s). In the case of a DataFrame or Series with a MultiIndex
(hierarchical), the number of levels must match the number of join keys
from the right DataFrame or Series.
right_index: Same usage as left_index for the right DataFrame or Series
how: One of 'left', 'right', 'outer', 'inner', 'cross'. Defaults
to inner. See below for more detailed description of each method.
sort: Sort the result DataFrame by the join keys in lexicographical
order. Defaults to True, setting to False will improve performance
substantially in many cases.
suffixes: A tuple of string suffixes to apply to overlapping
columns. Defaults to ('_x', '_y').
copy: Always copy data (default True) from the passed DataFrame or named Series
objects, even when reindexing is not necessary. Cannot be avoided in many
cases but may improve performance / memory usage. The cases where copying
can be avoided are somewhat pathological but this option is provided
nonetheless.
indicator: Add a column to the output DataFrame called _merge
with information on the source of each row. _merge is Categorical-type
and takes on a value of left_only for observations whose merge key
only appears in 'left' DataFrame or Series, right_only for observations whose
merge key only appears in 'right' DataFrame or Series, and both if the
observation’s merge key is found in both.
validate : string, default None.
If specified, checks if merge is of specified type.
“one_to_one” or “1:1”: checks if merge keys are unique in both
left and right datasets.
“one_to_many” or “1:m”: checks if merge keys are unique in left
dataset.
“many_to_one” or “m:1”: checks if merge keys are unique in right
dataset.
“many_to_many” or “m:m”: allowed, but does not result in checks.
Note
Support for specifying index levels as the on, left_on, and
right_on parameters was added in version 0.23.0.
Support for merging named Series objects was added in version 0.24.0.
The return type will be the same as left. If left is a DataFrame or named Series
and right is a subclass of DataFrame, the return type will still be DataFrame.
merge is a function in the pandas namespace, and it is also available as a
DataFrame instance method merge(), with the calling
DataFrame being implicitly considered the left object in the join.
The related join() method, uses merge internally for the
index-on-index (by default) and column(s)-on-index join. If you are joining on
index only, you may wish to use DataFrame.join to save yourself some typing.
Brief primer on merge methods (relational algebra)#
Experienced users of relational databases like SQL will be familiar with the
terminology used to describe join operations between two SQL-table like
structures (DataFrame objects). There are several cases to consider which
are very important to understand:
one-to-one joins: for example when joining two DataFrame objects on
their indexes (which must contain unique values).
many-to-one joins: for example when joining an index (unique) to one or
more columns in a different DataFrame.
many-to-many joins: joining columns on columns.
Note
When joining columns on columns (potentially a many-to-many join), any
indexes on the passed DataFrame objects will be discarded.
It is worth spending some time understanding the result of the many-to-many
join case. In SQL / standard relational algebra, if a key combination appears
more than once in both tables, the resulting table will have the Cartesian
product of the associated data. Here is a very basic example with one unique
key combination:
In [33]: left = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [34]: right = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [35]: result = pd.merge(left, right, on="key")
Here is a more complicated example with multiple join keys. Only the keys
appearing in left and right are present (the intersection), since
how='inner' by default.
In [36]: left = pd.DataFrame(
....: {
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [37]: right = pd.DataFrame(
....: {
....: "key1": ["K0", "K1", "K1", "K2"],
....: "key2": ["K0", "K0", "K0", "K0"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [38]: result = pd.merge(left, right, on=["key1", "key2"])
The how argument to merge specifies how to determine which keys are to
be included in the resulting table. If a key combination does not appear in
either the left or right tables, the values in the joined table will be
NA. Here is a summary of the how options and their SQL equivalent names:
Merge method
SQL Join Name
Description
left
LEFT OUTER JOIN
Use keys from left frame only
right
RIGHT OUTER JOIN
Use keys from right frame only
outer
FULL OUTER JOIN
Use union of keys from both frames
inner
INNER JOIN
Use intersection of keys from both frames
cross
CROSS JOIN
Create the cartesian product of rows of both frames
In [39]: result = pd.merge(left, right, how="left", on=["key1", "key2"])
In [40]: result = pd.merge(left, right, how="right", on=["key1", "key2"])
In [41]: result = pd.merge(left, right, how="outer", on=["key1", "key2"])
In [42]: result = pd.merge(left, right, how="inner", on=["key1", "key2"])
In [43]: result = pd.merge(left, right, how="cross")
You can merge a multi-indexed Series and a DataFrame, if the names of
the MultiIndex correspond to the columns from the DataFrame. Transform
the Series to a DataFrame using Series.reset_index() before merging,
as shown in the following example.
In [44]: df = pd.DataFrame({"Let": ["A", "B", "C"], "Num": [1, 2, 3]})
In [45]: df
Out[45]:
Let Num
0 A 1
1 B 2
2 C 3
In [46]: ser = pd.Series(
....: ["a", "b", "c", "d", "e", "f"],
....: index=pd.MultiIndex.from_arrays(
....: [["A", "B", "C"] * 2, [1, 2, 3, 4, 5, 6]], names=["Let", "Num"]
....: ),
....: )
....:
In [47]: ser
Out[47]:
Let Num
A 1 a
B 2 b
C 3 c
A 4 d
B 5 e
C 6 f
dtype: object
In [48]: pd.merge(df, ser.reset_index(), on=["Let", "Num"])
Out[48]:
Let Num 0
0 A 1 a
1 B 2 b
2 C 3 c
Here is another example with duplicate join keys in DataFrames:
In [49]: left = pd.DataFrame({"A": [1, 2], "B": [2, 2]})
In [50]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [51]: result = pd.merge(left, right, on="B", how="outer")
Warning
Joining / merging on duplicate keys can cause a returned frame that is the multiplication of the row dimensions, which may result in memory overflow. It is the user's responsibility to manage duplicate values in keys before joining large DataFrames.
Checking for duplicate keys#
Users can use the validate argument to automatically check whether there
are unexpected duplicates in their merge keys. Key uniqueness is checked before
merge operations and so should protect against memory overflows. Checking key
uniqueness is also a good way to ensure user data structures are as expected.
In the following example, there are duplicate values of B in the right
DataFrame. As this is not a one-to-one merge – as specified in the
validate argument – an exception will be raised.
In [52]: left = pd.DataFrame({"A": [1, 2], "B": [1, 2]})
In [53]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [53]: result = pd.merge(left, right, on="B", how="outer", validate="one_to_one")
...
MergeError: Merge keys are not unique in right dataset; not a one-to-one merge
If the user is aware of the duplicates in the right DataFrame but wants to
ensure there are no duplicates in the left DataFrame, one can use the
validate='one_to_many' argument instead, which will not raise an exception.
In [54]: pd.merge(left, right, on="B", how="outer", validate="one_to_many")
Out[54]:
A_x B A_y
0 1 1 NaN
1 2 2 4.0
2 2 2 5.0
3 2 2 6.0
The merge indicator#
merge() accepts the argument indicator. If True, a
Categorical-type column called _merge will be added to the output object
that takes on values:
Observation Origin
_merge value
Merge key only in 'left' frame
left_only
Merge key only in 'right' frame
right_only
Merge key in both frames
both
In [55]: df1 = pd.DataFrame({"col1": [0, 1], "col_left": ["a", "b"]})
In [56]: df2 = pd.DataFrame({"col1": [1, 2, 2], "col_right": [2, 2, 2]})
In [57]: pd.merge(df1, df2, on="col1", how="outer", indicator=True)
Out[57]:
col1 col_left col_right _merge
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
The indicator argument will also accept string arguments, in which case the indicator function will use the value of the passed string as the name for the indicator column.
In [58]: pd.merge(df1, df2, on="col1", how="outer", indicator="indicator_column")
Out[58]:
col1 col_left col_right indicator_column
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
Merge dtypes#
Merging will preserve the dtype of the join keys.
In [59]: left = pd.DataFrame({"key": [1], "v1": [10]})
In [60]: left
Out[60]:
key v1
0 1 10
In [61]: right = pd.DataFrame({"key": [1, 2], "v1": [20, 30]})
In [62]: right
Out[62]:
key v1
0 1 20
1 2 30
We are able to preserve the join keys:
In [63]: pd.merge(left, right, how="outer")
Out[63]:
key v1
0 1 10
1 1 20
2 2 30
In [64]: pd.merge(left, right, how="outer").dtypes
Out[64]:
key int64
v1 int64
dtype: object
Of course if you have missing values that are introduced, then the
resulting dtype will be upcast.
In [65]: pd.merge(left, right, how="outer", on="key")
Out[65]:
key v1_x v1_y
0 1 10.0 20
1 2 NaN 30
In [66]: pd.merge(left, right, how="outer", on="key").dtypes
Out[66]:
key int64
v1_x float64
v1_y int64
dtype: object
Merging will preserve category dtypes of the mergands. See also the section on categoricals.
The left frame.
In [67]: from pandas.api.types import CategoricalDtype
In [68]: X = pd.Series(np.random.choice(["foo", "bar"], size=(10,)))
In [69]: X = X.astype(CategoricalDtype(categories=["foo", "bar"]))
In [70]: left = pd.DataFrame(
....: {"X": X, "Y": np.random.choice(["one", "two", "three"], size=(10,))}
....: )
....:
In [71]: left
Out[71]:
X Y
0 bar one
1 foo one
2 foo three
3 bar three
4 foo one
5 bar one
6 bar three
7 bar three
8 bar three
9 foo three
In [72]: left.dtypes
Out[72]:
X category
Y object
dtype: object
The right frame.
In [73]: right = pd.DataFrame(
....: {
....: "X": pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar"])),
....: "Z": [1, 2],
....: }
....: )
....:
In [74]: right
Out[74]:
X Z
0 foo 1
1 bar 2
In [75]: right.dtypes
Out[75]:
X category
Z int64
dtype: object
The merged result:
In [76]: result = pd.merge(left, right, how="outer")
In [77]: result
Out[77]:
X Y Z
0 bar one 2
1 bar three 2
2 bar one 2
3 bar three 2
4 bar three 2
5 bar three 2
6 foo one 1
7 foo three 1
8 foo one 1
9 foo three 1
In [78]: result.dtypes
Out[78]:
X category
Y object
Z int64
dtype: object
Note
The category dtypes must be exactly the same, meaning the same categories and the ordered attribute.
Otherwise the result will coerce to the categories’ dtype.
Note
Merging on category dtypes that are the same can be quite performant compared to object dtype merging.
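A minimal sketch of the coercion described in the first note (illustrative frames whose category dtypes deliberately differ):
import pandas as pd
from pandas.api.types import CategoricalDtype

left = pd.DataFrame({"X": pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar"])), "Y": [1, 2]})
right = pd.DataFrame({"X": pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar", "baz"])), "Z": [3, 4]})

# The category dtypes are not identical (different categories), so the merged
# key column falls back to the categories' dtype, object in this case.
print(pd.merge(left, right, on="X").dtypes)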
Joining on index#
DataFrame.join() is a convenient method for combining the columns of two
potentially differently-indexed DataFrames into a single result
DataFrame. Here is a very basic example:
In [79]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=["K0", "K1", "K2"]
....: )
....:
In [80]: right = pd.DataFrame(
....: {"C": ["C0", "C2", "C3"], "D": ["D0", "D2", "D3"]}, index=["K0", "K2", "K3"]
....: )
....:
In [81]: result = left.join(right)
In [82]: result = left.join(right, how="outer")
The same as above, but with how='inner'.
In [83]: result = left.join(right, how="inner")
The data alignment here is on the indexes (row labels). This same behavior can
be achieved using merge plus additional arguments instructing it to use the
indexes:
In [84]: result = pd.merge(left, right, left_index=True, right_index=True, how="outer")
In [85]: result = pd.merge(left, right, left_index=True, right_index=True, how="inner")
Joining key columns on an index#
join() takes an optional on argument which may be a column
or multiple column names, which specifies that the passed DataFrame is to be
aligned on that column in the DataFrame. These two function calls are
completely equivalent:
left.join(right, on=key_or_keys)
pd.merge(
left, right, left_on=key_or_keys, right_index=True, how="left", sort=False
)
Obviously you can choose whichever form you find more convenient. For
many-to-one joins (where one of the DataFrame’s is already indexed by the
join key), using join may be more convenient. Here is a simple example:
In [86]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [87]: right = pd.DataFrame({"C": ["C0", "C1"], "D": ["D0", "D1"]}, index=["K0", "K1"])
In [88]: result = left.join(right, on="key")
In [89]: result = pd.merge(
....: left, right, left_on="key", right_index=True, how="left", sort=False
....: )
....:
To join on multiple keys, the passed DataFrame must have a MultiIndex:
In [90]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [91]: index = pd.MultiIndex.from_tuples(
....: [("K0", "K0"), ("K1", "K0"), ("K2", "K0"), ("K2", "K1")]
....: )
....:
In [92]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=index
....: )
....:
Now this can be joined by passing the two key column names:
In [93]: result = left.join(right, on=["key1", "key2"])
The default for DataFrame.join is to perform a left join (essentially a
“VLOOKUP” operation, for Excel users), which uses only the keys found in the
calling DataFrame. Other join types, for example inner join, can be just as
easily performed:
In [94]: result = left.join(right, on=["key1", "key2"], how="inner")
As you can see, this drops any rows where there was no match.
Joining a single Index to a MultiIndex#
You can join a singly-indexed DataFrame with a level of a MultiIndexed DataFrame.
The level will match on the name of the index of the singly-indexed frame against
a level name of the MultiIndexed frame.
In [95]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]},
....: index=pd.Index(["K0", "K1", "K2"], name="key"),
....: )
....:
In [96]: index = pd.MultiIndex.from_tuples(
....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")],
....: names=["key", "Y"],
....: )
....:
In [97]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]},
....: index=index,
....: )
....:
In [98]: result = left.join(right, how="inner")
This is equivalent but less verbose and more memory efficient / faster than this.
In [99]: result = pd.merge(
....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
....: ).set_index(["key","Y"])
....:
Joining with two MultiIndexes#
This is supported in a limited way, provided that the index for the right
argument is completely used in the join, and is a subset of the indices in
the left argument, as in this example:
In [100]: leftindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy"), [1, 2]], names=["abc", "xy", "num"]
.....: )
.....:
In [101]: left = pd.DataFrame({"v1": range(12)}, index=leftindex)
In [102]: left
Out[102]:
v1
abc xy num
a x 1 0
2 1
y 1 2
2 3
b x 1 4
2 5
y 1 6
2 7
c x 1 8
2 9
y 1 10
2 11
In [103]: rightindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy")], names=["abc", "xy"]
.....: )
.....:
In [104]: right = pd.DataFrame({"v2": [100 * i for i in range(1, 7)]}, index=rightindex)
In [105]: right
Out[105]:
v2
abc xy
a x 100
y 200
b x 300
y 400
c x 500
y 600
In [106]: left.join(right, on=["abc", "xy"], how="inner")
Out[106]:
v1 v2
abc xy num
a x 1 0 100
2 1 100
y 1 2 200
2 3 200
b x 1 4 300
2 5 300
y 1 6 400
2 7 400
c x 1 8 500
2 9 500
y 1 10 600
2 11 600
If that condition is not satisfied, a join with two multi-indexes can be
done using the following code.
In [107]: leftindex = pd.MultiIndex.from_tuples(
.....: [("K0", "X0"), ("K0", "X1"), ("K1", "X2")], names=["key", "X"]
.....: )
.....:
In [108]: left = pd.DataFrame(
.....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=leftindex
.....: )
.....:
In [109]: rightindex = pd.MultiIndex.from_tuples(
.....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")], names=["key", "Y"]
.....: )
.....:
In [110]: right = pd.DataFrame(
.....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=rightindex
.....: )
.....:
In [111]: result = pd.merge(
.....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
.....: ).set_index(["key", "X", "Y"])
.....:
Merging on a combination of columns and index levels#
Strings passed as the on, left_on, and right_on parameters
may refer to either column names or index level names. This enables merging
DataFrame instances on a combination of index levels and columns without
resetting indexes.
In [112]: left_index = pd.Index(["K0", "K0", "K1", "K2"], name="key1")
In [113]: left = pd.DataFrame(
.....: {
.....: "A": ["A0", "A1", "A2", "A3"],
.....: "B": ["B0", "B1", "B2", "B3"],
.....: "key2": ["K0", "K1", "K0", "K1"],
.....: },
.....: index=left_index,
.....: )
.....:
In [114]: right_index = pd.Index(["K0", "K1", "K2", "K2"], name="key1")
In [115]: right = pd.DataFrame(
.....: {
.....: "C": ["C0", "C1", "C2", "C3"],
.....: "D": ["D0", "D1", "D2", "D3"],
.....: "key2": ["K0", "K0", "K0", "K1"],
.....: },
.....: index=right_index,
.....: )
.....:
In [116]: result = left.merge(right, on=["key1", "key2"])
Note
When DataFrames are merged on a string that matches an index level in both
frames, the index level is preserved as an index level in the resulting
DataFrame.
Note
When DataFrames are merged using only some of the levels of a MultiIndex,
the extra levels will be dropped from the resulting merge. In order to
preserve those levels, use reset_index on those level names to move
those levels to columns prior to doing the merge.
Note
If a string matches both a column name and an index level name, then a
warning is issued and the column takes precedence. This will result in an
ambiguity error in a future version.
Overlapping value columns#
The merge suffixes argument takes a tuple or list of strings to append to
overlapping column names in the input DataFrames to disambiguate the result
columns:
In [117]: left = pd.DataFrame({"k": ["K0", "K1", "K2"], "v": [1, 2, 3]})
In [118]: right = pd.DataFrame({"k": ["K0", "K0", "K3"], "v": [4, 5, 6]})
In [119]: result = pd.merge(left, right, on="k")
In [120]: result = pd.merge(left, right, on="k", suffixes=("_l", "_r"))
DataFrame.join() has lsuffix and rsuffix arguments which behave
similarly.
In [121]: left = left.set_index("k")
In [122]: right = right.set_index("k")
In [123]: result = left.join(right, lsuffix="_l", rsuffix="_r")
Joining multiple DataFrames#
A list or tuple of DataFrames can also be passed to join()
to join them together on their indexes.
In [124]: right2 = pd.DataFrame({"v": [7, 8, 9]}, index=["K1", "K1", "K2"])
In [125]: result = left.join([right, right2])
Merging together values within Series or DataFrame columns#
Another fairly common situation is to have two like-indexed (or similarly
indexed) Series or DataFrame objects and wanting to “patch” values in
one object from values for matching indices in the other. Here is an example:
In [126]: df1 = pd.DataFrame(
.....: [[np.nan, 3.0, 5.0], [-4.6, np.nan, np.nan], [np.nan, 7.0, np.nan]]
.....: )
.....:
In [127]: df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5.0, 1.6, 4]], index=[1, 2])
For this, use the combine_first() method:
In [128]: result = df1.combine_first(df2)
Note that this method only takes values from the right DataFrame if they are
missing in the left DataFrame. A related method, update(),
alters non-NA values in place:
In [129]: df1.update(df2)
Timeseries friendly merging#
Merging ordered data#
A merge_ordered() function allows combining time series and other
ordered data. In particular it has an optional fill_method keyword to
fill/interpolate missing data:
In [130]: left = pd.DataFrame(
.....: {"k": ["K0", "K1", "K1", "K2"], "lv": [1, 2, 3, 4], "s": ["a", "b", "c", "d"]}
.....: )
.....:
In [131]: right = pd.DataFrame({"k": ["K1", "K2", "K4"], "rv": [1, 2, 3]})
In [132]: pd.merge_ordered(left, right, fill_method="ffill", left_by="s")
Out[132]:
k lv s rv
0 K0 1.0 a NaN
1 K1 1.0 a 1.0
2 K2 1.0 a 2.0
3 K4 1.0 a 3.0
4 K1 2.0 b 1.0
5 K2 2.0 b 2.0
6 K4 2.0 b 3.0
7 K1 3.0 c 1.0
8 K2 3.0 c 2.0
9 K4 3.0 c 3.0
10 K1 NaN d 1.0
11 K2 4.0 d 2.0
12 K4 4.0 d 3.0
Merging asof#
A merge_asof() is similar to an ordered left-join except that we match on
nearest key rather than equal keys. For each row in the left DataFrame,
we select the last row in the right DataFrame whose on key is less
than the left’s key. Both DataFrames must be sorted by the key.
Optionally an asof merge can perform a group-wise merge. This matches the
by key equally, in addition to the nearest match on the on key.
For example, we might have trades and quotes and we want to asof
merge them.
In [133]: trades = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.038",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: ]
.....: ),
.....: "ticker": ["MSFT", "MSFT", "GOOG", "GOOG", "AAPL"],
.....: "price": [51.95, 51.95, 720.77, 720.92, 98.00],
.....: "quantity": [75, 155, 100, 100, 100],
.....: },
.....: columns=["time", "ticker", "price", "quantity"],
.....: )
.....:
In [134]: quotes = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.030",
.....: "20160525 13:30:00.041",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.049",
.....: "20160525 13:30:00.072",
.....: "20160525 13:30:00.075",
.....: ]
.....: ),
.....: "ticker": ["GOOG", "MSFT", "MSFT", "MSFT", "GOOG", "AAPL", "GOOG", "MSFT"],
.....: "bid": [720.50, 51.95, 51.97, 51.99, 720.50, 97.99, 720.50, 52.01],
.....: "ask": [720.93, 51.96, 51.98, 52.00, 720.93, 98.01, 720.88, 52.03],
.....: },
.....: columns=["time", "ticker", "bid", "ask"],
.....: )
.....:
In [135]: trades
Out[135]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
In [136]: quotes
Out[136]:
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
By default we are taking the asof of the quotes.
In [137]: pd.merge_asof(trades, quotes, on="time", by="ticker")
Out[137]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 2ms between the quote time and the trade time.
In [138]: pd.merge_asof(trades, quotes, on="time", by="ticker", tolerance=pd.Timedelta("2ms"))
Out[138]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 10ms between the quote time and the trade time and we
exclude exact matches on time. Note that though we exclude the exact matches
(of the quotes), prior quotes do propagate to that point in time.
In [139]: pd.merge_asof(
.....: trades,
.....: quotes,
.....: on="time",
.....: by="ticker",
.....: tolerance=pd.Timedelta("10ms"),
.....: allow_exact_matches=False,
.....: )
.....:
Out[139]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN
3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
Comparing objects#
The compare() and compare() methods allow you to
compare two DataFrame or Series, respectively, and summarize their differences.
This feature was added in V1.1.0.
For example, you might want to compare two DataFrame and stack their differences
side by side.
In [140]: df = pd.DataFrame(
.....: {
.....: "col1": ["a", "a", "b", "b", "a"],
.....: "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
.....: "col3": [1.0, 2.0, 3.0, 4.0, 5.0],
.....: },
.....: columns=["col1", "col2", "col3"],
.....: )
.....:
In [141]: df
Out[141]:
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
In [142]: df2 = df.copy()
In [143]: df2.loc[0, "col1"] = "c"
In [144]: df2.loc[2, "col3"] = 4.0
In [145]: df2
Out[145]:
col1 col2 col3
0 c 1.0 1.0
1 a 2.0 2.0
2 b 3.0 4.0
3 b NaN 4.0
4 a 5.0 5.0
In [146]: df.compare(df2)
Out[146]:
col1 col3
self other self other
0 a c NaN NaN
2 NaN NaN 3.0 4.0
By default, if two corresponding values are equal, they will be shown as NaN.
Furthermore, if all values in an entire row / column are equal, the row / column will be
omitted from the result. The remaining differences will be aligned on columns.
If you wish, you may choose to stack the differences on rows.
In [147]: df.compare(df2, align_axis=0)
Out[147]:
col1 col3
0 self a NaN
other c NaN
2 self NaN 3.0
other NaN 4.0
If you wish to keep all original rows and columns, set keep_shape argument
to True.
In [148]: df.compare(df2, keep_shape=True)
Out[148]:
col1 col2 col3
self other self other self other
0 a c NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3.0 4.0
3 NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN
You may also keep all the original values even if they are equal.
In [149]: df.compare(df2, keep_shape=True, keep_equal=True)
Out[149]:
col1 col2 col3
self other self other self other
0 a c 1.0 1.0 1.0 1.0
1 a a 2.0 2.0 2.0 2.0
2 b b 3.0 3.0 3.0 4.0
3 b b NaN NaN 4.0 4.0
4 a a 5.0 5.0 5.0 5.0
| 281
| 607
|
How to delete rows based on two fields?
I have a df with lots of ids and dates. I need to delete from this df the rows with id = 4 where date != '2021-01-01'.
This expression, I assume, won't work:
df_2 = df_2[df_2['id'] != 4 & df_2['date'] != '2021-01-01']
How else can I write the condition?
E.g.
4 2020-01-01
5 2021-05-01
4 2021-01-01
4 2021-09-01
Should become
5 2021-05-01
4 2021-01-01
|
59,852,746
|
Sum of group but keep the same value for each row in pandas
|
<p>How to solve the same problem as in this link <a href="https://stackoverflow.com/questions/38690042/sum-of-group-but-keep-the-same-value-for-each-row-in-r">Sum of group but keep the same value for each row in r</a>, but using pandas?</p>
<p>I can generate a separate <code>df</code> holding the sum for each group and then merge the generated <code>df</code> with the original.</p>
| 59,852,823
| 2020-01-22T04:43:23.960000
| 1
| null | 0
| 28
|
pandas
|
<p>You can use <code>groupby</code> & <code>transform</code> as below to get your output.</p>
<pre><code>df['sumx']=df.groupby(['ID', 'Group'],sort=False)['x'].transform(sum)
df['sumy']=df.groupby(['ID', 'Group'],sort=False)['y'].transform(sum)
df
</code></pre>
<p><strong>output</strong></p>
<pre><code>ID Group x y sumx sumy
1 1 1 1 12 3 25
2 1 1 2 13 3 25
3 1 2 3 14 3 14
4 3 1 4 15 15 48
5 3 1 5 16 15 48
6 3 1 6 17 15 48
7 3 2 7 18 15 37
8 3 2 8 19 15 37
9 4 1 9 20 30 63
10 4 1 10 21 30 63
11 4 1 11 22 30 63
12 4 2 12 23 12 23
</code></pre>
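<p>For comparison, the merge-based approach mentioned in the question would look roughly like this (a sketch, reusing the same column names):</p>
<pre><code>sums = df.groupby(['ID', 'Group'], as_index=False)[['x', 'y']].sum()
df = df.merge(sums, on=['ID', 'Group'], suffixes=('', '_sum'))
</code></pre>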
| 2020-01-22T04:52:42.937000
| 0
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
You can use groupby & transform as below to get your output.
df['sumx']=df.groupby(['ID', 'Group'],sort=False)['x'].transform(sum)
df['sumy']=df.groupby(['ID', 'Group'],sort=False)['y'].transform(sum)
df
output
ID Group x y sumx sumy
1 1 1 1 12 3 25
2 1 1 2 13 3 25
3 1 2 3 14 3 14
4 3 1 4 15 15 48
5 3 1 5 16 15 48
6 3 1 6 17 15 48
7 3 2 7 18 15 37
8 3 2 8 19 15 37
9 4 1 9 20 30 63
10 4 1 10 21 30 63
11 4 1 11 22 30 63
12 4 2 12 23 12 23
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures are generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
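As an editor's sketch of such a label-mapping function (the df_parity name is illustrative), grouping rows by whether their integer label is even or odd:
# the lambda is called once per row label and its return value becomes the group key
df_parity = pd.DataFrame({"val": range(6)})
df_parity.groupby(lambda i: "even" if i % 2 == 0 else "odd").sum()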
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of the dropna argument is True, which means NA values are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function
Description
mean()
Compute mean of groups
sum()
Compute sum of group values
size()
Compute group sizes
count()
Compute count of group
std()
Standard deviation of groups
var()
Compute variance of groups
sem()
Standard error of the mean of groups
describe()
Generates descriptive statistics
first()
Compute first of group values
last()
Compute last of group values
nth()
Take nth value, or a subset if n is a list
min()
Compute min of group values
max()
Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work,
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
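A slightly less trivial sketch (editor's example, reusing the df defined above): the per-group range of column C is just another Series-to-scalar callable.
# any callable that reduces a Series to a scalar works as an aggregation
df.groupby("A")["C"].agg(lambda ser: ser.max() - ser.min())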
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
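A minimal sketch of that pattern (editor's example; the q75 name is illustrative), computing a 75th percentile of height per kind:
from functools import partial
# partially apply the extra argument, then use the result as a normal aggfunc
q75 = partial(pd.Series.quantile, q=0.75)
animals.groupby("kind").agg(height_q75=pd.NamedAgg(column="height", aggfunc=q75))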
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, but returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed objects where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it. It can depend on the passed function and
exactly what you are grouping. Thus the grouped column(s) may be included in
the output as well as set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
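The excerpt stops before showing the call itself; applying it is simply (each group returns a two-column DataFrame, so the combined result holds an original and a demeaned value for every input row):
grouped.apply(f)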
apply on a Series can operate on a returned value from the applied function,
that is itself a series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
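A rough sketch of a conforming UDF (editor's example; assumes Numba is installed and is not verified here):
# the UDF receives the group's values and its index as NumPy arrays
def group_mean(values, index):
    return values.sum() / len(values)
df.groupby("A")["C"].aggregate(group_mean, engine="numba")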
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only needed for one column (here colname), that column may be selected
before applying the aggregation function.
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a "nuisance" column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The returned dtype of the grouped will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
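A tiny sketch of the default behavior (editor's example):
# the element whose key is NaN simply disappears from the result
pd.Series([1, 2, 3]).groupby(["x", np.nan, "x"]).sum()
# -> x    4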
Grouping with ordered factors#
Categorical variables represented as instance of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group, in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resampling to work on indices that are non-datetime-like, the following procedure can be utilized.
In the following examples, df.index // 5 returns an integer array of group labels which is used to determine what gets selected for the groupby operation.
Note
The example below shows how we can downsample by consolidating samples into fewer samples. Here, by using df.index // 5, we aggregate the samples into bins. By applying the std() function, we condense the information contained in many samples into a small subset of values (their standard deviation), thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 684
| 1,253
|
Sum of group but keep the same value for each row in pandas
How to solve the same problem as in this link (Sum of group but keep the same value for each row in R) using pandas?
I can generate a separate df holding the sum for each group and then merge the generated df with the original.
|
67,550,404
|
How to get value from nsmallest instead of .core.series.Series
|
<p>Pretty new to python so any advice is always welcome.</p>
<p>I am trying to map data from multiple sets of coordinates to one set and am trying to use Bilinear interpolation to do it.</p>
<p>I have a set of DataFrames I iterate over and am trying to find the nearest neighbors for my interpolation.</p>
<p>Since my grids may not be uniform in spacing I am sorting by Y position first:</p>
<pre><code>for i in range(0, len(df_x['X'])):
x_pos = df_x._get_value(i, 'X')#pull x coord y coord
y_pos = df_y._get_value(i, 'Y')
for n in data_list:
df = data_list[n] #
d_y = abs(df['Y'] - y_pos) #array of distance from Y pos
d_y.drop_duplicates() # remove duplicates
nn_y1 = d_y.nsmallest(1) # finds closest row
nn_y2 = d_y.nsmallest(2).iloc[-1] # finds next closest row
print(type(nn_y1))
d_x_y1 = df[df['DesignY'] == nn_y1] # creates list of X at closest row
</code></pre>
<p>I think this should provide me with my upper and lower bounds nearest my points.</p>
<p>however when then sorting for X position I get an error</p>
<p><code>ValueError: Can only compare identically-labeled Series objects</code></p>
<p>I think this is due to the fact that the type for <code>nn_y1</code> kicks out <code><class 'pandas.core.series.Series'></code></p>
<p>any advice for how to get the value instead of the series? I could create a dataframe with one element but that seems hacky? I tried some combinations of <code>_get_value()</code> but to no avail.</p>
| 67,550,671
| 2021-05-15T19:15:39.723000
| 1
| null | 0
| 29
|
python|pandas
|
<p><a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.nsmallest.html#pandas-series-nsmallest" rel="nofollow noreferrer"><code>nsmallest</code></a> returns:</p>
<blockquote>
<p>"The n smallest values in the Series, sorted in increasing order." (Type <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html#pandas-series" rel="nofollow noreferrer"><strong>Series</strong></a>)</p>
</blockquote>
<p>In this case the simple way is to unpack from <code>nsmallest(2)</code> since both values are needed:</p>
<pre><code>nn_y1, nn_y2 = d_y.nsmallest(2)
</code></pre>
<hr/>
<p>To modify the code directly <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.iloc.html#pandas-series-iloc" rel="nofollow noreferrer"><code>iloc</code></a> is needed to get the first value from the Series:</p>
<pre><code>nn_y1 = d_y.nsmallest(1).iloc[0]
</code></pre>
<hr/>
<p>Alternatively <code>d_y.nsmallest(2)</code> could've been used twice with <code>iloc</code> to get both values:</p>
<pre><code>smallest = d_y.nsmallest(2)
nn_y1 = smallest.iloc[0]
nn_y2 = smallest.iloc[1]
</code></pre>
| 2021-05-15T19:47:39.720000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.html
|
pandas.Series#
pandas.Series#
class pandas.Series(data=None, index=None, dtype=None, name=None, copy=False, fastpath=False)[source]#
nsmallest returns:
"The n smallest values in the Series, sorted in increasing order." (Type Series)
In this case the simple way is to unpack from nsmallest(2) since both values are needed:
nn_y1, nn_y2 = d_y.nsmallest(2)
To modify the code directly iloc is needed to get the first value from the Series:
nn_y1 = d_y.nsmallest(1).iloc[0]
Alternatively d_y.nsmallest(2) could've been used twice with iloc to get both values:
smallest = d_y.nsmallest(2)
nn_y1 = smallest.iloc[0]
nn_y2 = smallest.iloc[1]
One-dimensional ndarray with axis labels (including time series).
Labels need not be unique but must be a hashable type. The object
supports both integer- and label-based indexing and provides a host of
methods for performing operations involving the index. Statistical
methods from ndarray have been overridden to automatically exclude
missing data (currently represented as NaN).
Operations between Series (+, -, /, *, **) align values based on their
associated index values; they need not be the same length. The result
index will be the sorted union of the two indexes.
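As a minimal illustration of this alignment behaviour (the two series below are invented for this sketch and are not part of the original examples):
s1 = pd.Series([1, 2, 3], index=["a", "b", "c"])
s2 = pd.Series([10, 20], index=["b", "d"])
s1 + s2   # result index is the sorted union: a, b, c, d; non-overlapping labels become NaN
# a     NaN
# b    22.0
# c     NaN
# d     NaN
# dtype: float64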
Parameters
dataarray-like, Iterable, dict, or scalar valueContains data stored in Series. If data is a dict, argument order is
maintained.
indexarray-like or Index (1d)Values must be hashable and have the same length as data.
Non-unique index values are allowed. Will default to
RangeIndex (0, 1, 2, …, n) if not provided. If data is dict-like
and index is None, then the keys in the data are used as the index. If the
index is not None, the resulting Series is reindexed with the index values.
dtypestr, numpy.dtype, or ExtensionDtype, optionalData type for the output Series. If not specified, this will be
inferred from data.
See the user guide for more usages.
namestr, optionalThe name to give to the Series.
copybool, default FalseCopy input data. Only affects Series or 1d ndarray input. See examples.
Notes
Please reference the User Guide for more information.
Examples
Constructing Series from a dictionary with an Index specified
>>> d = {'a': 1, 'b': 2, 'c': 3}
>>> ser = pd.Series(data=d, index=['a', 'b', 'c'])
>>> ser
a 1
b 2
c 3
dtype: int64
The keys of the dictionary match with the Index values, hence the Index
values have no effect.
>>> d = {'a': 1, 'b': 2, 'c': 3}
>>> ser = pd.Series(data=d, index=['x', 'y', 'z'])
>>> ser
x NaN
y NaN
z NaN
dtype: float64
Note that the Index is first built with the keys from the dictionary.
After this the Series is reindexed with the given Index values, hence we
get all NaN as a result.
Constructing Series from a list with copy=False.
>>> r = [1, 2]
>>> ser = pd.Series(r, copy=False)
>>> ser.iloc[0] = 999
>>> r
[1, 2]
>>> ser
0 999
1 2
dtype: int64
Due to input data type the Series has a copy of
the original data even though copy=False, so
the data is unchanged.
Constructing Series from a 1d ndarray with copy=False.
>>> r = np.array([1, 2])
>>> ser = pd.Series(r, copy=False)
>>> ser.iloc[0] = 999
>>> r
array([999, 2])
>>> ser
0 999
1 2
dtype: int64
Due to input data type the Series has a view on
the original data, so
the data is changed as well.
Attributes
T
Return the transpose, which is by definition self.
array
The ExtensionArray of the data backing this Series or Index.
at
Access a single value for a row/column label pair.
attrs
Dictionary of global attributes of this dataset.
axes
Return a list of the row axis labels.
dtype
Return the dtype object of the underlying data.
dtypes
Return the dtype object of the underlying data.
flags
Get the properties associated with this pandas object.
hasnans
Return True if there are any NaNs.
iat
Access a single value for a row/column pair by integer position.
iloc
Purely integer-location based indexing for selection by position.
index
The index (axis labels) of the Series.
is_monotonic
(DEPRECATED) Return boolean if values in the object are monotonically increasing.
is_monotonic_decreasing
Return boolean if values in the object are monotonically decreasing.
is_monotonic_increasing
Return boolean if values in the object are monotonically increasing.
is_unique
Return boolean if values in the object are unique.
loc
Access a group of rows and columns by label(s) or a boolean array.
name
Return the name of the Series.
nbytes
Return the number of bytes in the underlying data.
ndim
Number of dimensions of the underlying data, by definition 1.
shape
Return a tuple of the shape of the underlying data.
size
Return the number of elements in the underlying data.
values
Return Series as ndarray or ndarray-like depending on the dtype.
empty
Methods
abs()
Return a Series/DataFrame with absolute numeric value of each element.
add(other[, level, fill_value, axis])
Return Addition of series and other, element-wise (binary operator add).
add_prefix(prefix)
Prefix labels with string prefix.
add_suffix(suffix)
Suffix labels with string suffix.
agg([func, axis])
Aggregate using one or more operations over the specified axis.
aggregate([func, axis])
Aggregate using one or more operations over the specified axis.
align(other[, join, axis, level, copy, ...])
Align two objects on their axes with the specified join method.
all([axis, bool_only, skipna, level])
Return whether all elements are True, potentially over an axis.
any(*[, axis, bool_only, skipna, level])
Return whether any element is True, potentially over an axis.
append(to_append[, ignore_index, ...])
(DEPRECATED) Concatenate two or more Series.
apply(func[, convert_dtype, args])
Invoke function on values of Series.
argmax([axis, skipna])
Return int position of the largest value in the Series.
argmin([axis, skipna])
Return int position of the smallest value in the Series.
argsort([axis, kind, order])
Return the integer indices that would sort the Series values.
asfreq(freq[, method, how, normalize, ...])
Convert time series to specified frequency.
asof(where[, subset])
Return the last row(s) without any NaNs before where.
astype(dtype[, copy, errors])
Cast a pandas object to a specified dtype dtype.
at_time(time[, asof, axis])
Select values at particular time of day (e.g., 9:30AM).
autocorr([lag])
Compute the lag-N autocorrelation.
backfill(*[, axis, inplace, limit, downcast])
Synonym for DataFrame.fillna() with method='bfill'.
between(left, right[, inclusive])
Return boolean Series equivalent to left <= series <= right.
between_time(start_time, end_time[, ...])
Select values between particular times of the day (e.g., 9:00-9:30 AM).
bfill(*[, axis, inplace, limit, downcast])
Synonym for DataFrame.fillna() with method='bfill'.
bool()
Return the bool of a single element Series or DataFrame.
cat
alias of pandas.core.arrays.categorical.CategoricalAccessor
clip([lower, upper, axis, inplace])
Trim values at input threshold(s).
combine(other, func[, fill_value])
Combine the Series with a Series or scalar according to func.
combine_first(other)
Update null elements with value in the same location in 'other'.
compare(other[, align_axis, keep_shape, ...])
Compare to another Series and show the differences.
convert_dtypes([infer_objects, ...])
Convert columns to best possible dtypes using dtypes supporting pd.NA.
copy([deep])
Make a copy of this object's indices and data.
corr(other[, method, min_periods])
Compute correlation with other Series, excluding missing values.
count([level])
Return number of non-NA/null observations in the Series.
cov(other[, min_periods, ddof])
Compute covariance with Series, excluding missing values.
cummax([axis, skipna])
Return cumulative maximum over a DataFrame or Series axis.
cummin([axis, skipna])
Return cumulative minimum over a DataFrame or Series axis.
cumprod([axis, skipna])
Return cumulative product over a DataFrame or Series axis.
cumsum([axis, skipna])
Return cumulative sum over a DataFrame or Series axis.
describe([percentiles, include, exclude, ...])
Generate descriptive statistics.
diff([periods])
First discrete difference of element.
div(other[, level, fill_value, axis])
Return Floating division of series and other, element-wise (binary operator truediv).
divide(other[, level, fill_value, axis])
Return Floating division of series and other, element-wise (binary operator truediv).
divmod(other[, level, fill_value, axis])
Return Integer division and modulo of series and other, element-wise (binary operator divmod).
dot(other)
Compute the dot product between the Series and the columns of other.
drop([labels, axis, index, columns, level, ...])
Return Series with specified index labels removed.
drop_duplicates(*[, keep, inplace])
Return Series with duplicate values removed.
droplevel(level[, axis])
Return Series/DataFrame with requested index / column level(s) removed.
dropna(*[, axis, inplace, how])
Return a new Series with missing values removed.
dt
alias of pandas.core.indexes.accessors.CombinedDatetimelikeProperties
duplicated([keep])
Indicate duplicate Series values.
eq(other[, level, fill_value, axis])
Return Equal to of series and other, element-wise (binary operator eq).
equals(other)
Test whether two objects contain the same elements.
ewm([com, span, halflife, alpha, ...])
Provide exponentially weighted (EW) calculations.
expanding([min_periods, center, axis, method])
Provide expanding window calculations.
explode([ignore_index])
Transform each element of a list-like to a row.
factorize([sort, na_sentinel, use_na_sentinel])
Encode the object as an enumerated type or categorical variable.
ffill(*[, axis, inplace, limit, downcast])
Synonym for DataFrame.fillna() with method='ffill'.
fillna([value, method, axis, inplace, ...])
Fill NA/NaN values using the specified method.
filter([items, like, regex, axis])
Subset the dataframe rows or columns according to the specified index labels.
first(offset)
Select initial periods of time series data based on a date offset.
first_valid_index()
Return index for first non-NA value or None, if no non-NA value is found.
floordiv(other[, level, fill_value, axis])
Return Integer division of series and other, element-wise (binary operator floordiv).
ge(other[, level, fill_value, axis])
Return Greater than or equal to of series and other, element-wise (binary operator ge).
get(key[, default])
Get item from object for given key (ex: DataFrame column).
groupby([by, axis, level, as_index, sort, ...])
Group Series using a mapper or by a Series of columns.
gt(other[, level, fill_value, axis])
Return Greater than of series and other, element-wise (binary operator gt).
head([n])
Return the first n rows.
hist([by, ax, grid, xlabelsize, xrot, ...])
Draw histogram of the input series using matplotlib.
idxmax([axis, skipna])
Return the row label of the maximum value.
idxmin([axis, skipna])
Return the row label of the minimum value.
infer_objects()
Attempt to infer better dtypes for object columns.
info([verbose, buf, max_cols, memory_usage, ...])
Print a concise summary of a Series.
interpolate([method, axis, limit, inplace, ...])
Fill NaN values using an interpolation method.
isin(values)
Whether elements in Series are contained in values.
isna()
Detect missing values.
isnull()
Series.isnull is an alias for Series.isna.
item()
Return the first element of the underlying data as a Python scalar.
items()
Lazily iterate over (index, value) tuples.
iteritems()
(DEPRECATED) Lazily iterate over (index, value) tuples.
keys()
Return alias for index.
kurt([axis, skipna, level, numeric_only])
Return unbiased kurtosis over requested axis.
kurtosis([axis, skipna, level, numeric_only])
Return unbiased kurtosis over requested axis.
last(offset)
Select final periods of time series data based on a date offset.
last_valid_index()
Return index for last non-NA value or None, if no non-NA value is found.
le(other[, level, fill_value, axis])
Return Less than or equal to of series and other, element-wise (binary operator le).
lt(other[, level, fill_value, axis])
Return Less than of series and other, element-wise (binary operator lt).
mad([axis, skipna, level])
(DEPRECATED) Return the mean absolute deviation of the values over the requested axis.
map(arg[, na_action])
Map values of Series according to an input mapping or function.
mask(cond[, other, inplace, axis, level, ...])
Replace values where the condition is True.
max([axis, skipna, level, numeric_only])
Return the maximum of the values over the requested axis.
mean([axis, skipna, level, numeric_only])
Return the mean of the values over the requested axis.
median([axis, skipna, level, numeric_only])
Return the median of the values over the requested axis.
memory_usage([index, deep])
Return the memory usage of the Series.
min([axis, skipna, level, numeric_only])
Return the minimum of the values over the requested axis.
mod(other[, level, fill_value, axis])
Return Modulo of series and other, element-wise (binary operator mod).
mode([dropna])
Return the mode(s) of the Series.
mul(other[, level, fill_value, axis])
Return Multiplication of series and other, element-wise (binary operator mul).
multiply(other[, level, fill_value, axis])
Return Multiplication of series and other, element-wise (binary operator mul).
ne(other[, level, fill_value, axis])
Return Not equal to of series and other, element-wise (binary operator ne).
nlargest([n, keep])
Return the largest n elements.
notna()
Detect existing (non-missing) values.
notnull()
Series.notnull is an alias for Series.notna.
nsmallest([n, keep])
Return the smallest n elements.
nunique([dropna])
Return number of unique elements in the object.
pad(*[, axis, inplace, limit, downcast])
Synonym for DataFrame.fillna() with method='ffill'.
pct_change([periods, fill_method, limit, freq])
Percentage change between the current and a prior element.
pipe(func, *args, **kwargs)
Apply chainable functions that expect Series or DataFrames.
plot
alias of pandas.plotting._core.PlotAccessor
pop(item)
Return item and drops from series.
pow(other[, level, fill_value, axis])
Return Exponential power of series and other, element-wise (binary operator pow).
prod([axis, skipna, level, numeric_only, ...])
Return the product of the values over the requested axis.
product([axis, skipna, level, numeric_only, ...])
Return the product of the values over the requested axis.
quantile([q, interpolation])
Return value at the given quantile.
radd(other[, level, fill_value, axis])
Return Addition of series and other, element-wise (binary operator radd).
rank([axis, method, numeric_only, ...])
Compute numerical data ranks (1 through n) along axis.
ravel([order])
Return the flattened underlying data as an ndarray.
rdiv(other[, level, fill_value, axis])
Return Floating division of series and other, element-wise (binary operator rtruediv).
rdivmod(other[, level, fill_value, axis])
Return Integer division and modulo of series and other, element-wise (binary operator rdivmod).
reindex(*args, **kwargs)
Conform Series to new index with optional filling logic.
reindex_like(other[, method, copy, limit, ...])
Return an object with matching indices as other object.
rename([index, axis, copy, inplace, level, ...])
Alter Series index labels or name.
rename_axis([mapper, inplace])
Set the name of the axis for the index or columns.
reorder_levels(order)
Rearrange index levels using input order.
repeat(repeats[, axis])
Repeat elements of a Series.
replace([to_replace, value, inplace, limit, ...])
Replace values given in to_replace with value.
resample(rule[, axis, closed, label, ...])
Resample time-series data.
reset_index([level, drop, name, inplace, ...])
Generate a new DataFrame or Series with the index reset.
rfloordiv(other[, level, fill_value, axis])
Return Integer division of series and other, element-wise (binary operator rfloordiv).
rmod(other[, level, fill_value, axis])
Return Modulo of series and other, element-wise (binary operator rmod).
rmul(other[, level, fill_value, axis])
Return Multiplication of series and other, element-wise (binary operator rmul).
rolling(window[, min_periods, center, ...])
Provide rolling window calculations.
round([decimals])
Round each value in a Series to the given number of decimals.
rpow(other[, level, fill_value, axis])
Return Exponential power of series and other, element-wise (binary operator rpow).
rsub(other[, level, fill_value, axis])
Return Subtraction of series and other, element-wise (binary operator rsub).
rtruediv(other[, level, fill_value, axis])
Return Floating division of series and other, element-wise (binary operator rtruediv).
sample([n, frac, replace, weights, ...])
Return a random sample of items from an axis of object.
searchsorted(value[, side, sorter])
Find indices where elements should be inserted to maintain order.
sem([axis, skipna, level, ddof, numeric_only])
Return unbiased standard error of the mean over requested axis.
set_axis(labels, *[, axis, inplace, copy])
Assign desired index to given axis.
set_flags(*[, copy, allows_duplicate_labels])
Return a new object with updated flags.
shift([periods, freq, axis, fill_value])
Shift index by desired number of periods with an optional time freq.
skew([axis, skipna, level, numeric_only])
Return unbiased skew over requested axis.
slice_shift([periods, axis])
(DEPRECATED) Equivalent to shift without copying data.
sort_index(*[, axis, level, ascending, ...])
Sort Series by index labels.
sort_values(*[, axis, ascending, inplace, ...])
Sort by the values.
sparse
alias of pandas.core.arrays.sparse.accessor.SparseAccessor
squeeze([axis])
Squeeze 1 dimensional axis objects into scalars.
std([axis, skipna, level, ddof, numeric_only])
Return sample standard deviation over requested axis.
str
alias of pandas.core.strings.accessor.StringMethods
sub(other[, level, fill_value, axis])
Return Subtraction of series and other, element-wise (binary operator sub).
subtract(other[, level, fill_value, axis])
Return Subtraction of series and other, element-wise (binary operator sub).
sum([axis, skipna, level, numeric_only, ...])
Return the sum of the values over the requested axis.
swapaxes(axis1, axis2[, copy])
Interchange axes and swap values axes appropriately.
swaplevel([i, j, copy])
Swap levels i and j in a MultiIndex.
tail([n])
Return the last n rows.
take(indices[, axis, is_copy])
Return the elements in the given positional indices along an axis.
to_clipboard([excel, sep])
Copy object to the system clipboard.
to_csv([path_or_buf, sep, na_rep, ...])
Write object to a comma-separated values (csv) file.
to_dict([into])
Convert Series to {label -> value} dict or dict-like object.
to_excel(excel_writer[, sheet_name, na_rep, ...])
Write object to an Excel sheet.
to_frame([name])
Convert Series to DataFrame.
to_hdf(path_or_buf, key[, mode, complevel, ...])
Write the contained data to an HDF5 file using HDFStore.
to_json([path_or_buf, orient, date_format, ...])
Convert the object to a JSON string.
to_latex([buf, columns, col_space, header, ...])
Render object to a LaTeX tabular, longtable, or nested table.
to_list()
Return a list of the values.
to_markdown([buf, mode, index, storage_options])
Print Series in Markdown-friendly format.
to_numpy([dtype, copy, na_value])
A NumPy ndarray representing the values in this Series or Index.
to_period([freq, copy])
Convert Series from DatetimeIndex to PeriodIndex.
to_pickle(path[, compression, protocol, ...])
Pickle (serialize) object to file.
to_sql(name, con[, schema, if_exists, ...])
Write records stored in a DataFrame to a SQL database.
to_string([buf, na_rep, float_format, ...])
Render a string representation of the Series.
to_timestamp([freq, how, copy])
Cast to DatetimeIndex of Timestamps, at beginning of period.
to_xarray()
Return an xarray object from the pandas object.
tolist()
Return a list of the values.
transform(func[, axis])
Call func on self producing a Series with the same axis shape as self.
transpose(*args, **kwargs)
Return the transpose, which is by definition self.
truediv(other[, level, fill_value, axis])
Return Floating division of series and other, element-wise (binary operator truediv).
truncate([before, after, axis, copy])
Truncate a Series or DataFrame before and after some index value.
tshift([periods, freq, axis])
(DEPRECATED) Shift the time index, using the index's frequency if available.
tz_convert(tz[, axis, level, copy])
Convert tz-aware axis to target time zone.
tz_localize(tz[, axis, level, copy, ...])
Localize tz-naive index of a Series or DataFrame to target time zone.
unique()
Return unique values of Series object.
unstack([level, fill_value])
Unstack, also known as pivot, Series with MultiIndex to produce DataFrame.
update(other)
Modify Series in place using values from passed Series.
value_counts([normalize, sort, ascending, ...])
Return a Series containing counts of unique values.
var([axis, skipna, level, ddof, numeric_only])
Return unbiased variance over requested axis.
view([dtype])
Create a new view of the Series.
where(cond[, other, inplace, axis, level, ...])
Replace values where the condition is False.
xs(key[, axis, level, drop_level])
Return cross-section from the Series/DataFrame.
| 137
| 644
|
How to get value from nsmallest instead of .core.series.Series
Pretty new to python so any advice is always welcome.
I am trying to map data from multiple sets of coordinates to one set and am trying to use Bilinear interpolation to do it.
I have a set of DataFrames I iterate over and am trying to find the nearest neighbors for my interpolation.
Since my grids may not be uniform in spacing I am sorting by Y position first:
for i in range(0, len(df_x['X'])):
x_pos = df_x._get_value(i, 'X')#pull x coord y coord
y_pos = df_y._get_value(i, 'Y')
for n in data_list:
df = data_list[n] #
d_y = abs(df['Y'] - y_pos) #array of distance from Y pos
d_y.drop_duplicates() # remove duplicates
nn_y1 = d_y.nsmallest(1) # finds closest row
nn_y2 = d_y.nsmallest(2).iloc[-1] # finds next closest row
print(type(nn_y1))
d_x_y1 = df[df['DesignY'] == nn_y1] # creates list of X at closest row
I think this should provide me with my upper and lower bounds nearest my points.
however when then sorting for X position I get an error
ValueError: Can only compare identically-labeled Series objects
I think this is due to the fact that the type for nn_y1 kicks out <class 'pandas.core.series.Series'>
any advice for how to get the value instead of the series? I could create a dataframe with one element but that seems hacky? I tried some combinations of _get_value() but to no avail.
|
60,390,325
|
How to create several pandas frames with a for loop in python?
|
<p>I am trying to generate 11 models of Decision Tree, for that, one of the steps is to assigns the y values for each one. </p>
<p>Since I have 11 y variables, I would like to assign each one automatically.</p>
<p>The df['P1 d'] is a DataFrame column with the 'dummies' variables. </p>
<pre><code>X2 = df[['1_y', '2_y', '3_y', '4_y', '5_y', '6_y', '7_y', '8_y','9_y', '10_y','11_y', '12_y', '13_y', '14_y']]
for t in range(1,12):
'y.{}'.format(t) = df[['P{} d'.format(t)]]
</code></pre>
<p>The error message is:</p>
<pre><code> File "<ipython-input-83-017c94c44d4b>", line 3
'y.{}'.format(t) = df[['P{} d'.format(t)]]
^
</code></pre>
<p>SyntaxError: can't assign to function call</p>
<p>I know it might be something very simple, but I have not been able to think on anything to overcome this setback.</p>
| 60,390,396
| 2020-02-25T08:27:11.003000
| 1
| null | 0
| 29
|
python|pandas
|
<p><code>'y.{}'.format(t)</code> will return a string, not a variable. You can't assign a DataFrame to a string.</p>
<p>What you could do is:</p>
<ul>
<li>Create a dict with your y{} keys</li>
<li>Put each dataframe to a key</li>
</ul>
<pre class="lang-py prettyprint-override"><code>my_dict = {}
for t in range(1,12):
key = 'y.{}'.format(t)
my_dict[key] = df[['P{} d'.format(t)]]
</code></pre>
<p>You can use a dict comprehension if needed</p>
| 2020-02-25T08:32:37.890000
| 0
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
'y.{}'.format(t) will return a string, not a variable. You can't assign a DataFrame to a string.
What you could do is:
Create a dict with your y{} keys
Put each dataframe to a key
my_dict = {}
for t in range(1,12):
key = 'y.{}'.format(t)
my_dict[key] = df[['P{} d'.format(t)]]
You can use a dict comprehension if needed
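For reference, a sketch of the dict comprehension the answer alludes to, equivalent to the loop above under the same assumptions about df and its 'P{t} d' columns:
my_dict = {'y.{}'.format(t): df[['P{} d'.format(t)]] for t in range(1, 12)}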
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
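A minimal sketch of the three apply flavours described above, using a small invented frame (the names df_sketch, key and val are illustrative only):
df_sketch = pd.DataFrame({"key": list("aabbb"), "val": [1.0, 2.0, 3.0, 4.0, 100.0]})
g = df_sketch.groupby("key")
g["val"].mean()                                         # aggregation: one value per group
g["val"].transform(lambda s: (s - s.mean()) / s.std())  # transformation: like-indexed result
g.filter(lambda grp: len(grp) > 2)                      # filtration: drop groups with two rows or fewer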
Since the set of object instance methods on pandas data structures are generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
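A rough pandas equivalent of the SQL above could be sketched as follows, assuming some_table is a DataFrame holding the SomeTable data (the column names are taken from the SQL snippet, not from a real dataset):
some_table.groupby(["Column1", "Column2"]).agg({"Column3": "mean", "Column4": "sum"})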
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
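As a hedged illustration of the dict (or Series) mapping case, with a frame and mapping invented for this sketch:
speeds = pd.DataFrame({"max_speed": [389.0, 24.0, 80.2]}, index=["falcon", "parrot", "lion"])
label_to_group = {"falcon": "bird", "parrot": "bird", "lion": "mammal"}
speeds.groupby(label_to_group).mean()   # index labels are mapped to group names before splitting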
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
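A small sketch of this behaviour, assuming the df with columns A-D defined above:
gb = df.groupby("A")          # cheap: no splitting happens yet
gb.groups                     # the actual split is computed lazily, on first use
df.groupby("no_such_column")  # raises KeyError immediately: the mapping is validated up front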
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of the dropna argument is True, which means NA values are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns when as_index=True (the default). The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function
Description
mean()
Compute mean of groups
sum()
Compute sum of group values
size()
Compute group sizes
count()
Compute count of group
std()
Standard deviation of groups
var()
Compute variance of groups
sem()
Standard error of the mean of groups
describe()
Generates descriptive statistics
first()
Compute first of group values
last()
Compute last of group values
nth()
Take nth value, or a subset if n is a list
min()
Compute min of group values
max()
Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work;
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
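A brief sketch of the two nth() modes mentioned above, reusing the df from the earlier examples (the exact output shape depends on the pandas version):
g = df.groupby("A")
g.nth(0)        # reducer-like: the first row of each group
g.nth([0, -1])  # filter-like: the first and last rows of each group are kept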
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
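A minimal sketch of the functools.partial() approach, reusing the animals frame from above (the 0.75 quantile is chosen purely for illustration):
import functools
q75 = functools.partial(pd.Series.quantile, q=0.75)
animals.groupby("kind").agg(height_q75=("height", q75))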
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
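A compact sketch of the first three requirements above, reusing the df grouped by "A" from the column-selection examples (the lambdas are illustrative):
g = df.groupby("A")
g[["C", "D"]].transform(lambda x: x - x.mean())   # same-size, like-indexed result per group
g["C"].transform(lambda x: x.iloc[-1])            # scalar per group, broadcast to the group's shape
# note: neither lambda mutates x in place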
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, but returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed object where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it. It can depend on the passed function and
exactly what you are grouping. Thus the grouped column(s) may be included in
the output as well as set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
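For completeness, a sketch of applying this function; the exact values depend on the random data above, so only the resulting structure is indicated:
result = grouped.apply(f)   # combines the per-group DataFrames returned by f
list(result.columns)        # ['original', 'demeaned']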
apply on a Series can operate on a returned value from the applied function
that is itself a Series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
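A minimal sketch of such a user-defined aggregation, assuming the optional numba dependency is installed; the function, frame, and column names here are illustrative, not taken from the examples above:
import numpy as np
import pandas as pd

def group_mean(values, index):
    # Signature must be (values, index): the group's data and its index,
    # each passed as a NumPy array.
    return np.mean(values)

df_nb = pd.DataFrame({"key": ["a", "a", "b"], "data": [1.0, 2.0, 3.0]})
result = df_nb.groupby("key")["data"].agg(group_mean, engine="numba")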
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only interesting over one column (here colname), it may be filtered
before applying the aggregation function.
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a “nuisance” column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible grouper values (observed=False) or only those
values that are actually observed (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The dtype of the returned group index will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
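A minimal sketch of this default behavior (the frame is illustrative); note that recent pandas versions (1.1+) also accept dropna=False in groupby() to keep NA keys as their own group:
import numpy as np
import pandas as pd

df_na = pd.DataFrame({"key": ["a", np.nan, "a"], "value": [1, 2, 3]})
df_na.groupby("key").sum()      # the NaN-keyed row does not form a group
#      value
# key
# a        4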
Grouping with ordered factors#
Categorical variables represented as instances of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
The specification becomes ambiguous when a named index and a column could both
serve as groupers; use the key or level argument of pd.Grouper to disambiguate.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False achieves a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group; in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resample to work on indices that are non-datetimelike, the following procedure can be utilized.
In the following examples, df.index // 5 returns a binary array which is used to determine what gets selected for the groupby operation.
Note
The example below shows how we can downsample by consolidating samples into fewer samples. Here, by using df.index // 5, we group the samples into bins. By applying the std() function, we aggregate the information contained in many samples into a small subset of values (their standard deviation), thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 793
| 1,123
|
How to create several pandas frames with a for loop in python?
I am trying to generate 11 Decision Tree models; for that, one of the steps is to assign the y values for each one.
Since I have 11 y variables, I would like to assign each one automatically.
The df['P1 d'] is a DataFrame column with the 'dummies' variables.
X2 = df[['1_y', '2_y', '3_y', '4_y', '5_y', '6_y', '7_y', '8_y','9_y', '10_y','11_y', '12_y', '13_y', '14_y']]
for t in range(1,12):
'y.{}'.format(t) = df[['P{} d'.format(t)]]
The error message is:
File "<ipython-input-83-017c94c44d4b>", line 3
'y.{}'.format(t) = df[['P{} d'.format(t)]]
^
SyntaxError: can't assign to function call
I know it might be something very simple, but I have not been able to think of anything to overcome this setback.
|
68,355,520
|
Index returns the first letter of the destination value instead of the target value
|
<p>I use this code to pull API Data names from an Exchange and retrieve their equivalent symbol, but my current problem is that I suspect that the index returned is not correct, because when I look up the associated symbol, I get the first letter of the name and not the symbol.</p>
<pre><code>from pycoingecko import CoinGeckoAPI
import pandas as pd
cg = CoinGeckoAPI()
response_list = cg.get_coins_list()
response_list_normalized = pd.json_normalize(response_list)
print('\n--- selected: LIST NORMALIZED ---')
print(response_list_normalized)
response_list_stringed = ''.join(map(str, response_list_normalized['name']))
if crypto_token_name in response_list_stringed:
print('\n--- selected: EXACT MATCHING RESULT ---')
print('Found it!')
position = response_list_stringed.index('Cardano')
print('\n--- position: INDEX ---')
print(position)
symbol = response_list_stringed[position]
print('\n--- position: SYMBOL ---')
print(symbol)
else:
print('\n--- selected: LIST MATCHING RESULT ---')
print('Not found! :(')
</code></pre>
<p>Is the list dimension the cause, or am I pointing at the wrong target? I spent days trying every possible variant to get it to look up the name and retrieve its index and associated symbol.</p>
| 68,384,196
| 2021-07-13T01:45:42.453000
| 1
| null | 0
| 30
|
python|pandas
|
<p>got it fixed with <code>index = response_list_normalized[response_list_normalized['name'] ==crypto_token_name].index.values </code></p>
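<p>A possible sketch of using that lookup to read the matching symbol, following the column names from the question's code (this is a hedged addition, not part of the original answer):</p>
<pre><code>matches = response_list_normalized[response_list_normalized['name'] == crypto_token_name]
if not matches.empty:
    symbol = matches['symbol'].iloc[0]  # the normalized coin list also exposes a 'symbol' column
</code></pre>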
| 2021-07-14T19:43:37.207000
| 0
|
https://pandas.pydata.org/docs/user_guide/io.html
|
IO tools (text, CSV, HDF5, …)#
IO tools (text, CSV, HDF5, …)#
The pandas I/O API is a set of top level reader functions accessed like
pandas.read_csv() that generally return a pandas object. The corresponding
writer functions are object methods that are accessed like
DataFrame.to_csv(). Below is a table containing available readers and
writers.
Format Type   Data Description        Reader           Writer
text          CSV                     read_csv         to_csv
text          Fixed-Width Text File   read_fwf
text          JSON                    read_json        to_json
text          HTML                    read_html        to_html
text          LaTeX                                    Styler.to_latex
text          XML                     read_xml         to_xml
text          Local clipboard         read_clipboard   to_clipboard
binary        MS Excel                read_excel       to_excel
binary        OpenDocument            read_excel
binary        HDF5 Format             read_hdf         to_hdf
binary        Feather Format          read_feather     to_feather
binary        Parquet Format          read_parquet     to_parquet
binary        ORC Format              read_orc         to_orc
binary        Stata                   read_stata       to_stata
binary        SAS                     read_sas
binary        SPSS                    read_spss
binary        Python Pickle Format    read_pickle      to_pickle
SQL           SQL                     read_sql         to_sql
SQL           Google BigQuery         read_gbq         to_gbq
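As a quick, hedged illustration of the reader/writer pairing above (the file name is made up):
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
df.to_csv("example.csv", index=False)        # writer: a DataFrame method
round_tripped = pd.read_csv("example.csv")   # reader: a top-level pandas function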
Here is an informal performance comparison for some of these IO methods.
Note
For examples that use the StringIO class, make sure you import it
with from io import StringIO for Python 3.
CSV & text files#
The workhorse function for reading text files (a.k.a. flat files) is
read_csv(). See the cookbook for some advanced strategies.
Parsing options#
read_csv() accepts the following common arguments:
Basic#
filepath_or_buffervariousEither a path to a file (a str, pathlib.Path,
or py:py._path.local.LocalPath), URL (including http, ftp, and S3
locations), or any object with a read() method (such as an open file or
StringIO).
sepstr, defaults to ',' for read_csv(), \t for read_table()Delimiter to use. If sep is None, the C engine cannot automatically detect
the separator, but the Python parsing engine can, meaning the latter will be
used and automatically detect the separator by Python’s builtin sniffer tool,
csv.Sniffer. In addition, separators longer than 1 character and
different from '\s+' will be interpreted as regular expressions and
will also force the use of the Python parsing engine. Note that regex
delimiters are prone to ignoring quoted data. Regex example: '\\r\\t'.
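For example, a short sketch of a multi-character separator, which is interpreted as a regular expression and handled by the python engine (data is illustrative):
from io import StringIO
import pandas as pd

data = "a||b||c\n1||2||3"
pd.read_csv(StringIO(data), sep=r"\|\|", engine="python")
#    a  b  c
# 0  1  2  3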
delimiterstr, default NoneAlternative argument name for sep.
delim_whitespaceboolean, default FalseSpecifies whether or not whitespace (e.g. ' ' or '\t')
will be used as the delimiter. Equivalent to setting sep='\s+'.
If this option is set to True, nothing should be passed in for the
delimiter parameter.
Column and index locations and names#
headerint or list of ints, default 'infer'Row number(s) to use as the column names, and the start of the
data. Default behavior is to infer the column names: if no names are
passed the behavior is identical to header=0 and column names
are inferred from the first line of the file, if column names are
passed explicitly then the behavior is identical to
header=None. Explicitly pass header=0 to be able to replace
existing names.
The header can be a list of ints that specify row locations
for a MultiIndex on the columns e.g. [0,1,3]. Intervening rows
that are not specified will be skipped (e.g. 2 in this example is
skipped). Note that this parameter ignores commented lines and empty
lines if skip_blank_lines=True, so header=0 denotes the first
line of data rather than the first line of the file.
namesarray-like, default NoneList of column names to use. If file contains no header row, then you should
explicitly pass header=None. Duplicates in this list are not allowed.
index_colint, str, sequence of int / str, or False, optional, default NoneColumn(s) to use as the row labels of the DataFrame, either given as
string name or column index. If a sequence of int / str is given, a
MultiIndex is used.
Note
index_col=False can be used to force pandas to not use the first
column as the index, e.g. when you have a malformed file with delimiters at
the end of each line.
The default value of None instructs pandas to guess. If the number of
fields in the column header row is equal to the number of fields in the body
of the data file, then a default index is used. If it is larger, then
the first columns are used as index so that the remaining number of fields in
the body are equal to the number of fields in the header.
The first row after the header is used to determine the number of columns,
which will go into the index. If the subsequent rows contain fewer columns
than the first row, they are filled with NaN.
This can be avoided through usecols. This ensures that the columns are
taken as is and the trailing data are ignored.
usecolslist-like or callable, default NoneReturn a subset of the columns. If list-like, all elements must either
be positional (i.e. integer indices into the document columns) or strings
that correspond to column names provided either by the user in names or
inferred from the document header row(s). If names are given, the document
header row(s) are not taken into account. For example, a valid list-like
usecols parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].
Element order is ignored, so usecols=[0, 1] is the same as [1, 0]. To
instantiate a DataFrame from data with element order preserved use
pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']] for columns
in ['foo', 'bar'] order or
pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']] for
['bar', 'foo'] order.
If callable, the callable function will be evaluated against the column names,
returning names where the callable function evaluates to True:
In [1]: import pandas as pd
In [2]: from io import StringIO
In [3]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
In [4]: pd.read_csv(StringIO(data))
Out[4]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [5]: pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ["COL1", "COL3"])
Out[5]:
col1 col3
0 a 1
1 a 2
2 c 3
Using this parameter results in much faster parsing time and lower memory usage
when using the c engine. The Python engine loads the data first before deciding
which columns to drop.
squeezeboolean, default FalseIf the parsed data only contains one column then return a Series.
Deprecated since version 1.4.0: Append .squeeze("columns") to the call to the reader function (for example, read_csv) to squeeze
the data.
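A small sketch of the suggested replacement (the data here is illustrative):
from io import StringIO
import pandas as pd

data = "col_1\n1\n2"
ser = pd.read_csv(StringIO(data)).squeeze("columns")  # single-column frame -> Series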
prefixstr, default NonePrefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, …
Deprecated since version 1.4.0: Use a list comprehension on the DataFrame’s columns after calling read_csv.
In [6]: data = "col1,col2,col3\na,b,1"
In [7]: df = pd.read_csv(StringIO(data))
In [8]: df.columns = [f"pre_{col}" for col in df.columns]
In [9]: df
Out[9]:
pre_col1 pre_col2 pre_col3
0 a b 1
mangle_dupe_colsboolean, default TrueDuplicate columns will be specified as ‘X’, ‘X.1’…’X.N’, rather than ‘X’…’X’.
Passing in False will cause data to be overwritten if there are duplicate
names in the columns.
Deprecated since version 1.5.0: The argument was never implemented, and a new argument where the
renaming pattern can be specified will be added instead.
General parsing configuration#
dtypeType name or dict of column -> type, default NoneData type for data or columns. E.g. {'a': np.float64, 'b': np.int32, 'c': 'Int64'}
Use str or object together with suitable na_values settings to preserve
and not interpret dtype. If converters are specified, they will be applied INSTEAD
of dtype conversion.
New in version 1.5.0: Support for defaultdict was added. Specify a defaultdict as input where
the default determines the dtype of the columns which are not explicitly
listed.
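A hedged sketch of the defaultdict form, assuming pandas >= 1.5 (columns not listed explicitly fall back to the default dtype):
from collections import defaultdict
from io import StringIO
import pandas as pd

data = "a,b,c\n1,2,3"
dtypes = defaultdict(lambda: "float64", a="Int64")
pd.read_csv(StringIO(data), dtype=dtypes).dtypes
# "a" is Int64; "b" and "c" fall back to float64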
engine{'c', 'python', 'pyarrow'}Parser engine to use. The C and pyarrow engines are faster, while the python engine
is currently more feature-complete. Multithreading is currently only supported by
the pyarrow engine.
New in version 1.4.0: The “pyarrow” engine was added as an experimental engine, and some features
are unsupported, or may not work correctly, with this engine.
convertersdict, default NoneDict of functions for converting values in certain columns. Keys can either be
integers or column labels.
true_valueslist, default NoneValues to consider as True.
false_valueslist, default NoneValues to consider as False.
skipinitialspaceboolean, default FalseSkip spaces after delimiter.
skiprowslist-like or integer, default NoneLine numbers to skip (0-indexed) or number of lines to skip (int) at the start
of the file.
If callable, the callable function will be evaluated against the row
indices, returning True if the row should be skipped and False otherwise:
In [10]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
In [11]: pd.read_csv(StringIO(data))
Out[11]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [12]: pd.read_csv(StringIO(data), skiprows=lambda x: x % 2 != 0)
Out[12]:
col1 col2 col3
0 a b 2
skipfooterint, default 0Number of lines at bottom of file to skip (unsupported with engine=’c’).
nrowsint, default NoneNumber of rows of file to read. Useful for reading pieces of large files.
low_memoryboolean, default TrueInternally process the file in chunks, resulting in lower memory use
while parsing, but possibly mixed type inference. To ensure no mixed
types either set False, or specify the type with the dtype parameter.
Note that the entire file is read into a single DataFrame regardless,
use the chunksize or iterator parameter to return the data in chunks.
(Only valid with C parser)
memory_mapboolean, default FalseIf a filepath is provided for filepath_or_buffer, map the file object
directly onto memory and access the data directly from there. Using this
option can improve performance because there is no longer any I/O overhead.
NA and missing data handling#
na_valuesscalar, str, list-like, or dict, default NoneAdditional strings to recognize as NA/NaN. If dict passed, specific per-column
NA values. See na values const below
for a list of the values interpreted as NaN by default.
keep_default_naboolean, default TrueWhether or not to include the default NaN values when parsing the data.
Depending on whether na_values is passed in, the behavior is as follows:
If keep_default_na is True, and na_values are specified, na_values
is appended to the default NaN values used for parsing.
If keep_default_na is True, and na_values are not specified, only
the default NaN values are used for parsing.
If keep_default_na is False, and na_values are specified, only
the NaN values specified na_values are used for parsing.
If keep_default_na is False, and na_values are not specified, no
strings will be parsed as NaN.
Note that if na_filter is passed in as False, the keep_default_na and
na_values parameters will be ignored.
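A short sketch of the third case above, where only the explicitly listed marker is treated as missing (data is illustrative):
from io import StringIO
import pandas as pd

data = "a,b\nNA,n/a"
pd.read_csv(StringIO(data), keep_default_na=False, na_values=["n/a"])
#     a    b
# 0  NA  NaN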
na_filterboolean, default TrueDetect missing value markers (empty strings and the value of na_values). In
data without any NAs, passing na_filter=False can improve the performance
of reading a large file.
verboseboolean, default FalseIndicate number of NA values placed in non-numeric columns.
skip_blank_linesboolean, default TrueIf True, skip over blank lines rather than interpreting as NaN values.
Datetime handling#
parse_datesboolean or list of ints or names or list of lists or dict, default False.
If True -> try parsing the index.
If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date
column.
If [[1, 3]] -> combine columns 1 and 3 and parse as a single date
column.
If {'foo': [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’.
Note
A fast-path exists for iso8601-formatted dates.
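A minimal sketch of the combine-columns form (data is illustrative):
from io import StringIO
import pandas as pd

data = "date,time,value\n2023-01-01,10:00,1\n2023-01-02,11:30,2"
pd.read_csv(StringIO(data), parse_dates={"timestamp": ["date", "time"]})
#             timestamp  value
# 0 2023-01-01 10:00:00      1
# 1 2023-01-02 11:30:00      2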
infer_datetime_formatboolean, default FalseIf True and parse_dates is enabled for a column, attempt to infer the
datetime format to speed up the processing.
keep_date_colboolean, default FalseIf True and parse_dates specifies combining multiple columns then keep the
original columns.
date_parserfunction, default NoneFunction to use for converting a sequence of string columns to an array of
datetime instances. The default uses dateutil.parser.parser to do the
conversion. pandas will try to call date_parser in three different ways,
advancing to the next if an exception occurs: 1) Pass one or more arrays (as
defined by parse_dates) as arguments; 2) concatenate (row-wise) the string
values from the columns defined by parse_dates into a single array and pass
that; and 3) call date_parser once for each row using one or more strings
(corresponding to the columns defined by parse_dates) as arguments.
dayfirstboolean, default FalseDD/MM format dates, international and European format.
cache_datesboolean, default TrueIf True, use a cache of unique, converted dates to apply the datetime
conversion. May produce significant speed-up when parsing duplicate
date strings, especially ones with timezone offsets.
New in version 0.25.0.
Iteration#
iteratorboolean, default FalseReturn TextFileReader object for iteration or getting chunks with
get_chunk().
chunksizeint, default NoneReturn TextFileReader object for iteration. See iterating and chunking below.
Quoting, compression, and file format#
compression{'infer', 'gzip', 'bz2', 'zip', 'xz', 'zstd', None, dict}, default 'infer'For on-the-fly decompression of on-disk data. If ‘infer’, then use gzip,
bz2, zip, xz, or zstandard if filepath_or_buffer is path-like ending in ‘.gz’, ‘.bz2’,
‘.zip’, ‘.xz’, ‘.zst’, respectively, and no decompression otherwise. If using ‘zip’,
the ZIP file must contain only one data file to be read in.
Set to None for no decompression. Can also be a dict with key 'method'
set to one of {'zip', 'gzip', 'bz2', 'zstd'} and other key-value pairs are
forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, or zstandard.ZstdDecompressor.
As an example, the following could be passed for faster compression and to
create a reproducible gzip archive:
compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}.
Changed in version 1.1.0: dict option extended to support gzip and bz2.
Changed in version 1.2.0: Previous versions forwarded dict entries for ‘gzip’ to gzip.open.
thousandsstr, default NoneThousands separator.
decimalstr, default '.'Character to recognize as decimal point. E.g. use ',' for European data.
float_precisionstring, default NoneSpecifies which converter the C engine should use for floating-point values.
The options are None for the ordinary converter, high for the
high-precision converter, and round_trip for the round-trip converter.
lineterminatorstr (length 1), default NoneCharacter to break file into lines. Only valid with C parser.
quotecharstr (length 1)The character used to denote the start and end of a quoted item. Quoted items
can include the delimiter and it will be ignored.
quotingint or csv.QUOTE_* instance, default 0Control field quoting behavior per csv.QUOTE_* constants. Use one of
QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or
QUOTE_NONE (3).
doublequoteboolean, default TrueWhen quotechar is specified and quoting is not QUOTE_NONE,
indicate whether or not to interpret two consecutive quotechar elements
inside a field as a single quotechar element.
escapecharstr (length 1), default NoneOne-character string used to escape delimiter when quoting is QUOTE_NONE.
commentstr, default NoneIndicates remainder of line should not be parsed. If found at the beginning of
a line, the line will be ignored altogether. This parameter must be a single
character. Like empty lines (as long as skip_blank_lines=True), fully
commented lines are ignored by the parameter header but not by skiprows.
For example, if comment='#', parsing ‘#empty\na,b,c\n1,2,3’ with
header=0 will result in ‘a,b,c’ being treated as the header.
encodingstr, default NoneEncoding to use for UTF when reading/writing (e.g. 'utf-8'). List of
Python standard encodings.
dialectstr or csv.Dialect instance, default NoneIf provided, this parameter will override values (default or not) for the
following parameters: delimiter, doublequote, escapechar,
skipinitialspace, quotechar, and quoting. If it is necessary to
override values, a ParserWarning will be issued. See csv.Dialect
documentation for more details.
Error handling#
error_bad_linesboolean, optional, default NoneLines with too many fields (e.g. a csv line with too many commas) will by
default cause an exception to be raised, and no DataFrame will be
returned. If False, then these “bad lines” will be dropped from the
DataFrame that is returned. See bad lines
below.
Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon
encountering a bad line.
warn_bad_linesboolean, optional, default NoneIf error_bad_lines is False, and warn_bad_lines is True, a warning for
each “bad line” will be output.
Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon
encountering a bad line.
on_bad_lines(‘error’, ‘warn’, ‘skip’), default ‘error’Specifies what to do upon encountering a bad line (a line with too many fields).
Allowed values are :
‘error’, raise a ParserError when a bad line is encountered.
‘warn’, print a warning when a bad line is encountered and skip that line.
‘skip’, skip bad lines without raising or warning when they are encountered.
New in version 1.3.0.
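A minimal sketch of the ‘skip’ option, assuming pandas >= 1.3 (the second data row has an extra field and is dropped):
from io import StringIO
import pandas as pd

data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"
pd.read_csv(StringIO(data), on_bad_lines="skip")
#    a  b   c
# 0  1  2   3
# 1  8  9  10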
Specifying column data types#
You can indicate the data type for the whole DataFrame or individual
columns:
In [13]: import numpy as np
In [14]: data = "a,b,c,d\n1,2,3,4\n5,6,7,8\n9,10,11"
In [15]: print(data)
a,b,c,d
1,2,3,4
5,6,7,8
9,10,11
In [16]: df = pd.read_csv(StringIO(data), dtype=object)
In [17]: df
Out[17]:
a b c d
0 1 2 3 4
1 5 6 7 8
2 9 10 11 NaN
In [18]: df["a"][0]
Out[18]: '1'
In [19]: df = pd.read_csv(StringIO(data), dtype={"b": object, "c": np.float64, "d": "Int64"})
In [20]: df.dtypes
Out[20]:
a int64
b object
c float64
d Int64
dtype: object
Fortunately, pandas offers more than one way to ensure that your column(s)
contain only one dtype. If you’re unfamiliar with these concepts, you can
see here to learn more about dtypes, and
here to learn more about object conversion in
pandas.
For instance, you can use the converters argument
of read_csv():
In [21]: data = "col_1\n1\n2\n'A'\n4.22"
In [22]: df = pd.read_csv(StringIO(data), converters={"col_1": str})
In [23]: df
Out[23]:
col_1
0 1
1 2
2 'A'
3 4.22
In [24]: df["col_1"].apply(type).value_counts()
Out[24]:
<class 'str'> 4
Name: col_1, dtype: int64
Or you can use the to_numeric() function to coerce the
dtypes after reading in the data,
In [25]: df2 = pd.read_csv(StringIO(data))
In [26]: df2["col_1"] = pd.to_numeric(df2["col_1"], errors="coerce")
In [27]: df2
Out[27]:
col_1
0 1.00
1 2.00
2 NaN
3 4.22
In [28]: df2["col_1"].apply(type).value_counts()
Out[28]:
<class 'float'> 4
Name: col_1, dtype: int64
which will convert all valid parsing to floats, leaving the invalid parsing
as NaN.
Ultimately, how you deal with reading in columns containing mixed dtypes
depends on your specific needs. In the case above, if you wanted to NaN out
the data anomalies, then to_numeric() is probably your best option.
However, if you wanted for all the data to be coerced, no matter the type, then
using the converters argument of read_csv() would certainly be
worth trying.
Note
In some cases, reading in abnormal data with columns containing mixed dtypes
will result in an inconsistent dataset. If you rely on pandas to infer the
dtypes of your columns, the parsing engine will go and infer the dtypes for
different chunks of the data, rather than the whole dataset at once. Consequently,
you can end up with column(s) with mixed dtypes. For example,
In [29]: col_1 = list(range(500000)) + ["a", "b"] + list(range(500000))
In [30]: df = pd.DataFrame({"col_1": col_1})
In [31]: df.to_csv("foo.csv")
In [32]: mixed_df = pd.read_csv("foo.csv")
In [33]: mixed_df["col_1"].apply(type).value_counts()
Out[33]:
<class 'int'> 737858
<class 'str'> 262144
Name: col_1, dtype: int64
In [34]: mixed_df["col_1"].dtype
Out[34]: dtype('O')
will result with mixed_df containing an int dtype for certain chunks
of the column, and str for others due to the mixed dtypes from the
data that was read in. It is important to note that the overall column will be
marked with a dtype of object, which is used for columns with mixed dtypes.
Specifying categorical dtype#
Categorical columns can be parsed directly by specifying dtype='category' or
dtype=CategoricalDtype(categories, ordered).
In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
In [36]: pd.read_csv(StringIO(data))
Out[36]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [37]: pd.read_csv(StringIO(data)).dtypes
Out[37]:
col1 object
col2 object
col3 int64
dtype: object
In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
Out[38]:
col1 category
col2 category
col3 category
dtype: object
Individual columns can be parsed as a Categorical using a dict
specification:
In [39]: pd.read_csv(StringIO(data), dtype={"col1": "category"}).dtypes
Out[39]:
col1 category
col2 object
col3 int64
dtype: object
Specifying dtype='category' will result in an unordered Categorical
whose categories are the unique values observed in the data. For more
control on the categories and order, create a
CategoricalDtype ahead of time, and pass that for
that column’s dtype.
In [40]: from pandas.api.types import CategoricalDtype
In [41]: dtype = CategoricalDtype(["d", "c", "b", "a"], ordered=True)
In [42]: pd.read_csv(StringIO(data), dtype={"col1": dtype}).dtypes
Out[42]:
col1 category
col2 object
col3 int64
dtype: object
When using dtype=CategoricalDtype, “unexpected” values outside of
dtype.categories are treated as missing values.
In [43]: dtype = CategoricalDtype(["a", "b", "d"]) # No 'c'
In [44]: pd.read_csv(StringIO(data), dtype={"col1": dtype}).col1
Out[44]:
0 a
1 a
2 NaN
Name: col1, dtype: category
Categories (3, object): ['a', 'b', 'd']
This matches the behavior of Categorical.set_categories().
Note
With dtype='category', the resulting categories will always be parsed
as strings (object dtype). If the categories are numeric they can be
converted using the to_numeric() function, or as appropriate, another
converter such as to_datetime().
When dtype is a CategoricalDtype with homogeneous categories (
all numeric, all datetimes, etc.), the conversion is done automatically.
In [45]: df = pd.read_csv(StringIO(data), dtype="category")
In [46]: df.dtypes
Out[46]:
col1 category
col2 category
col3 category
dtype: object
In [47]: df["col3"]
Out[47]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, object): ['1', '2', '3']
In [48]: new_categories = pd.to_numeric(df["col3"].cat.categories)
In [49]: df["col3"] = df["col3"].cat.rename_categories(new_categories)
In [50]: df["col3"]
Out[50]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, int64): [1, 2, 3]
Naming and using columns#
Handling column names#
A file may or may not have a header row. pandas assumes the first row should be
used as the column names:
In [51]: data = "a,b,c\n1,2,3\n4,5,6\n7,8,9"
In [52]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [53]: pd.read_csv(StringIO(data))
Out[53]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
By specifying the names argument in conjunction with header you can
indicate other names to use and whether or not to throw away the header row (if
any):
In [54]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [55]: pd.read_csv(StringIO(data), names=["foo", "bar", "baz"], header=0)
Out[55]:
foo bar baz
0 1 2 3
1 4 5 6
2 7 8 9
In [56]: pd.read_csv(StringIO(data), names=["foo", "bar", "baz"], header=None)
Out[56]:
foo bar baz
0 a b c
1 1 2 3
2 4 5 6
3 7 8 9
If the header is in a row other than the first, pass the row number to
header. This will skip the preceding rows:
In [57]: data = "skip this skip it\na,b,c\n1,2,3\n4,5,6\n7,8,9"
In [58]: pd.read_csv(StringIO(data), header=1)
Out[58]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
Note
Default behavior is to infer the column names: if no names are
passed the behavior is identical to header=0 and column names
are inferred from the first non-blank line of the file, if column
names are passed explicitly then the behavior is identical to
header=None.
Duplicate names parsing#
Deprecated since version 1.5.0: mangle_dupe_cols was never implemented, and a new argument where the
renaming pattern can be specified will be added instead.
If the file or header contains duplicate names, pandas will by default
distinguish between them so as to prevent overwriting data:
In [59]: data = "a,b,a\n0,1,2\n3,4,5"
In [60]: pd.read_csv(StringIO(data))
Out[60]:
a b a.1
0 0 1 2
1 3 4 5
There is no more duplicate data because mangle_dupe_cols=True by default,
which modifies a series of duplicate columns ‘X’, …, ‘X’ to become
‘X’, ‘X.1’, …, ‘X.N’.
Filtering columns (usecols)#
The usecols argument allows you to select any subset of the columns in a
file, either using the column names, position numbers or a callable:
In [61]: data = "a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz"
In [62]: pd.read_csv(StringIO(data))
Out[62]:
a b c d
0 1 2 3 foo
1 4 5 6 bar
2 7 8 9 baz
In [63]: pd.read_csv(StringIO(data), usecols=["b", "d"])
Out[63]:
b d
0 2 foo
1 5 bar
2 8 baz
In [64]: pd.read_csv(StringIO(data), usecols=[0, 2, 3])
Out[64]:
a c d
0 1 3 foo
1 4 6 bar
2 7 9 baz
In [65]: pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ["A", "C"])
Out[65]:
a c
0 1 3
1 4 6
2 7 9
The usecols argument can also be used to specify which columns not to
use in the final result:
In [66]: pd.read_csv(StringIO(data), usecols=lambda x: x not in ["a", "c"])
Out[66]:
b d
0 2 foo
1 5 bar
2 8 baz
In this case, the callable is specifying that we exclude the “a” and “c”
columns from the output.
Comments and empty lines#
Ignoring line comments and empty lines#
If the comment parameter is specified, then completely commented lines will
be ignored. By default, completely blank lines will be ignored as well.
In [67]: data = "\na,b,c\n \n# commented line\n1,2,3\n\n4,5,6"
In [68]: print(data)
a,b,c
# commented line
1,2,3
4,5,6
In [69]: pd.read_csv(StringIO(data), comment="#")
Out[69]:
a b c
0 1 2 3
1 4 5 6
If skip_blank_lines=False, then read_csv will not ignore blank lines:
In [70]: data = "a,b,c\n\n1,2,3\n\n\n4,5,6"
In [71]: pd.read_csv(StringIO(data), skip_blank_lines=False)
Out[71]:
a b c
0 NaN NaN NaN
1 1.0 2.0 3.0
2 NaN NaN NaN
3 NaN NaN NaN
4 4.0 5.0 6.0
Warning
The presence of ignored lines might create ambiguities involving line numbers;
the parameter header uses row numbers (ignoring commented/empty
lines), while skiprows uses line numbers (including commented/empty lines):
In [72]: data = "#comment\na,b,c\nA,B,C\n1,2,3"
In [73]: pd.read_csv(StringIO(data), comment="#", header=1)
Out[73]:
A B C
0 1 2 3
In [74]: data = "A,B,C\n#comment\na,b,c\n1,2,3"
In [75]: pd.read_csv(StringIO(data), comment="#", skiprows=2)
Out[75]:
a b c
0 1 2 3
If both header and skiprows are specified, header will be
relative to the end of skiprows. For example:
In [76]: data = (
....: "# empty\n"
....: "# second empty line\n"
....: "# third emptyline\n"
....: "X,Y,Z\n"
....: "1,2,3\n"
....: "A,B,C\n"
....: "1,2.,4.\n"
....: "5.,NaN,10.0\n"
....: )
....:
In [77]: print(data)
# empty
# second empty line
# third emptyline
X,Y,Z
1,2,3
A,B,C
1,2.,4.
5.,NaN,10.0
In [78]: pd.read_csv(StringIO(data), comment="#", skiprows=4, header=1)
Out[78]:
A B C
0 1.0 2.0 4.0
1 5.0 NaN 10.0
Comments#
Sometimes comments or meta data may be included in a file:
In [79]: print(open("tmp.csv").read())
ID,level,category
Patient1,123000,x # really unpleasant
Patient2,23000,y # wouldn't take his medicine
Patient3,1234018,z # awesome
By default, the parser includes the comments in the output:
In [80]: df = pd.read_csv("tmp.csv")
In [81]: df
Out[81]:
ID level category
0 Patient1 123000 x # really unpleasant
1 Patient2 23000 y # wouldn't take his medicine
2 Patient3 1234018 z # awesome
We can suppress the comments using the comment keyword:
In [82]: df = pd.read_csv("tmp.csv", comment="#")
In [83]: df
Out[83]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
Dealing with Unicode data#
The encoding argument should be used for encoded unicode data, which will
result in byte strings being decoded to unicode in the result:
In [84]: from io import BytesIO
In [85]: data = b"word,length\n" b"Tr\xc3\xa4umen,7\n" b"Gr\xc3\xbc\xc3\x9fe,5"
In [86]: data = data.decode("utf8").encode("latin-1")
In [87]: df = pd.read_csv(BytesIO(data), encoding="latin-1")
In [88]: df
Out[88]:
word length
0 Träumen 7
1 Grüße 5
In [89]: df["word"][1]
Out[89]: 'Grüße'
Some formats which encode all characters as multiple bytes, like UTF-16, won’t
parse correctly at all without specifying the encoding. Full list of Python
standard encodings.
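For example, a UTF-16 encoded buffer parses correctly only when the encoding is
given explicitly. The following is a minimal sketch using an in-memory buffer
(the sample strings are illustrative):
from io import BytesIO

import pandas as pd

# Encode a small CSV as UTF-16; this cannot be inferred automatically.
raw = "word,length\nTräumen,7\nGrüße,5".encode("utf-16")

# Without encoding="utf-16" the multi-byte data would not parse correctly.
df = pd.read_csv(BytesIO(raw), encoding="utf-16")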
Index columns and trailing delimiters#
If a file has one more column of data than the number of column names, the
first column will be used as the DataFrame’s row names:
In [90]: data = "a,b,c\n4,apple,bat,5.7\n8,orange,cow,10"
In [91]: pd.read_csv(StringIO(data))
Out[91]:
a b c
4 apple bat 5.7
8 orange cow 10.0
In [92]: data = "index,a,b,c\n4,apple,bat,5.7\n8,orange,cow,10"
In [93]: pd.read_csv(StringIO(data), index_col=0)
Out[93]:
a b c
index
4 apple bat 5.7
8 orange cow 10.0
Ordinarily, you can achieve this behavior using the index_col option.
There are some exception cases when a file has been prepared with delimiters at
the end of each data line, confusing the parser. To explicitly disable the
index column inference and discard the last column, pass index_col=False:
In [94]: data = "a,b,c\n4,apple,bat,\n8,orange,cow,"
In [95]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,
In [96]: pd.read_csv(StringIO(data))
Out[96]:
a b c
4 apple bat NaN
8 orange cow NaN
In [97]: pd.read_csv(StringIO(data), index_col=False)
Out[97]:
a b c
0 4 apple bat
1 8 orange cow
If a subset of data is being parsed using the usecols option, the
index_col specification is based on that subset, not the original data.
In [98]: data = "a,b,c\n4,apple,bat,\n8,orange,cow,"
In [99]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,
In [100]: pd.read_csv(StringIO(data), usecols=["b", "c"])
Out[100]:
b c
4 bat NaN
8 cow NaN
In [101]: pd.read_csv(StringIO(data), usecols=["b", "c"], index_col=0)
Out[101]:
b c
4 bat NaN
8 cow NaN
Date Handling#
Specifying date columns#
To better facilitate working with datetime data, read_csv()
uses the keyword arguments parse_dates and date_parser
to allow users to specify a variety of columns and date/time formats to turn the
input text data into datetime objects.
The simplest case is to just pass in parse_dates=True:
In [102]: with open("foo.csv", mode="w") as f:
.....: f.write("date,A,B,C\n20090101,a,1,2\n20090102,b,3,4\n20090103,c,4,5")
.....:
# Use a column as an index, and parse it as dates.
In [103]: df = pd.read_csv("foo.csv", index_col=0, parse_dates=True)
In [104]: df
Out[104]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
# These are Python datetime objects
In [105]: df.index
Out[105]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype='datetime64[ns]', name='date', freq=None)
It is often the case that we may want to store date and time data separately,
or store various date fields separately. The parse_dates keyword can be
used to specify a combination of columns to parse the dates and/or times from.
You can specify a list of column lists to parse_dates, the resulting date
columns will be prepended to the output (so as to not affect the existing column
order) and the new column names will be the concatenation of the component
column names:
In [106]: data = (
.....: "KORD,19990127, 19:00:00, 18:56:00, 0.8100\n"
.....: "KORD,19990127, 20:00:00, 19:56:00, 0.0100\n"
.....: "KORD,19990127, 21:00:00, 20:56:00, -0.5900\n"
.....: "KORD,19990127, 21:00:00, 21:18:00, -0.9900\n"
.....: "KORD,19990127, 22:00:00, 21:56:00, -0.5900\n"
.....: "KORD,19990127, 23:00:00, 22:56:00, -0.5900"
.....: )
.....:
In [107]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [108]: df = pd.read_csv("tmp.csv", header=None, parse_dates=[[1, 2], [1, 3]])
In [109]: df
Out[109]:
1_2 1_3 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
By default the parser removes the component date columns, but you can choose
to retain them via the keep_date_col keyword:
In [110]: df = pd.read_csv(
.....: "tmp.csv", header=None, parse_dates=[[1, 2], [1, 3]], keep_date_col=True
.....: )
.....:
In [111]: df
Out[111]:
1_2 1_3 0 ... 2 3 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD ... 19:00:00 18:56:00 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD ... 20:00:00 19:56:00 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD ... 21:00:00 20:56:00 -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD ... 21:00:00 21:18:00 -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD ... 22:00:00 21:56:00 -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD ... 23:00:00 22:56:00 -0.59
[6 rows x 7 columns]
Note that if you wish to combine multiple columns into a single date column, a
nested list must be used. In other words, parse_dates=[1, 2] indicates that
the second and third columns should each be parsed as separate date columns
while parse_dates=[[1, 2]] means the two columns should be parsed into a
single column.
You can also use a dict to specify custom names for the resulting combined columns:
In [112]: date_spec = {"nominal": [1, 2], "actual": [1, 3]}
In [113]: df = pd.read_csv("tmp.csv", header=None, parse_dates=date_spec)
In [114]: df
Out[114]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
It is important to remember that if multiple text columns are to be parsed into
a single date column, then a new column is prepended to the data. The index_col
specification is based off of this new set of columns rather than the original
data columns:
In [115]: date_spec = {"nominal": [1, 2], "actual": [1, 3]}
In [116]: df = pd.read_csv(
.....: "tmp.csv", header=None, parse_dates=date_spec, index_col=0
.....: ) # index is the nominal column
.....:
In [117]: df
Out[117]:
actual 0 4
nominal
1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
Note
If a column or index contains an unparsable date, the entire column or
index will be returned unaltered as an object data type. For non-standard
datetime parsing, use to_datetime() after pd.read_csv.
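As a minimal sketch of that approach (the column layout and format string are
illustrative):
from io import StringIO

import pandas as pd

data = "date,value\n31--12--2009,1\n01--01--2010,2"

# The non-standard "DD--MM--YYYY" strings are left as object dtype by read_csv ...
df = pd.read_csv(StringIO(data))

# ... and are converted afterwards with an explicit format.
df["date"] = pd.to_datetime(df["date"], format="%d--%m--%Y")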
Note
read_csv has a fast_path for parsing datetime strings in iso8601 format,
e.g “2000-01-01T00:01:02+00:00” and similar variations. If you can arrange
for your data to store datetimes in this format, load times will be
significantly faster, ~20x has been observed.
Date parsing functions#
Finally, the parser allows you to specify a custom date_parser function to
take full advantage of the flexibility of the date parsing API:
In [118]: df = pd.read_csv(
.....: "tmp.csv", header=None, parse_dates=date_spec, date_parser=pd.to_datetime
.....: )
.....:
In [119]: df
Out[119]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
pandas will try to call the date_parser function in three different ways. If
an exception is raised, the next one is tried:
date_parser is first called with one or more arrays as arguments,
as defined using parse_dates (e.g., date_parser(['2013', '2013'], ['1', '2'])).
If #1 fails, date_parser is called with all the columns
concatenated row-wise into a single array (e.g., date_parser(['2013 1', '2013 2'])).
If #2 fails, date_parser is called once for every row with one or more
string arguments from the columns indicated with parse_dates (e.g., date_parser('2013', '1')).
Note that performance-wise, you should try these methods of parsing dates in order:
Try to infer the format using infer_datetime_format=True (see section below).
If you know the format, use pd.to_datetime():
date_parser=lambda x: pd.to_datetime(x, format=...); a short sketch follows this list.
If you have a really non-standard format, use a custom date_parser function.
For optimal performance, this should be vectorized, i.e., it should accept arrays
as arguments.
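A short, self-contained sketch of the second approach (passing a known format
through to_datetime(); the data and format are illustrative):
from io import StringIO

import pandas as pd

data = "ts,value\n2011-12-30 00:00:00,1\n2011-12-31 00:00:00,2"

# Supplying the format avoids inferring the datetime layout for every value.
df = pd.read_csv(
    StringIO(data),
    parse_dates=["ts"],
    date_parser=lambda col: pd.to_datetime(col, format="%Y-%m-%d %H:%M:%S"),
)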
Parsing a CSV with mixed timezones#
pandas cannot natively represent a column or index with mixed timezones. If your CSV
file contains columns with a mixture of timezones, the default result will be
an object-dtype column with strings, even with parse_dates.
In [120]: content = """\
.....: a
.....: 2000-01-01T00:00:00+05:00
.....: 2000-01-01T00:00:00+06:00"""
.....:
In [121]: df = pd.read_csv(StringIO(content), parse_dates=["a"])
In [122]: df["a"]
Out[122]:
0 2000-01-01 00:00:00+05:00
1 2000-01-01 00:00:00+06:00
Name: a, dtype: object
To parse the mixed-timezone values as a datetime column, pass a partially-applied
to_datetime() with utc=True as the date_parser.
In [123]: df = pd.read_csv(
.....: StringIO(content),
.....: parse_dates=["a"],
.....: date_parser=lambda col: pd.to_datetime(col, utc=True),
.....: )
.....:
In [124]: df["a"]
Out[124]:
0 1999-12-31 19:00:00+00:00
1 1999-12-31 18:00:00+00:00
Name: a, dtype: datetime64[ns, UTC]
Inferring datetime format#
If you have parse_dates enabled for some or all of your columns, and your
datetime strings are all formatted the same way, you may get a large speed
up by setting infer_datetime_format=True. If set, pandas will attempt
to guess the format of your datetime strings, and then use a faster means
of parsing the strings. 5-10x parsing speeds have been observed. pandas
will fallback to the usual parsing if either the format cannot be guessed
or the format that was guessed cannot properly parse the entire column
of strings. So in general, infer_datetime_format should not have any
negative consequences if enabled.
Here are some examples of datetime strings that can be guessed (All
representing December 30th, 2011 at 00:00:00):
“20111230”
“2011/12/30”
“20111230 00:00:00”
“12/30/2011 00:00:00”
“30/Dec/2011 00:00:00”
“30/December/2011 00:00:00”
Note that infer_datetime_format is sensitive to dayfirst. With
dayfirst=True, it will guess “01/12/2011” to be December 1st. With
dayfirst=False (default) it will guess “01/12/2011” to be January 12th.
# Try to infer the format for the index column
In [125]: df = pd.read_csv(
.....: "foo.csv",
.....: index_col=0,
.....: parse_dates=True,
.....: infer_datetime_format=True,
.....: )
.....:
In [126]: df
Out[126]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
International date formats#
While US date formats tend to be MM/DD/YYYY, many international formats use
DD/MM/YYYY instead. For convenience, a dayfirst keyword is provided:
In [127]: data = "date,value,cat\n1/6/2000,5,a\n2/6/2000,10,b\n3/6/2000,15,c"
In [128]: print(data)
date,value,cat
1/6/2000,5,a
2/6/2000,10,b
3/6/2000,15,c
In [129]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [130]: pd.read_csv("tmp.csv", parse_dates=[0])
Out[130]:
date value cat
0 2000-01-06 5 a
1 2000-02-06 10 b
2 2000-03-06 15 c
In [131]: pd.read_csv("tmp.csv", dayfirst=True, parse_dates=[0])
Out[131]:
date value cat
0 2000-06-01 5 a
1 2000-06-02 10 b
2 2000-06-03 15 c
Writing CSVs to binary file objects#
New in version 1.2.0.
df.to_csv(..., mode="wb") allows writing a CSV to a file object
opened in binary mode. In most cases, it is not necessary to specify
mode as pandas will auto-detect whether the file object is
opened in text or binary mode.
In [132]: import io
In [133]: data = pd.DataFrame([0, 1, 2])
In [134]: buffer = io.BytesIO()
In [135]: data.to_csv(buffer, encoding="utf-8", compression="gzip")
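The compressed bytes held by the buffer can then be persisted or sent elsewhere;
for example (a small continuation of the snippet above; the file name is
illustrative):
# Write the gzip-compressed CSV bytes accumulated in the buffer to disk.
with open("data.csv.gz", "wb") as f:
    f.write(buffer.getvalue())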
Specifying method for floating-point conversion#
The parameter float_precision can be specified in order to use
a specific floating-point converter during parsing with the C engine.
The options are the ordinary converter, the high-precision converter, and
the round-trip converter (which is guaranteed to round-trip values after
writing to a file). For example:
In [136]: val = "0.3066101993807095471566981359501369297504425048828125"
In [137]: data = "a,b,c\n1,2,{0}".format(val)
In [138]: abs(
.....: pd.read_csv(
.....: StringIO(data),
.....: engine="c",
.....: float_precision=None,
.....: )["c"][0] - float(val)
.....: )
.....:
Out[138]: 5.551115123125783e-17
In [139]: abs(
.....: pd.read_csv(
.....: StringIO(data),
.....: engine="c",
.....: float_precision="high",
.....: )["c"][0] - float(val)
.....: )
.....:
Out[139]: 5.551115123125783e-17
In [140]: abs(
.....: pd.read_csv(StringIO(data), engine="c", float_precision="round_trip")["c"][0]
.....: - float(val)
.....: )
.....:
Out[140]: 0.0
Thousand separators#
For large numbers that have been written with a thousands separator, you can
set the thousands keyword to a string of length 1 so that integers will be parsed
correctly:
By default, numbers with a thousands separator will be parsed as strings:
In [141]: data = (
.....: "ID|level|category\n"
.....: "Patient1|123,000|x\n"
.....: "Patient2|23,000|y\n"
.....: "Patient3|1,234,018|z"
.....: )
.....:
In [142]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [143]: df = pd.read_csv("tmp.csv", sep="|")
In [144]: df
Out[144]:
ID level category
0 Patient1 123,000 x
1 Patient2 23,000 y
2 Patient3 1,234,018 z
In [145]: df.level.dtype
Out[145]: dtype('O')
The thousands keyword allows integers to be parsed correctly:
In [146]: df = pd.read_csv("tmp.csv", sep="|", thousands=",")
In [147]: df
Out[147]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
In [148]: df.level.dtype
Out[148]: dtype('int64')
NA values#
To control which values are parsed as missing values (which are signified by
NaN), specify a string in na_values. If you specify a list of strings,
then all values in it are considered to be missing values. If you specify a
number (a float, like 5.0 or an integer like 5), the
corresponding equivalent values will also imply a missing value (in this case
effectively [5.0, 5] are recognized as NaN).
To completely override the default values that are recognized as missing, specify keep_default_na=False.
The default NaN recognized values are ['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', 'N/A',
'n/a', 'NA', '<NA>', '#NA', 'NULL', 'null', 'NaN', '-NaN', 'nan', '-nan', ''].
Let us consider some examples:
pd.read_csv("path_to_file.csv", na_values=[5])
In the example above 5 and 5.0 will be recognized as NaN, in
addition to the defaults. A string will first be interpreted as a numerical
5, then as a NaN.
pd.read_csv("path_to_file.csv", keep_default_na=False, na_values=[""])
Above, only an empty field will be recognized as NaN.
pd.read_csv("path_to_file.csv", keep_default_na=False, na_values=["NA", "0"])
Above, both NA and 0 as strings are NaN.
pd.read_csv("path_to_file.csv", na_values=["Nope"])
The default values, in addition to the string "Nope" are recognized as
NaN.
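The snippets above use placeholder file names; the same behaviour can be
verified with an in-memory buffer (a minimal sketch):
from io import StringIO

import pandas as pd

data = "a,b\n5,NA\n6,Nope"

# 5 (and 5.0) plus the string "Nope" become NaN, in addition to the
# default sentinels such as "NA".
pd.read_csv(StringIO(data), na_values=[5, "Nope"])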
Infinity#
inf like values will be parsed as np.inf (positive infinity), and -inf as -np.inf (negative infinity).
These will ignore the case of the value, meaning Inf will also be parsed as np.inf.
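A small sketch of this behaviour:
from io import StringIO

import pandas as pd

# Every spelling below is parsed as positive or negative np.inf.
pd.read_csv(StringIO("a\ninf\n-Inf\nINF"))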
Returning Series#
Using the squeeze keyword, the parser will return output with a single column
as a Series:
Deprecated since version 1.4.0: Users should append .squeeze("columns") to the DataFrame returned by
read_csv instead.
In [149]: data = "level\nPatient1,123000\nPatient2,23000\nPatient3,1234018"
In [150]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [151]: print(open("tmp.csv").read())
level
Patient1,123000
Patient2,23000
Patient3,1234018
In [152]: output = pd.read_csv("tmp.csv", squeeze=True)
In [153]: output
Out[153]:
Patient1 123000
Patient2 23000
Patient3 1234018
Name: level, dtype: int64
In [154]: type(output)
Out[154]: pandas.core.series.Series
Boolean values#
The common values True, False, TRUE, and FALSE are all
recognized as boolean. Occasionally you might want to recognize other values
as being boolean. To do this, use the true_values and false_values
options as follows:
In [155]: data = "a,b,c\n1,Yes,2\n3,No,4"
In [156]: print(data)
a,b,c
1,Yes,2
3,No,4
In [157]: pd.read_csv(StringIO(data))
Out[157]:
a b c
0 1 Yes 2
1 3 No 4
In [158]: pd.read_csv(StringIO(data), true_values=["Yes"], false_values=["No"])
Out[158]:
a b c
0 1 True 2
1 3 False 4
Handling “bad” lines#
Some files may have malformed lines with too few fields or too many. Lines with
too few fields will have NA values filled in the trailing fields. Lines with
too many fields will raise an error by default:
In [159]: data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"
In [160]: pd.read_csv(StringIO(data))
---------------------------------------------------------------------------
ParserError Traceback (most recent call last)
Cell In[160], line 1
----> 1 pd.read_csv(StringIO(data))
File ~/work/pandas/pandas/pandas/util/_decorators.py:211, in deprecate_kwarg.<locals>._deprecate_kwarg.<locals>.wrapper(*args, **kwargs)
209 else:
210 kwargs[new_arg_name] = new_arg_value
--> 211 return func(*args, **kwargs)
File ~/work/pandas/pandas/pandas/util/_decorators.py:331, in deprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper(*args, **kwargs)
325 if len(args) > num_allow_args:
326 warnings.warn(
327 msg.format(arguments=_format_argument_list(allow_args)),
328 FutureWarning,
329 stacklevel=find_stack_level(),
330 )
--> 331 return func(*args, **kwargs)
File ~/work/pandas/pandas/pandas/io/parsers/readers.py:950, in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, encoding_errors, dialect, error_bad_lines, warn_bad_lines, on_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options)
935 kwds_defaults = _refine_defaults_read(
936 dialect,
937 delimiter,
(...)
946 defaults={"delimiter": ","},
947 )
948 kwds.update(kwds_defaults)
--> 950 return _read(filepath_or_buffer, kwds)
File ~/work/pandas/pandas/pandas/io/parsers/readers.py:611, in _read(filepath_or_buffer, kwds)
608 return parser
610 with parser:
--> 611 return parser.read(nrows)
File ~/work/pandas/pandas/pandas/io/parsers/readers.py:1778, in TextFileReader.read(self, nrows)
1771 nrows = validate_integer("nrows", nrows)
1772 try:
1773 # error: "ParserBase" has no attribute "read"
1774 (
1775 index,
1776 columns,
1777 col_dict,
-> 1778 ) = self._engine.read( # type: ignore[attr-defined]
1779 nrows
1780 )
1781 except Exception:
1782 self.close()
File ~/work/pandas/pandas/pandas/io/parsers/c_parser_wrapper.py:230, in CParserWrapper.read(self, nrows)
228 try:
229 if self.low_memory:
--> 230 chunks = self._reader.read_low_memory(nrows)
231 # destructive to chunks
232 data = _concatenate_chunks(chunks)
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:808, in pandas._libs.parsers.TextReader.read_low_memory()
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:866, in pandas._libs.parsers.TextReader._read_rows()
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:852, in pandas._libs.parsers.TextReader._tokenize_rows()
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:1973, in pandas._libs.parsers.raise_parser_error()
ParserError: Error tokenizing data. C error: Expected 3 fields in line 3, saw 4
You can elect to skip bad lines:
In [29]: pd.read_csv(StringIO(data), on_bad_lines="warn")
Skipping line 3: expected 3 fields, saw 4
Out[29]:
a b c
0 1 2 3
1 8 9 10
Or pass a callable function to handle the bad line if engine="python".
The bad line will be a list of strings that was split by the sep:
In [29]: external_list = []
In [30]: def bad_lines_func(line):
...: external_list.append(line)
...: return line[-3:]
In [31]: pd.read_csv(StringIO(data), on_bad_lines=bad_lines_func, engine="python")
Out[31]:
a b c
0 1 2 3
1 5 6 7
2 8 9 10
In [32]: external_list
Out[32]: [4, 5, 6, 7]
New in version 1.4.0.
You can also use the usecols parameter to eliminate extraneous column
data that appear in some lines but not others:
In [33]: pd.read_csv(StringIO(data), usecols=[0, 1, 2])
Out[33]:
a b c
0 1 2 3
1 4 5 6
2 8 9 10
In case you want to keep all data including the lines with too many fields, you can
specify a sufficient number of names. This ensures that lines with not enough
fields are filled with NaN.
In [34]: pd.read_csv(StringIO(data), names=['a', 'b', 'c', 'd'])
Out[34]:
a b c d
0 1 2 3 NaN
1 4 5 6 7
2 8 9 10 NaN
Dialect#
The dialect keyword gives greater flexibility in specifying the file format.
By default it uses the Excel dialect but you can specify either the dialect name
or a csv.Dialect instance.
Suppose you had data with unenclosed quotes:
In [161]: data = "label1,label2,label3\n" 'index1,"a,c,e\n' "index2,b,d,f"
In [162]: print(data)
label1,label2,label3
index1,"a,c,e
index2,b,d,f
By default, read_csv uses the Excel dialect and treats the double quote as
the quote character, which causes it to fail when it finds a newline before it
finds the closing double quote.
We can get around this using dialect:
In [163]: import csv
In [164]: dia = csv.excel()
In [165]: dia.quoting = csv.QUOTE_NONE
In [166]: pd.read_csv(StringIO(data), dialect=dia)
Out[166]:
label1 label2 label3
index1 "a c e
index2 b d f
All of the dialect options can be specified separately by keyword arguments:
In [167]: data = "a,b,c~1,2,3~4,5,6"
In [168]: pd.read_csv(StringIO(data), lineterminator="~")
Out[168]:
a b c
0 1 2 3
1 4 5 6
Another common dialect option is skipinitialspace, to skip any whitespace
after a delimiter:
In [169]: data = "a, b, c\n1, 2, 3\n4, 5, 6"
In [170]: print(data)
a, b, c
1, 2, 3
4, 5, 6
In [171]: pd.read_csv(StringIO(data), skipinitialspace=True)
Out[171]:
a b c
0 1 2 3
1 4 5 6
The parsers make every attempt to “do the right thing” and not be fragile. Type
inference is a pretty big deal. If a column can be coerced to integer dtype
without altering the contents, the parser will do so. Any non-numeric
columns will come through as object dtype as with the rest of pandas objects.
Quoting and Escape Characters#
Quotes (and other escape characters) in embedded fields can be handled in any
number of ways. One way is to use backslashes; to properly parse this data, you
should pass the escapechar option:
In [172]: data = 'a,b\n"hello, \\"Bob\\", nice to see you",5'
In [173]: print(data)
a,b
"hello, \"Bob\", nice to see you",5
In [174]: pd.read_csv(StringIO(data), escapechar="\\")
Out[174]:
a b
0 hello, "Bob", nice to see you 5
Files with fixed width columns#
While read_csv() reads delimited data, the read_fwf() function works
with data files that have known and fixed column widths. The function parameters
to read_fwf are largely the same as read_csv with two extra parameters, and
a different usage of the delimiter parameter:
colspecs: A list of pairs (tuples) giving the extents of the
fixed-width fields of each line as half-open intervals (i.e., [from, to[ ).
String value ‘infer’ can be used to instruct the parser to try detecting
the column specifications from the first 100 rows of the data. Default
behavior, if not specified, is to infer.
widths: A list of field widths which can be used instead of ‘colspecs’
if the intervals are contiguous.
delimiter: Characters to consider as filler characters in the fixed-width file.
Can be used to specify the filler character of the fields
if it is not spaces (e.g., ‘~’).
Consider a typical fixed-width data file:
In [175]: data1 = (
.....: "id8141 360.242940 149.910199 11950.7\n"
.....: "id1594 444.953632 166.985655 11788.4\n"
.....: "id1849 364.136849 183.628767 11806.2\n"
.....: "id1230 413.836124 184.375703 11916.8\n"
.....: "id1948 502.953953 173.237159 12468.3"
.....: )
.....:
In [176]: with open("bar.csv", "w") as f:
.....: f.write(data1)
.....:
In order to parse this file into a DataFrame, we simply need to supply the
column specifications to the read_fwf function along with the file name:
# Column specifications are a list of half-intervals
In [177]: colspecs = [(0, 6), (8, 20), (21, 33), (34, 43)]
In [178]: df = pd.read_fwf("bar.csv", colspecs=colspecs, header=None, index_col=0)
In [179]: df
Out[179]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
Note how, when the header=None argument is specified, the parser automatically
assigns positional integer column names. Alternatively, you can supply just the
column widths for contiguous columns:
# Widths are a list of integers
In [180]: widths = [6, 14, 13, 10]
In [181]: df = pd.read_fwf("bar.csv", widths=widths, header=None)
In [182]: df
Out[182]:
0 1 2 3
0 id8141 360.242940 149.910199 11950.7
1 id1594 444.953632 166.985655 11788.4
2 id1849 364.136849 183.628767 11806.2
3 id1230 413.836124 184.375703 11916.8
4 id1948 502.953953 173.237159 12468.3
The parser will take care of extra white spaces around the columns
so it’s ok to have extra separation between the columns in the file.
By default, read_fwf will try to infer the file’s colspecs by using the
first 100 rows of the file. It can do it only in cases when the columns are
aligned and correctly separated by the provided delimiter (default delimiter
is whitespace).
In [183]: df = pd.read_fwf("bar.csv", header=None, index_col=0)
In [184]: df
Out[184]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
read_fwf supports the dtype parameter for specifying the types of
parsed columns to be different from the inferred type.
In [185]: pd.read_fwf("bar.csv", header=None, index_col=0).dtypes
Out[185]:
1 float64
2 float64
3 float64
dtype: object
In [186]: pd.read_fwf("bar.csv", header=None, dtype={2: "object"}).dtypes
Out[186]:
0 object
1 float64
2 object
3 float64
dtype: object
Indexes#
Files with an “implicit” index column#
Consider a file with one less entry in the header than the number of data
columns:
In [187]: data = "A,B,C\n20090101,a,1,2\n20090102,b,3,4\n20090103,c,4,5"
In [188]: print(data)
A,B,C
20090101,a,1,2
20090102,b,3,4
20090103,c,4,5
In [189]: with open("foo.csv", "w") as f:
.....: f.write(data)
.....:
In this special case, read_csv assumes that the first column is to be used
as the index of the DataFrame:
In [190]: pd.read_csv("foo.csv")
Out[190]:
A B C
20090101 a 1 2
20090102 b 3 4
20090103 c 4 5
Note that the dates weren’t automatically parsed. In that case you would need
to do as before:
In [191]: df = pd.read_csv("foo.csv", parse_dates=True)
In [192]: df.index
Out[192]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype='datetime64[ns]', freq=None)
Reading an index with a MultiIndex#
Suppose you have data indexed by two columns:
In [193]: data = 'year,indiv,zit,xit\n1977,"A",1.2,.6\n1977,"B",1.5,.5'
In [194]: print(data)
year,indiv,zit,xit
1977,"A",1.2,.6
1977,"B",1.5,.5
In [195]: with open("mindex_ex.csv", mode="w") as f:
.....: f.write(data)
.....:
The index_col argument to read_csv can take a list of
column numbers to turn multiple columns into a MultiIndex for the index of the
returned object:
In [196]: df = pd.read_csv("mindex_ex.csv", index_col=[0, 1])
In [197]: df
Out[197]:
zit xit
year indiv
1977 A 1.2 0.6
B 1.5 0.5
In [198]: df.loc[1977]
Out[198]:
zit xit
indiv
A 1.2 0.6
B 1.5 0.5
Reading columns with a MultiIndex#
By specifying list of row locations for the header argument, you
can read in a MultiIndex for the columns. Specifying non-consecutive
rows will skip the intervening rows.
In [199]: from pandas._testing import makeCustomDataframe as mkdf
In [200]: df = mkdf(5, 3, r_idx_nlevels=2, c_idx_nlevels=4)
In [201]: df.to_csv("mi.csv")
In [202]: print(open("mi.csv").read())
C0,,C_l0_g0,C_l0_g1,C_l0_g2
C1,,C_l1_g0,C_l1_g1,C_l1_g2
C2,,C_l2_g0,C_l2_g1,C_l2_g2
C3,,C_l3_g0,C_l3_g1,C_l3_g2
R0,R1,,,
R_l0_g0,R_l1_g0,R0C0,R0C1,R0C2
R_l0_g1,R_l1_g1,R1C0,R1C1,R1C2
R_l0_g2,R_l1_g2,R2C0,R2C1,R2C2
R_l0_g3,R_l1_g3,R3C0,R3C1,R3C2
R_l0_g4,R_l1_g4,R4C0,R4C1,R4C2
In [203]: pd.read_csv("mi.csv", header=[0, 1, 2, 3], index_col=[0, 1])
Out[203]:
C0 C_l0_g0 C_l0_g1 C_l0_g2
C1 C_l1_g0 C_l1_g1 C_l1_g2
C2 C_l2_g0 C_l2_g1 C_l2_g2
C3 C_l3_g0 C_l3_g1 C_l3_g2
R0 R1
R_l0_g0 R_l1_g0 R0C0 R0C1 R0C2
R_l0_g1 R_l1_g1 R1C0 R1C1 R1C2
R_l0_g2 R_l1_g2 R2C0 R2C1 R2C2
R_l0_g3 R_l1_g3 R3C0 R3C1 R3C2
R_l0_g4 R_l1_g4 R4C0 R4C1 R4C2
read_csv is also able to interpret a more common format
of multi-columns indices.
In [204]: data = ",a,a,a,b,c,c\n,q,r,s,t,u,v\none,1,2,3,4,5,6\ntwo,7,8,9,10,11,12"
In [205]: print(data)
,a,a,a,b,c,c
,q,r,s,t,u,v
one,1,2,3,4,5,6
two,7,8,9,10,11,12
In [206]: with open("mi2.csv", "w") as fh:
.....: fh.write(data)
.....:
In [207]: pd.read_csv("mi2.csv", header=[0, 1], index_col=0)
Out[207]:
a b c
q r s t u v
one 1 2 3 4 5 6
two 7 8 9 10 11 12
Note
If an index_col is not specified (e.g. you don’t have an index, or wrote it
with df.to_csv(..., index=False)), then any names on the columns index will
be lost.
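As a minimal illustration of this note (using a single-level columns name; the
same applies to the level names of a columns MultiIndex):
from io import StringIO

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
df.columns.name = "letters"

buf = StringIO()
df.to_csv(buf, index=False)   # the header row is just "a,b" - no room for "letters"
buf.seek(0)

pd.read_csv(buf).columns.name   # None: the columns' name has been lost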
Automatically “sniffing” the delimiter#
read_csv is capable of inferring delimited (not necessarily
comma-separated) files, as pandas uses the csv.Sniffer
class of the csv module. For this, you have to specify sep=None.
In [208]: df = pd.DataFrame(np.random.randn(10, 4))
In [209]: df.to_csv("tmp.csv", sep="|")
In [210]: df.to_csv("tmp2.csv", sep=":")
In [211]: pd.read_csv("tmp2.csv", sep=None, engine="python")
Out[211]:
Unnamed: 0 0 1 2 3
0 0 0.469112 -0.282863 -1.509059 -1.135632
1 1 1.212112 -0.173215 0.119209 -1.044236
2 2 -0.861849 -2.104569 -0.494929 1.071804
3 3 0.721555 -0.706771 -1.039575 0.271860
4 4 -0.424972 0.567020 0.276232 -1.087401
5 5 -0.673690 0.113648 -1.478427 0.524988
6 6 0.404705 0.577046 -1.715002 -1.039268
7 7 -0.370647 -1.157892 -1.344312 0.844885
8 8 1.075770 -0.109050 1.643563 -1.469388
9 9 0.357021 -0.674600 -1.776904 -0.968914
Reading multiple files to create a single DataFrame#
It’s best to use concat() to combine multiple files.
See the cookbook for an example.
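A hedged sketch of the usual pattern (the glob pattern and directory are
illustrative):
import glob

import pandas as pd

# Read every matching CSV and stack the resulting frames vertically.
files = sorted(glob.glob("data/part-*.csv"))
df = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)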
Iterating through files chunk by chunk#
Suppose you wish to iterate through a (potentially very large) file lazily
rather than reading the entire file into memory, such as the following:
In [212]: df = pd.DataFrame(np.random.randn(10, 4))
In [213]: df.to_csv("tmp.csv", sep="|")
In [214]: table = pd.read_csv("tmp.csv", sep="|")
In [215]: table
Out[215]:
Unnamed: 0 0 1 2 3
0 0 -1.294524 0.413738 0.276662 -0.472035
1 1 -0.013960 -0.362543 -0.006154 -0.923061
2 2 0.895717 0.805244 -1.206412 2.565646
3 3 1.431256 1.340309 -1.170299 -0.226169
4 4 0.410835 0.813850 0.132003 -0.827317
5 5 -0.076467 -1.187678 1.130127 -1.436737
6 6 -1.413681 1.607920 1.024180 0.569605
7 7 0.875906 -2.211372 0.974466 -2.006747
8 8 -0.410001 -0.078638 0.545952 -1.219217
9 9 -1.226825 0.769804 -1.281247 -0.727707
By specifying a chunksize to read_csv, the return
value will be an iterable object of type TextFileReader:
In [216]: with pd.read_csv("tmp.csv", sep="|", chunksize=4) as reader:
.....: reader
.....: for chunk in reader:
.....: print(chunk)
.....:
Unnamed: 0 0 1 2 3
0 0 -1.294524 0.413738 0.276662 -0.472035
1 1 -0.013960 -0.362543 -0.006154 -0.923061
2 2 0.895717 0.805244 -1.206412 2.565646
3 3 1.431256 1.340309 -1.170299 -0.226169
Unnamed: 0 0 1 2 3
4 4 0.410835 0.813850 0.132003 -0.827317
5 5 -0.076467 -1.187678 1.130127 -1.436737
6 6 -1.413681 1.607920 1.024180 0.569605
7 7 0.875906 -2.211372 0.974466 -2.006747
Unnamed: 0 0 1 2 3
8 8 -0.410001 -0.078638 0.545952 -1.219217
9 9 -1.226825 0.769804 -1.281247 -0.727707
Changed in version 1.2: read_csv/json/sas return a context-manager when iterating through a file.
Specifying iterator=True will also return the TextFileReader object:
In [217]: with pd.read_csv("tmp.csv", sep="|", iterator=True) as reader:
.....: reader.get_chunk(5)
.....:
Specifying the parser engine#
pandas currently supports three engines: the C engine, the python engine, and an experimental
pyarrow engine (requires the pyarrow package). In general, the pyarrow engine is fastest
on larger workloads and is equivalent in speed to the C engine on most other workloads.
The python engine tends to be slower than the pyarrow and C engines on most workloads. However,
the pyarrow engine is much less robust than the C engine, which in turn lacks a few features
compared to the python engine.
Where possible, pandas uses the C parser (specified as engine='c'), but it may fall
back to Python if C-unsupported options are specified.
Currently, options unsupported by the C and pyarrow engines include:
sep other than a single character (e.g. regex separators)
skipfooter
sep=None with delim_whitespace=False
Specifying any of the above options will produce a ParserWarning unless the
python engine is selected explicitly using engine='python'.
Options that are unsupported by the pyarrow engine which are not covered by the list above include:
float_precision
chunksize
comment
nrows
thousands
memory_map
dialect
warn_bad_lines
error_bad_lines
on_bad_lines
delim_whitespace
quoting
lineterminator
converters
decimal
iterator
dayfirst
infer_datetime_format
verbose
skipinitialspace
low_memory
Specifying these options with engine='pyarrow' will raise a ValueError.
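For example, a regular-expression separator is one of the options that requires
the python engine; selecting it explicitly avoids the ParserWarning mentioned
above (a minimal sketch):
from io import StringIO

import pandas as pd

data = "a|b;c\n1|2;3\n4|5;6"

# A regex separator is unsupported by the C and pyarrow engines.
pd.read_csv(StringIO(data), sep="[|;]", engine="python")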
Reading/writing remote files#
You can pass in a URL to read or write remote files to many of pandas’ IO
functions - the following example shows reading a CSV file:
df = pd.read_csv("https://download.bls.gov/pub/time.series/cu/cu.item", sep="\t")
New in version 1.3.0.
A custom header can be sent alongside HTTP(s) requests by passing a dictionary
of header key value mappings to the storage_options keyword argument as shown below:
headers = {"User-Agent": "pandas"}
df = pd.read_csv(
"https://download.bls.gov/pub/time.series/cu/cu.item",
sep="\t",
storage_options=headers
)
All URLs which are not local files or HTTP(s) are handled by
fsspec, if installed, and its various filesystem implementations
(including Amazon S3, Google Cloud, SSH, FTP, webHDFS…).
Some of these implementations will require additional packages to be
installed, for example
S3 URLs require the s3fs library:
df = pd.read_json("s3://pandas-test/adatafile.json")
When dealing with remote storage systems, you might need
extra configuration with environment variables or config files in
special locations. For example, to access data in your S3 bucket,
you will need to define credentials in one of the several ways listed in
the S3Fs documentation. The same is true
for several of the storage backends, and you should follow the links
in the fsspec documentation for the implementations built into fsspec and
for those not included in the main fsspec
distribution.
You can also pass parameters directly to the backend driver. For example,
if you do not have S3 credentials, you can still access public data by
specifying an anonymous connection, such as
New in version 1.2.0.
pd.read_csv(
"s3://ncei-wcsd-archive/data/processed/SH1305/18kHz/SaKe2013"
"-D20130523-T080854_to_SaKe2013-D20130523-T085643.csv",
storage_options={"anon": True},
)
fsspec also allows complex URLs, for accessing data in compressed
archives, local caching of files, and more. To locally cache the above
example, you would modify the call to
pd.read_csv(
"simplecache::s3://ncei-wcsd-archive/data/processed/SH1305/18kHz/"
"SaKe2013-D20130523-T080854_to_SaKe2013-D20130523-T085643.csv",
storage_options={"s3": {"anon": True}},
)
where we specify that the “anon” parameter is meant for the “s3” part of
the implementation, not to the caching implementation. Note that this caches to a temporary
directory for the duration of the session only, but you can also specify
a permanent store.
Writing out data#
Writing to CSV format#
The Series and DataFrame objects have an instance method to_csv which
allows storing the contents of the object as a comma-separated-values file. The
function takes a number of arguments; only the first is required. A brief example
follows the parameter list below.
path_or_buf: A string path to the file to write or a file object. If a file object, it must be opened with newline=''.
sep : Field delimiter for the output file (default “,”)
na_rep: A string representation of a missing value (default ‘’)
float_format: Format string for floating point numbers
columns: Columns to write (default None)
header: Whether to write out the column names (default True)
index: whether to write row (index) names (default True)
index_label: Column label(s) for index column(s) if desired. If None
(default), and header and index are True, then the index names are
used. (A sequence should be given if the DataFrame uses MultiIndex).
mode : Python write mode, default ‘w’
encoding: a string representing the encoding to use if the contents are
non-ASCII, for Python versions prior to 3
lineterminator: Character sequence denoting line end (default os.linesep)
quoting: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL). Note that if you have set a float_format then floats are converted to strings and csv.QUOTE_NONNUMERIC will treat them as non-numeric
quotechar: Character used to quote fields (default ‘”’)
doublequote: Control quoting of quotechar in fields (default True)
escapechar: Character used to escape sep and quotechar when
appropriate (default None)
chunksize: Number of rows to write at a time
date_format: Format string for datetime objects
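For example, a few of these options might be combined as follows (a minimal
sketch; the frame and file name are illustrative):
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1.5, np.nan], "B": ["x", "y"]})

# Pipe-delimited output with a missing-value marker, two-decimal floats,
# and no row index; "out.csv" is an arbitrary example path.
df.to_csv("out.csv", sep="|", na_rep="NULL", float_format="%.2f", index=False)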
Writing a formatted string#
The DataFrame object has an instance method to_string which allows control
over the string representation of the object. All arguments are optional (a short sketch follows the list below):
buf default None, for example a StringIO object
columns default None, which columns to write
col_space default None, minimum width of each column.
na_rep default NaN, representation of NA value
formatters default None, a dictionary (by column) of functions each of
which takes a single argument and returns a formatted string
float_format default None, a function which takes a single (float)
argument and returns a formatted string; to be applied to floats in the
DataFrame.
sparsify default True, set to False for a DataFrame with a hierarchical
index to print every MultiIndex key at each row.
index_names default True, will print the names of the indices
index default True, will print the index (ie, row labels)
header default True, will print the column labels
justify default left, will print column headers left- or
right-justified
The Series object also has a to_string method, but with only the buf,
na_rep, float_format arguments. There is also a length argument
which, if set to True, will additionally output the length of the Series.
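A small sketch of a few of these options (the frame is illustrative):
import pandas as pd

df = pd.DataFrame({"A": [1.23456, 2.5], "B": [10, 20]})

# Render with two-decimal floats and without the row index.
print(df.to_string(float_format=lambda x: f"{x:0.2f}", index=False))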
JSON#
Read and write JSON format files and strings.
Writing JSON#
A Series or DataFrame can be converted to a valid JSON string. Use to_json
with optional parameters:
path_or_buf : the pathname or buffer to write the output
This can be None in which case a JSON string is returned
orient :
Series:
default is index
allowed values are {split, records, index}
DataFrame:
default is columns
allowed values are {split, records, index, columns, values, table}
The format of the JSON string
split
dict like {index -> [index], columns -> [columns], data -> [values]}
records
list like [{column -> value}, … , {column -> value}]
index
dict like {index -> {column -> value}}
columns
dict like {column -> {index -> value}}
values
just the values array
table
adhering to the JSON Table Schema
date_format : string, type of date conversion, ‘epoch’ for timestamp, ‘iso’ for ISO8601.
double_precision : The number of decimal places to use when encoding floating point values, default 10.
force_ascii : force encoded string to be ASCII, default True.
date_unit : The time unit to encode to, governs timestamp and ISO8601 precision. One of ‘s’, ‘ms’, ‘us’ or ‘ns’ for seconds, milliseconds, microseconds and nanoseconds respectively. Default ‘ms’.
default_handler : The handler to call if an object cannot otherwise be converted to a suitable format for JSON. Takes a single argument, which is the object to convert, and returns a serializable object.
lines : If records orient, then will write each record per line as json.
Note NaN’s, NaT’s and None will be converted to null and datetime objects will be converted based on the date_format and date_unit parameters.
In [218]: dfj = pd.DataFrame(np.random.randn(5, 2), columns=list("AB"))
In [219]: json = dfj.to_json()
In [220]: json
Out[220]: '{"A":{"0":-0.1213062281,"1":0.6957746499,"2":0.9597255933,"3":-0.6199759194,"4":-0.7323393705},"B":{"0":-0.0978826728,"1":0.3417343559,"2":-1.1103361029,"3":0.1497483186,"4":0.6877383895}}'
Orient options#
There are a number of different options for the format of the resulting JSON
file / string. Consider the following DataFrame and Series:
In [221]: dfjo = pd.DataFrame(
.....: dict(A=range(1, 4), B=range(4, 7), C=range(7, 10)),
.....: columns=list("ABC"),
.....: index=list("xyz"),
.....: )
.....:
In [222]: dfjo
Out[222]:
A B C
x 1 4 7
y 2 5 8
z 3 6 9
In [223]: sjo = pd.Series(dict(x=15, y=16, z=17), name="D")
In [224]: sjo
Out[224]:
x 15
y 16
z 17
Name: D, dtype: int64
Column oriented (the default for DataFrame) serializes the data as
nested JSON objects with column labels acting as the primary index:
In [225]: dfjo.to_json(orient="columns")
Out[225]: '{"A":{"x":1,"y":2,"z":3},"B":{"x":4,"y":5,"z":6},"C":{"x":7,"y":8,"z":9}}'
# Not available for Series
Index oriented (the default for Series) similar to column oriented
but the index labels are now primary:
In [226]: dfjo.to_json(orient="index")
Out[226]: '{"x":{"A":1,"B":4,"C":7},"y":{"A":2,"B":5,"C":8},"z":{"A":3,"B":6,"C":9}}'
In [227]: sjo.to_json(orient="index")
Out[227]: '{"x":15,"y":16,"z":17}'
Record oriented serializes the data to a JSON array of column -> value records,
index labels are not included. This is useful for passing DataFrame data to plotting
libraries, for example the JavaScript library d3.js:
In [228]: dfjo.to_json(orient="records")
Out[228]: '[{"A":1,"B":4,"C":7},{"A":2,"B":5,"C":8},{"A":3,"B":6,"C":9}]'
In [229]: sjo.to_json(orient="records")
Out[229]: '[15,16,17]'
Value oriented is a bare-bones option which serializes to nested JSON arrays of
values only, column and index labels are not included:
In [230]: dfjo.to_json(orient="values")
Out[230]: '[[1,4,7],[2,5,8],[3,6,9]]'
# Not available for Series
Split oriented serializes to a JSON object containing separate entries for
values, index and columns. Name is also included for Series:
In [231]: dfjo.to_json(orient="split")
Out[231]: '{"columns":["A","B","C"],"index":["x","y","z"],"data":[[1,4,7],[2,5,8],[3,6,9]]}'
In [232]: sjo.to_json(orient="split")
Out[232]: '{"name":"D","index":["x","y","z"],"data":[15,16,17]}'
Table oriented serializes to the JSON Table Schema, allowing for the
preservation of metadata including but not limited to dtypes and index names.
Note
Any orient option that encodes to a JSON object will not preserve the ordering of
index and column labels during round-trip serialization. If you wish to preserve
label ordering use the split option as it uses ordered containers.
Date handling#
Writing in ISO date format:
In [233]: dfd = pd.DataFrame(np.random.randn(5, 2), columns=list("AB"))
In [234]: dfd["date"] = pd.Timestamp("20130101")
In [235]: dfd = dfd.sort_index(axis=1, ascending=False)
In [236]: json = dfd.to_json(date_format="iso")
In [237]: json
Out[237]: '{"date":{"0":"2013-01-01T00:00:00.000","1":"2013-01-01T00:00:00.000","2":"2013-01-01T00:00:00.000","3":"2013-01-01T00:00:00.000","4":"2013-01-01T00:00:00.000"},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'
Writing in ISO date format, with microseconds:
In [238]: json = dfd.to_json(date_format="iso", date_unit="us")
In [239]: json
Out[239]: '{"date":{"0":"2013-01-01T00:00:00.000000","1":"2013-01-01T00:00:00.000000","2":"2013-01-01T00:00:00.000000","3":"2013-01-01T00:00:00.000000","4":"2013-01-01T00:00:00.000000"},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'
Epoch timestamps, in seconds:
In [240]: json = dfd.to_json(date_format="epoch", date_unit="s")
In [241]: json
Out[241]: '{"date":{"0":1356998400,"1":1356998400,"2":1356998400,"3":1356998400,"4":1356998400},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'
Writing to a file, with a date index and a date column:
In [242]: dfj2 = dfj.copy()
In [243]: dfj2["date"] = pd.Timestamp("20130101")
In [244]: dfj2["ints"] = list(range(5))
In [245]: dfj2["bools"] = True
In [246]: dfj2.index = pd.date_range("20130101", periods=5)
In [247]: dfj2.to_json("test.json")
In [248]: with open("test.json") as fh:
.....: print(fh.read())
.....:
{"A":{"1356998400000":-0.1213062281,"1357084800000":0.6957746499,"1357171200000":0.9597255933,"1357257600000":-0.6199759194,"1357344000000":-0.7323393705},"B":{"1356998400000":-0.0978826728,"1357084800000":0.3417343559,"1357171200000":-1.1103361029,"1357257600000":0.1497483186,"1357344000000":0.6877383895},"date":{"1356998400000":1356998400000,"1357084800000":1356998400000,"1357171200000":1356998400000,"1357257600000":1356998400000,"1357344000000":1356998400000},"ints":{"1356998400000":0,"1357084800000":1,"1357171200000":2,"1357257600000":3,"1357344000000":4},"bools":{"1356998400000":true,"1357084800000":true,"1357171200000":true,"1357257600000":true,"1357344000000":true}}
Fallback behavior#
If the JSON serializer cannot handle the container contents directly it will
fall back in the following manner:
if the dtype is unsupported (e.g. np.complex_) then the default_handler, if provided, will be called
for each value, otherwise an exception is raised.
if an object is unsupported it will attempt the following:
check if the object has defined a toDict method and call it.
A toDict method should return a dict which will then be JSON serialized.
invoke the default_handler if one was provided.
convert the object to a dict by traversing its contents. However this will often fail
with an OverflowError or give unexpected results.
In general the best approach for unsupported objects or dtypes is to provide a default_handler.
For example:
>>> DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json() # raises
RuntimeError: Unhandled numpy dtype 15
can be dealt with by specifying a simple default_handler:
In [249]: pd.DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json(default_handler=str)
Out[249]: '{"0":{"0":"(1+0j)","1":"(2+0j)","2":"(1+2j)"}}'
Reading JSON#
Reading a JSON string to pandas object can take a number of parameters.
The parser will try to parse a DataFrame if typ is not supplied or
is None. To explicitly force Series parsing, pass typ=series
filepath_or_buffer : a VALID JSON string or file handle / StringIO. The string could be
a URL. Valid URL schemes include http, ftp, S3, and file. For file URLs, a host
is expected. For instance, a local file could be
file://localhost/path/to/table.json
typ : type of object to recover (series or frame), default ‘frame’
orient :
Series :
default is index
allowed values are {split, records, index}
DataFrame
default is columns
allowed values are {split, records, index, columns, values, table}
The format of the JSON string
split
dict like {index -> [index], columns -> [columns], data -> [values]}
records
list like [{column -> value}, … , {column -> value}]
index
dict like {index -> {column -> value}}
columns
dict like {column -> {index -> value}}
values
just the values array
table
adhering to the JSON Table Schema
dtype : if True, infer dtypes, if a dict of column to dtype, then use those, if False, then don’t infer dtypes at all, default is True, apply only to the data.
convert_axes : boolean, try to convert the axes to the proper dtypes, default is True
convert_dates : a list of columns to parse for dates; If True, then try to parse date-like columns, default is True.
keep_default_dates : boolean, default True. If parsing dates, then parse the default date-like columns.
numpy : direct decoding to NumPy arrays. default is False;
Supports numeric data only, although labels may be non-numeric. Also note that the JSON ordering MUST be the same for each term if numpy=True.
precise_float : boolean, default False. Set to enable usage of higher precision (strtod) function when decoding string to double values. Default (False) is to use fast but less precise builtin functionality.
date_unit : string, the timestamp unit to detect if converting dates. Default
None. By default the timestamp precision will be detected, if this is not desired
then pass one of ‘s’, ‘ms’, ‘us’ or ‘ns’ to force timestamp precision to
seconds, milliseconds, microseconds or nanoseconds respectively.
lines : reads file as one json object per line.
encoding : The encoding to use to decode py3 bytes.
chunksize : when used in combination with lines=True, return a JsonReader which reads in chunksize lines per iteration.
The parser will raise one of ValueError/TypeError/AssertionError if the JSON is not parseable.
If a non-default orient was used when encoding to JSON be sure to pass the same
option here so that decoding produces sensible results, see Orient Options for an
overview.
Data conversion#
The default of convert_axes=True, dtype=True, and convert_dates=True
will try to parse the axes, and all of the data into appropriate types,
including dates. If you need to override specific dtypes, pass a dict to
dtype. convert_axes should only be set to False if you need to
preserve string-like numbers (e.g. ‘1’, ‘2’) in an axes.
Note
Large integer values may be converted to dates if convert_dates=True and the data and / or column labels appear ‘date-like’. The exact threshold depends on the date_unit specified. ‘date-like’ means that the column label meets one of the following criteria:
it ends with '_at'
it ends with '_time'
it begins with 'timestamp'
it is 'modified'
it is 'date'
Warning
When reading JSON data, automatic coercing into dtypes has some quirks:
an index can be reconstructed in a different order from serialization, that is, the returned order is not guaranteed to be the same as before serialization
a column that was float data will be converted to integer if it can be done safely, e.g. a column of 1.
bool columns will be converted to integer on reconstruction
Thus there are times where you may want to specify specific dtypes via the dtype keyword argument.
Reading from a JSON string:
In [250]: pd.read_json(json)
Out[250]:
date B A
0 2013-01-01 0.403310 0.176444
1 2013-01-01 0.301624 -0.154951
2 2013-01-01 -1.369849 -2.179861
3 2013-01-01 1.462696 -0.954208
4 2013-01-01 -0.826591 -1.743161
Reading from a file:
In [251]: pd.read_json("test.json")
Out[251]:
A B date ints bools
2013-01-01 -0.121306 -0.097883 2013-01-01 0 True
2013-01-02 0.695775 0.341734 2013-01-01 1 True
2013-01-03 0.959726 -1.110336 2013-01-01 2 True
2013-01-04 -0.619976 0.149748 2013-01-01 3 True
2013-01-05 -0.732339 0.687738 2013-01-01 4 True
Don’t convert any data (but still convert axes and dates):
In [252]: pd.read_json("test.json", dtype=object).dtypes
Out[252]:
A object
B object
date object
ints object
bools object
dtype: object
Specify dtypes for conversion:
In [253]: pd.read_json("test.json", dtype={"A": "float32", "bools": "int8"}).dtypes
Out[253]:
A float32
B float64
date datetime64[ns]
ints int64
bools int8
dtype: object
Preserve string indices:
In [254]: si = pd.DataFrame(
.....: np.zeros((4, 4)), columns=list(range(4)), index=[str(i) for i in range(4)]
.....: )
.....:
In [255]: si
Out[255]:
0 1 2 3
0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0
In [256]: si.index
Out[256]: Index(['0', '1', '2', '3'], dtype='object')
In [257]: si.columns
Out[257]: Int64Index([0, 1, 2, 3], dtype='int64')
In [258]: json = si.to_json()
In [259]: sij = pd.read_json(json, convert_axes=False)
In [260]: sij
Out[260]:
0 1 2 3
0 0 0 0 0
1 0 0 0 0
2 0 0 0 0
3 0 0 0 0
In [261]: sij.index
Out[261]: Index(['0', '1', '2', '3'], dtype='object')
In [262]: sij.columns
Out[262]: Index(['0', '1', '2', '3'], dtype='object')
Dates written in nanoseconds need to be read back in nanoseconds:
In [263]: json = dfj2.to_json(date_unit="ns")
# Try to parse timestamps as milliseconds -> Won't Work
In [264]: dfju = pd.read_json(json, date_unit="ms")
In [265]: dfju
Out[265]:
A B date ints bools
1356998400000000000 -0.121306 -0.097883 1356998400000000000 0 True
1357084800000000000 0.695775 0.341734 1356998400000000000 1 True
1357171200000000000 0.959726 -1.110336 1356998400000000000 2 True
1357257600000000000 -0.619976 0.149748 1356998400000000000 3 True
1357344000000000000 -0.732339 0.687738 1356998400000000000 4 True
# Let pandas detect the correct precision
In [266]: dfju = pd.read_json(json)
In [267]: dfju
Out[267]:
A B date ints bools
2013-01-01 -0.121306 -0.097883 2013-01-01 0 True
2013-01-02 0.695775 0.341734 2013-01-01 1 True
2013-01-03 0.959726 -1.110336 2013-01-01 2 True
2013-01-04 -0.619976 0.149748 2013-01-01 3 True
2013-01-05 -0.732339 0.687738 2013-01-01 4 True
# Or specify that all timestamps are in nanoseconds
In [268]: dfju = pd.read_json(json, date_unit="ns")
In [269]: dfju
Out[269]:
A B date ints bools
2013-01-01 -0.121306 -0.097883 2013-01-01 0 True
2013-01-02 0.695775 0.341734 2013-01-01 1 True
2013-01-03 0.959726 -1.110336 2013-01-01 2 True
2013-01-04 -0.619976 0.149748 2013-01-01 3 True
2013-01-05 -0.732339 0.687738 2013-01-01 4 True
The Numpy parameter#
Note
This param has been deprecated as of version 1.0.0 and will raise a FutureWarning.
This supports numeric data only. Index and columns labels may be non-numeric, e.g. strings, dates etc.
If numpy=True is passed to read_json an attempt will be made to sniff
an appropriate dtype during deserialization and to subsequently decode directly
to NumPy arrays, bypassing the need for intermediate Python objects.
This can provide speedups if you are deserialising a large amount of numeric
data:
In [270]: randfloats = np.random.uniform(-100, 1000, 10000)
In [271]: randfloats.shape = (1000, 10)
In [272]: dffloats = pd.DataFrame(randfloats, columns=list("ABCDEFGHIJ"))
In [273]: jsonfloats = dffloats.to_json()
In [274]: %timeit pd.read_json(jsonfloats)
7.91 ms +- 77.3 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [275]: %timeit pd.read_json(jsonfloats, numpy=True)
5.71 ms +- 333 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
The speedup is less noticeable for smaller datasets:
In [276]: jsonfloats = dffloats.head(100).to_json()
In [277]: %timeit pd.read_json(jsonfloats)
4.46 ms +- 25.9 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [278]: %timeit pd.read_json(jsonfloats, numpy=True)
4.09 ms +- 32.3 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
Warning
Direct NumPy decoding makes a number of assumptions and may fail or produce
unexpected output if these assumptions are not satisfied:
data is numeric.
data is uniform. The dtype is sniffed from the first value decoded.
A ValueError may be raised, or incorrect output may be produced
if this condition is not satisfied.
labels are ordered. Labels are only read from the first container, it is assumed
that each subsequent row / column has been encoded in the same order. This should be satisfied if the
data was encoded using to_json but may not be the case if the JSON
is from another source.
Normalization#
pandas provides a utility function to take a dict or list of dicts and normalize this semi-structured data
into a flat table.
In [279]: data = [
.....: {"id": 1, "name": {"first": "Coleen", "last": "Volk"}},
.....: {"name": {"given": "Mark", "family": "Regner"}},
.....: {"id": 2, "name": "Faye Raker"},
.....: ]
.....:
In [280]: pd.json_normalize(data)
Out[280]:
id name.first name.last name.given name.family name
0 1.0 Coleen Volk NaN NaN NaN
1 NaN NaN NaN Mark Regner NaN
2 2.0 NaN NaN NaN NaN Faye Raker
In [281]: data = [
.....: {
.....: "state": "Florida",
.....: "shortname": "FL",
.....: "info": {"governor": "Rick Scott"},
.....: "county": [
.....: {"name": "Dade", "population": 12345},
.....: {"name": "Broward", "population": 40000},
.....: {"name": "Palm Beach", "population": 60000},
.....: ],
.....: },
.....: {
.....: "state": "Ohio",
.....: "shortname": "OH",
.....: "info": {"governor": "John Kasich"},
.....: "county": [
.....: {"name": "Summit", "population": 1234},
.....: {"name": "Cuyahoga", "population": 1337},
.....: ],
.....: },
.....: ]
.....:
In [282]: pd.json_normalize(data, "county", ["state", "shortname", ["info", "governor"]])
Out[282]:
name population state shortname info.governor
0 Dade 12345 Florida FL Rick Scott
1 Broward 40000 Florida FL Rick Scott
2 Palm Beach 60000 Florida FL Rick Scott
3 Summit 1234 Ohio OH John Kasich
4 Cuyahoga 1337 Ohio OH John Kasich
The max_level parameter provides more control over which level to end normalization.
With max_level=1 the following snippet normalizes until 1st nesting level of the provided dict.
In [283]: data = [
.....: {
.....: "CreatedBy": {"Name": "User001"},
.....: "Lookup": {
.....: "TextField": "Some text",
.....: "UserField": {"Id": "ID001", "Name": "Name001"},
.....: },
.....: "Image": {"a": "b"},
.....: }
.....: ]
.....:
In [284]: pd.json_normalize(data, max_level=1)
Out[284]:
CreatedBy.Name Lookup.TextField Lookup.UserField Image.a
0 User001 Some text {'Id': 'ID001', 'Name': 'Name001'} b
Line delimited json#
pandas is able to read and write line-delimited json files that are common in data processing pipelines
using Hadoop or Spark.
For line-delimited json files, pandas can also return an iterator which reads in chunksize lines at a time. This can be useful for large files or to read from a stream.
In [285]: jsonl = """
.....: {"a": 1, "b": 2}
.....: {"a": 3, "b": 4}
.....: """
.....:
In [286]: df = pd.read_json(jsonl, lines=True)
In [287]: df
Out[287]:
a b
0 1 2
1 3 4
In [288]: df.to_json(orient="records", lines=True)
Out[288]: '{"a":1,"b":2}\n{"a":3,"b":4}\n'
# reader is an iterator that returns ``chunksize`` lines each iteration
In [289]: with pd.read_json(StringIO(jsonl), lines=True, chunksize=1) as reader:
.....: reader
.....: for chunk in reader:
.....: print(chunk)
.....:
Empty DataFrame
Columns: []
Index: []
a b
0 1 2
a b
1 3 4
Table schema#
Table Schema is a spec for describing tabular datasets as a JSON
object. The JSON includes information on the field names, types, and
other attributes. You can use the orient table to build
a JSON string with two fields, schema and data.
In [290]: df = pd.DataFrame(
.....: {
.....: "A": [1, 2, 3],
.....: "B": ["a", "b", "c"],
.....: "C": pd.date_range("2016-01-01", freq="d", periods=3),
.....: },
.....: index=pd.Index(range(3), name="idx"),
.....: )
.....:
In [291]: df
Out[291]:
A B C
idx
0 1 a 2016-01-01
1 2 b 2016-01-02
2 3 c 2016-01-03
In [292]: df.to_json(orient="table", date_format="iso")
Out[292]: '{"schema":{"fields":[{"name":"idx","type":"integer"},{"name":"A","type":"integer"},{"name":"B","type":"string"},{"name":"C","type":"datetime"}],"primaryKey":["idx"],"pandas_version":"1.4.0"},"data":[{"idx":0,"A":1,"B":"a","C":"2016-01-01T00:00:00.000"},{"idx":1,"A":2,"B":"b","C":"2016-01-02T00:00:00.000"},{"idx":2,"A":3,"B":"c","C":"2016-01-03T00:00:00.000"}]}'
The schema field contains the fields key, which itself contains
a list of column name to type pairs, including the Index or MultiIndex
(see below for a list of types).
The schema field also contains a primaryKey field if the (Multi)index
is unique.
The second field, data, contains the serialized data with the records
orient.
The index is included, and any datetimes are ISO 8601 formatted, as required
by the Table Schema spec.
The full list of types supported are described in the Table Schema
spec. This table shows the mapping from pandas types:
pandas type       Table Schema type
int64             integer
float64           number
bool              boolean
datetime64[ns]    datetime
timedelta64[ns]   duration
categorical       any
object            str
A few notes on the generated table schema:
The schema object contains a pandas_version field. This contains
the version of pandas’ dialect of the schema, and will be incremented
with each revision.
All dates are converted to UTC when serializing, even timezone-naive values,
which are treated as UTC with an offset of 0.
In [293]: from pandas.io.json import build_table_schema
In [294]: s = pd.Series(pd.date_range("2016", periods=4))
In [295]: build_table_schema(s)
Out[295]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime'}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
datetimes with a timezone (before serializing) include an additional field
tz with the time zone name (e.g. 'US/Central').
In [296]: s_tz = pd.Series(pd.date_range("2016", periods=12, tz="US/Central"))
In [297]: build_table_schema(s_tz)
Out[297]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime', 'tz': 'US/Central'}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
Periods are converted to timestamps before serialization, and so have the
same behavior of being converted to UTC. In addition, periods will contain
an additional field freq with the period’s frequency, e.g. 'A-DEC'.
In [298]: s_per = pd.Series(1, index=pd.period_range("2016", freq="A-DEC", periods=4))
In [299]: build_table_schema(s_per)
Out[299]:
{'fields': [{'name': 'index', 'type': 'datetime', 'freq': 'A-DEC'},
{'name': 'values', 'type': 'integer'}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
Categoricals use the any type and an enum constraint listing
the set of possible values. Additionally, an ordered field is included:
In [300]: s_cat = pd.Series(pd.Categorical(["a", "b", "a"]))
In [301]: build_table_schema(s_cat)
Out[301]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values',
'type': 'any',
'constraints': {'enum': ['a', 'b']},
'ordered': False}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
A primaryKey field, containing an array of labels, is included
if the index is unique:
In [302]: s_dupe = pd.Series([1, 2], index=[1, 1])
In [303]: build_table_schema(s_dupe)
Out[303]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'pandas_version': '1.4.0'}
The primaryKey behavior is the same with MultiIndexes, but in this
case the primaryKey is an array:
In [304]: s_multi = pd.Series(1, index=pd.MultiIndex.from_product([("a", "b"), (0, 1)]))
In [305]: build_table_schema(s_multi)
Out[305]:
{'fields': [{'name': 'level_0', 'type': 'string'},
{'name': 'level_1', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'primaryKey': FrozenList(['level_0', 'level_1']),
'pandas_version': '1.4.0'}
The default naming roughly follows these rules (a small example follows the list):
For Series, the object.name is used. If that’s None, then the
name is values.
For DataFrames, the stringified version of the column name is used.
For Index (not MultiIndex), index.name is used, with a
fallback to index if that is None.
For MultiIndex, mi.names is used. If any level has no name,
then level_<i> is used.
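For instance, a minimal sketch of the Series rule (the name temperature is illustrative): a named Series keeps its own name, while an unnamed Series falls back to values.
from pandas.io.json import build_table_schema
s_named = pd.Series([1, 2, 3], name="temperature")
build_table_schema(s_named)["fields"]
# [{'name': 'index', 'type': 'integer'}, {'name': 'temperature', 'type': 'integer'}]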
read_json also accepts orient='table' as an argument. This allows for
the preservation of metadata such as dtypes and index names in a
round-trippable manner.
In [306]: df = pd.DataFrame(
.....: {
.....: "foo": [1, 2, 3, 4],
.....: "bar": ["a", "b", "c", "d"],
.....: "baz": pd.date_range("2018-01-01", freq="d", periods=4),
.....: "qux": pd.Categorical(["a", "b", "c", "c"]),
.....: },
.....: index=pd.Index(range(4), name="idx"),
.....: )
.....:
In [307]: df
Out[307]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
In [308]: df.dtypes
Out[308]:
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object
In [309]: df.to_json("test.json", orient="table")
In [310]: new_df = pd.read_json("test.json", orient="table")
In [311]: new_df
Out[311]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
In [312]: new_df.dtypes
Out[312]:
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object
Please note that the literal string ‘index’ as the name of an Index
is not round-trippable, nor are any names beginning with 'level_' within a
MultiIndex. These are used by default in DataFrame.to_json() to
indicate missing values and the subsequent read cannot distinguish the intent.
In [313]: df.index.name = "index"
In [314]: df.to_json("test.json", orient="table")
In [315]: new_df = pd.read_json("test.json", orient="table")
In [316]: print(new_df.index.name)
None
When using orient='table' along with user-defined ExtensionArray,
the generated schema will contain an additional extDtype key in the respective
fields element. This extra key is not standard but does enable JSON roundtrips
for extension types (e.g. read_json(df.to_json(orient="table"), orient="table")).
The extDtype key carries the name of the extension; if you have properly registered
the ExtensionDtype, pandas will use that name to perform a lookup into the registry
and re-convert the serialized data into your custom dtype.
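As a brief sketch (assuming a recent pandas, where the built-in nullable Int64 extension dtype is registered by default), such a round trip might look like:
ext_df = pd.DataFrame({"a": pd.array([1, 2, None], dtype="Int64")})
round_tripped = pd.read_json(ext_df.to_json(orient="table"), orient="table")
round_tripped.dtypes  # "a" comes back as Int64 via the extDtype lookup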
HTML#
Reading HTML content#
Warning
We highly encourage you to read the HTML Table Parsing gotchas
below regarding the issues surrounding the BeautifulSoup4/html5lib/lxml parsers.
The top-level read_html() function can accept an HTML
string/file/URL and will parse HTML tables into list of pandas DataFrames.
Let’s look at a few examples.
Note
read_html returns a list of DataFrame objects, even if there is
only a single table contained in the HTML content.
Read a URL with no options:
In [320]: "https://www.fdic.gov/resources/resolutions/bank-failures/failed-bank-list"
In [321]: pd.read_html(url)
Out[321]:
[ Bank NameBank CityCity StateSt ... Acquiring InstitutionAI Closing DateClosing FundFund
0 Almena State Bank Almena KS ... Equity Bank October 23, 2020 10538
1 First City Bank of Florida Fort Walton Beach FL ... United Fidelity Bank, fsb October 16, 2020 10537
2 The First State Bank Barboursville WV ... MVB Bank, Inc. April 3, 2020 10536
3 Ericson State Bank Ericson NE ... Farmers and Merchants Bank February 14, 2020 10535
4 City National Bank of New Jersey Newark NJ ... Industrial Bank November 1, 2019 10534
.. ... ... ... ... ... ... ...
558 Superior Bank, FSB Hinsdale IL ... Superior Federal, FSB July 27, 2001 6004
559 Malta National Bank Malta OH ... North Valley Bank May 3, 2001 4648
560 First Alliance Bank & Trust Co. Manchester NH ... Southern New Hampshire Bank & Trust February 2, 2001 4647
561 National State Bank of Metropolis Metropolis IL ... Banterra Bank of Marion December 14, 2000 4646
562 Bank of Honolulu Honolulu HI ... Bank of the Orient October 13, 2000 4645
[563 rows x 7 columns]]
Note
The data from the above URL changes every Monday so the resulting data above may be slightly different.
Read in the content of the file from the above URL and pass it to read_html
as a string:
In [317]: html_str = """
.....: <table>
.....: <tr>
.....: <th>A</th>
.....: <th colspan="1">B</th>
.....: <th rowspan="1">C</th>
.....: </tr>
.....: <tr>
.....: <td>a</td>
.....: <td>b</td>
.....: <td>c</td>
.....: </tr>
.....: </table>
.....: """
.....:
In [318]: with open("tmp.html", "w") as f:
.....: f.write(html_str)
.....:
In [319]: df = pd.read_html("tmp.html")
In [320]: df[0]
Out[320]:
A B C
0 a b c
You can even pass in an instance of StringIO if you so desire:
In [321]: dfs = pd.read_html(StringIO(html_str))
In [322]: dfs[0]
Out[322]:
A B C
0 a b c
Note
The following examples are not run by the IPython evaluator due to the fact
that having so many network-accessing functions slows down the documentation
build. If you spot an error or an example that doesn’t run, please do not
hesitate to report it over on pandas GitHub issues page.
Read a URL and match a table that contains specific text:
match = "Metcalf Bank"
df_list = pd.read_html(url, match=match)
Specify a header row (by default <th> or <td> elements located within a
<thead> are used to form the column index, if multiple rows are contained within
<thead> then a MultiIndex is created); if specified, the header row is taken
from the data minus the parsed header elements (<th> elements).
dfs = pd.read_html(url, header=0)
Specify an index column:
dfs = pd.read_html(url, index_col=0)
Specify a number of rows to skip:
dfs = pd.read_html(url, skiprows=0)
Specify a number of rows to skip using a list (range works
as well):
dfs = pd.read_html(url, skiprows=range(2))
Specify an HTML attribute:
dfs1 = pd.read_html(url, attrs={"id": "table"})
dfs2 = pd.read_html(url, attrs={"class": "sortable"})
print(np.array_equal(dfs1[0], dfs2[0])) # Should be True
Specify values that should be converted to NaN:
dfs = pd.read_html(url, na_values=["No Acquirer"])
Specify whether to keep the default set of NaN values:
dfs = pd.read_html(url, keep_default_na=False)
Specify converters for columns. This is useful for numerical text data that has
leading zeros. By default columns that are numerical are cast to numeric
types and the leading zeros are lost. To avoid this, we can convert these
columns to strings.
url_mcc = "https://en.wikipedia.org/wiki/Mobile_country_code"
dfs = pd.read_html(
url_mcc,
match="Telekom Albania",
header=0,
converters={"MNC": str},
)
Use some combination of the above:
dfs = pd.read_html(url, match="Metcalf Bank", index_col=0)
Read in pandas to_html output (with some loss of floating point precision):
df = pd.DataFrame(np.random.randn(2, 2))
s = df.to_html(float_format="{0:.40g}".format)
dfin = pd.read_html(s, index_col=0)
The lxml backend will raise an error on a failed parse if that is the only
parser you provide. If you only have a single parser you can provide just a
string, but it is considered good practice to pass a list with one string if,
for example, the function expects a sequence of strings. You may use:
dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor=["lxml"])
Or you could pass flavor='lxml' without a list:
dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor="lxml")
However, if you have bs4 and html5lib installed and pass None or ['lxml',
'bs4'] then the parse will most likely succeed. Note that as soon as a parse
succeeds, the function will return.
dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor=["lxml", "bs4"])
Links can be extracted from cells along with the text using extract_links="all".
In [323]: html_table = """
.....: <table>
.....: <tr>
.....: <th>GitHub</th>
.....: </tr>
.....: <tr>
.....: <td><a href="https://github.com/pandas-dev/pandas">pandas</a></td>
.....: </tr>
.....: </table>
.....: """
.....:
In [324]: df = pd.read_html(
.....: html_table,
.....: extract_links="all"
.....: )[0]
.....:
In [325]: df
Out[325]:
(GitHub, None)
0 (pandas, https://github.com/pandas-dev/pandas)
In [326]: df[("GitHub", None)]
Out[326]:
0 (pandas, https://github.com/pandas-dev/pandas)
Name: (GitHub, None), dtype: object
In [327]: df[("GitHub", None)].str[1]
Out[327]:
0 https://github.com/pandas-dev/pandas
Name: (GitHub, None), dtype: object
New in version 1.5.0.
Writing to HTML files#
DataFrame objects have an instance method to_html which renders the
contents of the DataFrame as an HTML table. The function arguments are as
in the method to_string described above.
Note
Not all of the possible options for DataFrame.to_html are shown here for
brevity’s sake. See to_html() for the
full set of options.
Note
In an HTML-rendering supported environment like a Jupyter Notebook, display(HTML(...))
will render the raw HTML into the environment.
In [328]: from IPython.display import display, HTML
In [329]: df = pd.DataFrame(np.random.randn(2, 2))
In [330]: df
Out[330]:
0 1
0 0.070319 1.773907
1 0.253908 0.414581
In [331]: html = df.to_html()
In [332]: print(html) # raw html
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.070319</td>
<td>1.773907</td>
</tr>
<tr>
<th>1</th>
<td>0.253908</td>
<td>0.414581</td>
</tr>
</tbody>
</table>
In [333]: display(HTML(html))
<IPython.core.display.HTML object>
The columns argument will limit the columns shown:
In [334]: html = df.to_html(columns=[0])
In [335]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.070319</td>
</tr>
<tr>
<th>1</th>
<td>0.253908</td>
</tr>
</tbody>
</table>
In [336]: display(HTML(html))
<IPython.core.display.HTML object>
float_format takes a Python callable to control the precision of floating
point values:
In [337]: html = df.to_html(float_format="{0:.10f}".format)
In [338]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.0703192665</td>
<td>1.7739074228</td>
</tr>
<tr>
<th>1</th>
<td>0.2539083433</td>
<td>0.4145805920</td>
</tr>
</tbody>
</table>
In [339]: display(HTML(html))
<IPython.core.display.HTML object>
bold_rows will make the row labels bold by default, but you can turn that
off:
In [340]: html = df.to_html(bold_rows=False)
In [341]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0.070319</td>
<td>1.773907</td>
</tr>
<tr>
<td>1</td>
<td>0.253908</td>
<td>0.414581</td>
</tr>
</tbody>
</table>
In [342]: display(HTML(html))
<IPython.core.display.HTML object>
The classes argument provides the ability to give the resulting HTML
table CSS classes. Note that these classes are appended to the existing
'dataframe' class.
In [343]: print(df.to_html(classes=["awesome_table_class", "even_more_awesome_class"]))
<table border="1" class="dataframe awesome_table_class even_more_awesome_class">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.070319</td>
<td>1.773907</td>
</tr>
<tr>
<th>1</th>
<td>0.253908</td>
<td>0.414581</td>
</tr>
</tbody>
</table>
The render_links argument provides the ability to add hyperlinks to cells
that contain URLs.
In [344]: url_df = pd.DataFrame(
.....: {
.....: "name": ["Python", "pandas"],
.....: "url": ["https://www.python.org/", "https://pandas.pydata.org"],
.....: }
.....: )
.....:
In [345]: html = url_df.to_html(render_links=True)
In [346]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>url</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Python</td>
<td><a href="https://www.python.org/" target="_blank">https://www.python.org/</a></td>
</tr>
<tr>
<th>1</th>
<td>pandas</td>
<td><a href="https://pandas.pydata.org" target="_blank">https://pandas.pydata.org</a></td>
</tr>
</tbody>
</table>
In [347]: display(HTML(html))
<IPython.core.display.HTML object>
Finally, the escape argument allows you to control whether the
“<”, “>” and “&” characters are escaped in the resulting HTML (by default it is
True). So to get the HTML without escaped characters, pass escape=False.
In [348]: df = pd.DataFrame({"a": list("&<>"), "b": np.random.randn(3)})
Escaped:
In [349]: html = df.to_html()
In [350]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&</td>
<td>0.842321</td>
</tr>
<tr>
<th>1</th>
<td><</td>
<td>0.211337</td>
</tr>
<tr>
<th>2</th>
<td>></td>
<td>-1.055427</td>
</tr>
</tbody>
</table>
In [351]: display(HTML(html))
<IPython.core.display.HTML object>
Not escaped:
In [352]: html = df.to_html(escape=False)
In [353]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&</td>
<td>0.842321</td>
</tr>
<tr>
<th>1</th>
<td><</td>
<td>0.211337</td>
</tr>
<tr>
<th>2</th>
<td>></td>
<td>-1.055427</td>
</tr>
</tbody>
</table>
In [354]: display(HTML(html))
<IPython.core.display.HTML object>
Note
Some browsers may not show a difference in the rendering of the previous two
HTML tables.
HTML Table Parsing Gotchas#
There are some versioning issues surrounding the libraries that are used to
parse HTML tables in the top-level pandas io function read_html.
Issues with lxml
Benefits
lxml is very fast.
lxml requires Cython to install correctly.
Drawbacks
lxml does not make any guarantees about the results of its parse
unless it is given strictly valid markup.
In light of the above, we have chosen to allow you, the user, to use the
lxml backend, but this backend will use html5lib if lxml
fails to parse.
It is therefore highly recommended that you install both
BeautifulSoup4 and html5lib, so that you will still get a valid
result (provided everything else is valid) even if lxml fails.
Issues with BeautifulSoup4 using lxml as a backend
The above issues hold here as well since BeautifulSoup4 is essentially
just a wrapper around a parser backend.
Issues with BeautifulSoup4 using html5lib as a backend
Benefits
html5lib is far more lenient than lxml and consequently deals
with real-life markup in a much saner way rather than just, e.g.,
dropping an element without notifying you.
html5lib generates valid HTML5 markup from invalid markup
automatically. This is extremely important for parsing HTML tables,
since it guarantees a valid document. However, that does NOT mean that
it is “correct”, since the process of fixing markup does not have a
single definition.
html5lib is pure Python and requires no additional build steps beyond
its own installation.
Drawbacks
The biggest drawback to using html5lib is that it is slow as
molasses. However, consider the fact that many tables on the web are not
big enough for the parsing algorithm runtime to matter. It is more
likely that the bottleneck will be in the process of reading the raw
text from the URL over the web, i.e., IO (input-output). For very large
tables, this might not be true.
LaTeX#
New in version 1.3.0.
Currently there are no methods to read from LaTeX, only output methods.
Writing to LaTeX files#
Note
DataFrame and Styler objects currently have a to_latex method. We recommend
using the Styler.to_latex() method
over DataFrame.to_latex() due to the former’s greater flexibility with
conditional styling, and the latter’s possible future deprecation.
Review the documentation for Styler.to_latex,
which gives examples of conditional styling and explains the operation of its keyword
arguments.
For simple application the following pattern is sufficient.
In [355]: df = pd.DataFrame([[1, 2], [3, 4]], index=["a", "b"], columns=["c", "d"])
In [356]: print(df.style.to_latex())
\begin{tabular}{lrr}
& c & d \\
a & 1 & 2 \\
b & 3 & 4 \\
\end{tabular}
To format values before output, chain the Styler.format
method.
In [357]: print(df.style.format("€ {}").to_latex())
\begin{tabular}{lrr}
& c & d \\
a & € 1 & € 2 \\
b & € 3 & € 4 \\
\end{tabular}
XML#
Reading XML#
New in version 1.3.0.
The top-level read_xml() function can accept an XML
string/file/URL and will parse nodes and attributes into a pandas DataFrame.
Note
Since there is no standard XML structure where design types can vary in
many ways, read_xml works best with flatter, shallow versions. If
an XML document is deeply nested, use the stylesheet feature to
transform XML into a flatter version.
Let’s look at a few examples.
Read an XML string:
In [358]: xml = """<?xml version="1.0" encoding="UTF-8"?>
.....: <bookstore>
.....: <book category="cooking">
.....: <title lang="en">Everyday Italian</title>
.....: <author>Giada De Laurentiis</author>
.....: <year>2005</year>
.....: <price>30.00</price>
.....: </book>
.....: <book category="children">
.....: <title lang="en">Harry Potter</title>
.....: <author>J K. Rowling</author>
.....: <year>2005</year>
.....: <price>29.99</price>
.....: </book>
.....: <book category="web">
.....: <title lang="en">Learning XML</title>
.....: <author>Erik T. Ray</author>
.....: <year>2003</year>
.....: <price>39.95</price>
.....: </book>
.....: </bookstore>"""
.....:
In [359]: df = pd.read_xml(xml)
In [360]: df
Out[360]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
Read a URL with no options:
In [361]: df = pd.read_xml("https://www.w3schools.com/xml/books.xml")
In [362]: df
Out[362]:
category title author year price cover
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00 None
1 children Harry Potter J K. Rowling 2005 29.99 None
2 web XQuery Kick Start Vaidyanathan Nagarajan 2003 49.99 None
3 web Learning XML Erik T. Ray 2003 39.95 paperback
Read in the content of the “books.xml” file and pass it to read_xml
as a string:
In [363]: file_path = "books.xml"
In [364]: with open(file_path, "w") as f:
.....: f.write(xml)
.....:
In [365]: with open(file_path, "r") as f:
.....: df = pd.read_xml(f.read())
.....:
In [366]: df
Out[366]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
Read in the content of the “books.xml” as instance of StringIO or
BytesIO and pass it to read_xml:
In [367]: with open(file_path, "r") as f:
.....: sio = StringIO(f.read())
.....:
In [368]: df = pd.read_xml(sio)
In [369]: df
Out[369]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
In [370]: with open(file_path, "rb") as f:
.....: bio = BytesIO(f.read())
.....:
In [371]: df = pd.read_xml(bio)
In [372]: df
Out[372]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
Even read XML from AWS S3 buckets such as NIH NCBI PMC Article Datasets providing
Biomedical and Life Science Journals:
In [373]: df = pd.read_xml(
.....: "s3://pmc-oa-opendata/oa_comm/xml/all/PMC1236943.xml",
.....: xpath=".//journal-meta",
.....: )
.....:
In [374]: df
Out[374]:
journal-id journal-title issn publisher
0 Cardiovasc Ultrasound Cardiovascular Ultrasound 1476-7120 NaN
With lxml as default parser, you access the full-featured XML library
that extends Python’s ElementTree API. One powerful tool is ability to query
nodes selectively or conditionally with more expressive XPath:
In [375]: df = pd.read_xml(file_path, xpath="//book[year=2005]")
In [376]: df
Out[376]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
Specify only elements or only attributes to parse:
In [377]: df = pd.read_xml(file_path, elems_only=True)
In [378]: df
Out[378]:
title author year price
0 Everyday Italian Giada De Laurentiis 2005 30.00
1 Harry Potter J K. Rowling 2005 29.99
2 Learning XML Erik T. Ray 2003 39.95
In [379]: df = pd.read_xml(file_path, attrs_only=True)
In [380]: df
Out[380]:
category
0 cooking
1 children
2 web
XML documents can have namespaces with prefixes and default namespaces without
prefixes both of which are denoted with a special attribute xmlns. In order
to parse by node under a namespace context, xpath must reference a prefix.
For example, below XML contains a namespace with prefix, doc, and URI at
https://example.com. In order to parse doc:row nodes,
namespaces must be used.
In [381]: xml = """<?xml version='1.0' encoding='utf-8'?>
.....: <doc:data xmlns:doc="https://example.com">
.....: <doc:row>
.....: <doc:shape>square</doc:shape>
.....: <doc:degrees>360</doc:degrees>
.....: <doc:sides>4.0</doc:sides>
.....: </doc:row>
.....: <doc:row>
.....: <doc:shape>circle</doc:shape>
.....: <doc:degrees>360</doc:degrees>
.....: <doc:sides/>
.....: </doc:row>
.....: <doc:row>
.....: <doc:shape>triangle</doc:shape>
.....: <doc:degrees>180</doc:degrees>
.....: <doc:sides>3.0</doc:sides>
.....: </doc:row>
.....: </doc:data>"""
.....:
In [382]: df = pd.read_xml(xml,
.....: xpath="//doc:row",
.....: namespaces={"doc": "https://example.com"})
.....:
In [383]: df
Out[383]:
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0
Similarly, an XML document can have a default namespace without prefix. Failing
to assign a temporary prefix will return no nodes and raise a ValueError.
But assigning any temporary name to the correct URI allows parsing by nodes.
In [384]: xml = """<?xml version='1.0' encoding='utf-8'?>
.....: <data xmlns="https://example.com">
.....: <row>
.....: <shape>square</shape>
.....: <degrees>360</degrees>
.....: <sides>4.0</sides>
.....: </row>
.....: <row>
.....: <shape>circle</shape>
.....: <degrees>360</degrees>
.....: <sides/>
.....: </row>
.....: <row>
.....: <shape>triangle</shape>
.....: <degrees>180</degrees>
.....: <sides>3.0</sides>
.....: </row>
.....: </data>"""
.....:
In [385]: df = pd.read_xml(xml,
.....: xpath="//pandas:row",
.....: namespaces={"pandas": "https://example.com"})
.....:
In [386]: df
Out[386]:
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0
However, if the XPath does not reference any node names (for example, /*), then
namespaces is not required.
With lxml as parser, you can flatten nested XML documents with an XSLT
script which also can be string/file/URL types. As background, XSLT is
a special-purpose language written in a special XML file that can transform
original XML documents into other XML, HTML, even text (CSV, JSON, etc.)
using an XSLT processor.
For example, consider this somewhat nested structure of Chicago “L” Rides
where station and rides elements encapsulate data in their own sections.
With below XSLT, lxml can transform original nested document into a flatter
output (as shown below for demonstration) for easier parse into DataFrame:
In [387]: xml = """<?xml version='1.0' encoding='utf-8'?>
.....: <response>
.....: <row>
.....: <station id="40850" name="Library"/>
.....: <month>2020-09-01T00:00:00</month>
.....: <rides>
.....: <avg_weekday_rides>864.2</avg_weekday_rides>
.....: <avg_saturday_rides>534</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>417.2</avg_sunday_holiday_rides>
.....: </rides>
.....: </row>
.....: <row>
.....: <station id="41700" name="Washington/Wabash"/>
.....: <month>2020-09-01T00:00:00</month>
.....: <rides>
.....: <avg_weekday_rides>2707.4</avg_weekday_rides>
.....: <avg_saturday_rides>1909.8</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1438.6</avg_sunday_holiday_rides>
.....: </rides>
.....: </row>
.....: <row>
.....: <station id="40380" name="Clark/Lake"/>
.....: <month>2020-09-01T00:00:00</month>
.....: <rides>
.....: <avg_weekday_rides>2949.6</avg_weekday_rides>
.....: <avg_saturday_rides>1657</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1453.8</avg_sunday_holiday_rides>
.....: </rides>
.....: </row>
.....: </response>"""
.....:
In [388]: xsl = """<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
.....: <xsl:output method="xml" omit-xml-declaration="no" indent="yes"/>
.....: <xsl:strip-space elements="*"/>
.....: <xsl:template match="/response">
.....: <xsl:copy>
.....: <xsl:apply-templates select="row"/>
.....: </xsl:copy>
.....: </xsl:template>
.....: <xsl:template match="row">
.....: <xsl:copy>
.....: <station_id><xsl:value-of select="station/@id"/></station_id>
.....: <station_name><xsl:value-of select="station/@name"/></station_name>
.....: <xsl:copy-of select="month|rides/*"/>
.....: </xsl:copy>
.....: </xsl:template>
.....: </xsl:stylesheet>"""
.....:
In [389]: output = """<?xml version='1.0' encoding='utf-8'?>
.....: <response>
.....: <row>
.....: <station_id>40850</station_id>
.....: <station_name>Library</station_name>
.....: <month>2020-09-01T00:00:00</month>
.....: <avg_weekday_rides>864.2</avg_weekday_rides>
.....: <avg_saturday_rides>534</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>417.2</avg_sunday_holiday_rides>
.....: </row>
.....: <row>
.....: <station_id>41700</station_id>
.....: <station_name>Washington/Wabash</station_name>
.....: <month>2020-09-01T00:00:00</month>
.....: <avg_weekday_rides>2707.4</avg_weekday_rides>
.....: <avg_saturday_rides>1909.8</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1438.6</avg_sunday_holiday_rides>
.....: </row>
.....: <row>
.....: <station_id>40380</station_id>
.....: <station_name>Clark/Lake</station_name>
.....: <month>2020-09-01T00:00:00</month>
.....: <avg_weekday_rides>2949.6</avg_weekday_rides>
.....: <avg_saturday_rides>1657</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1453.8</avg_sunday_holiday_rides>
.....: </row>
.....: </response>"""
.....:
In [390]: df = pd.read_xml(xml, stylesheet=xsl)
In [391]: df
Out[391]:
station_id station_name ... avg_saturday_rides avg_sunday_holiday_rides
0 40850 Library ... 534.0 417.2
1 41700 Washington/Wabash ... 1909.8 1438.6
2 40380 Clark/Lake ... 1657.0 1453.8
[3 rows x 6 columns]
For very large XML files that can range from hundreds of megabytes to gigabytes, pandas.read_xml()
supports parsing such sizeable files using lxml’s iterparse and etree’s iterparse,
which are memory-efficient methods to iterate through an XML tree and extract specific elements
and attributes without holding the entire tree in memory.
New in version 1.5.0.
To use this feature, you must pass a physical XML file path into read_xml and use the iterparse argument.
Files should not be compressed or point to online sources but stored on local disk. Also, iterparse should be
a dictionary where the key is the repeating node in the document (which becomes the rows) and the value is a list of
any element or attribute that is a descendant (i.e., child, grandchild) of the repeating node. Since XPath is not
used in this method, descendants do not need to share the same relationship with one another. Below is an example
of reading in Wikipedia’s very large (12 GB+) latest article data dump.
In [1]: df = pd.read_xml(
... "/path/to/downloaded/enwikisource-latest-pages-articles.xml",
... iterparse = {"page": ["title", "ns", "id"]}
... )
... df
Out[2]:
title ns id
0 Gettysburg Address 0 21450
1 Main Page 0 42950
2 Declaration by United Nations 0 8435
3 Constitution of the United States of America 0 8435
4 Declaration of Independence (Israel) 0 17858
... ... ... ...
3578760 Page:Black cat 1897 07 v2 n10.pdf/17 104 219649
3578761 Page:Black cat 1897 07 v2 n10.pdf/43 104 219649
3578762 Page:Black cat 1897 07 v2 n10.pdf/44 104 219649
3578763 The History of Tom Jones, a Foundling/Book IX 0 12084291
3578764 Page:Shakespeare of Stratford (1926) Yale.djvu/91 104 21450
[3578765 rows x 3 columns]
Writing XML#
New in version 1.3.0.
DataFrame objects have an instance method to_xml which renders the
contents of the DataFrame as an XML document.
Note
This method does not support special properties of XML including DTD,
CData, XSD schemas, processing instructions, comments, and others.
Only namespaces at the root level are supported. However, stylesheet
allows design changes after the initial output.
Let’s look at a few examples.
Write an XML without options:
In [392]: geom_df = pd.DataFrame(
.....: {
.....: "shape": ["square", "circle", "triangle"],
.....: "degrees": [360, 360, 180],
.....: "sides": [4, np.nan, 3],
.....: }
.....: )
.....:
In [393]: print(geom_df.to_xml())
<?xml version='1.0' encoding='utf-8'?>
<data>
<row>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</row>
<row>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
Write an XML with new root and row name:
In [394]: print(geom_df.to_xml(root_name="geometry", row_name="objects"))
<?xml version='1.0' encoding='utf-8'?>
<geometry>
<objects>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</objects>
<objects>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</objects>
<objects>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</objects>
</geometry>
Write an attribute-centric XML:
In [395]: print(geom_df.to_xml(attr_cols=geom_df.columns.tolist()))
<?xml version='1.0' encoding='utf-8'?>
<data>
<row index="0" shape="square" degrees="360" sides="4.0"/>
<row index="1" shape="circle" degrees="360"/>
<row index="2" shape="triangle" degrees="180" sides="3.0"/>
</data>
Write a mix of elements and attributes:
In [396]: print(
.....: geom_df.to_xml(
.....: index=False,
.....: attr_cols=['shape'],
.....: elem_cols=['degrees', 'sides'])
.....: )
.....:
<?xml version='1.0' encoding='utf-8'?>
<data>
<row shape="square">
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row shape="circle">
<degrees>360</degrees>
<sides/>
</row>
<row shape="triangle">
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
Any DataFrames with hierarchical columns will be flattened for XML element names
with levels delimited by underscores:
In [397]: ext_geom_df = pd.DataFrame(
.....: {
.....: "type": ["polygon", "other", "polygon"],
.....: "shape": ["square", "circle", "triangle"],
.....: "degrees": [360, 360, 180],
.....: "sides": [4, np.nan, 3],
.....: }
.....: )
.....:
In [398]: pvt_df = ext_geom_df.pivot_table(index='shape',
.....: columns='type',
.....: values=['degrees', 'sides'],
.....: aggfunc='sum')
.....:
In [399]: pvt_df
Out[399]:
degrees sides
type other polygon other polygon
shape
circle 360.0 NaN 0.0 NaN
square NaN 360.0 NaN 4.0
triangle NaN 180.0 NaN 3.0
In [400]: print(pvt_df.to_xml())
<?xml version='1.0' encoding='utf-8'?>
<data>
<row>
<shape>circle</shape>
<degrees_other>360.0</degrees_other>
<degrees_polygon/>
<sides_other>0.0</sides_other>
<sides_polygon/>
</row>
<row>
<shape>square</shape>
<degrees_other/>
<degrees_polygon>360.0</degrees_polygon>
<sides_other/>
<sides_polygon>4.0</sides_polygon>
</row>
<row>
<shape>triangle</shape>
<degrees_other/>
<degrees_polygon>180.0</degrees_polygon>
<sides_other/>
<sides_polygon>3.0</sides_polygon>
</row>
</data>
Write an XML with default namespace:
In [401]: print(geom_df.to_xml(namespaces={"": "https://example.com"}))
<?xml version='1.0' encoding='utf-8'?>
<data xmlns="https://example.com">
<row>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</row>
<row>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
Write an XML with namespace prefix:
In [402]: print(
.....: geom_df.to_xml(namespaces={"doc": "https://example.com"},
.....: prefix="doc")
.....: )
.....:
<?xml version='1.0' encoding='utf-8'?>
<doc:data xmlns:doc="https://example.com">
<doc:row>
<doc:index>0</doc:index>
<doc:shape>square</doc:shape>
<doc:degrees>360</doc:degrees>
<doc:sides>4.0</doc:sides>
</doc:row>
<doc:row>
<doc:index>1</doc:index>
<doc:shape>circle</doc:shape>
<doc:degrees>360</doc:degrees>
<doc:sides/>
</doc:row>
<doc:row>
<doc:index>2</doc:index>
<doc:shape>triangle</doc:shape>
<doc:degrees>180</doc:degrees>
<doc:sides>3.0</doc:sides>
</doc:row>
</doc:data>
Write an XML without declaration or pretty print:
In [403]: print(
.....: geom_df.to_xml(xml_declaration=False,
.....: pretty_print=False)
.....: )
.....:
<data><row><index>0</index><shape>square</shape><degrees>360</degrees><sides>4.0</sides></row><row><index>1</index><shape>circle</shape><degrees>360</degrees><sides/></row><row><index>2</index><shape>triangle</shape><degrees>180</degrees><sides>3.0</sides></row></data>
Write an XML and transform with stylesheet:
In [404]: xsl = """<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
.....: <xsl:output method="xml" omit-xml-declaration="no" indent="yes"/>
.....: <xsl:strip-space elements="*"/>
.....: <xsl:template match="/data">
.....: <geometry>
.....: <xsl:apply-templates select="row"/>
.....: </geometry>
.....: </xsl:template>
.....: <xsl:template match="row">
.....: <object index="{index}">
.....: <xsl:if test="shape!='circle'">
.....: <xsl:attribute name="type">polygon</xsl:attribute>
.....: </xsl:if>
.....: <xsl:copy-of select="shape"/>
.....: <property>
.....: <xsl:copy-of select="degrees|sides"/>
.....: </property>
.....: </object>
.....: </xsl:template>
.....: </xsl:stylesheet>"""
.....:
In [405]: print(geom_df.to_xml(stylesheet=xsl))
<?xml version="1.0"?>
<geometry>
<object index="0" type="polygon">
<shape>square</shape>
<property>
<degrees>360</degrees>
<sides>4.0</sides>
</property>
</object>
<object index="1">
<shape>circle</shape>
<property>
<degrees>360</degrees>
<sides/>
</property>
</object>
<object index="2" type="polygon">
<shape>triangle</shape>
<property>
<degrees>180</degrees>
<sides>3.0</sides>
</property>
</object>
</geometry>
XML Final Notes#
All XML documents adhere to W3C specifications. Both etree and lxml
parsers will fail to parse any markup document that is not well-formed or
follows XML syntax rules. Do be aware HTML is not an XML document unless it
follows XHTML specs. However, other popular markup types including KML, XAML,
RSS, MusicML, MathML are compliant XML schemas.
For the above reason, if your application builds XML prior to pandas operations,
use appropriate DOM libraries like etree and lxml to build the necessary
document rather than string concatenation or regex adjustments. Always remember
that XML is a special text file with markup rules.
With very large XML files (several hundred MBs to GBs), XPath and XSLT
can become memory-intensive operations. Be sure to have enough available
RAM for reading and writing to large XML files (roughly about 5 times the
size of text).
Because XSLT is a programming language, use it with caution since such scripts
can pose a security risk in your environment and can run large or infinite
recursive operations. Always test scripts on small fragments before full run.
The etree parser supports all functionality of both read_xml and
to_xml except for complex XPath and any XSLT. Though limited in features,
etree is still a reliable and capable parser and tree builder. Its
performance may trail lxml to a certain degree for larger files but
relatively unnoticeable on small to medium size files.
Excel files#
The read_excel() method can read Excel 2007+ (.xlsx) files
using the openpyxl Python module. Excel 2003 (.xls) files
can be read using xlrd. Binary Excel (.xlsb)
files can be read using pyxlsb.
The to_excel() instance method is used for
saving a DataFrame to Excel. Generally the semantics are
similar to working with csv data.
See the cookbook for some advanced strategies.
Warning
The xlwt package for writing old-style .xls
Excel files is no longer maintained.
The xlrd package is now only for reading
old-style .xls files.
Before pandas 1.3.0, the default argument engine=None to read_excel()
would result in using the xlrd engine in many cases, including new
Excel 2007+ (.xlsx) files. pandas will now default to using the
openpyxl engine.
It is strongly encouraged to install openpyxl to read Excel 2007+
(.xlsx) files.
Please do not report issues when using xlrd to read .xlsx files.
This is no longer supported; switch to using openpyxl instead.
Attempting to use the xlwt engine will raise a FutureWarning
unless the option io.excel.xls.writer is set to "xlwt".
While this option is now deprecated and will also raise a FutureWarning,
it can be globally set and the warning suppressed. Users are recommended to
write .xlsx files using the openpyxl engine instead.
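If you must keep writing .xls files for now, the option can be set globally while silencing the deprecation warning (a sketch of the workaround described above, assuming the xlwt package is installed; the openpyxl/xlsxwriter route is preferred):
import warnings

with warnings.catch_warnings():
    warnings.simplefilter("ignore", FutureWarning)
    pd.set_option("io.excel.xls.writer", "xlwt")

df.to_excel("path_to_file.xls", sheet_name="Sheet1")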
Reading Excel files#
In the most basic use-case, read_excel takes a path to an Excel
file, and the sheet_name indicating which sheet to parse.
# Returns a DataFrame
pd.read_excel("path_to_file.xls", sheet_name="Sheet1")
ExcelFile class#
To facilitate working with multiple sheets from the same file, the ExcelFile
class can be used to wrap the file and can be passed into read_excel
There will be a performance benefit for reading multiple sheets as the file is
read into memory only once.
xlsx = pd.ExcelFile("path_to_file.xls")
df = pd.read_excel(xlsx, "Sheet1")
The ExcelFile class can also be used as a context manager.
with pd.ExcelFile("path_to_file.xls") as xls:
    df1 = pd.read_excel(xls, "Sheet1")
    df2 = pd.read_excel(xls, "Sheet2")
The sheet_names property will generate
a list of the sheet names in the file.
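For example (a sketch, reusing the hypothetical file above):
with pd.ExcelFile("path_to_file.xls") as xls:
    print(xls.sheet_names)  # e.g. ['Sheet1', 'Sheet2']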
The primary use-case for an ExcelFile is parsing multiple sheets with
different parameters:
data = {}
# For when Sheet1's format differs from Sheet2
with pd.ExcelFile("path_to_file.xls") as xls:
    data["Sheet1"] = pd.read_excel(xls, "Sheet1", index_col=None, na_values=["NA"])
    data["Sheet2"] = pd.read_excel(xls, "Sheet2", index_col=1)
Note that if the same parsing parameters are used for all sheets, a list
of sheet names can simply be passed to read_excel with no loss in performance.
# using the ExcelFile class
data = {}
with pd.ExcelFile("path_to_file.xls") as xls:
    data["Sheet1"] = pd.read_excel(xls, "Sheet1", index_col=None, na_values=["NA"])
    data["Sheet2"] = pd.read_excel(xls, "Sheet2", index_col=None, na_values=["NA"])
# equivalent using the read_excel function
data = pd.read_excel(
"path_to_file.xls", ["Sheet1", "Sheet2"], index_col=None, na_values=["NA"]
)
ExcelFile can also be called with a xlrd.book.Book object
as a parameter. This allows the user to control how the excel file is read.
For example, sheets can be loaded on demand by calling xlrd.open_workbook()
with on_demand=True.
import xlrd
xlrd_book = xlrd.open_workbook("path_to_file.xls", on_demand=True)
with pd.ExcelFile(xlrd_book) as xls:
    df1 = pd.read_excel(xls, "Sheet1")
    df2 = pd.read_excel(xls, "Sheet2")
Specifying sheets#
Note
The second argument is sheet_name, not to be confused with ExcelFile.sheet_names.
Note
An ExcelFile’s attribute sheet_names provides access to a list of sheets.
The argument sheet_name allows specifying the sheet or sheets to read.
The default value for sheet_name is 0, indicating to read the first sheet.
Pass a string to refer to the name of a particular sheet in the workbook.
Pass an integer to refer to the index of a sheet. Indices follow Python
convention, beginning at 0.
Pass a list of either strings or integers, to return a dictionary of specified sheets.
Pass a None to return a dictionary of all available sheets.
# Returns a DataFrame
pd.read_excel("path_to_file.xls", "Sheet1", index_col=None, na_values=["NA"])
Using the sheet index:
# Returns a DataFrame
pd.read_excel("path_to_file.xls", 0, index_col=None, na_values=["NA"])
Using all default values:
# Returns a DataFrame
pd.read_excel("path_to_file.xls")
Using None to get all sheets:
# Returns a dictionary of DataFrames
pd.read_excel("path_to_file.xls", sheet_name=None)
Using a list to get multiple sheets:
# Returns the 1st and 4th sheet, as a dictionary of DataFrames.
pd.read_excel("path_to_file.xls", sheet_name=["Sheet1", 3])
read_excel can read more than one sheet, by setting sheet_name to either
a list of sheet names, a list of sheet positions, or None to read all sheets.
Sheets can be specified by sheet index or sheet name, using an integer or string,
respectively.
Reading a MultiIndex#
read_excel can read a MultiIndex index, by passing a list of columns to index_col
and a MultiIndex column by passing a list of rows to header. If either the index
or columns have serialized level names those will be read in as well by specifying
the rows/columns that make up the levels.
For example, to read in a MultiIndex index without names:
In [406]: df = pd.DataFrame(
.....: {"a": [1, 2, 3, 4], "b": [5, 6, 7, 8]},
.....: index=pd.MultiIndex.from_product([["a", "b"], ["c", "d"]]),
.....: )
.....:
In [407]: df.to_excel("path_to_file.xlsx")
In [408]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1])
In [409]: df
Out[409]:
a b
a c 1 5
d 2 6
b c 3 7
d 4 8
If the index has level names, they will be parsed as well, using the same
parameters.
In [410]: df.index = df.index.set_names(["lvl1", "lvl2"])
In [411]: df.to_excel("path_to_file.xlsx")
In [412]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1])
In [413]: df
Out[413]:
a b
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
If the source file has both MultiIndex index and columns, lists specifying each
should be passed to index_col and header:
In [414]: df.columns = pd.MultiIndex.from_product([["a"], ["b", "d"]], names=["c1", "c2"])
In [415]: df.to_excel("path_to_file.xlsx")
In [416]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1], header=[0, 1])
In [417]: df
Out[417]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
Missing values in columns specified in index_col will be forward filled to
allow roundtripping with to_excel for merged_cells=True. To avoid forward
filling the missing values use set_index after reading the data instead of
index_col.
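In outline (a sketch; "col1" and "col2" stand in for whichever columns hold the index values):
df = pd.read_excel("path_to_file.xlsx")   # read index levels as ordinary columns
df = df.set_index(["col1", "col2"])       # no forward filling of missing cells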
Parsing specific columns#
It is often the case that users will insert columns to do temporary computations
in Excel and you may not want to read in those columns. read_excel takes
a usecols keyword to allow you to specify a subset of columns to parse.
Changed in version 1.0.0.
Passing in an integer for usecols will no longer work. Please pass in a list
of ints from 0 to usecols inclusive instead.
You can specify a comma-delimited set of Excel columns and ranges as a string:
pd.read_excel("path_to_file.xls", "Sheet1", usecols="A,C:E")
If usecols is a list of integers, then it is assumed to be the file column
indices to be parsed.
pd.read_excel("path_to_file.xls", "Sheet1", usecols=[0, 2, 3])
Element order is ignored, so usecols=[0, 1] is the same as [1, 0].
If usecols is a list of strings, it is assumed that each string corresponds
to a column name provided either by the user in names or inferred from the
document header row(s). Those strings define which columns will be parsed:
pd.read_excel("path_to_file.xls", "Sheet1", usecols=["foo", "bar"])
Element order is ignored, so usecols=['baz', 'joe'] is the same as ['joe', 'baz'].
If usecols is callable, the callable function will be evaluated against
the column names, returning names where the callable function evaluates to True.
pd.read_excel("path_to_file.xls", "Sheet1", usecols=lambda x: x.isalpha())
Parsing dates#
Datetime-like values are normally automatically converted to the appropriate
dtype when reading the excel file. But if you have a column of strings that
look like dates (but are not actually formatted as dates in excel), you can
use the parse_dates keyword to parse those strings to datetimes:
pd.read_excel("path_to_file.xls", "Sheet1", parse_dates=["date_strings"])
Cell converters#
It is possible to transform the contents of Excel cells via the converters
option. For instance, to convert a column to boolean:
pd.read_excel("path_to_file.xls", "Sheet1", converters={"MyBools": bool})
This option handles missing values and treats exceptions in the converters
as missing data. Transformations are applied cell by cell rather than to the
column as a whole, so the array dtype is not guaranteed. For instance, a
column of integers with missing values cannot be transformed to an array
with integer dtype, because NaN is strictly a float. You can manually mask
missing data to recover integer dtype:
def cfun(x):
    return int(x) if x else -1
pd.read_excel("path_to_file.xls", "Sheet1", converters={"MyInts": cfun})
Dtype specifications#
As an alternative to converters, the type for an entire column can
be specified using the dtype keyword, which takes a dictionary
mapping column names to types. To interpret data with
no type inference, use the type str or object.
pd.read_excel("path_to_file.xls", dtype={"MyInts": "int64", "MyText": str})
Writing Excel files#
Writing Excel files to disk#
To write a DataFrame object to a sheet of an Excel file, you can use the
to_excel instance method. The arguments are largely the same as to_csv
described above, the first argument being the name of the excel file, and the
optional second argument the name of the sheet to which the DataFrame should be
written. For example:
df.to_excel("path_to_file.xlsx", sheet_name="Sheet1")
Files with a .xls extension will be written using xlwt and those with a
.xlsx extension will be written using xlsxwriter (if available) or
openpyxl.
The DataFrame will be written in a way that tries to mimic the REPL output.
The index_label will be placed in the second
row instead of the first. You can place it in the first row by setting the
merge_cells option in to_excel() to False:
df.to_excel("path_to_file.xlsx", index_label="label", merge_cells=False)
In order to write separate DataFrames to separate sheets in a single Excel file,
one can pass an ExcelWriter.
with pd.ExcelWriter("path_to_file.xlsx") as writer:
    df1.to_excel(writer, sheet_name="Sheet1")
    df2.to_excel(writer, sheet_name="Sheet2")
Writing Excel files to memory#
pandas supports writing Excel files to buffer-like objects such as StringIO or
BytesIO using ExcelWriter.
from io import BytesIO
bio = BytesIO()
# By setting the 'engine' in the ExcelWriter constructor.
writer = pd.ExcelWriter(bio, engine="xlsxwriter")
df.to_excel(writer, sheet_name="Sheet1")
# Save the workbook
writer.save()
# Seek to the beginning and read to copy the workbook to a variable in memory
bio.seek(0)
workbook = bio.read()
Note
engine is optional but recommended. Setting the engine determines
the version of workbook produced. Setting engine='xlwt' will produce an
Excel 2003-format workbook (xls). Using either 'openpyxl' or
'xlsxwriter' will produce an Excel 2007-format workbook (xlsx). If
omitted, an Excel 2007-formatted workbook is produced.
Excel writer engines#
Deprecated since version 1.2.0: As the xlwt package is no longer
maintained, the xlwt engine will be removed from a future version
of pandas. This is the only engine in pandas that supports writing to
.xls files.
pandas chooses an Excel writer via two methods:
the engine keyword argument
the filename extension (via the default specified in config options)
By default, pandas uses the XlsxWriter for .xlsx, openpyxl
for .xlsm, and xlwt for .xls files. If you have multiple
engines installed, you can set the default engine through setting the
config options io.excel.xlsx.writer and
io.excel.xls.writer. pandas will fall back on openpyxl for .xlsx
files if Xlsxwriter is not available.
To specify which writer you want to use, you can pass an engine keyword
argument to to_excel and to ExcelWriter. The built-in engines are:
openpyxl: version 2.4 or higher is required
xlsxwriter
xlwt
# By setting the 'engine' in the DataFrame 'to_excel()' methods.
df.to_excel("path_to_file.xlsx", sheet_name="Sheet1", engine="xlsxwriter")
# By setting the 'engine' in the ExcelWriter constructor.
writer = pd.ExcelWriter("path_to_file.xlsx", engine="xlsxwriter")
# Or via pandas configuration.
from pandas import options # noqa: E402
options.io.excel.xlsx.writer = "xlsxwriter"
df.to_excel("path_to_file.xlsx", sheet_name="Sheet1")
Style and formatting#
The look and feel of Excel worksheets created from pandas can be modified using the following parameters on the DataFrame’s to_excel method (a short sketch follows the list).
float_format : Format string for floating point numbers (default None).
freeze_panes : A tuple of two integers representing the bottommost row and rightmost column to freeze. Each of these parameters is one-based, so (1, 1) will freeze the first row and first column (default None).
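For example, both options can be combined in a single call (a minimal sketch; the file name is illustrative):
df.to_excel(
    "path_to_file.xlsx",
    sheet_name="Sheet1",
    float_format="%.2f",   # format floats with two decimal places
    freeze_panes=(1, 1),   # keep the first row and first column visible when scrolling
)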
Using the Xlsxwriter engine provides many options for controlling the
format of an Excel worksheet created with the to_excel method. Excellent examples can be found in the
Xlsxwriter documentation here: https://xlsxwriter.readthedocs.io/working_with_pandas.html
OpenDocument Spreadsheets#
New in version 0.25.
The read_excel() method can also read OpenDocument spreadsheets
using the odfpy module. The semantics and features for reading
OpenDocument spreadsheets match what can be done for Excel files using
engine='odf'.
# Returns a DataFrame
pd.read_excel("path_to_file.ods", engine="odf")
Note
Currently pandas only supports reading OpenDocument spreadsheets. Writing
is not implemented.
Binary Excel (.xlsb) files#
New in version 1.0.0.
The read_excel() method can also read binary Excel files
using the pyxlsb module. The semantics and features for reading
binary Excel files mostly match what can be done for Excel files using
engine='pyxlsb'. pyxlsb does not recognize datetime types
in files and will return floats instead.
# Returns a DataFrame
pd.read_excel("path_to_file.xlsb", engine="pyxlsb")
Note
Currently pandas only supports reading binary Excel files. Writing
is not implemented.
Clipboard#
A handy way to grab data is to use the read_clipboard() method,
which takes the contents of the clipboard buffer and passes them to the
read_csv method. For instance, you can copy the following text to the
clipboard (CTRL-C on many operating systems):
A B C
x 1 4 p
y 2 5 q
z 3 6 r
And then import the data directly to a DataFrame by calling:
>>> clipdf = pd.read_clipboard()
>>> clipdf
A B C
x 1 4 p
y 2 5 q
z 3 6 r
The to_clipboard method can be used to write the contents of a DataFrame to
the clipboard. You can then paste the clipboard contents into other
applications (CTRL-V on many operating systems). Here we illustrate writing a
DataFrame to the clipboard and reading it back.
>>> df = pd.DataFrame(
... {"A": [1, 2, 3], "B": [4, 5, 6], "C": ["p", "q", "r"]}, index=["x", "y", "z"]
... )
>>> df
A B C
x 1 4 p
y 2 5 q
z 3 6 r
>>> df.to_clipboard()
>>> pd.read_clipboard()
A B C
x 1 4 p
y 2 5 q
z 3 6 r
We can see that we got back the same content that we had earlier written to the clipboard.
Note
You may need to install xclip or xsel (with PyQt5, PyQt4 or qtpy) on Linux to use these methods.
Pickling#
All pandas objects are equipped with to_pickle methods which use Python's
pickle module to save data structures to disk using the pickle format.
In [418]: df
Out[418]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
In [419]: df.to_pickle("foo.pkl")
The read_pickle function in the pandas namespace can be used to load
any pickled pandas object (or any other pickled object) from file:
In [420]: pd.read_pickle("foo.pkl")
Out[420]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
Warning
Loading pickled data received from untrusted sources can be unsafe.
See: https://docs.python.org/3/library/pickle.html
Warning
read_pickle() is only guaranteed backwards compatible back to pandas version 0.20.3
Compressed pickle files#
read_pickle(), DataFrame.to_pickle() and Series.to_pickle() can read
and write compressed pickle files. The compression types of gzip, bz2, xz, zstd are supported for reading and writing.
The zip file format only supports reading and must contain only one data file
to be read.
The compression type can be an explicit parameter or be inferred from the file extension.
If ‘infer’, then use gzip, bz2, zip, xz, zstd if filename ends in '.gz', '.bz2', '.zip',
'.xz', or '.zst', respectively.
The compression parameter can also be a dict in order to pass options to the
compression protocol. It must have a 'method' key set to the name
of the compression protocol, which must be one of
{'zip', 'gzip', 'bz2', 'xz', 'zstd'}. All other key-value pairs are passed to
the underlying compression library.
In [421]: df = pd.DataFrame(
.....: {
.....: "A": np.random.randn(1000),
.....: "B": "foo",
.....: "C": pd.date_range("20130101", periods=1000, freq="s"),
.....: }
.....: )
.....:
In [422]: df
Out[422]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
Using an explicit compression type:
In [423]: df.to_pickle("data.pkl.compress", compression="gzip")
In [424]: rt = pd.read_pickle("data.pkl.compress", compression="gzip")
In [425]: rt
Out[425]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
Inferring compression type from the extension:
In [426]: df.to_pickle("data.pkl.xz", compression="infer")
In [427]: rt = pd.read_pickle("data.pkl.xz", compression="infer")
In [428]: rt
Out[428]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
The default is to ‘infer’:
In [429]: df.to_pickle("data.pkl.gz")
In [430]: rt = pd.read_pickle("data.pkl.gz")
In [431]: rt
Out[431]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
In [432]: df["A"].to_pickle("s1.pkl.bz2")
In [433]: rt = pd.read_pickle("s1.pkl.bz2")
In [434]: rt
Out[434]:
0 -0.828876
1 -0.110383
2 2.357598
3 -1.620073
4 0.440903
...
995 -1.177365
996 1.236988
997 0.743946
998 -0.533097
999 -0.140850
Name: A, Length: 1000, dtype: float64
Passing options to the compression protocol in order to speed up compression:
In [435]: df.to_pickle("data.pkl.gz", compression={"method": "gzip", "compresslevel": 1})
msgpack#
pandas support for msgpack has been removed in version 1.0.0. It is
recommended to use pickle instead.
Alternatively, you can also use the Arrow IPC serialization format for on-the-wire
transmission of pandas objects. For documentation on pyarrow, see
here.
HDF5 (PyTables)#
HDFStore is a dict-like object which reads and writes pandas using
the high performance HDF5 format using the excellent PyTables library. See the cookbook
for some advanced strategies.
Warning
pandas uses PyTables for reading and writing HDF5 files, which allows
serializing object-dtype data with pickle. Loading pickled data received from
untrusted sources can be unsafe.
See: https://docs.python.org/3/library/pickle.html for more.
In [436]: store = pd.HDFStore("store.h5")
In [437]: print(store)
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
Objects can be written to the file just like adding key-value pairs to a
dict:
In [438]: index = pd.date_range("1/1/2000", periods=8)
In [439]: s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])
In [440]: df = pd.DataFrame(np.random.randn(8, 3), index=index, columns=["A", "B", "C"])
# store.put('s', s) is an equivalent method
In [441]: store["s"] = s
In [442]: store["df"] = df
In [443]: store
Out[443]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In a current or later Python session, you can retrieve stored objects:
# store.get('df') is an equivalent method
In [444]: store["df"]
Out[444]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
# dotted (attribute) access provides get as well
In [445]: store.df
Out[445]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Deletion of the object specified by the key:
# store.remove('df') is an equivalent method
In [446]: del store["df"]
In [447]: store
Out[447]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
Closing a Store and using a context manager:
In [448]: store.close()
In [449]: store
Out[449]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In [450]: store.is_open
Out[450]: False
# Working with, and automatically closing the store using a context manager
In [451]: with pd.HDFStore("store.h5") as store:
.....: store.keys()
.....:
Read/write API#
HDFStore supports a top-level API using read_hdf for reading and to_hdf for writing,
similar to how read_csv and to_csv work.
In [452]: df_tl = pd.DataFrame({"A": list(range(5)), "B": list(range(5))})
In [453]: df_tl.to_hdf("store_tl.h5", "table", append=True)
In [454]: pd.read_hdf("store_tl.h5", "table", where=["index>2"])
Out[454]:
A B
3 3 3
4 4 4
HDFStore will by default not drop rows that are all missing. This behavior can be changed by setting dropna=True.
In [455]: df_with_missing = pd.DataFrame(
.....: {
.....: "col1": [0, np.nan, 2],
.....: "col2": [1, np.nan, np.nan],
.....: }
.....: )
.....:
In [456]: df_with_missing
Out[456]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN
In [457]: df_with_missing.to_hdf("file.h5", "df_with_missing", format="table", mode="w")
In [458]: pd.read_hdf("file.h5", "df_with_missing")
Out[458]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN
In [459]: df_with_missing.to_hdf(
.....: "file.h5", "df_with_missing", format="table", mode="w", dropna=True
.....: )
.....:
In [460]: pd.read_hdf("file.h5", "df_with_missing")
Out[460]:
col1 col2
0 0.0 1.0
2 2.0 NaN
Fixed format#
The examples above show storing using put, which writes the HDF5 to PyTables in a fixed array format, called
the fixed format. These types of stores are not appendable once written (though you can simply
remove them and rewrite). Nor are they queryable; they must be
retrieved in their entirety. They also do not support dataframes with non-unique column names.
The fixed format stores offer very fast writing and slightly faster reading than table stores.
This format is specified by default when using put or to_hdf or by format='fixed' or format='f'.
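For example, a minimal sketch that requests the fixed format explicitly (equivalent to the default; the file name is illustrative):
# write a fixed format store; it must later be read back in its entirety
df.to_hdf("store_fixed.h5", "df_fixed", format="fixed")
pd.read_hdf("store_fixed.h5", "df_fixed")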
Warning
A fixed format will raise a TypeError if you try to retrieve using a where:
>>> pd.DataFrame(np.random.randn(10, 2)).to_hdf("test_fixed.h5", "df")
>>> pd.read_hdf("test_fixed.h5", "df", where="index>5")
TypeError: cannot pass a where specification when reading a fixed format.
this store must be selected in its entirety
Table format#
HDFStore supports another PyTables format on disk, the table
format. Conceptually a table is shaped very much like a DataFrame,
with rows and columns. A table may be appended to in the same or
other sessions. In addition, delete and query type operations are
supported. This format is specified by format='table' or format='t'
to append or put or to_hdf.
This format can also be set as an option, pd.set_option('io.hdf.default_format', 'table'), to
make put/append/to_hdf store in the table format by default.
In [461]: store = pd.HDFStore("store.h5")
In [462]: df1 = df[0:4]
In [463]: df2 = df[4:]
# append data (creates a table automatically)
In [464]: store.append("df", df1)
In [465]: store.append("df", df2)
In [466]: store
Out[466]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
# select the entire object
In [467]: store.select("df")
Out[467]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
# the type of stored data
In [468]: store.root.df._v_attrs.pandas_type
Out[468]: 'frame_table'
Note
You can also create a table by passing format='table' or format='t' to a put operation.
Hierarchical keys#
Keys to a store can be specified as a string. These can be in a
hierarchical path-name like format (e.g. foo/bar/bah), which will
generate a hierarchy of sub-stores (or Groups in PyTables
parlance). Keys can be specified without the leading ‘/’ and are always
absolute (e.g. ‘foo’ refers to ‘/foo’). Removal operations can remove
everything in the sub-store and below, so be careful.
In [469]: store.put("foo/bar/bah", df)
In [470]: store.append("food/orange", df)
In [471]: store.append("food/apple", df)
In [472]: store
Out[472]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
# a list of keys are returned
In [473]: store.keys()
Out[473]: ['/df', '/food/apple', '/food/orange', '/foo/bar/bah']
# remove all nodes under this level
In [474]: store.remove("food")
In [475]: store
Out[475]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
You can walk through the group hierarchy using the walk method which
will yield a tuple for each group key along with the relative keys of its contents.
In [476]: for (path, subgroups, subkeys) in store.walk():
.....: for subgroup in subgroups:
.....: print("GROUP: {}/{}".format(path, subgroup))
.....: for subkey in subkeys:
.....: key = "/".join([path, subkey])
.....: print("KEY: {}".format(key))
.....: print(store.get(key))
.....:
GROUP: /foo
KEY: /df
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
GROUP: /foo/bar
KEY: /foo/bar/bah
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Warning
Hierarchical keys cannot be retrieved as dotted (attribute) access as described above for items stored under the root node.
In [8]: store.foo.bar.bah
AttributeError: 'HDFStore' object has no attribute 'foo'
# you can directly access the actual PyTables node by using the root node
In [9]: store.root.foo.bar.bah
Out[9]:
/foo/bar/bah (Group) ''
children := ['block0_items' (Array), 'block0_values' (Array), 'axis0' (Array), 'axis1' (Array)]
Instead, use explicit string based keys:
In [477]: store["foo/bar/bah"]
Out[477]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Storing types#
Storing mixed types in a table#
Storing mixed-dtype data is supported. Strings are stored with a
fixed width using the maximum size of the appended column. Subsequent attempts
at appending longer strings will raise a ValueError.
Passing min_itemsize={`values`: size} as a parameter to append
will set a larger minimum for the string columns. Storing floats,
strings, ints, bools, and datetime64 is currently supported. For string
columns, passing nan_rep = 'nan' to append will change the default
nan representation on disk (which converts to/from np.nan); this
defaults to nan.
In [478]: df_mixed = pd.DataFrame(
.....: {
.....: "A": np.random.randn(8),
.....: "B": np.random.randn(8),
.....: "C": np.array(np.random.randn(8), dtype="float32"),
.....: "string": "string",
.....: "int": 1,
.....: "bool": True,
.....: "datetime64": pd.Timestamp("20010102"),
.....: },
.....: index=list(range(8)),
.....: )
.....:
In [479]: df_mixed.loc[df_mixed.index[3:5], ["A", "B", "string", "datetime64"]] = np.nan
In [480]: store.append("df_mixed", df_mixed, min_itemsize={"values": 50})
In [481]: df_mixed1 = store.select("df_mixed")
In [482]: df_mixed1
Out[482]:
A B C string int bool datetime64
0 1.778161 -0.898283 -0.263043 string 1 True 2001-01-02
1 -0.913867 -0.218499 -0.639244 string 1 True 2001-01-02
2 -0.030004 1.408028 -0.866305 string 1 True 2001-01-02
3 NaN NaN -0.225250 NaN 1 True NaT
4 NaN NaN -0.890978 NaN 1 True NaT
5 0.081323 0.520995 -0.553839 string 1 True 2001-01-02
6 -0.268494 0.620028 -2.762875 string 1 True 2001-01-02
7 0.168016 0.159416 -1.244763 string 1 True 2001-01-02
In [483]: df_mixed1.dtypes.value_counts()
Out[483]:
float64 2
float32 1
object 1
int64 1
bool 1
datetime64[ns] 1
dtype: int64
# we have provided a minimum string column size
In [484]: store.root.df_mixed.table
Out[484]:
/df_mixed/table (Table(8,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(2,), dflt=0.0, pos=1),
"values_block_1": Float32Col(shape=(1,), dflt=0.0, pos=2),
"values_block_2": StringCol(itemsize=50, shape=(1,), dflt=b'', pos=3),
"values_block_3": Int64Col(shape=(1,), dflt=0, pos=4),
"values_block_4": BoolCol(shape=(1,), dflt=False, pos=5),
"values_block_5": Int64Col(shape=(1,), dflt=0, pos=6)}
byteorder := 'little'
chunkshape := (689,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False}
Storing MultiIndex DataFrames#
Storing MultiIndex DataFrames as tables is very similar to
storing/selecting from homogeneous index DataFrames.
In [485]: index = pd.MultiIndex(
.....: levels=[["foo", "bar", "baz", "qux"], ["one", "two", "three"]],
.....: codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
.....: names=["foo", "bar"],
.....: )
.....:
In [486]: df_mi = pd.DataFrame(np.random.randn(10, 3), index=index, columns=["A", "B", "C"])
In [487]: df_mi
Out[487]:
A B C
foo bar
foo one -1.280289 0.692545 -0.536722
two 1.005707 0.296917 0.139796
three -1.083889 0.811865 1.648435
bar one -0.164377 -0.402227 1.618922
two -1.424723 -0.023232 0.948196
baz two 0.183573 0.145277 0.308146
three -1.043530 -0.708145 1.430905
qux one -0.850136 0.813949 1.508891
two -1.556154 0.187597 1.176488
three -1.246093 -0.002726 -0.444249
In [488]: store.append("df_mi", df_mi)
In [489]: store.select("df_mi")
Out[489]:
A B C
foo bar
foo one -1.280289 0.692545 -0.536722
two 1.005707 0.296917 0.139796
three -1.083889 0.811865 1.648435
bar one -0.164377 -0.402227 1.618922
two -1.424723 -0.023232 0.948196
baz two 0.183573 0.145277 0.308146
three -1.043530 -0.708145 1.430905
qux one -0.850136 0.813949 1.508891
two -1.556154 0.187597 1.176488
three -1.246093 -0.002726 -0.444249
# the levels are automatically included as data columns
In [490]: store.select("df_mi", "foo=bar")
Out[490]:
A B C
foo bar
bar one -0.164377 -0.402227 1.618922
two -1.424723 -0.023232 0.948196
Note
The index keyword is reserved and cannot be used as a level name.
Querying#
Querying a table#
select and delete operations have an optional criterion that can
be specified to select/delete only a subset of the data. This allows one
to have a very large on-disk table and retrieve only a portion of the
data.
A query is specified using the Term class under the hood, as a boolean expression.
index and columns are supported indexers of DataFrames.
if data_columns are specified, these can be used as additional indexers.
level name in a MultiIndex, with default name level_0, level_1, … if not provided.
Valid comparison operators are:
=, ==, !=, >, >=, <, <=
Valid boolean expressions are combined with:
| : or
& : and
( and ) : for grouping
These rules are similar to how boolean expressions are used in pandas for indexing.
Note
= will be automatically expanded to the comparison operator ==
~ is the not operator, but can only be used in very limited
circumstances
If a list/tuple of expressions is passed they will be combined via &
The following are valid expressions:
'index >= date'
"columns = ['A', 'D']"
"columns in ['A', 'D']"
'columns = A'
'columns == A'
"~(columns = ['A', 'B'])"
'index > df.index[3] & string = "bar"'
'(index > df.index[3] & index <= df.index[6]) | string = "bar"'
"ts >= Timestamp('2012-02-01')"
"major_axis>=20130101"
The indexers are on the left-hand side of the sub-expression:
columns, major_axis, ts
The right-hand side of the sub-expression (after a comparison operator) can be:
functions that will be evaluated, e.g. Timestamp('2012-02-01')
strings, e.g. "bar"
date-like, e.g. 20130101, or "20130101"
lists, e.g. "['A', 'B']"
variables that are defined in the local names space, e.g. date
Note
Passing a string to a query by interpolating it into the query
expression is not recommended. Simply assign the string of interest to a
variable and use that variable in an expression. For example, do this
string = "HolyMoly'"
store.select("df", "index == string")
instead of this
string = "HolyMoly'"
store.select('df', f'index == {string}')
The latter will not work and will raise a SyntaxError. Note that
there’s a single quote followed by a double quote in the string
variable.
If you must interpolate, use the '%r' format specifier
store.select("df", "index == %r" % string)
which will quote string.
Here are some examples:
In [491]: dfq = pd.DataFrame(
.....: np.random.randn(10, 4),
.....: columns=list("ABCD"),
.....: index=pd.date_range("20130101", periods=10),
.....: )
.....:
In [492]: store.append("dfq", dfq, format="table", data_columns=True)
Use boolean expressions, with in-line function evaluation.
In [493]: store.select("dfq", "index>pd.Timestamp('20130104') & columns=['A', 'B']")
Out[493]:
A B
2013-01-05 1.366810 1.073372
2013-01-06 2.119746 -2.628174
2013-01-07 0.337920 -0.634027
2013-01-08 1.053434 1.109090
2013-01-09 -0.772942 -0.269415
2013-01-10 0.048562 -0.285920
Use inline column reference.
In [494]: store.select("dfq", where="A>0 or C>0")
Out[494]:
A B C D
2013-01-01 0.856838 1.491776 0.001283 0.701816
2013-01-02 -1.097917 0.102588 0.661740 0.443531
2013-01-03 0.559313 -0.459055 -1.222598 -0.455304
2013-01-05 1.366810 1.073372 -0.994957 0.755314
2013-01-06 2.119746 -2.628174 -0.089460 -0.133636
2013-01-07 0.337920 -0.634027 0.421107 0.604303
2013-01-08 1.053434 1.109090 -0.367891 -0.846206
2013-01-10 0.048562 -0.285920 1.334100 0.194462
The columns keyword can be supplied to select a list of columns to be
returned, this is equivalent to passing a
'columns=list_of_columns_to_filter':
In [495]: store.select("df", "columns=['A', 'B']")
Out[495]:
A B
2000-01-01 -0.398501 -0.677311
2000-01-02 -1.167564 -0.593353
2000-01-03 -0.131959 0.089012
2000-01-04 0.169405 -1.358046
2000-01-05 0.492195 0.076693
2000-01-06 -0.285283 -1.210529
2000-01-07 0.941577 -0.342447
2000-01-08 0.052607 2.093214
start and stop parameters can be specified to limit the total search
space. These are in terms of the total number of rows in a table.
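For example, a minimal sketch restricting a select to the first five rows of the df table stored above (the numbers are illustrative):
# only rows 0 through 4 of the on-disk table are considered
store.select("df", "columns=['A', 'B']", start=0, stop=5)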
Note
select will raise a ValueError if the query expression has an unknown
variable reference. Usually this means that you are trying to select on a column
that is not a data_column.
select will raise a SyntaxError if the query expression is not valid.
Query timedelta64[ns]#
You can store and query using the timedelta64[ns] type. Terms can be
specified in the format: <float>(<unit>), where float may be signed (and fractional), and unit can be
D,s,ms,us,ns for the timedelta. Here’s an example:
In [496]: from datetime import timedelta
In [497]: dftd = pd.DataFrame(
.....: {
.....: "A": pd.Timestamp("20130101"),
.....: "B": [
.....: pd.Timestamp("20130101") + timedelta(days=i, seconds=10)
.....: for i in range(10)
.....: ],
.....: }
.....: )
.....:
In [498]: dftd["C"] = dftd["A"] - dftd["B"]
In [499]: dftd
Out[499]:
A B C
0 2013-01-01 2013-01-01 00:00:10 -1 days +23:59:50
1 2013-01-01 2013-01-02 00:00:10 -2 days +23:59:50
2 2013-01-01 2013-01-03 00:00:10 -3 days +23:59:50
3 2013-01-01 2013-01-04 00:00:10 -4 days +23:59:50
4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50
5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50
6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50
7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50
8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50
9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50
In [500]: store.append("dftd", dftd, data_columns=True)
In [501]: store.select("dftd", "C<'-3.5D'")
Out[501]:
A B C
4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50
5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50
6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50
7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50
8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50
9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50
Query MultiIndex#
Selecting from a MultiIndex can be achieved by using the name of the level.
In [502]: df_mi.index.names
Out[502]: FrozenList(['foo', 'bar'])
In [503]: store.select("df_mi", "foo=baz and bar=two")
Out[503]:
A B C
foo bar
baz two 0.183573 0.145277 0.308146
If the MultiIndex level names are None, the levels are automatically made available via
the level_n keyword with n the level of the MultiIndex you want to select from.
In [504]: index = pd.MultiIndex(
.....: levels=[["foo", "bar", "baz", "qux"], ["one", "two", "three"]],
.....: codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
.....: )
.....:
In [505]: df_mi_2 = pd.DataFrame(np.random.randn(10, 3), index=index, columns=["A", "B", "C"])
In [506]: df_mi_2
Out[506]:
A B C
foo one -0.646538 1.210676 -0.315409
two 1.528366 0.376542 0.174490
three 1.247943 -0.742283 0.710400
bar one 0.434128 -1.246384 1.139595
two 1.388668 -0.413554 -0.666287
baz two 0.010150 -0.163820 -0.115305
three 0.216467 0.633720 0.473945
qux one -0.155446 1.287082 0.320201
two -1.256989 0.874920 0.765944
three 0.025557 -0.729782 -0.127439
In [507]: store.append("df_mi_2", df_mi_2)
# the levels are automatically included as data columns with keyword level_n
In [508]: store.select("df_mi_2", "level_0=foo and level_1=two")
Out[508]:
A B C
foo two 1.528366 0.376542 0.17449
Indexing#
You can create/modify an index for a table with create_table_index
after data is already in the table (after an append/put
operation). Creating a table index is highly encouraged. This will
speed your queries a great deal when you use a select with the
indexed dimension as the where.
Note
Indexes are automagically created on the indexables
and any data columns you specify. This behavior can be turned off by passing
index=False to append.
# we have automagically already created an index (in the first section)
In [509]: i = store.root.df.table.cols.index.index
In [510]: i.optlevel, i.kind
Out[510]: (6, 'medium')
# change an index by passing new parameters
In [511]: store.create_table_index("df", optlevel=9, kind="full")
In [512]: i = store.root.df.table.cols.index.index
In [513]: i.optlevel, i.kind
Out[513]: (9, 'full')
Oftentimes when appending large amounts of data to a store, it is useful to turn off index creation for each append, then recreate at the end.
In [514]: df_1 = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
In [515]: df_2 = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
In [516]: st = pd.HDFStore("appends.h5", mode="w")
In [517]: st.append("df", df_1, data_columns=["B"], index=False)
In [518]: st.append("df", df_2, data_columns=["B"], index=False)
In [519]: st.get_storer("df").table
Out[519]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)
Then create the index when finished appending.
In [520]: st.create_table_index("df", columns=["B"], optlevel=9, kind="full")
In [521]: st.get_storer("df").table
Out[521]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)
autoindex := True
colindexes := {
"B": Index(9, fullshuffle, zlib(1)).is_csi=True}
In [522]: st.close()
See here for how to create a completely-sorted-index (CSI) on an existing store.
Query via data columns#
You can designate (and index) certain columns that you want to be able
to perform queries on (other than the indexable columns, which you can
always query). For instance, say you want to perform this common
operation, on-disk, and return just the frame that matches this
query. You can specify data_columns = True to force all columns to
be data_columns.
In [523]: df_dc = df.copy()
In [524]: df_dc["string"] = "foo"
In [525]: df_dc.loc[df_dc.index[4:6], "string"] = np.nan
In [526]: df_dc.loc[df_dc.index[7:9], "string"] = "bar"
In [527]: df_dc["string2"] = "cool"
In [528]: df_dc.loc[df_dc.index[1:3], ["B", "C"]] = 1.0
In [529]: df_dc
Out[529]:
A B C string string2
2000-01-01 -0.398501 -0.677311 -0.874991 foo cool
2000-01-02 -1.167564 1.000000 1.000000 foo cool
2000-01-03 -0.131959 1.000000 1.000000 foo cool
2000-01-04 0.169405 -1.358046 -0.105563 foo cool
2000-01-05 0.492195 0.076693 0.213685 NaN cool
2000-01-06 -0.285283 -1.210529 -1.408386 NaN cool
2000-01-07 0.941577 -0.342447 0.222031 foo cool
2000-01-08 0.052607 2.093214 1.064908 bar cool
# on-disk operations
In [530]: store.append("df_dc", df_dc, data_columns=["B", "C", "string", "string2"])
In [531]: store.select("df_dc", where="B > 0")
Out[531]:
A B C string string2
2000-01-02 -1.167564 1.000000 1.000000 foo cool
2000-01-03 -0.131959 1.000000 1.000000 foo cool
2000-01-05 0.492195 0.076693 0.213685 NaN cool
2000-01-08 0.052607 2.093214 1.064908 bar cool
# getting creative
In [532]: store.select("df_dc", "B > 0 & C > 0 & string == foo")
Out[532]:
A B C string string2
2000-01-02 -1.167564 1.0 1.0 foo cool
2000-01-03 -0.131959 1.0 1.0 foo cool
# this is in-memory version of this type of selection
In [533]: df_dc[(df_dc.B > 0) & (df_dc.C > 0) & (df_dc.string == "foo")]
Out[533]:
A B C string string2
2000-01-02 -1.167564 1.0 1.0 foo cool
2000-01-03 -0.131959 1.0 1.0 foo cool
# we have automagically created this index and the B/C/string/string2
# columns are stored separately as ``PyTables`` columns
In [534]: store.root.df_dc.table
Out[534]:
/df_dc/table (Table(8,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2),
"C": Float64Col(shape=(), dflt=0.0, pos=3),
"string": StringCol(itemsize=3, shape=(), dflt=b'', pos=4),
"string2": StringCol(itemsize=4, shape=(), dflt=b'', pos=5)}
byteorder := 'little'
chunkshape := (1680,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"B": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"C": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"string": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"string2": Index(6, mediumshuffle, zlib(1)).is_csi=False}
There is some performance degradation by making lots of columns into
data columns, so it is up to the user to designate these. In addition,
you cannot change data columns (nor indexables) after the first
append/put operation (Of course you can simply read in the data and
create a new table!).
Iterator#
You can pass iterator=True or chunksize=number_in_a_chunk
to select and select_as_multiple to return an iterator on the results.
The default is 50,000 rows returned in a chunk.
In [535]: for df in store.select("df", chunksize=3):
.....: print(df)
.....:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
A B C
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
A B C
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Note
You can also use the iterator with read_hdf which will open, then
automatically close the store when finished iterating.
for df in pd.read_hdf("store.h5", "df", chunksize=3):
print(df)
Note that the chunksize keyword applies to the source rows. So if you
are doing a query, then the chunksize will subdivide the total rows in the table
and the query will be applied to each chunk, returning an iterator on potentially unequal sized chunks.
Here is a recipe for generating a query and using it to create equal sized return
chunks.
In [536]: dfeq = pd.DataFrame({"number": np.arange(1, 11)})
In [537]: dfeq
Out[537]:
number
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 10
In [538]: store.append("dfeq", dfeq, data_columns=["number"])
In [539]: def chunks(l, n):
.....: return [l[i: i + n] for i in range(0, len(l), n)]
.....:
In [540]: evens = [2, 4, 6, 8, 10]
In [541]: coordinates = store.select_as_coordinates("dfeq", "number=evens")
In [542]: for c in chunks(coordinates, 2):
.....: print(store.select("dfeq", where=c))
.....:
number
1 2
3 4
number
5 6
7 8
number
9 10
Advanced queries#
Select a single column#
To retrieve a single indexable or data column, use the
method select_column. This will, for example, enable you to get the index
very quickly. These return a Series of the result, indexed by the row number.
These do not currently accept the where selector.
In [543]: store.select_column("df_dc", "index")
Out[543]:
0 2000-01-01
1 2000-01-02
2 2000-01-03
3 2000-01-04
4 2000-01-05
5 2000-01-06
6 2000-01-07
7 2000-01-08
Name: index, dtype: datetime64[ns]
In [544]: store.select_column("df_dc", "string")
Out[544]:
0 foo
1 foo
2 foo
3 foo
4 NaN
5 NaN
6 foo
7 bar
Name: string, dtype: object
Selecting coordinates#
Sometimes you want to get the coordinates (a.k.a the index locations) of your query. This returns an
Int64Index of the resulting locations. These coordinates can also be passed to subsequent
where operations.
In [545]: df_coord = pd.DataFrame(
.....: np.random.randn(1000, 2), index=pd.date_range("20000101", periods=1000)
.....: )
.....:
In [546]: store.append("df_coord", df_coord)
In [547]: c = store.select_as_coordinates("df_coord", "index > 20020101")
In [548]: c
Out[548]:
Int64Index([732, 733, 734, 735, 736, 737, 738, 739, 740, 741,
...
990, 991, 992, 993, 994, 995, 996, 997, 998, 999],
dtype='int64', length=268)
In [549]: store.select("df_coord", where=c)
Out[549]:
0 1
2002-01-02 0.009035 0.921784
2002-01-03 -1.476563 -1.376375
2002-01-04 1.266731 2.173681
2002-01-05 0.147621 0.616468
2002-01-06 0.008611 2.136001
... ... ...
2002-09-22 0.781169 -0.791687
2002-09-23 -0.764810 -2.000933
2002-09-24 -0.345662 0.393915
2002-09-25 -0.116661 0.834638
2002-09-26 -1.341780 0.686366
[268 rows x 2 columns]
Selecting using a where mask#
Sometimes your query can involve creating a list of rows to select. Usually this mask would
be a resulting index from an indexing operation. This example selects the rows of
a DatetimeIndex whose month is 5.
In [550]: df_mask = pd.DataFrame(
.....: np.random.randn(1000, 2), index=pd.date_range("20000101", periods=1000)
.....: )
.....:
In [551]: store.append("df_mask", df_mask)
In [552]: c = store.select_column("df_mask", "index")
In [553]: where = c[pd.DatetimeIndex(c).month == 5].index
In [554]: store.select("df_mask", where=where)
Out[554]:
0 1
2000-05-01 -0.386742 -0.977433
2000-05-02 -0.228819 0.471671
2000-05-03 0.337307 1.840494
2000-05-04 0.050249 0.307149
2000-05-05 -0.802947 -0.946730
... ... ...
2002-05-27 1.605281 1.741415
2002-05-28 -0.804450 -0.715040
2002-05-29 -0.874851 0.037178
2002-05-30 -0.161167 -1.294944
2002-05-31 -0.258463 -0.731969
[93 rows x 2 columns]
Storer object#
If you want to inspect the stored object, retrieve via
get_storer. You could use this programmatically to, say, get the number
of rows in an object.
In [555]: store.get_storer("df_dc").nrows
Out[555]: 8
Multiple table queries#
The methods append_to_multiple and
select_as_multiple can perform appending/selecting from
multiple tables at once. The idea is to have one table (call it the
selector table) that you index most/all of the columns on, and perform your
queries. The other table(s) are data tables with an index matching the
selector table’s index. You can then perform a very fast query
on the selector table, yet get lots of data back. This method is similar to
having a very wide table, but enables more efficient queries.
The append_to_multiple method splits a given single DataFrame
into multiple tables according to d, a dictionary that maps the
table names to a list of ‘columns’ you want in that table. If None
is used in place of a list, that table will have the remaining
unspecified columns of the given DataFrame. The argument selector
defines which table is the selector table (which you can make queries from).
The argument dropna will drop rows from the input DataFrame to ensure
tables are synchronized. This means that if a row for one of the tables
being written to is entirely np.NaN, that row will be dropped from all tables.
If dropna is False, THE USER IS RESPONSIBLE FOR SYNCHRONIZING THE TABLES.
Remember that entirely np.nan rows are not written to the HDFStore, so if
you choose to call dropna=False, some tables may have more rows than others,
and therefore select_as_multiple may not work or it may return unexpected
results.
In [556]: df_mt = pd.DataFrame(
.....: np.random.randn(8, 6),
.....: index=pd.date_range("1/1/2000", periods=8),
.....: columns=["A", "B", "C", "D", "E", "F"],
.....: )
.....:
In [557]: df_mt["foo"] = "bar"
In [558]: df_mt.loc[df_mt.index[1], ("A", "B")] = np.nan
# you can also create the tables individually
In [559]: store.append_to_multiple(
.....: {"df1_mt": ["A", "B"], "df2_mt": None}, df_mt, selector="df1_mt"
.....: )
.....:
In [560]: store
Out[560]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
# individual tables were created
In [561]: store.select("df1_mt")
Out[561]:
A B
2000-01-01 0.079529 -1.459471
2000-01-02 NaN NaN
2000-01-03 -0.423113 2.314361
2000-01-04 0.756744 -0.792372
2000-01-05 -0.184971 0.170852
2000-01-06 0.678830 0.633974
2000-01-07 0.034973 0.974369
2000-01-08 -2.110103 0.243062
In [562]: store.select("df2_mt")
Out[562]:
C D E F foo
2000-01-01 -0.596306 -0.910022 -1.057072 -0.864360 bar
2000-01-02 0.477849 0.283128 -2.045700 -0.338206 bar
2000-01-03 -0.033100 -0.965461 -0.001079 -0.351689 bar
2000-01-04 -0.513555 -1.484776 -0.796280 -0.182321 bar
2000-01-05 -0.872407 -1.751515 0.934334 0.938818 bar
2000-01-06 -1.398256 1.347142 -0.029520 0.082738 bar
2000-01-07 -0.755544 0.380786 -1.634116 1.293610 bar
2000-01-08 1.453064 0.500558 -0.574475 0.694324 bar
# as a multiple
In [563]: store.select_as_multiple(
.....: ["df1_mt", "df2_mt"],
.....: where=["A>0", "B>0"],
.....: selector="df1_mt",
.....: )
.....:
Out[563]:
A B C D E F foo
2000-01-06 0.678830 0.633974 -1.398256 1.347142 -0.029520 0.082738 bar
2000-01-07 0.034973 0.974369 -0.755544 0.380786 -1.634116 1.293610 bar
Delete from a table#
You can delete from a table selectively by specifying a where. In
deleting rows, it is important to understand that PyTables deletes
rows by erasing the rows, then moving the following data. Thus
deleting can potentially be a very expensive operation depending on the
orientation of your data. To get optimal performance, it’s
worthwhile to have the dimension you are deleting be the first of the
indexables.
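A minimal sketch of a selective delete with a where criterion, using the df_dc table stored earlier (the cutoff date is illustrative):
# remove only the rows of the on-disk table matching the criterion;
# the number of deleted rows is returned
store.remove("df_dc", "index > pd.Timestamp('2000-01-06')")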
Data is ordered (on the disk) in terms of the indexables. Here’s a
simple use case. You store panel-type data, with dates in the
major_axis and ids in the minor_axis. The data is then
interleaved like this:
date_1
id_1
id_2
.
id_n
date_2
id_1
.
id_n
It should be clear that a delete operation on the major_axis will be
fairly quick, as one chunk is removed, then the following data moved. On
the other hand a delete operation on the minor_axis will be very
expensive. In this case it would almost certainly be faster to rewrite
the table using a where that selects all but the missing data.
Warning
Please note that HDF5 DOES NOT RECLAIM SPACE in the h5 files
automatically. Thus, repeatedly deleting (or removing nodes) and adding
again, WILL TEND TO INCREASE THE FILE SIZE.
To repack and clean the file, use ptrepack.
Notes & caveats#
Compression#
PyTables allows the stored data to be compressed. This applies to
all kinds of stores, not just tables. Two parameters are used to
control compression: complevel and complib.
complevel specifies if and how hard data is to be compressed.
complevel=0 and complevel=None disable compression and
0<complevel<10 enables compression.
complib specifies which compression library to use.
If nothing is specified the default library zlib is used. A
compression library usually optimizes for either good compression rates
or speed and the results will depend on the type of data. Which type of
compression to choose depends on your specific needs and data. The list
of supported compression libraries:
zlib: The default compression library.
A classic in terms of compression, achieves good compression
rates but is somewhat slow.
lzo: Fast
compression and decompression.
bzip2: Good compression rates.
blosc: Fast compression and
decompression.
Support for alternative blosc compressors:
blosc:blosclz This is the
default compressor for blosc
blosc:lz4:
A compact, very popular and fast compressor.
blosc:lz4hc:
A tweaked version of LZ4, produces better
compression ratios at the expense of speed.
blosc:snappy:
A popular compressor used in many places.
blosc:zlib: A classic;
somewhat slower than the previous ones, but
achieving better compression ratios.
blosc:zstd: An
extremely well balanced codec; it provides the best
compression ratios among the others above, and at
reasonably fast speed.
If complib is defined as something other than the listed libraries a
ValueError exception is issued.
Note
If the library specified with the complib option is missing on your platform,
compression defaults to zlib without further ado.
Enable compression for all objects within the file:
store_compressed = pd.HDFStore(
"store_compressed.h5", complevel=9, complib="blosc:blosclz"
)
Or on-the-fly compression (this only applies to tables) in stores where compression is not enabled:
store.append("df", df, complib="zlib", complevel=5)
ptrepack#
PyTables offers better write performance when tables are compressed after
they are written, as opposed to turning on compression at the very
beginning. You can use the supplied PyTables utility
ptrepack. In addition, ptrepack can change compression levels
after the fact.
ptrepack --chunkshape=auto --propindexes --complevel=9 --complib=blosc in.h5 out.h5
Furthermore ptrepack in.h5 out.h5 will repack the file to allow
you to reuse previously deleted space. Alternatively, one can simply
remove the file and write again, or use the copy method.
Caveats#
Warning
HDFStore is not threadsafe for writing. The underlying
PyTables only supports concurrent reads (via threading or
processes). If you need reading and writing at the same time, you
need to serialize these operations in a single thread in a single
process. You will corrupt your data otherwise. See (GH2397) for more information.
If you use locks to manage write access between multiple processes, you
may want to use fsync() before releasing write locks. For
convenience you can use store.flush(fsync=True) to do this for you.
Once a table is created, its columns (DataFrame)
are fixed; only exactly the same columns can be appended.
Be aware that timezones (e.g., pytz.timezone('US/Eastern'))
are not necessarily equal across timezone versions. So if data is
localized to a specific timezone in the HDFStore using one version
of a timezone library and that data is updated with another version, the data
will be converted to UTC since these timezones are not considered
equal. Either use the same version of timezone library or use tz_convert with
the updated timezone definition.
Warning
PyTables will show a NaturalNameWarning if a column name
cannot be used as an attribute selector.
Natural identifiers contain only letters, numbers, and underscores,
and may not begin with a number.
Other identifiers cannot be used in a where clause
and are generally a bad idea.
DataTypes#
HDFStore will map an object dtype to the PyTables underlying
dtype. This means the following types are known to work:
Type                                                  Represents missing values
floating : float64, float32, float16                  np.nan
integer : int64, int32, int8, uint64, uint32, uint8
boolean
datetime64[ns]                                        NaT
timedelta64[ns]                                       NaT
categorical : see the section below
object : strings                                      np.nan
unicode columns are not supported, and WILL FAIL.
Categorical data#
You can write data that contains category dtypes to a HDFStore.
Queries work the same as if it was an object array. However, the category dtyped data is
stored in a more efficient manner.
In [564]: dfcat = pd.DataFrame(
.....: {"A": pd.Series(list("aabbcdba")).astype("category"), "B": np.random.randn(8)}
.....: )
.....:
In [565]: dfcat
Out[565]:
A B
0 a -1.608059
1 a 0.851060
2 b -0.736931
3 b 0.003538
4 c -1.422611
5 d 2.060901
6 b 0.993899
7 a -1.371768
In [566]: dfcat.dtypes
Out[566]:
A category
B float64
dtype: object
In [567]: cstore = pd.HDFStore("cats.h5", mode="w")
In [568]: cstore.append("dfcat", dfcat, format="table", data_columns=["A"])
In [569]: result = cstore.select("dfcat", where="A in ['b', 'c']")
In [570]: result
Out[570]:
A B
2 b -0.736931
3 b 0.003538
4 c -1.422611
6 b 0.993899
In [571]: result.dtypes
Out[571]:
A category
B float64
dtype: object
String columns#
min_itemsize
The underlying implementation of HDFStore uses a fixed column width (itemsize) for string columns.
A string column itemsize is calculated as the maximum of the
length of data (for that column) that is passed to the HDFStore, in the first append. Subsequent appends
may introduce a string for a column larger than the column can hold, in which case an Exception will be raised (otherwise you
could have a silent truncation of these columns, leading to loss of information). In the future we may relax this and
allow a user-specified truncation to occur.
Pass min_itemsize on the first table creation to a-priori specify the minimum length of a particular string column.
min_itemsize can be an integer, or a dict mapping a column name to an integer. You can pass values as a key to
allow all indexables or data_columns to have this min_itemsize.
Passing a min_itemsize dict will cause all passed columns to be created as data_columns automatically.
Note
If you are not passing any data_columns, then the min_itemsize will be the maximum of the length of any string passed.
In [572]: dfs = pd.DataFrame({"A": "foo", "B": "bar"}, index=list(range(5)))
In [573]: dfs
Out[573]:
A B
0 foo bar
1 foo bar
2 foo bar
3 foo bar
4 foo bar
# A and B have a size of 30
In [574]: store.append("dfs", dfs, min_itemsize=30)
In [575]: store.get_storer("dfs").table
Out[575]:
/dfs/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=30, shape=(2,), dflt=b'', pos=1)}
byteorder := 'little'
chunkshape := (963,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False}
# A is created as a data_column with a size of 30
# B's size is calculated
In [576]: store.append("dfs2", dfs, min_itemsize={"A": 30})
In [577]: store.get_storer("dfs2").table
Out[577]:
/dfs2/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=3, shape=(1,), dflt=b'', pos=1),
"A": StringCol(itemsize=30, shape=(), dflt=b'', pos=2)}
byteorder := 'little'
chunkshape := (1598,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"A": Index(6, mediumshuffle, zlib(1)).is_csi=False}
nan_rep
String columns will serialize a np.nan (a missing value) with the nan_rep string representation. This defaults to the string value nan.
You could inadvertently turn an actual nan value into a missing value.
In [578]: dfss = pd.DataFrame({"A": ["foo", "bar", "nan"]})
In [579]: dfss
Out[579]:
A
0 foo
1 bar
2 nan
In [580]: store.append("dfss", dfss)
In [581]: store.select("dfss")
Out[581]:
A
0 foo
1 bar
2 NaN
# here you need to specify a different nan rep
In [582]: store.append("dfss2", dfss, nan_rep="_nan_")
In [583]: store.select("dfss2")
Out[583]:
A
0 foo
1 bar
2 nan
External compatibility#
HDFStore writes table format objects in specific formats suitable for
producing loss-less round trips to pandas objects. For external
compatibility, HDFStore can read native PyTables format
tables.
It is possible to write an HDFStore object that can easily be imported into R using the
rhdf5 library (Package website). Create a table format store like this:
In [584]: df_for_r = pd.DataFrame(
.....: {
.....: "first": np.random.rand(100),
.....: "second": np.random.rand(100),
.....: "class": np.random.randint(0, 2, (100,)),
.....: },
.....: index=range(100),
.....: )
.....:
In [585]: df_for_r.head()
Out[585]:
first second class
0 0.013480 0.504941 0
1 0.690984 0.898188 1
2 0.510113 0.618748 1
3 0.357698 0.004972 0
4 0.451658 0.012065 1
In [586]: store_export = pd.HDFStore("export.h5")
In [587]: store_export.append("df_for_r", df_for_r, data_columns=df_dc.columns)
In [588]: store_export
Out[588]:
<class 'pandas.io.pytables.HDFStore'>
File path: export.h5
In R this file can be read into a data.frame object using the rhdf5
library. The following example function reads the corresponding column names
and data values from the values nodes and assembles them into a data.frame:
# Load values and column names for all datasets from corresponding nodes and
# insert them into one data.frame object.
library(rhdf5)
loadhdf5data <- function(h5File) {
listing <- h5ls(h5File)
# Find all data nodes, values are stored in *_values and corresponding column
# titles in *_items
data_nodes <- grep("_values", listing$name)
name_nodes <- grep("_items", listing$name)
data_paths = paste(listing$group[data_nodes], listing$name[data_nodes], sep = "/")
name_paths = paste(listing$group[name_nodes], listing$name[name_nodes], sep = "/")
columns = list()
for (idx in seq(data_paths)) {
# NOTE: matrices returned by h5read have to be transposed to obtain
# required Fortran order!
data <- data.frame(t(h5read(h5File, data_paths[idx])))
names <- t(h5read(h5File, name_paths[idx]))
entry <- data.frame(data)
colnames(entry) <- names
columns <- append(columns, entry)
}
data <- data.frame(columns)
return(data)
}
Now you can import the DataFrame into R:
> data = loadhdf5data("transfer.hdf5")
> head(data)
first second class
1 0.4170220047 0.3266449 0
2 0.7203244934 0.5270581 0
3 0.0001143748 0.8859421 1
4 0.3023325726 0.3572698 1
5 0.1467558908 0.9085352 1
6 0.0923385948 0.6233601 1
Note
The R function lists the entire HDF5 file’s contents and assembles the
data.frame object from all matching nodes, so use this only as a
starting point if you have stored multiple DataFrame objects to a
single HDF5 file.
Performance#
The tables format comes with a writing performance penalty as compared to
fixed stores. The benefit is the ability to append/delete and
query (potentially very large amounts of data). Write times are
generally longer as compared with regular stores. Query times can
be quite fast, especially on an indexed axis.
You can pass chunksize=<int> to append, specifying the
write chunksize (default is 50000). This will significantly lower
your memory usage on writing.
You can pass expectedrows=<int> to the first append,
to set the TOTAL number of rows that PyTables will expect.
This will optimize read/write performance (see the example after this list).
Duplicate rows can be written to tables, but are filtered out in
selection (with the last items being selected; thus a table is
unique on major, minor pairs)
A PerformanceWarning will be raised if you are attempting to
store types that will be pickled by PyTables (rather than stored as
endemic types). See
Here
for more information and some solutions.
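A minimal sketch of the chunksize and expectedrows options from the list above (the key, DataFrame, and row counts are illustrative):
# write in chunks of 10,000 rows and tell PyTables roughly how many rows
# the table will eventually hold so it can size its chunks sensibly
store.append("df_big", df_big, chunksize=10000, expectedrows=1000000)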
Feather#
Feather provides binary columnar serialization for data frames. It is designed to make reading and writing data
frames efficient, and to make sharing data across data analysis languages easy.
Feather is designed to faithfully serialize and de-serialize DataFrames, supporting all of the pandas
dtypes, including extension dtypes such as categorical and datetime with tz.
Several caveats:
The format will NOT write an Index, or MultiIndex for the
DataFrame and will raise an error if a non-default one is provided. You
can .reset_index() to store the index or .reset_index(drop=True) to
ignore it (see the example after this list).
Duplicate column names and non-string column names are not supported
Actual Python objects in object dtype columns are not supported. These will
raise a helpful error message on an attempt at serialization.
See the Full Documentation.
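A minimal sketch of the index caveat above, moving a custom index into a column before writing and restoring it after reading (the names are illustrative):
df_indexed = pd.DataFrame({"a": [1, 2, 3]}, index=pd.Index(["x", "y", "z"], name="key"))
# feather will not store a non-default index directly, so make it a regular column first
df_indexed.reset_index().to_feather("example_index.feather")
# read it back and restore the original index
df_roundtrip = pd.read_feather("example_index.feather").set_index("key")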
In [589]: df = pd.DataFrame(
.....: {
.....: "a": list("abc"),
.....: "b": list(range(1, 4)),
.....: "c": np.arange(3, 6).astype("u1"),
.....: "d": np.arange(4.0, 7.0, dtype="float64"),
.....: "e": [True, False, True],
.....: "f": pd.Categorical(list("abc")),
.....: "g": pd.date_range("20130101", periods=3),
.....: "h": pd.date_range("20130101", periods=3, tz="US/Eastern"),
.....: "i": pd.date_range("20130101", periods=3, freq="ns"),
.....: }
.....: )
.....:
In [590]: df
Out[590]:
a b c ... g h i
0 a 1 3 ... 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00.000000000
1 b 2 4 ... 2013-01-02 2013-01-02 00:00:00-05:00 2013-01-01 00:00:00.000000001
2 c 3 5 ... 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-01 00:00:00.000000002
[3 rows x 9 columns]
In [591]: df.dtypes
Out[591]:
a object
b int64
c uint8
d float64
e bool
f category
g datetime64[ns]
h datetime64[ns, US/Eastern]
i datetime64[ns]
dtype: object
Write to a feather file.
In [592]: df.to_feather("example.feather")
Read from a feather file.
In [593]: result = pd.read_feather("example.feather")
In [594]: result
Out[594]:
a b c ... g h i
0 a 1 3 ... 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00.000000000
1 b 2 4 ... 2013-01-02 2013-01-02 00:00:00-05:00 2013-01-01 00:00:00.000000001
2 c 3 5 ... 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-01 00:00:00.000000002
[3 rows x 9 columns]
# we preserve dtypes
In [595]: result.dtypes
Out[595]:
a object
b int64
c uint8
d float64
e bool
f category
g datetime64[ns]
h datetime64[ns, US/Eastern]
i datetime64[ns]
dtype: object
Parquet#
Apache Parquet provides a partitioned binary columnar serialization for data frames. It is designed to
make reading and writing data frames efficient, and to make sharing data across data analysis
languages easy. Parquet can use a variety of compression techniques to shrink the file size as much as possible
while still maintaining good read performance.
Parquet is designed to faithfully serialize and de-serialize DataFrames, supporting all of the pandas
dtypes, including extension dtypes such as datetime with tz.
Several caveats.
Duplicate column names and non-string column names are not supported.
The pyarrow engine always writes the index to the output, but fastparquet only writes non-default
indexes. This extra column can cause problems for non-pandas consumers that are not expecting it. You can
force including or omitting indexes with the index argument, regardless of the underlying engine.
Index level names, if specified, must be strings.
In the pyarrow engine, categorical dtypes for non-string types can be serialized to parquet, but will de-serialize as their primitive dtype.
The pyarrow engine preserves the ordered flag of categorical dtypes with string types. fastparquet does not preserve the ordered flag.
Non supported types include Interval and actual Python object types. These will raise a helpful error message
on an attempt at serialization. Period type is supported with pyarrow >= 0.16.0.
The pyarrow engine preserves extension data types such as the nullable integer and string data
type (requiring pyarrow >= 0.16.0, and requiring the extension type to implement the needed protocols,
see the extension types documentation).
You can specify an engine to direct the serialization. This can be one of pyarrow, fastparquet, or auto.
If the engine is NOT specified, then the pd.options.io.parquet.engine option is checked; if this is also auto,
then pyarrow is tried, falling back to fastparquet.
See the documentation for pyarrow and fastparquet.
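As a minimal sketch of the engine option mentioned above (assuming pyarrow is installed; the file name is illustrative):
# set the global default once instead of passing engine= on every call
pd.set_option("io.parquet.engine", "pyarrow")
df.to_parquet("example_default.parquet")  # now uses pyarrow via the option above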
Note
These engines are very similar and should read/write nearly identical parquet format files.
pyarrow>=8.0.0 supports timedelta data, fastparquet>=0.1.4 supports timezone aware datetimes.
These libraries differ in their underlying dependencies (fastparquet uses numba, while pyarrow uses a C library).
In [596]: df = pd.DataFrame(
.....: {
.....: "a": list("abc"),
.....: "b": list(range(1, 4)),
.....: "c": np.arange(3, 6).astype("u1"),
.....: "d": np.arange(4.0, 7.0, dtype="float64"),
.....: "e": [True, False, True],
.....: "f": pd.date_range("20130101", periods=3),
.....: "g": pd.date_range("20130101", periods=3, tz="US/Eastern"),
.....: "h": pd.Categorical(list("abc")),
.....: "i": pd.Categorical(list("abc"), ordered=True),
.....: }
.....: )
.....:
In [597]: df
Out[597]:
a b c d e f g h i
0 a 1 3 4.0 True 2013-01-01 2013-01-01 00:00:00-05:00 a a
1 b 2 4 5.0 False 2013-01-02 2013-01-02 00:00:00-05:00 b b
2 c 3 5 6.0 True 2013-01-03 2013-01-03 00:00:00-05:00 c c
In [598]: df.dtypes
Out[598]:
a object
b int64
c uint8
d float64
e bool
f datetime64[ns]
g datetime64[ns, US/Eastern]
h category
i category
dtype: object
Write to a parquet file.
In [599]: df.to_parquet("example_pa.parquet", engine="pyarrow")
In [600]: df.to_parquet("example_fp.parquet", engine="fastparquet")
Read from a parquet file.
In [601]: result = pd.read_parquet("example_fp.parquet", engine="fastparquet")
In [602]: result = pd.read_parquet("example_pa.parquet", engine="pyarrow")
In [603]: result.dtypes
Out[603]:
a object
b int64
c uint8
d float64
e bool
f datetime64[ns]
g datetime64[ns, US/Eastern]
h category
i category
dtype: object
Read only certain columns of a parquet file.
In [604]: result = pd.read_parquet(
.....: "example_fp.parquet",
.....: engine="fastparquet",
.....: columns=["a", "b"],
.....: )
.....:
In [605]: result = pd.read_parquet(
.....: "example_pa.parquet",
.....: engine="pyarrow",
.....: columns=["a", "b"],
.....: )
.....:
In [606]: result.dtypes
Out[606]:
a object
b int64
dtype: object
Handling indexes#
Serializing a DataFrame to parquet may include the implicit index as one or
more columns in the output file. Thus, this code:
In [607]: df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
In [608]: df.to_parquet("test.parquet", engine="pyarrow")
creates a parquet file with three columns if you use pyarrow for serialization:
a, b, and __index_level_0__. If you’re using fastparquet, the
index may or may not
be written to the file.
This unexpected extra column causes some databases like Amazon Redshift to reject
the file, because that column doesn’t exist in the target table.
If you want to omit a dataframe’s indexes when writing, pass index=False to
to_parquet():
In [609]: df.to_parquet("test.parquet", index=False)
This creates a parquet file with just the two expected columns, a and b.
If your DataFrame has a custom index, you won’t get it back when you load
this file into a DataFrame.
Passing index=True will always write the index, even if that’s not the
underlying engine’s default behavior.
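For example, a small sketch (hypothetical file name) forcing fastparquet to write the default RangeIndex it would normally omit:
# index=True overrides fastparquet's default of dropping a default index
df.to_parquet("test_with_index.parquet", engine="fastparquet", index=True)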
Partitioning Parquet files#
Parquet supports partitioning of data based on the values of one or more columns.
In [610]: df = pd.DataFrame({"a": [0, 0, 1, 1], "b": [0, 1, 0, 1]})
In [611]: df.to_parquet(path="test", engine="pyarrow", partition_cols=["a"], compression=None)
The path specifies the parent directory to which data will be saved.
The partition_cols are the column names by which the dataset will be partitioned.
Columns are partitioned in the order they are given. The partition splits are
determined by the unique values in the partition columns.
The above example creates a partitioned dataset that may look like:
test
├── a=0
│ ├── 0bac803e32dc42ae83fddfd029cbdebc.parquet
│ └── ...
└── a=1
├── e6ab24a4f45147b49b54a662f0c412a3.parquet
└── ...
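Reading the parent directory back reassembles the partitions. As a sketch, assuming the pyarrow engine: the partition column a typically comes back as a categorical (dictionary-encoded) column, and row order is not guaranteed.
# Point read_parquet at the parent directory of the partitioned dataset
result = pd.read_parquet("test", engine="pyarrow")
result.dtypes  # "a" usually comes back as a category column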
ORC#
New in version 1.0.0.
Similar to the parquet format, the ORC Format is a binary columnar serialization
for data frames. It is designed to make reading data frames efficient. pandas provides both the reader and the writer for the
ORC format, read_orc() and to_orc(). This requires the pyarrow library.
Warning
It is highly recommended to install pyarrow using conda, due to some issues caused by pyarrow.
to_orc() requires pyarrow>=7.0.0.
read_orc() and to_orc() are not supported on Windows yet; you can find valid environments on install optional dependencies.
For supported dtypes please refer to supported ORC features in Arrow.
Currently timezones in datetime columns are not preserved when a dataframe is converted into ORC files.
In [612]: df = pd.DataFrame(
.....: {
.....: "a": list("abc"),
.....: "b": list(range(1, 4)),
.....: "c": np.arange(4.0, 7.0, dtype="float64"),
.....: "d": [True, False, True],
.....: "e": pd.date_range("20130101", periods=3),
.....: }
.....: )
.....:
In [613]: df
Out[613]:
a b c d e
0 a 1 4.0 True 2013-01-01
1 b 2 5.0 False 2013-01-02
2 c 3 6.0 True 2013-01-03
In [614]: df.dtypes
Out[614]:
a object
b int64
c float64
d bool
e datetime64[ns]
dtype: object
Write to an orc file.
In [615]: df.to_orc("example_pa.orc", engine="pyarrow")
Read from an orc file.
In [616]: result = pd.read_orc("example_pa.orc")
In [617]: result.dtypes
Out[617]:
a object
b int64
c float64
d bool
e datetime64[ns]
dtype: object
Read only certain columns of an orc file.
In [618]: result = pd.read_orc(
.....: "example_pa.orc",
.....: columns=["a", "b"],
.....: )
.....:
In [619]: result.dtypes
Out[619]:
a object
b int64
dtype: object
SQL queries#
The pandas.io.sql module provides a collection of query wrappers to both
facilitate data retrieval and to reduce dependency on DB-specific API. Database abstraction
is provided by SQLAlchemy if installed. In addition you will need a driver library for
your database. Examples of such drivers are psycopg2
for PostgreSQL or pymysql for MySQL.
For SQLite this is
included in Python’s standard library by default.
You can find an overview of supported drivers for each SQL dialect in the
SQLAlchemy docs.
If SQLAlchemy is not installed, a fallback is only provided for sqlite (and
for mysql for backwards compatibility, but this is deprecated and will be
removed in a future version).
This mode requires a Python database adapter which respects the Python
DB-API.
See also some cookbook examples for some advanced strategies.
The key functions are:
read_sql_table(table_name, con[, schema, ...])
Read SQL database table into a DataFrame.
read_sql_query(sql, con[, index_col, ...])
Read SQL query into a DataFrame.
read_sql(sql, con[, index_col, ...])
Read SQL query or database table into a DataFrame.
DataFrame.to_sql(name, con[, schema, ...])
Write records stored in a DataFrame to a SQL database.
Note
The function read_sql() is a convenience wrapper around
read_sql_table() and read_sql_query() (and for
backward compatibility) and will delegate to the specific function depending on
the provided input (database table name or sql query).
Table names do not need to be quoted if they have special characters.
In the following example, we use the SQLite SQL database
engine. You can use a temporary SQLite database where data are stored in
“memory”.
To connect with SQLAlchemy you use the create_engine() function to create an engine
object from database URI. You only need to create the engine once per database you are
connecting to.
For more information on create_engine() and the URI formatting, see the examples
below and the SQLAlchemy documentation
In [620]: from sqlalchemy import create_engine
# Create your engine.
In [621]: engine = create_engine("sqlite:///:memory:")
If you want to manage your own connections you can pass one of those instead. The example below opens a
connection to the database using a Python context manager that automatically closes the connection after
the block has completed.
See the SQLAlchemy docs
for an explanation of how the database connection is handled.
with engine.connect() as conn, conn.begin():
data = pd.read_sql_table("data", conn)
Warning
When you open a connection to a database you are also responsible for closing it.
Side effects of leaving a connection open may include locking the database or
other breaking behaviour.
Writing DataFrames#
Assuming the following data is in a DataFrame data, we can insert it into
the database using to_sql().
id   Date         Col_1   Col_2    Col_3
26   2010-10-18   X        27.50   True
42   2010-10-19   Y       -12.50   False
63   2010-10-20   Z         5.73   True
In [622]: import datetime
In [623]: c = ["id", "Date", "Col_1", "Col_2", "Col_3"]
In [624]: d = [
.....: (26, datetime.datetime(2010, 10, 18), "X", 27.5, True),
.....: (42, datetime.datetime(2010, 10, 19), "Y", -12.5, False),
.....: (63, datetime.datetime(2010, 10, 20), "Z", 5.73, True),
.....: ]
.....:
In [625]: data = pd.DataFrame(d, columns=c)
In [626]: data
Out[626]:
id Date Col_1 Col_2 Col_3
0 26 2010-10-18 X 27.50 True
1 42 2010-10-19 Y -12.50 False
2 63 2010-10-20 Z 5.73 True
In [627]: data.to_sql("data", engine)
Out[627]: 3
With some databases, writing large DataFrames can result in errors due to
packet size limitations being exceeded. This can be avoided by setting the
chunksize parameter when calling to_sql. For example, the following
writes data to the database in batches of 1000 rows at a time:
In [628]: data.to_sql("data_chunked", engine, chunksize=1000)
Out[628]: 3
SQL data types#
to_sql() will try to map your data to an appropriate
SQL data type based on the dtype of the data. When you have columns of dtype
object, pandas will try to infer the data type.
You can always override the default type by specifying the desired SQL type of
any of the columns by using the dtype argument. This argument needs a
dictionary mapping column names to SQLAlchemy types (or strings for the sqlite3
fallback mode).
For example, specifying to use the sqlalchemy String type instead of the
default Text type for string columns:
In [629]: from sqlalchemy.types import String
In [630]: data.to_sql("data_dtype", engine, dtype={"Col_1": String})
Out[630]: 3
Note
Due to the limited support for timedeltas in the different database
flavors, columns with type timedelta64 will be written to the database as
integer values (nanoseconds) and a warning will be raised.
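A minimal sketch of this behavior, reusing the SQLite engine from above (the table name data_td is hypothetical):
# timedelta64 columns are stored as integer nanoseconds (with a warning)
df_td = pd.DataFrame({"td": pd.to_timedelta(["1 days", "2 days"])})
df_td.to_sql("data_td", engine)
# The round-tripped column comes back as plain integers, not timedelta64
pd.read_sql_table("data_td", engine).dtypes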
Note
Columns of category dtype will be converted to the dense representation
as you would get with np.asarray(categorical) (e.g. for string categories
this gives an array of strings).
Because of this, reading the database table back in does not generate
a categorical.
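A minimal sketch of the round trip (the table name data_cat is hypothetical):
# The categorical column is written as its dense (string) values
df_cat = pd.DataFrame({"cat": pd.Categorical(list("abc"))})
df_cat.to_sql("data_cat", engine)
# Reading back yields object dtype, not category
pd.read_sql_table("data_cat", engine)["cat"].dtype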
Datetime data types#
Using SQLAlchemy, to_sql() is capable of writing
datetime data that is timezone naive or timezone aware. However, the resulting
data stored in the database ultimately depends on the supported data type
for datetime data of the database system being used.
The following table lists supported data types for datetime data for some
common databases. Other database dialects may have different data types for
datetime data.
Database     SQL Datetime Types                        Timezone Support
SQLite       TEXT                                      No
MySQL        TIMESTAMP or DATETIME                     No
PostgreSQL   TIMESTAMP or TIMESTAMP WITH TIME ZONE     Yes
When writing timezone aware data to databases that do not support timezones,
the data will be written as timezone naive timestamps that are in local time
with respect to the timezone.
read_sql_table() is also capable of reading datetime data that is
timezone aware or naive. When reading TIMESTAMP WITH TIME ZONE types, pandas
will convert the data to UTC.
Insertion method#
The parameter method controls the SQL insertion clause used.
Possible values are:
None: Uses standard SQL INSERT clause (one per row).
'multi': Pass multiple values in a single INSERT clause.
It uses a special SQL syntax not supported by all backends.
This usually provides better performance for analytic databases
like Presto and Redshift, but has worse performance for
traditional SQL backend if the table contains many columns.
For more information check the SQLAlchemy documentation.
callable with signature (pd_table, conn, keys, data_iter):
This can be used to implement a more performant insertion method based on
specific backend dialect features.
Example of a callable using PostgreSQL COPY clause:
# Alternative to_sql() *method* for DBs that support COPY FROM
import csv
from io import StringIO
def psql_insert_copy(table, conn, keys, data_iter):
"""
Execute SQL statement inserting data
Parameters
----------
table : pandas.io.sql.SQLTable
conn : sqlalchemy.engine.Engine or sqlalchemy.engine.Connection
keys : list of str
Column names
data_iter : Iterable that iterates the values to be inserted
"""
# gets a DBAPI connection that can provide a cursor
dbapi_conn = conn.connection
with dbapi_conn.cursor() as cur:
s_buf = StringIO()
writer = csv.writer(s_buf)
writer.writerows(data_iter)
s_buf.seek(0)
columns = ', '.join(['"{}"'.format(k) for k in keys])
if table.schema:
table_name = '{}.{}'.format(table.schema, table.name)
else:
table_name = table.name
sql = 'COPY {} ({}) FROM STDIN WITH CSV'.format(
table_name, columns)
cur.copy_expert(sql=sql, file=s_buf)
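The callable is then passed through the method argument of to_sql(). This is only a sketch: it needs a running PostgreSQL server and the psycopg2 driver, and the connection URI below is hypothetical (it will not work against the in-memory SQLite engine used elsewhere in this section).
from sqlalchemy import create_engine

# Hypothetical PostgreSQL connection; COPY ... FROM STDIN is PostgreSQL-specific
pg_engine = create_engine("postgresql+psycopg2://scott:tiger@localhost:5432/mydatabase")

# Use the COPY-based callable defined above instead of row-wise INSERTs
data.to_sql("data_copy", pg_engine, method=psql_insert_copy)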
Reading tables#
read_sql_table() will read a database table given the
table name and optionally a subset of columns to read.
Note
In order to use read_sql_table(), you must have the
SQLAlchemy optional dependency installed.
In [631]: pd.read_sql_table("data", engine)
Out[631]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 X 27.50 True
1 1 42 2010-10-19 Y -12.50 False
2 2 63 2010-10-20 Z 5.73 True
Note
Note that pandas infers column dtypes from query outputs, and not by looking
up data types in the physical database schema. For example, assume userid
is an integer column in a table. Then, intuitively, select userid ... will
return integer-valued series, while select cast(userid as text) ... will
return object-valued (str) series. Accordingly, if the query output is empty,
then all resulting columns will be returned as object-valued (since they are
most general). If you foresee that your query will sometimes generate an empty
result, you may want to explicitly typecast afterwards to ensure dtype
integrity.
You can also specify the name of the column as the DataFrame index,
and specify a subset of columns to be read.
In [632]: pd.read_sql_table("data", engine, index_col="id")
Out[632]:
index Date Col_1 Col_2 Col_3
id
26 0 2010-10-18 X 27.50 True
42 1 2010-10-19 Y -12.50 False
63 2 2010-10-20 Z 5.73 True
In [633]: pd.read_sql_table("data", engine, columns=["Col_1", "Col_2"])
Out[633]:
Col_1 Col_2
0 X 27.50
1 Y -12.50
2 Z 5.73
And you can explicitly force columns to be parsed as dates:
In [634]: pd.read_sql_table("data", engine, parse_dates=["Date"])
Out[634]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 X 27.50 True
1 1 42 2010-10-19 Y -12.50 False
2 2 63 2010-10-20 Z 5.73 True
If needed you can explicitly specify a format string, or a dict of arguments
to pass to pandas.to_datetime():
pd.read_sql_table("data", engine, parse_dates={"Date": "%Y-%m-%d"})
pd.read_sql_table(
"data",
engine,
parse_dates={"Date": {"format": "%Y-%m-%d %H:%M:%S"}},
)
You can check if a table exists using has_table()
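For example, a small sketch against the engine created above:
from pandas.io import sql

# Returns True if the table exists in the database, False otherwise
sql.has_table("data", engine)
sql.has_table("no_such_table", engine)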
Schema support#
Reading from and writing to different schemas is supported through the schema
keyword in the read_sql_table() and to_sql()
functions. Note however that this depends on the database flavor (sqlite does not
have schemas). For example:
df.to_sql("table", engine, schema="other_schema")
pd.read_sql_table("table", engine, schema="other_schema")
Querying#
You can query using raw SQL in the read_sql_query() function.
In this case you must use the SQL variant appropriate for your database.
When using SQLAlchemy, you can also pass SQLAlchemy Expression language constructs,
which are database-agnostic.
In [635]: pd.read_sql_query("SELECT * FROM data", engine)
Out[635]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 00:00:00.000000 X 27.50 1
1 1 42 2010-10-19 00:00:00.000000 Y -12.50 0
2 2 63 2010-10-20 00:00:00.000000 Z 5.73 1
Of course, you can specify a more “complex” query.
In [636]: pd.read_sql_query("SELECT id, Col_1, Col_2 FROM data WHERE id = 42;", engine)
Out[636]:
id Col_1 Col_2
0 42 Y -12.5
The read_sql_query() function supports a chunksize argument.
Specifying this will return an iterator through chunks of the query result:
In [637]: df = pd.DataFrame(np.random.randn(20, 3), columns=list("abc"))
In [638]: df.to_sql("data_chunks", engine, index=False)
Out[638]: 20
In [639]: for chunk in pd.read_sql_query("SELECT * FROM data_chunks", engine, chunksize=5):
.....: print(chunk)
.....:
a b c
0 0.070470 0.901320 0.937577
1 0.295770 1.420548 -0.005283
2 -1.518598 -0.730065 0.226497
3 -2.061465 0.632115 0.853619
4 2.719155 0.139018 0.214557
a b c
0 -1.538924 -0.366973 -0.748801
1 -0.478137 -1.559153 -3.097759
2 -2.320335 -0.221090 0.119763
3 0.608228 1.064810 -0.780506
4 -2.736887 0.143539 1.170191
a b c
0 -1.573076 0.075792 -1.722223
1 -0.774650 0.803627 0.221665
2 0.584637 0.147264 1.057825
3 -0.284136 0.912395 1.552808
4 0.189376 -0.109830 0.539341
a b c
0 0.592591 -0.155407 -1.356475
1 0.833837 1.524249 1.606722
2 -0.029487 -0.051359 1.700152
3 0.921484 -0.926347 0.979818
4 0.182380 -0.186376 0.049820
You can also run a plain query without creating a DataFrame with
execute(). This is useful for queries that don’t return values,
such as INSERT. This is functionally equivalent to calling execute on the
SQLAlchemy engine or db connection object. Again, you must use the SQL syntax
variant appropriate for your database.
from pandas.io import sql
sql.execute("SELECT * FROM table_name", engine)
sql.execute(
"INSERT INTO table_name VALUES(?, ?, ?)", engine, params=[("id", 1, 12.2, True)]
)
Engine connection examples#
To connect with SQLAlchemy you use the create_engine() function to create an engine
object from database URI. You only need to create the engine once per database you are
connecting to.
from sqlalchemy import create_engine
engine = create_engine("postgresql://scott:[email protected]:5432/mydatabase")
engine = create_engine("mysql+mysqldb://scott:[email protected]/foo")
engine = create_engine("oracle://scott:[email protected]:1521/sidname")
engine = create_engine("mssql+pyodbc://mydsn")
# sqlite://<nohostname>/<path>
# where <path> is relative:
engine = create_engine("sqlite:///foo.db")
# or absolute, starting with a slash:
engine = create_engine("sqlite:////absolute/path/to/foo.db")
For more information see the examples the SQLAlchemy documentation
Advanced SQLAlchemy queries#
You can use SQLAlchemy constructs to describe your query.
Use sqlalchemy.text() to specify query parameters in a backend-neutral way
In [640]: import sqlalchemy as sa
In [641]: pd.read_sql(
.....: sa.text("SELECT * FROM data where Col_1=:col1"), engine, params={"col1": "X"}
.....: )
.....:
Out[641]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 00:00:00.000000 X 27.5 1
If you have an SQLAlchemy description of your database you can express where conditions using SQLAlchemy expressions
In [642]: metadata = sa.MetaData()
In [643]: data_table = sa.Table(
.....: "data",
.....: metadata,
.....: sa.Column("index", sa.Integer),
.....: sa.Column("Date", sa.DateTime),
.....: sa.Column("Col_1", sa.String),
.....: sa.Column("Col_2", sa.Float),
.....: sa.Column("Col_3", sa.Boolean),
.....: )
.....:
In [644]: pd.read_sql(sa.select([data_table]).where(data_table.c.Col_3 is True), engine)
Out[644]:
Empty DataFrame
Columns: [index, Date, Col_1, Col_2, Col_3]
Index: []
You can combine SQLAlchemy expressions with parameters passed to read_sql() using sqlalchemy.bindparam()
In [645]: import datetime as dt
In [646]: expr = sa.select([data_table]).where(data_table.c.Date > sa.bindparam("date"))
In [647]: pd.read_sql(expr, engine, params={"date": dt.datetime(2010, 10, 18)})
Out[647]:
index Date Col_1 Col_2 Col_3
0 1 2010-10-19 Y -12.50 False
1 2 2010-10-20 Z 5.73 True
Sqlite fallback#
The use of sqlite is supported without using SQLAlchemy.
This mode requires a Python database adapter which respects the Python
DB-API.
You can create connections like so:
import sqlite3
con = sqlite3.connect(":memory:")
And then issue the following queries:
data.to_sql("data", con)
pd.read_sql_query("SELECT * FROM data", con)
Google BigQuery#
Warning
Starting in 0.20.0, pandas has split off Google BigQuery support into the
separate package pandas-gbq. You can pip install pandas-gbq to get it.
The pandas-gbq package provides functionality to read/write from Google BigQuery.
pandas integrates with this external package. If pandas-gbq is installed, you can
use the pandas methods pd.read_gbq and DataFrame.to_gbq, which will call the
respective functions from pandas-gbq.
Full documentation can be found here.
Stata format#
Writing to stata format#
The method to_stata() will write a DataFrame
into a .dta file. The format version of this file is always 115 (Stata 12).
In [648]: df = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
In [649]: df.to_stata("stata.dta")
Stata data files have limited data type support; only strings with
244 or fewer characters, int8, int16, int32, float32
and float64 can be stored in .dta files. Additionally,
Stata reserves certain values to represent missing data. Exporting a
non-missing value that is outside of the permitted range in Stata for
a particular data type will retype the variable to the next larger
size. For example, int8 values are restricted to lie between -127
and 100 in Stata, and so variables with values above 100 will trigger
a conversion to int16. nan values in floating point data
types are stored as the basic missing data type (. in Stata).
Note
It is not possible to export missing data values for integer data types.
The Stata writer gracefully handles other data types including int64,
bool, uint8, uint16, uint32 by casting to
the smallest supported type that can represent the data. For example, data
with a type of uint8 will be cast to int8 if all values are less than
100 (the upper bound for non-missing int8 data in Stata), or, if values are
outside of this range, the variable is cast to int16.
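A minimal sketch of this casting (the file name is hypothetical):
# uint8 values below 100 fit within Stata's non-missing int8 range, so the
# column is written as int8 and comes back as int8
df_small = pd.DataFrame({"x": np.array([1, 2, 3], dtype="uint8")})
df_small.to_stata("stata_uint8.dta")
pd.read_stata("stata_uint8.dta").dtypes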
Warning
Conversion from int64 to float64 may result in a loss of precision
if int64 values are larger than 2**53.
Warning
StataWriter and
to_stata() only support fixed width
strings containing up to 244 characters, a limitation imposed by the version
115 dta file format. Attempting to write Stata dta files with strings
longer than 244 characters raises a ValueError.
Reading from Stata format#
The top-level function read_stata will read a dta file and return
either a DataFrame or a StataReader that can
be used to read the file incrementally.
In [650]: pd.read_stata("stata.dta")
Out[650]:
index A B
0 0 -1.690072 0.405144
1 1 -1.511309 -1.531396
2 2 0.572698 -1.106845
3 3 -1.185859 0.174564
4 4 0.603797 -1.796129
5 5 -0.791679 1.173795
6 6 -0.277710 1.859988
7 7 -0.258413 1.251808
8 8 1.443262 0.441553
9 9 1.168163 -2.054946
Specifying a chunksize yields a
StataReader instance that can be used to
read chunksize lines from the file at a time. The StataReader
object can be used as an iterator.
In [651]: with pd.read_stata("stata.dta", chunksize=3) as reader:
.....: for df in reader:
.....: print(df.shape)
.....:
(3, 3)
(3, 3)
(3, 3)
(1, 3)
For more fine-grained control, use iterator=True and specify
chunksize with each call to
read().
In [652]: with pd.read_stata("stata.dta", iterator=True) as reader:
.....: chunk1 = reader.read(5)
.....: chunk2 = reader.read(5)
.....:
Currently the index is retrieved as a column.
The parameter convert_categoricals indicates whether value labels should be
read and used to create a Categorical variable from them. Value labels can
also be retrieved by the function value_labels, which requires read()
to be called before use.
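For example, a sketch using the file written above (which has no value labels, so the result is an empty dict):
with pd.read_stata("stata.dta", iterator=True) as reader:
    reader.read()                     # value_labels() requires read() first
    labels = reader.value_labels()    # dict of {variable: {code: label}}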
The parameter convert_missing indicates whether missing value
representations in Stata should be preserved. If False (the default),
missing values are represented as np.nan. If True, missing values are
represented using StataMissingValue objects, and columns containing missing
values will have object data type.
Note
read_stata() and
StataReader support .dta formats 113-115
(Stata 10-12), 117 (Stata 13), and 118 (Stata 14).
Note
Setting preserve_dtypes=False will upcast to the standard pandas data types:
int64 for all integer types and float64 for floating point data. By default,
the Stata data types are preserved when importing.
Categorical data#
Categorical data can be exported to Stata data files as value labeled data.
The exported data consists of the underlying category codes as integer data values
and the categories as value labels. Stata does not have an explicit equivalent
to a Categorical and information about whether the variable is ordered
is lost when exporting.
Warning
Stata only supports string value labels, and so str is called on the
categories when exporting data. Exporting Categorical variables with
non-string categories produces a warning, and can result in a loss of
information if the str representations of the categories are not unique.
Labeled data can similarly be imported from Stata data files as Categorical
variables using the keyword argument convert_categoricals (True by default).
The keyword argument order_categoricals (True by default) determines
whether imported Categorical variables are ordered.
Note
When importing categorical data, the values of the variables in the Stata
data file are not preserved since Categorical variables always
use integer data types between -1 and n-1 where n is the number
of categories. If the original values in the Stata data file are required,
these can be imported by setting convert_categoricals=False, which will
import original data (but not the variable labels). The original values can
be matched to the imported categorical data since there is a simple mapping
between the original Stata data values and the category codes of imported
Categorical variables: missing values are assigned code -1, and the
smallest original value is assigned 0, the second smallest is assigned
1 and so on until the largest original value is assigned the code n-1.
Note
Stata supports partially labeled series. These series have value labels for
some but not all data values. Importing a partially labeled series will produce
a Categorical with string categories for the values that are labeled and
numeric categories for values with no label.
SAS formats#
The top-level function read_sas() can read (but not write) SAS
XPORT (.xpt) and (since v0.18.0) SAS7BDAT (.sas7bdat) format files.
SAS files only contain two value types: ASCII text and floating point
values (usually 8 bytes but sometimes truncated). For xport files,
there is no automatic type conversion to integers, dates, or
categoricals. For SAS7BDAT files, the format codes may allow date
variables to be automatically converted to dates. By default the
whole file is read and returned as a DataFrame.
Specify a chunksize or use iterator=True to obtain reader
objects (XportReader or SAS7BDATReader) for incrementally
reading the file. The reader objects also have attributes that
contain additional information about the file and its variables.
Read a SAS7BDAT file:
df = pd.read_sas("sas_data.sas7bdat")
Obtain an iterator and read an XPORT file 100,000 lines at a time:
def do_something(chunk):
pass
with pd.read_sas("sas_xport.xpt", chunksize=100000) as rdr:
for chunk in rdr:
do_something(chunk)
The specification for the xport file format is available from the SAS
web site.
No official documentation is available for the SAS7BDAT format.
SPSS formats#
New in version 0.25.0.
The top-level function read_spss() can read (but not write) SPSS
SAV (.sav) and ZSAV (.zsav) format files.
SPSS files contain column names. By default the
whole file is read, categorical columns are converted into pd.Categorical,
and a DataFrame with all columns is returned.
Specify the usecols parameter to obtain a subset of columns. Specify convert_categoricals=False
to avoid converting categorical columns into pd.Categorical.
Read an SPSS file:
df = pd.read_spss("spss_data.sav")
Extract a subset of columns contained in usecols from an SPSS file and
avoid converting categorical columns into pd.Categorical:
df = pd.read_spss(
"spss_data.sav",
usecols=["foo", "bar"],
convert_categoricals=False,
)
More information about the SAV and ZSAV file formats is available here.
Other file formats#
pandas itself only supports IO with a limited set of file formats that map
cleanly to its tabular data model. For reading and writing other file formats
into and from pandas, we recommend these packages from the broader community.
netCDF#
xarray provides data structures inspired by the pandas DataFrame for working
with multi-dimensional datasets, with a focus on the netCDF file format and
easy conversion to and from pandas.
Performance considerations#
This is an informal comparison of various IO methods, using pandas
0.24.2. Timings are machine dependent and small differences should be
ignored.
In [1]: sz = 1000000
In [2]: df = pd.DataFrame({'A': np.random.randn(sz), 'B': [1] * sz})
In [3]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 2 columns):
A 1000000 non-null float64
B 1000000 non-null int64
dtypes: float64(1), int64(1)
memory usage: 15.3 MB
The following test functions will be used below to compare the performance of several IO methods:
import os
import sqlite3

import numpy as np
import pandas as pd

sz = 1000000
np.random.seed(42)
df = pd.DataFrame({"A": np.random.randn(sz), "B": [1] * sz})
def test_sql_write(df):
if os.path.exists("test.sql"):
os.remove("test.sql")
sql_db = sqlite3.connect("test.sql")
df.to_sql(name="test_table", con=sql_db)
sql_db.close()
def test_sql_read():
sql_db = sqlite3.connect("test.sql")
pd.read_sql_query("select * from test_table", sql_db)
sql_db.close()
def test_hdf_fixed_write(df):
df.to_hdf("test_fixed.hdf", "test", mode="w")
def test_hdf_fixed_read():
pd.read_hdf("test_fixed.hdf", "test")
def test_hdf_fixed_write_compress(df):
df.to_hdf("test_fixed_compress.hdf", "test", mode="w", complib="blosc")
def test_hdf_fixed_read_compress():
pd.read_hdf("test_fixed_compress.hdf", "test")
def test_hdf_table_write(df):
df.to_hdf("test_table.hdf", "test", mode="w", format="table")
def test_hdf_table_read():
pd.read_hdf("test_table.hdf", "test")
def test_hdf_table_write_compress(df):
df.to_hdf(
"test_table_compress.hdf", "test", mode="w", complib="blosc", format="table"
)
def test_hdf_table_read_compress():
pd.read_hdf("test_table_compress.hdf", "test")
def test_csv_write(df):
df.to_csv("test.csv", mode="w")
def test_csv_read():
pd.read_csv("test.csv", index_col=0)
def test_feather_write(df):
df.to_feather("test.feather")
def test_feather_read():
pd.read_feather("test.feather")
def test_pickle_write(df):
df.to_pickle("test.pkl")
def test_pickle_read():
pd.read_pickle("test.pkl")
def test_pickle_write_compress(df):
df.to_pickle("test.pkl.compress", compression="xz")
def test_pickle_read_compress():
pd.read_pickle("test.pkl.compress", compression="xz")
def test_parquet_write(df):
df.to_parquet("test.parquet")
def test_parquet_read():
pd.read_parquet("test.parquet")
When writing, the top three functions in terms of speed are test_feather_write, test_hdf_fixed_write and test_hdf_fixed_write_compress.
In [4]: %timeit test_sql_write(df)
3.29 s ± 43.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [5]: %timeit test_hdf_fixed_write(df)
19.4 ms ± 560 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [6]: %timeit test_hdf_fixed_write_compress(df)
19.6 ms ± 308 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [7]: %timeit test_hdf_table_write(df)
449 ms ± 5.61 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [8]: %timeit test_hdf_table_write_compress(df)
448 ms ± 11.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [9]: %timeit test_csv_write(df)
3.66 s ± 26.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [10]: %timeit test_feather_write(df)
9.75 ms ± 117 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [11]: %timeit test_pickle_write(df)
30.1 ms ± 229 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [12]: %timeit test_pickle_write_compress(df)
4.29 s ± 15.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [13]: %timeit test_parquet_write(df)
67.6 ms ± 706 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
When reading, the top three functions in terms of speed are test_feather_read, test_pickle_read and
test_hdf_fixed_read.
In [14]: %timeit test_sql_read()
1.77 s ± 17.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [15]: %timeit test_hdf_fixed_read()
19.4 ms ± 436 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [16]: %timeit test_hdf_fixed_read_compress()
19.5 ms ± 222 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [17]: %timeit test_hdf_table_read()
38.6 ms ± 857 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [18]: %timeit test_hdf_table_read_compress()
38.8 ms ± 1.49 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [19]: %timeit test_csv_read()
452 ms ± 9.04 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [20]: %timeit test_feather_read()
12.4 ms ± 99.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [21]: %timeit test_pickle_read()
18.4 ms ± 191 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [22]: %timeit test_pickle_read_compress()
915 ms ± 7.48 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [23]: %timeit test_parquet_read()
24.4 ms ± 146 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
The files test.pkl.compress, test.parquet and test.feather took the least space on disk (in bytes).
29519500 Oct 10 06:45 test.csv
16000248 Oct 10 06:45 test.feather
8281983 Oct 10 06:49 test.parquet
16000857 Oct 10 06:47 test.pkl
7552144 Oct 10 06:48 test.pkl.compress
34816000 Oct 10 06:42 test.sql
24009288 Oct 10 06:43 test_fixed.hdf
24009288 Oct 10 06:43 test_fixed_compress.hdf
24458940 Oct 10 06:44 test_table.hdf
24458940 Oct 10 06:44 test_table_compress.hdf
| 211
| 329
|
Index returns the first letter of the destination value instead of the target value
I use this code to pull API data names from an exchange and retrieve their equivalent symbols, but my current problem is that I am not sure the index returned is correct, because when I look up the associated symbol I get the first letter of the name rather than the symbol.
from pycoingecko import CoinGeckoAPI
import pandas as pd
cg = CoinGeckoAPI()
response_list = cg.get_coins_list()
response_list_normalized = pd.json_normalize(response_list)
print('\n--- selected: LIST NORMALIZED ---')
print(response_list_normalized)
response_list_stringed = ''.join(map(str, response_list_normalized['name']))
if crypto_token_name in response_list_stringed:
print('\n--- selected: EXACT MATCHING RESULT ---')
print('Found it!')
position = response_list_stringed.index('Cardano')
print('\n--- position: INDEX ---')
print(position)
symbol = response_list_stringed[position]
print('\n--- position: SYMBOL ---')
print(symbol)
else:
print('\n--- selected: LIST MATCHING RESULT ---')
print('Not found! :(')
Is the list dimension the cause, or am I pointing at the wrong target? I have spent days trying every possible variant to get it to look up the name and retrieve its index and associated symbol.
|
63,839,570
|
Very new to python, wanting to add a new column named ‘Total’, which is the sum of other totals
|
df['Total'] = df.iloc[3:5].sum(axis=1)

returns NaN for most values. Why is this? They are all integers.

[Screenshot of df.head() at https://i.stack.imgur.com/Mlwbt.png; it also seems to show an incorrect addition. Is it adding the generation column as well?]

Also, is there a better way of doing this?
| 63,839,590
| 2020-09-11T01:20:54.367000
| 1
| null | 0
| 32
|
python|pandas
|
Sum over a columns slice rather than a row slice:

df['Total'] = df.iloc[:, 3:5].sum(axis=1)
| 2020-09-11T01:22:48.007000
| 0
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
any of the above categories.
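As a compact sketch of these three kinds of apply steps (using a small throwaway frame, not one defined elsewhere in this document):
df_sketch = pd.DataFrame({"key": list("aabb"), "val": [1.0, 2.0, 3.0, 40.0]})

df_sketch.groupby("key")["val"].sum()                             # aggregation
df_sketch.groupby("key")["val"].transform("mean")                 # transformation
df_sketch.groupby("key").filter(lambda g: g["val"].sum() > 10)    # filtration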
Since the set of object instance methods on pandas data structures are generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of dropna argument is True which means NA are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and whose values are the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index consists of the group names and whose values are the sizes of each group
(shown here as a DataFrame, since grouped was created with as_index=False).
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function     Description
mean()       Compute mean of groups
sum()        Compute sum of group values
size()       Compute group sizes
count()      Compute count of group
std()        Standard deviation of groups
var()        Compute variance of groups
sem()        Standard error of the mean of groups
describe()   Generates descriptive statistics
first()      Compute first of group values
last()       Compute last of group values
nth()        Take nth value, or a subset if n is a list
min()        Compute min of group values
max()        Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work;
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python identifiers (and so cannot be used
directly as keyword arguments), construct a dictionary and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
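A minimal sketch of that pattern, assuming you want to pass an extra argument (here, an invented quantile level) to the aggregation; the data and names are illustrative only:
import functools

import numpy as np
import pandas as pd

part_df = pd.DataFrame(
    {"kind": ["cat", "dog", "cat", "dog"], "weight": [7.9, 7.5, 9.9, 198.0]}
)

# Bind the extra argument up front, because **kwargs in agg are reserved
# for (column, aggfunc) pairs rather than being forwarded to the function.
q75 = functools.partial(np.quantile, q=0.75)

part_df.groupby("kind").agg(weight_q75=("weight", q75))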
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5 and earlier.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
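A minimal sketch of a transform that follows these rules (same-sized output per group, no in-place mutation); the data is invented for illustration:
import pandas as pd

t_df = pd.DataFrame({"key": ["a", "a", "b", "b"], "val": [1.0, 3.0, 10.0, 30.0]})

# Returns a result the same size as each group chunk and does not mutate it.
t_df.groupby("key")["val"].transform(lambda x: x - x.mean())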
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions, when applied to a GroupBy object, will automatically transform
the input and return an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed object where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it. It can depend on the passed function and
exactly what you are grouping. Thus the grouped column(s) may be included in
the output and may also be set as the index.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
apply on a Series can operate on a returned value from the applied function
that is itself a Series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly: the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
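A hedged sketch of the signature described above, assuming the optional numba dependency is installed (the data and function name are invented for illustration):
import pandas as pd

def nb_mean(values, index):
    # `values` and `index` arrive as NumPy arrays; the body must be
    # compilable by numba in nopython mode (plain loops and arithmetic).
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

nb_df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1.0, 2.0, 3.0]})

nb_df.groupby("key")["val"].agg(nb_mean, engine="numba")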
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only interesting over one column (here colname), it may be filtered
before applying the aggregation function.
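As a sketch (with an invented frame, since the exact savings depend on your data), the two spellings look like this:
import numpy as np
import pandas as pd

eff_df = pd.DataFrame(
    {
        "A": ["foo", "bar", "foo", "bar"],
        "B": ["one", "one", "two", "two"],
        "C": np.arange(4.0),
        "D": np.arange(4.0) * 2,
    }
)

eff_df.groupby("A")["C"].std()                     # aggregate only column C
eff_df.groupby("A").std(numeric_only=True)["C"]    # aggregate all numeric columns, then select C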
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a “nuisance” column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The dtype of the grouped index will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
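A small sketch of this exclusion (invented data):
import numpy as np
import pandas as pd

vals = pd.Series([1, 2, 3, 4])
key = pd.Series(["a", np.nan, "a", "b"])

# The row whose key is NaN simply does not appear in any group.
vals.groupby(key).sum()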
Grouping with ordered factors#
Categorical variables represented as instances of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group, in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
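A hedged sketch of that idea, using an invented frame and function; note that .pipe also forwards any extra arguments to the function after the GroupBy object:
import pandas as pd

pipe_df = pd.DataFrame({"grp": ["a", "a", "b", "b"], "val": [1.0, 2.0, 3.0, 4.0]})

def top_groups(gb, n):
    # `gb` is the GroupBy object handed over by .pipe; `n` is an extra
    # argument forwarded by .pipe unchanged.
    return gb["val"].sum().nlargest(n)

pipe_df.groupby("grp").pipe(top_groups, n=1)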
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resampling to work on indices that are non-datetime-like, the following procedure can be utilized.
In the following examples, df.index // 5 returns a binary array which is used to determine what gets selected for the groupby operation.
Note
The below example shows how we can downsample by consolidation of samples into fewer samples. Here by using df.index // 5, we are aggregating the samples in bins. By applying std() function, we aggregate the information contained in many samples into a small subset of values which is their standard deviation thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 259
| 335
|
Very new to python, wanting to add a new column named ‘Total’, which is the sum of other totals
df['Total']= df.iloc[3:5].sum(axis=1)
returns NaN for most values, why is this? They are all integers.
Also is there a better way of doing this?
|
61,602,341
|
Trying to sum combine a whole lot of columns faster/easier... help appreciated
|
<p>I'm trying to sum columns into groups of 30 (month). Each column is a day. There are almost 2,000 columns</p>
<p>Each row is an individual product and there are about 30,000 of them. </p>
<p>Below is what I am doing to sum them in jupyter.</p>
<p>My question is: is there an easier/faster way to do this without having to do what I did below over 60 more times?</p>
<pre><code>Month1 = (df_sales["d_1"] + df_sales["d_2"] + df_sales["d_3"] + df_sales["d_4"] + df_sales["d_5"] + df_sales["d_6"] + df_sales["d_7"] + df_sales["d_8"] + df_sales["d_9"] + df_sales["d_10"]
+ df_sales["d_11"] + df_sales["d_12"] + df_sales["d_13"] + df_sales["d_14"] + df_sales["d_15"] + df_sales["d_16"] + df_sales["d_17"] + df_sales["d_18"] + df_sales["d_19"] + df_sales["d_20"]
+ df_sales["d_21"] + df_sales["d_22"] + df_sales["d_23"] + df_sales["d_24"] + df_sales["d_25"] + df_sales["d_26"] + df_sales["d_27"] + df_sales["d_28"] + df_sales["d_29"] + df_sales["d_30"])
</code></pre>
| 61,602,529
| 2020-05-04T21:58:19.300000
| 1
| null | 1
| 33
|
python|pandas
|
<pre><code>Month1 = df_sales.loc[:, "d_1":"d_30"].sum(axis=1)
</code></pre>
<p>If every month in your table has 30 days (columns) and you start with the first column, you may perform</p>
<pre><code>all_months = pd.concat((df_sales.iloc[:, i:i+30].sum(axis=1)
for i in range(0, df_sales.shape[1], 30)),
axis=1)
</code></pre>
<p>to obtain the dataframe of all months sums.</p>
<p>Replace </p>
<pre><code>range(0, df_sales.shape[1], 30)
</code></pre>
<p>with </p>
<pre><code>range(n, df_sales.shape[1], 30)
</code></pre>
<p>if your days start in the column <code>n</code> (be aware - the first column has number <code>0</code>).</p>
| 2020-05-04T22:14:46.270000
| 0
|
https://pandas.pydata.org/docs/user_guide/missing_data.html
|
Working with missing data#
Working with missing data#
In this section, we will discuss missing (also referred to as NA) values in
pandas.
Note
The choice of using NaN internally to denote missing data was largely
for simplicity and performance reasons.
Starting from pandas 1.0, some optional data types start experimenting
with a native NA scalar using a mask-based approach. See
here for more.
See the cookbook for some advanced strategies.
Values considered “missing”#
Month1 = df_sales.loc[:, "d_1":"d_30"].sum(axis=1)
If every month in your table has 30 days (columns) and you start with the first column, you may perform
all_months = pd.concat((df_sales.iloc[:, i:i+30].sum(axis=1)
for i in range(0, df_sales.shape[1], 30)),
axis=1)
to obtain the dataframe of all months sums.
Replace
range(0, df_sales.shape[1], 30)
with
range(n, df_sales.shape[1], 30)
if your days start in the column n (be aware - the first column has number 0).
As data comes in many shapes and forms, pandas aims to be flexible with regard
to handling missing data. While NaN is the default missing value marker for
reasons of computational speed and convenience, we need to be able to easily
detect this value with data of different types: floating point, integer,
boolean, and general object. In many cases, however, the Python None will
arise and we wish to also consider that “missing” or “not available” or “NA”.
Note
If you want to consider inf and -inf to be “NA” in computations,
you can set pandas.options.mode.use_inf_as_na = True.
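A small sketch of the effect of that option (using option_context so the setting is temporary; data invented for illustration):
import numpy as np
import pandas as pd

inf_s = pd.Series([1.0, np.inf, -np.inf])

inf_s.isna()                                    # inf values are not NA by default
with pd.option_context("mode.use_inf_as_na", True):
    inf_s.isna()                                # inf and -inf now count as NA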
In [1]: df = pd.DataFrame(
...: np.random.randn(5, 3),
...: index=["a", "c", "e", "f", "h"],
...: columns=["one", "two", "three"],
...: )
...:
In [2]: df["four"] = "bar"
In [3]: df["five"] = df["one"] > 0
In [4]: df
Out[4]:
one two three four five
a 0.469112 -0.282863 -1.509059 bar True
c -1.135632 1.212112 -0.173215 bar False
e 0.119209 -1.044236 -0.861849 bar True
f -2.104569 -0.494929 1.071804 bar False
h 0.721555 -0.706771 -1.039575 bar True
In [5]: df2 = df.reindex(["a", "b", "c", "d", "e", "f", "g", "h"])
In [6]: df2
Out[6]:
one two three four five
a 0.469112 -0.282863 -1.509059 bar True
b NaN NaN NaN NaN NaN
c -1.135632 1.212112 -0.173215 bar False
d NaN NaN NaN NaN NaN
e 0.119209 -1.044236 -0.861849 bar True
f -2.104569 -0.494929 1.071804 bar False
g NaN NaN NaN NaN NaN
h 0.721555 -0.706771 -1.039575 bar True
To make detecting missing values easier (and across different array dtypes),
pandas provides the isna() and
notna() functions, which are also methods on
Series and DataFrame objects:
In [7]: df2["one"]
Out[7]:
a 0.469112
b NaN
c -1.135632
d NaN
e 0.119209
f -2.104569
g NaN
h 0.721555
Name: one, dtype: float64
In [8]: pd.isna(df2["one"])
Out[8]:
a False
b True
c False
d True
e False
f False
g True
h False
Name: one, dtype: bool
In [9]: df2["four"].notna()
Out[9]:
a True
b False
c True
d False
e True
f True
g False
h True
Name: four, dtype: bool
In [10]: df2.isna()
Out[10]:
one two three four five
a False False False False False
b True True True True True
c False False False False False
d True True True True True
e False False False False False
f False False False False False
g True True True True True
h False False False False False
Warning
One has to be mindful that in Python (and NumPy), the nan's don’t compare equal, but None's do.
Note that pandas/NumPy uses the fact that np.nan != np.nan, and treats None like np.nan.
In [11]: None == None # noqa: E711
Out[11]: True
In [12]: np.nan == np.nan
Out[12]: False
So as compared to above, a scalar equality comparison versus a None/np.nan doesn’t provide useful information.
In [13]: df2["one"] == np.nan
Out[13]:
a False
b False
c False
d False
e False
f False
g False
h False
Name: one, dtype: bool
Integer dtypes and missing data#
Because NaN is a float, a column of integers with even one missing value
is cast to floating-point dtype (see Support for integer NA for more). pandas
provides a nullable integer array, which can be used by explicitly requesting
the dtype:
In [14]: pd.Series([1, 2, np.nan, 4], dtype=pd.Int64Dtype())
Out[14]:
0 1
1 2
2 <NA>
3 4
dtype: Int64
Alternatively, the string alias dtype='Int64' (note the capital "I") can be
used.
See Nullable integer data type for more.
Datetimes#
For datetime64[ns] types, NaT represents missing values. This is a pseudo-native
sentinel value that can be represented by NumPy in a singular dtype (datetime64[ns]).
pandas objects provide compatibility between NaT and NaN.
In [15]: df2 = df.copy()
In [16]: df2["timestamp"] = pd.Timestamp("20120101")
In [17]: df2
Out[17]:
one two three four five timestamp
a 0.469112 -0.282863 -1.509059 bar True 2012-01-01
c -1.135632 1.212112 -0.173215 bar False 2012-01-01
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01
f -2.104569 -0.494929 1.071804 bar False 2012-01-01
h 0.721555 -0.706771 -1.039575 bar True 2012-01-01
In [18]: df2.loc[["a", "c", "h"], ["one", "timestamp"]] = np.nan
In [19]: df2
Out[19]:
one two three four five timestamp
a NaN -0.282863 -1.509059 bar True NaT
c NaN 1.212112 -0.173215 bar False NaT
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01
f -2.104569 -0.494929 1.071804 bar False 2012-01-01
h NaN -0.706771 -1.039575 bar True NaT
In [20]: df2.dtypes.value_counts()
Out[20]:
float64 3
object 1
bool 1
datetime64[ns] 1
dtype: int64
Inserting missing data#
You can insert missing values by simply assigning to containers. The
actual missing value used will be chosen based on the dtype.
For example, numeric containers will always use NaN regardless of
the missing value type chosen:
In [21]: s = pd.Series([1, 2, 3])
In [22]: s.loc[0] = None
In [23]: s
Out[23]:
0 NaN
1 2.0
2 3.0
dtype: float64
Likewise, datetime containers will always use NaT.
For object containers, pandas will use the value given:
In [24]: s = pd.Series(["a", "b", "c"])
In [25]: s.loc[0] = None
In [26]: s.loc[1] = np.nan
In [27]: s
Out[27]:
0 None
1 NaN
2 c
dtype: object
Calculations with missing data#
Missing values propagate naturally through arithmetic operations between pandas
objects.
In [28]: a
Out[28]:
one two
a NaN -0.282863
c NaN 1.212112
e 0.119209 -1.044236
f -2.104569 -0.494929
h -2.104569 -0.706771
In [29]: b
Out[29]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575
In [30]: a + b
Out[30]:
one three two
a NaN NaN -0.565727
c NaN NaN 2.424224
e 0.238417 NaN -2.088472
f -4.209138 NaN -0.989859
h NaN NaN -1.413542
The descriptive statistics and computational methods discussed in the
data structure overview (and listed here and here) are all written to
account for missing data. For example:
When summing data, NA (missing) values will be treated as zero.
If the data are all NA, the result will be 0.
Cumulative methods like cumsum() and cumprod() ignore NA values by default, but preserve them in the resulting arrays. To override this behaviour and include NA values, use skipna=False.
In [31]: df
Out[31]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575
In [32]: df["one"].sum()
Out[32]: -1.9853605075978744
In [33]: df.mean(1)
Out[33]:
a -0.895961
c 0.519449
e -0.595625
f -0.509232
h -0.873173
dtype: float64
In [34]: df.cumsum()
Out[34]:
one two three
a NaN -0.282863 -1.509059
c NaN 0.929249 -1.682273
e 0.119209 -0.114987 -2.544122
f -1.985361 -0.609917 -1.472318
h NaN -1.316688 -2.511893
In [35]: df.cumsum(skipna=False)
Out[35]:
one two three
a NaN -0.282863 -1.509059
c NaN 0.929249 -1.682273
e NaN -0.114987 -2.544122
f NaN -0.609917 -1.472318
h NaN -1.316688 -2.511893
Sum/prod of empties/nans#
Warning
This behavior is now standard as of v0.22.0 and is consistent with the default in numpy; previously sum/prod of all-NA or empty Series/DataFrames would return NaN.
See v0.22.0 whatsnew for more.
The sum of an empty or all-NA Series or column of a DataFrame is 0.
In [36]: pd.Series([np.nan]).sum()
Out[36]: 0.0
In [37]: pd.Series([], dtype="float64").sum()
Out[37]: 0.0
The product of an empty or all-NA Series or column of a DataFrame is 1.
In [38]: pd.Series([np.nan]).prod()
Out[38]: 1.0
In [39]: pd.Series([], dtype="float64").prod()
Out[39]: 1.0
NA values in GroupBy#
NA groups in GroupBy are automatically excluded. This behavior is consistent
with R, for example:
In [40]: df
Out[40]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575
In [41]: df.groupby("one").mean()
Out[41]:
two three
one
-2.104569 -0.494929 1.071804
0.119209 -1.044236 -0.861849
See the groupby section here for more information.
Cleaning / filling missing data#
pandas objects are equipped with various data manipulation methods for dealing
with missing data.
Filling missing values: fillna#
fillna() can “fill in” NA values with non-NA data in a couple
of ways, which we illustrate:
Replace NA with a scalar value
In [42]: df2
Out[42]:
one two three four five timestamp
a NaN -0.282863 -1.509059 bar True NaT
c NaN 1.212112 -0.173215 bar False NaT
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01
f -2.104569 -0.494929 1.071804 bar False 2012-01-01
h NaN -0.706771 -1.039575 bar True NaT
In [43]: df2.fillna(0)
Out[43]:
one two three four five timestamp
a 0.000000 -0.282863 -1.509059 bar True 0
c 0.000000 1.212112 -0.173215 bar False 0
e 0.119209 -1.044236 -0.861849 bar True 2012-01-01 00:00:00
f -2.104569 -0.494929 1.071804 bar False 2012-01-01 00:00:00
h 0.000000 -0.706771 -1.039575 bar True 0
In [44]: df2["one"].fillna("missing")
Out[44]:
a missing
c missing
e 0.119209
f -2.104569
h missing
Name: one, dtype: object
Fill gaps forward or backward
Using the same filling arguments as reindexing, we
can propagate non-NA values forward or backward:
In [45]: df
Out[45]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575
In [46]: df.fillna(method="pad")
Out[46]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h -2.104569 -0.706771 -1.039575
Limit the amount of filling
If we only want consecutive gaps filled up to a certain number of data points,
we can use the limit keyword:
In [47]: df
Out[47]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e NaN NaN NaN
f NaN NaN NaN
h NaN -0.706771 -1.039575
In [48]: df.fillna(method="pad", limit=1)
Out[48]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e NaN 1.212112 -0.173215
f NaN NaN NaN
h NaN -0.706771 -1.039575
To remind you, these are the available filling methods:
pad / ffill: Fill values forward
bfill / backfill: Fill values backward
With time series data, using pad/ffill is extremely common so that the “last
known value” is available at every time point.
ffill() is equivalent to fillna(method='ffill')
and bfill() is equivalent to fillna(method='bfill')
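A tiny sketch of that equivalence (invented data):
import numpy as np
import pandas as pd

fill_s = pd.Series([1.0, np.nan, np.nan, 4.0])

fill_s.ffill()     # same result as fill_s.fillna(method="ffill")
fill_s.bfill()     # same result as fill_s.fillna(method="bfill")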
Filling with a PandasObject#
You can also fillna using a dict or Series that is alignable. The labels of the dict or index of the Series
must match the columns of the frame you wish to fill. The
use case of this is to fill a DataFrame with the mean of that column.
In [49]: dff = pd.DataFrame(np.random.randn(10, 3), columns=list("ABC"))
In [50]: dff.iloc[3:5, 0] = np.nan
In [51]: dff.iloc[4:6, 1] = np.nan
In [52]: dff.iloc[5:8, 2] = np.nan
In [53]: dff
Out[53]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 NaN 0.577046 -1.715002
4 NaN NaN -1.157892
5 -1.344312 NaN NaN
6 -0.109050 1.643563 NaN
7 0.357021 -0.674600 NaN
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960
In [54]: dff.fillna(dff.mean())
Out[54]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 -0.140857 0.577046 -1.715002
4 -0.140857 -0.401419 -1.157892
5 -1.344312 -0.401419 -0.293543
6 -0.109050 1.643563 -0.293543
7 0.357021 -0.674600 -0.293543
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960
In [55]: dff.fillna(dff.mean()["B":"C"])
Out[55]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 NaN 0.577046 -1.715002
4 NaN -0.401419 -1.157892
5 -1.344312 -0.401419 -0.293543
6 -0.109050 1.643563 -0.293543
7 0.357021 -0.674600 -0.293543
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960
Same result as above, but this time aligning the ‘fill’ value, which is
a Series in this case.
In [56]: dff.where(pd.notna(dff), dff.mean(), axis="columns")
Out[56]:
A B C
0 0.271860 -0.424972 0.567020
1 0.276232 -1.087401 -0.673690
2 0.113648 -1.478427 0.524988
3 -0.140857 0.577046 -1.715002
4 -0.140857 -0.401419 -1.157892
5 -1.344312 -0.401419 -0.293543
6 -0.109050 1.643563 -0.293543
7 0.357021 -0.674600 -0.293543
8 -0.968914 -1.294524 0.413738
9 0.276662 -0.472035 -0.013960
Dropping axis labels with missing data: dropna#
You may wish to simply exclude labels from a data set which refer to missing
data. To do this, use dropna():
In [57]: df
Out[57]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e NaN 0.000000 0.000000
f NaN 0.000000 0.000000
h NaN -0.706771 -1.039575
In [58]: df.dropna(axis=0)
Out[58]:
Empty DataFrame
Columns: [one, two, three]
Index: []
In [59]: df.dropna(axis=1)
Out[59]:
two three
a -0.282863 -1.509059
c 1.212112 -0.173215
e 0.000000 0.000000
f 0.000000 0.000000
h -0.706771 -1.039575
In [60]: df["one"].dropna()
Out[60]: Series([], Name: one, dtype: float64)
An equivalent dropna() is available for Series.
DataFrame.dropna has considerably more options than Series.dropna, which can be
examined in the API.
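A hedged sketch of a few of those extra DataFrame.dropna options (invented data; see the API reference for the full list):
import numpy as np
import pandas as pd

drop_df = pd.DataFrame(
    {"one": [1.0, np.nan, np.nan], "two": [np.nan, np.nan, 3.0]}
)

drop_df.dropna(how="all")        # drop a row only if every value in it is NA
drop_df.dropna(thresh=1)         # keep rows with at least one non-NA value
drop_df.dropna(subset=["two"])   # consider only column "two" when deciding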
Interpolation#
Both Series and DataFrame objects have interpolate()
that, by default, performs linear interpolation at missing data points.
In [61]: ts
Out[61]:
2000-01-31 0.469112
2000-02-29 NaN
2000-03-31 NaN
2000-04-28 NaN
2000-05-31 NaN
...
2007-12-31 -6.950267
2008-01-31 -7.904475
2008-02-29 -6.441779
2008-03-31 -8.184940
2008-04-30 -9.011531
Freq: BM, Length: 100, dtype: float64
In [62]: ts.count()
Out[62]: 66
In [63]: ts.plot()
Out[63]: <AxesSubplot: >
In [64]: ts.interpolate()
Out[64]:
2000-01-31 0.469112
2000-02-29 0.434469
2000-03-31 0.399826
2000-04-28 0.365184
2000-05-31 0.330541
...
2007-12-31 -6.950267
2008-01-31 -7.904475
2008-02-29 -6.441779
2008-03-31 -8.184940
2008-04-30 -9.011531
Freq: BM, Length: 100, dtype: float64
In [65]: ts.interpolate().count()
Out[65]: 100
In [66]: ts.interpolate().plot()
Out[66]: <AxesSubplot: >
Index aware interpolation is available via the method keyword:
In [67]: ts2
Out[67]:
2000-01-31 0.469112
2000-02-29 NaN
2002-07-31 -5.785037
2005-01-31 NaN
2008-04-30 -9.011531
dtype: float64
In [68]: ts2.interpolate()
Out[68]:
2000-01-31 0.469112
2000-02-29 -2.657962
2002-07-31 -5.785037
2005-01-31 -7.398284
2008-04-30 -9.011531
dtype: float64
In [69]: ts2.interpolate(method="time")
Out[69]:
2000-01-31 0.469112
2000-02-29 0.270241
2002-07-31 -5.785037
2005-01-31 -7.190866
2008-04-30 -9.011531
dtype: float64
For a floating-point index, use method='values':
In [70]: ser
Out[70]:
0.0 0.0
1.0 NaN
10.0 10.0
dtype: float64
In [71]: ser.interpolate()
Out[71]:
0.0 0.0
1.0 5.0
10.0 10.0
dtype: float64
In [72]: ser.interpolate(method="values")
Out[72]:
0.0 0.0
1.0 1.0
10.0 10.0
dtype: float64
You can also interpolate with a DataFrame:
In [73]: df = pd.DataFrame(
....: {
....: "A": [1, 2.1, np.nan, 4.7, 5.6, 6.8],
....: "B": [0.25, np.nan, np.nan, 4, 12.2, 14.4],
....: }
....: )
....:
In [74]: df
Out[74]:
A B
0 1.0 0.25
1 2.1 NaN
2 NaN NaN
3 4.7 4.00
4 5.6 12.20
5 6.8 14.40
In [75]: df.interpolate()
Out[75]:
A B
0 1.0 0.25
1 2.1 1.50
2 3.4 2.75
3 4.7 4.00
4 5.6 12.20
5 6.8 14.40
The method argument gives access to fancier interpolation methods.
If you have scipy installed, you can pass the name of a 1-d interpolation routine to method.
You’ll want to consult the full scipy interpolation documentation and reference guide for details.
The appropriate interpolation method will depend on the type of data you are working with.
If you are dealing with a time series that is growing at an increasing rate,
method='quadratic' may be appropriate.
If you have values approximating a cumulative distribution function,
then method='pchip' should work well.
To fill missing values with goal of smooth plotting, consider method='akima'.
Warning
These methods require scipy.
In [76]: df.interpolate(method="barycentric")
Out[76]:
A B
0 1.00 0.250
1 2.10 -7.660
2 3.53 -4.515
3 4.70 4.000
4 5.60 12.200
5 6.80 14.400
In [77]: df.interpolate(method="pchip")
Out[77]:
A B
0 1.00000 0.250000
1 2.10000 0.672808
2 3.43454 1.928950
3 4.70000 4.000000
4 5.60000 12.200000
5 6.80000 14.400000
In [78]: df.interpolate(method="akima")
Out[78]:
A B
0 1.000000 0.250000
1 2.100000 -0.873316
2 3.406667 0.320034
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000
When interpolating via a polynomial or spline approximation, you must also specify
the degree or order of the approximation:
In [79]: df.interpolate(method="spline", order=2)
Out[79]:
A B
0 1.000000 0.250000
1 2.100000 -0.428598
2 3.404545 1.206900
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000
In [80]: df.interpolate(method="polynomial", order=2)
Out[80]:
A B
0 1.000000 0.250000
1 2.100000 -2.703846
2 3.451351 -1.453846
3 4.700000 4.000000
4 5.600000 12.200000
5 6.800000 14.400000
Compare several methods:
In [81]: np.random.seed(2)
In [82]: ser = pd.Series(np.arange(1, 10.1, 0.25) ** 2 + np.random.randn(37))
In [83]: missing = np.array([4, 13, 14, 15, 16, 17, 18, 20, 29])
In [84]: ser[missing] = np.nan
In [85]: methods = ["linear", "quadratic", "cubic"]
In [86]: df = pd.DataFrame({m: ser.interpolate(method=m) for m in methods})
In [87]: df.plot()
Out[87]: <AxesSubplot: >
Another use case is interpolation at new values.
Suppose you have 100 observations from some distribution. And let’s suppose
that you’re particularly interested in what’s happening around the middle.
You can mix pandas’ reindex and interpolate methods to interpolate
at the new values.
In [88]: ser = pd.Series(np.sort(np.random.uniform(size=100)))
# interpolate at new_index
In [89]: new_index = ser.index.union(pd.Index([49.25, 49.5, 49.75, 50.25, 50.5, 50.75]))
In [90]: interp_s = ser.reindex(new_index).interpolate(method="pchip")
In [91]: interp_s[49:51]
Out[91]:
49.00 0.471410
49.25 0.476841
49.50 0.481780
49.75 0.485998
50.00 0.489266
50.25 0.491814
50.50 0.493995
50.75 0.495763
51.00 0.497074
dtype: float64
Interpolation limits#
Like other pandas fill methods, interpolate() accepts a limit keyword
argument. Use this argument to limit the number of consecutive NaN values
filled since the last valid observation:
In [92]: ser = pd.Series([np.nan, np.nan, 5, np.nan, np.nan, np.nan, 13, np.nan, np.nan])
In [93]: ser
Out[93]:
0 NaN
1 NaN
2 5.0
3 NaN
4 NaN
5 NaN
6 13.0
7 NaN
8 NaN
dtype: float64
# fill all consecutive values in a forward direction
In [94]: ser.interpolate()
Out[94]:
0 NaN
1 NaN
2 5.0
3 7.0
4 9.0
5 11.0
6 13.0
7 13.0
8 13.0
dtype: float64
# fill one consecutive value in a forward direction
In [95]: ser.interpolate(limit=1)
Out[95]:
0 NaN
1 NaN
2 5.0
3 7.0
4 NaN
5 NaN
6 13.0
7 13.0
8 NaN
dtype: float64
By default, NaN values are filled in a forward direction. Use the
limit_direction parameter to fill backward or from both directions.
# fill one consecutive value backwards
In [96]: ser.interpolate(limit=1, limit_direction="backward")
Out[96]:
0 NaN
1 5.0
2 5.0
3 NaN
4 NaN
5 11.0
6 13.0
7 NaN
8 NaN
dtype: float64
# fill one consecutive value in both directions
In [97]: ser.interpolate(limit=1, limit_direction="both")
Out[97]:
0 NaN
1 5.0
2 5.0
3 7.0
4 NaN
5 11.0
6 13.0
7 13.0
8 NaN
dtype: float64
# fill all consecutive values in both directions
In [98]: ser.interpolate(limit_direction="both")
Out[98]:
0 5.0
1 5.0
2 5.0
3 7.0
4 9.0
5 11.0
6 13.0
7 13.0
8 13.0
dtype: float64
By default, NaN values are filled whether they are inside (surrounded by)
existing valid values, or outside existing valid values. The limit_area
parameter restricts filling to either inside or outside values.
# fill one consecutive inside value in both directions
In [99]: ser.interpolate(limit_direction="both", limit_area="inside", limit=1)
Out[99]:
0 NaN
1 NaN
2 5.0
3 7.0
4 NaN
5 11.0
6 13.0
7 NaN
8 NaN
dtype: float64
# fill all consecutive outside values backward
In [100]: ser.interpolate(limit_direction="backward", limit_area="outside")
Out[100]:
0 5.0
1 5.0
2 5.0
3 NaN
4 NaN
5 NaN
6 13.0
7 NaN
8 NaN
dtype: float64
# fill all consecutive outside values in both directions
In [101]: ser.interpolate(limit_direction="both", limit_area="outside")
Out[101]:
0 5.0
1 5.0
2 5.0
3 NaN
4 NaN
5 NaN
6 13.0
7 13.0
8 13.0
dtype: float64
Replacing generic values#
Oftentimes we want to replace arbitrary values with other values.
replace() in Series and replace() in DataFrame provides an efficient yet
flexible way to perform such replacements.
For a Series, you can replace a single value or a list of values by another
value:
In [102]: ser = pd.Series([0.0, 1.0, 2.0, 3.0, 4.0])
In [103]: ser.replace(0, 5)
Out[103]:
0 5.0
1 1.0
2 2.0
3 3.0
4 4.0
dtype: float64
You can replace a list of values by a list of other values:
In [104]: ser.replace([0, 1, 2, 3, 4], [4, 3, 2, 1, 0])
Out[104]:
0 4.0
1 3.0
2 2.0
3 1.0
4 0.0
dtype: float64
You can also specify a mapping dict:
In [105]: ser.replace({0: 10, 1: 100})
Out[105]:
0 10.0
1 100.0
2 2.0
3 3.0
4 4.0
dtype: float64
For a DataFrame, you can specify individual values by column:
In [106]: df = pd.DataFrame({"a": [0, 1, 2, 3, 4], "b": [5, 6, 7, 8, 9]})
In [107]: df.replace({"a": 0, "b": 5}, 100)
Out[107]:
a b
0 100 100
1 1 6
2 2 7
3 3 8
4 4 9
Instead of replacing with specified values, you can treat all given values as
missing and interpolate over them:
In [108]: ser.replace([1, 2, 3], method="pad")
Out[108]:
0 0.0
1 0.0
2 0.0
3 0.0
4 4.0
dtype: float64
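For this particular example, a rough equivalent sketch is to first replace the values with NaN and then forward fill:
# a sketch: replacing with NaN and forward filling gives the same result here
ser.replace([1, 2, 3], np.nan).ffill()   # 0.0, 0.0, 0.0, 0.0, 4.0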
String/regular expression replacement#
Note
Python strings prefixed with the r character such as r'hello world'
are so-called “raw” strings. They have different semantics regarding
backslashes than strings without this prefix: backslashes in raw strings are
kept as literal characters rather than treated as escapes (e.g., r'\n' is a
backslash followed by 'n', not a newline). You
should read about them
if this is unclear.
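A quick illustrative check of the difference:
# the backslash in a raw string is a literal character, not an escape
len(r"\n")  # 2 (a backslash and the letter 'n')
len("\n")   # 1 (a single newline character)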
Replace the ‘.’ with NaN (str -> str):
In [109]: d = {"a": list(range(4)), "b": list("ab.."), "c": ["a", "b", np.nan, "d"]}
In [110]: df = pd.DataFrame(d)
In [111]: df.replace(".", np.nan)
Out[111]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
Now do it with a regular expression that removes surrounding whitespace
(regex -> regex):
In [112]: df.replace(r"\s*\.\s*", np.nan, regex=True)
Out[112]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
Replace a few different values (list -> list):
In [113]: df.replace(["a", "."], ["b", np.nan])
Out[113]:
a b c
0 0 b b
1 1 b b
2 2 NaN NaN
3 3 NaN d
list of regex -> list of regex:
In [114]: df.replace([r"\.", r"(a)"], ["dot", r"\1stuff"], regex=True)
Out[114]:
a b c
0 0 astuff astuff
1 1 b b
2 2 dot NaN
3 3 dot d
Only search in column 'b' (dict -> dict):
In [115]: df.replace({"b": "."}, {"b": np.nan})
Out[115]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
Same as the previous example, but use a regular expression for
searching instead (dict of regex -> dict):
In [116]: df.replace({"b": r"\s*\.\s*"}, {"b": np.nan}, regex=True)
Out[116]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
You can pass nested dictionaries of regular expressions that use regex=True:
In [117]: df.replace({"b": {"b": r""}}, regex=True)
Out[117]:
a b c
0 0 a a
1 1 b
2 2 . NaN
3 3 . d
Alternatively, you can pass the nested dictionary like so:
In [118]: df.replace(regex={"b": {r"\s*\.\s*": np.nan}})
Out[118]:
a b c
0 0 a a
1 1 b b
2 2 NaN NaN
3 3 NaN d
You can also use the group of a regular expression match when replacing (dict
of regex -> dict of regex); this works for lists as well.
In [119]: df.replace({"b": r"\s*(\.)\s*"}, {"b": r"\1ty"}, regex=True)
Out[119]:
a b c
0 0 a a
1 1 b b
2 2 .ty NaN
3 3 .ty d
You can pass a list of regular expressions, of which those that match
will be replaced with a scalar (list of regex -> regex).
In [120]: df.replace([r"\s*\.\s*", r"a|b"], np.nan, regex=True)
Out[120]:
a b c
0 0 NaN NaN
1 1 NaN NaN
2 2 NaN NaN
3 3 NaN d
All of the regular expression examples can also be passed with the
to_replace argument as the regex argument. In this case the value
argument must be passed explicitly by name or regex must be a nested
dictionary. The previous example, in this case, would then be:
In [121]: df.replace(regex=[r"\s*\.\s*", r"a|b"], value=np.nan)
Out[121]:
a b c
0 0 NaN NaN
1 1 NaN NaN
2 2 NaN NaN
3 3 NaN d
This can be convenient if you do not want to pass regex=True every time you
want to use a regular expression.
Note
Anywhere in the above replace examples that you see a regular expression,
a compiled regular expression is valid as well.
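As a minimal sketch (reusing the df defined above, and assuming a compiled pattern is accepted wherever a pattern string is):
import re

pattern = re.compile(r"\s*\.\s*")        # pre-compiled pattern object
df.replace(regex=pattern, value=np.nan)  # same effect as passing the raw pattern string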
Numeric replacement#
replace() is similar to fillna().
In [122]: df = pd.DataFrame(np.random.randn(10, 2))
In [123]: df[np.random.rand(df.shape[0]) > 0.5] = 1.5
In [124]: df.replace(1.5, np.nan)
Out[124]:
0 1
0 -0.844214 -1.021415
1 0.432396 -0.323580
2 0.423825 0.799180
3 1.262614 0.751965
4 NaN NaN
5 NaN NaN
6 -0.498174 -1.060799
7 0.591667 -0.183257
8 1.019855 -1.482465
9 NaN NaN
Replacing more than one value is possible by passing a list.
In [125]: df00 = df.iloc[0, 0]
In [126]: df.replace([1.5, df00], [np.nan, "a"])
Out[126]:
0 1
0 a -1.021415
1 0.432396 -0.323580
2 0.423825 0.799180
3 1.262614 0.751965
4 NaN NaN
5 NaN NaN
6 -0.498174 -1.060799
7 0.591667 -0.183257
8 1.019855 -1.482465
9 NaN NaN
In [127]: df[1].dtype
Out[127]: dtype('float64')
You can also operate on the DataFrame in place:
In [128]: df.replace(1.5, np.nan, inplace=True)
Missing data casting rules and indexing#
While pandas supports storing arrays of integer and boolean type, these types
are not capable of storing missing data. Until we can switch to using a native
NA type in NumPy, we’ve established some “casting rules”. When a reindexing
operation introduces missing data, the Series will be cast according to the
rules introduced in the table below.
data type    Cast to
integer      float
boolean      object
float        no cast
object       no cast
For example:
In [129]: s = pd.Series(np.random.randn(5), index=[0, 2, 4, 6, 7])
In [130]: s > 0
Out[130]:
0 True
2 True
4 True
6 True
7 True
dtype: bool
In [131]: (s > 0).dtype
Out[131]: dtype('bool')
In [132]: crit = (s > 0).reindex(list(range(8)))
In [133]: crit
Out[133]:
0 True
1 NaN
2 True
3 NaN
4 True
5 NaN
6 True
7 True
dtype: object
In [134]: crit.dtype
Out[134]: dtype('O')
Ordinarily NumPy will complain if you try to use an object array (even if it
contains boolean values) instead of a boolean array to get or set values from
an ndarray (e.g. selecting values based on some criteria). If a boolean vector
contains NAs, an exception will be generated:
In [135]: reindexed = s.reindex(list(range(8))).fillna(0)
In [136]: reindexed[crit]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[136], line 1
----> 1 reindexed[crit]
File ~/work/pandas/pandas/pandas/core/series.py:1002, in Series.__getitem__(self, key)
999 if is_iterator(key):
1000 key = list(key)
-> 1002 if com.is_bool_indexer(key):
1003 key = check_bool_indexer(self.index, key)
1004 key = np.asarray(key, dtype=bool)
File ~/work/pandas/pandas/pandas/core/common.py:135, in is_bool_indexer(key)
131 na_msg = "Cannot mask with non-boolean array containing NA / NaN values"
132 if lib.infer_dtype(key_array) == "boolean" and isna(key_array).any():
133 # Don't raise on e.g. ["A", "B", np.nan], see
134 # test_loc_getitem_list_of_labels_categoricalindex_with_na
--> 135 raise ValueError(na_msg)
136 return False
137 return True
ValueError: Cannot mask with non-boolean array containing NA / NaN values
However, these can be filled in using fillna() and it will work fine:
In [137]: reindexed[crit.fillna(False)]
Out[137]:
0 0.126504
2 0.696198
4 0.697416
6 0.601516
7 0.003659
dtype: float64
In [138]: reindexed[crit.fillna(True)]
Out[138]:
0 0.126504
1 0.000000
2 0.696198
3 0.000000
4 0.697416
5 0.000000
6 0.601516
7 0.003659
dtype: float64
pandas provides a nullable integer dtype, but you must explicitly request it
when creating the series or column. Notice that we use a capital “I” in
the dtype="Int64".
In [139]: s = pd.Series([0, 1, np.nan, 3, 4], dtype="Int64")
In [140]: s
Out[140]:
0 0
1 1
2 <NA>
3 3
4 4
dtype: Int64
See Nullable integer data type for more.
Experimental NA scalar to denote missing values#
Warning
Experimental: the behaviour of pd.NA can still change without warning.
New in version 1.0.0.
Starting from pandas 1.0, an experimental pd.NA value (singleton) is
available to represent scalar missing values. At this moment, it is used in
the nullable integer, boolean and
dedicated string data types as the missing value indicator.
The goal of pd.NA is to provide a “missing” indicator that can be used
consistently across data types (instead of np.nan, None or pd.NaT
depending on the data type).
For example, when having missing values in a Series with the nullable integer
dtype, it will use pd.NA:
In [141]: s = pd.Series([1, 2, None], dtype="Int64")
In [142]: s
Out[142]:
0 1
1 2
2 <NA>
dtype: Int64
In [143]: s[2]
Out[143]: <NA>
In [144]: s[2] is pd.NA
Out[144]: True
Currently, pandas does not yet use those data types by default (when creating
a DataFrame or Series, or when reading in data), so you need to specify
the dtype explicitly. An easy way to convert to those dtypes is explained
here.
Propagation in arithmetic and comparison operations#
In general, missing values propagate in operations involving pd.NA. When
one of the operands is unknown, the outcome of the operation is also unknown.
For example, pd.NA propagates in arithmetic operations, similarly to
np.nan:
In [145]: pd.NA + 1
Out[145]: <NA>
In [146]: "a" * pd.NA
Out[146]: <NA>
There are a few special cases when the result is known, even when one of the
operands is NA.
In [147]: pd.NA ** 0
Out[147]: 1
In [148]: 1 ** pd.NA
Out[148]: 1
In equality and comparison operations, pd.NA also propagates. This deviates
from the behaviour of np.nan, where comparisons with np.nan always
return False.
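For contrast, a small check of the np.nan behaviour mentioned above:
# comparisons involving np.nan always return False
np.nan == 1       # False
np.nan == np.nan  # False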
In [149]: pd.NA == 1
Out[149]: <NA>
In [150]: pd.NA == pd.NA
Out[150]: <NA>
In [151]: pd.NA < 2.5
Out[151]: <NA>
To check if a value is equal to pd.NA, the isna() function can be
used:
In [152]: pd.isna(pd.NA)
Out[152]: True
An exception to this basic propagation rule is reductions (such as the
mean or the minimum), where pandas defaults to skipping missing values. See
above for more.
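A small sketch of such a reduction on a nullable integer Series:
s_int = pd.Series([1, 2, pd.NA], dtype="Int64")
s_int.mean()              # 1.5 -- the missing value is skipped by default
s_int.mean(skipna=False)  # <NA> -- propagates once skipping is disabled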
Logical operations#
For logical operations, pd.NA follows the rules of the
three-valued logic (or
Kleene logic, similarly to R, SQL and Julia). This logic means to only
propagate missing values when it is logically required.
For example, for the logical “or” operation (|), if one of the operands
is True, we already know the result will be True, regardless of the
other value (regardless of whether the missing value would be True or False).
In this case, pd.NA does not propagate:
In [153]: True | False
Out[153]: True
In [154]: True | pd.NA
Out[154]: True
In [155]: pd.NA | True
Out[155]: True
On the other hand, if one of the operands is False, the result depends
on the value of the other operand. Therefore, in this case pd.NA
propagates:
In [156]: False | True
Out[156]: True
In [157]: False | False
Out[157]: False
In [158]: False | pd.NA
Out[158]: <NA>
The behaviour of the logical “and” operation (&) can be derived using
similar logic (where now pd.NA will not propagate if one of the operands
is already False):
In [159]: False & True
Out[159]: False
In [160]: False & False
Out[160]: False
In [161]: False & pd.NA
Out[161]: False
In [162]: True & True
Out[162]: True
In [163]: True & False
Out[163]: False
In [164]: True & pd.NA
Out[164]: <NA>
NA in a boolean context#
Since the actual value of an NA is unknown, it is ambiguous to convert NA
to a boolean value. The following raises an error:
In [165]: bool(pd.NA)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[165], line 1
----> 1 bool(pd.NA)
File ~/work/pandas/pandas/pandas/_libs/missing.pyx:382, in pandas._libs.missing.NAType.__bool__()
TypeError: boolean value of NA is ambiguous
This also means that pd.NA cannot be used in a context where it is
evaluated to a boolean, such as if condition: ... where condition can
potentially be pd.NA. In such cases, isna() can be used to check
for pd.NA or condition being pd.NA can be avoided, for example by
filling missing values beforehand.
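A minimal sketch of the explicit check:
value = pd.NA
# bool(value) would raise TypeError; test for missingness explicitly instead
if pd.isna(value):
    value = 0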
A similar situation occurs when using Series or DataFrame objects in if
statements, see Using if/truth statements with pandas.
NumPy ufuncs#
pandas.NA implements NumPy’s __array_ufunc__ protocol. Most ufuncs
work with NA, and generally return NA:
In [166]: np.log(pd.NA)
Out[166]: <NA>
In [167]: np.add(pd.NA, 1)
Out[167]: <NA>
Warning
Currently, ufuncs involving an ndarray and NA will return an
object-dtype filled with NA values.
In [168]: a = np.array([1, 2, 3])
In [169]: np.greater(a, pd.NA)
Out[169]: array([<NA>, <NA>, <NA>], dtype=object)
The return type here may change to return a different array type
in the future.
See DataFrame interoperability with NumPy functions for more on ufuncs.
Conversion#
If you have a DataFrame or Series using traditional types that have missing data
represented using np.nan, there are convenience methods
convert_dtypes() in Series and convert_dtypes()
in DataFrame that can convert data to use the newer dtypes for integers, strings and
booleans listed here. This is especially helpful after reading
in data sets when letting the readers such as read_csv() and read_excel()
infer default dtypes.
In this example, while the dtypes of all columns are changed, we show the results for
the first 10 columns.
In [170]: bb = pd.read_csv("data/baseball.csv", index_col="id")
In [171]: bb[bb.columns[:10]].dtypes
Out[171]:
player object
year int64
stint int64
team object
lg object
g int64
ab int64
r int64
h int64
X2b int64
dtype: object
In [172]: bbn = bb.convert_dtypes()
In [173]: bbn[bbn.columns[:10]].dtypes
Out[173]:
player string
year Int64
stint Int64
team string
lg string
g Int64
ab Int64
r Int64
h Int64
X2b Int64
dtype: object
| 477
| 1,005
|
Trying to sum combine a whole lot of columns faster/easier... help appreciated
I'm trying to sum columns into groups of 30 (a month). Each column is a day, and there are almost 2,000 columns.
Each row is an individual product, and there are about 30,000 of them.
Below is what I am doing to sum them in Jupyter.
My question: is there an easier/faster way to do this without having to repeat what I did below over 60 more times?
Month1 = (df_sales["d_1"] + df_sales["d_2"] + df_sales["d_3"] + df_sales["d_4"] + df_sales["d_5"] + df_sales["d_6"] + df_sales["d_7"] + df_sales["d_8"] + df_sales["d_9"] + df_sales["d_10"]
+ df_sales["d_11"] + df_sales["d_12"] + df_sales["d_13"] + df_sales["d_14"] + df_sales["d_15"] + df_sales["d_16"] + df_sales["d_17"] + df_sales["d_18"] + df_sales["d_19"] + df_sales["d_20"]
+ df_sales["d_21"] + df_sales["d_22"] + df_sales["d_23"] + df_sales["d_24"] + df_sales["d_25"] + df_sales["d_26"] + df_sales["d_27"] + df_sales["d_28"] + df_sales["d_29"] + df_sales["d_30"])
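One possible sketch (not the asker's code; it assumes the day columns are named d_1, d_2, … and appear in chronological order in df_sales) groups the columns in blocks of 30 and sums each block:
# sum the day columns in consecutive blocks of 30 to get monthly totals
day_cols = [c for c in df_sales.columns if c.startswith("d_")]
months = {
    f"Month{i + 1}": df_sales[day_cols[i * 30:(i + 1) * 30]].sum(axis=1)
    for i in range(len(day_cols) // 30)
}
monthly_totals = pd.DataFrame(months)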
|
60,784,034
|
select dataframe value based on conditions
|
<p>I want to select the value in column <code>price</code> where column <code>type</code> equals P and column <code>timestamp</code> is closest to the current timestamp given by <code>ts</code>. Any help is appreciated.</p>
<p>input df <code>trade</code></p>
<pre><code> amount block_trade_id currency direction index_price instrument_name iv ... price strike tick_direction timestamp trade_id trade_seq type
0 0.2 NaN BTC buy 6107.34 BTC-21MAR20-6125-P 148.99 ... 0.0190 6125 0 1584748972666 42629952 21 P
0 7.1 NaN BTC sell 5428.75 BTC-26JUN20-8000-C 122.21 ... 0.1380 8000 0 1584608399553 42450837 221 C
0 1.0 NaN BTC sell 5743.13 BTC-25SEP20-15000-P 133.16 ... 1.5660 15000 2 1584736336172 42623548 993 P
0 0.6 NaN BTC buy 6185.00 BTC-25SEP20-9000-P 116.23 ... 0.5810 9000 2 1584729697095 42617591 2734 P
0 1.2 NaN BTC sell 6609.72 BTC-3APR20-7750-C 129.47 ... 0.0470 7750 1 1584717196991 42612192 3 C
</code></pre>
<p>my code:</p>
<pre><code>'''get current timestamp '''
ts = calendar.timegm(time.gmtime())
print(ts)
'''get current Future price'''
idx = trade['timestamp'].sub(ts).abs().idxmin()
fut_price = trade['price'].loc[(trade['type'].loc['P'])&(trade.loc[[idx]])]
</code></pre>
| 60,784,163
| 2020-03-21T02:54:27.680000
| 1
| null | 1
| 36
|
python|pandas
|
<p>Conditional selection is about building a <code>Boolean</code> mask and indexing with it:</p>
<pre><code>fut_price = trade['price'].loc[(trade['type']=='P')&(trade.index==idx)]
</code></pre>
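<p>A fuller sketch combining the pieces (assuming <code>trade</code> and <code>ts</code> are defined as in the question):</p>
<pre><code>idx = trade['timestamp'].sub(ts).abs().idxmin()       # row label closest to the current timestamp
mask = (trade['type'] == 'P') & (trade.index == idx)   # boolean mask over the rows
fut_price = trade.loc[mask, 'price']
</code></pre>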
| 2020-03-21T03:23:27.380000
| 0
|
https://pandas.pydata.org/docs/user_guide/indexing.html
|
Indexing and selecting data#
Indexing and selecting data#
The axis labeling information in pandas objects serves many purposes:
Identifies data (i.e. provides metadata) using known indicators,
important for analysis, visualization, and interactive console display.
Enables automatic and explicit data alignment.
Allows intuitive getting and setting of subsets of the data set.
In this section, we will focus on the final point: namely, how to slice, dice,
and generally get and set subsets of pandas objects. The primary focus will be
on Series and DataFrame as they have received more development attention in
this area.
Note
The Python and NumPy indexing operators [] and attribute operator .
provide quick and easy access to pandas data structures across a wide range
of use cases. This makes interactive work intuitive, as there’s little new
to learn if you already know how to deal with Python dictionaries and NumPy
arrays. However, since the type of the data to be accessed isn’t known in
Conditional selection is about building a Boolean mask and indexing with it:
fut_price = trade['price'].loc[(trade['type']=='P')&(trade.index==idx)]
advance, directly using standard operators has some optimization limits. For
production code, we recommended that you take advantage of the optimized
pandas data access methods exposed in this chapter.
Warning
Whether a copy or a reference is returned for a setting operation may
depend on the context. This is sometimes called chained assignment and
should be avoided. See Returning a View versus Copy.
See the MultiIndex / Advanced Indexing for MultiIndex and more advanced indexing documentation.
See the cookbook for some advanced strategies.
Different choices for indexing#
Object selection has had a number of user-requested additions in order to
support more explicit location based indexing. pandas now supports three types
of multi-axis indexing.
.loc is primarily label based, but may also be used with a boolean array. .loc will raise KeyError when the items are not found. Allowed inputs are:
A single label, e.g. 5 or 'a' (Note that 5 is interpreted as a
label of the index. This use is not an integer position along the
index.).
A list or array of labels ['a', 'b', 'c'].
A slice object with labels 'a':'f' (Note that contrary to usual Python
slices, both the start and the stop are included, when present in the
index! See Slicing with labels
and Endpoints are inclusive.)
A boolean array (any NA values will be treated as False).
A callable function with one argument (the calling Series or DataFrame) and
that returns valid output for indexing (one of the above).
See more at Selection by Label.
.iloc is primarily integer position based (from 0 to
length-1 of the axis), but may also be used with a boolean
array. .iloc will raise IndexError if a requested
indexer is out-of-bounds, except slice indexers which allow
out-of-bounds indexing. (this conforms with Python/NumPy slice
semantics). Allowed inputs are:
An integer e.g. 5.
A list or array of integers [4, 3, 0].
A slice object with ints 1:7.
A boolean array (any NA values will be treated as False).
A callable function with one argument (the calling Series or DataFrame) and
that returns valid output for indexing (one of the above).
See more at Selection by Position,
Advanced Indexing and Advanced
Hierarchical.
.loc, .iloc, and also [] indexing can accept a callable as indexer. See more at Selection By Callable.
Getting values from an object with multi-axes selection uses the following
notation (using .loc as an example, but the following applies to .iloc as
well). Any of the axes accessors may be the null slice :. Axes left out of
the specification are assumed to be :, e.g. p.loc['a'] is equivalent to
p.loc['a', :].
Object Type    Indexers
Series         s.loc[indexer]
DataFrame      df.loc[row_indexer,column_indexer]
Basics#
As mentioned when introducing the data structures in the last section, the primary function of indexing with [] (a.k.a. __getitem__
for those familiar with implementing class behavior in Python) is selecting out
lower-dimensional slices. The following table shows return type values when
indexing pandas objects with []:
Object Type    Selection         Return Value Type
Series         series[label]     scalar value
DataFrame      frame[colname]    Series corresponding to colname
Here we construct a simple time series data set to use for illustrating the
indexing functionality:
In [1]: dates = pd.date_range('1/1/2000', periods=8)
In [2]: df = pd.DataFrame(np.random.randn(8, 4),
...: index=dates, columns=['A', 'B', 'C', 'D'])
...:
In [3]: df
Out[3]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
Note
None of the indexing functionality is time series specific unless
specifically stated.
Thus, as per above, we have the most basic indexing using []:
In [4]: s = df['A']
In [5]: s[dates[5]]
Out[5]: -0.6736897080883706
You can pass a list of columns to [] to select columns in that order.
If a column is not contained in the DataFrame, an exception will be
raised. Multiple columns can also be set in this manner:
In [6]: df
Out[6]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
In [7]: df[['B', 'A']] = df[['A', 'B']]
In [8]: df
Out[8]:
A B C D
2000-01-01 -0.282863 0.469112 -1.509059 -1.135632
2000-01-02 -0.173215 1.212112 0.119209 -1.044236
2000-01-03 -2.104569 -0.861849 -0.494929 1.071804
2000-01-04 -0.706771 0.721555 -1.039575 0.271860
2000-01-05 0.567020 -0.424972 0.276232 -1.087401
2000-01-06 0.113648 -0.673690 -1.478427 0.524988
2000-01-07 0.577046 0.404705 -1.715002 -1.039268
2000-01-08 -1.157892 -0.370647 -1.344312 0.844885
You may find this useful for applying a transform (in-place) to a subset of the
columns.
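For instance, a small sketch of such a transform (done on a copy here so the frame above is left untouched):
# apply a transform to a subset of columns
dft = df.copy()
dft[['C', 'D']] = dft[['C', 'D']] / 2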
Warning
pandas aligns all AXES when setting Series and DataFrame from .loc and .iloc.
The following will not modify df because the column alignment happens before value assignment.
In [9]: df[['A', 'B']]
Out[9]:
A B
2000-01-01 -0.282863 0.469112
2000-01-02 -0.173215 1.212112
2000-01-03 -2.104569 -0.861849
2000-01-04 -0.706771 0.721555
2000-01-05 0.567020 -0.424972
2000-01-06 0.113648 -0.673690
2000-01-07 0.577046 0.404705
2000-01-08 -1.157892 -0.370647
In [10]: df.loc[:, ['B', 'A']] = df[['A', 'B']]
In [11]: df[['A', 'B']]
Out[11]:
A B
2000-01-01 -0.282863 0.469112
2000-01-02 -0.173215 1.212112
2000-01-03 -2.104569 -0.861849
2000-01-04 -0.706771 0.721555
2000-01-05 0.567020 -0.424972
2000-01-06 0.113648 -0.673690
2000-01-07 0.577046 0.404705
2000-01-08 -1.157892 -0.370647
The correct way to swap column values is by using raw values:
In [12]: df.loc[:, ['B', 'A']] = df[['A', 'B']].to_numpy()
In [13]: df[['A', 'B']]
Out[13]:
A B
2000-01-01 0.469112 -0.282863
2000-01-02 1.212112 -0.173215
2000-01-03 -0.861849 -2.104569
2000-01-04 0.721555 -0.706771
2000-01-05 -0.424972 0.567020
2000-01-06 -0.673690 0.113648
2000-01-07 0.404705 0.577046
2000-01-08 -0.370647 -1.157892
Attribute access#
You may access an index on a Series or column on a DataFrame directly
as an attribute:
In [14]: sa = pd.Series([1, 2, 3], index=list('abc'))
In [15]: dfa = df.copy()
In [16]: sa.b
Out[16]: 2
In [17]: dfa.A
Out[17]:
2000-01-01 0.469112
2000-01-02 1.212112
2000-01-03 -0.861849
2000-01-04 0.721555
2000-01-05 -0.424972
2000-01-06 -0.673690
2000-01-07 0.404705
2000-01-08 -0.370647
Freq: D, Name: A, dtype: float64
In [18]: sa.a = 5
In [19]: sa
Out[19]:
a 5
b 2
c 3
dtype: int64
In [20]: dfa.A = list(range(len(dfa.index))) # ok if A already exists
In [21]: dfa
Out[21]:
A B C D
2000-01-01 0 -0.282863 -1.509059 -1.135632
2000-01-02 1 -0.173215 0.119209 -1.044236
2000-01-03 2 -2.104569 -0.494929 1.071804
2000-01-04 3 -0.706771 -1.039575 0.271860
2000-01-05 4 0.567020 0.276232 -1.087401
2000-01-06 5 0.113648 -1.478427 0.524988
2000-01-07 6 0.577046 -1.715002 -1.039268
2000-01-08 7 -1.157892 -1.344312 0.844885
In [22]: dfa['A'] = list(range(len(dfa.index))) # use this form to create a new column
In [23]: dfa
Out[23]:
A B C D
2000-01-01 0 -0.282863 -1.509059 -1.135632
2000-01-02 1 -0.173215 0.119209 -1.044236
2000-01-03 2 -2.104569 -0.494929 1.071804
2000-01-04 3 -0.706771 -1.039575 0.271860
2000-01-05 4 0.567020 0.276232 -1.087401
2000-01-06 5 0.113648 -1.478427 0.524988
2000-01-07 6 0.577046 -1.715002 -1.039268
2000-01-08 7 -1.157892 -1.344312 0.844885
Warning
You can use this access only if the index element is a valid Python identifier, e.g. s.1 is not allowed.
See here for an explanation of valid identifiers.
The attribute will not be available if it conflicts with an existing method name, e.g. s.min is not allowed, but s['min'] is possible.
Similarly, the attribute will not be available if it conflicts with any of the following list: index,
major_axis, minor_axis, items.
In any of these cases, standard indexing will still work, e.g. s['1'], s['min'], and s['index'] will
access the corresponding element or column.
If you are using the IPython environment, you may also use tab-completion to
see these accessible attributes.
You can also assign a dict to a row of a DataFrame:
In [24]: x = pd.DataFrame({'x': [1, 2, 3], 'y': [3, 4, 5]})
In [25]: x.iloc[1] = {'x': 9, 'y': 99}
In [26]: x
Out[26]:
x y
0 1 3
1 9 99
2 3 5
You can use attribute access to modify an existing element of a Series or column of a DataFrame, but be careful;
if you try to use attribute access to create a new column, it creates a new attribute rather than a
new column. In 0.21.0 and later, this will raise a UserWarning:
In [1]: df = pd.DataFrame({'one': [1., 2., 3.]})
In [2]: df.two = [4, 5, 6]
UserWarning: Pandas doesn't allow Series to be assigned into nonexistent columns - see https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute_access
In [3]: df
Out[3]:
one
0 1.0
1 2.0
2 3.0
Slicing ranges#
The most robust and consistent way of slicing ranges along arbitrary axes is
described in the Selection by Position section
detailing the .iloc method. For now, we explain the semantics of slicing using the [] operator.
With Series, the syntax works exactly as with an ndarray, returning a slice of
the values and the corresponding labels:
In [27]: s[:5]
Out[27]:
2000-01-01 0.469112
2000-01-02 1.212112
2000-01-03 -0.861849
2000-01-04 0.721555
2000-01-05 -0.424972
Freq: D, Name: A, dtype: float64
In [28]: s[::2]
Out[28]:
2000-01-01 0.469112
2000-01-03 -0.861849
2000-01-05 -0.424972
2000-01-07 0.404705
Freq: 2D, Name: A, dtype: float64
In [29]: s[::-1]
Out[29]:
2000-01-08 -0.370647
2000-01-07 0.404705
2000-01-06 -0.673690
2000-01-05 -0.424972
2000-01-04 0.721555
2000-01-03 -0.861849
2000-01-02 1.212112
2000-01-01 0.469112
Freq: -1D, Name: A, dtype: float64
Note that setting works as well:
In [30]: s2 = s.copy()
In [31]: s2[:5] = 0
In [32]: s2
Out[32]:
2000-01-01 0.000000
2000-01-02 0.000000
2000-01-03 0.000000
2000-01-04 0.000000
2000-01-05 0.000000
2000-01-06 -0.673690
2000-01-07 0.404705
2000-01-08 -0.370647
Freq: D, Name: A, dtype: float64
With DataFrame, slicing inside of [] slices the rows. This is provided
largely as a convenience since it is such a common operation.
In [33]: df[:3]
Out[33]:
A B C D
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
In [34]: df[::-1]
Out[34]:
A B C D
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885
2000-01-07 0.404705 0.577046 -1.715002 -1.039268
2000-01-06 -0.673690 0.113648 -1.478427 0.524988
2000-01-05 -0.424972 0.567020 0.276232 -1.087401
2000-01-04 0.721555 -0.706771 -1.039575 0.271860
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804
2000-01-02 1.212112 -0.173215 0.119209 -1.044236
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632
Selection by label#
Warning
Whether a copy or a reference is returned for a setting operation, may depend on the context.
This is sometimes called chained assignment and should be avoided.
See Returning a View versus Copy.
Warning
.loc is strict when you present slicers that are not compatible (or convertible) with the index type. For example,
using integers in a DatetimeIndex will raise a TypeError.
In [35]: dfl = pd.DataFrame(np.random.randn(5, 4),
....: columns=list('ABCD'),
....: index=pd.date_range('20130101', periods=5))
....:
In [36]: dfl
Out[36]:
A B C D
2013-01-01 1.075770 -0.109050 1.643563 -1.469388
2013-01-02 0.357021 -0.674600 -1.776904 -0.968914
2013-01-03 -1.294524 0.413738 0.276662 -0.472035
2013-01-04 -0.013960 -0.362543 -0.006154 -0.923061
2013-01-05 0.895717 0.805244 -1.206412 2.565646
In [4]: dfl.loc[2:3]
TypeError: cannot do slice indexing on <class 'pandas.tseries.index.DatetimeIndex'> with these indexers [2] of <type 'int'>
String likes in slicing can be converted to the type of the index, leading to natural slicing.
In [37]: dfl.loc['20130102':'20130104']
Out[37]:
A B C D
2013-01-02 0.357021 -0.674600 -1.776904 -0.968914
2013-01-03 -1.294524 0.413738 0.276662 -0.472035
2013-01-04 -0.013960 -0.362543 -0.006154 -0.923061
Warning
Changed in version 1.0.0.
pandas will raise a KeyError if indexing with a list with missing labels. See list-like Using loc with
missing keys in a list is Deprecated.
pandas provides a suite of methods in order to have purely label based indexing. This is a strict inclusion based protocol.
Every label asked for must be in the index, or a KeyError will be raised.
When slicing, both the start bound AND the stop bound are included, if present in the index.
Integers are valid labels, but they refer to the label and not the position.
The .loc attribute is the primary access method. The following are valid inputs:
A single label, e.g. 5 or 'a' (Note that 5 is interpreted as a label of the index. This use is not an integer position along the index.).
A list or array of labels ['a', 'b', 'c'].
A slice object with labels 'a':'f' (Note that contrary to usual Python
slices, both the start and the stop are included, when present in the
index! See Slicing with labels.
A boolean array.
A callable, see Selection By Callable.
In [38]: s1 = pd.Series(np.random.randn(6), index=list('abcdef'))
In [39]: s1
Out[39]:
a 1.431256
b 1.340309
c -1.170299
d -0.226169
e 0.410835
f 0.813850
dtype: float64
In [40]: s1.loc['c':]
Out[40]:
c -1.170299
d -0.226169
e 0.410835
f 0.813850
dtype: float64
In [41]: s1.loc['b']
Out[41]: 1.3403088497993827
Note that setting works as well:
In [42]: s1.loc['c':] = 0
In [43]: s1
Out[43]:
a 1.431256
b 1.340309
c 0.000000
d 0.000000
e 0.000000
f 0.000000
dtype: float64
With a DataFrame:
In [44]: df1 = pd.DataFrame(np.random.randn(6, 4),
....: index=list('abcdef'),
....: columns=list('ABCD'))
....:
In [45]: df1
Out[45]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
b 1.130127 -1.436737 -1.413681 1.607920
c 1.024180 0.569605 0.875906 -2.211372
d 0.974466 -2.006747 -0.410001 -0.078638
e 0.545952 -1.219217 -1.226825 0.769804
f -1.281247 -0.727707 -0.121306 -0.097883
In [46]: df1.loc[['a', 'b', 'd'], :]
Out[46]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
b 1.130127 -1.436737 -1.413681 1.607920
d 0.974466 -2.006747 -0.410001 -0.078638
Accessing via label slices:
In [47]: df1.loc['d':, 'A':'C']
Out[47]:
A B C
d 0.974466 -2.006747 -0.410001
e 0.545952 -1.219217 -1.226825
f -1.281247 -0.727707 -0.121306
For getting a cross section using a label (equivalent to df.xs('a')):
In [48]: df1.loc['a']
Out[48]:
A 0.132003
B -0.827317
C -0.076467
D -1.187678
Name: a, dtype: float64
For getting values with a boolean array:
In [49]: df1.loc['a'] > 0
Out[49]:
A True
B False
C False
D False
Name: a, dtype: bool
In [50]: df1.loc[:, df1.loc['a'] > 0]
Out[50]:
A
a 0.132003
b 1.130127
c 1.024180
d 0.974466
e 0.545952
f -1.281247
NA values in a boolean array propagate as False:
Changed in version 1.0.2.
In [51]: mask = pd.array([True, False, True, False, pd.NA, False], dtype="boolean")
In [52]: mask
Out[52]:
<BooleanArray>
[True, False, True, False, <NA>, False]
Length: 6, dtype: boolean
In [53]: df1[mask]
Out[53]:
A B C D
a 0.132003 -0.827317 -0.076467 -1.187678
c 1.024180 0.569605 0.875906 -2.211372
For getting a value explicitly:
# this is also equivalent to ``df1.at['a','A']``
In [54]: df1.loc['a', 'A']
Out[54]: 0.13200317033032932
Slicing with labels#
When using .loc with slices, if both the start and the stop labels are
present in the index, then elements located between the two (including them)
are returned:
In [55]: s = pd.Series(list('abcde'), index=[0, 3, 2, 5, 4])
In [56]: s.loc[3:5]
Out[56]:
3 b
2 c
5 d
dtype: object
If at least one of the two is absent, but the index is sorted, and can be
compared against start and stop labels, then slicing will still work as
expected, by selecting labels which rank between the two:
In [57]: s.sort_index()
Out[57]:
0 a
2 c
3 b
4 e
5 d
dtype: object
In [58]: s.sort_index().loc[1:6]
Out[58]:
2 c
3 b
4 e
5 d
dtype: object
However, if at least one of the two is absent and the index is not sorted, an
error will be raised (since doing otherwise would be computationally expensive,
as well as potentially ambiguous for mixed type indexes). For instance, in the
above example, s.loc[1:6] would raise KeyError.
For the rationale behind this behavior, see
Endpoints are inclusive.
In [59]: s = pd.Series(list('abcdef'), index=[0, 3, 2, 5, 4, 2])
In [60]: s.loc[3:5]
Out[60]:
3 b
2 c
5 d
dtype: object
Also, if the index has duplicate labels and either the start or the stop label is duplicated,
an error will be raised. For instance, in the above example, s.loc[2:5] would raise a KeyError.
For more information about duplicate labels, see
Duplicate Labels.
Selection by position#
Warning
Whether a copy or a reference is returned for a setting operation may depend on the context.
This is sometimes called chained assignment and should be avoided.
See Returning a View versus Copy.
pandas provides a suite of methods in order to get purely integer based indexing. The semantics follow closely Python and NumPy slicing. These are 0-based indexing. When slicing, the start bound is included, while the upper bound is excluded. Trying to use a non-integer, even a valid label will raise an IndexError.
The .iloc attribute is the primary access method. The following are valid inputs:
An integer e.g. 5.
A list or array of integers [4, 3, 0].
A slice object with ints 1:7.
A boolean array.
A callable, see Selection By Callable.
In [61]: s1 = pd.Series(np.random.randn(5), index=list(range(0, 10, 2)))
In [62]: s1
Out[62]:
0 0.695775
2 0.341734
4 0.959726
6 -1.110336
8 -0.619976
dtype: float64
In [63]: s1.iloc[:3]
Out[63]:
0 0.695775
2 0.341734
4 0.959726
dtype: float64
In [64]: s1.iloc[3]
Out[64]: -1.110336102891167
Note that setting works as well:
In [65]: s1.iloc[:3] = 0
In [66]: s1
Out[66]:
0 0.000000
2 0.000000
4 0.000000
6 -1.110336
8 -0.619976
dtype: float64
With a DataFrame:
In [67]: df1 = pd.DataFrame(np.random.randn(6, 4),
....: index=list(range(0, 12, 2)),
....: columns=list(range(0, 8, 2)))
....:
In [68]: df1
Out[68]:
0 2 4 6
0 0.149748 -0.732339 0.687738 0.176444
2 0.403310 -0.154951 0.301624 -2.179861
4 -1.369849 -0.954208 1.462696 -1.743161
6 -0.826591 -0.345352 1.314232 0.690579
8 0.995761 2.396780 0.014871 3.357427
10 -0.317441 -1.236269 0.896171 -0.487602
Select via integer slicing:
In [69]: df1.iloc[:3]
Out[69]:
0 2 4 6
0 0.149748 -0.732339 0.687738 0.176444
2 0.403310 -0.154951 0.301624 -2.179861
4 -1.369849 -0.954208 1.462696 -1.743161
In [70]: df1.iloc[1:5, 2:4]
Out[70]:
4 6
2 0.301624 -2.179861
4 1.462696 -1.743161
6 1.314232 0.690579
8 0.014871 3.357427
Select via integer list:
In [71]: df1.iloc[[1, 3, 5], [1, 3]]
Out[71]:
2 6
2 -0.154951 -2.179861
6 -0.345352 0.690579
10 -1.236269 -0.487602
In [72]: df1.iloc[1:3, :]
Out[72]:
0 2 4 6
2 0.403310 -0.154951 0.301624 -2.179861
4 -1.369849 -0.954208 1.462696 -1.743161
In [73]: df1.iloc[:, 1:3]
Out[73]:
2 4
0 -0.732339 0.687738
2 -0.154951 0.301624
4 -0.954208 1.462696
6 -0.345352 1.314232
8 2.396780 0.014871
10 -1.236269 0.896171
# this is also equivalent to ``df1.iat[1,1]``
In [74]: df1.iloc[1, 1]
Out[74]: -0.1549507744249032
For getting a cross section using an integer position (equiv to df.xs(1)):
In [75]: df1.iloc[1]
Out[75]:
0 0.403310
2 -0.154951
4 0.301624
6 -2.179861
Name: 2, dtype: float64
Out of range slice indexes are handled gracefully just as in Python/NumPy.
# these are allowed in Python/NumPy.
In [76]: x = list('abcdef')
In [77]: x
Out[77]: ['a', 'b', 'c', 'd', 'e', 'f']
In [78]: x[4:10]
Out[78]: ['e', 'f']
In [79]: x[8:10]
Out[79]: []
In [80]: s = pd.Series(x)
In [81]: s
Out[81]:
0 a
1 b
2 c
3 d
4 e
5 f
dtype: object
In [82]: s.iloc[4:10]
Out[82]:
4 e
5 f
dtype: object
In [83]: s.iloc[8:10]
Out[83]: Series([], dtype: object)
Note that using slices that go out of bounds can result in
an empty axis (e.g. an empty DataFrame being returned).
In [84]: dfl = pd.DataFrame(np.random.randn(5, 2), columns=list('AB'))
In [85]: dfl
Out[85]:
A B
0 -0.082240 -2.182937
1 0.380396 0.084844
2 0.432390 1.519970
3 -0.493662 0.600178
4 0.274230 0.132885
In [86]: dfl.iloc[:, 2:3]
Out[86]:
Empty DataFrame
Columns: []
Index: [0, 1, 2, 3, 4]
In [87]: dfl.iloc[:, 1:3]
Out[87]:
B
0 -2.182937
1 0.084844
2 1.519970
3 0.600178
4 0.132885
In [88]: dfl.iloc[4:6]
Out[88]:
A B
4 0.27423 0.132885
A single indexer that is out of bounds will raise an IndexError.
A list of indexers where any element is out of bounds will raise an
IndexError.
>>> dfl.iloc[[4, 5, 6]]
IndexError: positional indexers are out-of-bounds
>>> dfl.iloc[:, 4]
IndexError: single positional indexer is out-of-bounds
Selection by callable#
.loc, .iloc, and also [] indexing can accept a callable as indexer.
The callable must be a function with one argument (the calling Series or DataFrame) that returns valid output for indexing.
In [89]: df1 = pd.DataFrame(np.random.randn(6, 4),
....: index=list('abcdef'),
....: columns=list('ABCD'))
....:
In [90]: df1
Out[90]:
A B C D
a -0.023688 2.410179 1.450520 0.206053
b -0.251905 -2.213588 1.063327 1.266143
c 0.299368 -0.863838 0.408204 -1.048089
d -0.025747 -0.988387 0.094055 1.262731
e 1.289997 0.082423 -0.055758 0.536580
f -0.489682 0.369374 -0.034571 -2.484478
In [91]: df1.loc[lambda df: df['A'] > 0, :]
Out[91]:
A B C D
c 0.299368 -0.863838 0.408204 -1.048089
e 1.289997 0.082423 -0.055758 0.536580
In [92]: df1.loc[:, lambda df: ['A', 'B']]
Out[92]:
A B
a -0.023688 2.410179
b -0.251905 -2.213588
c 0.299368 -0.863838
d -0.025747 -0.988387
e 1.289997 0.082423
f -0.489682 0.369374
In [93]: df1.iloc[:, lambda df: [0, 1]]
Out[93]:
A B
a -0.023688 2.410179
b -0.251905 -2.213588
c 0.299368 -0.863838
d -0.025747 -0.988387
e 1.289997 0.082423
f -0.489682 0.369374
In [94]: df1[lambda df: df.columns[0]]
Out[94]:
a -0.023688
b -0.251905
c 0.299368
d -0.025747
e 1.289997
f -0.489682
Name: A, dtype: float64
You can use callable indexing in Series.
In [95]: df1['A'].loc[lambda s: s > 0]
Out[95]:
c 0.299368
e 1.289997
Name: A, dtype: float64
Using these methods / indexers, you can chain data selection operations
without using a temporary variable.
In [96]: bb = pd.read_csv('data/baseball.csv', index_col='id')
In [97]: (bb.groupby(['year', 'team']).sum(numeric_only=True)
....: .loc[lambda df: df['r'] > 100])
....:
Out[97]:
stint g ab r h X2b ... so ibb hbp sh sf gidp
year team ...
2007 CIN 6 379 745 101 203 35 ... 127.0 14.0 1.0 1.0 15.0 18.0
DET 5 301 1062 162 283 54 ... 176.0 3.0 10.0 4.0 8.0 28.0
HOU 4 311 926 109 218 47 ... 212.0 3.0 9.0 16.0 6.0 17.0
LAN 11 413 1021 153 293 61 ... 141.0 8.0 9.0 3.0 8.0 29.0
NYN 13 622 1854 240 509 101 ... 310.0 24.0 23.0 18.0 15.0 48.0
SFN 5 482 1305 198 337 67 ... 188.0 51.0 8.0 16.0 6.0 41.0
TEX 2 198 729 115 200 40 ... 140.0 4.0 5.0 2.0 8.0 16.0
TOR 4 459 1408 187 378 96 ... 265.0 16.0 12.0 4.0 16.0 38.0
[8 rows x 18 columns]
Combining positional and label-based indexing#
If you wish to get the 0th and the 2nd elements from the index in the ‘A’ column, you can do:
In [98]: dfd = pd.DataFrame({'A': [1, 2, 3],
....: 'B': [4, 5, 6]},
....: index=list('abc'))
....:
In [99]: dfd
Out[99]:
A B
a 1 4
b 2 5
c 3 6
In [100]: dfd.loc[dfd.index[[0, 2]], 'A']
Out[100]:
a 1
c 3
Name: A, dtype: int64
This can also be expressed using .iloc, by explicitly getting locations on the indexers, and using
positional indexing to select things.
In [101]: dfd.iloc[[0, 2], dfd.columns.get_loc('A')]
Out[101]:
a 1
c 3
Name: A, dtype: int64
For getting multiple indexers, using .get_indexer:
In [102]: dfd.iloc[[0, 2], dfd.columns.get_indexer(['A', 'B'])]
Out[102]:
A B
a 1 4
c 3 6
Indexing with list with missing labels is deprecated#
Warning
Changed in version 1.0.0.
Using .loc or [] with a list with one or more missing labels will no longer reindex, in favor of .reindex.
In prior versions, using .loc[list-of-labels] would work as long as at least 1 of the keys was found (otherwise it
would raise a KeyError). This behavior was changed and will now raise a KeyError if at least one label is missing.
The recommended alternative is to use .reindex().
For example.
In [103]: s = pd.Series([1, 2, 3])
In [104]: s
Out[104]:
0 1
1 2
2 3
dtype: int64
Selection with all keys found is unchanged.
In [105]: s.loc[[1, 2]]
Out[105]:
1 2
2 3
dtype: int64
Previous behavior
In [4]: s.loc[[1, 2, 3]]
Out[4]:
1 2.0
2 3.0
3 NaN
dtype: float64
Current behavior
In [4]: s.loc[[1, 2, 3]]
Passing list-likes to .loc with any non-matching elements will raise
KeyError in the future, you can use .reindex() as an alternative.
See the documentation here:
https://pandas.pydata.org/pandas-docs/stable/indexing.html#deprecate-loc-reindex-listlike
Out[4]:
1 2.0
2 3.0
3 NaN
dtype: float64
Reindexing#
The idiomatic way to achieve selecting potentially not-found elements is via .reindex(). See also the section on reindexing.
In [106]: s.reindex([1, 2, 3])
Out[106]:
1 2.0
2 3.0
3 NaN
dtype: float64
Alternatively, if you want to select only valid keys, the following is idiomatic and efficient; it is guaranteed to preserve the dtype of the selection.
In [107]: labels = [1, 2, 3]
In [108]: s.loc[s.index.intersection(labels)]
Out[108]:
1 2
2 3
dtype: int64
Having a duplicated index will raise for a .reindex():
In [109]: s = pd.Series(np.arange(4), index=['a', 'a', 'b', 'c'])
In [110]: labels = ['c', 'd']
In [17]: s.reindex(labels)
ValueError: cannot reindex on an axis with duplicate labels
Generally, you can intersect the desired labels with the current
axis, and then reindex.
In [111]: s.loc[s.index.intersection(labels)].reindex(labels)
Out[111]:
c 3.0
d NaN
dtype: float64
However, this would still raise if your resulting index is duplicated.
In [41]: labels = ['a', 'd']
In [42]: s.loc[s.index.intersection(labels)].reindex(labels)
ValueError: cannot reindex on an axis with duplicate labels
Selecting random samples#
A random selection of rows or columns from a Series or DataFrame can be obtained with the sample() method. The method samples rows by default, and accepts a specific number of rows/columns to return, or a fraction of rows.
In [112]: s = pd.Series([0, 1, 2, 3, 4, 5])
# When no arguments are passed, returns 1 row.
In [113]: s.sample()
Out[113]:
4 4
dtype: int64
# One may specify either a number of rows:
In [114]: s.sample(n=3)
Out[114]:
0 0
4 4
1 1
dtype: int64
# Or a fraction of the rows:
In [115]: s.sample(frac=0.5)
Out[115]:
5 5
3 3
1 1
dtype: int64
By default, sample will return each row at most once, but one can also sample with replacement
using the replace option:
In [116]: s = pd.Series([0, 1, 2, 3, 4, 5])
# Without replacement (default):
In [117]: s.sample(n=6, replace=False)
Out[117]:
0 0
1 1
5 5
3 3
2 2
4 4
dtype: int64
# With replacement:
In [118]: s.sample(n=6, replace=True)
Out[118]:
0 0
4 4
3 3
2 2
4 4
4 4
dtype: int64
By default, each row has an equal probability of being selected, but if you want rows
to have different probabilities, you can pass the sample function sampling weights as
weights. These weights can be a list, a NumPy array, or a Series, but they must be of the same length as the object you are sampling. Missing values will be treated as a weight of zero, and inf values are not allowed. If weights do not sum to 1, they will be re-normalized by dividing all weights by the sum of the weights. For example:
In [119]: s = pd.Series([0, 1, 2, 3, 4, 5])
In [120]: example_weights = [0, 0, 0.2, 0.2, 0.2, 0.4]
In [121]: s.sample(n=3, weights=example_weights)
Out[121]:
5 5
4 4
3 3
dtype: int64
# Weights will be re-normalized automatically
In [122]: example_weights2 = [0.5, 0, 0, 0, 0, 0]
In [123]: s.sample(n=1, weights=example_weights2)
Out[123]:
0 0
dtype: int64
When applied to a DataFrame, you can use a column of the DataFrame as sampling weights
(provided you are sampling rows and not columns) by simply passing the name of the column
as a string.
In [124]: df2 = pd.DataFrame({'col1': [9, 8, 7, 6],
.....: 'weight_column': [0.5, 0.4, 0.1, 0]})
.....:
In [125]: df2.sample(n=3, weights='weight_column')
Out[125]:
col1 weight_column
1 8 0.4
0 9 0.5
2 7 0.1
sample also allows users to sample columns instead of rows using the axis argument.
In [126]: df3 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [2, 3, 4]})
In [127]: df3.sample(n=1, axis=1)
Out[127]:
col1
0 1
1 2
2 3
Finally, one can also set a seed for sample’s random number generator using the random_state argument, which will accept either an integer (as a seed) or a NumPy RandomState object.
In [128]: df4 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [2, 3, 4]})
# With a given seed, the sample will always draw the same rows.
In [129]: df4.sample(n=2, random_state=2)
Out[129]:
col1 col2
2 3 4
1 2 3
In [130]: df4.sample(n=2, random_state=2)
Out[130]:
col1 col2
2 3 4
1 2 3
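As a small sketch, a NumPy RandomState object can be passed instead of an integer seed:
rs = np.random.RandomState(1234)   # explicit random state object
df4.sample(n=2, random_state=rs)   # draws are controlled by rs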
Setting with enlargement#
The .loc/[] operations can perform enlargement when setting a non-existent key for that axis.
In the Series case this is effectively an appending operation.
In [131]: se = pd.Series([1, 2, 3])
In [132]: se
Out[132]:
0 1
1 2
2 3
dtype: int64
In [133]: se[5] = 5.
In [134]: se
Out[134]:
0 1.0
1 2.0
2 3.0
5 5.0
dtype: float64
A DataFrame can be enlarged on either axis via .loc.
In [135]: dfi = pd.DataFrame(np.arange(6).reshape(3, 2),
.....: columns=['A', 'B'])
.....:
In [136]: dfi
Out[136]:
A B
0 0 1
1 2 3
2 4 5
In [137]: dfi.loc[:, 'C'] = dfi.loc[:, 'A']
In [138]: dfi
Out[138]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
This is like an append operation on the DataFrame.
In [139]: dfi.loc[3] = 5
In [140]: dfi
Out[140]:
A B C
0 0 1 0
1 2 3 2
2 4 5 4
3 5 5 5
Fast scalar value getting and setting#
Since indexing with [] must handle a lot of cases (single-label access,
slicing, boolean indexing, etc.), it has a bit of overhead in order to figure
out what you’re asking for. If you only want to access a scalar value, the
fastest way is to use the at and iat methods, which are implemented on
all of the data structures.
Similarly to loc, at provides label-based scalar lookups, while iat provides integer-based lookups analogously to iloc.
In [141]: s.iat[5]
Out[141]: 5
In [142]: df.at[dates[5], 'A']
Out[142]: -0.6736897080883706
In [143]: df.iat[3, 0]
Out[143]: 0.7215551622443669
You can also set using these same indexers.
In [144]: df.at[dates[5], 'E'] = 7
In [145]: df.iat[3, 0] = 7
at may enlarge the object in-place as above if the indexer is missing.
In [146]: df.at[dates[-1] + pd.Timedelta('1 day'), 0] = 7
In [147]: df
Out[147]:
A B C D E 0
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632 NaN NaN
2000-01-02 1.212112 -0.173215 0.119209 -1.044236 NaN NaN
2000-01-03 -0.861849 -2.104569 -0.494929 1.071804 NaN NaN
2000-01-04 7.000000 -0.706771 -1.039575 0.271860 NaN NaN
2000-01-05 -0.424972 0.567020 0.276232 -1.087401 NaN NaN
2000-01-06 -0.673690 0.113648 -1.478427 0.524988 7.0 NaN
2000-01-07 0.404705 0.577046 -1.715002 -1.039268 NaN NaN
2000-01-08 -0.370647 -1.157892 -1.344312 0.844885 NaN NaN
2000-01-09 NaN NaN NaN NaN NaN 7.0
Boolean indexing#
Another common operation is the use of boolean vectors to filter the data.
The operators are: | for or, & for and, and ~ for not.
These must be grouped by using parentheses, since by default Python will
evaluate an expression such as df['A'] > 2 & df['B'] < 3 as
df['A'] > (2 & df['B']) < 3, while the desired evaluation order is
(df['A'] > 2) & (df['B'] < 3).
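A minimal sketch of the correctly grouped form, using the df defined above:
# parentheses force each comparison to be evaluated before combining with &
df[(df['A'] > 0) & (df['B'] < 0)]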
Using a boolean vector to index a Series works exactly as in a NumPy ndarray:
In [148]: s = pd.Series(range(-3, 4))
In [149]: s
Out[149]:
0 -3
1 -2
2 -1
3 0
4 1
5 2
6 3
dtype: int64
In [150]: s[s > 0]
Out[150]:
4 1
5 2
6 3
dtype: int64
In [151]: s[(s < -1) | (s > 0.5)]
Out[151]:
0 -3
1 -2
4 1
5 2
6 3
dtype: int64
In [152]: s[~(s < 0)]
Out[152]:
3 0
4 1
5 2
6 3
dtype: int64
You may select rows from a DataFrame using a boolean vector the same length as
the DataFrame’s index (for example, something derived from one of the columns
of the DataFrame):
In [153]: df[df['A'] > 0]
Out[153]:
A B C D E 0
2000-01-01 0.469112 -0.282863 -1.509059 -1.135632 NaN NaN
2000-01-02 1.212112 -0.173215 0.119209 -1.044236 NaN NaN
2000-01-04 7.000000 -0.706771 -1.039575 0.271860 NaN NaN
2000-01-07 0.404705 0.577046 -1.715002 -1.039268 NaN NaN
List comprehensions and the map method of Series can also be used to produce
more complex criteria:
In [154]: df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'three', 'two', 'one', 'six'],
.....: 'b': ['x', 'y', 'y', 'x', 'y', 'x', 'x'],
.....: 'c': np.random.randn(7)})
.....:
# only want 'two' or 'three'
In [155]: criterion = df2['a'].map(lambda x: x.startswith('t'))
In [156]: df2[criterion]
Out[156]:
a b c
2 two y 0.041290
3 three x 0.361719
4 two y -0.238075
# equivalent but slower
In [157]: df2[[x.startswith('t') for x in df2['a']]]
Out[157]:
a b c
2 two y 0.041290
3 three x 0.361719
4 two y -0.238075
# Multiple criteria
In [158]: df2[criterion & (df2['b'] == 'x')]
Out[158]:
a b c
3 three x 0.361719
With the choice methods Selection by Label, Selection by Position,
and Advanced Indexing you may select along more than one axis using boolean vectors combined with other indexing expressions.
In [159]: df2.loc[criterion & (df2['b'] == 'x'), 'b':'c']
Out[159]:
b c
3 x 0.361719
Warning
iloc supports two kinds of boolean indexing. If the indexer is a boolean Series,
an error will be raised. In the following example, df.iloc[s.values, 1] is fine
because the boolean indexer is an array, but df.iloc[s, 1] would raise a ValueError.
In [160]: df = pd.DataFrame([[1, 2], [3, 4], [5, 6]],
.....: index=list('abc'),
.....: columns=['A', 'B'])
.....:
In [161]: s = (df['A'] > 2)
In [162]: s
Out[162]:
a False
b True
c True
Name: A, dtype: bool
In [163]: df.loc[s, 'B']
Out[163]:
b 4
c 6
Name: B, dtype: int64
In [164]: df.iloc[s.values, 1]
Out[164]:
b 4
c 6
Name: B, dtype: int64
Indexing with isin#
Consider the isin() method of Series, which returns a boolean
vector that is true wherever the Series elements exist in the passed list.
This allows you to select rows where one or more columns have values you want:
In [165]: s = pd.Series(np.arange(5), index=np.arange(5)[::-1], dtype='int64')
In [166]: s
Out[166]:
4 0
3 1
2 2
1 3
0 4
dtype: int64
In [167]: s.isin([2, 4, 6])
Out[167]:
4 False
3 False
2 True
1 False
0 True
dtype: bool
In [168]: s[s.isin([2, 4, 6])]
Out[168]:
2 2
0 4
dtype: int64
The same method is available for Index objects and is useful for the cases
when you don’t know which of the sought labels are in fact present:
In [169]: s[s.index.isin([2, 4, 6])]
Out[169]:
4 0
2 2
dtype: int64
# compare it to the following
In [170]: s.reindex([2, 4, 6])
Out[170]:
2 2.0
4 0.0
6 NaN
dtype: float64
In addition to that, MultiIndex allows selecting a separate level to use
in the membership check:
In [171]: s_mi = pd.Series(np.arange(6),
.....: index=pd.MultiIndex.from_product([[0, 1], ['a', 'b', 'c']]))
.....:
In [172]: s_mi
Out[172]:
0 a 0
b 1
c 2
1 a 3
b 4
c 5
dtype: int64
In [173]: s_mi.iloc[s_mi.index.isin([(1, 'a'), (2, 'b'), (0, 'c')])]
Out[173]:
0 c 2
1 a 3
dtype: int64
In [174]: s_mi.iloc[s_mi.index.isin(['a', 'c', 'e'], level=1)]
Out[174]:
0 a 0
c 2
1 a 3
c 5
dtype: int64
DataFrame also has an isin() method. When calling isin, pass a set of
values as either an array or dict. If values is an array, isin returns
a DataFrame of booleans that is the same shape as the original DataFrame, with True
wherever the element is in the sequence of values.
In [175]: df = pd.DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'],
.....: 'ids2': ['a', 'n', 'c', 'n']})
.....:
In [176]: values = ['a', 'b', 1, 3]
In [177]: df.isin(values)
Out[177]:
vals ids ids2
0 True True True
1 False True False
2 True False False
3 False False False
Oftentimes you’ll want to match certain values with certain columns.
Just make values a dict where the key is the column, and the value is
a list of items you want to check for.
In [178]: values = {'ids': ['a', 'b'], 'vals': [1, 3]}
In [179]: df.isin(values)
Out[179]:
vals ids ids2
0 True True False
1 False True False
2 True False False
3 False False False
To return the DataFrame of booleans where the values are not in the original DataFrame,
use the ~ operator:
In [180]: values = {'ids': ['a', 'b'], 'vals': [1, 3]}
In [181]: ~df.isin(values)
Out[181]:
vals ids ids2
0 False False True
1 True False True
2 False True True
3 True True True
Combine DataFrame’s isin with the any() and all() methods to
quickly select subsets of your data that meet given criteria.
To select a row where each column meets its own criterion:
In [182]: values = {'ids': ['a', 'b'], 'ids2': ['a', 'c'], 'vals': [1, 3]}
In [183]: row_mask = df.isin(values).all(1)
In [184]: df[row_mask]
Out[184]:
vals ids ids2
0 1 a a
The where() Method and Masking#
Selecting values from a Series with a boolean vector generally returns a
subset of the data. To guarantee that selection output has the same shape as
the original data, you can use the where method in Series and DataFrame.
To return only the selected rows:
In [185]: s[s > 0]
Out[185]:
3 1
2 2
1 3
0 4
dtype: int64
To return a Series of the same shape as the original:
In [186]: s.where(s > 0)
Out[186]:
4 NaN
3 1.0
2 2.0
1 3.0
0 4.0
dtype: float64
Selecting values from a DataFrame with a boolean criterion now also preserves
input data shape. where is used under the hood as the implementation.
The code below is equivalent to df.where(df < 0).
In [187]: df[df < 0]
Out[187]:
A B C D
2000-01-01 -2.104139 -1.309525 NaN NaN
2000-01-02 -0.352480 NaN -1.192319 NaN
2000-01-03 -0.864883 NaN -0.227870 NaN
2000-01-04 NaN -1.222082 NaN -1.233203
2000-01-05 NaN -0.605656 -1.169184 NaN
2000-01-06 NaN -0.948458 NaN -0.684718
2000-01-07 -2.670153 -0.114722 NaN -0.048048
2000-01-08 NaN NaN -0.048788 -0.808838
In addition, where takes an optional other argument for replacement of
values where the condition is False, in the returned copy.
In [188]: df.where(df < 0, -df)
Out[188]:
A B C D
2000-01-01 -2.104139 -1.309525 -0.485855 -0.245166
2000-01-02 -0.352480 -0.390389 -1.192319 -1.655824
2000-01-03 -0.864883 -0.299674 -0.227870 -0.281059
2000-01-04 -0.846958 -1.222082 -0.600705 -1.233203
2000-01-05 -0.669692 -0.605656 -1.169184 -0.342416
2000-01-06 -0.868584 -0.948458 -2.297780 -0.684718
2000-01-07 -2.670153 -0.114722 -0.168904 -0.048048
2000-01-08 -0.801196 -1.392071 -0.048788 -0.808838
You may wish to set values based on some boolean criteria.
This can be done intuitively like so:
In [189]: s2 = s.copy()
In [190]: s2[s2 < 0] = 0
In [191]: s2
Out[191]:
4 0
3 1
2 2
1 3
0 4
dtype: int64
In [192]: df2 = df.copy()
In [193]: df2[df2 < 0] = 0
In [194]: df2
Out[194]:
A B C D
2000-01-01 0.000000 0.000000 0.485855 0.245166
2000-01-02 0.000000 0.390389 0.000000 1.655824
2000-01-03 0.000000 0.299674 0.000000 0.281059
2000-01-04 0.846958 0.000000 0.600705 0.000000
2000-01-05 0.669692 0.000000 0.000000 0.342416
2000-01-06 0.868584 0.000000 2.297780 0.000000
2000-01-07 0.000000 0.000000 0.168904 0.000000
2000-01-08 0.801196 1.392071 0.000000 0.000000
By default, where returns a modified copy of the data. There is an
optional parameter inplace so that the original data can be modified
without creating a copy:
In [195]: df_orig = df.copy()
In [196]: df_orig.where(df > 0, -df, inplace=True)
In [197]: df_orig
Out[197]:
A B C D
2000-01-01 2.104139 1.309525 0.485855 0.245166
2000-01-02 0.352480 0.390389 1.192319 1.655824
2000-01-03 0.864883 0.299674 0.227870 0.281059
2000-01-04 0.846958 1.222082 0.600705 1.233203
2000-01-05 0.669692 0.605656 1.169184 0.342416
2000-01-06 0.868584 0.948458 2.297780 0.684718
2000-01-07 2.670153 0.114722 0.168904 0.048048
2000-01-08 0.801196 1.392071 0.048788 0.808838
Note
The signature for DataFrame.where() differs from numpy.where().
Roughly df1.where(m, df2) is equivalent to np.where(m, df1, df2).
In [198]: df.where(df < 0, -df) == np.where(df < 0, df, -df)
Out[198]:
A B C D
2000-01-01 True True True True
2000-01-02 True True True True
2000-01-03 True True True True
2000-01-04 True True True True
2000-01-05 True True True True
2000-01-06 True True True True
2000-01-07 True True True True
2000-01-08 True True True True
Alignment
Furthermore, where aligns the input boolean condition (ndarray or DataFrame),
such that partial selection with setting is possible. This is analogous to
partial setting via .loc (but on the contents rather than the axis labels).
In [199]: df2 = df.copy()
In [200]: df2[df2[1:4] > 0] = 3
In [201]: df2
Out[201]:
A B C D
2000-01-01 -2.104139 -1.309525 0.485855 0.245166
2000-01-02 -0.352480 3.000000 -1.192319 3.000000
2000-01-03 -0.864883 3.000000 -0.227870 3.000000
2000-01-04 3.000000 -1.222082 3.000000 -1.233203
2000-01-05 0.669692 -0.605656 -1.169184 0.342416
2000-01-06 0.868584 -0.948458 2.297780 -0.684718
2000-01-07 -2.670153 -0.114722 0.168904 -0.048048
2000-01-08 0.801196 1.392071 -0.048788 -0.808838
where can also accept axis and level parameters to align the input when
performing the where operation.
In [202]: df2 = df.copy()
In [203]: df2.where(df2 > 0, df2['A'], axis='index')
Out[203]:
A B C D
2000-01-01 -2.104139 -2.104139 0.485855 0.245166
2000-01-02 -0.352480 0.390389 -0.352480 1.655824
2000-01-03 -0.864883 0.299674 -0.864883 0.281059
2000-01-04 0.846958 0.846958 0.600705 0.846958
2000-01-05 0.669692 0.669692 0.669692 0.342416
2000-01-06 0.868584 0.868584 2.297780 0.868584
2000-01-07 -2.670153 -2.670153 0.168904 -2.670153
2000-01-08 0.801196 1.392071 0.801196 0.801196
This is equivalent to (but faster than) the following.
In [204]: df2 = df.copy()
In [205]: df.apply(lambda x, y: x.where(x > 0, y), y=df['A'])
Out[205]:
A B C D
2000-01-01 -2.104139 -2.104139 0.485855 0.245166
2000-01-02 -0.352480 0.390389 -0.352480 1.655824
2000-01-03 -0.864883 0.299674 -0.864883 0.281059
2000-01-04 0.846958 0.846958 0.600705 0.846958
2000-01-05 0.669692 0.669692 0.669692 0.342416
2000-01-06 0.868584 0.868584 2.297780 0.868584
2000-01-07 -2.670153 -2.670153 0.168904 -2.670153
2000-01-08 0.801196 1.392071 0.801196 0.801196
where can accept a callable as condition and other arguments. The callable must
take exactly one argument (the calling Series or DataFrame) and return output that
is valid as the condition or other argument.
In [206]: df3 = pd.DataFrame({'A': [1, 2, 3],
.....: 'B': [4, 5, 6],
.....: 'C': [7, 8, 9]})
.....:
In [207]: df3.where(lambda x: x > 4, lambda x: x + 10)
Out[207]:
A B C
0 11 14 7
1 12 5 8
2 13 6 9
Mask#
mask() is the inverse boolean operation of where.
In [208]: s.mask(s >= 0)
Out[208]:
4 NaN
3 NaN
2 NaN
1 NaN
0 NaN
dtype: float64
In [209]: df.mask(df >= 0)
Out[209]:
A B C D
2000-01-01 -2.104139 -1.309525 NaN NaN
2000-01-02 -0.352480 NaN -1.192319 NaN
2000-01-03 -0.864883 NaN -0.227870 NaN
2000-01-04 NaN -1.222082 NaN -1.233203
2000-01-05 NaN -0.605656 -1.169184 NaN
2000-01-06 NaN -0.948458 NaN -0.684718
2000-01-07 -2.670153 -0.114722 NaN -0.048048
2000-01-08 NaN NaN -0.048788 -0.808838
Setting with enlargement conditionally using numpy()#
An alternative to where() is to use numpy.where().
Combined with setting a new column, you can use it to enlarge a DataFrame where the
values are determined conditionally.
Consider you have two choices to choose from in the following DataFrame. And you want to
set a new column color to ‘green’ when the second column has ‘Z’. You can do the
following:
In [210]: df = pd.DataFrame({'col1': list('ABBC'), 'col2': list('ZZXY')})
In [211]: df['color'] = np.where(df['col2'] == 'Z', 'green', 'red')
In [212]: df
Out[212]:
col1 col2 color
0 A Z green
1 B Z green
2 B X red
3 C Y red
If you have multiple conditions, you can use numpy.select() to achieve that. Say
that, corresponding to three conditions, there are three choices of colors, with a fourth color
as a fallback; you can do the following.
In [213]: conditions = [
.....: (df['col2'] == 'Z') & (df['col1'] == 'A'),
.....: (df['col2'] == 'Z') & (df['col1'] == 'B'),
.....: (df['col1'] == 'B')
.....: ]
.....:
In [214]: choices = ['yellow', 'blue', 'purple']
In [215]: df['color'] = np.select(conditions, choices, default='black')
In [216]: df
Out[216]:
col1 col2 color
0 A Z yellow
1 B Z blue
2 B X purple
3 C Y black
The query() Method#
DataFrame objects have a query()
method that allows selection using an expression.
You can get the value of the frame where column b has values
between the values of columns a and c. For example:
In [217]: n = 10
In [218]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))
In [219]: df
Out[219]:
a b c
0 0.438921 0.118680 0.863670
1 0.138138 0.577363 0.686602
2 0.595307 0.564592 0.520630
3 0.913052 0.926075 0.616184
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
6 0.792342 0.216974 0.564056
7 0.397890 0.454131 0.915716
8 0.074315 0.437913 0.019794
9 0.559209 0.502065 0.026437
# pure python
In [220]: df[(df['a'] < df['b']) & (df['b'] < df['c'])]
Out[220]:
a b c
1 0.138138 0.577363 0.686602
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
7 0.397890 0.454131 0.915716
# query
In [221]: df.query('(a < b) & (b < c)')
Out[221]:
a b c
1 0.138138 0.577363 0.686602
4 0.078718 0.854477 0.898725
5 0.076404 0.523211 0.591538
7 0.397890 0.454131 0.915716
Do the same thing but fall back on a named index if there is no column
with the name a.
In [222]: df = pd.DataFrame(np.random.randint(n / 2, size=(n, 2)), columns=list('bc'))
In [223]: df.index.name = 'a'
In [224]: df
Out[224]:
b c
a
0 0 4
1 0 1
2 3 4
3 4 3
4 1 4
5 0 3
6 0 1
7 3 4
8 2 3
9 1 1
In [225]: df.query('a < b and b < c')
Out[225]:
b c
a
2 3 4
If instead you don’t want to or cannot name your index, you can use the name
index in your query expression:
In [226]: df = pd.DataFrame(np.random.randint(n, size=(n, 2)), columns=list('bc'))
In [227]: df
Out[227]:
b c
0 3 1
1 3 0
2 5 6
3 5 2
4 7 4
5 0 1
6 2 5
7 0 1
8 6 0
9 7 9
In [228]: df.query('index < b < c')
Out[228]:
b c
2 5 6
Note
If the name of your index overlaps with a column name, the column name is
given precedence. For example,
In [229]: df = pd.DataFrame({'a': np.random.randint(5, size=5)})
In [230]: df.index.name = 'a'
In [231]: df.query('a > 2') # uses the column 'a', not the index
Out[231]:
a
a
1 3
3 3
You can still use the index in a query expression by using the special
identifier ‘index’:
In [232]: df.query('index > 2')
Out[232]:
a
a
3 3
4 2
If for some reason you have a column named index, then you can refer to
the index as ilevel_0 as well, but at this point you should consider
renaming your columns to something less ambiguous.
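As a quick, hedged sketch (the frame below is hypothetical and not part of the examples above), this is how column precedence and the ilevel_0 fallback interact when a column is literally named index:

import pandas as pd

df_amb = pd.DataFrame({'index': [10, 20, 30]},
                      index=pd.Index([0, 1, 2], name='index'))

df_amb.query('index > 10')     # resolves to the column named "index" (column names take precedence)
df_amb.query('ilevel_0 >= 1')  # ilevel_0 always refers to the 0th level of the index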
MultiIndex query() Syntax#
You can also use the levels of a DataFrame with a
MultiIndex as if they were columns in the frame:
In [233]: n = 10
In [234]: colors = np.random.choice(['red', 'green'], size=n)
In [235]: foods = np.random.choice(['eggs', 'ham'], size=n)
In [236]: colors
Out[236]:
array(['red', 'red', 'red', 'green', 'green', 'green', 'green', 'green',
'green', 'green'], dtype='<U5')
In [237]: foods
Out[237]:
array(['ham', 'ham', 'eggs', 'eggs', 'eggs', 'ham', 'ham', 'eggs', 'eggs',
'eggs'], dtype='<U4')
In [238]: index = pd.MultiIndex.from_arrays([colors, foods], names=['color', 'food'])
In [239]: df = pd.DataFrame(np.random.randn(n, 2), index=index)
In [240]: df
Out[240]:
0 1
color food
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
green eggs -0.748199 1.318931
eggs -2.029766 0.792652
ham 0.461007 -0.542749
ham -0.305384 -0.479195
eggs 0.095031 -0.270099
eggs -0.707140 -0.773882
eggs 0.229453 0.304418
In [241]: df.query('color == "red"')
Out[241]:
0 1
color food
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
If the levels of the MultiIndex are unnamed, you can refer to them using
special names:
In [242]: df.index.names = [None, None]
In [243]: df
Out[243]:
0 1
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
green eggs -0.748199 1.318931
eggs -2.029766 0.792652
ham 0.461007 -0.542749
ham -0.305384 -0.479195
eggs 0.095031 -0.270099
eggs -0.707140 -0.773882
eggs 0.229453 0.304418
In [244]: df.query('ilevel_0 == "red"')
Out[244]:
0 1
red ham 0.194889 -0.381994
ham 0.318587 2.089075
eggs -0.728293 -0.090255
The convention is ilevel_0, which means “index level 0” for the 0th level
of the index.
query() Use Cases#
A use case for query() is when you have a collection of
DataFrame objects that have a subset of column names (or index
levels/names) in common. You can pass the same query to both frames without
having to specify which frame you're interested in querying.
In [245]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))
In [246]: df
Out[246]:
a b c
0 0.224283 0.736107 0.139168
1 0.302827 0.657803 0.713897
2 0.611185 0.136624 0.984960
3 0.195246 0.123436 0.627712
4 0.618673 0.371660 0.047902
5 0.480088 0.062993 0.185760
6 0.568018 0.483467 0.445289
7 0.309040 0.274580 0.587101
8 0.258993 0.477769 0.370255
9 0.550459 0.840870 0.304611
In [247]: df2 = pd.DataFrame(np.random.rand(n + 2, 3), columns=df.columns)
In [248]: df2
Out[248]:
a b c
0 0.357579 0.229800 0.596001
1 0.309059 0.957923 0.965663
2 0.123102 0.336914 0.318616
3 0.526506 0.323321 0.860813
4 0.518736 0.486514 0.384724
5 0.190804 0.505723 0.614533
6 0.891939 0.623977 0.676639
7 0.480559 0.378528 0.460858
8 0.420223 0.136404 0.141295
9 0.732206 0.419540 0.604675
10 0.604466 0.848974 0.896165
11 0.589168 0.920046 0.732716
In [249]: expr = '0.0 <= a <= c <= 0.5'
In [250]: map(lambda frame: frame.query(expr), [df, df2])
Out[250]: <map at 0x7f1ea0d8e580>
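Note that in Python 3, map is lazy, so Out[250] above is just an unevaluated map object. A small sketch of materializing the results, reusing the expr and frames defined above:

results = list(map(lambda frame: frame.query(expr), [df, df2]))
# or, equivalently and a little more idiomatically
results = [frame.query(expr) for frame in (df, df2)]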
query() Python versus pandas Syntax Comparison#
Full numpy-like syntax:
In [251]: df = pd.DataFrame(np.random.randint(n, size=(n, 3)), columns=list('abc'))
In [252]: df
Out[252]:
a b c
0 7 8 9
1 1 0 7
2 2 7 2
3 6 2 2
4 2 6 3
5 3 8 2
6 1 7 2
7 5 1 5
8 9 8 0
9 1 5 0
In [253]: df.query('(a < b) & (b < c)')
Out[253]:
a b c
0 7 8 9
In [254]: df[(df['a'] < df['b']) & (df['b'] < df['c'])]
Out[254]:
a b c
0 7 8 9
Slightly nicer by removing the parentheses (comparison operators bind tighter
than & and |):
In [255]: df.query('a < b & b < c')
Out[255]:
a b c
0 7 8 9
Use English instead of symbols:
In [256]: df.query('a < b and b < c')
Out[256]:
a b c
0 7 8 9
Pretty close to how you might write it on paper:
In [257]: df.query('a < b < c')
Out[257]:
a b c
0 7 8 9
The in and not in operators#
query() also supports special use of Python’s in and
not in comparison operators, providing a succinct syntax for calling the
isin method of a Series or DataFrame.
# get all rows where columns "a" and "b" have overlapping values
In [258]: df = pd.DataFrame({'a': list('aabbccddeeff'), 'b': list('aaaabbbbcccc'),
.....: 'c': np.random.randint(5, size=12),
.....: 'd': np.random.randint(9, size=12)})
.....:
In [259]: df
Out[259]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
In [260]: df.query('a in b')
Out[260]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
# How you'd do it in pure Python
In [261]: df[df['a'].isin(df['b'])]
Out[261]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
In [262]: df.query('a not in b')
Out[262]:
a b c d
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
# pure Python
In [263]: df[~df['a'].isin(df['b'])]
Out[263]:
a b c d
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
You can combine this with other expressions for very succinct queries:
# rows where cols a and b have overlapping values
# and col c's values are less than col d's
In [264]: df.query('a in b and c < d')
Out[264]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
4 c b 3 6
5 c b 0 2
# pure Python
In [265]: df[df['a'].isin(df['b']) & (df['c'] < df['d'])]
Out[265]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
4 c b 3 6
5 c b 0 2
Note
Note that in and not in are evaluated in Python, since numexpr
has no equivalent of this operation. However, only the in/not in
expression itself is evaluated in vanilla Python. For example, in the
expression
df.query('a in b + c + d')
(b + c + d) is evaluated by numexpr and then the in
operation is evaluated in plain Python. In general, any operations that can
be evaluated using numexpr will be.
Special use of the == operator with list objects#
Comparing a list of values to a column using ==/!= works similarly
to in/not in.
In [266]: df.query('b == ["a", "b", "c"]')
Out[266]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
# pure Python
In [267]: df[df['b'].isin(["a", "b", "c"])]
Out[267]:
a b c d
0 a a 2 6
1 a a 4 7
2 b a 1 6
3 b a 2 1
4 c b 3 6
5 c b 0 2
6 d b 3 3
7 d b 2 1
8 e c 4 3
9 e c 2 0
10 f c 0 6
11 f c 1 2
In [268]: df.query('c == [1, 2]')
Out[268]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2
In [269]: df.query('c != [1, 2]')
Out[269]:
a b c d
1 a a 4 7
4 c b 3 6
5 c b 0 2
6 d b 3 3
8 e c 4 3
10 f c 0 6
# using in/not in
In [270]: df.query('[1, 2] in c')
Out[270]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2
In [271]: df.query('[1, 2] not in c')
Out[271]:
a b c d
1 a a 4 7
4 c b 3 6
5 c b 0 2
6 d b 3 3
8 e c 4 3
10 f c 0 6
# pure Python
In [272]: df[df['c'].isin([1, 2])]
Out[272]:
a b c d
0 a a 2 6
2 b a 1 6
3 b a 2 1
7 d b 2 1
9 e c 2 0
11 f c 1 2
Boolean operators#
You can negate boolean expressions with the word not or the ~ operator.
In [273]: df = pd.DataFrame(np.random.rand(n, 3), columns=list('abc'))
In [274]: df['bools'] = np.random.rand(len(df)) > 0.5
In [275]: df.query('~bools')
Out[275]:
a b c bools
2 0.697753 0.212799 0.329209 False
7 0.275396 0.691034 0.826619 False
8 0.190649 0.558748 0.262467 False
In [276]: df.query('not bools')
Out[276]:
a b c bools
2 0.697753 0.212799 0.329209 False
7 0.275396 0.691034 0.826619 False
8 0.190649 0.558748 0.262467 False
In [277]: df.query('not bools') == df[~df['bools']]
Out[277]:
a b c bools
2 True True True True
7 True True True True
8 True True True True
Of course, expressions can be arbitrarily complex too:
# short query syntax
In [278]: shorter = df.query('a < b < c and (not bools) or bools > 2')
# equivalent in pure Python
In [279]: longer = df[(df['a'] < df['b'])
.....: & (df['b'] < df['c'])
.....: & (~df['bools'])
.....: | (df['bools'] > 2)]
.....:
In [280]: shorter
Out[280]:
a b c bools
7 0.275396 0.691034 0.826619 False
In [281]: longer
Out[281]:
a b c bools
7 0.275396 0.691034 0.826619 False
In [282]: shorter == longer
Out[282]:
a b c bools
7 True True True True
Performance of query()#
DataFrame.query() using numexpr is slightly faster than Python for
large frames.
Note
You will only see the performance benefits of using the numexpr engine
with DataFrame.query() if your frame has more than approximately 200,000
rows.
[Performance plot omitted: DataFrame.query() with the numexpr engine versus pure Python, measured on a DataFrame with 3 columns each containing floating point values generated using numpy.random.randn().]
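If you want to check this yourself, query() takes an engine argument ('numexpr' or 'python'). The sketch below is only illustrative; the crossover point depends on your machine and frame size.

import numpy as np
import pandas as pd

big = pd.DataFrame(np.random.randn(1_000_000, 3), columns=list('abc'))

big.query('a < b < c', engine='numexpr')  # default when numexpr is installed
big.query('a < b < c', engine='python')   # pure-Python baseline
# wrap each call in %timeit (IPython) or timeit.timeit to compare them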
Duplicate data#
If you want to identify and remove duplicate rows in a DataFrame, there are
two methods that will help: duplicated and drop_duplicates. Each
takes as an argument the columns to use to identify duplicated rows.
duplicated returns a boolean vector whose length is the number of rows, and which indicates whether a row is duplicated.
drop_duplicates removes duplicate rows.
By default, the first observed row of a duplicate set is considered unique, but
each method has a keep parameter to specify targets to be kept.
keep='first' (default): mark / drop duplicates except for the first occurrence.
keep='last': mark / drop duplicates except for the last occurrence.
keep=False: mark / drop all duplicates.
In [283]: df2 = pd.DataFrame({'a': ['one', 'one', 'two', 'two', 'two', 'three', 'four'],
.....: 'b': ['x', 'y', 'x', 'y', 'x', 'x', 'x'],
.....: 'c': np.random.randn(7)})
.....:
In [284]: df2
Out[284]:
a b c
0 one x -1.067137
1 one y 0.309500
2 two x -0.211056
3 two y -1.842023
4 two x -0.390820
5 three x -1.964475
6 four x 1.298329
In [285]: df2.duplicated('a')
Out[285]:
0 False
1 True
2 False
3 True
4 True
5 False
6 False
dtype: bool
In [286]: df2.duplicated('a', keep='last')
Out[286]:
0 True
1 False
2 True
3 True
4 False
5 False
6 False
dtype: bool
In [287]: df2.duplicated('a', keep=False)
Out[287]:
0 True
1 True
2 True
3 True
4 True
5 False
6 False
dtype: bool
In [288]: df2.drop_duplicates('a')
Out[288]:
a b c
0 one x -1.067137
2 two x -0.211056
5 three x -1.964475
6 four x 1.298329
In [289]: df2.drop_duplicates('a', keep='last')
Out[289]:
a b c
1 one y 0.309500
4 two x -0.390820
5 three x -1.964475
6 four x 1.298329
In [290]: df2.drop_duplicates('a', keep=False)
Out[290]:
a b c
5 three x -1.964475
6 four x 1.298329
Also, you can pass a list of columns to identify duplications.
In [291]: df2.duplicated(['a', 'b'])
Out[291]:
0 False
1 False
2 False
3 False
4 True
5 False
6 False
dtype: bool
In [292]: df2.drop_duplicates(['a', 'b'])
Out[292]:
a b c
0 one x -1.067137
1 one y 0.309500
2 two x -0.211056
3 two y -1.842023
5 three x -1.964475
6 four x 1.298329
To drop duplicates by index value, use Index.duplicated then perform slicing.
The same set of options is available for the keep parameter.
In [293]: df3 = pd.DataFrame({'a': np.arange(6),
.....: 'b': np.random.randn(6)},
.....: index=['a', 'a', 'b', 'c', 'b', 'a'])
.....:
In [294]: df3
Out[294]:
a b
a 0 1.440455
a 1 2.456086
b 2 1.038402
c 3 -0.894409
b 4 0.683536
a 5 3.082764
In [295]: df3.index.duplicated()
Out[295]: array([False, True, False, False, True, True])
In [296]: df3[~df3.index.duplicated()]
Out[296]:
a b
a 0 1.440455
b 2 1.038402
c 3 -0.894409
In [297]: df3[~df3.index.duplicated(keep='last')]
Out[297]:
a b
c 3 -0.894409
b 4 0.683536
a 5 3.082764
In [298]: df3[~df3.index.duplicated(keep=False)]
Out[298]:
a b
c 3 -0.894409
Dictionary-like get() method#
Each of Series or DataFrame have a get method which can return a
default value.
In [299]: s = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
In [300]: s.get('a') # equivalent to s['a']
Out[300]: 1
In [301]: s.get('x', default=-1)
Out[301]: -1
Looking up values by index/column labels#
Sometimes you want to extract a set of values given a sequence of row labels
and column labels; this can be achieved with pandas.factorize and NumPy indexing.
For instance:
In [302]: df = pd.DataFrame({'col': ["A", "A", "B", "B"],
.....: 'A': [80, 23, np.nan, 22],
.....: 'B': [80, 55, 76, 67]})
.....:
In [303]: df
Out[303]:
col A B
0 A 80.0 80
1 A 23.0 55
2 B NaN 76
3 B 22.0 67
In [304]: idx, cols = pd.factorize(df['col'])
In [305]: df.reindex(cols, axis=1).to_numpy()[np.arange(len(df)), idx]
Out[305]: array([80., 23., 76., 67.])
Formerly this could be achieved with the dedicated DataFrame.lookup method
which was deprecated in version 1.2.0.
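If you need a drop-in replacement for the removed lookup, a small helper (not part of pandas) built on Index.get_indexer returns the same values as the factorize example above:

def lookup(frame, row_labels, col_labels):
    # positional row/column indices for each label pair
    rows = frame.index.get_indexer(row_labels)
    cols = frame.columns.get_indexer(col_labels)
    return frame.to_numpy()[rows, cols]

lookup(df, df.index, df['col'])  # same values as Out[305] above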
Index objects#
The pandas Index class and its subclasses can be viewed as
implementing an ordered multiset. Duplicates are allowed. However, if you try
to convert an Index object with duplicate entries into a
set, an exception will be raised.
Index also provides the infrastructure necessary for
lookups, data alignment, and reindexing. The easiest way to create an
Index directly is to pass a list or other sequence to
Index:
In [306]: index = pd.Index(['e', 'd', 'a', 'b'])
In [307]: index
Out[307]: Index(['e', 'd', 'a', 'b'], dtype='object')
In [308]: 'd' in index
Out[308]: True
You can also pass a name to be stored in the index:
In [309]: index = pd.Index(['e', 'd', 'a', 'b'], name='something')
In [310]: index.name
Out[310]: 'something'
The name, if set, will be shown in the console display:
In [311]: index = pd.Index(list(range(5)), name='rows')
In [312]: columns = pd.Index(['A', 'B', 'C'], name='cols')
In [313]: df = pd.DataFrame(np.random.randn(5, 3), index=index, columns=columns)
In [314]: df
Out[314]:
cols A B C
rows
0 1.295989 -1.051694 1.340429
1 -2.366110 0.428241 0.387275
2 0.433306 0.929548 0.278094
3 2.154730 -0.315628 0.264223
4 1.126818 1.132290 -0.353310
In [315]: df['A']
Out[315]:
rows
0 1.295989
1 -2.366110
2 0.433306
3 2.154730
4 1.126818
Name: A, dtype: float64
Setting metadata#
Indexes are "mostly immutable", but it is possible to set and change their
name attribute. You can use the rename and set_names methods to set these attributes
directly; they default to returning a copy.
See Advanced Indexing for usage of MultiIndexes.
In [316]: ind = pd.Index([1, 2, 3])
In [317]: ind.rename("apple")
Out[317]: Int64Index([1, 2, 3], dtype='int64', name='apple')
In [318]: ind
Out[318]: Int64Index([1, 2, 3], dtype='int64')
In [319]: ind.set_names(["apple"], inplace=True)
In [320]: ind.name = "bob"
In [321]: ind
Out[321]: Int64Index([1, 2, 3], dtype='int64', name='bob')
set_names, set_levels, and set_codes also take an optional
level argument
In [322]: index = pd.MultiIndex.from_product([range(3), ['one', 'two']], names=['first', 'second'])
In [323]: index
Out[323]:
MultiIndex([(0, 'one'),
(0, 'two'),
(1, 'one'),
(1, 'two'),
(2, 'one'),
(2, 'two')],
names=['first', 'second'])
In [324]: index.levels[1]
Out[324]: Index(['one', 'two'], dtype='object', name='second')
In [325]: index.set_levels(["a", "b"], level=1)
Out[325]:
MultiIndex([(0, 'a'),
(0, 'b'),
(1, 'a'),
(1, 'b'),
(2, 'a'),
(2, 'b')],
names=['first', 'second'])
Set operations on Index objects#
The two main operations are union and intersection.
Difference is provided via the .difference() method.
In [326]: a = pd.Index(['c', 'b', 'a'])
In [327]: b = pd.Index(['c', 'e', 'd'])
In [328]: a.difference(b)
Out[328]: Index(['a', 'b'], dtype='object')
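The union and intersection operations mentioned above work the same way; a quick sketch reusing a and b from the example (the results below follow from the sorting note further down):

a.union(b)         # Index(['a', 'b', 'c', 'd', 'e'], dtype='object')
a.intersection(b)  # Index(['c'], dtype='object')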
Also available is the symmetric_difference operation, which returns elements
that appear in either idx1 or idx2, but not in both. This is
equivalent to the Index created by idx1.difference(idx2).union(idx2.difference(idx1)),
with duplicates dropped.
In [329]: idx1 = pd.Index([1, 2, 3, 4])
In [330]: idx2 = pd.Index([2, 3, 4, 5])
In [331]: idx1.symmetric_difference(idx2)
Out[331]: Int64Index([1, 5], dtype='int64')
Note
The resulting index from a set operation will be sorted in ascending order.
When performing Index.union() between indexes with different dtypes, the indexes
must be cast to a common dtype. Typically, though not always, this is object dtype. The
exception is when performing a union between integer and float data. In this case, the
integer values are converted to float.
In [332]: idx1 = pd.Index([0, 1, 2])
In [333]: idx2 = pd.Index([0.5, 1.5])
In [334]: idx1.union(idx2)
Out[334]: Float64Index([0.0, 0.5, 1.0, 1.5, 2.0], dtype='float64')
Missing values#
Important
Even though Index can hold missing values (NaN), it should be avoided
if you do not want any unexpected results. For example, some operations
exclude missing values implicitly.
Index.fillna fills missing values with specified scalar value.
In [335]: idx1 = pd.Index([1, np.nan, 3, 4])
In [336]: idx1
Out[336]: Float64Index([1.0, nan, 3.0, 4.0], dtype='float64')
In [337]: idx1.fillna(2)
Out[337]: Float64Index([1.0, 2.0, 3.0, 4.0], dtype='float64')
In [338]: idx2 = pd.DatetimeIndex([pd.Timestamp('2011-01-01'),
.....: pd.NaT,
.....: pd.Timestamp('2011-01-03')])
.....:
In [339]: idx2
Out[339]: DatetimeIndex(['2011-01-01', 'NaT', '2011-01-03'], dtype='datetime64[ns]', freq=None)
In [340]: idx2.fillna(pd.Timestamp('2011-01-02'))
Out[340]: DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'], dtype='datetime64[ns]', freq=None)
Set / reset index#
Occasionally you will load or create a data set into a DataFrame and want to
add an index after you’ve already done so. There are a couple of different
ways.
Set an index#
DataFrame has a set_index() method which takes a column name
(for a regular Index) or a list of column names (for a MultiIndex).
To create a new, re-indexed DataFrame:
In [341]: data
Out[341]:
a b c d
0 bar one z 1.0
1 bar two y 2.0
2 foo one x 3.0
3 foo two w 4.0
In [342]: indexed1 = data.set_index('c')
In [343]: indexed1
Out[343]:
a b d
c
z bar one 1.0
y bar two 2.0
x foo one 3.0
w foo two 4.0
In [344]: indexed2 = data.set_index(['a', 'b'])
In [345]: indexed2
Out[345]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0
The append keyword option allows you to keep the existing index and append
the given columns to a MultiIndex:
In [346]: frame = data.set_index('c', drop=False)
In [347]: frame = frame.set_index(['a', 'b'], append=True)
In [348]: frame
Out[348]:
c d
c a b
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0
Other options in set_index allow you to not drop the index columns or to add
the index in place (without creating a new object):
In [349]: data.set_index('c', drop=False)
Out[349]:
a b c d
c
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0
In [350]: data.set_index(['a', 'b'], inplace=True)
In [351]: data
Out[351]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0
Reset the index#
As a convenience, DataFrame has a reset_index() method
which transfers the index values into the
DataFrame's columns and sets a simple integer index.
This is the inverse operation of set_index().
In [352]: data
Out[352]:
c d
a b
bar one z 1.0
two y 2.0
foo one x 3.0
two w 4.0
In [353]: data.reset_index()
Out[353]:
a b c d
0 bar one z 1.0
1 bar two y 2.0
2 foo one x 3.0
3 foo two w 4.0
The output is more similar to a SQL table or a record array. The names for the
columns derived from the index are the ones stored in the names attribute.
You can use the level keyword to remove only a portion of the index:
In [354]: frame
Out[354]:
c d
c a b
z bar one z 1.0
y bar two y 2.0
x foo one x 3.0
w foo two w 4.0
In [355]: frame.reset_index(level=1)
Out[355]:
a c d
c b
z one bar z 1.0
y two bar y 2.0
x one foo x 3.0
w two foo w 4.0
reset_index takes an optional parameter drop which if true simply
discards the index, instead of putting index values in the DataFrame’s columns.
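A quick sketch of drop=True using the data frame shown above: the MultiIndex levels are discarded entirely instead of becoming columns.

data.reset_index(drop=True)
#    c    d
# 0  z  1.0
# 1  y  2.0
# 2  x  3.0
# 3  w  4.0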
Adding an ad hoc index#
If you create an index yourself, you can just assign it to the index field:
data.index = index
Returning a view versus a copy#
When setting values in a pandas object, care must be taken to avoid what is called
chained indexing. Here is an example.
In [356]: dfmi = pd.DataFrame([list('abcd'),
.....: list('efgh'),
.....: list('ijkl'),
.....: list('mnop')],
.....: columns=pd.MultiIndex.from_product([['one', 'two'],
.....: ['first', 'second']]))
.....:
In [357]: dfmi
Out[357]:
one two
first second first second
0 a b c d
1 e f g h
2 i j k l
3 m n o p
Compare these two access methods:
In [358]: dfmi['one']['second']
Out[358]:
0 b
1 f
2 j
3 n
Name: second, dtype: object
In [359]: dfmi.loc[:, ('one', 'second')]
Out[359]:
0 b
1 f
2 j
3 n
Name: (one, second), dtype: object
These both yield the same results, so which should you use? It is instructive to understand the order
of operations on these and why method 2 (.loc) is much preferred over method 1 (chained []).
dfmi['one'] selects the first level of the columns and returns a DataFrame that is singly-indexed.
Then another Python operation, dfmi_with_one['second'], selects the Series indexed by 'second'.
The intermediate variable is written as dfmi_with_one because pandas sees these operations as separate events,
i.e. separate calls to __getitem__, so it has to treat them as linear operations that happen one after another.
Contrast this to df.loc[:,('one','second')] which passes a nested tuple of (slice(None),('one','second')) to a single call to
__getitem__. This allows pandas to deal with this as a single entity. Furthermore this order of operations can be significantly
faster, and allows one to index both axes if so desired.
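Spelled out as a minimal sketch of the calls pandas actually sees:

# chained []: two separate lookups, the second runs against an intermediate object
dfmi_with_one = dfmi.__getitem__('one')
result = dfmi_with_one.__getitem__('second')

# .loc: a single lookup receiving the whole key at once
result = dfmi.loc.__getitem__((slice(None), ('one', 'second')))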
Why does assignment fail when using chained indexing?#
The problem in the previous section is just a performance issue. What’s up with
the SettingWithCopy warning? We don’t usually throw warnings around when
you do something that might cost a few extra milliseconds!
But it turns out that assigning to the product of chained indexing has
inherently unpredictable results. To see this, think about how the Python
interpreter executes this code:
dfmi.loc[:, ('one', 'second')] = value
# becomes
dfmi.loc.__setitem__((slice(None), ('one', 'second')), value)
But this code is handled differently:
dfmi['one']['second'] = value
# becomes
dfmi.__getitem__('one').__setitem__('second', value)
See that __getitem__ in there? Outside of simple cases, it’s very hard to
predict whether it will return a view or a copy (it depends on the memory layout
of the array, about which pandas makes no guarantees), and therefore whether
the __setitem__ will modify dfmi or a temporary object that gets thrown
out immediately afterward. That’s what SettingWithCopy is warning you
about!
Note
You may be wondering whether we should be concerned about the loc
property in the first example. But dfmi.loc is guaranteed to be dfmi
itself with modified indexing behavior, so dfmi.loc.__getitem__ /
dfmi.loc.__setitem__ operate on dfmi directly. Of course,
dfmi.loc.__getitem__(idx) may be a view or a copy of dfmi.
Sometimes a SettingWithCopy warning will arise at times when there’s no
obvious chained indexing going on. These are the bugs that
SettingWithCopy is designed to catch! pandas is probably trying to warn you
that you’ve done this:
def do_something(df):
foo = df[['bar', 'baz']] # Is foo a view? A copy? Nobody knows!
# ... many lines here ...
# We don't know whether this will modify df or not!
foo['quux'] = value
return foo
Yikes!
Evaluation order matters#
When you use chained indexing, the order and type of the indexing operation
partially determine whether the result is a slice into the original object, or
a copy of the slice.
pandas has the SettingWithCopyWarning because assigning to a copy of a
slice is frequently not intentional, but a mistake caused by chained indexing
returning a copy where a slice was expected.
If you would like pandas to be more or less trusting about assignment to a
chained indexing expression, you can set the option
mode.chained_assignment to one of these values:
'warn', the default, means a SettingWithCopyWarning is printed.
'raise' means pandas will raise a SettingWithCopyError
you have to deal with.
None will suppress the warnings entirely.
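For example, a brief sketch of switching the option globally or only within a block (both calls are standard pandas options APIs):

import pandas as pd

pd.set_option('mode.chained_assignment', 'raise')  # escalate to an error
pd.set_option('mode.chained_assignment', 'warn')   # back to the default

with pd.option_context('mode.chained_assignment', None):
    ...  # warnings suppressed only inside this block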
In [360]: dfb = pd.DataFrame({'a': ['one', 'one', 'two',
.....: 'three', 'two', 'one', 'six'],
.....: 'c': np.arange(7)})
.....:
# This will show the SettingWithCopyWarning
# but the frame values will be set
In [361]: dfb['c'][dfb['a'].str.startswith('o')] = 42
This, however, operates on a copy and will not work.
>>> pd.set_option('mode.chained_assignment','warn')
>>> dfb[dfb['a'].str.startswith('o')]['c'] = 42
Traceback (most recent call last)
...
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
A chained assignment can also crop up in setting in a mixed dtype frame.
Note
These setting rules apply to all of .loc/.iloc.
The following is the recommended access method using .loc for multiple items (using mask) and a single item using a fixed index:
In [362]: dfc = pd.DataFrame({'a': ['one', 'one', 'two',
.....: 'three', 'two', 'one', 'six'],
.....: 'c': np.arange(7)})
.....:
In [363]: dfd = dfc.copy()
# Setting multiple items using a mask
In [364]: mask = dfd['a'].str.startswith('o')
In [365]: dfd.loc[mask, 'c'] = 42
In [366]: dfd
Out[366]:
a c
0 one 42
1 one 42
2 two 2
3 three 3
4 two 4
5 one 42
6 six 6
# Setting a single item
In [367]: dfd = dfc.copy()
In [368]: dfd.loc[2, 'a'] = 11
In [369]: dfd
Out[369]:
a c
0 one 0
1 one 1
2 11 2
3 three 3
4 two 4
5 one 5
6 six 6
The following can work at times, but it is not guaranteed to, and therefore should be avoided:
In [370]: dfd = dfc.copy()
In [371]: dfd['a'][2] = 111
In [372]: dfd
Out[372]:
a c
0 one 0
1 one 1
2 111 2
3 three 3
4 two 4
5 one 5
6 six 6
Last, the subsequent example will not work at all, and so should be avoided:
>>> pd.set_option('mode.chained_assignment','raise')
>>> dfd.loc[0]['a'] = 1111
Traceback (most recent call last)
...
SettingWithCopyError:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
Warning
The chained assignment warnings / exceptions aim to inform the user of a possibly invalid
assignment. There may be false positives, i.e. situations where a chained assignment is inadvertently
reported.
| 1,001
| 1,129
|
select dataframe value based on conditions
I want to select the value in column price based on column type = P and column timestamp is the closest to the current timestamp given by ts. Any contribution is appreciated please.
input df trade
amount block_trade_id currency direction index_price instrument_name iv ... price strike tick_direction timestamp trade_id trade_seq type
0 0.2 NaN BTC buy 6107.34 BTC-21MAR20-6125-P 148.99 ... 0.0190 6125 0 1584748972666 42629952 21 P
0 7.1 NaN BTC sell 5428.75 BTC-26JUN20-8000-C 122.21 ... 0.1380 8000 0 1584608399553 42450837 221 C
0 1.0 NaN BTC sell 5743.13 BTC-25SEP20-15000-P 133.16 ... 1.5660 15000 2 1584736336172 42623548 993 P
0 0.6 NaN BTC buy 6185.00 BTC-25SEP20-9000-P 116.23 ... 0.5810 9000 2 1584729697095 42617591 2734 P
0 1.2 NaN BTC sell 6609.72 BTC-3APR20-7750-C 129.47 ... 0.0470 7750 1 1584717196991 42612192 3 C
my code:
'''get current timestamp '''
ts = calendar.timegm(time.gmtime())
print(ts)
'''get current Future price'''
idx = trade['timestamp'].sub(ts).abs().idxmin()
fut_price = trade['price'].loc[(trade['type'].loc['P'])&(trade.loc[[idx]])]
|
60,672,224
|
Build hierarchy in pandas
|
<p>I am looking to build a hierarchy of who reports to who and create the reporting structure for each record. </p>
<p>My raw data would consist of two columns:
e_id and s_id:</p>
<p>and I want to create a variable with a dictionary containing the structure like below. leftmost value of the list would be climbing the hierarchy while the dictionary key is the record e_id value. </p>
<pre><code>e_id s_id structure
1 {1:[null]}
2 3 {2:[2,3]} circular so infinite sequence
3 2 {3:[3,2]} circular so infinite sequence
4 6 {4:[null,1,6]}
5 4 {5:[null,1,6,4]}
6 1 {6:[null,1]}
</code></pre>
<p>From my understanding this would be an apply method, I am just confused with how to set it up to read other rows and return the s_id value of that row.</p>
<p>Thank you in advance!</p>
| 60,691,306
| 2020-03-13T14:25:38.357000
| 1
| null | 0
| 36
|
python|pandas
|
<p>There might be a better way to do this using <code>networkx</code> graphs. But here is one simple solution. </p>
<pre><code>df = pd.DataFrame({'e_id': [1,2,3,4,5,6],
's_id': [None,3,2,6,4,1]})
</code></pre>
<p>Create a dict with parents and child</p>
<pre><code>parents = dict(zip(df.e_id, df.s_id))
</code></pre>
<p>The function gets the child for each parent passed and then recurses until a circular reference occurs or it reaches None.</p>
<pre><code>def find_child(x,i):
if i==0:
child_list.clear()
child = parents.get(x)
if child not in child_list:
child_list.append(child)
else:
return child_list
if pd.isnull(child)==False:
find_child(child,1)
return child_list
</code></pre>
<p>Loop through the df rows and apply the function for each <code>e_id</code>. The second parameter distinguishes whether to clear the list or not, in case of recursive calls.</p>
<pre><code>child_list = []
for idx, row in df.iterrows():
print({row['e_id']: find_child(row['e_id'], 0)})
</code></pre>
<p>Output:</p>
<pre><code>{1.0: None}
{2.0: [3.0, 2.0]}
{3.0: [2.0, 3.0]}
{4.0: [6.0, 1.0, nan]}
{5.0: [4.0, 6.0, 1.0, nan]}
{6.0: [1.0, nan]}
</code></pre>
| 2020-03-15T09:10:23.280000
| 0
|
https://pandas.pydata.org/docs/user_guide/advanced.html
|
MultiIndex / advanced indexing#
MultiIndex / advanced indexing#
This section covers indexing with a MultiIndex
and other advanced indexing features.
See the Indexing and Selecting Data for general indexing documentation.
There might be a better way to do this using networkx graphs. But here is one simple solution.
df = pd.DataFrame({'e_id': [1,2,3,4,5,6],
's_id': [None,3,2,6,4,1]})
Create a dict with parents and child
parents = dict(zip(df.e_id, df.s_id))
The function gets the child for each parent passed and then recurses until a circular reference occurs or it reaches None.
def find_child(x,i):
if i==0:
child_list.clear()
child = parents.get(x)
if child not in child_list:
child_list.append(child)
else:
return child_list
if pd.isnull(child)==False:
find_child(child,1)
return child_list
Loop through the df rows and apply the function for each e_id. The second parameter distinguishes whether to clear the list or not, in case of recursive calls.
child_list = []
for idx, row in df.iterrows():
print({row['e_id']: find_child(row['e_id'], 0)})
Output:
{1.0: None}
{2.0: [3.0, 2.0]}
{3.0: [2.0, 3.0]}
{4.0: [6.0, 1.0, nan]}
{5.0: [4.0, 6.0, 1.0, nan]}
{6.0: [1.0, nan]}
Warning
Whether a copy or a reference is returned for a setting operation may
depend on the context. This is sometimes called chained assignment and
should be avoided. See Returning a View versus Copy.
See the cookbook for some advanced strategies.
Hierarchical indexing (MultiIndex)#
Hierarchical / Multi-level indexing is very exciting as it opens the door to some
quite sophisticated data analysis and manipulation, especially for working with
higher dimensional data. In essence, it enables you to store and manipulate
data with an arbitrary number of dimensions in lower dimensional data
structures like Series (1d) and DataFrame (2d).
In this section, we will show what exactly we mean by “hierarchical” indexing
and how it integrates with all of the pandas indexing functionality
described above and in prior sections. Later, when discussing group by and pivoting and reshaping data, we’ll show
non-trivial applications to illustrate how it aids in structuring data for
analysis.
See the cookbook for some advanced strategies.
Creating a MultiIndex (hierarchical index) object#
The MultiIndex object is the hierarchical analogue of the standard
Index object which typically stores the axis labels in pandas objects. You
can think of MultiIndex as an array of tuples where each tuple is unique. A
MultiIndex can be created from a list of arrays (using
MultiIndex.from_arrays()), an array of tuples (using
MultiIndex.from_tuples()), a crossed set of iterables (using
MultiIndex.from_product()), or a DataFrame (using
MultiIndex.from_frame()). The Index constructor will attempt to return
a MultiIndex when it is passed a list of tuples. The following examples
demonstrate different ways to initialize MultiIndexes.
In [1]: arrays = [
...: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
...: ["one", "two", "one", "two", "one", "two", "one", "two"],
...: ]
...:
In [2]: tuples = list(zip(*arrays))
In [3]: tuples
Out[3]:
[('bar', 'one'),
('bar', 'two'),
('baz', 'one'),
('baz', 'two'),
('foo', 'one'),
('foo', 'two'),
('qux', 'one'),
('qux', 'two')]
In [4]: index = pd.MultiIndex.from_tuples(tuples, names=["first", "second"])
In [5]: index
Out[5]:
MultiIndex([('bar', 'one'),
('bar', 'two'),
('baz', 'one'),
('baz', 'two'),
('foo', 'one'),
('foo', 'two'),
('qux', 'one'),
('qux', 'two')],
names=['first', 'second'])
In [6]: s = pd.Series(np.random.randn(8), index=index)
In [7]: s
Out[7]:
first second
bar one 0.469112
two -0.282863
baz one -1.509059
two -1.135632
foo one 1.212112
two -0.173215
qux one 0.119209
two -1.044236
dtype: float64
When you want every pairing of the elements in two iterables, it can be easier
to use the MultiIndex.from_product() method:
In [8]: iterables = [["bar", "baz", "foo", "qux"], ["one", "two"]]
In [9]: pd.MultiIndex.from_product(iterables, names=["first", "second"])
Out[9]:
MultiIndex([('bar', 'one'),
('bar', 'two'),
('baz', 'one'),
('baz', 'two'),
('foo', 'one'),
('foo', 'two'),
('qux', 'one'),
('qux', 'two')],
names=['first', 'second'])
You can also construct a MultiIndex from a DataFrame directly, using
the method MultiIndex.from_frame(). This is a complementary method to
MultiIndex.to_frame().
In [10]: df = pd.DataFrame(
....: [["bar", "one"], ["bar", "two"], ["foo", "one"], ["foo", "two"]],
....: columns=["first", "second"],
....: )
....:
In [11]: pd.MultiIndex.from_frame(df)
Out[11]:
MultiIndex([('bar', 'one'),
('bar', 'two'),
('foo', 'one'),
('foo', 'two')],
names=['first', 'second'])
As a convenience, you can pass a list of arrays directly into Series or
DataFrame to construct a MultiIndex automatically:
In [12]: arrays = [
....: np.array(["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"]),
....: np.array(["one", "two", "one", "two", "one", "two", "one", "two"]),
....: ]
....:
In [13]: s = pd.Series(np.random.randn(8), index=arrays)
In [14]: s
Out[14]:
bar one -0.861849
two -2.104569
baz one -0.494929
two 1.071804
foo one 0.721555
two -0.706771
qux one -1.039575
two 0.271860
dtype: float64
In [15]: df = pd.DataFrame(np.random.randn(8, 4), index=arrays)
In [16]: df
Out[16]:
0 1 2 3
bar one -0.424972 0.567020 0.276232 -1.087401
two -0.673690 0.113648 -1.478427 0.524988
baz one 0.404705 0.577046 -1.715002 -1.039268
two -0.370647 -1.157892 -1.344312 0.844885
foo one 1.075770 -0.109050 1.643563 -1.469388
two 0.357021 -0.674600 -1.776904 -0.968914
qux one -1.294524 0.413738 0.276662 -0.472035
two -0.013960 -0.362543 -0.006154 -0.923061
All of the MultiIndex constructors accept a names argument which stores
string names for the levels themselves. If no names are provided, None will
be assigned:
In [17]: df.index.names
Out[17]: FrozenList([None, None])
This index can back any axis of a pandas object, and the number of levels
of the index is up to you:
In [18]: df = pd.DataFrame(np.random.randn(3, 8), index=["A", "B", "C"], columns=index)
In [19]: df
Out[19]:
first bar baz ... foo qux
second one two one ... two one two
A 0.895717 0.805244 -1.206412 ... 1.340309 -1.170299 -0.226169
B 0.410835 0.813850 0.132003 ... -1.187678 1.130127 -1.436737
C -1.413681 1.607920 1.024180 ... -2.211372 0.974466 -2.006747
[3 rows x 8 columns]
In [20]: pd.DataFrame(np.random.randn(6, 6), index=index[:6], columns=index[:6])
Out[20]:
first bar baz foo
second one two one two one two
first second
bar one -0.410001 -0.078638 0.545952 -1.219217 -1.226825 0.769804
two -1.281247 -0.727707 -0.121306 -0.097883 0.695775 0.341734
baz one 0.959726 -1.110336 -0.619976 0.149748 -0.732339 0.687738
two 0.176444 0.403310 -0.154951 0.301624 -2.179861 -1.369849
foo one -0.954208 1.462696 -1.743161 -0.826591 -0.345352 1.314232
two 0.690579 0.995761 2.396780 0.014871 3.357427 -0.317441
We’ve “sparsified” the higher levels of the indexes to make the console output a
bit easier on the eyes. Note that how the index is displayed can be controlled using the
multi_sparse option in pandas.set_option():
In [21]: with pd.option_context("display.multi_sparse", False):
....: df
....:
It’s worth keeping in mind that there’s nothing preventing you from using
tuples as atomic labels on an axis:
In [22]: pd.Series(np.random.randn(8), index=tuples)
Out[22]:
(bar, one) -1.236269
(bar, two) 0.896171
(baz, one) -0.487602
(baz, two) -0.082240
(foo, one) -2.182937
(foo, two) 0.380396
(qux, one) 0.084844
(qux, two) 0.432390
dtype: float64
The reason that the MultiIndex matters is that it can allow you to do
grouping, selection, and reshaping operations as we will describe below and in
subsequent areas of the documentation. As you will see in later sections, you
can find yourself working with hierarchically-indexed data without creating a
MultiIndex explicitly yourself. However, when loading data from a file, you
may wish to generate your own MultiIndex when preparing the data set.
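For instance, a hedged sketch of building the MultiIndex while loading (the file and column names are hypothetical):

import pandas as pd

# either ask read_csv to build the MultiIndex directly ...
df = pd.read_csv('data.csv', index_col=['first', 'second'])
# ... or load a flat frame and promote the columns afterwards
df = pd.read_csv('data.csv').set_index(['first', 'second'])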
Reconstructing the level labels#
The method get_level_values() will return a vector of the labels for each
location at a particular level:
In [23]: index.get_level_values(0)
Out[23]: Index(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'], dtype='object', name='first')
In [24]: index.get_level_values("second")
Out[24]: Index(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'], dtype='object', name='second')
Basic indexing on axis with MultiIndex#
One of the important features of hierarchical indexing is that you can select
data by a “partial” label identifying a subgroup in the data. Partial
selection “drops” levels of the hierarchical index in the result in a
completely analogous way to selecting a column in a regular DataFrame:
In [25]: df["bar"]
Out[25]:
second one two
A 0.895717 0.805244
B 0.410835 0.813850
C -1.413681 1.607920
In [26]: df["bar", "one"]
Out[26]:
A 0.895717
B 0.410835
C -1.413681
Name: (bar, one), dtype: float64
In [27]: df["bar"]["one"]
Out[27]:
A 0.895717
B 0.410835
C -1.413681
Name: one, dtype: float64
In [28]: s["qux"]
Out[28]:
one -1.039575
two 0.271860
dtype: float64
See Cross-section with hierarchical index for how to select
on a deeper level.
Defined levels#
The MultiIndex keeps all the defined levels of an index, even
if they are not actually used. When slicing an index, you may notice this.
For example:
In [29]: df.columns.levels # original MultiIndex
Out[29]: FrozenList([['bar', 'baz', 'foo', 'qux'], ['one', 'two']])
In [30]: df[["foo","qux"]].columns.levels # sliced
Out[30]: FrozenList([['bar', 'baz', 'foo', 'qux'], ['one', 'two']])
This is done to avoid a recomputation of the levels in order to make slicing
highly performant. If you want to see only the used levels, you can use the
get_level_values() method.
In [31]: df[["foo", "qux"]].columns.to_numpy()
Out[31]:
array([('foo', 'one'), ('foo', 'two'), ('qux', 'one'), ('qux', 'two')],
dtype=object)
# for a specific level
In [32]: df[["foo", "qux"]].columns.get_level_values(0)
Out[32]: Index(['foo', 'foo', 'qux', 'qux'], dtype='object', name='first')
To reconstruct the MultiIndex with only the used levels, the
remove_unused_levels() method may be used.
In [33]: new_mi = df[["foo", "qux"]].columns.remove_unused_levels()
In [34]: new_mi.levels
Out[34]: FrozenList([['foo', 'qux'], ['one', 'two']])
Data alignment and using reindex#
Operations between differently-indexed objects having MultiIndex on the
axes will work as you expect; data alignment will work the same as an Index of
tuples:
In [35]: s + s[:-2]
Out[35]:
bar one -1.723698
two -4.209138
baz one -0.989859
two 2.143608
foo one 1.443110
two -1.413542
qux one NaN
two NaN
dtype: float64
In [36]: s + s[::2]
Out[36]:
bar one -1.723698
two NaN
baz one -0.989859
two NaN
foo one 1.443110
two NaN
qux one -2.079150
two NaN
dtype: float64
The reindex() method of Series/DataFrames can be
called with another MultiIndex, or even a list or array of tuples:
In [37]: s.reindex(index[:3])
Out[37]:
first second
bar one -0.861849
two -2.104569
baz one -0.494929
dtype: float64
In [38]: s.reindex([("foo", "two"), ("bar", "one"), ("qux", "one"), ("baz", "one")])
Out[38]:
foo two -0.706771
bar one -0.861849
qux one -1.039575
baz one -0.494929
dtype: float64
Advanced indexing with hierarchical index#
Syntactically integrating MultiIndex in advanced indexing with .loc is a
bit challenging, but we’ve made every effort to do so. In general, MultiIndex
keys take the form of tuples. For example, the following works as you would expect:
In [39]: df = df.T
In [40]: df
Out[40]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466
two -0.226169 -1.436737 -2.006747
In [41]: df.loc[("bar", "two")]
Out[41]:
A 0.805244
B 0.813850
C 1.607920
Name: (bar, two), dtype: float64
Note that df.loc['bar', 'two'] would also work in this example, but this shorthand
notation can lead to ambiguity in general.
If you also want to index a specific column with .loc, you must use a tuple
like this:
In [42]: df.loc[("bar", "two"), "A"]
Out[42]: 0.8052440253863785
You don't have to specify all levels of the MultiIndex; by passing only the
first elements of the tuple, you can use "partial" indexing to
get all elements with bar in the first level as follows:
In [43]: df.loc["bar"]
Out[43]:
A B C
second
one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
This is a shortcut for the slightly more verbose notation df.loc[('bar',),] (equivalent
to df.loc['bar',] in this example).
“Partial” slicing also works quite nicely.
In [44]: df.loc["baz":"foo"]
Out[44]:
A B C
first second
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
You can slice with a ‘range’ of values, by providing a slice of tuples.
In [45]: df.loc[("baz", "two"):("qux", "one")]
Out[45]:
A B C
first second
baz two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466
In [46]: df.loc[("baz", "two"):"foo"]
Out[46]:
A B C
first second
baz two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
Passing a list of labels or tuples works similar to reindexing:
In [47]: df.loc[[("bar", "two"), ("qux", "one")]]
Out[47]:
A B C
first second
bar two 0.805244 0.813850 1.607920
qux one -1.170299 1.130127 0.974466
Note
It is important to note that tuples and lists are not treated identically
in pandas when it comes to indexing. Whereas a tuple is interpreted as one
multi-level key, a list is used to specify several keys. Or in other words,
tuples go horizontally (traversing levels), lists go vertically (scanning levels).
Importantly, a list of tuples indexes several complete MultiIndex keys,
whereas a tuple of lists refer to several values within a level:
In [48]: s = pd.Series(
....: [1, 2, 3, 4, 5, 6],
....: index=pd.MultiIndex.from_product([["A", "B"], ["c", "d", "e"]]),
....: )
....:
In [49]: s.loc[[("A", "c"), ("B", "d")]] # list of tuples
Out[49]:
A c 1
B d 5
dtype: int64
In [50]: s.loc[(["A", "B"], ["c", "d"])] # tuple of lists
Out[50]:
A c 1
d 2
B c 4
d 5
dtype: int64
Using slicers#
You can slice a MultiIndex by providing multiple indexers.
You can provide any of the selectors as if you are indexing by label, see Selection by Label,
including slices, lists of labels, labels, and boolean indexers.
You can use slice(None) to select all the contents of that level. You do not need to specify all the
deeper levels; they will be implied as slice(None).
As usual, both sides of the slicers are included as this is label indexing.
Warning
You should specify all axes in the .loc specifier, meaning the indexer for the index and
for the columns. There are some ambiguous cases where the passed indexer could be mis-interpreted
as indexing both axes, rather than into say the MultiIndex for the rows.
You should do this:
df.loc[(slice("A1", "A3"), ...), :] # noqa: E999
You should not do this:
df.loc[(slice("A1", "A3"), ...)] # noqa: E999
In [51]: def mklbl(prefix, n):
....: return ["%s%s" % (prefix, i) for i in range(n)]
....:
In [52]: miindex = pd.MultiIndex.from_product(
....: [mklbl("A", 4), mklbl("B", 2), mklbl("C", 4), mklbl("D", 2)]
....: )
....:
In [53]: micolumns = pd.MultiIndex.from_tuples(
....: [("a", "foo"), ("a", "bar"), ("b", "foo"), ("b", "bah")], names=["lvl0", "lvl1"]
....: )
....:
In [54]: dfmi = (
....: pd.DataFrame(
....: np.arange(len(miindex) * len(micolumns)).reshape(
....: (len(miindex), len(micolumns))
....: ),
....: index=miindex,
....: columns=micolumns,
....: )
....: .sort_index()
....: .sort_index(axis=1)
....: )
....:
In [55]: dfmi
Out[55]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 9 8 11 10
D1 13 12 15 14
C2 D0 17 16 19 18
... ... ... ... ...
A3 B1 C1 D1 237 236 239 238
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 249 248 251 250
D1 253 252 255 254
[64 rows x 4 columns]
Basic MultiIndex slicing using slices, lists, and labels.
In [56]: dfmi.loc[(slice("A1", "A3"), slice(None), ["C1", "C3"]), :]
Out[56]:
lvl0 a b
lvl1 bar foo bah foo
A1 B0 C1 D0 73 72 75 74
D1 77 76 79 78
C3 D0 89 88 91 90
D1 93 92 95 94
B1 C1 D0 105 104 107 106
... ... ... ... ...
A3 B0 C3 D1 221 220 223 222
B1 C1 D0 233 232 235 234
D1 237 236 239 238
C3 D0 249 248 251 250
D1 253 252 255 254
[24 rows x 4 columns]
You can use pandas.IndexSlice to facilitate a more natural syntax
using :, rather than using slice(None).
In [57]: idx = pd.IndexSlice
In [58]: dfmi.loc[idx[:, :, ["C1", "C3"]], idx[:, "foo"]]
Out[58]:
lvl0 a b
lvl1 foo foo
A0 B0 C1 D0 8 10
D1 12 14
C3 D0 24 26
D1 28 30
B1 C1 D0 40 42
... ... ...
A3 B0 C3 D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
[32 rows x 2 columns]
It is possible to perform quite complicated selections using this method on multiple
axes at the same time.
In [59]: dfmi.loc["A1", (slice(None), "foo")]
Out[59]:
lvl0 a b
lvl1 foo foo
B0 C0 D0 64 66
D1 68 70
C1 D0 72 74
D1 76 78
C2 D0 80 82
... ... ...
B1 C1 D1 108 110
C2 D0 112 114
D1 116 118
C3 D0 120 122
D1 124 126
[16 rows x 2 columns]
In [60]: dfmi.loc[idx[:, :, ["C1", "C3"]], idx[:, "foo"]]
Out[60]:
lvl0 a b
lvl1 foo foo
A0 B0 C1 D0 8 10
D1 12 14
C3 D0 24 26
D1 28 30
B1 C1 D0 40 42
... ... ...
A3 B0 C3 D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
[32 rows x 2 columns]
Using a boolean indexer you can provide selection related to the values.
In [61]: mask = dfmi[("a", "foo")] > 200
In [62]: dfmi.loc[idx[mask, :, ["C1", "C3"]], idx[:, "foo"]]
Out[62]:
lvl0 a b
lvl1 foo foo
A3 B0 C1 D1 204 206
C3 D0 216 218
D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
You can also specify the axis argument to .loc to interpret the passed
slicers on a single axis.
In [63]: dfmi.loc(axis=0)[:, :, ["C1", "C3"]]
Out[63]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C1 D0 9 8 11 10
D1 13 12 15 14
C3 D0 25 24 27 26
D1 29 28 31 30
B1 C1 D0 41 40 43 42
... ... ... ... ...
A3 B0 C3 D1 221 220 223 222
B1 C1 D0 233 232 235 234
D1 237 236 239 238
C3 D0 249 248 251 250
D1 253 252 255 254
[32 rows x 4 columns]
Furthermore, you can set the values using the following methods.
In [64]: df2 = dfmi.copy()
In [65]: df2.loc(axis=0)[:, :, ["C1", "C3"]] = -10
In [66]: df2
Out[66]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
C2 D0 17 16 19 18
... ... ... ... ...
A3 B1 C1 D1 -10 -10 -10 -10
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
[64 rows x 4 columns]
You can use a right-hand-side of an alignable object as well.
In [67]: df2 = dfmi.copy()
In [68]: df2.loc[idx[:, :, ["C1", "C3"]], :] = df2 * 1000
In [69]: df2
Out[69]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 9000 8000 11000 10000
D1 13000 12000 15000 14000
C2 D0 17 16 19 18
... ... ... ... ...
A3 B1 C1 D1 237000 236000 239000 238000
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 249000 248000 251000 250000
D1 253000 252000 255000 254000
[64 rows x 4 columns]
Cross-section#
The xs() method of DataFrame additionally takes a level argument to make
selecting data at a particular level of a MultiIndex easier.
In [70]: df
Out[70]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466
two -0.226169 -1.436737 -2.006747
In [71]: df.xs("one", level="second")
Out[71]:
A B C
first
bar 0.895717 0.410835 -1.413681
baz -1.206412 0.132003 1.024180
foo 1.431256 -0.076467 0.875906
qux -1.170299 1.130127 0.974466
# using the slicers
In [72]: df.loc[(slice(None), "one"), :]
Out[72]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
baz one -1.206412 0.132003 1.024180
foo one 1.431256 -0.076467 0.875906
qux one -1.170299 1.130127 0.974466
You can also select on the columns with xs, by
providing the axis argument.
In [73]: df = df.T
In [74]: df.xs("one", level="second", axis=1)
Out[74]:
first bar baz foo qux
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466
# using the slicers
In [75]: df.loc[:, (slice(None), "one")]
Out[75]:
first bar baz foo qux
second one one one one
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466
xs also allows selection with multiple keys.
In [76]: df.xs(("one", "bar"), level=("second", "first"), axis=1)
Out[76]:
first bar
second one
A 0.895717
B 0.410835
C -1.413681
# using the slicers
In [77]: df.loc[:, ("bar", "one")]
Out[77]:
A 0.895717
B 0.410835
C -1.413681
Name: (bar, one), dtype: float64
You can pass drop_level=False to xs to retain
the level that was selected.
In [78]: df.xs("one", level="second", axis=1, drop_level=False)
Out[78]:
first bar baz foo qux
second one one one one
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466
Compare the above with the result using drop_level=True (the default value).
In [79]: df.xs("one", level="second", axis=1, drop_level=True)
Out[79]:
first bar baz foo qux
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466
Advanced reindexing and alignment#
Using the parameter level in the reindex() and
align() methods of pandas objects is useful to broadcast
values across a level. For instance:
In [80]: midx = pd.MultiIndex(
....: levels=[["zero", "one"], ["x", "y"]], codes=[[1, 1, 0, 0], [1, 0, 1, 0]]
....: )
....:
In [81]: df = pd.DataFrame(np.random.randn(4, 2), index=midx)
In [82]: df
Out[82]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
In [83]: df2 = df.groupby(level=0).mean()
In [84]: df2
Out[84]:
0 1
one 1.060074 -0.109716
zero 1.271532 0.713416
In [85]: df2.reindex(df.index, level=0)
Out[85]:
0 1
one y 1.060074 -0.109716
x 1.060074 -0.109716
zero y 1.271532 0.713416
x 1.271532 0.713416
# aligning
In [86]: df_aligned, df2_aligned = df.align(df2, level=0)
In [87]: df_aligned
Out[87]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
In [88]: df2_aligned
Out[88]:
0 1
one y 1.060074 -0.109716
x 1.060074 -0.109716
zero y 1.271532 0.713416
x 1.271532 0.713416
Swapping levels with swaplevel#
The swaplevel() method can switch the order of two levels:
In [89]: df[:5]
Out[89]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
In [90]: df[:5].swaplevel(0, 1, axis=0)
Out[90]:
0 1
y one 1.519970 -0.493662
x one 0.600178 0.274230
y zero 0.132885 -0.023688
x zero 2.410179 1.450520
Reordering levels with reorder_levels#
The reorder_levels() method generalizes the swaplevel
method, allowing you to permute the hierarchical index levels in one step:
In [91]: df[:5].reorder_levels([1, 0], axis=0)
Out[91]:
0 1
y one 1.519970 -0.493662
x one 0.600178 0.274230
y zero 0.132885 -0.023688
x zero 2.410179 1.450520
Renaming names of an Index or MultiIndex#
The rename() method is used to rename the labels of a
MultiIndex, and is typically used to rename the columns of a DataFrame.
The columns argument of rename allows a dictionary to be specified
that includes only the columns you wish to rename.
In [92]: df.rename(columns={0: "col0", 1: "col1"})
Out[92]:
col0 col1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
This method can also be used to rename specific labels of the main index
of the DataFrame.
In [93]: df.rename(index={"one": "two", "y": "z"})
Out[93]:
0 1
two z 1.519970 -0.493662
x 0.600178 0.274230
zero z 0.132885 -0.023688
x 2.410179 1.450520
The rename_axis() method is used to rename the name of a
Index or MultiIndex. In particular, the names of the levels of a
MultiIndex can be specified, which is useful if reset_index() is later
used to move the values from the MultiIndex to a column.
In [94]: df.rename_axis(index=["abc", "def"])
Out[94]:
0 1
abc def
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
Note that the columns of a DataFrame are an index, so that using
rename_axis with the columns argument will change the name of that
index.
In [95]: df.rename_axis(columns="Cols").columns
Out[95]: RangeIndex(start=0, stop=2, step=1, name='Cols')
Both rename and rename_axis support specifying a dictionary,
Series or a mapping function to map labels/names to new values.
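For example, a small sketch using a mapping function on the df defined above (the "col{c}" naming is only an illustration):
df.rename(columns=lambda c: f"col{c}")   # same result as the dictionary mapping shown earlier
df.rename(index=str.upper)               # the function is applied to every index label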
When working with an Index object directly, rather than via a DataFrame,
Index.set_names() can be used to change the names.
In [96]: mi = pd.MultiIndex.from_product([[1, 2], ["a", "b"]], names=["x", "y"])
In [97]: mi.names
Out[97]: FrozenList(['x', 'y'])
In [98]: mi2 = mi.rename("new name", level=0)
In [99]: mi2
Out[99]:
MultiIndex([(1, 'a'),
(1, 'b'),
(2, 'a'),
(2, 'b')],
names=['new name', 'y'])
You cannot set the names of the MultiIndex via a level.
In [100]: mi.levels[0].name = "name via level"
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[100], line 1
----> 1 mi.levels[0].name = "name via level"
File ~/work/pandas/pandas/pandas/core/indexes/base.py:1745, in Index.name(self, value)
1741 @name.setter
1742 def name(self, value: Hashable) -> None:
1743 if self._no_setting_name:
1744 # Used in MultiIndex.levels to avoid silently ignoring name updates.
-> 1745 raise RuntimeError(
1746 "Cannot set name on a level of a MultiIndex. Use "
1747 "'MultiIndex.set_names' instead."
1748 )
1749 maybe_extract_name(value, None, type(self))
1750 self._name = value
RuntimeError: Cannot set name on a level of a MultiIndex. Use 'MultiIndex.set_names' instead.
Use Index.set_names() instead.
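For example, a small sketch using the mi object from above:
mi.set_names("name via method", level=0)   # returns a new MultiIndex; mi itself is unchanged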
Sorting a MultiIndex#
For MultiIndex-ed objects to be indexed and sliced effectively,
they need to be sorted. As with any index, you can use sort_index().
In [101]: import random
In [102]: random.shuffle(tuples)
In [103]: s = pd.Series(np.random.randn(8), index=pd.MultiIndex.from_tuples(tuples))
In [104]: s
Out[104]:
baz two 0.206053
foo two -0.251905
bar one -2.213588
qux two 1.063327
baz one 1.266143
qux one 0.299368
foo one -0.863838
bar two 0.408204
dtype: float64
In [105]: s.sort_index()
Out[105]:
bar one -2.213588
two 0.408204
baz one 1.266143
two 0.206053
foo one -0.863838
two -0.251905
qux one 0.299368
two 1.063327
dtype: float64
In [106]: s.sort_index(level=0)
Out[106]:
bar one -2.213588
two 0.408204
baz one 1.266143
two 0.206053
foo one -0.863838
two -0.251905
qux one 0.299368
two 1.063327
dtype: float64
In [107]: s.sort_index(level=1)
Out[107]:
bar one -2.213588
baz one 1.266143
foo one -0.863838
qux one 0.299368
bar two 0.408204
baz two 0.206053
foo two -0.251905
qux two 1.063327
dtype: float64
You may also pass a level name to sort_index if the MultiIndex levels
are named.
In [108]: s.index.set_names(["L1", "L2"], inplace=True)
In [109]: s.sort_index(level="L1")
Out[109]:
L1 L2
bar one -2.213588
two 0.408204
baz one 1.266143
two 0.206053
foo one -0.863838
two -0.251905
qux one 0.299368
two 1.063327
dtype: float64
In [110]: s.sort_index(level="L2")
Out[110]:
L1 L2
bar one -2.213588
baz one 1.266143
foo one -0.863838
qux one 0.299368
bar two 0.408204
baz two 0.206053
foo two -0.251905
qux two 1.063327
dtype: float64
On higher dimensional objects, you can sort any of the other axes by level if
they have a MultiIndex:
In [111]: df.T.sort_index(level=1, axis=1)
Out[111]:
one zero one zero
x x y y
0 0.600178 2.410179 1.519970 0.132885
1 0.274230 1.450520 -0.493662 -0.023688
Indexing will work even if the data are not sorted, but will be rather
inefficient (and show a PerformanceWarning). It will also
return a copy of the data rather than a view:
In [112]: dfm = pd.DataFrame(
.....: {"jim": [0, 0, 1, 1], "joe": ["x", "x", "z", "y"], "jolie": np.random.rand(4)}
.....: )
.....:
In [113]: dfm = dfm.set_index(["jim", "joe"])
In [114]: dfm
Out[114]:
jolie
jim joe
0 x 0.490671
x 0.120248
1 z 0.537020
y 0.110968
In [4]: dfm.loc[(1, 'z')]
PerformanceWarning: indexing past lexsort depth may impact performance.
Out[4]:
jolie
jim joe
1 z 0.64094
Furthermore, if you try to index something that is not fully lexsorted, this can raise:
In [5]: dfm.loc[(0, 'y'):(1, 'z')]
UnsortedIndexError: 'Key length (2) was greater than MultiIndex lexsort depth (1)'
The is_monotonic_increasing() method on a MultiIndex shows if the
index is sorted:
In [115]: dfm.index.is_monotonic_increasing
Out[115]: False
In [116]: dfm = dfm.sort_index()
In [117]: dfm
Out[117]:
jolie
jim joe
0 x 0.490671
x 0.120248
1 y 0.110968
z 0.537020
In [118]: dfm.index.is_monotonic_increasing
Out[118]: True
And now selection works as expected.
In [119]: dfm.loc[(0, "y"):(1, "z")]
Out[119]:
jolie
jim joe
1 y 0.110968
z 0.537020
Take methods#
Similar to NumPy ndarrays, pandas Index, Series, and DataFrame also provide
the take() method that retrieves elements along a given axis at the given
indices. The given indices must be either a list or an ndarray of integer
index positions. take will also accept negative integers as relative positions to the end of the object.
In [120]: index = pd.Index(np.random.randint(0, 1000, 10))
In [121]: index
Out[121]: Int64Index([214, 502, 712, 567, 786, 175, 993, 133, 758, 329], dtype='int64')
In [122]: positions = [0, 9, 3]
In [123]: index[positions]
Out[123]: Int64Index([214, 329, 567], dtype='int64')
In [124]: index.take(positions)
Out[124]: Int64Index([214, 329, 567], dtype='int64')
In [125]: ser = pd.Series(np.random.randn(10))
In [126]: ser.iloc[positions]
Out[126]:
0 -0.179666
9 1.824375
3 0.392149
dtype: float64
In [127]: ser.take(positions)
Out[127]:
0 -0.179666
9 1.824375
3 0.392149
dtype: float64
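take also accepts negative positions, counting from the end of the object (a small sketch reusing ser from above; the values depend on the random data):
ser.take([-1, -2])   # the last and second-to-last elements, with index labels 9 and 8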
For DataFrames, the given indices should be a 1d list or ndarray that specifies
row or column positions.
In [128]: frm = pd.DataFrame(np.random.randn(5, 3))
In [129]: frm.take([1, 4, 3])
Out[129]:
0 1 2
1 -1.237881 0.106854 -1.276829
4 0.629675 -1.425966 1.857704
3 0.979542 -1.633678 0.615855
In [130]: frm.take([0, 2], axis=1)
Out[130]:
0 2
0 0.595974 0.601544
1 -1.237881 -1.276829
2 -0.767101 1.499591
3 0.979542 0.615855
4 0.629675 1.857704
It is important to note that the take method on pandas objects is not
intended to work on boolean indices and may return unexpected results.
In [131]: arr = np.random.randn(10)
In [132]: arr.take([False, False, True, True])
Out[132]: array([-1.1935, -1.1935, 0.6775, 0.6775])
In [133]: arr[[0, 1]]
Out[133]: array([-1.1935, 0.6775])
In [134]: ser = pd.Series(np.random.randn(10))
In [135]: ser.take([False, False, True, True])
Out[135]:
0 0.233141
0 0.233141
1 -0.223540
1 -0.223540
dtype: float64
In [136]: ser.iloc[[0, 1]]
Out[136]:
0 0.233141
1 -0.223540
dtype: float64
Finally, as a small note on performance, because the take method handles
a narrower range of inputs, it can offer performance that is a good deal
faster than fancy indexing.
In [137]: arr = np.random.randn(10000, 5)
In [138]: indexer = np.arange(10000)
In [139]: random.shuffle(indexer)
In [140]: %timeit arr[indexer]
.....: %timeit arr.take(indexer, axis=0)
.....:
141 us +- 1.18 us per loop (mean +- std. dev. of 7 runs, 10,000 loops each)
43.6 us +- 1.01 us per loop (mean +- std. dev. of 7 runs, 10,000 loops each)
In [141]: ser = pd.Series(arr[:, 0])
In [142]: %timeit ser.iloc[indexer]
.....: %timeit ser.take(indexer)
.....:
71.3 us +- 2.24 us per loop (mean +- std. dev. of 7 runs, 10,000 loops each)
63.1 us +- 4.29 us per loop (mean +- std. dev. of 7 runs, 10,000 loops each)
Index types#
We have discussed MultiIndex in the previous sections pretty extensively.
Documentation about DatetimeIndex and PeriodIndex are shown here,
and documentation about TimedeltaIndex is found here.
In the following sub-sections we will highlight some other index types.
CategoricalIndex#
CategoricalIndex is a type of index that is useful for supporting
indexing with duplicates. This is a container around a Categorical
and allows efficient indexing and storage of an index with a large number of duplicated elements.
In [143]: from pandas.api.types import CategoricalDtype
In [144]: df = pd.DataFrame({"A": np.arange(6), "B": list("aabbca")})
In [145]: df["B"] = df["B"].astype(CategoricalDtype(list("cab")))
In [146]: df
Out[146]:
A B
0 0 a
1 1 a
2 2 b
3 3 b
4 4 c
5 5 a
In [147]: df.dtypes
Out[147]:
A int64
B category
dtype: object
In [148]: df["B"].cat.categories
Out[148]: Index(['c', 'a', 'b'], dtype='object')
Setting the index will create a CategoricalIndex.
In [149]: df2 = df.set_index("B")
In [150]: df2.index
Out[150]: CategoricalIndex(['a', 'a', 'b', 'b', 'c', 'a'], categories=['c', 'a', 'b'], ordered=False, dtype='category', name='B')
Indexing with __getitem__/.iloc/.loc works similarly to an Index with duplicates.
The indexers must be in the category or the operation will raise a KeyError.
In [151]: df2.loc["a"]
Out[151]:
A
B
a 0
a 1
a 5
The CategoricalIndex is preserved after indexing:
In [152]: df2.loc["a"].index
Out[152]: CategoricalIndex(['a', 'a', 'a'], categories=['c', 'a', 'b'], ordered=False, dtype='category', name='B')
Sorting the index will sort by the order of the categories (recall that we
created the index with CategoricalDtype(list('cab')), so the sorted
order is cab).
In [153]: df2.sort_index()
Out[153]:
A
B
c 4
a 0
a 1
a 5
b 2
b 3
Groupby operations on the index will preserve the index nature as well.
In [154]: df2.groupby(level=0).sum()
Out[154]:
A
B
c 4
a 6
b 5
In [155]: df2.groupby(level=0).sum().index
Out[155]: CategoricalIndex(['c', 'a', 'b'], categories=['c', 'a', 'b'], ordered=False, dtype='category', name='B')
Reindexing operations will return a resulting index based on the type of the passed
indexer. Passing a list will return a plain-old Index; indexing with
a Categorical will return a CategoricalIndex, indexed according to the categories
of the passed Categorical dtype. This allows one to arbitrarily index these even with
values not in the categories, similarly to how you can reindex any pandas index.
In [156]: df3 = pd.DataFrame(
.....: {"A": np.arange(3), "B": pd.Series(list("abc")).astype("category")}
.....: )
.....:
In [157]: df3 = df3.set_index("B")
In [158]: df3
Out[158]:
A
B
a 0
b 1
c 2
In [159]: df3.reindex(["a", "e"])
Out[159]:
A
B
a 0.0
e NaN
In [160]: df3.reindex(["a", "e"]).index
Out[160]: Index(['a', 'e'], dtype='object', name='B')
In [161]: df3.reindex(pd.Categorical(["a", "e"], categories=list("abe")))
Out[161]:
A
B
a 0.0
e NaN
In [162]: df3.reindex(pd.Categorical(["a", "e"], categories=list("abe"))).index
Out[162]: CategoricalIndex(['a', 'e'], categories=['a', 'b', 'e'], ordered=False, dtype='category', name='B')
Warning
Reshaping and Comparison operations on a CategoricalIndex must have the same categories
or a TypeError will be raised.
In [163]: df4 = pd.DataFrame({"A": np.arange(2), "B": list("ba")})
In [164]: df4["B"] = df4["B"].astype(CategoricalDtype(list("ab")))
In [165]: df4 = df4.set_index("B")
In [166]: df4.index
Out[166]: CategoricalIndex(['b', 'a'], categories=['a', 'b'], ordered=False, dtype='category', name='B')
In [167]: df5 = pd.DataFrame({"A": np.arange(2), "B": list("bc")})
In [168]: df5["B"] = df5["B"].astype(CategoricalDtype(list("bc")))
In [169]: df5 = df5.set_index("B")
In [170]: df5.index
Out[170]: CategoricalIndex(['b', 'c'], categories=['b', 'c'], ordered=False, dtype='category', name='B')
In [1]: pd.concat([df4, df5])
TypeError: categories must match existing categories when appending
Int64Index and RangeIndex#
Deprecated since version 1.4.0: In pandas 2.0, Index will become the default index type for numeric types
instead of Int64Index, Float64Index and UInt64Index and those index types
are therefore deprecated and will be removed in a future version.
RangeIndex will not be removed, as it represents an optimized version of an integer index.
Int64Index is a fundamental basic index in pandas. This is an immutable array
implementing an ordered, sliceable set.
RangeIndex is a sub-class of Int64Index that provides the default index for all NDFrame objects.
RangeIndex is an optimized version of Int64Index that can represent a monotonic ordered set. These are analogous to Python range types.
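For illustration, a minimal sketch (the exact class shown in the repr depends on the pandas version, per the deprecation note above):
pd.Series([10, 20, 30]).index   # RangeIndex(start=0, stop=3, step=1) -- the default index
pd.Index([10, 20, 30])          # an integer index (Int64Index in the version documented here)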
Float64Index#
Deprecated since version 1.4.0: Index will become the default index type for numeric types in the future
instead of Int64Index, Float64Index and UInt64Index and those index types
are therefore deprecated and will be removed in a future version of Pandas.
RangeIndex will not be removed as it represents an optimized version of an integer index.
By default a Float64Index will be automatically created when passing floating, or mixed-integer-floating values in index creation.
This enables a pure label-based slicing paradigm that makes [],ix,loc for scalar indexing and slicing work exactly the
same.
In [171]: indexf = pd.Index([1.5, 2, 3, 4.5, 5])
In [172]: indexf
Out[172]: Float64Index([1.5, 2.0, 3.0, 4.5, 5.0], dtype='float64')
In [173]: sf = pd.Series(range(5), index=indexf)
In [174]: sf
Out[174]:
1.5 0
2.0 1
3.0 2
4.5 3
5.0 4
dtype: int64
Scalar selection for [],.loc will always be label based. An integer will match an equal float index (e.g. 3 is equivalent to 3.0).
In [175]: sf[3]
Out[175]: 2
In [176]: sf[3.0]
Out[176]: 2
In [177]: sf.loc[3]
Out[177]: 2
In [178]: sf.loc[3.0]
Out[178]: 2
The only positional indexing is via iloc.
In [179]: sf.iloc[3]
Out[179]: 3
A scalar index that is not found will raise a KeyError.
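For example, a minimal sketch using the sf series defined above (2.5 is not one of its labels):
sf.loc[2.5]   # raises KeyError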
Slicing is primarily on the values of the index when using [],ix,loc, and
always positional when using iloc. The exception is when the slice is
boolean, in which case it will always be positional.
In [180]: sf[2:4]
Out[180]:
2.0 1
3.0 2
dtype: int64
In [181]: sf.loc[2:4]
Out[181]:
2.0 1
3.0 2
dtype: int64
In [182]: sf.iloc[2:4]
Out[182]:
3.0 2
4.5 3
dtype: int64
In float indexes, slicing using floats is allowed.
In [183]: sf[2.1:4.6]
Out[183]:
3.0 2
4.5 3
dtype: int64
In [184]: sf.loc[2.1:4.6]
Out[184]:
3.0 2
4.5 3
dtype: int64
In non-float indexes, slicing using floats will raise a TypeError.
In [1]: pd.Series(range(5))[3.5]
TypeError: the label [3.5] is not a proper indexer for this index type (Int64Index)
In [1]: pd.Series(range(5))[3.5:4.5]
TypeError: the slice start [3.5] is not a proper indexer for this index type (Int64Index)
Here is a typical use-case for using this type of indexing. Imagine that you have a somewhat
irregular timedelta-like indexing scheme, but the data is recorded as floats. This could, for
example, be millisecond offsets.
In [185]: dfir = pd.concat(
.....: [
.....: pd.DataFrame(
.....: np.random.randn(5, 2), index=np.arange(5) * 250.0, columns=list("AB")
.....: ),
.....: pd.DataFrame(
.....: np.random.randn(6, 2),
.....: index=np.arange(4, 10) * 250.1,
.....: columns=list("AB"),
.....: ),
.....: ]
.....: )
.....:
In [186]: dfir
Out[186]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
1000.4 -0.179734 0.993962
1250.5 -0.212673 0.909872
1500.6 -0.733333 -0.349893
1750.7 0.456434 -0.306735
2000.8 0.553396 0.166221
2250.9 -0.101684 -0.734907
Selection operations then will always work on a value basis, for all selection operators.
In [187]: dfir[0:1000.4]
Out[187]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
1000.4 -0.179734 0.993962
In [188]: dfir.loc[0:1001, "A"]
Out[188]:
0.0 -0.435772
250.0 -0.808286
500.0 -1.815703
750.0 -0.243487
1000.0 1.162969
1000.4 -0.179734
Name: A, dtype: float64
In [189]: dfir.loc[1000.4]
Out[189]:
A -0.179734
B 0.993962
Name: 1000.4, dtype: float64
You could retrieve the first 1 second (1000 ms) of data as such:
In [190]: dfir[0:1000]
Out[190]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
If you need integer based selection, you should use iloc:
In [191]: dfir.iloc[0:5]
Out[191]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
IntervalIndex#
IntervalIndex together with its own dtype, IntervalDtype
as well as the Interval scalar type, allow first-class support in pandas
for interval notation.
The IntervalIndex allows some unique indexing and is also used as a
return type for the categories in cut() and qcut().
Indexing with an IntervalIndex#
An IntervalIndex can be used in Series and in DataFrame as the index.
In [192]: df = pd.DataFrame(
.....: {"A": [1, 2, 3, 4]}, index=pd.IntervalIndex.from_breaks([0, 1, 2, 3, 4])
.....: )
.....:
In [193]: df
Out[193]:
A
(0, 1] 1
(1, 2] 2
(2, 3] 3
(3, 4] 4
Label based indexing via .loc along the edges of an interval works as you would expect,
selecting that particular interval.
In [194]: df.loc[2]
Out[194]:
A 2
Name: (1, 2], dtype: int64
In [195]: df.loc[[2, 3]]
Out[195]:
A
(1, 2] 2
(2, 3] 3
If you select a label contained within an interval, this will also select the interval.
In [196]: df.loc[2.5]
Out[196]:
A 3
Name: (2, 3], dtype: int64
In [197]: df.loc[[2.5, 3.5]]
Out[197]:
A
(2, 3] 3
(3, 4] 4
Selecting using an Interval will only return exact matches (starting from pandas 0.25.0).
In [198]: df.loc[pd.Interval(1, 2)]
Out[198]:
A 2
Name: (1, 2], dtype: int64
Trying to select an Interval that is not exactly contained in the IntervalIndex will raise a KeyError.
In [7]: df.loc[pd.Interval(0.5, 2.5)]
---------------------------------------------------------------------------
KeyError: Interval(0.5, 2.5, closed='right')
Selecting all Intervals that overlap a given Interval can be performed using the
overlaps() method to create a boolean indexer.
In [199]: idxr = df.index.overlaps(pd.Interval(0.5, 2.5))
In [200]: idxr
Out[200]: array([ True, True, True, False])
In [201]: df[idxr]
Out[201]:
A
(0, 1] 1
(1, 2] 2
(2, 3] 3
Binning data with cut and qcut#
cut() and qcut() both return a Categorical object, and the bins they
create are stored as an IntervalIndex in its .categories attribute.
In [202]: c = pd.cut(range(4), bins=2)
In [203]: c
Out[203]:
[(-0.003, 1.5], (-0.003, 1.5], (1.5, 3.0], (1.5, 3.0]]
Categories (2, interval[float64, right]): [(-0.003, 1.5] < (1.5, 3.0]]
In [204]: c.categories
Out[204]: IntervalIndex([(-0.003, 1.5], (1.5, 3.0]], dtype='interval[float64, right]')
cut() also accepts an IntervalIndex for its bins argument, which enables
a useful pandas idiom. First, we call cut() with some data and bins set to a
fixed number, to generate the bins. Then, we pass the values of .categories as the
bins argument in subsequent calls to cut(), supplying new data which will be
binned into the same bins.
In [205]: pd.cut([0, 3, 5, 1], bins=c.categories)
Out[205]:
[(-0.003, 1.5], (1.5, 3.0], NaN, (-0.003, 1.5]]
Categories (2, interval[float64, right]): [(-0.003, 1.5] < (1.5, 3.0]]
Any value which falls outside all bins will be assigned a NaN value.
Generating ranges of intervals#
If we need intervals on a regular frequency, we can use the interval_range() function
to create an IntervalIndex using various combinations of start, end, and periods.
The default frequency for interval_range is 1 for numeric intervals, and a calendar day for
datetime-like intervals:
In [206]: pd.interval_range(start=0, end=5)
Out[206]: IntervalIndex([(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]], dtype='interval[int64, right]')
In [207]: pd.interval_range(start=pd.Timestamp("2017-01-01"), periods=4)
Out[207]: IntervalIndex([(2017-01-01, 2017-01-02], (2017-01-02, 2017-01-03], (2017-01-03, 2017-01-04], (2017-01-04, 2017-01-05]], dtype='interval[datetime64[ns], right]')
In [208]: pd.interval_range(end=pd.Timedelta("3 days"), periods=3)
Out[208]: IntervalIndex([(0 days 00:00:00, 1 days 00:00:00], (1 days 00:00:00, 2 days 00:00:00], (2 days 00:00:00, 3 days 00:00:00]], dtype='interval[timedelta64[ns], right]')
The freq parameter can be used to specify non-default frequencies, and can utilize a variety
of frequency aliases with datetime-like intervals:
In [209]: pd.interval_range(start=0, periods=5, freq=1.5)
Out[209]: IntervalIndex([(0.0, 1.5], (1.5, 3.0], (3.0, 4.5], (4.5, 6.0], (6.0, 7.5]], dtype='interval[float64, right]')
In [210]: pd.interval_range(start=pd.Timestamp("2017-01-01"), periods=4, freq="W")
Out[210]: IntervalIndex([(2017-01-01, 2017-01-08], (2017-01-08, 2017-01-15], (2017-01-15, 2017-01-22], (2017-01-22, 2017-01-29]], dtype='interval[datetime64[ns], right]')
In [211]: pd.interval_range(start=pd.Timedelta("0 days"), periods=3, freq="9H")
Out[211]: IntervalIndex([(0 days 00:00:00, 0 days 09:00:00], (0 days 09:00:00, 0 days 18:00:00], (0 days 18:00:00, 1 days 03:00:00]], dtype='interval[timedelta64[ns], right]')
Additionally, the closed parameter can be used to specify which side(s) the intervals
are closed on. Intervals are closed on the right side by default.
In [212]: pd.interval_range(start=0, end=4, closed="both")
Out[212]: IntervalIndex([[0, 1], [1, 2], [2, 3], [3, 4]], dtype='interval[int64, both]')
In [213]: pd.interval_range(start=0, end=4, closed="neither")
Out[213]: IntervalIndex([(0, 1), (1, 2), (2, 3), (3, 4)], dtype='interval[int64, neither]')
Specifying start, end, and periods will generate a range of evenly spaced
intervals from start to end inclusively, with periods number of elements
in the resulting IntervalIndex:
In [214]: pd.interval_range(start=0, end=6, periods=4)
Out[214]: IntervalIndex([(0.0, 1.5], (1.5, 3.0], (3.0, 4.5], (4.5, 6.0]], dtype='interval[float64, right]')
In [215]: pd.interval_range(pd.Timestamp("2018-01-01"), pd.Timestamp("2018-02-28"), periods=3)
Out[215]: IntervalIndex([(2018-01-01, 2018-01-20 08:00:00], (2018-01-20 08:00:00, 2018-02-08 16:00:00], (2018-02-08 16:00:00, 2018-02-28]], dtype='interval[datetime64[ns], right]')
Miscellaneous indexing FAQ#
Integer indexing#
Label-based indexing with integer axis labels is a thorny topic. It has been
discussed heavily on mailing lists and among various members of the scientific
Python community. In pandas, our general viewpoint is that labels matter more
than integer locations. Therefore, with an integer axis index only
label-based indexing is possible with the standard tools like .loc. The
following code will generate exceptions:
In [216]: s = pd.Series(range(5))
In [217]: s[-1]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File ~/work/pandas/pandas/pandas/core/indexes/range.py:391, in RangeIndex.get_loc(self, key, method, tolerance)
390 try:
--> 391 return self._range.index(new_key)
392 except ValueError as err:
ValueError: -1 is not in range
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
Cell In[217], line 1
----> 1 s[-1]
File ~/work/pandas/pandas/pandas/core/series.py:981, in Series.__getitem__(self, key)
978 return self._values[key]
980 elif key_is_scalar:
--> 981 return self._get_value(key)
983 if is_hashable(key):
984 # Otherwise index.get_value will raise InvalidIndexError
985 try:
986 # For labels that don't resolve as scalars like tuples and frozensets
File ~/work/pandas/pandas/pandas/core/series.py:1089, in Series._get_value(self, label, takeable)
1086 return self._values[label]
1088 # Similar to Index.get_value, but we do not fall back to positional
-> 1089 loc = self.index.get_loc(label)
1090 return self.index._get_values_for_loc(self, loc, label)
File ~/work/pandas/pandas/pandas/core/indexes/range.py:393, in RangeIndex.get_loc(self, key, method, tolerance)
391 return self._range.index(new_key)
392 except ValueError as err:
--> 393 raise KeyError(key) from err
394 self._check_indexing_error(key)
395 raise KeyError(key)
KeyError: -1
In [218]: df = pd.DataFrame(np.random.randn(5, 4))
In [219]: df
Out[219]:
0 1 2 3
0 -0.130121 -0.476046 0.759104 0.213379
1 -0.082641 0.448008 0.656420 -1.051443
2 0.594956 -0.151360 -0.069303 1.221431
3 -0.182832 0.791235 0.042745 2.069775
4 1.446552 0.019814 -1.389212 -0.702312
In [220]: df.loc[-2:]
Out[220]:
0 1 2 3
0 -0.130121 -0.476046 0.759104 0.213379
1 -0.082641 0.448008 0.656420 -1.051443
2 0.594956 -0.151360 -0.069303 1.221431
3 -0.182832 0.791235 0.042745 2.069775
4 1.446552 0.019814 -1.389212 -0.702312
This deliberate decision was made to prevent ambiguities and subtle bugs (many
users reported finding bugs when the API change was made to stop “falling back”
on position-based indexing).
Non-monotonic indexes require exact matches#
If the index of a Series or DataFrame is monotonically increasing or decreasing, then the bounds
of a label-based slice can be outside the range of the index, much like slice indexing a
normal Python list. Monotonicity of an index can be tested with the is_monotonic_increasing() and
is_monotonic_decreasing() attributes.
In [221]: df = pd.DataFrame(index=[2, 3, 3, 4, 5], columns=["data"], data=list(range(5)))
In [222]: df.index.is_monotonic_increasing
Out[222]: True
# no rows 0 or 1, but still returns rows 2, 3 (both of them), and 4:
In [223]: df.loc[0:4, :]
Out[223]:
data
2 0
3 1
3 2
4 3
# slice is outside the index, so an empty DataFrame is returned
In [224]: df.loc[13:15, :]
Out[224]:
Empty DataFrame
Columns: [data]
Index: []
On the other hand, if the index is not monotonic, then both slice bounds must be
unique members of the index.
In [225]: df = pd.DataFrame(index=[2, 3, 1, 4, 3, 5], columns=["data"], data=list(range(6)))
In [226]: df.index.is_monotonic_increasing
Out[226]: False
# OK because 2 and 4 are in the index
In [227]: df.loc[2:4, :]
Out[227]:
data
2 0
3 1
1 2
4 3
# 0 is not in the index
In [9]: df.loc[0:4, :]
KeyError: 0
# 3 is not a unique label
In [11]: df.loc[2:3, :]
KeyError: 'Cannot get right slice bound for non-unique label: 3'
Index.is_monotonic_increasing and Index.is_monotonic_decreasing only check that
an index is weakly monotonic. To check for strict monotonicity, you can combine one of those with
the is_unique() attribute.
In [228]: weakly_monotonic = pd.Index(["a", "b", "c", "c"])
In [229]: weakly_monotonic
Out[229]: Index(['a', 'b', 'c', 'c'], dtype='object')
In [230]: weakly_monotonic.is_monotonic_increasing
Out[230]: True
In [231]: weakly_monotonic.is_monotonic_increasing & weakly_monotonic.is_unique
Out[231]: False
Endpoints are inclusive#
Compared with standard Python sequence slicing in which the slice endpoint is
not inclusive, label-based slicing in pandas is inclusive. The primary
reason for this is that it is often not possible to easily determine the
“successor” or next element after a particular label in an index. For example,
consider the following Series:
In [232]: s = pd.Series(np.random.randn(6), index=list("abcdef"))
In [233]: s
Out[233]:
a 0.301379
b 1.240445
c -0.846068
d -0.043312
e -1.658747
f -0.819549
dtype: float64
Suppose we wished to slice from c to e, using integers this would be
accomplished as such:
In [234]: s[2:5]
Out[234]:
c -0.846068
d -0.043312
e -1.658747
dtype: float64
However, if you only had c and e, determining the next element in the
index can be somewhat complicated. For example, the following does not work:
s.loc['c':'e' + 1]
A very common use case is to limit a time series to start and end at two
specific dates. To enable this, we made the design choice to make label-based
slicing include both endpoints:
In [235]: s.loc["c":"e"]
Out[235]:
c -0.846068
d -0.043312
e -1.658747
dtype: float64
This is most definitely a “practicality beats purity” sort of thing, but it is
something to watch out for if you expect label-based slicing to behave exactly
in the way that standard Python integer slicing works.
Indexing potentially changes underlying Series dtype#
Different indexing operations can potentially change the dtype of a Series.
In [236]: series1 = pd.Series([1, 2, 3])
In [237]: series1.dtype
Out[237]: dtype('int64')
In [238]: res = series1.reindex([0, 4])
In [239]: res.dtype
Out[239]: dtype('float64')
In [240]: res
Out[240]:
0 1.0
4 NaN
dtype: float64
In [241]: series2 = pd.Series([True])
In [242]: series2.dtype
Out[242]: dtype('bool')
In [243]: res = series2.reindex_like(series1)
In [244]: res.dtype
Out[244]: dtype('O')
In [245]: res
Out[245]:
0 True
1 NaN
2 NaN
dtype: object
This is because the (re)indexing operations above silently insert NaNs and the dtype
changes accordingly. This can cause some issues when using numpy ufuncs
such as numpy.logical_and.
See GH2388 for a more
detailed discussion.
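A minimal sketch of the pitfall (assuming pandas is imported as pd; the fillna/astype step is just one way to restore a usable boolean dtype, not the only option):
res = pd.Series([True]).reindex([0, 1, 2])   # dtype becomes object, missing labels filled with NaN
res.dtype                                    # dtype('O')
res.fillna(False).astype(bool)               # back to a boolean Series before applying numpy ufuncs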
| 223
| 1,280
|
Build hierarchy in pandas
I am looking to build a hierarchy of who reports to who and create the reporting structure for each record.
My raw data would consist of two columns:
e_id and s_id:
and I want to create a variable with a dictionary containing the structure like below. leftmost value of the list would be climbing the hierarchy while the dictionary key is the record e_id value.
e_id s_id structure
1 {1:[null]}
2 3 {2:[2,3]} circular so infinite sequence
3 2 {3:[3,2]} circular so infinite sequence
4 6 {4:[null,1,6]}
5 4 {5:[null,1,6,4]}
6 1 {6:[null,1]}
From my understanding this would be an apply method, I am just confused with how to set it up to read other rows and return the s_id value of that row.
Thank you in advance!
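One possible sketch of that idea (the convention of flagging "circular" and stopping is an assumption, since the desired output for circular chains is not fully specified):
import numpy as np
import pandas as pd

df = pd.DataFrame({"e_id": [1, 2, 3, 4, 5, 6],
                   "s_id": [np.nan, 3, 2, 6, 4, 1]})
parent = dict(zip(df["e_id"], df["s_id"]))   # e_id -> supervisor's e_id

def chain(e):
    # Climb the hierarchy from e, stopping at NaN (top) or when a cycle is detected.
    seen, out = set(), []
    cur = parent.get(e)
    while True:
        if pd.isna(cur):
            out.append(None)                 # reached the top of the hierarchy
            break
        if cur in seen:
            out.append("circular")           # flag the infinite sequence instead of looping forever
            break
        seen.add(cur)
        out.append(int(cur))
        cur = parent.get(cur)
    return out[::-1]                         # leftmost value is the top of the hierarchy

df["structure"] = df["e_id"].map(lambda e: {e: chain(e)})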
|
67,863,780
|
Python - Pandas Module - Filter to show as string and NOT boolean
|
<p>I have starting to use pandas module, and i am trying to use filter on a column to find a piece of text. I am using the below syntax, and while this works to some degree, this is showing if there is a match and returning a boolean value of true or false.</p>
<p><strong>Example Input Data</strong></p>
<p><a href="https://i.stack.imgur.com/Nd5yB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Nd5yB.png" alt="enter image description here" /></a></p>
<p><strong>Syntax</strong></p>
<pre><code>test = data["Date"].str.contains("Tue 02 Feb 2021")
print(test)
</code></pre>
<p><strong>Example Output Data</strong></p>
<p><a href="https://i.stack.imgur.com/oREoj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oREoj.png" alt="enter image description here" /></a></p>
<p>I would like this to filter and only show text which i have put into the syntax as below:</p>
<p><a href="https://i.stack.imgur.com/625j6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/625j6.png" alt="enter image description here" /></a></p>
<p>Could anybody please shed some light on this.</p>
| 67,863,824
| 2021-06-06T21:17:57.180000
| 1
| null | 0
| 37
|
python|pandas
|
<p>try:</p>
<pre class="lang-py prettyprint-override"><code>test = data[data["Date"].str.contains("Tue 02 Feb 2021")]
</code></pre>
<p>or:</p>
<pre class="lang-py prettyprint-override"><code>test = data[data["Date"] =="Tue 02 Feb 2021"]
</code></pre>
| 2021-06-06T21:23:21.340000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.str.contains.html
|
pandas.Series.str.contains#
pandas.Series.str.contains#
Series.str.contains(pat, case=True, flags=0, na=None, regex=True)[source]#
Test if pattern or regex is contained within a string of a Series or Index.
Return boolean Series or Index based on whether a given pattern or regex is
contained within a string of a Series or Index.
Parameters
patstrCharacter sequence or regular expression.
casebool, default TrueIf True, case sensitive.
flagsint, default 0 (no flags)Flags to pass through to the re module, e.g. re.IGNORECASE.
nascalar, optionalFill value for missing values. The default depends on dtype of the
array. For object-dtype, numpy.nan is used. For StringDtype,
pandas.NA is used.
regexbool, default TrueIf True, assumes the pat is a regular expression.
If False, treats the pat as a literal string.
Returns
Series or Index of boolean valuesA Series or Index of boolean values indicating whether the
try:
test = data[data["Date"].str.contains("Tue 02 Feb 2021")]
or:
test = data[data["Date"] =="Tue 02 Feb 2021"]
given pattern is contained within the string of each element
of the Series or Index.
See also
matchAnalogous, but stricter, relying on re.match instead of re.search.
Series.str.startswithTest if the start of each string element matches a pattern.
Series.str.endswithSame as startswith, but tests the end of string.
Examples
Returning a Series of booleans using only a literal pattern.
>>> s1 = pd.Series(['Mouse', 'dog', 'house and parrot', '23', np.NaN])
>>> s1.str.contains('og', regex=False)
0 False
1 True
2 False
3 False
4 NaN
dtype: object
Returning an Index of booleans using only a literal pattern.
>>> ind = pd.Index(['Mouse', 'dog', 'house and parrot', '23.0', np.NaN])
>>> ind.str.contains('23', regex=False)
Index([False, False, False, True, nan], dtype='object')
Specifying case sensitivity using case.
>>> s1.str.contains('oG', case=True, regex=True)
0 False
1 False
2 False
3 False
4 NaN
dtype: object
Specifying na to be False instead of NaN replaces NaN values
with False. If Series or Index does not contain NaN values
the resultant dtype will be bool, otherwise, an object dtype.
>>> s1.str.contains('og', na=False, regex=True)
0 False
1 True
2 False
3 False
4 False
dtype: bool
Returning ‘house’ or ‘dog’ when either expression occurs in a string.
>>> s1.str.contains('house|dog', regex=True)
0 False
1 True
2 True
3 False
4 NaN
dtype: object
Ignoring case sensitivity using flags with regex.
>>> import re
>>> s1.str.contains('PARROT', flags=re.IGNORECASE, regex=True)
0 False
1 False
2 True
3 False
4 NaN
dtype: object
Returning any digit using regular expression.
>>> s1.str.contains('\\d', regex=True)
0 False
1 False
2 False
3 True
4 NaN
dtype: object
Ensure pat is not a literal pattern when regex is set to True.
Note in the following example one might expect only s2[1] and s2[3] to
return True. However, ‘.0’ as a regex matches any character
followed by a 0.
>>> s2 = pd.Series(['40', '40.0', '41', '41.0', '35'])
>>> s2.str.contains('.0', regex=True)
0 True
1 True
2 False
3 True
4 False
dtype: bool
| 925
| 1,039
|
Python - Pandas Module - Filter to show as string and NOT boolean
I have starting to use pandas module, and i am trying to use filter on a column to find a piece of text. I am using the below syntax, and while this works to some degree, this is showing if there is a match and returning a boolean value of true or false.
Example Input Data
Syntax
test = data["Date"].str.contains("Tue 02 Feb 2021")
print(test)
Example Output Data
I would like this to filter and only show text which i have put into the syntax as below:
Could anybody please shed some light on this.
|
69,803,181
|
New dataframe of all non-NaN pairs of elements between two columns in pandas
|
<p>Trying to go from a DataFrame where each row is a source entity and columns are the type of relations between one or more entities like this:</p>
<pre><code>import numpy as np
import pandas as pd
i = [['a', np.nan, np.nan, ['d', 'e']],
['b', 'f', np.nan, np.nan],
['c', np.nan, 'g', 'h']]
inputs = pd.DataFrame(i, columns=['source', 'mom', 'dad', 'sibling'])
</code></pre>
<p>To one where each row includes a source's unique target entity and relation type in separate columns:</p>
<pre><code>o = [['a', 'd', 'sibling'],
['a', 'e', 'sibling'],
['b', 'f', 'mom'],
['c', 'g', 'dad'],
['c', 'h', 'sib']]
outputs = pd.DataFrame(o)
</code></pre>
<p>I've looked at pandas functionality including <code>stack()</code> and <code>explode()</code> but can't figure out how to implement a pandas-native solution. Any suggestions on how to do this efficiently?</p>
| 69,803,968
| 2021-11-01T21:55:47.780000
| 1
| null | 0
| 39
|
python|pandas
|
<p>Per @sammywemmy , melt and explode should do the trick:</p>
<pre><code>inputs.melt("source", var_name="relationship").dropna().explode('value')
</code></pre>
| 2021-11-01T23:54:14.480000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.compare.html
|
pandas.DataFrame.compare#
pandas.DataFrame.compare#
DataFrame.compare(other, align_axis=1, keep_shape=False, keep_equal=False, result_names=('self', 'other'))[source]#
Compare to another DataFrame and show the differences.
New in version 1.1.0.
Parameters
otherDataFrameObject to compare with.
align_axis{0 or ‘index’, 1 or ‘columns’}, default 1Determine which axis to align the comparison on.
0, or ‘index’Resulting differences are stacked verticallywith rows drawn alternately from self and other.
1, or ‘columns’Resulting differences are aligned horizontallywith columns drawn alternately from self and other.
keep_shapebool, default FalseIf true, all rows and columns are kept.
Otherwise, only the ones with different values are kept.
keep_equalbool, default FalseIf true, the result keeps values that are equal.
Otherwise, equal values are shown as NaNs.
result_namestuple, default (‘self’, ‘other’)Set the dataframes names in the comparison.
Per @sammywemmy , melt and explode should do the trick:
inputs.melt("source", var_name="relationship").dropna().explode('value')
New in version 1.5.0.
Returns
DataFrameDataFrame that shows the differences stacked side by side.
The resulting index will be a MultiIndex with ‘self’ and ‘other’
stacked alternately at the inner level.
Raises
ValueErrorWhen the two DataFrames don’t have identical labels or shape.
See also
Series.compareCompare with another Series and show differences.
DataFrame.equalsTest whether two objects contain the same elements.
Notes
Matching NaNs will not appear as a difference.
Can only compare identically-labeled
(i.e. same shape, identical row and column labels) DataFrames
Examples
>>> df = pd.DataFrame(
... {
... "col1": ["a", "a", "b", "b", "a"],
... "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
... "col3": [1.0, 2.0, 3.0, 4.0, 5.0]
... },
... columns=["col1", "col2", "col3"],
... )
>>> df
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
>>> df2 = df.copy()
>>> df2.loc[0, 'col1'] = 'c'
>>> df2.loc[2, 'col3'] = 4.0
>>> df2
col1 col2 col3
0 c 1.0 1.0
1 a 2.0 2.0
2 b 3.0 4.0
3 b NaN 4.0
4 a 5.0 5.0
Align the differences on columns
>>> df.compare(df2)
col1 col3
self other self other
0 a c NaN NaN
2 NaN NaN 3.0 4.0
Assign result_names
>>> df.compare(df2, result_names=("left", "right"))
col1 col3
left right left right
0 a c NaN NaN
2 NaN NaN 3.0 4.0
Stack the differences on rows
>>> df.compare(df2, align_axis=0)
col1 col3
0 self a NaN
other c NaN
2 self NaN 3.0
other NaN 4.0
Keep the equal values
>>> df.compare(df2, keep_equal=True)
col1 col3
self other self other
0 a c 1.0 1.0
2 b b 3.0 4.0
Keep all original rows and columns
>>> df.compare(df2, keep_shape=True)
col1 col2 col3
self other self other self other
0 a c NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3.0 4.0
3 NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN
Keep all original rows and columns and also all original values
>>> df.compare(df2, keep_shape=True, keep_equal=True)
col1 col2 col3
self other self other self other
0 a c 1.0 1.0 1.0 1.0
1 a a 2.0 2.0 2.0 2.0
2 b b 3.0 3.0 3.0 4.0
3 b b NaN NaN 4.0 4.0
4 a a 5.0 5.0 5.0 5.0
| 970
| 1,099
|
New dataframe of all non-NaN pairs of elements between two columns in pandas
Trying to go from a DataFrame where each row is a source entity and columns are the type of relations between one or more entities like this:
import numpy as np
import pandas as pd
i = [['a', np.nan, np.nan, ['d', 'e']],
['b', 'f', np.nan, np.nan],
['c', np.nan, 'g', 'h']]
inputs = pd.DataFrame(i, columns=['source', 'mom', 'dad', 'sibling'])
To one where each row includes a source's unique target entity and relation type in separate columns:
o = [['a', 'd', 'sibling'],
['a', 'e', 'sibling'],
['b', 'f', 'mom'],
['c', 'g', 'dad'],
['c', 'h', 'sib']]
outputs = pd.DataFrame(o)
I've looked at pandas functionality including stack() and explode() but can't figure out how to implement a pandas-native solution. Any suggestions on how to do this efficiently?
|
65,256,719
|
How to deal with 'dynamic' dataframes using pandas?
|
<p>Let's say I have the following table</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>X</th>
<th>Y</th>
<th>Z</th>
<th>mm</th>
<th>ff</th>
<th>cc</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2</td>
<td>3</td>
<td>0.2</td>
<td>0.4</td>
<td>0.3</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>0.1</td>
<td>0.3</td>
<td>0.4</td>
</tr>
</tbody>
</table>
</div>
<p>which exported as a .csv file gives the following file content:</p>
<pre><code> X,Y,Z,mm,ff,cc
1,2,3,0.2,0.4,0.3
,,,0.1,0.3,0.4
</code></pre>
<p>Now.. if the table would have only one row I can access any cell in python using pandas like:</p>
<pre><code> X = df.loc[0, 'X'] # X = 1
Y = df.loc[0, 'Y'] # Y = 2
Z = df.loc[0, 'Z'] # Z = 3
mm_1 = df.loc[0, 'mm'] # mm_1 = 0.2
ff_1 = df.loc[0, 'ff'] # ff_1 = 0.4
cc_1 = df.loc[0, 'cc'] # cc_1 = 0.3
</code></pre>
<p>and if I would like to read the cells on the second row I need to change the code like:</p>
<pre><code> mm_2 = df.loc[1, 'mm'] # mm_2 = 0.1
ff_2 = df.loc[1, 'ff'] # ff_2 = 0.3
cc_2 = df.loc[1, 'cc'] # cc_2 = 0.4
</code></pre>
<p>Now... the problem is that the original csv file can have between one row and 6 rows.</p>
<p>Let's keep it simple. If I hard code the reading of all cells (0-1) like the code above, I'm going to have problems, when the csv file has only one line, since the variables: <code>mm_2</code>, <code>ff_2</code>, <code>cc_2</code> will not find anything.</p>
<p>There is a way in pandas to deal with such situations?</p>
| 65,256,942
| 2020-12-11T18:27:12.333000
| 1
| null | 0
| 39
|
python|pandas
|
<p>You can use <code>df.iterrows()</code>, or iterate with a regular loop and skip the values that are <code>NaN</code>. The empty cells are filled with <code>NaN</code> by the DataFrame.</p>
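<p>A minimal sketch of that approach (the file name, the column names, and what you do with each value are assumptions based on the question):</p>
<pre><code>import pandas as pd

df = pd.read_csv("data.csv")             # may contain anywhere from 1 to 6 rows
values = []
for i, row in df.iterrows():             # iterates over only the rows that actually exist
    if pd.notna(row["mm"]):              # skip empty (NaN) cells instead of hard-coding row numbers
        values.append((row["mm"], row["ff"], row["cc"]))
</code></pre>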
| 2020-12-11T18:45:25.240000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.pivot_table.html
|
pandas.pivot_table#
pandas.pivot_table#
pandas.pivot_table(data, values=None, index=None, columns=None, aggfunc='mean', fill_value=None, margins=False, dropna=True, margins_name='All', observed=False, sort=True)[source]#
You can use df.iterrows(), or iterate with a regular loop and skip the values that are NaN. The empty cells are filled with NaN by the DataFrame.
Create a spreadsheet-style pivot table as a DataFrame.
The levels in the pivot table will be stored in MultiIndex objects
(hierarchical indexes) on the index and columns of the result DataFrame.
Parameters
dataDataFrame
valuescolumn to aggregate, optional
indexcolumn, Grouper, array, or list of the previousIf an array is passed, it must be the same length as the data. The
list can contain any of the other types (except list).
Keys to group by on the pivot table index. If an array is passed,
it is being used as the same manner as column values.
columnscolumn, Grouper, array, or list of the previousIf an array is passed, it must be the same length as the data. The
list can contain any of the other types (except list).
Keys to group by on the pivot table column. If an array is passed,
it is being used as the same manner as column values.
aggfuncfunction, list of functions, dict, default numpy.meanIf list of functions passed, the resulting pivot table will have
hierarchical columns whose top level are the function names
(inferred from the function objects themselves)
If dict is passed, the key is column to aggregate and value
is function or list of functions.
fill_valuescalar, default NoneValue to replace missing values with (in the resulting pivot table,
after aggregation).
marginsbool, default FalseAdd all row / columns (e.g. for subtotal / grand totals).
dropnabool, default TrueDo not include columns whose entries are all NaN. If True,
rows with a NaN value in any column will be omitted before
computing margins.
margins_namestr, default ‘All’Name of the row / column that will contain the totals
when margins is True.
observedbool, default FalseThis only applies if any of the groupers are Categoricals.
If True: only show observed values for categorical groupers.
If False: show all values for categorical groupers.
Changed in version 0.25.0.
sortbool, default TrueSpecifies if the result should be sorted.
New in version 1.3.0.
Returns
DataFrameAn Excel style pivot table.
See also
DataFrame.pivotPivot without aggregation that can handle non-numeric data.
DataFrame.meltUnpivot a DataFrame from wide to long format, optionally leaving identifiers set.
wide_to_longWide panel to long format. Less flexible but more user-friendly than melt.
Notes
Reference the user guide for more examples.
Examples
>>> df = pd.DataFrame({"A": ["foo", "foo", "foo", "foo", "foo",
... "bar", "bar", "bar", "bar"],
... "B": ["one", "one", "one", "two", "two",
... "one", "one", "two", "two"],
... "C": ["small", "large", "large", "small",
... "small", "large", "small", "small",
... "large"],
... "D": [1, 2, 2, 3, 3, 4, 5, 6, 7],
... "E": [2, 4, 5, 5, 6, 6, 8, 9, 9]})
>>> df
A B C D E
0 foo one small 1 2
1 foo one large 2 4
2 foo one large 2 5
3 foo two small 3 5
4 foo two small 3 6
5 bar one large 4 6
6 bar one small 5 8
7 bar two small 6 9
8 bar two large 7 9
This first example aggregates values by taking the sum.
>>> table = pd.pivot_table(df, values='D', index=['A', 'B'],
... columns=['C'], aggfunc=np.sum)
>>> table
C large small
A B
bar one 4.0 5.0
two 7.0 6.0
foo one 4.0 1.0
two NaN 6.0
We can also fill missing values using the fill_value parameter.
>>> table = pd.pivot_table(df, values='D', index=['A', 'B'],
... columns=['C'], aggfunc=np.sum, fill_value=0)
>>> table
C large small
A B
bar one 4 5
two 7 6
foo one 4 1
two 0 6
The next example aggregates by taking the mean across multiple columns.
>>> table = pd.pivot_table(df, values=['D', 'E'], index=['A', 'C'],
... aggfunc={'D': np.mean,
... 'E': np.mean})
>>> table
D E
A C
bar large 5.500000 7.500000
small 5.500000 8.500000
foo large 2.000000 4.500000
small 2.333333 4.333333
We can also calculate multiple types of aggregations for any given
value column.
>>> table = pd.pivot_table(df, values=['D', 'E'], index=['A', 'C'],
... aggfunc={'D': np.mean,
... 'E': [min, max, np.mean]})
>>> table
D E
mean max mean min
A C
bar large 5.500000 9 7.500000 6
small 5.500000 9 8.500000 8
foo large 2.000000 5 4.500000 4
small 2.333333 6 4.333333 2
| 225
| 395
|
How to deal with 'dynamic' dataframes using pandas?
Let's say I have the following table
X    Y    Z    mm    ff    cc
1    2    3    0.2   0.4   0.3
               0.1   0.3   0.4
which exported as a .csv file gives the following file content:
X,Y,Z,mm,ff,cc
1,2,3,0.2,0.4,0.3
,,,0.1,0.3,0.4
Now.. if the table would have only one row I can access any cell in python using pandas like:
X = df.loc[0, 'X'] # X = 1
Y = df.loc[0, 'Y'] # Y = 2
Z = df.loc[0, 'Z'] # Z = 3
mm_1 = df.loc[0, 'mm'] # mm_1 = 0.2
ff_1 = df.loc[0, 'ff'] # ff_1 = 0.4
cc_1 = df.loc[0, 'cc'] # cc_1 = 0.3
and if I would like to read the cells on the second row I need to change the code like:
mm_2 = df.loc[1, 'mm'] # mm_2 = 0.1
ff_2 = df.loc[1, 'ff'] # ff_2 = 0.3
cc_2 = df.loc[1, 'cc'] # cc_2 = 0.4
Now... the problem is that the original csv file can have between one row and 6 rows.
Let's keep it simple. If I hard code the reading of all cells (0-1) like the code above, I'm going to have problems, when the csv file has only one line, since the variables: mm_2, ff_2, cc_2 will not find anything.
There is a way in pandas to deal with such situations?
|
61,610,473
|
Extracting interval based on data/tag
|
<pre><code>times = pd.to_datetime(pd.Series(['2020-08-05','2020-08-12', '2020-08-16', '2020-08-22', '2020-08-30', '2020-09-11', '2020-09-20']))
event = [0, 0, 1, 1, 0, 0, 1]
df = pd.DataFrame({'v': event}, index=times)
</code></pre>
<p>Above is my dataframe. I am trying to extract interval where the value switched from 0 to 1.</p>
<p>My ideal out put in above case would be : </p>
<pre><code>[['2020-09-11 00:00:00', '2020-09-20 00:00:00'],
['2020-08-12 00:00:00', '2020-08-16 00:00:00']]
</code></pre>
<p>How I am approaching:
I am iterating over the df in reverse and trying to find first occurrence of '1'.
There after I am looking for first occurrence of 0. These correspond to the first interval.
I am repeating above over the df.</p>
<p>But, the output, I am getting is:</p>
<pre><code>[['2020-09-11 00:00:00', '2020-09-20 00:00:00'],
['2020-08-12 00:00:00', '2020-08-22 00:00:00']]
</code></pre>
<p>I know that the issue is because of consecutive 1 in the timeseries. But, not able to find the workaround. Any leads would be appreciated.</p>
| 61,610,677
| 2020-05-05T10:00:18.537000
| 2
| null | 1
| 39
|
pandas
|
<p>Use:</p>
<pre><code>#filter last consecutive values
df2 = df[df['v'].ne(df['v'].shift(-1))]
#filter 0,1 pattern
m1 = df['v'].eq(0) & df['v'].shift(-1).eq(1)
m2 = df['v'].eq(1) & df['v'].shift().eq(0)
#after filtering sorting index
df2 = df[m1 | m2].sort_index(ascending=False)
#convert index to list
L = [list(x) for x in zip(df2.index[1::2], df2.index[::2])]
print (L)
[[Timestamp('2020-09-11 00:00:00'), Timestamp('2020-09-20 00:00:00')],
[Timestamp('2020-08-12 00:00:00'), Timestamp('2020-08-16 00:00:00')]]
</code></pre>
| 2020-05-05T10:11:55.257000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.Interval.html
|
pandas.Interval#
pandas.Interval#
class pandas.Interval#
Immutable object implementing an Interval, a bounded slice-like interval.
Parameters
leftorderable scalarLeft bound for the interval.
rightorderable scalarRight bound for the interval.
closed{‘right’, ‘left’, ‘both’, ‘neither’}, default ‘right’Whether the interval is closed on the left-side, right-side, both or
neither. See the Notes for more detailed explanation.
Use:
#filter last consecutive values
df2 = df[df['v'].ne(df['v'].shift(-1))]
#filter 0,1 pattern
m1 = df['v'].eq(0) & df['v'].shift(-1).eq(1)
m2 = df['v'].eq(1) & df['v'].shift().eq(0)
#after filtering sorting index
df2 = df[m1 | m2].sort_index(ascending=False)
#convert index to list
L = [list(x) for x in zip(df2.index[1::2], df2.index[::2])]
print (L)
[[Timestamp('2020-09-11 00:00:00'), Timestamp('2020-09-20 00:00:00')],
[Timestamp('2020-08-12 00:00:00'), Timestamp('2020-08-16 00:00:00')]]
See also
IntervalIndexAn Index of Interval objects that are all closed on the same side.
cutConvert continuous data into discrete bins (Categorical of Interval objects).
qcutConvert continuous data into bins (Categorical of Interval objects) based on quantiles.
PeriodRepresents a period of time.
Notes
The parameters left and right must be from the same type, you must be
able to compare them and they must satisfy left <= right.
A closed interval (in mathematics denoted by square brackets) contains
its endpoints, i.e. the closed interval [0, 5] is characterized by the
conditions 0 <= x <= 5. This is what closed='both' stands for.
An open interval (in mathematics denoted by parentheses) does not contain
its endpoints, i.e. the open interval (0, 5) is characterized by the
conditions 0 < x < 5. This is what closed='neither' stands for.
Intervals can also be half-open or half-closed, i.e. [0, 5) is
described by 0 <= x < 5 (closed='left') and (0, 5] is
described by 0 < x <= 5 (closed='right').
Examples
It is possible to build Intervals of different types, like numeric ones:
>>> iv = pd.Interval(left=0, right=5)
>>> iv
Interval(0, 5, closed='right')
You can check if an element belongs to it, or if it contains another interval:
>>> 2.5 in iv
True
>>> pd.Interval(left=2, right=5, closed='both') in iv
True
You can test the bounds (closed='right', so 0 < x <= 5):
>>> 0 in iv
False
>>> 5 in iv
True
>>> 0.0001 in iv
True
Calculate its length
>>> iv.length
5
You can operate with + and * over an Interval and the operation
is applied to each of its bounds, so the result depends on the type
of the bound elements
>>> shifted_iv = iv + 3
>>> shifted_iv
Interval(3, 8, closed='right')
>>> extended_iv = iv * 10.0
>>> extended_iv
Interval(0.0, 50.0, closed='right')
To create a time interval you can use Timestamps as the bounds
>>> year_2017 = pd.Interval(pd.Timestamp('2017-01-01 00:00:00'),
... pd.Timestamp('2018-01-01 00:00:00'),
... closed='left')
>>> pd.Timestamp('2017-01-01 00:00') in year_2017
True
>>> year_2017.length
Timedelta('365 days 00:00:00')
Attributes
closed
String describing the inclusive side the intervals.
closed_left
Check if the interval is closed on the left side.
closed_right
Check if the interval is closed on the right side.
is_empty
Indicates if an interval is empty, meaning it contains no points.
left
Left bound for the interval.
length
Return the length of the Interval.
mid
Return the midpoint of the Interval.
open_left
Check if the interval is open on the left side.
open_right
Check if the interval is open on the right side.
right
Right bound for the interval.
Methods
overlaps
Check whether two Interval objects overlap.
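As a short sketch of the overlaps method listed above (the interval values here are chosen only for illustration):
iv1 = pd.Interval(0, 5, closed='right')
iv2 = pd.Interval(3, 8, closed='right')
iv1.overlaps(iv2)                                 # True -- the two intervals share the region (3, 5]
iv1.overlaps(pd.Interval(5, 8, closed='right'))   # False -- (0, 5] and (5, 8] have no point in common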
| 432
| 937
|
Extracting interval based on data/tag
times = pd.to_datetime(pd.Series(['2020-08-05','2020-08-12', '2020-08-16', '2020-08-22', '2020-08-30', '2020-09-11', '2020-09-20']))
event = [0, 0, 1, 1, 0, 0, 1]
df = pd.DataFrame({'v': event}, index=times)
Above is my dataframe. I am trying to extract the intervals where the value switched from 0 to 1.
My ideal output in the above case would be:
[['2020-09-11 00:00:00', '2020-09-20 00:00:00'],
['2020-08-12 00:00:00', '2020-08-16 00:00:00']]
How I am approaching it:
I iterate over the df in reverse and find the first occurrence of 1.
After that I look for the first occurrence of 0. These two timestamps correspond to the first interval.
I repeat this over the rest of the df.
But, the output, I am getting is:
[['2020-09-11 00:00:00', '2020-09-20 00:00:00'],
['2020-08-12 00:00:00', '2020-08-22 00:00:00']]
I know that the issue is caused by the consecutive 1s in the time series, but I am not able to find a workaround. Any leads would be appreciated.
|
63,178,700
|
Pandas - Where function over several indexes
|
<p>I'm looking to use the <code>where</code> function over a dataframe using a multiindex.</p>
<p>My dataframe looks like this :</p>
<pre><code> mw
country category date
DE Wind Onshore 2019-01-01 00:00:00+00:00 22036.50
2019-01-01 01:00:00+00:00 22748.25
2019-01-01 02:00:00+00:00 23870.25
2019-01-01 03:00:00+00:00 25921.50
FR Wind Onshore 2019-01-01 00:00:00+00:00 1637.00
2019-01-01 01:00:00+00:00 1567.00
2019-01-01 02:00:00+00:00 1556.00
2019-01-01 03:00:00+00:00 1595.00
</code></pre>
<p>I'm looking for the values under a minimum (let's say 90% of the maximum for this example) per country (DE, FR). How can I do this?</p>
<p>I tried this :</p>
<pre><code>maxValue = data.max(level=[index.country])
data = data.where(data < maxValue*0.1)
</code></pre>
<p>It does not work since maxValue has two values and data (in the where function) is a single object. (I'm not sure I'm being clear.)</p>
<h2>Edit</h2>
<p>To reproduce the dataframe:</p>
<ul>
<li>Raw data:</li>
</ul>
<pre><code> country category date mw
0 DE Wind Onshore 2019-01-01 00:00:00+00:00 22036.50
1 DE Wind Onshore 2019-01-01 01:00:00+00:00 22748.25
2 DE Wind Onshore 2019-01-01 02:00:00+00:00 23870.25
3 DE Wind Onshore 2019-01-01 03:00:00+00:00 25921.50
4 FR Wind Onshore 2019-01-01 00:00:00+00:00 1637.00
5 FR Wind Onshore 2019-01-01 01:00:00+00:00 1567.00
6 FR Wind Onshore 2019-01-01 02:00:00+00:00 1556.00
7 FR Wind Onshore 2019-01-01 03:00:00+00:00 1595.00
</code></pre>
<ul>
<li>the codeline</li>
</ul>
<pre><code>pd.read_clipboard(sep='\s\s+').set_index(['country', 'category', 'date'])
</code></pre>
| 63,179,197
| 2020-07-30T17:53:37.857000
| 1
| 0
| 0
| 42
|
python|pandas
|
<p>First, get the max value per country. Try:</p>
<pre><code>data = data.assign(max_value = data.groupby('country').transform('max'))
</code></pre>
<p>Now you have a row-by-row <code>max_value</code>. You can just:</p>
<pre><code>data_filtered = data.loc[data.mw < data.max_value * 0.1]
</code></pre>
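<p>Alternatively, a minimal sketch of the same idea that avoids the extra column, assuming the question's dataframe with its <code>('country', 'category', 'date')</code> index and <code>mw</code> column:</p>
<pre><code>max_per_country = data.groupby(level='country')['mw'].transform('max')
data_filtered = data[data['mw'] < max_per_country * 0.1]
</code></pre>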
| 2020-07-30T18:26:46.567000
| 0
|
https://pandas.pydata.org/docs/user_guide/advanced.html
|
MultiIndex / advanced indexing#
MultiIndex / advanced indexing#
This section covers indexing with a MultiIndex
and other advanced indexing features.
See the Indexing and Selecting Data for general indexing documentation.
Warning
Whether a copy or a reference is returned for a setting operation may
depend on the context. This is sometimes called chained assignment and
should be avoided. See Returning a View versus Copy.
See the cookbook for some advanced strategies.
Hierarchical indexing (MultiIndex)#
Hierarchical / Multi-level indexing is very exciting as it opens the door to some
quite sophisticated data analysis and manipulation, especially for working with
First, get the max value per country. Try:
data = data.assign(max_value = data.groupby('country').transform('max'))
Now you have a row-by-row max_value. You can just:
data_filtered = data.loc[data.mw < data.max_value * 0.1]
higher dimensional data. In essence, it enables you to store and manipulate
data with an arbitrary number of dimensions in lower dimensional data
structures like Series (1d) and DataFrame (2d).
In this section, we will show what exactly we mean by “hierarchical” indexing
and how it integrates with all of the pandas indexing functionality
described above and in prior sections. Later, when discussing group by and pivoting and reshaping data, we’ll show
non-trivial applications to illustrate how it aids in structuring data for
analysis.
See the cookbook for some advanced strategies.
Creating a MultiIndex (hierarchical index) object#
The MultiIndex object is the hierarchical analogue of the standard
Index object which typically stores the axis labels in pandas objects. You
can think of MultiIndex as an array of tuples where each tuple is unique. A
MultiIndex can be created from a list of arrays (using
MultiIndex.from_arrays()), an array of tuples (using
MultiIndex.from_tuples()), a crossed set of iterables (using
MultiIndex.from_product()), or a DataFrame (using
MultiIndex.from_frame()). The Index constructor will attempt to return
a MultiIndex when it is passed a list of tuples. The following examples
demonstrate different ways to initialize MultiIndexes.
In [1]: arrays = [
...: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
...: ["one", "two", "one", "two", "one", "two", "one", "two"],
...: ]
...:
In [2]: tuples = list(zip(*arrays))
In [3]: tuples
Out[3]:
[('bar', 'one'),
('bar', 'two'),
('baz', 'one'),
('baz', 'two'),
('foo', 'one'),
('foo', 'two'),
('qux', 'one'),
('qux', 'two')]
In [4]: index = pd.MultiIndex.from_tuples(tuples, names=["first", "second"])
In [5]: index
Out[5]:
MultiIndex([('bar', 'one'),
('bar', 'two'),
('baz', 'one'),
('baz', 'two'),
('foo', 'one'),
('foo', 'two'),
('qux', 'one'),
('qux', 'two')],
names=['first', 'second'])
In [6]: s = pd.Series(np.random.randn(8), index=index)
In [7]: s
Out[7]:
first second
bar one 0.469112
two -0.282863
baz one -1.509059
two -1.135632
foo one 1.212112
two -0.173215
qux one 0.119209
two -1.044236
dtype: float64
When you want every pairing of the elements in two iterables, it can be easier
to use the MultiIndex.from_product() method:
In [8]: iterables = [["bar", "baz", "foo", "qux"], ["one", "two"]]
In [9]: pd.MultiIndex.from_product(iterables, names=["first", "second"])
Out[9]:
MultiIndex([('bar', 'one'),
('bar', 'two'),
('baz', 'one'),
('baz', 'two'),
('foo', 'one'),
('foo', 'two'),
('qux', 'one'),
('qux', 'two')],
names=['first', 'second'])
You can also construct a MultiIndex from a DataFrame directly, using
the method MultiIndex.from_frame(). This is a complementary method to
MultiIndex.to_frame().
In [10]: df = pd.DataFrame(
....: [["bar", "one"], ["bar", "two"], ["foo", "one"], ["foo", "two"]],
....: columns=["first", "second"],
....: )
....:
In [11]: pd.MultiIndex.from_frame(df)
Out[11]:
MultiIndex([('bar', 'one'),
('bar', 'two'),
('foo', 'one'),
('foo', 'two')],
names=['first', 'second'])
As a convenience, you can pass a list of arrays directly into Series or
DataFrame to construct a MultiIndex automatically:
In [12]: arrays = [
....: np.array(["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"]),
....: np.array(["one", "two", "one", "two", "one", "two", "one", "two"]),
....: ]
....:
In [13]: s = pd.Series(np.random.randn(8), index=arrays)
In [14]: s
Out[14]:
bar one -0.861849
two -2.104569
baz one -0.494929
two 1.071804
foo one 0.721555
two -0.706771
qux one -1.039575
two 0.271860
dtype: float64
In [15]: df = pd.DataFrame(np.random.randn(8, 4), index=arrays)
In [16]: df
Out[16]:
0 1 2 3
bar one -0.424972 0.567020 0.276232 -1.087401
two -0.673690 0.113648 -1.478427 0.524988
baz one 0.404705 0.577046 -1.715002 -1.039268
two -0.370647 -1.157892 -1.344312 0.844885
foo one 1.075770 -0.109050 1.643563 -1.469388
two 0.357021 -0.674600 -1.776904 -0.968914
qux one -1.294524 0.413738 0.276662 -0.472035
two -0.013960 -0.362543 -0.006154 -0.923061
All of the MultiIndex constructors accept a names argument which stores
string names for the levels themselves. If no names are provided, None will
be assigned:
In [17]: df.index.names
Out[17]: FrozenList([None, None])
This index can back any axis of a pandas object, and the number of levels
of the index is up to you:
In [18]: df = pd.DataFrame(np.random.randn(3, 8), index=["A", "B", "C"], columns=index)
In [19]: df
Out[19]:
first bar baz ... foo qux
second one two one ... two one two
A 0.895717 0.805244 -1.206412 ... 1.340309 -1.170299 -0.226169
B 0.410835 0.813850 0.132003 ... -1.187678 1.130127 -1.436737
C -1.413681 1.607920 1.024180 ... -2.211372 0.974466 -2.006747
[3 rows x 8 columns]
In [20]: pd.DataFrame(np.random.randn(6, 6), index=index[:6], columns=index[:6])
Out[20]:
first bar baz foo
second one two one two one two
first second
bar one -0.410001 -0.078638 0.545952 -1.219217 -1.226825 0.769804
two -1.281247 -0.727707 -0.121306 -0.097883 0.695775 0.341734
baz one 0.959726 -1.110336 -0.619976 0.149748 -0.732339 0.687738
two 0.176444 0.403310 -0.154951 0.301624 -2.179861 -1.369849
foo one -0.954208 1.462696 -1.743161 -0.826591 -0.345352 1.314232
two 0.690579 0.995761 2.396780 0.014871 3.357427 -0.317441
We’ve “sparsified” the higher levels of the indexes to make the console output a
bit easier on the eyes. Note that how the index is displayed can be controlled using the
multi_sparse option in pandas.set_option():
In [21]: with pd.option_context("display.multi_sparse", False):
....: df
....:
It’s worth keeping in mind that there’s nothing preventing you from using
tuples as atomic labels on an axis:
In [22]: pd.Series(np.random.randn(8), index=tuples)
Out[22]:
(bar, one) -1.236269
(bar, two) 0.896171
(baz, one) -0.487602
(baz, two) -0.082240
(foo, one) -2.182937
(foo, two) 0.380396
(qux, one) 0.084844
(qux, two) 0.432390
dtype: float64
The reason that the MultiIndex matters is that it can allow you to do
grouping, selection, and reshaping operations as we will describe below and in
subsequent areas of the documentation. As you will see in later sections, you
can find yourself working with hierarchically-indexed data without creating a
MultiIndex explicitly yourself. However, when loading data from a file, you
may wish to generate your own MultiIndex when preparing the data set.
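For instance, a minimal sketch (the file name and column names below are only placeholders, not from this guide):
df = pd.read_csv("sales.csv")              # hypothetical file containing 'year' and 'region' columns
df = df.set_index(["year", "region"])      # the two columns now form a two-level MultiIndex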
Reconstructing the level labels#
The method get_level_values() will return a vector of the labels for each
location at a particular level:
In [23]: index.get_level_values(0)
Out[23]: Index(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'], dtype='object', name='first')
In [24]: index.get_level_values("second")
Out[24]: Index(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'], dtype='object', name='second')
Basic indexing on axis with MultiIndex#
One of the important features of hierarchical indexing is that you can select
data by a “partial” label identifying a subgroup in the data. Partial
selection “drops” levels of the hierarchical index in the result in a
completely analogous way to selecting a column in a regular DataFrame:
In [25]: df["bar"]
Out[25]:
second one two
A 0.895717 0.805244
B 0.410835 0.813850
C -1.413681 1.607920
In [26]: df["bar", "one"]
Out[26]:
A 0.895717
B 0.410835
C -1.413681
Name: (bar, one), dtype: float64
In [27]: df["bar"]["one"]
Out[27]:
A 0.895717
B 0.410835
C -1.413681
Name: one, dtype: float64
In [28]: s["qux"]
Out[28]:
one -1.039575
two 0.271860
dtype: float64
See Cross-section with hierarchical index for how to select
on a deeper level.
Defined levels#
The MultiIndex keeps all the defined levels of an index, even
if they are not actually used. When slicing an index, you may notice this.
For example:
In [29]: df.columns.levels # original MultiIndex
Out[29]: FrozenList([['bar', 'baz', 'foo', 'qux'], ['one', 'two']])
In [30]: df[["foo","qux"]].columns.levels # sliced
Out[30]: FrozenList([['bar', 'baz', 'foo', 'qux'], ['one', 'two']])
This is done to avoid a recomputation of the levels in order to make slicing
highly performant. If you want to see only the used levels, you can use the
get_level_values() method.
In [31]: df[["foo", "qux"]].columns.to_numpy()
Out[31]:
array([('foo', 'one'), ('foo', 'two'), ('qux', 'one'), ('qux', 'two')],
dtype=object)
# for a specific level
In [32]: df[["foo", "qux"]].columns.get_level_values(0)
Out[32]: Index(['foo', 'foo', 'qux', 'qux'], dtype='object', name='first')
To reconstruct the MultiIndex with only the used levels, the
remove_unused_levels() method may be used.
In [33]: new_mi = df[["foo", "qux"]].columns.remove_unused_levels()
In [34]: new_mi.levels
Out[34]: FrozenList([['foo', 'qux'], ['one', 'two']])
Data alignment and using reindex#
Operations between differently-indexed objects having MultiIndex on the
axes will work as you expect; data alignment will work the same as an Index of
tuples:
In [35]: s + s[:-2]
Out[35]:
bar one -1.723698
two -4.209138
baz one -0.989859
two 2.143608
foo one 1.443110
two -1.413542
qux one NaN
two NaN
dtype: float64
In [36]: s + s[::2]
Out[36]:
bar one -1.723698
two NaN
baz one -0.989859
two NaN
foo one 1.443110
two NaN
qux one -2.079150
two NaN
dtype: float64
The reindex() method of Series/DataFrames can be
called with another MultiIndex, or even a list or array of tuples:
In [37]: s.reindex(index[:3])
Out[37]:
first second
bar one -0.861849
two -2.104569
baz one -0.494929
dtype: float64
In [38]: s.reindex([("foo", "two"), ("bar", "one"), ("qux", "one"), ("baz", "one")])
Out[38]:
foo two -0.706771
bar one -0.861849
qux one -1.039575
baz one -0.494929
dtype: float64
Advanced indexing with hierarchical index#
Syntactically integrating MultiIndex in advanced indexing with .loc is a
bit challenging, but we’ve made every effort to do so. In general, MultiIndex
keys take the form of tuples. For example, the following works as you would expect:
In [39]: df = df.T
In [40]: df
Out[40]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466
two -0.226169 -1.436737 -2.006747
In [41]: df.loc[("bar", "two")]
Out[41]:
A 0.805244
B 0.813850
C 1.607920
Name: (bar, two), dtype: float64
Note that df.loc['bar', 'two'] would also work in this example, but this shorthand
notation can lead to ambiguity in general.
If you also want to index a specific column with .loc, you must use a tuple
like this:
In [42]: df.loc[("bar", "two"), "A"]
Out[42]: 0.8052440253863785
You don’t have to specify all levels of the MultiIndex by passing only the
first elements of the tuple. For example, you can use “partial” indexing to
get all elements with bar in the first level as follows:
In [43]: df.loc["bar"]
Out[43]:
A B C
second
one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
This is a shortcut for the slightly more verbose notation df.loc[('bar',),] (equivalent
to df.loc['bar',] in this example).
“Partial” slicing also works quite nicely.
In [44]: df.loc["baz":"foo"]
Out[44]:
A B C
first second
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
You can slice with a ‘range’ of values, by providing a slice of tuples.
In [45]: df.loc[("baz", "two"):("qux", "one")]
Out[45]:
A B C
first second
baz two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466
In [46]: df.loc[("baz", "two"):"foo"]
Out[46]:
A B C
first second
baz two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
Passing a list of labels or tuples works similar to reindexing:
In [47]: df.loc[[("bar", "two"), ("qux", "one")]]
Out[47]:
A B C
first second
bar two 0.805244 0.813850 1.607920
qux one -1.170299 1.130127 0.974466
Note
It is important to note that tuples and lists are not treated identically
in pandas when it comes to indexing. Whereas a tuple is interpreted as one
multi-level key, a list is used to specify several keys. Or in other words,
tuples go horizontally (traversing levels), lists go vertically (scanning levels).
Importantly, a list of tuples indexes several complete MultiIndex keys,
whereas a tuple of lists refer to several values within a level:
In [48]: s = pd.Series(
....: [1, 2, 3, 4, 5, 6],
....: index=pd.MultiIndex.from_product([["A", "B"], ["c", "d", "e"]]),
....: )
....:
In [49]: s.loc[[("A", "c"), ("B", "d")]] # list of tuples
Out[49]:
A c 1
B d 5
dtype: int64
In [50]: s.loc[(["A", "B"], ["c", "d"])] # tuple of lists
Out[50]:
A c 1
d 2
B c 4
d 5
dtype: int64
Using slicers#
You can slice a MultiIndex by providing multiple indexers.
You can provide any of the selectors as if you are indexing by label, see Selection by Label,
including slices, lists of labels, labels, and boolean indexers.
You can use slice(None) to select all the contents of that level. You do not need to specify all the
deeper levels, they will be implied as slice(None).
As usual, both sides of the slicers are included as this is label indexing.
Warning
You should specify all axes in the .loc specifier, meaning the indexer for the index and
for the columns. There are some ambiguous cases where the passed indexer could be mis-interpreted
as indexing both axes, rather than into say the MultiIndex for the rows.
You should do this:
df.loc[(slice("A1", "A3"), ...), :] # noqa: E999
You should not do this:
df.loc[(slice("A1", "A3"), ...)] # noqa: E999
In [51]: def mklbl(prefix, n):
....: return ["%s%s" % (prefix, i) for i in range(n)]
....:
In [52]: miindex = pd.MultiIndex.from_product(
....: [mklbl("A", 4), mklbl("B", 2), mklbl("C", 4), mklbl("D", 2)]
....: )
....:
In [53]: micolumns = pd.MultiIndex.from_tuples(
....: [("a", "foo"), ("a", "bar"), ("b", "foo"), ("b", "bah")], names=["lvl0", "lvl1"]
....: )
....:
In [54]: dfmi = (
....: pd.DataFrame(
....: np.arange(len(miindex) * len(micolumns)).reshape(
....: (len(miindex), len(micolumns))
....: ),
....: index=miindex,
....: columns=micolumns,
....: )
....: .sort_index()
....: .sort_index(axis=1)
....: )
....:
In [55]: dfmi
Out[55]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 9 8 11 10
D1 13 12 15 14
C2 D0 17 16 19 18
... ... ... ... ...
A3 B1 C1 D1 237 236 239 238
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 249 248 251 250
D1 253 252 255 254
[64 rows x 4 columns]
Basic MultiIndex slicing using slices, lists, and labels.
In [56]: dfmi.loc[(slice("A1", "A3"), slice(None), ["C1", "C3"]), :]
Out[56]:
lvl0 a b
lvl1 bar foo bah foo
A1 B0 C1 D0 73 72 75 74
D1 77 76 79 78
C3 D0 89 88 91 90
D1 93 92 95 94
B1 C1 D0 105 104 107 106
... ... ... ... ...
A3 B0 C3 D1 221 220 223 222
B1 C1 D0 233 232 235 234
D1 237 236 239 238
C3 D0 249 248 251 250
D1 253 252 255 254
[24 rows x 4 columns]
You can use pandas.IndexSlice to facilitate a more natural syntax
using :, rather than using slice(None).
In [57]: idx = pd.IndexSlice
In [58]: dfmi.loc[idx[:, :, ["C1", "C3"]], idx[:, "foo"]]
Out[58]:
lvl0 a b
lvl1 foo foo
A0 B0 C1 D0 8 10
D1 12 14
C3 D0 24 26
D1 28 30
B1 C1 D0 40 42
... ... ...
A3 B0 C3 D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
[32 rows x 2 columns]
It is possible to perform quite complicated selections using this method on multiple
axes at the same time.
In [59]: dfmi.loc["A1", (slice(None), "foo")]
Out[59]:
lvl0 a b
lvl1 foo foo
B0 C0 D0 64 66
D1 68 70
C1 D0 72 74
D1 76 78
C2 D0 80 82
... ... ...
B1 C1 D1 108 110
C2 D0 112 114
D1 116 118
C3 D0 120 122
D1 124 126
[16 rows x 2 columns]
In [60]: dfmi.loc[idx[:, :, ["C1", "C3"]], idx[:, "foo"]]
Out[60]:
lvl0 a b
lvl1 foo foo
A0 B0 C1 D0 8 10
D1 12 14
C3 D0 24 26
D1 28 30
B1 C1 D0 40 42
... ... ...
A3 B0 C3 D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
[32 rows x 2 columns]
Using a boolean indexer you can provide selection related to the values.
In [61]: mask = dfmi[("a", "foo")] > 200
In [62]: dfmi.loc[idx[mask, :, ["C1", "C3"]], idx[:, "foo"]]
Out[62]:
lvl0 a b
lvl1 foo foo
A3 B0 C1 D1 204 206
C3 D0 216 218
D1 220 222
B1 C1 D0 232 234
D1 236 238
C3 D0 248 250
D1 252 254
You can also specify the axis argument to .loc to interpret the passed
slicers on a single axis.
In [63]: dfmi.loc(axis=0)[:, :, ["C1", "C3"]]
Out[63]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C1 D0 9 8 11 10
D1 13 12 15 14
C3 D0 25 24 27 26
D1 29 28 31 30
B1 C1 D0 41 40 43 42
... ... ... ... ...
A3 B0 C3 D1 221 220 223 222
B1 C1 D0 233 232 235 234
D1 237 236 239 238
C3 D0 249 248 251 250
D1 253 252 255 254
[32 rows x 4 columns]
Furthermore, you can set the values using the following methods.
In [64]: df2 = dfmi.copy()
In [65]: df2.loc(axis=0)[:, :, ["C1", "C3"]] = -10
In [66]: df2
Out[66]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
C2 D0 17 16 19 18
... ... ... ... ...
A3 B1 C1 D1 -10 -10 -10 -10
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 -10 -10 -10 -10
D1 -10 -10 -10 -10
[64 rows x 4 columns]
You can use an alignable object as the right-hand side as well.
In [67]: df2 = dfmi.copy()
In [68]: df2.loc[idx[:, :, ["C1", "C3"]], :] = df2 * 1000
In [69]: df2
Out[69]:
lvl0 a b
lvl1 bar foo bah foo
A0 B0 C0 D0 1 0 3 2
D1 5 4 7 6
C1 D0 9000 8000 11000 10000
D1 13000 12000 15000 14000
C2 D0 17 16 19 18
... ... ... ... ...
A3 B1 C1 D1 237000 236000 239000 238000
C2 D0 241 240 243 242
D1 245 244 247 246
C3 D0 249000 248000 251000 250000
D1 253000 252000 255000 254000
[64 rows x 4 columns]
Cross-section#
The xs() method of DataFrame additionally takes a level argument to make
selecting data at a particular level of a MultiIndex easier.
In [70]: df
Out[70]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
two 0.805244 0.813850 1.607920
baz one -1.206412 0.132003 1.024180
two 2.565646 -0.827317 0.569605
foo one 1.431256 -0.076467 0.875906
two 1.340309 -1.187678 -2.211372
qux one -1.170299 1.130127 0.974466
two -0.226169 -1.436737 -2.006747
In [71]: df.xs("one", level="second")
Out[71]:
A B C
first
bar 0.895717 0.410835 -1.413681
baz -1.206412 0.132003 1.024180
foo 1.431256 -0.076467 0.875906
qux -1.170299 1.130127 0.974466
# using the slicers
In [72]: df.loc[(slice(None), "one"), :]
Out[72]:
A B C
first second
bar one 0.895717 0.410835 -1.413681
baz one -1.206412 0.132003 1.024180
foo one 1.431256 -0.076467 0.875906
qux one -1.170299 1.130127 0.974466
You can also select on the columns with xs, by
providing the axis argument.
In [73]: df = df.T
In [74]: df.xs("one", level="second", axis=1)
Out[74]:
first bar baz foo qux
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466
# using the slicers
In [75]: df.loc[:, (slice(None), "one")]
Out[75]:
first bar baz foo qux
second one one one one
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466
xs also allows selection with multiple keys.
In [76]: df.xs(("one", "bar"), level=("second", "first"), axis=1)
Out[76]:
first bar
second one
A 0.895717
B 0.410835
C -1.413681
# using the slicers
In [77]: df.loc[:, ("bar", "one")]
Out[77]:
A 0.895717
B 0.410835
C -1.413681
Name: (bar, one), dtype: float64
You can pass drop_level=False to xs to retain
the level that was selected.
In [78]: df.xs("one", level="second", axis=1, drop_level=False)
Out[78]:
first bar baz foo qux
second one one one one
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466
Compare the above with the result using drop_level=True (the default value).
In [79]: df.xs("one", level="second", axis=1, drop_level=True)
Out[79]:
first bar baz foo qux
A 0.895717 -1.206412 1.431256 -1.170299
B 0.410835 0.132003 -0.076467 1.130127
C -1.413681 1.024180 0.875906 0.974466
Advanced reindexing and alignment#
Using the parameter level in the reindex() and
align() methods of pandas objects is useful to broadcast
values across a level. For instance:
In [80]: midx = pd.MultiIndex(
....: levels=[["zero", "one"], ["x", "y"]], codes=[[1, 1, 0, 0], [1, 0, 1, 0]]
....: )
....:
In [81]: df = pd.DataFrame(np.random.randn(4, 2), index=midx)
In [82]: df
Out[82]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
In [83]: df2 = df.groupby(level=0).mean()
In [84]: df2
Out[84]:
0 1
one 1.060074 -0.109716
zero 1.271532 0.713416
In [85]: df2.reindex(df.index, level=0)
Out[85]:
0 1
one y 1.060074 -0.109716
x 1.060074 -0.109716
zero y 1.271532 0.713416
x 1.271532 0.713416
# aligning
In [86]: df_aligned, df2_aligned = df.align(df2, level=0)
In [87]: df_aligned
Out[87]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
In [88]: df2_aligned
Out[88]:
0 1
one y 1.060074 -0.109716
x 1.060074 -0.109716
zero y 1.271532 0.713416
x 1.271532 0.713416
Swapping levels with swaplevel#
The swaplevel() method can switch the order of two levels:
In [89]: df[:5]
Out[89]:
0 1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
In [90]: df[:5].swaplevel(0, 1, axis=0)
Out[90]:
0 1
y one 1.519970 -0.493662
x one 0.600178 0.274230
y zero 0.132885 -0.023688
x zero 2.410179 1.450520
Reordering levels with reorder_levels#
The reorder_levels() method generalizes the swaplevel
method, allowing you to permute the hierarchical index levels in one step:
In [91]: df[:5].reorder_levels([1, 0], axis=0)
Out[91]:
0 1
y one 1.519970 -0.493662
x one 0.600178 0.274230
y zero 0.132885 -0.023688
x zero 2.410179 1.450520
Renaming names of an Index or MultiIndex#
The rename() method is used to rename the labels of a
MultiIndex, and is typically used to rename the columns of a DataFrame.
The columns argument of rename allows a dictionary to be specified
that includes only the columns you wish to rename.
In [92]: df.rename(columns={0: "col0", 1: "col1"})
Out[92]:
col0 col1
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
This method can also be used to rename specific labels of the main index
of the DataFrame.
In [93]: df.rename(index={"one": "two", "y": "z"})
Out[93]:
0 1
two z 1.519970 -0.493662
x 0.600178 0.274230
zero z 0.132885 -0.023688
x 2.410179 1.450520
The rename_axis() method is used to rename the name of a
Index or MultiIndex. In particular, the names of the levels of a
MultiIndex can be specified, which is useful if reset_index() is later
used to move the values from the MultiIndex to a column.
In [94]: df.rename_axis(index=["abc", "def"])
Out[94]:
0 1
abc def
one y 1.519970 -0.493662
x 0.600178 0.274230
zero y 0.132885 -0.023688
x 2.410179 1.450520
Note that the columns of a DataFrame are an index, so that using
rename_axis with the columns argument will change the name of that
index.
In [95]: df.rename_axis(columns="Cols").columns
Out[95]: RangeIndex(start=0, stop=2, step=1, name='Cols')
Both rename and rename_axis support specifying a dictionary,
Series or a mapping function to map labels/names to new values.
When working with an Index object directly, rather than via a DataFrame,
Index.set_names() can be used to change the names.
In [96]: mi = pd.MultiIndex.from_product([[1, 2], ["a", "b"]], names=["x", "y"])
In [97]: mi.names
Out[97]: FrozenList(['x', 'y'])
In [98]: mi2 = mi.rename("new name", level=0)
In [99]: mi2
Out[99]:
MultiIndex([(1, 'a'),
(1, 'b'),
(2, 'a'),
(2, 'b')],
names=['new name', 'y'])
You cannot set the names of the MultiIndex via a level.
In [100]: mi.levels[0].name = "name via level"
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[100], line 1
----> 1 mi.levels[0].name = "name via level"
File ~/work/pandas/pandas/pandas/core/indexes/base.py:1745, in Index.name(self, value)
1741 @name.setter
1742 def name(self, value: Hashable) -> None:
1743 if self._no_setting_name:
1744 # Used in MultiIndex.levels to avoid silently ignoring name updates.
-> 1745 raise RuntimeError(
1746 "Cannot set name on a level of a MultiIndex. Use "
1747 "'MultiIndex.set_names' instead."
1748 )
1749 maybe_extract_name(value, None, type(self))
1750 self._name = value
RuntimeError: Cannot set name on a level of a MultiIndex. Use 'MultiIndex.set_names' instead.
Use Index.set_names() instead.
Sorting a MultiIndex#
For MultiIndex-ed objects to be indexed and sliced effectively,
they need to be sorted. As with any index, you can use sort_index().
In [101]: import random
In [102]: random.shuffle(tuples)
In [103]: s = pd.Series(np.random.randn(8), index=pd.MultiIndex.from_tuples(tuples))
In [104]: s
Out[104]:
baz two 0.206053
foo two -0.251905
bar one -2.213588
qux two 1.063327
baz one 1.266143
qux one 0.299368
foo one -0.863838
bar two 0.408204
dtype: float64
In [105]: s.sort_index()
Out[105]:
bar one -2.213588
two 0.408204
baz one 1.266143
two 0.206053
foo one -0.863838
two -0.251905
qux one 0.299368
two 1.063327
dtype: float64
In [106]: s.sort_index(level=0)
Out[106]:
bar one -2.213588
two 0.408204
baz one 1.266143
two 0.206053
foo one -0.863838
two -0.251905
qux one 0.299368
two 1.063327
dtype: float64
In [107]: s.sort_index(level=1)
Out[107]:
bar one -2.213588
baz one 1.266143
foo one -0.863838
qux one 0.299368
bar two 0.408204
baz two 0.206053
foo two -0.251905
qux two 1.063327
dtype: float64
You may also pass a level name to sort_index if the MultiIndex levels
are named.
In [108]: s.index.set_names(["L1", "L2"], inplace=True)
In [109]: s.sort_index(level="L1")
Out[109]:
L1 L2
bar one -2.213588
two 0.408204
baz one 1.266143
two 0.206053
foo one -0.863838
two -0.251905
qux one 0.299368
two 1.063327
dtype: float64
In [110]: s.sort_index(level="L2")
Out[110]:
L1 L2
bar one -2.213588
baz one 1.266143
foo one -0.863838
qux one 0.299368
bar two 0.408204
baz two 0.206053
foo two -0.251905
qux two 1.063327
dtype: float64
On higher dimensional objects, you can sort any of the other axes by level if
they have a MultiIndex:
In [111]: df.T.sort_index(level=1, axis=1)
Out[111]:
one zero one zero
x x y y
0 0.600178 2.410179 1.519970 0.132885
1 0.274230 1.450520 -0.493662 -0.023688
Indexing will work even if the data are not sorted, but will be rather
inefficient (and show a PerformanceWarning). It will also
return a copy of the data rather than a view:
In [112]: dfm = pd.DataFrame(
.....: {"jim": [0, 0, 1, 1], "joe": ["x", "x", "z", "y"], "jolie": np.random.rand(4)}
.....: )
.....:
In [113]: dfm = dfm.set_index(["jim", "joe"])
In [114]: dfm
Out[114]:
jolie
jim joe
0 x 0.490671
x 0.120248
1 z 0.537020
y 0.110968
In [4]: dfm.loc[(1, 'z')]
PerformanceWarning: indexing past lexsort depth may impact performance.
Out[4]:
jolie
jim joe
1 z 0.64094
Furthermore, if you try to index something that is not fully lexsorted, this can raise:
In [5]: dfm.loc[(0, 'y'):(1, 'z')]
UnsortedIndexError: 'Key length (2) was greater than MultiIndex lexsort depth (1)'
The is_monotonic_increasing() method on a MultiIndex shows if the
index is sorted:
In [115]: dfm.index.is_monotonic_increasing
Out[115]: False
In [116]: dfm = dfm.sort_index()
In [117]: dfm
Out[117]:
jolie
jim joe
0 x 0.490671
x 0.120248
1 y 0.110968
z 0.537020
In [118]: dfm.index.is_monotonic_increasing
Out[118]: True
And now selection works as expected.
In [119]: dfm.loc[(0, "y"):(1, "z")]
Out[119]:
jolie
jim joe
1 y 0.110968
z 0.537020
Take methods#
Similar to NumPy ndarrays, pandas Index, Series, and DataFrame also provides
the take() method that retrieves elements along a given axis at the given
indices. The given indices must be either a list or an ndarray of integer
index positions. take will also accept negative integers as relative positions to the end of the object.
In [120]: index = pd.Index(np.random.randint(0, 1000, 10))
In [121]: index
Out[121]: Int64Index([214, 502, 712, 567, 786, 175, 993, 133, 758, 329], dtype='int64')
In [122]: positions = [0, 9, 3]
In [123]: index[positions]
Out[123]: Int64Index([214, 329, 567], dtype='int64')
In [124]: index.take(positions)
Out[124]: Int64Index([214, 329, 567], dtype='int64')
In [125]: ser = pd.Series(np.random.randn(10))
In [126]: ser.iloc[positions]
Out[126]:
0 -0.179666
9 1.824375
3 0.392149
dtype: float64
In [127]: ser.take(positions)
Out[127]:
0 -0.179666
9 1.824375
3 0.392149
dtype: float64
For DataFrames, the given indices should be a 1d list or ndarray that specifies
row or column positions.
In [128]: frm = pd.DataFrame(np.random.randn(5, 3))
In [129]: frm.take([1, 4, 3])
Out[129]:
0 1 2
1 -1.237881 0.106854 -1.276829
4 0.629675 -1.425966 1.857704
3 0.979542 -1.633678 0.615855
In [130]: frm.take([0, 2], axis=1)
Out[130]:
0 2
0 0.595974 0.601544
1 -1.237881 -1.276829
2 -0.767101 1.499591
3 0.979542 0.615855
4 0.629675 1.857704
It is important to note that the take method on pandas objects is not
intended to work on boolean indices and may return unexpected results.
In [131]: arr = np.random.randn(10)
In [132]: arr.take([False, False, True, True])
Out[132]: array([-1.1935, -1.1935, 0.6775, 0.6775])
In [133]: arr[[0, 1]]
Out[133]: array([-1.1935, 0.6775])
In [134]: ser = pd.Series(np.random.randn(10))
In [135]: ser.take([False, False, True, True])
Out[135]:
0 0.233141
0 0.233141
1 -0.223540
1 -0.223540
dtype: float64
In [136]: ser.iloc[[0, 1]]
Out[136]:
0 0.233141
1 -0.223540
dtype: float64
Finally, as a small note on performance, because the take method handles
a narrower range of inputs, it can offer performance that is a good deal
faster than fancy indexing.
In [137]: arr = np.random.randn(10000, 5)
In [138]: indexer = np.arange(10000)
In [139]: random.shuffle(indexer)
In [140]: %timeit arr[indexer]
.....: %timeit arr.take(indexer, axis=0)
.....:
141 us +- 1.18 us per loop (mean +- std. dev. of 7 runs, 10,000 loops each)
43.6 us +- 1.01 us per loop (mean +- std. dev. of 7 runs, 10,000 loops each)
In [141]: ser = pd.Series(arr[:, 0])
In [142]: %timeit ser.iloc[indexer]
.....: %timeit ser.take(indexer)
.....:
71.3 us +- 2.24 us per loop (mean +- std. dev. of 7 runs, 10,000 loops each)
63.1 us +- 4.29 us per loop (mean +- std. dev. of 7 runs, 10,000 loops each)
Index types#
We have discussed MultiIndex in the previous sections pretty extensively.
Documentation about DatetimeIndex and PeriodIndex are shown here,
and documentation about TimedeltaIndex is found here.
In the following sub-sections we will highlight some other index types.
CategoricalIndex#
CategoricalIndex is a type of index that is useful for supporting
indexing with duplicates. This is a container around a Categorical
and allows efficient indexing and storage of an index with a large number of duplicated elements.
In [143]: from pandas.api.types import CategoricalDtype
In [144]: df = pd.DataFrame({"A": np.arange(6), "B": list("aabbca")})
In [145]: df["B"] = df["B"].astype(CategoricalDtype(list("cab")))
In [146]: df
Out[146]:
A B
0 0 a
1 1 a
2 2 b
3 3 b
4 4 c
5 5 a
In [147]: df.dtypes
Out[147]:
A int64
B category
dtype: object
In [148]: df["B"].cat.categories
Out[148]: Index(['c', 'a', 'b'], dtype='object')
Setting the index will create a CategoricalIndex.
In [149]: df2 = df.set_index("B")
In [150]: df2.index
Out[150]: CategoricalIndex(['a', 'a', 'b', 'b', 'c', 'a'], categories=['c', 'a', 'b'], ordered=False, dtype='category', name='B')
Indexing with __getitem__/.iloc/.loc works similarly to an Index with duplicates.
The indexers must be in the category or the operation will raise a KeyError.
In [151]: df2.loc["a"]
Out[151]:
A
B
a 0
a 1
a 5
The CategoricalIndex is preserved after indexing:
In [152]: df2.loc["a"].index
Out[152]: CategoricalIndex(['a', 'a', 'a'], categories=['c', 'a', 'b'], ordered=False, dtype='category', name='B')
Sorting the index will sort by the order of the categories (recall that we
created the index with CategoricalDtype(list('cab')), so the sorted
order is cab).
In [153]: df2.sort_index()
Out[153]:
A
B
c 4
a 0
a 1
a 5
b 2
b 3
Groupby operations on the index will preserve the index nature as well.
In [154]: df2.groupby(level=0).sum()
Out[154]:
A
B
c 4
a 6
b 5
In [155]: df2.groupby(level=0).sum().index
Out[155]: CategoricalIndex(['c', 'a', 'b'], categories=['c', 'a', 'b'], ordered=False, dtype='category', name='B')
Reindexing operations will return a resulting index based on the type of the passed
indexer. Passing a list will return a plain-old Index; indexing with
a Categorical will return a CategoricalIndex, indexed according to the categories
of the passed Categorical dtype. This allows one to arbitrarily index these even with
values not in the categories, similarly to how you can reindex any pandas index.
In [156]: df3 = pd.DataFrame(
.....: {"A": np.arange(3), "B": pd.Series(list("abc")).astype("category")}
.....: )
.....:
In [157]: df3 = df3.set_index("B")
In [158]: df3
Out[158]:
A
B
a 0
b 1
c 2
In [159]: df3.reindex(["a", "e"])
Out[159]:
A
B
a 0.0
e NaN
In [160]: df3.reindex(["a", "e"]).index
Out[160]: Index(['a', 'e'], dtype='object', name='B')
In [161]: df3.reindex(pd.Categorical(["a", "e"], categories=list("abe")))
Out[161]:
A
B
a 0.0
e NaN
In [162]: df3.reindex(pd.Categorical(["a", "e"], categories=list("abe"))).index
Out[162]: CategoricalIndex(['a', 'e'], categories=['a', 'b', 'e'], ordered=False, dtype='category', name='B')
Warning
Reshaping and Comparison operations on a CategoricalIndex must have the same categories
or a TypeError will be raised.
In [163]: df4 = pd.DataFrame({"A": np.arange(2), "B": list("ba")})
In [164]: df4["B"] = df4["B"].astype(CategoricalDtype(list("ab")))
In [165]: df4 = df4.set_index("B")
In [166]: df4.index
Out[166]: CategoricalIndex(['b', 'a'], categories=['a', 'b'], ordered=False, dtype='category', name='B')
In [167]: df5 = pd.DataFrame({"A": np.arange(2), "B": list("bc")})
In [168]: df5["B"] = df5["B"].astype(CategoricalDtype(list("bc")))
In [169]: df5 = df5.set_index("B")
In [170]: df5.index
Out[170]: CategoricalIndex(['b', 'c'], categories=['b', 'c'], ordered=False, dtype='category', name='B')
In [1]: pd.concat([df4, df5])
TypeError: categories must match existing categories when appending
Int64Index and RangeIndex#
Deprecated since version 1.4.0: In pandas 2.0, Index will become the default index type for numeric types
instead of Int64Index, Float64Index and UInt64Index and those index types
are therefore deprecated and will be removed in a future version.
RangeIndex will not be removed, as it represents an optimized version of an integer index.
Int64Index is a fundamental basic index in pandas. This is an immutable array
implementing an ordered, sliceable set.
RangeIndex is a sub-class of Int64Index that provides the default index for all NDFrame objects.
RangeIndex is an optimized version of Int64Index that can represent a monotonic ordered set. These are analogous to Python range types.
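As a small illustrative sketch of the distinction described above:
s = pd.Series([10, 20, 30])
s.index                      # RangeIndex(start=0, stop=3, step=1), the memory-efficient default
pd.Index([1, 5, 9])          # an explicit integer index holding arbitrary values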
Float64Index#
Deprecated since version 1.4.0: Index will become the default index type for numeric types in the future
instead of Int64Index, Float64Index and UInt64Index and those index types
are therefore deprecated and will be removed in a future version of pandas.
RangeIndex will not be removed as it represents an optimized version of an integer index.
By default a Float64Index will be automatically created when passing floating, or mixed-integer-floating values in index creation.
This enables a pure label-based slicing paradigm that makes [],ix,loc for scalar indexing and slicing work exactly the
same.
In [171]: indexf = pd.Index([1.5, 2, 3, 4.5, 5])
In [172]: indexf
Out[172]: Float64Index([1.5, 2.0, 3.0, 4.5, 5.0], dtype='float64')
In [173]: sf = pd.Series(range(5), index=indexf)
In [174]: sf
Out[174]:
1.5 0
2.0 1
3.0 2
4.5 3
5.0 4
dtype: int64
Scalar selection for [],.loc will always be label based. An integer will match an equal float index (e.g. 3 is equivalent to 3.0).
In [175]: sf[3]
Out[175]: 2
In [176]: sf[3.0]
Out[176]: 2
In [177]: sf.loc[3]
Out[177]: 2
In [178]: sf.loc[3.0]
Out[178]: 2
The only positional indexing is via iloc.
In [179]: sf.iloc[3]
Out[179]: 3
A scalar index that is not found will raise a KeyError.
Slicing is primarily on the values of the index when using [],ix,loc, and
always positional when using iloc. The exception is when the slice is
boolean, in which case it will always be positional.
In [180]: sf[2:4]
Out[180]:
2.0 1
3.0 2
dtype: int64
In [181]: sf.loc[2:4]
Out[181]:
2.0 1
3.0 2
dtype: int64
In [182]: sf.iloc[2:4]
Out[182]:
3.0 2
4.5 3
dtype: int64
In float indexes, slicing using floats is allowed.
In [183]: sf[2.1:4.6]
Out[183]:
3.0 2
4.5 3
dtype: int64
In [184]: sf.loc[2.1:4.6]
Out[184]:
3.0 2
4.5 3
dtype: int64
In non-float indexes, slicing using floats will raise a TypeError.
In [1]: pd.Series(range(5))[3.5]
TypeError: the label [3.5] is not a proper indexer for this index type (Int64Index)
In [1]: pd.Series(range(5))[3.5:4.5]
TypeError: the slice start [3.5] is not a proper indexer for this index type (Int64Index)
Here is a typical use-case for using this type of indexing. Imagine that you have a somewhat
irregular timedelta-like indexing scheme, but the data is recorded as floats. This could, for
example, be millisecond offsets.
In [185]: dfir = pd.concat(
.....: [
.....: pd.DataFrame(
.....: np.random.randn(5, 2), index=np.arange(5) * 250.0, columns=list("AB")
.....: ),
.....: pd.DataFrame(
.....: np.random.randn(6, 2),
.....: index=np.arange(4, 10) * 250.1,
.....: columns=list("AB"),
.....: ),
.....: ]
.....: )
.....:
In [186]: dfir
Out[186]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
1000.4 -0.179734 0.993962
1250.5 -0.212673 0.909872
1500.6 -0.733333 -0.349893
1750.7 0.456434 -0.306735
2000.8 0.553396 0.166221
2250.9 -0.101684 -0.734907
Selection operations then will always work on a value basis, for all selection operators.
In [187]: dfir[0:1000.4]
Out[187]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
1000.4 -0.179734 0.993962
In [188]: dfir.loc[0:1001, "A"]
Out[188]:
0.0 -0.435772
250.0 -0.808286
500.0 -1.815703
750.0 -0.243487
1000.0 1.162969
1000.4 -0.179734
Name: A, dtype: float64
In [189]: dfir.loc[1000.4]
Out[189]:
A -0.179734
B 0.993962
Name: 1000.4, dtype: float64
You could retrieve the first 1 second (1000 ms) of data as such:
In [190]: dfir[0:1000]
Out[190]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
If you need integer based selection, you should use iloc:
In [191]: dfir.iloc[0:5]
Out[191]:
A B
0.0 -0.435772 -1.188928
250.0 -0.808286 -0.284634
500.0 -1.815703 1.347213
750.0 -0.243487 0.514704
1000.0 1.162969 -0.287725
IntervalIndex#
IntervalIndex together with its own dtype, IntervalDtype
as well as the Interval scalar type, allow first-class support in pandas
for interval notation.
The IntervalIndex allows some unique indexing and is also used as a
return type for the categories in cut() and qcut().
Indexing with an IntervalIndex#
An IntervalIndex can be used in Series and in DataFrame as the index.
In [192]: df = pd.DataFrame(
.....: {"A": [1, 2, 3, 4]}, index=pd.IntervalIndex.from_breaks([0, 1, 2, 3, 4])
.....: )
.....:
In [193]: df
Out[193]:
A
(0, 1] 1
(1, 2] 2
(2, 3] 3
(3, 4] 4
Label based indexing via .loc along the edges of an interval works as you would expect,
selecting that particular interval.
In [194]: df.loc[2]
Out[194]:
A 2
Name: (1, 2], dtype: int64
In [195]: df.loc[[2, 3]]
Out[195]:
A
(1, 2] 2
(2, 3] 3
If you select a label contained within an interval, this will also select the interval.
In [196]: df.loc[2.5]
Out[196]:
A 3
Name: (2, 3], dtype: int64
In [197]: df.loc[[2.5, 3.5]]
Out[197]:
A
(2, 3] 3
(3, 4] 4
Selecting using an Interval will only return exact matches (starting from pandas 0.25.0).
In [198]: df.loc[pd.Interval(1, 2)]
Out[198]:
A 2
Name: (1, 2], dtype: int64
Trying to select an Interval that is not exactly contained in the IntervalIndex will raise a KeyError.
In [7]: df.loc[pd.Interval(0.5, 2.5)]
---------------------------------------------------------------------------
KeyError: Interval(0.5, 2.5, closed='right')
Selecting all Intervals that overlap a given Interval can be performed using the
overlaps() method to create a boolean indexer.
In [199]: idxr = df.index.overlaps(pd.Interval(0.5, 2.5))
In [200]: idxr
Out[200]: array([ True, True, True, False])
In [201]: df[idxr]
Out[201]:
A
(0, 1] 1
(1, 2] 2
(2, 3] 3
Binning data with cut and qcut#
cut() and qcut() both return a Categorical object, and the bins they
create are stored as an IntervalIndex in its .categories attribute.
In [202]: c = pd.cut(range(4), bins=2)
In [203]: c
Out[203]:
[(-0.003, 1.5], (-0.003, 1.5], (1.5, 3.0], (1.5, 3.0]]
Categories (2, interval[float64, right]): [(-0.003, 1.5] < (1.5, 3.0]]
In [204]: c.categories
Out[204]: IntervalIndex([(-0.003, 1.5], (1.5, 3.0]], dtype='interval[float64, right]')
cut() also accepts an IntervalIndex for its bins argument, which enables
a useful pandas idiom. First, we call cut() with some data and bins set to a
fixed number, to generate the bins. Then, we pass the values of .categories as the
bins argument in subsequent calls to cut(), supplying new data which will be
binned into the same bins.
In [205]: pd.cut([0, 3, 5, 1], bins=c.categories)
Out[205]:
[(-0.003, 1.5], (1.5, 3.0], NaN, (-0.003, 1.5]]
Categories (2, interval[float64, right]): [(-0.003, 1.5] < (1.5, 3.0]]
Any value which falls outside all bins will be assigned a NaN value.
Generating ranges of intervals#
If we need intervals on a regular frequency, we can use the interval_range() function
to create an IntervalIndex using various combinations of start, end, and periods.
The default frequency for interval_range is a 1 for numeric intervals, and calendar day for
datetime-like intervals:
In [206]: pd.interval_range(start=0, end=5)
Out[206]: IntervalIndex([(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]], dtype='interval[int64, right]')
In [207]: pd.interval_range(start=pd.Timestamp("2017-01-01"), periods=4)
Out[207]: IntervalIndex([(2017-01-01, 2017-01-02], (2017-01-02, 2017-01-03], (2017-01-03, 2017-01-04], (2017-01-04, 2017-01-05]], dtype='interval[datetime64[ns], right]')
In [208]: pd.interval_range(end=pd.Timedelta("3 days"), periods=3)
Out[208]: IntervalIndex([(0 days 00:00:00, 1 days 00:00:00], (1 days 00:00:00, 2 days 00:00:00], (2 days 00:00:00, 3 days 00:00:00]], dtype='interval[timedelta64[ns], right]')
The freq parameter can be used to specify non-default frequencies, and can utilize a variety
of frequency aliases with datetime-like intervals:
In [209]: pd.interval_range(start=0, periods=5, freq=1.5)
Out[209]: IntervalIndex([(0.0, 1.5], (1.5, 3.0], (3.0, 4.5], (4.5, 6.0], (6.0, 7.5]], dtype='interval[float64, right]')
In [210]: pd.interval_range(start=pd.Timestamp("2017-01-01"), periods=4, freq="W")
Out[210]: IntervalIndex([(2017-01-01, 2017-01-08], (2017-01-08, 2017-01-15], (2017-01-15, 2017-01-22], (2017-01-22, 2017-01-29]], dtype='interval[datetime64[ns], right]')
In [211]: pd.interval_range(start=pd.Timedelta("0 days"), periods=3, freq="9H")
Out[211]: IntervalIndex([(0 days 00:00:00, 0 days 09:00:00], (0 days 09:00:00, 0 days 18:00:00], (0 days 18:00:00, 1 days 03:00:00]], dtype='interval[timedelta64[ns], right]')
Additionally, the closed parameter can be used to specify which side(s) the intervals
are closed on. Intervals are closed on the right side by default.
In [212]: pd.interval_range(start=0, end=4, closed="both")
Out[212]: IntervalIndex([[0, 1], [1, 2], [2, 3], [3, 4]], dtype='interval[int64, both]')
In [213]: pd.interval_range(start=0, end=4, closed="neither")
Out[213]: IntervalIndex([(0, 1), (1, 2), (2, 3), (3, 4)], dtype='interval[int64, neither]')
Specifying start, end, and periods will generate a range of evenly spaced
intervals from start to end inclusively, with periods number of elements
in the resulting IntervalIndex:
In [214]: pd.interval_range(start=0, end=6, periods=4)
Out[214]: IntervalIndex([(0.0, 1.5], (1.5, 3.0], (3.0, 4.5], (4.5, 6.0]], dtype='interval[float64, right]')
In [215]: pd.interval_range(pd.Timestamp("2018-01-01"), pd.Timestamp("2018-02-28"), periods=3)
Out[215]: IntervalIndex([(2018-01-01, 2018-01-20 08:00:00], (2018-01-20 08:00:00, 2018-02-08 16:00:00], (2018-02-08 16:00:00, 2018-02-28]], dtype='interval[datetime64[ns], right]')
Miscellaneous indexing FAQ#
Integer indexing#
Label-based indexing with integer axis labels is a thorny topic. It has been
discussed heavily on mailing lists and among various members of the scientific
Python community. In pandas, our general viewpoint is that labels matter more
than integer locations. Therefore, with an integer axis index only
label-based indexing is possible with the standard tools like .loc. The
following code will generate exceptions:
In [216]: s = pd.Series(range(5))
In [217]: s[-1]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File ~/work/pandas/pandas/pandas/core/indexes/range.py:391, in RangeIndex.get_loc(self, key, method, tolerance)
390 try:
--> 391 return self._range.index(new_key)
392 except ValueError as err:
ValueError: -1 is not in range
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
Cell In[217], line 1
----> 1 s[-1]
File ~/work/pandas/pandas/pandas/core/series.py:981, in Series.__getitem__(self, key)
978 return self._values[key]
980 elif key_is_scalar:
--> 981 return self._get_value(key)
983 if is_hashable(key):
984 # Otherwise index.get_value will raise InvalidIndexError
985 try:
986 # For labels that don't resolve as scalars like tuples and frozensets
File ~/work/pandas/pandas/pandas/core/series.py:1089, in Series._get_value(self, label, takeable)
1086 return self._values[label]
1088 # Similar to Index.get_value, but we do not fall back to positional
-> 1089 loc = self.index.get_loc(label)
1090 return self.index._get_values_for_loc(self, loc, label)
File ~/work/pandas/pandas/pandas/core/indexes/range.py:393, in RangeIndex.get_loc(self, key, method, tolerance)
391 return self._range.index(new_key)
392 except ValueError as err:
--> 393 raise KeyError(key) from err
394 self._check_indexing_error(key)
395 raise KeyError(key)
KeyError: -1
In [218]: df = pd.DataFrame(np.random.randn(5, 4))
In [219]: df
Out[219]:
0 1 2 3
0 -0.130121 -0.476046 0.759104 0.213379
1 -0.082641 0.448008 0.656420 -1.051443
2 0.594956 -0.151360 -0.069303 1.221431
3 -0.182832 0.791235 0.042745 2.069775
4 1.446552 0.019814 -1.389212 -0.702312
In [220]: df.loc[-2:]
Out[220]:
0 1 2 3
0 -0.130121 -0.476046 0.759104 0.213379
1 -0.082641 0.448008 0.656420 -1.051443
2 0.594956 -0.151360 -0.069303 1.221431
3 -0.182832 0.791235 0.042745 2.069775
4 1.446552 0.019814 -1.389212 -0.702312
This deliberate decision was made to prevent ambiguities and subtle bugs (many
users reported finding bugs when the API change was made to stop “falling back”
on position-based indexing).
Non-monotonic indexes require exact matches#
If the index of a Series or DataFrame is monotonically increasing or decreasing, then the bounds
of a label-based slice can be outside the range of the index, much like slice indexing a
normal Python list. Monotonicity of an index can be tested with the is_monotonic_increasing() and
is_monotonic_decreasing() attributes.
In [221]: df = pd.DataFrame(index=[2, 3, 3, 4, 5], columns=["data"], data=list(range(5)))
In [222]: df.index.is_monotonic_increasing
Out[222]: True
# no rows 0 or 1, but still returns rows 2, 3 (both of them), and 4:
In [223]: df.loc[0:4, :]
Out[223]:
data
2 0
3 1
3 2
4 3
# slice is entirely outside the index, so an empty DataFrame is returned
In [224]: df.loc[13:15, :]
Out[224]:
Empty DataFrame
Columns: [data]
Index: []
On the other hand, if the index is not monotonic, then both slice bounds must be
unique members of the index.
In [225]: df = pd.DataFrame(index=[2, 3, 1, 4, 3, 5], columns=["data"], data=list(range(6)))
In [226]: df.index.is_monotonic_increasing
Out[226]: False
# OK because 2 and 4 are in the index
In [227]: df.loc[2:4, :]
Out[227]:
data
2 0
3 1
1 2
4 3
# 0 is not in the index
In [9]: df.loc[0:4, :]
KeyError: 0
# 3 is not a unique label
In [11]: df.loc[2:3, :]
KeyError: 'Cannot get right slice bound for non-unique label: 3'
Index.is_monotonic_increasing and Index.is_monotonic_decreasing only check that
an index is weakly monotonic. To check for strict monotonicity, you can combine one of those with
the is_unique() attribute.
In [228]: weakly_monotonic = pd.Index(["a", "b", "c", "c"])
In [229]: weakly_monotonic
Out[229]: Index(['a', 'b', 'c', 'c'], dtype='object')
In [230]: weakly_monotonic.is_monotonic_increasing
Out[230]: True
In [231]: weakly_monotonic.is_monotonic_increasing & weakly_monotonic.is_unique
Out[231]: False
Endpoints are inclusive#
Compared with standard Python sequence slicing in which the slice endpoint is
not inclusive, label-based slicing in pandas is inclusive. The primary
reason for this is that it is often not possible to easily determine the
“successor” or next element after a particular label in an index. For example,
consider the following Series:
In [232]: s = pd.Series(np.random.randn(6), index=list("abcdef"))
In [233]: s
Out[233]:
a 0.301379
b 1.240445
c -0.846068
d -0.043312
e -1.658747
f -0.819549
dtype: float64
Suppose we wished to slice from c to e, using integers this would be
accomplished as such:
In [234]: s[2:5]
Out[234]:
c -0.846068
d -0.043312
e -1.658747
dtype: float64
However, if you only had c and e, determining the next element in the
index can be somewhat complicated. For example, the following does not work:
s.loc['c':'e' + 1]
A very common use case is to limit a time series to start and end at two
specific dates. To enable this, we made the design choice to make label-based
slicing include both endpoints:
In [235]: s.loc["c":"e"]
Out[235]:
c -0.846068
d -0.043312
e -1.658747
dtype: float64
This is most definitely a “practicality beats purity” sort of thing, but it is
something to watch out for if you expect label-based slicing to behave exactly
in the way that standard Python integer slicing works.
Indexing potentially changes underlying Series dtype#
Different indexing operations can potentially change the dtype of a Series.
In [236]: series1 = pd.Series([1, 2, 3])
In [237]: series1.dtype
Out[237]: dtype('int64')
In [238]: res = series1.reindex([0, 4])
In [239]: res.dtype
Out[239]: dtype('float64')
In [240]: res
Out[240]:
0 1.0
4 NaN
dtype: float64
In [241]: series2 = pd.Series([True])
In [242]: series2.dtype
Out[242]: dtype('bool')
In [243]: res = series2.reindex_like(series1)
In [244]: res.dtype
Out[244]: dtype('O')
In [245]: res
Out[245]:
0 True
1 NaN
2 NaN
dtype: object
This is because the (re)indexing operations above silently insert NaNs and the dtype
changes accordingly. This can cause some issues when using numpy ufuncs
such as numpy.logical_and.
See GH2388 for a more
detailed discussion.
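One way to guard against this, sketched here, is to restore a boolean dtype before applying such ufuncs:
res = pd.Series([True]).reindex([0, 1])    # dtype is now object because a NaN was inserted
res = res.fillna(False).astype(bool)       # back to a plain boolean Series before using numpy ufuncs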
| 673
| 888
|
Pandas - Where function over several indexes
I'm looking to use the where function over a dataframe using a multiindex.
My dataframe looks like this :
mw
country category date
DE Wind Onshore 2019-01-01 00:00:00+00:00 22036.50
2019-01-01 01:00:00+00:00 22748.25
2019-01-01 02:00:00+00:00 23870.25
2019-01-01 03:00:00+00:00 25921.50
FR Wind Onshore 2019-01-01 00:00:00+00:00 1637.00
2019-01-01 01:00:00+00:00 1567.00
2019-01-01 02:00:00+00:00 1556.00
2019-01-01 03:00:00+00:00 1595.00
I'm looking for the values under a minimum (let's say 90% of the maximum in this example) per country (DE, FR). How can I do this?
I tried this:
maxValue = data.max(level=[index.country])
data = data.where(data < maxValue*0.1)
It does not work since maxValue has two values while data (in the where function) is a single frame. (I'm not sure I'm being clear.)
Edit
To reproduce the dataframe:
Raw data:
country category date mw
0 DE Wind Onshore 2019-01-01 00:00:00+00:00 22036.50
1 DE Wind Onshore 2019-01-01 01:00:00+00:00 22748.25
2 DE Wind Onshore 2019-01-01 02:00:00+00:00 23870.25
3 DE Wind Onshore 2019-01-01 03:00:00+00:00 25921.50
4 FR Wind Onshore 2019-01-01 00:00:00+00:00 1637.00
5 FR Wind Onshore 2019-01-01 01:00:00+00:00 1567.00
6 FR Wind Onshore 2019-01-01 02:00:00+00:00 1556.00
7 FR Wind Onshore 2019-01-01 03:00:00+00:00 1595.00
and the code line:
pd.read_clipboard(sep='\s\s+').set_index(['country', 'category', 'date'])
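A hedged sketch, not taken from the original thread, of one way to express the per-country threshold the question describes: broadcast each country's maximum back onto the rows with transform, then filter.

import pandas as pd

# Rebuild a tiny version of the frame shown above (values abbreviated)
df = pd.DataFrame(
    {
        "country": ["DE", "DE", "FR", "FR"],
        "category": ["Wind Onshore"] * 4,
        "date": pd.date_range("2019-01-01", periods=2, freq="H", tz="UTC").tolist() * 2,
        "mw": [22036.50, 22748.25, 1637.00, 1567.00],
    }
).set_index(["country", "category", "date"])

country_max = df.groupby(level="country")["mw"].transform("max")
print(df[df["mw"] < 0.9 * country_max])   # rows below 90% of their country's maximum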
|
68,666,373
|
How to set multiindex column from existing df
|
<p>How to set multi index column from existing df</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'A': [11, 21, 31],
'B': [12, 22, 32],
'C': [13, 23, 33]},
index=['ONE', 'TWO', 'THREE'])
</code></pre>
<p>Expected output</p>
<pre><code> level1
level2
A B C
ONE 11 12 13
TWO 21 22 23
THREE 31 32 33
</code></pre>
| 68,666,396
| 2021-08-05T12:13:03.873000
| 1
| null | 0
| 43
|
python|pandas
|
<p>Use MultiIndex</p>
<pre><code> df.columns = pd.MultiIndex.from_product([['level1'],['level2'],df.columns ])
</code></pre>
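As an illustrative follow-up (not part of the posted answer), applying that line to the question's frame yields a three-level column MultiIndex:

import pandas as pd

df = pd.DataFrame({'A': [11, 21, 31], 'B': [12, 22, 32], 'C': [13, 23, 33]},
                  index=['ONE', 'TWO', 'THREE'])
df.columns = pd.MultiIndex.from_product([['level1'], ['level2'], df.columns])
print(df.columns.nlevels)  # 3
print(df)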
| 2021-08-05T12:13:47.907000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.set_index.html
|
pandas.DataFrame.set_index#
pandas.DataFrame.set_index#
DataFrame.set_index(keys, *, drop=True, append=False, inplace=False, verify_integrity=False)[source]#
Set the DataFrame index using existing columns.
Set the DataFrame index (row labels) using one or more existing
columns or arrays (of the correct length). The index can replace the
existing index or expand on it.
Parameters
keyslabel or array-like or list of labels/arraysThis parameter can be either a single column key, a single array of
the same length as the calling DataFrame, or a list containing an
arbitrary combination of column keys and arrays. Here, “array”
Use MultiIndex
df.columns = pd.MultiIndex.from_product([['level1'],['level2'],df.columns ])
encompasses Series, Index, np.ndarray, and
instances of Iterator.
dropbool, default TrueDelete columns to be used as the new index.
appendbool, default FalseWhether to append columns to existing index.
inplacebool, default FalseWhether to modify the DataFrame rather than creating a new one.
verify_integritybool, default FalseCheck the new index for duplicates. Otherwise defer the check until
necessary. Setting to False will improve the performance of this
method.
Returns
DataFrame or NoneChanged row labels or None if inplace=True.
See also
DataFrame.reset_indexOpposite of set_index.
DataFrame.reindexChange to new indices or expand indices.
DataFrame.reindex_likeChange to same indices as other DataFrame.
Examples
>>> df = pd.DataFrame({'month': [1, 4, 7, 10],
... 'year': [2012, 2014, 2013, 2014],
... 'sale': [55, 40, 84, 31]})
>>> df
month year sale
0 1 2012 55
1 4 2014 40
2 7 2013 84
3 10 2014 31
Set the index to become the ‘month’ column:
>>> df.set_index('month')
year sale
month
1 2012 55
4 2014 40
7 2013 84
10 2014 31
Create a MultiIndex using columns ‘year’ and ‘month’:
>>> df.set_index(['year', 'month'])
sale
year month
2012 1 55
2014 4 40
2013 7 84
2014 10 31
Create a MultiIndex using an Index and a column:
>>> df.set_index([pd.Index([1, 2, 3, 4]), 'year'])
month sale
year
1 2012 1 55
2 2014 4 40
3 2013 7 84
4 2014 10 31
Create a MultiIndex using two Series:
>>> s = pd.Series([1, 2, 3, 4])
>>> df.set_index([s, s**2])
month year sale
1 1 1 2012 55
2 4 4 2014 40
3 9 7 2013 84
4 16 10 2014 31
| 633
| 728
|
How to set multiindex column from existing df
How to set multi index column from existing df
import pandas as pd
df = pd.DataFrame({'A': [11, 21, 31],
'B': [12, 22, 32],
'C': [13, 23, 33]},
index=['ONE', 'TWO', 'THREE'])
Expected output
level1
level2
A B C
ONE 11 12 13
TWO 21 22 23
THREE 31 32 33
|
69,584,351
|
why groupby change the rows number
|
<p>I have this code where I try to plot columns Ct and Fs based on Ft.</p>
<p>So how can I solve this?</p>
<pre><code>df = pd.read_csv('f.txt',sep=" ",names=list(["Ct", "Fs", "Ft"]))
df.iloc[:]
groups = df.groupby("Ft")
plt.subplots(figsize=(18,10))
for name, group in groups:
plt.scatter( group.Ct,group.Fs, label=name,s=100)
plt.yticks(np.arange(0, 6,0.5))
plt.xticks(np.arange(0, 24,1))
plt.title('f',fontsize=20)
plt.xlabel('x',fontsize=20)
plt.ylabel('y',fontsize=20)
plt.legend(loc='upper center', ncol=3)
</code></pre>
<pre><code>group.iloc[:]
</code></pre>
| 69,584,488
| 2021-10-15T11:54:41.943000
| 2
| null | -1
| 43
|
python|pandas
|
<p>If you are trying to make a scatter plot of Ct and Fs and want to have each point colored based on Ft I suggest using <a href="https://seaborn.pydata.org/generated/seaborn.scatterplot.html" rel="nofollow noreferrer">Seaborn</a> or <a href="https://plotly.com/python-api-reference/generated/plotly.express.scatter" rel="nofollow noreferrer">Plotly</a>. Matplotlib takes a bit more work to color by an object column.</p>
<p>No groupby needed.</p>
<p>After installing those libraries, here's how you do it.</p>
<pre><code>import seaborn as sns
sns.scatterplot(data=df, x='Ct', y='Fs', hue='Ft')
</code></pre>
<p>or</p>
<pre><code>import plotly.express as px
px.scatter(df, x='Ct', y='Fs', color='Ft')
</code></pre>
| 2021-10-15T12:09:35.127000
| 0
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
If you are trying to make a scatter plot of Ct and Fs and want to have each point colored based on Ft I suggest using Seaborn or Plotly. Matplotlib takes a bit more work to color by an object column.
No groupby needed.
After installing those libraries, here's how you do it.
import seaborn as sns
sns.scatterplot(data=df, x='Ct', y='Fs', hue='Ft')
or
import plotly.express as px
px.scatter(df, x='Ct', y='Fs', color='Ft')
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures are generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
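A brief sketch of two of the mapping flavours listed above, using made-up data:

import pandas as pd

s = pd.Series([1, 2, 3, 4], index=["a", "b", "c", "d"])

# A dict mapping axis labels to group names
s.groupby({"a": "vowel", "b": "consonant", "c": "consonant", "d": "consonant"}).sum()

# A plain list of the same length as the axis
s.groupby(["x", "x", "y", "y"]).sum()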
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of dropna argument is True which means NA are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It normally returns a Series whose
index contains the group names and whose values are the group sizes; because grouped
was created above with as_index=False, the result shown here is a DataFrame with a
size column instead.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns when as_index=True (the default). The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
mean(): Compute mean of groups
sum(): Compute sum of group values
size(): Compute group sizes
count(): Compute count of group
std(): Standard deviation of groups
var(): Compute variance of groups
sem(): Standard error of the mean of groups
describe(): Generate descriptive statistics
first(): Compute first of group values
last(): Compute last of group values
nth(): Take nth value, or a subset if n is a list
min(): Compute min of group values
max(): Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work;
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
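A small hedged sketch of that suggestion, reusing the animals-style data from above: an aggregation that needs an extra argument (here q=0.9) is bound with functools.partial before being handed to named aggregation.

import functools
import pandas as pd

animals = pd.DataFrame(
    {"kind": ["cat", "dog", "cat", "dog"], "weight": [7.9, 7.5, 9.9, 198.0]}
)

# quantile needs q=0.9, so bind it up front
q90 = functools.partial(pd.Series.quantile, q=0.9)
animals.groupby("kind").agg(weight_q90=("weight", q90))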
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
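A minimal sketch of the .to_numpy() suggestion above (the group and column names here are made up): returning a plain array from the transformation function sidesteps the index alignment in question.

import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b"], "v": [1.0, 2.0, 10.0]})
out = df.groupby("g")[["v"]].transform(lambda x: (x - x.mean()).to_numpy())
print(out)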
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed object where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it and on what you are grouping. Thus the
grouped column(s) may be included in the output, and they may also set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
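A self-contained sketch of that pattern (the call itself is elided in this excerpt; the exact index layout of the combined result depends on the group_keys setting and the pandas version):

import pandas as pd

df_demo = pd.DataFrame({"A": ["x", "x", "y"], "C": [1.0, 3.0, 10.0]})

def f(group):
    return pd.DataFrame({"original": group, "demeaned": group - group.mean()})

print(df_demo.groupby("A")["C"].apply(f))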
apply on a Series can operate on a returned value from the applied function
that is itself a Series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
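A hedged sketch of the signature described above, assuming the optional numba dependency is installed (the frame here is made up):

import numpy as np
import pandas as pd

def group_mean(values, index):
    # each group's data arrives in ``values``; ``index`` is required by the
    # signature even though it is unused here
    return np.mean(values)

df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1.0, 2.0, 10.0]})
df.groupby("key")["val"].agg(group_mean, engine="numba")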
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only interesting over one column (here colname), it may be filtered
before applying the aggregation function.
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a "nuisance" column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The dtype of the returned group index will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
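A brief made-up illustration of that rule; the NaN label simply does not appear as a group:

import numpy as np
import pandas as pd

s = pd.Series([1, 2, 3], index=["a", np.nan, "a"])
s.groupby(level=0).sum()   # only the "a" group is returned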
Grouping with ordered factors#
Categorical variables represented as instances of pandas's Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group; in this case, the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
To make resampling work on indices that are not datetime-like, the following procedure can be used.
In the following examples, df.index // 5 returns an integer array of bin labels which is used to determine what gets selected for the groupby operation.
Note
The below example shows how we can downsample by consolidating samples into fewer samples. Here, by using df.index // 5, we are aggregating the samples in bins. By applying the std() function, we aggregate the information contained in many samples into a small subset of values, namely their standard deviation, thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 684
| 1,115
|
why does groupby change the number of rows
I have this code where I try to plot the columns Ct and Fs, grouped by Ft.
How can I solve this?
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('f.txt', sep=" ", names=["Ct", "Fs", "Ft"])
df.iloc[:]
groups = df.groupby("Ft")
plt.subplots(figsize=(18, 10))
for name, group in groups:
    plt.scatter(group.Ct, group.Fs, label=name, s=100)
plt.yticks(np.arange(0, 6, 0.5))
plt.xticks(np.arange(0, 24, 1))
plt.title('f', fontsize=20)
plt.xlabel('x', fontsize=20)
plt.ylabel('y', fontsize=20)
plt.legend(loc='upper center', ncol=3)
group.iloc[:]
|
65,774,957
|
In Python, how can I use a loop to name pandas data frames?
|
<p>What I'm trying to do is to use pandas to create as many separate data arrays as there are runs in my data set. The approach needs to vary depending on the data file read in, so I want the run number, the second column, to be used to identify the data and separate it into separate data sets.</p>
<p>So I have a data set that looks like:</p>
<pre><code>1.350000035018e-03 1.000000000000e+00 -1.617387196395e-14
2.850000048056e-03 1.000000000000e+00 -2.752685546875e-06
4.350000061095e-03 1.000000000000e+00 -2.062988281250e-06
(couple hundred lines later)
1.350000035018e-03 2.000000000000e+00 -1.617387196395e-14
2.850000048056e-03 2.000000000000e+00 -2.752685546875e-06
4.350000061095e-03 2.000000000000e+00 -2.062988281250e-06
(however many readings later)
1.350000035018e-03 35.000000000000e+00 -1.617387196395e-14
2.850000048056e-03 35.000000000000e+00 -2.752685546875e-06
4.350000061095e-03 35.000000000000e+00 -2.062988281250e-06
</code></pre>
<p>I want to process it into:</p>
<pre><code>data1 = some number 1.0 some number
some number 1.0 some number
data2 = some number 2.0 some number
some number 2.0 some number
datan= some number n some number
some number n some number
</code></pre>
<p>So far my code:</p>
<pre><code>
f =r'C:~.dat'
#store data using pandas
data = pd.read_csv( f, sep = '\t', comment = '#', names = ['V','n','I'] )
#observe data format
print(data)
V n I
0 0.001350 1.0 -1.617387e-14
1 0.002850 1.0 -2.752686e-06
2 0.004350 1.0 -2.062988e-06
#count the loops for automated graph plotting
num = 1
for i in range(len(data)):
    if i > 0:
        if data['n'][i] > data['n'][i-1]:
            num = num + 1
#
print('there are '+str(num)+' runs')
#separate data based on loop #n
for i in range(num):
    run = data.groupby(data.n)
    data+str(i) = run.get_group(i)
    print(data+str(i))
#
</code></pre>
<p>Using the data grouping method works, but I can't figure out a way to use the loop number as a variable name. Any help/suggestions would be highly appreciated.</p>
| 65,775,886
| 2021-01-18T12:57:01.613000
| 2
| null | -1
| 46
|
python|pandas
|
<p>Do you need to explicitly name your dataframes, or can they be part of a list or dict?</p>
<p>For instance, you could do something like this...</p>
<pre><code>import pandas as pd
f =r'C:~.dat'
#store data using pandas
data = pd.read_csv( f, sep = '\t', comment = '#', names = ['V','n','I'] )
data_list = []
# get unique run entries
runs = data["n"].unique()
# save each run's corresponding dataframe into data_list
for run in runs:
    data_sub = data[data["n"] == run]
    data_list.append(data_sub)
# access it by doing something as follows
for idx, run in enumerate(runs):
    print("Working on run {}".format(run))
    df_to_operate_on = data_list[idx]
</code></pre>
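<p>If a dict keyed by the run number is acceptable, a shorter variant builds it straight from <code>groupby</code>. This is only a sketch reusing the <code>data</code> frame loaded above; the <code>data_by_run</code> name is made up for illustration.</p>
<pre><code># alternative sketch: one sub-DataFrame per run, keyed by the run number
data_by_run = {run: group for run, group in data.groupby("n")}
# e.g. the sub-DataFrame for run 2.0, if that run exists in the file
df_run2 = data_by_run.get(2.0)
</code></pre>
<p>Compared with generating variable names dynamically, a dict (or the list above) keeps the run label attached to each frame without touching the global namespace.</p>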
| 2021-01-18T13:57:57.560000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.itertuples.html
|
pandas.DataFrame.itertuples#
pandas.DataFrame.itertuples#
DataFrame.itertuples(index=True, name='Pandas')[source]#
Iterate over DataFrame rows as namedtuples.
Parameters
indexbool, default TrueIf True, return the index as the first element of the tuple.
namestr or None, default “Pandas”The name of the returned namedtuples or None to return regular
tuples.
Returns
iteratorAn object to iterate over namedtuples for each row in the
DataFrame with the first field possibly being the index and
following fields being the column values.
See also
DataFrame.iterrowsIterate over DataFrame rows as (index, Series) pairs.
Do you need to explicitly name your dataframes, or can they be part of a list or dict?
For instance, you could do something like this...
import pandas as pd
f =r'C:~.dat'
#store data using pandas
data = pd.read_csv( f, sep = '\t', comment = '#', names = ['V','n','I'] )
data_list = []
# get unique run entries
runs = data["n"].unique()
# save each run's corresponding dataframe into data_list
for run in runs:
    data_sub = data[data["n"] == run]
    data_list.append(data_sub)
# access it by doing something as follows
for idx, run in enumerate(runs):
    print("Working on run {}".format(run))
    df_to_operate_on = data_list[idx]
DataFrame.itemsIterate over (column name, Series) pairs.
Notes
The column names will be renamed to positional names if they are
invalid Python identifiers, repeated, or start with an underscore.
Examples
>>> df = pd.DataFrame({'num_legs': [4, 2], 'num_wings': [0, 2]},
... index=['dog', 'hawk'])
>>> df
num_legs num_wings
dog 4 0
hawk 2 2
>>> for row in df.itertuples():
... print(row)
...
Pandas(Index='dog', num_legs=4, num_wings=0)
Pandas(Index='hawk', num_legs=2, num_wings=2)
By setting the index parameter to False we can remove the index
as the first element of the tuple:
>>> for row in df.itertuples(index=False):
... print(row)
...
Pandas(num_legs=4, num_wings=0)
Pandas(num_legs=2, num_wings=2)
With the name parameter set we set a custom name for the yielded
namedtuples:
>>> for row in df.itertuples(name='Animal'):
... print(row)
...
Animal(Index='dog', num_legs=4, num_wings=0)
Animal(Index='hawk', num_legs=2, num_wings=2)
| 632
| 1,273
|
In Python, how can I use a loop to name pandas data frames?
What I'm trying to do is to use pandas to create as many separate data arrays as there are runs in my data set. The approach needs to vary depending on the data file read in, so I want the run number, the second column, to be used to identify the data and separate it into separate data sets.
So I have a data set that looks like:
1.350000035018e-03 1.000000000000e+00 -1.617387196395e-14
2.850000048056e-03 1.000000000000e+00 -2.752685546875e-06
4.350000061095e-03 1.000000000000e+00 -2.062988281250e-06
(couple hundred lines later)
1.350000035018e-03 2.000000000000e+00 -1.617387196395e-14
2.850000048056e-03 2.000000000000e+00 -2.752685546875e-06
4.350000061095e-03 2.000000000000e+00 -2.062988281250e-06
(however many readings later)
1.350000035018e-03 35.000000000000e+00 -1.617387196395e-14
2.850000048056e-03 35.000000000000e+00 -2.752685546875e-06
4.350000061095e-03 35.000000000000e+00 -2.062988281250e-06
I want to process it into:
data1 = some number 1.0 some number
some number 1.0 some number
data2 = some number 2.0 some number
some number 2.0 some number
datan= some number n some number
some number n some number
So far my code:
f =r'C:~.dat'
#store data using pandas
data = pd.read_csv( f, sep = '\t', comment = '#', names = ['V','n','I'] )
#observe data format
print(data)
V n I
0 0.001350 1.0 -1.617387e-14
1 0.002850 1.0 -2.752686e-06
2 0.004350 1.0 -2.062988e-06
#count the loops for automated graph plotting
num = 1
for i in range(len(data)):
    if i > 0:
        if data['n'][i] > data['n'][i-1]:
            num = num + 1
#
print('there are '+str(num)+' runs')
#separate data based on loop #n
for i in range(num):
    run = data.groupby(data.n)
    data+str(i) = run.get_group(i)
    print(data+str(i))
#
Using the data grouping method works, but I can't figure out a way to use the loop number as a variable name. Any help/suggestions would be highly appreciated.
|
70,454,468
|
Word in string exists but not recognized
|
<p>I have a df that contains a description column. I used the following code to extract specific words and create a new column:</p>
<pre><code>def criteria(df):
    if df.DESCRIPCION.find('CORONITA')>0:
        return ('Corona')
    else:
        return ('Otras')
df['Marca'] = df.apply(criteria, axis=1)
</code></pre>
<p><a href="https://i.stack.imgur.com/7omX5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7omX5.png" alt="" /></a></p>
<p>As you can see, the word exists, but pandas applies 'Otras' instead of Corona.</p>
<p>Any advice?</p>
| 70,454,556
| 2021-12-22T19:55:54.067000
| 1
| null | 0
| 47
|
python|pandas
|
<p>The <code>find</code> method returns the index of the match, or <code>-1</code> if there is no match, and that index can be <code>0</code> when the match is at the very start of the string. So try changing:</p>
<pre><code>if df.DESCRIPCION.find('CORONITA')>0:
</code></pre>
<p>to:</p>
<pre><code>if df.DESCRIPCION.find('CORONITA')>=0:
# ^
</code></pre>
<p>That should help. A result of <code>0</code> means the word is found right at the beginning of the string, which is probably what's happening for you. Since your test excludes <code>0</code> as a valid result, you get an incorrect answer.</p>
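<p>Once the comparison is fixed, a vectorized version of the same rule avoids <code>apply</code> entirely. This is only a sketch; it assumes <code>DESCRIPCION</code> holds plain strings, and the column and label names are taken from the question.</p>
<pre><code>import numpy as np

# vectorized sketch of the same rule, without apply
df['Marca'] = np.where(
    df['DESCRIPCION'].str.contains('CORONITA', na=False),
    'Corona',
    'Otras',
)
</code></pre>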
| 2021-12-22T20:05:16.300000
| 0
|
https://pandas.pydata.org/docs/user_guide/text.html
|
Working with text data#
Working with text data#
Text data types#
New in version 1.0.0.
There are two ways to store text data in pandas:
object -dtype NumPy array.
StringDtype extension type.
We recommend using StringDtype to store text data.
Prior to pandas 1.0, object dtype was the only option. This was unfortunate
for many reasons:
You can accidentally store a mixture of strings and non-strings in an
object dtype array. It’s better to have a dedicated dtype.
object dtype breaks dtype-specific operations like DataFrame.select_dtypes().
The find method returns the index of the match, or -1 if there is no match, and that index can be 0 when the match is at the very start of the string. So try changing:
if df.DESCRIPCION.find('CORONITA')>0:
to:
if df.DESCRIPCION.find('CORONITA')>=0:
# ^
That should help. A result of 0 means the word is found right at the beginning of the string, which is probably what's happening for you. Since your test excludes 0 as a valid result, you get an incorrect answer.
There isn’t a clear way to select just text while excluding non-text
but still object-dtype columns.
When reading code, the contents of an object dtype array is less clear
than 'string'.
Currently, the performance of object dtype arrays of strings and
arrays.StringArray are about the same. We expect future enhancements
to significantly increase the performance and lower the memory overhead of
StringArray.
Warning
StringArray is currently considered experimental. The implementation
and parts of the API may change without warning.
For backwards-compatibility, object dtype remains the default type we
infer a list of strings to:
In [1]: pd.Series(["a", "b", "c"])
Out[1]:
0 a
1 b
2 c
dtype: object
To explicitly request string dtype, specify the dtype
In [2]: pd.Series(["a", "b", "c"], dtype="string")
Out[2]:
0 a
1 b
2 c
dtype: string
In [3]: pd.Series(["a", "b", "c"], dtype=pd.StringDtype())
Out[3]:
0 a
1 b
2 c
dtype: string
Or astype after the Series or DataFrame is created
In [4]: s = pd.Series(["a", "b", "c"])
In [5]: s
Out[5]:
0 a
1 b
2 c
dtype: object
In [6]: s.astype("string")
Out[6]:
0 a
1 b
2 c
dtype: string
Changed in version 1.1.0.
You can also use StringDtype/"string" as the dtype on non-string data and
it will be converted to string dtype:
In [7]: s = pd.Series(["a", 2, np.nan], dtype="string")
In [8]: s
Out[8]:
0 a
1 2
2 <NA>
dtype: string
In [9]: type(s[1])
Out[9]: str
or convert from existing pandas data:
In [10]: s1 = pd.Series([1, 2, np.nan], dtype="Int64")
In [11]: s1
Out[11]:
0 1
1 2
2 <NA>
dtype: Int64
In [12]: s2 = s1.astype("string")
In [13]: s2
Out[13]:
0 1
1 2
2 <NA>
dtype: string
In [14]: type(s2[0])
Out[14]: str
Behavior differences#
These are places where the behavior of StringDtype objects differ from
object dtype
For StringDtype, string accessor methods
that return numeric output will always return a nullable integer dtype,
rather than either int or float dtype, depending on the presence of NA values.
Methods returning boolean output will return a nullable boolean dtype.
In [15]: s = pd.Series(["a", None, "b"], dtype="string")
In [16]: s
Out[16]:
0 a
1 <NA>
2 b
dtype: string
In [17]: s.str.count("a")
Out[17]:
0 1
1 <NA>
2 0
dtype: Int64
In [18]: s.dropna().str.count("a")
Out[18]:
0 1
2 0
dtype: Int64
Both outputs are Int64 dtype. Compare that with object-dtype
In [19]: s2 = pd.Series(["a", None, "b"], dtype="object")
In [20]: s2.str.count("a")
Out[20]:
0 1.0
1 NaN
2 0.0
dtype: float64
In [21]: s2.dropna().str.count("a")
Out[21]:
0 1
2 0
dtype: int64
When NA values are present, the output dtype is float64. Similarly for
methods returning boolean values.
In [22]: s.str.isdigit()
Out[22]:
0 False
1 <NA>
2 False
dtype: boolean
In [23]: s.str.match("a")
Out[23]:
0 True
1 <NA>
2 False
dtype: boolean
Some string methods, like Series.str.decode() are not available
on StringArray because StringArray only holds strings, not
bytes.
In comparison operations, arrays.StringArray and Series backed
by a StringArray will return an object with BooleanDtype,
rather than a bool dtype object. Missing values in a StringArray
will propagate in comparison operations, rather than always comparing
unequal like numpy.nan.
Everything else that follows in the rest of this document applies equally to
string and object dtype.
String methods#
Series and Index are equipped with a set of string processing methods
that make it easy to operate on each element of the array. Perhaps most
importantly, these methods exclude missing/NA values automatically. These are
accessed via the str attribute and generally have names matching
the equivalent (scalar) built-in string methods:
In [24]: s = pd.Series(
....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
....: )
....:
In [25]: s.str.lower()
Out[25]:
0 a
1 b
2 c
3 aaba
4 baca
5 <NA>
6 caba
7 dog
8 cat
dtype: string
In [26]: s.str.upper()
Out[26]:
0 A
1 B
2 C
3 AABA
4 BACA
5 <NA>
6 CABA
7 DOG
8 CAT
dtype: string
In [27]: s.str.len()
Out[27]:
0 1
1 1
2 1
3 4
4 4
5 <NA>
6 4
7 3
8 3
dtype: Int64
In [28]: idx = pd.Index([" jack", "jill ", " jesse ", "frank"])
In [29]: idx.str.strip()
Out[29]: Index(['jack', 'jill', 'jesse', 'frank'], dtype='object')
In [30]: idx.str.lstrip()
Out[30]: Index(['jack', 'jill ', 'jesse ', 'frank'], dtype='object')
In [31]: idx.str.rstrip()
Out[31]: Index([' jack', 'jill', ' jesse', 'frank'], dtype='object')
The string methods on Index are especially useful for cleaning up or
transforming DataFrame columns. For instance, you may have columns with
leading or trailing whitespace:
In [32]: df = pd.DataFrame(
....: np.random.randn(3, 2), columns=[" Column A ", " Column B "], index=range(3)
....: )
....:
In [33]: df
Out[33]:
Column A Column B
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215
Since df.columns is an Index object, we can use the .str accessor
In [34]: df.columns.str.strip()
Out[34]: Index(['Column A', 'Column B'], dtype='object')
In [35]: df.columns.str.lower()
Out[35]: Index([' column a ', ' column b '], dtype='object')
These string methods can then be used to clean up the columns as needed.
Here we are removing leading and trailing whitespaces, lower casing all names,
and replacing any remaining whitespaces with underscores:
In [36]: df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")
In [37]: df
Out[37]:
column_a column_b
0 0.469112 -0.282863
1 -1.509059 -1.135632
2 1.212112 -0.173215
Note
If you have a Series where lots of elements are repeated
(i.e. the number of unique elements in the Series is a lot smaller than the length of the
Series), it can be faster to convert the original Series to one of type
category and then use .str.<method> or .dt.<property> on that.
The performance difference comes from the fact that, for Series of type category, the
string operations are done on the .categories and not on each element of the
Series.
Please note that a Series of type category with string .categories has
some limitations in comparison to Series of type string (e.g. you can’t add strings to
each other: s + " " + s won’t work if s is a Series of type category). Also,
.str methods which operate on elements of type list are not available on such a
Series.
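A minimal sketch of that conversion (assuming pandas is imported as pd, as elsewhere in these examples):
s = pd.Series(["low", "high", "low", "low"] * 1000)
s_cat = s.astype("category")
# the string operation is performed on the two categories, not on every element
upper = s_cat.str.upper()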
Warning
Before v.0.25.0, the .str-accessor did only the most rudimentary type checks. Starting with
v.0.25.0, the type of the Series is inferred and the allowed types (i.e. strings) are enforced more rigorously.
Generally speaking, the .str accessor is intended to work only on strings. With very few
exceptions, other uses are not supported, and may be disabled at a later point.
Splitting and replacing strings#
Methods like split return a Series of lists:
In [38]: s2 = pd.Series(["a_b_c", "c_d_e", np.nan, "f_g_h"], dtype="string")
In [39]: s2.str.split("_")
Out[39]:
0 [a, b, c]
1 [c, d, e]
2 <NA>
3 [f, g, h]
dtype: object
Elements in the split lists can be accessed using get or [] notation:
In [40]: s2.str.split("_").str.get(1)
Out[40]:
0 b
1 d
2 <NA>
3 g
dtype: object
In [41]: s2.str.split("_").str[1]
Out[41]:
0 b
1 d
2 <NA>
3 g
dtype: object
It is easy to expand this to return a DataFrame using expand.
In [42]: s2.str.split("_", expand=True)
Out[42]:
0 1 2
0 a b c
1 c d e
2 <NA> <NA> <NA>
3 f g h
When original Series has StringDtype, the output columns will all
be StringDtype as well.
It is also possible to limit the number of splits:
In [43]: s2.str.split("_", expand=True, n=1)
Out[43]:
0 1
0 a b_c
1 c d_e
2 <NA> <NA>
3 f g_h
rsplit is similar to split except it works in the reverse direction,
i.e., from the end of the string to the beginning of the string:
In [44]: s2.str.rsplit("_", expand=True, n=1)
Out[44]:
0 1
0 a_b c
1 c_d e
2 <NA> <NA>
3 f_g h
replace optionally uses regular expressions:
In [45]: s3 = pd.Series(
....: ["A", "B", "C", "Aaba", "Baca", "", np.nan, "CABA", "dog", "cat"],
....: dtype="string",
....: )
....:
In [46]: s3
Out[46]:
0 A
1 B
2 C
3 Aaba
4 Baca
5
6 <NA>
7 CABA
8 dog
9 cat
dtype: string
In [47]: s3.str.replace("^.a|dog", "XX-XX ", case=False, regex=True)
Out[47]:
0 A
1 B
2 C
3 XX-XX ba
4 XX-XX ca
5
6 <NA>
7 XX-XX BA
8 XX-XX
9 XX-XX t
dtype: string
Warning
Some caution must be taken when dealing with regular expressions! The current behavior
is to treat single character patterns as literal strings, even when regex is set
to True. This behavior is deprecated and will be removed in a future version so
that the regex keyword is always respected.
Changed in version 1.2.0.
If you want literal replacement of a string (equivalent to str.replace()), you
can set the optional regex parameter to False, rather than escaping each
character. In this case both pat and repl must be strings:
In [48]: dollars = pd.Series(["12", "-$10", "$10,000"], dtype="string")
# These lines are equivalent
In [49]: dollars.str.replace(r"-\$", "-", regex=True)
Out[49]:
0 12
1 -10
2 $10,000
dtype: string
In [50]: dollars.str.replace("-$", "-", regex=False)
Out[50]:
0 12
1 -10
2 $10,000
dtype: string
The replace method can also take a callable as replacement. It is called
on every pat using re.sub(). The callable should expect one
positional argument (a regex object) and return a string.
# Reverse every lowercase alphabetic word
In [51]: pat = r"[a-z]+"
In [52]: def repl(m):
....: return m.group(0)[::-1]
....:
In [53]: pd.Series(["foo 123", "bar baz", np.nan], dtype="string").str.replace(
....: pat, repl, regex=True
....: )
....:
Out[53]:
0 oof 123
1 rab zab
2 <NA>
dtype: string
# Using regex groups
In [54]: pat = r"(?P<one>\w+) (?P<two>\w+) (?P<three>\w+)"
In [55]: def repl(m):
....: return m.group("two").swapcase()
....:
In [56]: pd.Series(["Foo Bar Baz", np.nan], dtype="string").str.replace(
....: pat, repl, regex=True
....: )
....:
Out[56]:
0 bAR
1 <NA>
dtype: string
The replace method also accepts a compiled regular expression object
from re.compile() as a pattern. All flags should be included in the
compiled regular expression object.
In [57]: import re
In [58]: regex_pat = re.compile(r"^.a|dog", flags=re.IGNORECASE)
In [59]: s3.str.replace(regex_pat, "XX-XX ", regex=True)
Out[59]:
0 A
1 B
2 C
3 XX-XX ba
4 XX-XX ca
5
6 <NA>
7 XX-XX BA
8 XX-XX
9 XX-XX t
dtype: string
Including a flags argument when calling replace with a compiled
regular expression object will raise a ValueError.
In [60]: s3.str.replace(regex_pat, 'XX-XX ', flags=re.IGNORECASE)
---------------------------------------------------------------------------
ValueError: case and flags cannot be set when pat is a compiled regex
removeprefix and removesuffix have the same effect as str.removeprefix and str.removesuffix, added in Python 3.9 (https://docs.python.org/3/library/stdtypes.html#str.removeprefix):
New in version 1.4.0.
In [61]: s = pd.Series(["str_foo", "str_bar", "no_prefix"])
In [62]: s.str.removeprefix("str_")
Out[62]:
0 foo
1 bar
2 no_prefix
dtype: object
In [63]: s = pd.Series(["foo_str", "bar_str", "no_suffix"])
In [64]: s.str.removesuffix("_str")
Out[64]:
0 foo
1 bar
2 no_suffix
dtype: object
Concatenation#
There are several ways to concatenate a Series or Index, either with itself or others, all based on cat(),
resp. Index.str.cat.
Concatenating a single Series into a string#
The content of a Series (or Index) can be concatenated:
In [65]: s = pd.Series(["a", "b", "c", "d"], dtype="string")
In [66]: s.str.cat(sep=",")
Out[66]: 'a,b,c,d'
If not specified, the keyword sep for the separator defaults to the empty string, sep='':
In [67]: s.str.cat()
Out[67]: 'abcd'
By default, missing values are ignored. Using na_rep, they can be given a representation:
In [68]: t = pd.Series(["a", "b", np.nan, "d"], dtype="string")
In [69]: t.str.cat(sep=",")
Out[69]: 'a,b,d'
In [70]: t.str.cat(sep=",", na_rep="-")
Out[70]: 'a,b,-,d'
Concatenating a Series and something list-like into a Series#
The first argument to cat() can be a list-like object, provided that it matches the length of the calling Series (or Index).
In [71]: s.str.cat(["A", "B", "C", "D"])
Out[71]:
0 aA
1 bB
2 cC
3 dD
dtype: string
Missing values on either side will result in missing values in the result as well, unless na_rep is specified:
In [72]: s.str.cat(t)
Out[72]:
0 aa
1 bb
2 <NA>
3 dd
dtype: string
In [73]: s.str.cat(t, na_rep="-")
Out[73]:
0 aa
1 bb
2 c-
3 dd
dtype: string
Concatenating a Series and something array-like into a Series#
The parameter others can also be two-dimensional. In this case, the number or rows must match the lengths of the calling Series (or Index).
In [74]: d = pd.concat([t, s], axis=1)
In [75]: s
Out[75]:
0 a
1 b
2 c
3 d
dtype: string
In [76]: d
Out[76]:
0 1
0 a a
1 b b
2 <NA> c
3 d d
In [77]: s.str.cat(d, na_rep="-")
Out[77]:
0 aaa
1 bbb
2 c-c
3 ddd
dtype: string
Concatenating a Series and an indexed object into a Series, with alignment#
For concatenation with a Series or DataFrame, it is possible to align the indexes before concatenation by setting
the join-keyword.
In [78]: u = pd.Series(["b", "d", "a", "c"], index=[1, 3, 0, 2], dtype="string")
In [79]: s
Out[79]:
0 a
1 b
2 c
3 d
dtype: string
In [80]: u
Out[80]:
1 b
3 d
0 a
2 c
dtype: string
In [81]: s.str.cat(u)
Out[81]:
0 aa
1 bb
2 cc
3 dd
dtype: string
In [82]: s.str.cat(u, join="left")
Out[82]:
0 aa
1 bb
2 cc
3 dd
dtype: string
Warning
If the join keyword is not passed, the method cat() will currently fall back to the behavior before version 0.23.0 (i.e. no alignment),
but a FutureWarning will be raised if any of the involved indexes differ, since this default will change to join='left' in a future version.
The usual options are available for join (one of 'left', 'outer', 'inner', 'right').
In particular, alignment also means that the different lengths do not need to coincide anymore.
In [83]: v = pd.Series(["z", "a", "b", "d", "e"], index=[-1, 0, 1, 3, 4], dtype="string")
In [84]: s
Out[84]:
0 a
1 b
2 c
3 d
dtype: string
In [85]: v
Out[85]:
-1 z
0 a
1 b
3 d
4 e
dtype: string
In [86]: s.str.cat(v, join="left", na_rep="-")
Out[86]:
0 aa
1 bb
2 c-
3 dd
dtype: string
In [87]: s.str.cat(v, join="outer", na_rep="-")
Out[87]:
-1 -z
0 aa
1 bb
2 c-
3 dd
4 -e
dtype: string
The same alignment can be used when others is a DataFrame:
In [88]: f = d.loc[[3, 2, 1, 0], :]
In [89]: s
Out[89]:
0 a
1 b
2 c
3 d
dtype: string
In [90]: f
Out[90]:
0 1
3 d d
2 <NA> c
1 b b
0 a a
In [91]: s.str.cat(f, join="left", na_rep="-")
Out[91]:
0 aaa
1 bbb
2 c-c
3 ddd
dtype: string
Concatenating a Series and many objects into a Series#
Several array-like items (specifically: Series, Index, and 1-dimensional variants of np.ndarray)
can be combined in a list-like container (including iterators, dict-views, etc.).
In [92]: s
Out[92]:
0 a
1 b
2 c
3 d
dtype: string
In [93]: u
Out[93]:
1 b
3 d
0 a
2 c
dtype: string
In [94]: s.str.cat([u, u.to_numpy()], join="left")
Out[94]:
0 aab
1 bbd
2 cca
3 ddc
dtype: string
All elements without an index (e.g. np.ndarray) within the passed list-like must match in length to the calling Series (or Index),
but Series and Index may have arbitrary length (as long as alignment is not disabled with join=None):
In [95]: v
Out[95]:
-1 z
0 a
1 b
3 d
4 e
dtype: string
In [96]: s.str.cat([v, u, u.to_numpy()], join="outer", na_rep="-")
Out[96]:
-1 -z--
0 aaab
1 bbbd
2 c-ca
3 dddc
4 -e--
dtype: string
If using join='right' on a list-like of others that contains different indexes,
the union of these indexes will be used as the basis for the final concatenation:
In [97]: u.loc[[3]]
Out[97]:
3 d
dtype: string
In [98]: v.loc[[-1, 0]]
Out[98]:
-1 z
0 a
dtype: string
In [99]: s.str.cat([u.loc[[3]], v.loc[[-1, 0]]], join="right", na_rep="-")
Out[99]:
3 dd-
-1 --z
0 a-a
dtype: string
Indexing with .str#
You can use [] notation to directly index by position locations. If you index past the end
of the string, the result will be a NaN.
In [100]: s = pd.Series(
.....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
.....: )
.....:
In [101]: s.str[0]
Out[101]:
0 A
1 B
2 C
3 A
4 B
5 <NA>
6 C
7 d
8 c
dtype: string
In [102]: s.str[1]
Out[102]:
0 <NA>
1 <NA>
2 <NA>
3 a
4 a
5 <NA>
6 A
7 o
8 a
dtype: string
Extracting substrings#
Extract first match in each subject (extract)#
Warning
Before version 0.23, argument expand of the extract method defaulted to
False. When expand=False, expand returns a Series, Index, or
DataFrame, depending on the subject and regular expression
pattern. When expand=True, it always returns a DataFrame,
which is more consistent and less confusing from the perspective of a user.
expand=True has been the default since version 0.23.0.
The extract method accepts a regular expression with at least one
capture group.
Extracting a regular expression with more than one group returns a
DataFrame with one column per group.
In [103]: pd.Series(
.....: ["a1", "b2", "c3"],
.....: dtype="string",
.....: ).str.extract(r"([ab])(\d)", expand=False)
.....:
Out[103]:
0 1
0 a 1
1 b 2
2 <NA> <NA>
Elements that do not match return a row filled with NaN. Thus, a
Series of messy strings can be “converted” into a like-indexed Series
or DataFrame of cleaned-up or more useful strings, without
necessitating get() to access tuples or re.match objects. The
dtype of the result is always object, even if no match is found and
the result only contains NaN.
Named groups like
In [104]: pd.Series(["a1", "b2", "c3"], dtype="string").str.extract(
.....: r"(?P<letter>[ab])(?P<digit>\d)", expand=False
.....: )
.....:
Out[104]:
letter digit
0 a 1
1 b 2
2 <NA> <NA>
and optional groups like
In [105]: pd.Series(
.....: ["a1", "b2", "3"],
.....: dtype="string",
.....: ).str.extract(r"([ab])?(\d)", expand=False)
.....:
Out[105]:
0 1
0 a 1
1 b 2
2 <NA> 3
can also be used. Note that any capture group names in the regular
expression will be used for column names; otherwise capture group
numbers will be used.
Extracting a regular expression with one group returns a DataFrame
with one column if expand=True.
In [106]: pd.Series(["a1", "b2", "c3"], dtype="string").str.extract(r"[ab](\d)", expand=True)
Out[106]:
0
0 1
1 2
2 <NA>
It returns a Series if expand=False.
In [107]: pd.Series(["a1", "b2", "c3"], dtype="string").str.extract(r"[ab](\d)", expand=False)
Out[107]:
0 1
1 2
2 <NA>
dtype: string
Calling on an Index with a regex with exactly one capture group
returns a DataFrame with one column if expand=True.
In [108]: s = pd.Series(["a1", "b2", "c3"], ["A11", "B22", "C33"], dtype="string")
In [109]: s
Out[109]:
A11 a1
B22 b2
C33 c3
dtype: string
In [110]: s.index.str.extract("(?P<letter>[a-zA-Z])", expand=True)
Out[110]:
letter
0 A
1 B
2 C
It returns an Index if expand=False.
In [111]: s.index.str.extract("(?P<letter>[a-zA-Z])", expand=False)
Out[111]: Index(['A', 'B', 'C'], dtype='object', name='letter')
Calling on an Index with a regex with more than one capture group
returns a DataFrame if expand=True.
In [112]: s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=True)
Out[112]:
letter 1
0 A 11
1 B 22
2 C 33
It raises ValueError if expand=False.
>>> s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=False)
ValueError: only one regex group is supported with Index
The table below summarizes the behavior of extract(expand=False)
(input subject in first column, number of groups in regex in
first row)
          1 group    >1 group
Index     Index      ValueError
Series    Series     DataFrame
Extract all matches in each subject (extractall)#
Unlike extract (which returns only the first match),
In [113]: s = pd.Series(["a1a2", "b1", "c1"], index=["A", "B", "C"], dtype="string")
In [114]: s
Out[114]:
A a1a2
B b1
C c1
dtype: string
In [115]: two_groups = "(?P<letter>[a-z])(?P<digit>[0-9])"
In [116]: s.str.extract(two_groups, expand=True)
Out[116]:
letter digit
A a 1
B b 1
C c 1
the extractall method returns every match. The result of
extractall is always a DataFrame with a MultiIndex on its
rows. The last level of the MultiIndex is named match and
indicates the order in the subject.
In [117]: s.str.extractall(two_groups)
Out[117]:
letter digit
match
A 0 a 1
1 a 2
B 0 b 1
C 0 c 1
When each subject string in the Series has exactly one match,
In [118]: s = pd.Series(["a3", "b3", "c2"], dtype="string")
In [119]: s
Out[119]:
0 a3
1 b3
2 c2
dtype: string
then extractall(pat).xs(0, level='match') gives the same result as
extract(pat).
In [120]: extract_result = s.str.extract(two_groups, expand=True)
In [121]: extract_result
Out[121]:
letter digit
0 a 3
1 b 3
2 c 2
In [122]: extractall_result = s.str.extractall(two_groups)
In [123]: extractall_result
Out[123]:
letter digit
match
0 0 a 3
1 0 b 3
2 0 c 2
In [124]: extractall_result.xs(0, level="match")
Out[124]:
letter digit
0 a 3
1 b 3
2 c 2
Index also supports .str.extractall. It returns a DataFrame which has the
same result as a Series.str.extractall with a default index (starts from 0).
In [125]: pd.Index(["a1a2", "b1", "c1"]).str.extractall(two_groups)
Out[125]:
letter digit
match
0 0 a 1
1 a 2
1 0 b 1
2 0 c 1
In [126]: pd.Series(["a1a2", "b1", "c1"], dtype="string").str.extractall(two_groups)
Out[126]:
letter digit
match
0 0 a 1
1 a 2
1 0 b 1
2 0 c 1
Testing for strings that match or contain a pattern#
You can check whether elements contain a pattern:
In [127]: pattern = r"[0-9][a-z]"
In [128]: pd.Series(
.....: ["1", "2", "3a", "3b", "03c", "4dx"],
.....: dtype="string",
.....: ).str.contains(pattern)
.....:
Out[128]:
0 False
1 False
2 True
3 True
4 True
5 True
dtype: boolean
Or whether elements match a pattern:
In [129]: pd.Series(
.....: ["1", "2", "3a", "3b", "03c", "4dx"],
.....: dtype="string",
.....: ).str.match(pattern)
.....:
Out[129]:
0 False
1 False
2 True
3 True
4 False
5 True
dtype: boolean
New in version 1.1.0.
In [130]: pd.Series(
.....: ["1", "2", "3a", "3b", "03c", "4dx"],
.....: dtype="string",
.....: ).str.fullmatch(pattern)
.....:
Out[130]:
0 False
1 False
2 True
3 True
4 False
5 False
dtype: boolean
Note
The distinction between match, fullmatch, and contains is strictness:
fullmatch tests whether the entire string matches the regular expression;
match tests whether there is a match of the regular expression that begins
at the first character of the string; and contains tests whether there is
a match of the regular expression at any position within the string.
The corresponding functions in the re package for these three match modes are
re.fullmatch,
re.match, and
re.search,
respectively.
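A minimal side-by-side sketch of the three modes on a single value (assuming pandas is imported as pd, as in the examples above):
s = pd.Series(["3a4"], dtype="string")
s.str.contains("[0-9][a-z]")   # True: the pattern occurs somewhere in the string
s.str.match("[0-9][a-z]")      # True: a match starts at the first character
s.str.fullmatch("[0-9][a-z]")  # False: the pattern does not cover the entire string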
Methods like match, fullmatch, contains, startswith, and
endswith take an extra na argument so missing values can be considered
True or False:
In [131]: s4 = pd.Series(
.....: ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="string"
.....: )
.....:
In [132]: s4.str.contains("A", na=False)
Out[132]:
0 True
1 False
2 False
3 True
4 False
5 False
6 True
7 False
8 False
dtype: boolean
Creating indicator variables#
You can extract dummy variables from string columns.
For example if they are separated by a '|':
In [133]: s = pd.Series(["a", "a|b", np.nan, "a|c"], dtype="string")
In [134]: s.str.get_dummies(sep="|")
Out[134]:
a b c
0 1 0 0
1 1 1 0
2 0 0 0
3 1 0 1
String Index also supports get_dummies which returns a MultiIndex.
In [135]: idx = pd.Index(["a", "a|b", np.nan, "a|c"])
In [136]: idx.str.get_dummies(sep="|")
Out[136]:
MultiIndex([(1, 0, 0),
(1, 1, 0),
(0, 0, 0),
(1, 0, 1)],
names=['a', 'b', 'c'])
See also get_dummies().
Method summary#
Method - Description
cat() - Concatenate strings
split() - Split strings on delimiter
rsplit() - Split strings on delimiter working from the end of the string
get() - Index into each element (retrieve i-th element)
join() - Join strings in each element of the Series with passed separator
get_dummies() - Split strings on the delimiter returning DataFrame of dummy variables
contains() - Return boolean array if each string contains pattern/regex
replace() - Replace occurrences of pattern/regex/string with some other string or the return value of a callable given the occurrence
removeprefix() - Remove prefix from string, i.e. only remove if string starts with prefix.
removesuffix() - Remove suffix from string, i.e. only remove if string ends with suffix.
repeat() - Duplicate values (s.str.repeat(3) equivalent to x * 3)
pad() - Add whitespace to left, right, or both sides of strings
center() - Equivalent to str.center
ljust() - Equivalent to str.ljust
rjust() - Equivalent to str.rjust
zfill() - Equivalent to str.zfill
wrap() - Split long strings into lines with length less than a given width
slice() - Slice each string in the Series
slice_replace() - Replace slice in each string with passed value
count() - Count occurrences of pattern
startswith() - Equivalent to str.startswith(pat) for each element
endswith() - Equivalent to str.endswith(pat) for each element
findall() - Compute list of all occurrences of pattern/regex for each string
match() - Call re.match on each element, returning matched groups as list
extract() - Call re.search on each element, returning DataFrame with one row for each element and one column for each regex capture group
extractall() - Call re.findall on each element, returning DataFrame with one row for each match and one column for each regex capture group
len() - Compute string lengths
strip() - Equivalent to str.strip
rstrip() - Equivalent to str.rstrip
lstrip() - Equivalent to str.lstrip
partition() - Equivalent to str.partition
rpartition() - Equivalent to str.rpartition
lower() - Equivalent to str.lower
casefold() - Equivalent to str.casefold
upper() - Equivalent to str.upper
find() - Equivalent to str.find
rfind() - Equivalent to str.rfind
index() - Equivalent to str.index
rindex() - Equivalent to str.rindex
capitalize() - Equivalent to str.capitalize
swapcase() - Equivalent to str.swapcase
normalize() - Return Unicode normal form. Equivalent to unicodedata.normalize
translate() - Equivalent to str.translate
isalnum() - Equivalent to str.isalnum
isalpha() - Equivalent to str.isalpha
isdigit() - Equivalent to str.isdigit
isspace() - Equivalent to str.isspace
islower() - Equivalent to str.islower
isupper() - Equivalent to str.isupper
istitle() - Equivalent to str.istitle
isnumeric() - Equivalent to str.isnumeric
isdecimal() - Equivalent to str.isdecimal
| 551
| 970
|
Word in string exists but not recognized
I have a df that contains a column that has a description, I used the following code to extract specific words and create:
def criteria (df):
if df.DESCRIPCION.find('CORONITA')>0:
return ('Corona')
else:
return ('Otras')
df['Marca'] = df.apply(criteria, axis=1)
As you can see, the word exists, but pandas applies 'Otras' instead of Corona.
Any advice?
|
63,817,458
|
pandas, selective join based on nearest date
|
<p>I have a data-frame, X, that contains the following</p>
<pre><code>Index A B
2020-09-08 0.252167 0.263719
2020-09-05 0.266898 0.270347
2019-09-04 0.254873 0.273878
</code></pre>
<p>I have another data-frame, Y, that contains the following</p>
<pre><code>Index C
2021-09-08 0.252167
2015-09-05 0.266898
</code></pre>
<p>For every row in Y I want to efficiently select the nearest row in X and join them together. Here 'nearest' is a function of the index, i.e. whichever date is closer.</p>
<p>In this case this should return.</p>
<pre><code>Index Index2 C A B
2021-09-08 2020-09-08 0.252167 0.252167 0.263719
2015-09-05 2019-09-04 0.266898 0.254873 0.273878
</code></pre>
<p>(note: both indexes are datetime objects)</p>
<p>Since 2020-09-08 is the closest to 2021-09-08 and 2019-09-04 is the closest to 2015-09-05.</p>
<p>I can do this by iterating through each index of Y and calling</p>
<p>X.index.get_loc(currentYIndex, "nearest")</p>
<p>Is there a more efficient way of doing this ?</p>
| 63,817,579
| 2020-09-09T18:23:14.207000
| 1
| null | 2
| 47
|
python|pandas
|
<p>This is along the lines of what Quang's comment suggests, but with a bit more detail:</p>
<pre><code>df1['Index2'] = df1['Index']
Out = pd.merge_asof(df2.sort_values('Index'),
                    df1.sort_values('Index'),
                    on='Index',
                    direction='nearest',
                    allow_exact_matches=False)
Out[33]:
Index C A B Index2
0 2015-09-05 0.266898 0.254873 0.273878 2019-09-04
1 2021-09-08 0.252167 0.252167 0.263719 2020-09-08
</code></pre>
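<p>If both frames carry a <code>DatetimeIndex</code> rather than an <code>Index</code> column, a rough equivalent with <code>Index.get_indexer</code> is sketched below. It is only illustrative: it assumes <code>X</code> and <code>Y</code> are named as in the question and that <code>X</code> is first sorted by its index.</p>
<pre><code>X_sorted = X.sort_index()
# position of the nearest X row for every Y row
pos = X_sorted.index.get_indexer(Y.index, method='nearest')

out = Y.copy()
out['Index2'] = X_sorted.index[pos]
out[['A', 'B']] = X_sorted.iloc[pos][['A', 'B']].to_numpy()
</code></pre>
<p>Either approach avoids the Python-level loop over <code>get_loc</code>.</p>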
| 2020-09-09T18:33:14.427000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.between_time.html
|
pandas.DataFrame.between_time#
pandas.DataFrame.between_time#
DataFrame.between_time(start_time, end_time, include_start=_NoDefault.no_default, include_end=_NoDefault.no_default, inclusive=None, axis=None)[source]#
Select values between particular times of the day (e.g., 9:00-9:30 AM).
By setting start_time to be later than end_time,
you can get the times that are not between the two times.
Parameters
start_timedatetime.time or strInitial time as a time filter limit.
This is along the lines of what Quang's comment suggests, but with a bit more detail:
df1['Index2'] = df1['Index']
Out = pd.merge_asof(df2.sort_values('Index'),
                    df1.sort_values('Index'),
                    on='Index',
                    direction='nearest',
                    allow_exact_matches=False)
Out[33]:
Index C A B Index2
0 2015-09-05 0.266898 0.254873 0.273878 2019-09-04
1 2021-09-08 0.252167 0.252167 0.263719 2020-09-08
end_timedatetime.time or strEnd time as a time filter limit.
include_startbool, default TrueWhether the start time needs to be included in the result.
Deprecated since version 1.4.0: Arguments include_start and include_end have been deprecated
to standardize boundary inputs. Use inclusive instead, to set
each bound as closed or open.
include_endbool, default TrueWhether the end time needs to be included in the result.
Deprecated since version 1.4.0: Arguments include_start and include_end have been deprecated
to standardize boundary inputs. Use inclusive instead, to set
each bound as closed or open.
inclusive{“both”, “neither”, “left”, “right”}, default “both”Include boundaries; whether to set each bound as closed or open.
axis{0 or ‘index’, 1 or ‘columns’}, default 0Determine range time on index or columns value.
For Series this parameter is unused and defaults to 0.
Returns
Series or DataFrameData from the original object filtered to the specified dates range.
Raises
TypeErrorIf the index is not a DatetimeIndex
See also
at_timeSelect values at a particular time of the day.
firstSelect initial periods of time series based on a date offset.
lastSelect final periods of time series based on a date offset.
DatetimeIndex.indexer_between_timeGet just the index locations for values between particular times of the day.
Examples
>>> i = pd.date_range('2018-04-09', periods=4, freq='1D20min')
>>> ts = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ts
A
2018-04-09 00:00:00 1
2018-04-10 00:20:00 2
2018-04-11 00:40:00 3
2018-04-12 01:00:00 4
>>> ts.between_time('0:15', '0:45')
A
2018-04-10 00:20:00 2
2018-04-11 00:40:00 3
You get the times that are not between two times by setting
start_time later than end_time:
>>> ts.between_time('0:45', '0:15')
A
2018-04-09 00:00:00 1
2018-04-12 01:00:00 4
| 478
| 954
|
pandas, selective join based on nearest date
I have a data-frame, X, that contains the following
Index A B
2020-09-08 0.252167 0.263719
2020-09-05 0.266898 0.270347
2019-09-04 0.254873 0.273878
I have another data-frame, Y, that contains the following
Index C
2021-09-08 0.252167
2015-09-05 0.266898
For every row in Y I want to efficiently select the nearest row in X and join them together. Here 'nearest' is a function of the index, i.e. whichever date is closer.
In this case this should return.
Index Index2 C A B
2021-09-08 2020-09-08 0.252167 0.252167 0.263719
2015-09-05 2019-09-04 0.266898 0.254873 0.273878
(note: both indexes are datetime objects)
Since 2020-09-08 is the closest to 2021-09-08 and 2019-09-04 is the closest to 2015-09-05.
I can do this by iterating through each index of Y and calling
X.index.get_loc(currentYIndex, "nearest")
Is there a more efficient way of doing this ?
|
65,735,657
|
Pandas: detect changes of date values in a pandas Series in Python
|
<p>I have a pandas Series as follows:</p>
<pre><code> value0 value1 value2 value3 value4 value5
0 2020-10-22 2020-10-22 2020-10-22 2020-10-22 2020-12-02 2020-12-03
</code></pre>
<p>Values are datetime.date objects.</p>
<p>I need to find the column names or indices where the date changes, so the output will be ["value0", "value4", "value5"].</p>
<p>How can I do this?</p>
| 65,735,705
| 2021-01-15T11:54:57.650000
| 1
| null | 1
| 50
|
python|pandas
|
<p>If <code>s</code> is the input <code>Series</code>, first convert to datetimes if necessary, then get the difference with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.diff.html" rel="nofollow noreferrer"><code>Series.diff</code></a>, compare for not equal to <code>0</code>, and filter the index values by this mask:</p>
<pre><code>#if input is one row DataFrame
#s = df.T.iloc[:,0]
s = pd.to_datetime(s)
mask = s.diff().dt.days.ne(0)
#alternative
#mask = s.diff().ne(pd.Timedelta(0))
out = mask.index[mask].tolist()
print (out)
['value0', 'value4', 'value5']
</code></pre>
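<p>A shift-and-compare variant gives the same mask without going through day counts; this is just a sketch over the same <code>s</code>:</p>
<pre><code>s = pd.to_datetime(s)
mask = s.ne(s.shift())   # True wherever the date differs from the previous column
out = mask.index[mask].tolist()
print(out)
['value0', 'value4', 'value5']
</code></pre>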
| 2021-01-15T11:58:21.747000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.diff.html
|
pandas.DataFrame.diff#
pandas.DataFrame.diff#
DataFrame.diff(periods=1, axis=0)[source]#
First discrete difference of element.
Calculates the difference of a DataFrame element compared with another
element in the DataFrame (default is element in previous row).
Parameters
periodsint, default 1Periods to shift for calculating difference, accepts negative
If s is the input Series, first convert to datetimes if necessary, then get the difference with Series.diff, compare for not equal to 0, and filter the index values by this mask:
#if input is one row DataFrame
#s = df.T.iloc[:,0]
s = pd.to_datetime(s)
mask = s.diff().dt.days.ne(0)
#alternative
#mask = s.diff().ne(pd.Timedelta(0))
out = mask.index[mask].tolist()
print (out)
['value0', 'value4', 'value5']
values.
axis{0 or ‘index’, 1 or ‘columns’}, default 0Take difference over rows (0) or columns (1).
Returns
DataFrameFirst differences of the Series.
See also
DataFrame.pct_changePercent change over given number of periods.
DataFrame.shiftShift index by desired number of periods with an optional time freq.
Series.diffFirst discrete difference of object.
Notes
For boolean dtypes, this uses operator.xor() rather than
operator.sub().
The result is calculated according to current dtype in DataFrame,
however dtype of the result is always float64.
Examples
Difference with previous row
>>> df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6],
... 'b': [1, 1, 2, 3, 5, 8],
... 'c': [1, 4, 9, 16, 25, 36]})
>>> df
a b c
0 1 1 1
1 2 1 4
2 3 2 9
3 4 3 16
4 5 5 25
5 6 8 36
>>> df.diff()
a b c
0 NaN NaN NaN
1 1.0 0.0 3.0
2 1.0 1.0 5.0
3 1.0 1.0 7.0
4 1.0 2.0 9.0
5 1.0 3.0 11.0
Difference with previous column
>>> df.diff(axis=1)
a b c
0 NaN 0 0
1 NaN -1 3
2 NaN -1 7
3 NaN -1 13
4 NaN 0 20
5 NaN 2 28
Difference with 3rd previous row
>>> df.diff(periods=3)
a b c
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 3.0 2.0 15.0
4 3.0 4.0 21.0
5 3.0 6.0 27.0
Difference with following row
>>> df.diff(periods=-1)
a b c
0 -1.0 0.0 -3.0
1 -1.0 -1.0 -5.0
2 -1.0 -1.0 -7.0
3 -1.0 -2.0 -9.0
4 -1.0 -3.0 -11.0
5 NaN NaN NaN
Overflow in input dtype
>>> df = pd.DataFrame({'a': [1, 0]}, dtype=np.uint8)
>>> df.diff()
a
0 NaN
1 255.0
| 361
| 752
|
Pandas: detect changes of date values in a pandas Series in Python
I have a pandas Series as follows:
value0 value1 value2 value3 value4 value5
0 2020-10-22 2020-10-22 2020-10-22 2020-10-22 2020-12-02 2020-12-03
Values are datetime.date objects.
I need to find the column names or indices where the date changes, so the output will be ["value0", "value4", "value5"].
How can I do this?
|
64,054,851
|
Problem with 'skiprows' when reading csv with pandas
|
<p>I have a big dataframe (~5 million rows) that has some wrong data in it.
I have identified the indexes of the rows with wrong data and now I am trying to remove the 'wrong' rows from the dataframe.</p>
<p>Due to the size of the dataframe, I am using the <code>chunksize</code> feature while reading the csv.
To skip the 'wrong' rows, I am using the <code>skiprows</code> and <code>error_bad_lines</code> features.
I also use the <code>low_memory</code> feature to prevent warnings (and for the purpose of the example I read only the first 20 000 rows).
Then I save the new dataframe in a new csv.</p>
<p>The problem is that only the first 9 'wrong' rows are skipped; after that, 'wrong' rows are still read (and saved to the output csv).</p>
<p>Here is my code:</p>
<pre><code>for df in pd.read_csv('database.csv', chunksize=1000, nrows=20000,
                      low_memory=False, error_bad_lines=False, skiprows=wrong_id_list):
    df.to_csv('database_fixed.csv', mode='a', header=False, index=False)
</code></pre>
<p>where <code>wrong_id_list</code> is the list of indexes of the rows I want to remove:</p>
<p><code>[2689, 3251, 3254, 3589, 3885, 8301, 10062, 10570, 10883, 13118, 16153, 16237, 17601, 18099, 18676]</code></p>
<p>when checking <code>database_fixed.csv</code> I can see that the following rows have wrong data:</p>
<p><code>[13108, 16142, 16225, 17588, 18085, 18661]</code>. So I imagine rows are still being skipped, but not the right ones.</p>
<p>any ideas?</p>
| 64,054,964
| 2020-09-24T21:45:09.243000
| 1
| null | 0
| 52
|
pandas
|
<p>The easiest way to remove the bad rows is to do it explicitly:</p>
<pre><code>df = df.loc[~df.index.isin(list_of_bad_rows)]
</code></pre>
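<p>If the file is too big to load in one go, the same explicit filter can be applied per chunk instead of relying on <code>skiprows</code>, which counts raw file lines (header included) rather than DataFrame indexes. A sketch, reusing the file names and <code>wrong_id_list</code> from the question:</p>
<pre><code>import pandas as pd

wrong_ids = set(wrong_id_list)   # set lookup is faster for large lists
first_chunk = True
for chunk in pd.read_csv('database.csv', chunksize=1000, low_memory=False):
    chunk = chunk.loc[~chunk.index.isin(wrong_ids)]
    chunk.to_csv('database_fixed.csv', mode='w' if first_chunk else 'a',
                 header=first_chunk, index=False)
    first_chunk = False
</code></pre>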
| 2020-09-24T21:55:34.420000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html
|
pandas.read_csv#
pandas.read_csv#
pandas.read_csv(filepath_or_buffer, *, sep=_NoDefault.no_default, delimiter=None, header='infer', names=_NoDefault.no_default, index_col=None, usecols=None, squeeze=None, prefix=_NoDefault.no_default, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, skipfooter=0, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=None, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, cache_dates=True, iterator=False, chunksize=None, compression='infer', thousands=None, decimal='.', lineterminator=None, quotechar='"', quoting=0, doublequote=True, escapechar=None, comment=None, encoding=None, encoding_errors='strict', dialect=None, error_bad_lines=None, warn_bad_lines=None, on_bad_lines=None, delim_whitespace=False, low_memory=True, memory_map=False, float_precision=None, storage_options=None)[source]#
The easiest way to remove the bad rows is to do it explicitly:
df = df.loc[~df.index.isin(list_of_bad_rows)]
Read a comma-separated values (csv) file into DataFrame.
Also supports optionally iterating or breaking of the file
into chunks.
Additional help can be found in the online docs for
IO Tools.
Parameters
filepath_or_bufferstr, path object or file-like objectAny valid string path is acceptable. The string could be a URL. Valid
URL schemes include http, ftp, s3, gs, and file. For file URLs, a host is
expected. A local file could be: file://localhost/path/to/table.csv.
If you want to pass in a path object, pandas accepts any os.PathLike.
By file-like object, we refer to objects with a read() method, such as
a file handle (e.g. via builtin open function) or StringIO.
sepstr, default ‘,’Delimiter to use. If sep is None, the C engine cannot automatically detect
the separator, but the Python parsing engine can, meaning the latter will
be used and automatically detect the separator by Python’s builtin sniffer
tool, csv.Sniffer. In addition, separators longer than 1 character and
different from '\s+' will be interpreted as regular expressions and
will also force the use of the Python parsing engine. Note that regex
delimiters are prone to ignoring quoted data. Regex example: '\r\t'.
delimiterstr, default NoneAlias for sep.
headerint, list of int, None, default ‘infer’Row number(s) to use as the column names, and the start of the
data. Default behavior is to infer the column names: if no names
are passed the behavior is identical to header=0 and column
names are inferred from the first line of the file, if column
names are passed explicitly then the behavior is identical to
header=None. Explicitly pass header=0 to be able to
replace existing names. The header can be a list of integers that
specify row locations for a multi-index on the columns
e.g. [0,1,3]. Intervening rows that are not specified will be
skipped (e.g. 2 in this example is skipped). Note that this
parameter ignores commented lines and empty lines if
skip_blank_lines=True, so header=0 denotes the first line of
data rather than the first line of the file.
namesarray-like, optionalList of column names to use. If the file contains a header row,
then you should explicitly pass header=0 to override the column names.
Duplicates in this list are not allowed.
index_colint, str, sequence of int / str, or False, optional, default NoneColumn(s) to use as the row labels of the DataFrame, either given as
string name or column index. If a sequence of int / str is given, a
MultiIndex is used.
Note: index_col=False can be used to force pandas to not use the first
column as the index, e.g. when you have a malformed file with delimiters at
the end of each line.
usecolslist-like or callable, optionalReturn a subset of the columns. If list-like, all elements must either
be positional (i.e. integer indices into the document columns) or strings
that correspond to column names provided either by the user in names or
inferred from the document header row(s). If names are given, the document
header row(s) are not taken into account. For example, a valid list-like
usecols parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].
Element order is ignored, so usecols=[0, 1] is the same as [1, 0].
To instantiate a DataFrame from data with element order preserved use
pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']] for columns
in ['foo', 'bar'] order or
pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']]
for ['bar', 'foo'] order.
If callable, the callable function will be evaluated against the column
names, returning names where the callable function evaluates to True. An
example of a valid callable argument would be lambda x: x.upper() in
['AAA', 'BBB', 'DDD']. Using this parameter results in much faster
parsing time and lower memory usage.
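As an illustrative sketch (not part of the original parameter documentation), a
callable usecols might look like:
>>> import pandas as pd
>>> from io import StringIO
>>> pd.read_csv(StringIO("a,b,c\n1,2,3\n4,5,6"), usecols=lambda name: name in ['a', 'c'])
   a  c
0  1  3
1  4  6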
squeezebool, default FalseIf the parsed data only contains one column then return a Series.
Deprecated since version 1.4.0: Append .squeeze("columns") to the call to read_csv to squeeze
the data.
prefixstr, optionalPrefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, …
Deprecated since version 1.4.0: Use a list comprehension on the DataFrame’s columns after calling read_csv.
mangle_dupe_colsbool, default TrueDuplicate columns will be specified as ‘X’, ‘X.1’, …’X.N’, rather than
‘X’…’X’. Passing in False will cause data to be overwritten if there
are duplicate names in the columns.
Deprecated since version 1.5.0: Not implemented, and a new argument to specify the pattern for the
names of duplicated columns will be added instead
dtypeType name or dict of column -> type, optionalData type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32,
‘c’: ‘Int64’}
Use str or object together with suitable na_values settings
to preserve and not interpret dtype.
If converters are specified, they will be applied INSTEAD
of dtype conversion.
New in version 1.5.0: Support for defaultdict was added. Specify a defaultdict as input where
the default determines the dtype of the columns which are not explicitly
listed.
engine{‘c’, ‘python’, ‘pyarrow’}, optionalParser engine to use. The C and pyarrow engines are faster, while the python engine
is currently more feature-complete. Multithreading is currently only supported by
the pyarrow engine.
New in version 1.4.0: The “pyarrow” engine was added as an experimental engine, and some features
are unsupported, or may not work correctly, with this engine.
convertersdict, optionalDict of functions for converting values in certain columns. Keys can either
be integers or column labels.
true_valueslist, optionalValues to consider as True.
false_valueslist, optionalValues to consider as False.
skipinitialspacebool, default FalseSkip spaces after delimiter.
skiprowslist-like, int or callable, optionalLine numbers to skip (0-indexed) or number of lines to skip (int)
at the start of the file.
If callable, the callable function will be evaluated against the row
indices, returning True if the row should be skipped and False otherwise.
An example of a valid callable argument would be lambda x: x in [0, 2].
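A short illustrative sketch (assuming a small in-memory CSV; note that skiprows
counts physical file lines, so line 0 is the header row when one is present):
>>> import pandas as pd
>>> from io import StringIO
>>> pd.read_csv(StringIO("col\nkeep\ndrop\nkeep"), skiprows=[2])
    col
0  keep
1  keep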
skipfooterint, default 0Number of lines at bottom of file to skip (Unsupported with engine=’c’).
nrowsint, optionalNumber of rows of file to read. Useful for reading pieces of large files.
na_valuesscalar, str, list-like, or dict, optionalAdditional strings to recognize as NA/NaN. If dict passed, specific
per-column NA values. By default the following values are interpreted as
NaN: ‘’, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’,
‘1.#IND’, ‘1.#QNAN’, ‘<NA>’, ‘N/A’, ‘NA’, ‘NULL’, ‘NaN’, ‘n/a’,
‘nan’, ‘null’.
keep_default_nabool, default TrueWhether or not to include the default NaN values when parsing the data.
Depending on whether na_values is passed in, the behavior is as follows:
If keep_default_na is True, and na_values are specified, na_values
is appended to the default NaN values used for parsing.
If keep_default_na is True, and na_values are not specified, only
the default NaN values are used for parsing.
If keep_default_na is False, and na_values are specified, only
the NaN values specified na_values are used for parsing.
If keep_default_na is False, and na_values are not specified, no
strings will be parsed as NaN.
Note that if na_filter is passed in as False, the keep_default_na and
na_values parameters will be ignored.
na_filterbool, default TrueDetect missing value markers (empty strings and the value of na_values). In
data without any NAs, passing na_filter=False can improve the performance
of reading a large file.
verbosebool, default FalseIndicate number of NA values placed in non-numeric columns.
skip_blank_linesbool, default TrueIf True, skip over blank lines rather than interpreting as NaN values.
parse_datesbool or list of int or names or list of lists or dict, default FalseThe behavior is as follows:
boolean. If True -> try parsing the index.
list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3
each as a separate date column.
list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as
a single date column.
dict, e.g. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call
result ‘foo’
If a column or index cannot be represented as an array of datetimes,
say because of an unparsable value or a mixture of timezones, the column
or index will be returned unaltered as an object data type. For
non-standard datetime parsing, use pd.to_datetime after
pd.read_csv. To parse an index or column with a mixture of timezones,
specify date_parser to be a partially-applied
pandas.to_datetime() with utc=True. See
Parsing a CSV with mixed timezones for more.
Note: A fast-path exists for iso8601-formatted dates.
infer_datetime_formatbool, default FalseIf True and parse_dates is enabled, pandas will attempt to infer the
format of the datetime strings in the columns, and if it can be inferred,
switch to a faster method of parsing them. In some cases this can increase
the parsing speed by 5-10x.
keep_date_colbool, default FalseIf True and parse_dates specifies combining multiple columns then
keep the original columns.
date_parserfunction, optionalFunction to use for converting a sequence of string columns to an array of
datetime instances. The default uses dateutil.parser.parser to do the
conversion. Pandas will try to call date_parser in three different ways,
advancing to the next if an exception occurs: 1) Pass one or more arrays
(as defined by parse_dates) as arguments; 2) concatenate (row-wise) the
string values from the columns defined by parse_dates into a single array
and pass that; and 3) call date_parser once for each row using one or
more strings (corresponding to the columns defined by parse_dates) as
arguments.
dayfirstbool, default FalseDD/MM format dates, international and European format.
cache_datesbool, default TrueIf True, use a cache of unique, converted dates to apply the datetime
conversion. May produce significant speed-up when parsing duplicate
date strings, especially ones with timezone offsets.
New in version 0.25.0.
iteratorbool, default FalseReturn TextFileReader object for iteration or getting chunks with
get_chunk().
Changed in version 1.2: TextFileReader is a context manager.
chunksizeint, optionalReturn TextFileReader object for iteration.
See the IO Tools docs
for more information on iterator and chunksize.
Changed in version 1.2: TextFileReader is a context manager.
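An illustrative sketch of chunked iteration (assuming a file named 'large.csv'
exists and process is a hypothetical per-chunk function):
>>> import pandas as pd
>>> with pd.read_csv('large.csv', chunksize=1000) as reader:
...     for chunk in reader:
...         process(chunk)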
compressionstr or dict, default ‘infer’For on-the-fly decompression of on-disk data. If ‘infer’ and ‘filepath_or_buffer’ is
path-like, then detect compression from the following extensions: ‘.gz’,
‘.bz2’, ‘.zip’, ‘.xz’, ‘.zst’, ‘.tar’, ‘.tar.gz’, ‘.tar.xz’ or ‘.tar.bz2’
(otherwise no compression).
If using ‘zip’ or ‘tar’, the ZIP file must contain only one data file to be read in.
Set to None for no decompression.
Can also be a dict with key 'method' set
to one of {'zip', 'gzip', 'bz2', 'zstd', 'tar'} and other
key-value pairs are forwarded to
zipfile.ZipFile, gzip.GzipFile,
bz2.BZ2File, zstandard.ZstdDecompressor or
tarfile.TarFile, respectively.
As an example, the following could be passed for Zstandard decompression using a
custom compression dictionary:
compression={'method': 'zstd', 'dict_data': my_compression_dict}.
New in version 1.5.0: Added support for .tar files.
Changed in version 1.4.0: Zstandard support.
thousandsstr, optionalThousands separator.
decimalstr, default ‘.’Character to recognize as decimal point (e.g. use ‘,’ for European data).
lineterminatorstr (length 1), optionalCharacter to break file into lines. Only valid with C parser.
quotecharstr (length 1), optionalThe character used to denote the start and end of a quoted item. Quoted
items can include the delimiter and it will be ignored.
quotingint or csv.QUOTE_* instance, default 0Control field quoting behavior per csv.QUOTE_* constants. Use one of
QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
doublequotebool, default TrueWhen quotechar is specified and quoting is not QUOTE_NONE, indicate
whether or not to interpret two consecutive quotechar elements INSIDE a
field as a single quotechar element.
escapecharstr (length 1), optionalOne-character string used to escape other characters.
commentstr, optionalIndicates remainder of line should not be parsed. If found at the beginning
of a line, the line will be ignored altogether. This parameter must be a
single character. Like empty lines (as long as skip_blank_lines=True),
fully commented lines are ignored by the parameter header but not by
skiprows. For example, if comment='#', parsing
#empty\na,b,c\n1,2,3 with header=0 will result in ‘a,b,c’ being
treated as the header.
encodingstr, optionalEncoding to use for UTF when reading/writing (ex. ‘utf-8’). List of Python
standard encodings .
Changed in version 1.2: When encoding is None, errors="replace" is passed to
open(). Otherwise, errors="strict" is passed to open().
This behavior was previously only the case for engine="python".
Changed in version 1.3.0: encoding_errors is a new argument. encoding has no longer an
influence on how encoding errors are handled.
encoding_errorsstr, optional, default “strict”How encoding errors are treated. List of possible values .
New in version 1.3.0.
dialectstr or csv.Dialect, optionalIf provided, this parameter will override values (default or not) for the
following parameters: delimiter, doublequote, escapechar,
skipinitialspace, quotechar, and quoting. If it is necessary to
override values, a ParserWarning will be issued. See csv.Dialect
documentation for more details.
error_bad_linesbool, optional, default NoneLines with too many fields (e.g. a csv line with too many commas) will by
default cause an exception to be raised, and no DataFrame will be returned.
If False, then these “bad lines” will be dropped from the DataFrame that is
returned.
Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon
encountering a bad line instead.
warn_bad_linesbool, optional, default NoneIf error_bad_lines is False, and warn_bad_lines is True, a warning for each
“bad line” will be output.
Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon
encountering a bad line instead.
on_bad_lines{‘error’, ‘warn’, ‘skip’} or callable, default ‘error’Specifies what to do upon encountering a bad line (a line with too many fields).
Allowed values are :
‘error’, raise an Exception when a bad line is encountered.
‘warn’, raise a warning when a bad line is encountered and skip that line.
‘skip’, skip bad lines without raising or warning when they are encountered.
New in version 1.3.0.
New in version 1.4.0:
callable, function with signature
(bad_line: list[str]) -> list[str] | None that will process a single
bad line. bad_line is a list of strings split by the sep.
If the function returns None, the bad line will be ignored.
If the function returns a new list of strings with more elements than
expected, a ParserWarning will be emitted while dropping extra elements.
Only supported when engine="python"
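A minimal sketch of the callable form (illustrative, not taken from the original
documentation):
>>> import pandas as pd
>>> from io import StringIO
>>> pd.read_csv(StringIO("a,b\n1,2\n3,4,5\n6,7"), engine="python",
...             on_bad_lines=lambda bad: bad[:2])
   a  b
0  1  2
1  3  4
2  6  7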
delim_whitespacebool, default FalseSpecifies whether or not whitespace (e.g. ' ' or '\t') will be
used as the sep. Equivalent to setting sep='\s+'. If this option
is set to True, nothing should be passed in for the delimiter
parameter.
low_memorybool, default TrueInternally process the file in chunks, resulting in lower memory use
while parsing, but possibly mixed type inference. To ensure no mixed
types either set False, or specify the type with the dtype parameter.
Note that the entire file is read into a single DataFrame regardless,
use the chunksize or iterator parameter to return the data in chunks.
(Only valid with C parser).
memory_mapbool, default FalseIf a filepath is provided for filepath_or_buffer, map the file object
directly onto memory and access the data directly from there. Using this
option can improve performance because there is no longer any I/O overhead.
float_precisionstr, optionalSpecifies which converter the C engine should use for floating-point
values. The options are None or ‘high’ for the ordinary converter,
‘legacy’ for the original lower precision pandas converter, and
‘round_trip’ for the round-trip converter.
Changed in version 1.2.
storage_optionsdict, optionalExtra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
New in version 1.2.
Returns
DataFrame or TextParserA comma-separated values (csv) file is returned as two-dimensional
data structure with labeled axes.
See also
DataFrame.to_csvWrite DataFrame to a comma-separated values (csv) file.
read_csvRead a comma-separated values (csv) file into DataFrame.
read_fwfRead a table of fixed-width formatted lines into DataFrame.
Examples
>>> pd.read_csv('data.csv')
| 1,025
| 1,132
|
Problem with 'skiprows' when reading csv with pandas
I have a big dataframe (~5 millions rows) that has some wrong data in it.
I have identified the indexes of the rows with wrong data and now I am trying to remove the 'wrong' rows from the dataframe.
Due to the size of the dataframe, I am using the chunksize feature while reading the csv.
To skip the 'wrong' rows, I am using the skiprows and error_bad_lines features.
I also use the low_memory feature to prevent warnings (and for the purpose of the example I read only the first 20 000 rows).
Then I save the new dataframe in a new csv.
The problem is that only the first 9 'wrong' rows are skipped; 'wrong' rows after that are still read (and saved to the output csv).
Here is my code:
for df in pd.read_csv('database.csv', chunksize=1000, nrows=20000,
low_memory=False, error_bad_lines=False, skiprows=wrong_id_list):
df.to_csv('database_fixed.csv', mode='a', header=False, index=False)
where wrong_id_list is the list of indexes of the rows I want to remove:
[2689, 3251, 3254, 3589, 3885, 8301, 10062, 10570, 10883, 13118, 16153, 16237, 17601, 18099, 18676]
when checking database_fixed.csv I can see that the following rows have wrong data:
[13108, 16142, 16225, 17588, 18085, 18661] So I imagine rows are still being skipped but not the right ones.
any ideas?
|
61,394,624
|
Python Convert List of Dict Tuples into Dataframe
|
<p>I have a series of Dict->List->Dict-> Tuples? that I wanted to convert into a dataframe. Ideally all at once, but even if it's just one at a time that works as well:</p>
<pre><code>[OrderedDict([('clientRequestId', None),
('band', 'FM'),
('bandName', 'FM'),
('bandType', None),
('callLetters', 'WBBO'),
('call_Letter_change', False),
('commercial_status', 'commercial'),
('countyOfLicense', None),
('dmaMarketCodeOfLicense', None),
('dmaMarketNameOfLicense', None),
('forcedInFlags', None),
('format', 'Pop Contemporary Hit Radio'),
('homeToDma', False),
('homeToMetro', False),
('homeToTsa', False),
('inTheBook', False),
('metrosOfLicense', []),
('name', 'WBBO-FM'),
('owner', None),
('qualifiedInDma', True),
('qualifiedInMetro', True),
('qualifiedInTsa', False),
('specialActivityIndicated', False),
('stateOfLicense', None),
('stateOfLicenseName', None),
('stationCount', 1),
('stationGroup', False),
('stationId', 17601)]),
OrderedDict([('clientRequestId', None),
('band', 'FM'),
('bandName', 'FM'),
('bandType', None),
('callLetters', 'WRNB'),
('call_Letter_change', False),
('commercial_status', 'commercial'),
('countyOfLicense', None),
('dmaMarketCodeOfLicense', None),
('dmaMarketNameOfLicense', None),
('forcedInFlags', None), ...
</code></pre>
<p>I've been trying to go one at a time with this:</p>
<pre><code>test = pd.DataFrame.from_dict(stationDict.get('stationsInList')[0].values())
test
</code></pre>
<p>but the result turns all of the values in the tuples into one column (28 rows) instead of what I wanted: 1 row and 28 columns, with the keys of the "tuples" as the column names.</p>
| 61,394,960
| 2020-04-23T18:38:51.777000
| 1
| null | 1
| 55
|
python|pandas
|
<p>You can create a DataFrame by passing the list of dicts directly to the constructor.</p>
<pre><code>data = [OrderedDict([('clientRequestId', None), ('band', 'FM'), ('bandName', 'FM'), ('bandType', None), ('callLetters', 'WBBO'), ('call_Letter_change', False), ('commercial_status', 'commercial'), ('countyOfLicense', None), ('dmaMarketCodeOfLicense', None), ('dmaMarketNameOfLicense', None),('forcedInFlags', None),('format', 'Pop Contemporary Hit Radio'),('homeToDma', False),('homeToMetro', False),('homeToTsa', False),('inTheBook', False),('metrosOfLicense', []),('name', 'WBBO-FM'),('owner', None),('qualifiedInDma', True),('qualifiedInMetro', True),('qualifiedInTsa', False),('specialActivityIndicated', False),('stateOfLicense', None),('stateOfLicenseName', None),('stationCount', 1),('stationGroup', False),('stationId', 17601)])]
df = pd.DataFrame(data)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code> clientRequestId band bandName ... stationCount stationGroup stationId
0 None FM FM ... 1 False 17601
[1 rows x 28 columns]
</code></pre>
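<p>For the structure in the question, the same idea should work on the whole list at once (assuming <code>stationDict.get('stationsInList')</code> returns the list of <code>OrderedDict</code>s shown above):</p>
<pre><code>df = pd.DataFrame(stationDict.get('stationsInList'))
</code></pre>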
| 2020-04-23T18:57:08.530000
| 0
|
https://pandas.pydata.org/docs/user_guide/dsintro.html
|
Intro to data structures#
Intro to data structures#
We’ll start with a quick, non-comprehensive overview of the fundamental data
structures in pandas to get you started. The fundamental behavior about data
You can create a DataFrame by passing the list of dicts directly to the constructor.
data = [OrderedDict([('clientRequestId', None), ('band', 'FM'), ('bandName', 'FM'), ('bandType', None), ('callLetters', 'WBBO'), ('call_Letter_change', False), ('commercial_status', 'commercial'), ('countyOfLicense', None), ('dmaMarketCodeOfLicense', None), ('dmaMarketNameOfLicense', None),('forcedInFlags', None),('format', 'Pop Contemporary Hit Radio'),('homeToDma', False),('homeToMetro', False),('homeToTsa', False),('inTheBook', False),('metrosOfLicense', []),('name', 'WBBO-FM'),('owner', None),('qualifiedInDma', True),('qualifiedInMetro', True),('qualifiedInTsa', False),('specialActivityIndicated', False),('stateOfLicense', None),('stateOfLicenseName', None),('stationCount', 1),('stationGroup', False),('stationId', 17601)])]
df = pd.DataFrame(data)
Output:
clientRequestId band bandName ... stationCount stationGroup stationId
0 None FM FM ... 1 False 17601
[1 rows x 28 columns]
types, indexing, axis labeling, and alignment apply across all of the
objects. To get started, import NumPy and load pandas into your namespace:
In [1]: import numpy as np
In [2]: import pandas as pd
Fundamentally, data alignment is intrinsic. The link
between labels and data will not be broken unless done so explicitly by you.
We’ll give a brief intro to the data structures, then consider all of the broad
categories of functionality and methods in separate sections.
Series#
Series is a one-dimensional labeled array capable of holding any data
type (integers, strings, floating point numbers, Python objects, etc.). The axis
labels are collectively referred to as the index. The basic method to create a Series is to call:
>>> s = pd.Series(data, index=index)
Here, data can be many different things:
a Python dict
an ndarray
a scalar value (like 5)
The passed index is a list of axis labels. Thus, this separates into a few
cases depending on what data is:
From ndarray
If data is an ndarray, index must be the same length as data. If no
index is passed, one will be created having values [0, ..., len(data) - 1].
In [3]: s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])
In [4]: s
Out[4]:
a 0.469112
b -0.282863
c -1.509059
d -1.135632
e 1.212112
dtype: float64
In [5]: s.index
Out[5]: Index(['a', 'b', 'c', 'd', 'e'], dtype='object')
In [6]: pd.Series(np.random.randn(5))
Out[6]:
0 -0.173215
1 0.119209
2 -1.044236
3 -0.861849
4 -2.104569
dtype: float64
Note
pandas supports non-unique index values. If an operation
that does not support duplicate index values is attempted, an exception
will be raised at that time.
From dict
Series can be instantiated from dicts:
In [7]: d = {"b": 1, "a": 0, "c": 2}
In [8]: pd.Series(d)
Out[8]:
b 1
a 0
c 2
dtype: int64
If an index is passed, the values in data corresponding to the labels in the
index will be pulled out.
In [9]: d = {"a": 0.0, "b": 1.0, "c": 2.0}
In [10]: pd.Series(d)
Out[10]:
a 0.0
b 1.0
c 2.0
dtype: float64
In [11]: pd.Series(d, index=["b", "c", "d", "a"])
Out[11]:
b 1.0
c 2.0
d NaN
a 0.0
dtype: float64
Note
NaN (not a number) is the standard missing data marker used in pandas.
From scalar value
If data is a scalar value, an index must be
provided. The value will be repeated to match the length of index.
In [12]: pd.Series(5.0, index=["a", "b", "c", "d", "e"])
Out[12]:
a 5.0
b 5.0
c 5.0
d 5.0
e 5.0
dtype: float64
Series is ndarray-like#
Series acts very similarly to a ndarray and is a valid argument to most NumPy functions.
However, operations such as slicing will also slice the index.
In [13]: s[0]
Out[13]: 0.4691122999071863
In [14]: s[:3]
Out[14]:
a 0.469112
b -0.282863
c -1.509059
dtype: float64
In [15]: s[s > s.median()]
Out[15]:
a 0.469112
e 1.212112
dtype: float64
In [16]: s[[4, 3, 1]]
Out[16]:
e 1.212112
d -1.135632
b -0.282863
dtype: float64
In [17]: np.exp(s)
Out[17]:
a 1.598575
b 0.753623
c 0.221118
d 0.321219
e 3.360575
dtype: float64
Note
We will address array-based indexing like s[[4, 3, 1]]
in section on indexing.
Like a NumPy array, a pandas Series has a single dtype.
In [18]: s.dtype
Out[18]: dtype('float64')
This is often a NumPy dtype. However, pandas and 3rd-party libraries
extend NumPy’s type system in a few places, in which case the dtype would
be an ExtensionDtype. Some examples within
pandas are Categorical data and Nullable integer data type. See dtypes
for more.
If you need the actual array backing a Series, use Series.array.
In [19]: s.array
Out[19]:
<PandasArray>
[ 0.4691122999071863, -0.2828633443286633, -1.5090585031735124,
-1.1356323710171934, 1.2121120250208506]
Length: 5, dtype: float64
Accessing the array can be useful when you need to do some operation without the
index (to disable automatic alignment, for example).
Series.array will always be an ExtensionArray.
Briefly, an ExtensionArray is a thin wrapper around one or more concrete arrays like a
numpy.ndarray. pandas knows how to take an ExtensionArray and
store it in a Series or a column of a DataFrame.
See dtypes for more.
While Series is ndarray-like, if you need an actual ndarray, then use
Series.to_numpy().
In [20]: s.to_numpy()
Out[20]: array([ 0.4691, -0.2829, -1.5091, -1.1356, 1.2121])
Even if the Series is backed by a ExtensionArray,
Series.to_numpy() will return a NumPy ndarray.
Series is dict-like#
A Series is also like a fixed-size dict in that you can get and set values by index
label:
In [21]: s["a"]
Out[21]: 0.4691122999071863
In [22]: s["e"] = 12.0
In [23]: s
Out[23]:
a 0.469112
b -0.282863
c -1.509059
d -1.135632
e 12.000000
dtype: float64
In [24]: "e" in s
Out[24]: True
In [25]: "f" in s
Out[25]: False
If a label is not contained in the index, an exception is raised:
In [26]: s["f"]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File ~/work/pandas/pandas/pandas/core/indexes/base.py:3802, in Index.get_loc(self, key, method, tolerance)
3801 try:
-> 3802 return self._engine.get_loc(casted_key)
3803 except KeyError as err:
File ~/work/pandas/pandas/pandas/_libs/index.pyx:138, in pandas._libs.index.IndexEngine.get_loc()
File ~/work/pandas/pandas/pandas/_libs/index.pyx:165, in pandas._libs.index.IndexEngine.get_loc()
File ~/work/pandas/pandas/pandas/_libs/hashtable_class_helper.pxi:5745, in pandas._libs.hashtable.PyObjectHashTable.get_item()
File ~/work/pandas/pandas/pandas/_libs/hashtable_class_helper.pxi:5753, in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'f'
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
Cell In[26], line 1
----> 1 s["f"]
File ~/work/pandas/pandas/pandas/core/series.py:981, in Series.__getitem__(self, key)
978 return self._values[key]
980 elif key_is_scalar:
--> 981 return self._get_value(key)
983 if is_hashable(key):
984 # Otherwise index.get_value will raise InvalidIndexError
985 try:
986 # For labels that don't resolve as scalars like tuples and frozensets
File ~/work/pandas/pandas/pandas/core/series.py:1089, in Series._get_value(self, label, takeable)
1086 return self._values[label]
1088 # Similar to Index.get_value, but we do not fall back to positional
-> 1089 loc = self.index.get_loc(label)
1090 return self.index._get_values_for_loc(self, loc, label)
File ~/work/pandas/pandas/pandas/core/indexes/base.py:3804, in Index.get_loc(self, key, method, tolerance)
3802 return self._engine.get_loc(casted_key)
3803 except KeyError as err:
-> 3804 raise KeyError(key) from err
3805 except TypeError:
3806 # If we have a listlike key, _check_indexing_error will raise
3807 # InvalidIndexError. Otherwise we fall through and re-raise
3808 # the TypeError.
3809 self._check_indexing_error(key)
KeyError: 'f'
Using the Series.get() method, a missing label will return None or specified default:
In [27]: s.get("f")
In [28]: s.get("f", np.nan)
Out[28]: nan
These labels can also be accessed by attribute.
Vectorized operations and label alignment with Series#
When working with raw NumPy arrays, looping through value-by-value is usually
not necessary. The same is true when working with Series in pandas.
Series can also be passed into most NumPy methods expecting an ndarray.
In [29]: s + s
Out[29]:
a 0.938225
b -0.565727
c -3.018117
d -2.271265
e 24.000000
dtype: float64
In [30]: s * 2
Out[30]:
a 0.938225
b -0.565727
c -3.018117
d -2.271265
e 24.000000
dtype: float64
In [31]: np.exp(s)
Out[31]:
a 1.598575
b 0.753623
c 0.221118
d 0.321219
e 162754.791419
dtype: float64
A key difference between Series and ndarray is that operations between Series
automatically align the data based on label. Thus, you can write computations
without giving consideration to whether the Series involved have the same
labels.
In [32]: s[1:] + s[:-1]
Out[32]:
a NaN
b -0.565727
c -3.018117
d -2.271265
e NaN
dtype: float64
The result of an operation between unaligned Series will have the union of
the indexes involved. If a label is not found in one Series or the other, the
result will be marked as missing NaN. Being able to write code without doing
any explicit data alignment grants immense freedom and flexibility in
interactive data analysis and research. The integrated data alignment features
of the pandas data structures set pandas apart from the majority of related
tools for working with labeled data.
Note
In general, we chose to make the default result of operations between
differently indexed objects yield the union of the indexes in order to
avoid loss of information. Having an index label, though the data is
missing, is typically important information as part of a computation. You
of course have the option of dropping labels with missing data via the
dropna function.
Name attribute#
Series also has a name attribute:
In [33]: s = pd.Series(np.random.randn(5), name="something")
In [34]: s
Out[34]:
0 -0.494929
1 1.071804
2 0.721555
3 -0.706771
4 -1.039575
Name: something, dtype: float64
In [35]: s.name
Out[35]: 'something'
The Series name can be assigned automatically in many cases, in particular,
when selecting a single column from a DataFrame, the name will be assigned
the column label.
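For example (an illustrative sketch, not part of the original numbered session):
>>> df = pd.DataFrame({"col1": [1, 2]})
>>> df["col1"].name
'col1'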
You can rename a Series with the pandas.Series.rename() method.
In [36]: s2 = s.rename("different")
In [37]: s2.name
Out[37]: 'different'
Note that s and s2 refer to different objects.
DataFrame#
DataFrame is a 2-dimensional labeled data structure with columns of
potentially different types. You can think of it like a spreadsheet or SQL
table, or a dict of Series objects. It is generally the most commonly used
pandas object. Like Series, DataFrame accepts many different kinds of input:
Dict of 1D ndarrays, lists, dicts, or Series
2-D numpy.ndarray
Structured or record ndarray
A Series
Another DataFrame
Along with the data, you can optionally pass index (row labels) and
columns (column labels) arguments. If you pass an index and / or columns,
you are guaranteeing the index and / or columns of the resulting
DataFrame. Thus, a dict of Series plus a specific index will discard all data
not matching up to the passed index.
If axis labels are not passed, they will be constructed from the input data
based on common sense rules.
From dict of Series or dicts#
The resulting index will be the union of the indexes of the various
Series. If there are any nested dicts, these will first be converted to
Series. If no columns are passed, the columns will be the ordered list of dict
keys.
In [38]: d = {
....: "one": pd.Series([1.0, 2.0, 3.0], index=["a", "b", "c"]),
....: "two": pd.Series([1.0, 2.0, 3.0, 4.0], index=["a", "b", "c", "d"]),
....: }
....:
In [39]: df = pd.DataFrame(d)
In [40]: df
Out[40]:
one two
a 1.0 1.0
b 2.0 2.0
c 3.0 3.0
d NaN 4.0
In [41]: pd.DataFrame(d, index=["d", "b", "a"])
Out[41]:
one two
d NaN 4.0
b 2.0 2.0
a 1.0 1.0
In [42]: pd.DataFrame(d, index=["d", "b", "a"], columns=["two", "three"])
Out[42]:
two three
d 4.0 NaN
b 2.0 NaN
a 1.0 NaN
The row and column labels can be accessed respectively by accessing the
index and columns attributes:
Note
When a particular set of columns is passed along with a dict of data, the
passed columns override the keys in the dict.
In [43]: df.index
Out[43]: Index(['a', 'b', 'c', 'd'], dtype='object')
In [44]: df.columns
Out[44]: Index(['one', 'two'], dtype='object')
From dict of ndarrays / lists#
The ndarrays must all be the same length. If an index is passed, it must
also be the same length as the arrays. If no index is passed, the
result will be range(n), where n is the array length.
In [45]: d = {"one": [1.0, 2.0, 3.0, 4.0], "two": [4.0, 3.0, 2.0, 1.0]}
In [46]: pd.DataFrame(d)
Out[46]:
one two
0 1.0 4.0
1 2.0 3.0
2 3.0 2.0
3 4.0 1.0
In [47]: pd.DataFrame(d, index=["a", "b", "c", "d"])
Out[47]:
one two
a 1.0 4.0
b 2.0 3.0
c 3.0 2.0
d 4.0 1.0
From structured or record array#
This case is handled identically to a dict of arrays.
In [48]: data = np.zeros((2,), dtype=[("A", "i4"), ("B", "f4"), ("C", "a10")])
In [49]: data[:] = [(1, 2.0, "Hello"), (2, 3.0, "World")]
In [50]: pd.DataFrame(data)
Out[50]:
A B C
0 1 2.0 b'Hello'
1 2 3.0 b'World'
In [51]: pd.DataFrame(data, index=["first", "second"])
Out[51]:
A B C
first 1 2.0 b'Hello'
second 2 3.0 b'World'
In [52]: pd.DataFrame(data, columns=["C", "A", "B"])
Out[52]:
C A B
0 b'Hello' 1 2.0
1 b'World' 2 3.0
Note
DataFrame is not intended to work exactly like a 2-dimensional NumPy
ndarray.
From a list of dicts#
In [53]: data2 = [{"a": 1, "b": 2}, {"a": 5, "b": 10, "c": 20}]
In [54]: pd.DataFrame(data2)
Out[54]:
a b c
0 1 2 NaN
1 5 10 20.0
In [55]: pd.DataFrame(data2, index=["first", "second"])
Out[55]:
a b c
first 1 2 NaN
second 5 10 20.0
In [56]: pd.DataFrame(data2, columns=["a", "b"])
Out[56]:
a b
0 1 2
1 5 10
From a dict of tuples#
You can automatically create a MultiIndexed frame by passing a tuples
dictionary.
In [57]: pd.DataFrame(
....: {
....: ("a", "b"): {("A", "B"): 1, ("A", "C"): 2},
....: ("a", "a"): {("A", "C"): 3, ("A", "B"): 4},
....: ("a", "c"): {("A", "B"): 5, ("A", "C"): 6},
....: ("b", "a"): {("A", "C"): 7, ("A", "B"): 8},
....: ("b", "b"): {("A", "D"): 9, ("A", "B"): 10},
....: }
....: )
....:
Out[57]:
a b
b a c a b
A B 1.0 4.0 5.0 8.0 10.0
C 2.0 3.0 6.0 7.0 NaN
D NaN NaN NaN NaN 9.0
From a Series#
The result will be a DataFrame with the same index as the input Series, and
with one column whose name is the original name of the Series (only if no other
column name provided).
In [58]: ser = pd.Series(range(3), index=list("abc"), name="ser")
In [59]: pd.DataFrame(ser)
Out[59]:
ser
a 0
b 1
c 2
From a list of namedtuples#
The field names of the first namedtuple in the list determine the columns
of the DataFrame. The remaining namedtuples (or tuples) are simply unpacked
and their values are fed into the rows of the DataFrame. If any of those
tuples is shorter than the first namedtuple then the later columns in the
corresponding row are marked as missing values. If any are longer than the
first namedtuple, a ValueError is raised.
In [60]: from collections import namedtuple
In [61]: Point = namedtuple("Point", "x y")
In [62]: pd.DataFrame([Point(0, 0), Point(0, 3), (2, 3)])
Out[62]:
x y
0 0 0
1 0 3
2 2 3
In [63]: Point3D = namedtuple("Point3D", "x y z")
In [64]: pd.DataFrame([Point3D(0, 0, 0), Point3D(0, 3, 5), Point(2, 3)])
Out[64]:
x y z
0 0 0 0.0
1 0 3 5.0
2 2 3 NaN
From a list of dataclasses#
New in version 1.1.0.
Data Classes as introduced in PEP557,
can be passed into the DataFrame constructor.
Passing a list of dataclasses is equivalent to passing a list of dictionaries.
Please be aware, that all values in the list should be dataclasses, mixing
types in the list would result in a TypeError.
In [65]: from dataclasses import make_dataclass
In [66]: Point = make_dataclass("Point", [("x", int), ("y", int)])
In [67]: pd.DataFrame([Point(0, 0), Point(0, 3), Point(2, 3)])
Out[67]:
x y
0 0 0
1 0 3
2 2 3
Missing data
To construct a DataFrame with missing data, we use np.nan to
represent missing values. Alternatively, you may pass a numpy.MaskedArray
as the data argument to the DataFrame constructor, and its masked entries will
be considered missing. See Missing data for more.
Alternate constructors#
DataFrame.from_dict
DataFrame.from_dict() takes a dict of dicts or a dict of array-like sequences
and returns a DataFrame. It operates like the DataFrame constructor except
for the orient parameter which is 'columns' by default, but which can be
set to 'index' in order to use the dict keys as row labels.
In [68]: pd.DataFrame.from_dict(dict([("A", [1, 2, 3]), ("B", [4, 5, 6])]))
Out[68]:
A B
0 1 4
1 2 5
2 3 6
If you pass orient='index', the keys will be the row labels. In this
case, you can also pass the desired column names:
In [69]: pd.DataFrame.from_dict(
....: dict([("A", [1, 2, 3]), ("B", [4, 5, 6])]),
....: orient="index",
....: columns=["one", "two", "three"],
....: )
....:
Out[69]:
one two three
A 1 2 3
B 4 5 6
DataFrame.from_records
DataFrame.from_records() takes a list of tuples or an ndarray with structured
dtype. It works analogously to the normal DataFrame constructor, except that
the resulting DataFrame index may be a specific field of the structured
dtype.
In [70]: data
Out[70]:
array([(1, 2., b'Hello'), (2, 3., b'World')],
dtype=[('A', '<i4'), ('B', '<f4'), ('C', 'S10')])
In [71]: pd.DataFrame.from_records(data, index="C")
Out[71]:
A B
C
b'Hello' 1 2.0
b'World' 2 3.0
Column selection, addition, deletion#
You can treat a DataFrame semantically like a dict of like-indexed Series
objects. Getting, setting, and deleting columns works with the same syntax as
the analogous dict operations:
In [72]: df["one"]
Out[72]:
a 1.0
b 2.0
c 3.0
d NaN
Name: one, dtype: float64
In [73]: df["three"] = df["one"] * df["two"]
In [74]: df["flag"] = df["one"] > 2
In [75]: df
Out[75]:
one two three flag
a 1.0 1.0 1.0 False
b 2.0 2.0 4.0 False
c 3.0 3.0 9.0 True
d NaN 4.0 NaN False
Columns can be deleted or popped like with a dict:
In [76]: del df["two"]
In [77]: three = df.pop("three")
In [78]: df
Out[78]:
one flag
a 1.0 False
b 2.0 False
c 3.0 True
d NaN False
When inserting a scalar value, it will naturally be propagated to fill the
column:
In [79]: df["foo"] = "bar"
In [80]: df
Out[80]:
one flag foo
a 1.0 False bar
b 2.0 False bar
c 3.0 True bar
d NaN False bar
When inserting a Series that does not have the same index as the DataFrame, it
will be conformed to the DataFrame’s index:
In [81]: df["one_trunc"] = df["one"][:2]
In [82]: df
Out[82]:
one flag foo one_trunc
a 1.0 False bar 1.0
b 2.0 False bar 2.0
c 3.0 True bar NaN
d NaN False bar NaN
You can insert raw ndarrays but their length must match the length of the
DataFrame’s index.
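An illustrative sketch (assuming df still has the four-row index a, b, c, d used above):
>>> df["rand"] = np.random.randn(len(df))  # the array length must equal len(df.index)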
By default, columns get inserted at the end. DataFrame.insert()
inserts at a particular location in the columns:
In [83]: df.insert(1, "bar", df["one"])
In [84]: df
Out[84]:
one bar flag foo one_trunc
a 1.0 1.0 False bar 1.0
b 2.0 2.0 False bar 2.0
c 3.0 3.0 True bar NaN
d NaN NaN False bar NaN
Assigning new columns in method chains#
Inspired by dplyr’s
mutate verb, DataFrame has an assign()
method that allows you to easily create new columns that are potentially
derived from existing columns.
In [85]: iris = pd.read_csv("data/iris.data")
In [86]: iris.head()
Out[86]:
SepalLength SepalWidth PetalLength PetalWidth Name
0 5.1 3.5 1.4 0.2 Iris-setosa
1 4.9 3.0 1.4 0.2 Iris-setosa
2 4.7 3.2 1.3 0.2 Iris-setosa
3 4.6 3.1 1.5 0.2 Iris-setosa
4 5.0 3.6 1.4 0.2 Iris-setosa
In [87]: iris.assign(sepal_ratio=iris["SepalWidth"] / iris["SepalLength"]).head()
Out[87]:
SepalLength SepalWidth PetalLength PetalWidth Name sepal_ratio
0 5.1 3.5 1.4 0.2 Iris-setosa 0.686275
1 4.9 3.0 1.4 0.2 Iris-setosa 0.612245
2 4.7 3.2 1.3 0.2 Iris-setosa 0.680851
3 4.6 3.1 1.5 0.2 Iris-setosa 0.673913
4 5.0 3.6 1.4 0.2 Iris-setosa 0.720000
In the example above, we inserted a precomputed value. We can also pass in
a function of one argument to be evaluated on the DataFrame being assigned to.
In [88]: iris.assign(sepal_ratio=lambda x: (x["SepalWidth"] / x["SepalLength"])).head()
Out[88]:
SepalLength SepalWidth PetalLength PetalWidth Name sepal_ratio
0 5.1 3.5 1.4 0.2 Iris-setosa 0.686275
1 4.9 3.0 1.4 0.2 Iris-setosa 0.612245
2 4.7 3.2 1.3 0.2 Iris-setosa 0.680851
3 4.6 3.1 1.5 0.2 Iris-setosa 0.673913
4 5.0 3.6 1.4 0.2 Iris-setosa 0.720000
assign() always returns a copy of the data, leaving the original
DataFrame untouched.
Passing a callable, as opposed to an actual value to be inserted, is
useful when you don’t have a reference to the DataFrame at hand. This is
common when using assign() in a chain of operations. For example,
we can limit the DataFrame to just those observations with a Sepal Length
greater than 5, calculate the ratio, and plot:
In [89]: (
....: iris.query("SepalLength > 5")
....: .assign(
....: SepalRatio=lambda x: x.SepalWidth / x.SepalLength,
....: PetalRatio=lambda x: x.PetalWidth / x.PetalLength,
....: )
....: .plot(kind="scatter", x="SepalRatio", y="PetalRatio")
....: )
....:
Out[89]: <AxesSubplot: xlabel='SepalRatio', ylabel='PetalRatio'>
Since a function is passed in, the function is computed on the DataFrame
being assigned to. Importantly, this is the DataFrame that’s been filtered
to those rows with sepal length greater than 5. The filtering happens first,
and then the ratio calculations. This is an example where we didn’t
have a reference to the filtered DataFrame available.
The function signature for assign() is simply **kwargs. The keys
are the column names for the new fields, and the values are either a value
to be inserted (for example, a Series or NumPy array), or a function
of one argument to be called on the DataFrame. A copy of the original
DataFrame is returned, with the new values inserted.
The order of **kwargs is preserved. This allows
for dependent assignment, where an expression later in **kwargs can refer
to a column created earlier in the same assign().
In [90]: dfa = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
In [91]: dfa.assign(C=lambda x: x["A"] + x["B"], D=lambda x: x["A"] + x["C"])
Out[91]:
A B C D
0 1 4 5 6
1 2 5 7 9
2 3 6 9 12
In the second expression, x['C'] will refer to the newly created column,
that’s equal to dfa['A'] + dfa['B'].
Indexing / selection#
The basics of indexing are as follows:
Operation
Syntax
Result
Select column
df[col]
Series
Select row by label
df.loc[label]
Series
Select row by integer location
df.iloc[loc]
Series
Slice rows
df[5:10]
DataFrame
Select rows by boolean vector
df[bool_vec]
DataFrame
Row selection, for example, returns a Series whose index is the columns of the
DataFrame:
In [92]: df.loc["b"]
Out[92]:
one 2.0
bar 2.0
flag False
foo bar
one_trunc 2.0
Name: b, dtype: object
In [93]: df.iloc[2]
Out[93]:
one 3.0
bar 3.0
flag True
foo bar
one_trunc NaN
Name: c, dtype: object
For a more exhaustive treatment of sophisticated label-based indexing and
slicing, see the section on indexing. We will address the
fundamentals of reindexing / conforming to new sets of labels in the
section on reindexing.
Data alignment and arithmetic#
Data alignment between DataFrame objects automatically align on both the
columns and the index (row labels). Again, the resulting object will have the
union of the column and row labels.
In [94]: df = pd.DataFrame(np.random.randn(10, 4), columns=["A", "B", "C", "D"])
In [95]: df2 = pd.DataFrame(np.random.randn(7, 3), columns=["A", "B", "C"])
In [96]: df + df2
Out[96]:
A B C D
0 0.045691 -0.014138 1.380871 NaN
1 -0.955398 -1.501007 0.037181 NaN
2 -0.662690 1.534833 -0.859691 NaN
3 -2.452949 1.237274 -0.133712 NaN
4 1.414490 1.951676 -2.320422 NaN
5 -0.494922 -1.649727 -1.084601 NaN
6 -1.047551 -0.748572 -0.805479 NaN
7 NaN NaN NaN NaN
8 NaN NaN NaN NaN
9 NaN NaN NaN NaN
When doing an operation between DataFrame and Series, the default behavior is
to align the Series index on the DataFrame columns, thus broadcasting
row-wise. For example:
In [97]: df - df.iloc[0]
Out[97]:
A B C D
0 0.000000 0.000000 0.000000 0.000000
1 -1.359261 -0.248717 -0.453372 -1.754659
2 0.253128 0.829678 0.010026 -1.991234
3 -1.311128 0.054325 -1.724913 -1.620544
4 0.573025 1.500742 -0.676070 1.367331
5 -1.741248 0.781993 -1.241620 -2.053136
6 -1.240774 -0.869551 -0.153282 0.000430
7 -0.743894 0.411013 -0.929563 -0.282386
8 -1.194921 1.320690 0.238224 -1.482644
9 2.293786 1.856228 0.773289 -1.446531
For explicit control over the matching and broadcasting behavior, see the
section on flexible binary operations.
Arithmetic operations with scalars operate element-wise:
In [98]: df * 5 + 2
Out[98]:
A B C D
0 3.359299 -0.124862 4.835102 3.381160
1 -3.437003 -1.368449 2.568242 -5.392133
2 4.624938 4.023526 4.885230 -6.575010
3 -3.196342 0.146766 -3.789461 -4.721559
4 6.224426 7.378849 1.454750 10.217815
5 -5.346940 3.785103 -1.373001 -6.884519
6 -2.844569 -4.472618 4.068691 3.383309
7 -0.360173 1.930201 0.187285 1.969232
8 -2.615303 6.478587 6.026220 -4.032059
9 14.828230 9.156280 8.701544 -3.851494
In [99]: 1 / df
Out[99]:
A B C D
0 3.678365 -2.353094 1.763605 3.620145
1 -0.919624 -1.484363 8.799067 -0.676395
2 1.904807 2.470934 1.732964 -0.583090
3 -0.962215 -2.697986 -0.863638 -0.743875
4 1.183593 0.929567 -9.170108 0.608434
5 -0.680555 2.800959 -1.482360 -0.562777
6 -1.032084 -0.772485 2.416988 3.614523
7 -2.118489 -71.634509 -2.758294 -162.507295
8 -1.083352 1.116424 1.241860 -0.828904
9 0.389765 0.698687 0.746097 -0.854483
In [100]: df ** 4
Out[100]:
A B C D
0 0.005462 3.261689e-02 0.103370 5.822320e-03
1 1.398165 2.059869e-01 0.000167 4.777482e+00
2 0.075962 2.682596e-02 0.110877 8.650845e+00
3 1.166571 1.887302e-02 1.797515 3.265879e+00
4 0.509555 1.339298e+00 0.000141 7.297019e+00
5 4.661717 1.624699e-02 0.207103 9.969092e+00
6 0.881334 2.808277e+00 0.029302 5.858632e-03
7 0.049647 3.797614e-08 0.017276 1.433866e-09
8 0.725974 6.437005e-01 0.420446 2.118275e+00
9 43.329821 4.196326e+00 3.227153 1.875802e+00
Boolean operators operate element-wise as well:
In [101]: df1 = pd.DataFrame({"a": [1, 0, 1], "b": [0, 1, 1]}, dtype=bool)
In [102]: df2 = pd.DataFrame({"a": [0, 1, 1], "b": [1, 1, 0]}, dtype=bool)
In [103]: df1 & df2
Out[103]:
a b
0 False False
1 False True
2 True False
In [104]: df1 | df2
Out[104]:
a b
0 True True
1 True True
2 True True
In [105]: df1 ^ df2
Out[105]:
a b
0 True True
1 True False
2 False True
In [106]: -df1
Out[106]:
a b
0 False True
1 True False
2 False False
Transposing#
To transpose, access the T attribute or DataFrame.transpose(),
similar to an ndarray:
# only show the first 5 rows
In [107]: df[:5].T
Out[107]:
0 1 2 3 4
A 0.271860 -1.087401 0.524988 -1.039268 0.844885
B -0.424972 -0.673690 0.404705 -0.370647 1.075770
C 0.567020 0.113648 0.577046 -1.157892 -0.109050
D 0.276232 -1.478427 -1.715002 -1.344312 1.643563
DataFrame interoperability with NumPy functions#
Most NumPy functions can be called directly on Series and DataFrame.
In [108]: np.exp(df)
Out[108]:
A B C D
0 1.312403 0.653788 1.763006 1.318154
1 0.337092 0.509824 1.120358 0.227996
2 1.690438 1.498861 1.780770 0.179963
3 0.353713 0.690288 0.314148 0.260719
4 2.327710 2.932249 0.896686 5.173571
5 0.230066 1.429065 0.509360 0.169161
6 0.379495 0.274028 1.512461 1.318720
7 0.623732 0.986137 0.695904 0.993865
8 0.397301 2.449092 2.237242 0.299269
9 13.009059 4.183951 3.820223 0.310274
In [109]: np.asarray(df)
Out[109]:
array([[ 0.2719, -0.425 , 0.567 , 0.2762],
[-1.0874, -0.6737, 0.1136, -1.4784],
[ 0.525 , 0.4047, 0.577 , -1.715 ],
[-1.0393, -0.3706, -1.1579, -1.3443],
[ 0.8449, 1.0758, -0.109 , 1.6436],
[-1.4694, 0.357 , -0.6746, -1.7769],
[-0.9689, -1.2945, 0.4137, 0.2767],
[-0.472 , -0.014 , -0.3625, -0.0062],
[-0.9231, 0.8957, 0.8052, -1.2064],
[ 2.5656, 1.4313, 1.3403, -1.1703]])
DataFrame is not intended to be a drop-in replacement for ndarray as its
indexing semantics and data model are quite different in places from an n-dimensional
array.
Series implements __array_ufunc__, which allows it to work with NumPy’s
universal functions.
The ufunc is applied to the underlying array in a Series.
In [110]: ser = pd.Series([1, 2, 3, 4])
In [111]: np.exp(ser)
Out[111]:
0 2.718282
1 7.389056
2 20.085537
3 54.598150
dtype: float64
Changed in version 0.25.0: When multiple Series are passed to a ufunc, they are aligned before
performing the operation.
Like other parts of the library, pandas will automatically align labeled inputs
as part of a ufunc with multiple inputs. For example, using numpy.remainder()
on two Series with differently ordered labels will align before the operation.
In [112]: ser1 = pd.Series([1, 2, 3], index=["a", "b", "c"])
In [113]: ser2 = pd.Series([1, 3, 5], index=["b", "a", "c"])
In [114]: ser1
Out[114]:
a 1
b 2
c 3
dtype: int64
In [115]: ser2
Out[115]:
b 1
a 3
c 5
dtype: int64
In [116]: np.remainder(ser1, ser2)
Out[116]:
a 1
b 0
c 3
dtype: int64
As usual, the union of the two indices is taken, and non-overlapping values are filled
with missing values.
In [117]: ser3 = pd.Series([2, 4, 6], index=["b", "c", "d"])
In [118]: ser3
Out[118]:
b 2
c 4
d 6
dtype: int64
In [119]: np.remainder(ser1, ser3)
Out[119]:
a NaN
b 0.0
c 3.0
d NaN
dtype: float64
When a binary ufunc is applied to a Series and Index, the Series
implementation takes precedence and a Series is returned.
In [120]: ser = pd.Series([1, 2, 3])
In [121]: idx = pd.Index([4, 5, 6])
In [122]: np.maximum(ser, idx)
Out[122]:
0 4
1 5
2 6
dtype: int64
NumPy ufuncs are safe to apply to Series backed by non-ndarray arrays,
for example arrays.SparseArray (see Sparse calculation). If possible,
the ufunc is applied without converting the underlying data to an ndarray.
Console display#
Very large DataFrames will be truncated to display them in the console.
You can also get a summary using info().
(The baseball dataset is from the plyr R package):
In [123]: baseball = pd.read_csv("data/baseball.csv")
In [124]: print(baseball)
id player year stint team lg ... so ibb hbp sh sf gidp
0 88641 womacto01 2006 2 CHN NL ... 4.0 0.0 0.0 3.0 0.0 0.0
1 88643 schilcu01 2006 1 BOS AL ... 1.0 0.0 0.0 0.0 0.0 0.0
.. ... ... ... ... ... .. ... ... ... ... ... ... ...
98 89533 aloumo01 2007 1 NYN NL ... 30.0 5.0 2.0 0.0 3.0 13.0
99 89534 alomasa02 2007 1 NYN NL ... 3.0 0.0 0.0 0.0 0.0 0.0
[100 rows x 23 columns]
In [125]: baseball.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 100 entries, 0 to 99
Data columns (total 23 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 100 non-null int64
1 player 100 non-null object
2 year 100 non-null int64
3 stint 100 non-null int64
4 team 100 non-null object
5 lg 100 non-null object
6 g 100 non-null int64
7 ab 100 non-null int64
8 r 100 non-null int64
9 h 100 non-null int64
10 X2b 100 non-null int64
11 X3b 100 non-null int64
12 hr 100 non-null int64
13 rbi 100 non-null float64
14 sb 100 non-null float64
15 cs 100 non-null float64
16 bb 100 non-null int64
17 so 100 non-null float64
18 ibb 100 non-null float64
19 hbp 100 non-null float64
20 sh 100 non-null float64
21 sf 100 non-null float64
22 gidp 100 non-null float64
dtypes: float64(9), int64(11), object(3)
memory usage: 18.1+ KB
However, using DataFrame.to_string() will return a string representation of the
DataFrame in tabular form, though it won’t always fit the console width:
In [126]: print(baseball.iloc[-20:, :12].to_string())
id player year stint team lg g ab r h X2b X3b
80 89474 finlest01 2007 1 COL NL 43 94 9 17 3 0
81 89480 embreal01 2007 1 OAK AL 4 0 0 0 0 0
82 89481 edmonji01 2007 1 SLN NL 117 365 39 92 15 2
83 89482 easleda01 2007 1 NYN NL 76 193 24 54 6 0
84 89489 delgaca01 2007 1 NYN NL 139 538 71 139 30 0
85 89493 cormirh01 2007 1 CIN NL 6 0 0 0 0 0
86 89494 coninje01 2007 2 NYN NL 21 41 2 8 2 0
87 89495 coninje01 2007 1 CIN NL 80 215 23 57 11 1
88 89497 clemero02 2007 1 NYA AL 2 2 0 1 0 0
89 89498 claytro01 2007 2 BOS AL 8 6 1 0 0 0
90 89499 claytro01 2007 1 TOR AL 69 189 23 48 14 0
91 89501 cirilje01 2007 2 ARI NL 28 40 6 8 4 0
92 89502 cirilje01 2007 1 MIN AL 50 153 18 40 9 2
93 89521 bondsba01 2007 1 SFN NL 126 340 75 94 14 0
94 89523 biggicr01 2007 1 HOU NL 141 517 68 130 31 3
95 89525 benitar01 2007 2 FLO NL 34 0 0 0 0 0
96 89526 benitar01 2007 1 SFN NL 19 0 0 0 0 0
97 89530 ausmubr01 2007 1 HOU NL 117 349 38 82 16 3
98 89533 aloumo01 2007 1 NYN NL 87 328 51 112 19 1
99 89534 alomasa02 2007 1 NYN NL 8 22 1 3 1 0
Wide DataFrames will be printed across multiple rows by
default:
In [127]: pd.DataFrame(np.random.randn(3, 12))
Out[127]:
0 1 2 ... 9 10 11
0 -1.226825 0.769804 -1.281247 ... -1.110336 -0.619976 0.149748
1 -0.732339 0.687738 0.176444 ... 1.462696 -1.743161 -0.826591
2 -0.345352 1.314232 0.690579 ... 0.896171 -0.487602 -0.082240
[3 rows x 12 columns]
You can change how much to print on a single row by setting the display.width
option:
In [128]: pd.set_option("display.width", 40) # default is 80
In [129]: pd.DataFrame(np.random.randn(3, 12))
Out[129]:
0 1 2 ... 9 10 11
0 -2.182937 0.380396 0.084844 ... -0.023688 2.410179 1.450520
1 0.206053 -0.251905 -2.213588 ... -0.025747 -0.988387 0.094055
2 1.262731 1.289997 0.082423 ... -0.281461 0.030711 0.109121
[3 rows x 12 columns]
You can adjust the max width of the individual columns by setting display.max_colwidth
In [130]: datafile = {
.....: "filename": ["filename_01", "filename_02"],
.....: "path": [
.....: "media/user_name/storage/folder_01/filename_01",
.....: "media/user_name/storage/folder_02/filename_02",
.....: ],
.....: }
.....:
In [131]: pd.set_option("display.max_colwidth", 30)
In [132]: pd.DataFrame(datafile)
Out[132]:
filename path
0 filename_01 media/user_name/storage/fo...
1 filename_02 media/user_name/storage/fo...
In [133]: pd.set_option("display.max_colwidth", 100)
In [134]: pd.DataFrame(datafile)
Out[134]:
filename path
0 filename_01 media/user_name/storage/folder_01/filename_01
1 filename_02 media/user_name/storage/folder_02/filename_02
You can also disable this feature via the expand_frame_repr option.
This will print the table in one block.
DataFrame column attribute access and IPython completion#
If a DataFrame column label is a valid Python variable name, the column can be
accessed like an attribute:
In [135]: df = pd.DataFrame({"foo1": np.random.randn(5), "foo2": np.random.randn(5)})
In [136]: df
Out[136]:
foo1 foo2
0 1.126203 0.781836
1 -0.977349 -1.071357
2 1.474071 0.441153
3 -0.064034 2.353925
4 -1.282782 0.583787
In [137]: df.foo1
Out[137]:
0 1.126203
1 -0.977349
2 1.474071
3 -0.064034
4 -1.282782
Name: foo1, dtype: float64
The columns are also connected to the IPython
completion mechanism so they can be tab-completed:
In [5]: df.foo<TAB> # noqa: E225, E999
df.foo1 df.foo2
| 208
| 1,209
|
Python Convert List of Dict Tuples into Dataframe
I have a series of Dict->List->Dict-> Tuples? that I wanted to convert into a dataframe. Ideally all at once, but even if it's just one at a time that works as well:
[OrderedDict([('clientRequestId', None),
('band', 'FM'),
('bandName', 'FM'),
('bandType', None),
('callLetters', 'WBBO'),
('call_Letter_change', False),
('commercial_status', 'commercial'),
('countyOfLicense', None),
('dmaMarketCodeOfLicense', None),
('dmaMarketNameOfLicense', None),
('forcedInFlags', None),
('format', 'Pop Contemporary Hit Radio'),
('homeToDma', False),
('homeToMetro', False),
('homeToTsa', False),
('inTheBook', False),
('metrosOfLicense', []),
('name', 'WBBO-FM'),
('owner', None),
('qualifiedInDma', True),
('qualifiedInMetro', True),
('qualifiedInTsa', False),
('specialActivityIndicated', False),
('stateOfLicense', None),
('stateOfLicenseName', None),
('stationCount', 1),
('stationGroup', False),
('stationId', 17601)]),
OrderedDict([('clientRequestId', None),
('band', 'FM'),
('bandName', 'FM'),
('bandType', None),
('callLetters', 'WRNB'),
('call_Letter_change', False),
('commercial_status', 'commercial'),
('countyOfLicense', None),
('dmaMarketCodeOfLicense', None),
('dmaMarketNameOfLicense', None),
('forcedInFlags', None), ...
I've been trying to go one at a time with this:
test = pd.DataFrame.from_dict(stationDict.get('stationsInList')[0].values())
test
but the result turns all of the values in the tuples into one column (28 rows) instead of what I wanted: 1 row and 28 columns, with the keys of the "tuples" as the column names.
|
69,240,441
|
How do I add row indices to a column using lambda functions in Pandas?
|
<p>I have a dataframe as follows:</p>
<p><strong>Original dataframe</strong>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Index</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>aT</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>bee</td>
<td>2</td>
</tr>
<tr>
<td>2</td>
<td>cT</td>
<td>3</td>
</tr>
<tr>
<td>3</td>
<td>Y</td>
<td>4</td>
</tr>
<tr>
<td>4</td>
<td>D</td>
<td>5</td>
</tr>
</tbody>
</table>
</div>
<p>I would like to combine each item in the "index" column (except items trailing with T), hyphen (-) and row number like this:</p>
<p><strong>Expected result</strong>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Index</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>aT</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>bee-1</td>
<td>2</td>
</tr>
<tr>
<td>2</td>
<td>cT</td>
<td>3</td>
</tr>
<tr>
<td>3</td>
<td>Y-3</td>
<td>4</td>
</tr>
<tr>
<td>4</td>
<td>D-4</td>
<td>5</td>
</tr>
</tbody>
</table>
</div>
<p>My code is the following:</p>
<pre><code>df = pandas.DataFrame({"Index": ["aT", "bee", "cT","Y","D"], "Value": [1, 2, 3,4,5]})
ind_name = df.iloc[df.index,0].apply(lambda x: x + '-' + str(df.index) if "T" not in x else x)
</code></pre>
<p>How to correct my code?</p>
| 69,240,541
| 2021-09-19T05:02:04.177000
| 3
| null | 1
| 567
|
python|pandas
|
<p>Solution with <code>.apply</code>:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"Index": ["aT", "bee", "cT", "Y", "D"], "Value": [1, 2, 3, 4, 5]})
df['Index'] = df.apply(lambda x: x['Index'] + ('' if 'T' in x['Index'] else f'-{x.name}'), axis=1)
print(df)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> Index Value
0 aT 1
1 bee-1 2
2 cT 3
3 Y-3 4
4 D-4 5
</code></pre>
| 2021-09-19T05:26:28.323000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.apply.html
|
pandas.DataFrame.apply#
Solution with .apply:
import pandas as pd
df = pd.DataFrame({"Index": ["aT", "bee", "cT", "Y", "D"], "Value": [1, 2, 3, 4, 5]})
df['Index'] = df.apply(lambda x: x['Index'] + ('' if 'T' in x['Index'] else f'-{x.name}'), axis=1)
print(df)
Prints:
Index Value
0 aT 1
1 bee-1 2
2 cT 3
3 Y-3 4
4 D-4 5
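For larger frames, a vectorized alternative to the row-wise apply (a sketch of the same logic, not the answer's own code) builds the "-<row number>" suffix from the positional index and keeps labels containing "T" unchanged:
import numpy as np
import pandas as pd

df = pd.DataFrame({"Index": ["aT", "bee", "cT", "Y", "D"], "Value": [1, 2, 3, 4, 5]})

# Suffix "-0", "-1", ... built from the row position
suffix = "-" + pd.Series(range(len(df)), index=df.index).astype(str)

# Keep labels containing "T" as-is, append the suffix everywhere else
df["Index"] = np.where(df["Index"].str.contains("T"), df["Index"], df["Index"] + suffix)
print(df)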
pandas.DataFrame.apply#
DataFrame.apply(func, axis=0, raw=False, result_type=None, args=(), **kwargs)[source]#
Apply a function along an axis of the DataFrame.
Objects passed to the function are Series objects whose index is
either the DataFrame’s index (axis=0) or the DataFrame’s columns
(axis=1). By default (result_type=None), the final return type
is inferred from the return type of the applied function. Otherwise,
it depends on the result_type argument.
Parameters
funcfunctionFunction to apply to each column or row.
axis{0 or ‘index’, 1 or ‘columns’}, default 0Axis along which the function is applied:
0 or ‘index’: apply function to each column.
1 or ‘columns’: apply function to each row.
rawbool, default FalseDetermines if row or column is passed as a Series or ndarray object:
False : passes each row or column as a Series to the
function.
True : the passed function will receive ndarray objects
instead.
If you are just applying a NumPy reduction function this will
achieve much better performance.
result_type{‘expand’, ‘reduce’, ‘broadcast’, None}, default NoneThese only act when axis=1 (columns):
‘expand’ : list-like results will be turned into columns.
‘reduce’ : returns a Series if possible rather than expanding
list-like results. This is the opposite of ‘expand’.
‘broadcast’ : results will be broadcast to the original shape
of the DataFrame, the original index and columns will be
retained.
The default behaviour (None) depends on the return value of the
applied function: list-like results will be returned as a Series
of those. However if the apply function returns a Series these
are expanded to columns.
argstuplePositional arguments to pass to func in addition to the
array/series.
**kwargsAdditional keyword arguments to pass as keyword arguments to
func.
Returns
Series or DataFrameResult of applying func along the given axis of the
DataFrame.
See also
DataFrame.applymapFor elementwise operations.
DataFrame.aggregateOnly perform aggregating type operations.
DataFrame.transformOnly perform transforming type operations.
Notes
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
Examples
>>> df = pd.DataFrame([[4, 9]] * 3, columns=['A', 'B'])
>>> df
A B
0 4 9
1 4 9
2 4 9
Using a numpy universal function (in this case the same as
np.sqrt(df)):
>>> df.apply(np.sqrt)
A B
0 2.0 3.0
1 2.0 3.0
2 2.0 3.0
Using a reducing function on either axis
>>> df.apply(np.sum, axis=0)
A 12
B 27
dtype: int64
>>> df.apply(np.sum, axis=1)
0 13
1 13
2 13
dtype: int64
Returning a list-like will result in a Series
>>> df.apply(lambda x: [1, 2], axis=1)
0 [1, 2]
1 [1, 2]
2 [1, 2]
dtype: object
Passing result_type='expand' will expand list-like results
to columns of a Dataframe
>>> df.apply(lambda x: [1, 2], axis=1, result_type='expand')
0 1
0 1 2
1 1 2
2 1 2
Returning a Series inside the function is similar to passing
result_type='expand'. The resulting column names
will be the Series index.
>>> df.apply(lambda x: pd.Series([1, 2], index=['foo', 'bar']), axis=1)
foo bar
0 1 2
1 1 2
2 1 2
Passing result_type='broadcast' will ensure the same shape
result, whether list-like or scalar is returned by the function,
and broadcast it along the axis. The resulting column names will
be the originals.
>>> df.apply(lambda x: [1, 2], axis=1, result_type='broadcast')
A B
0 1 2
1 1 2
2 1 2
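The args and **kwargs parameters described above forward extra arguments to func; a minimal sketch (scale is a made-up helper, not part of the API):
import pandas as pd

df = pd.DataFrame([[4, 9]] * 3, columns=["A", "B"])

def scale(col, factor, offset=0):
    # col is a whole column (a Series), since axis defaults to 0
    return col * factor + offset

# factor is passed positionally via args, offset as a keyword argument
print(df.apply(scale, args=(10,), offset=1))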
| 25
| 368
|
How do I add row indices to a column using lambda functions in Pandas?
I have a dataframe as follows:
Original dataframe:
  Index  Value
0    aT      1
1   bee      2
2    cT      3
3     Y      4
4     D      5
I would like to combine each item in the "index" column (except items trailing with T), hyphen (-) and row number like this:
Expected result:
  Index  Value
0     aT      1
1  bee-1      2
2     cT      3
3    Y-3      4
4    D-4      5
My code is the following:
df = pandas.DataFrame({"Index": ["aT", "bee", "cT","Y","D"], "Value": [1, 2, 3,4,5]})
ind_name = df.iloc[df.index,0].apply(lambda x: x + '-' + str(df.index) if "T" not in x else x)
How to correct my code?
|
69,562,174
|
check if column is blank in pandas dataframe
|
<p>I have the following csv file:</p>
<pre><code>A|B|C
1100|8718|2021-11-21
1104|21|
</code></pre>
<p>I want to create a dataframe that gives me the date output as follows:</p>
<pre><code> A B C
0 1100 8718 20211121000000
1 1104 21 ""
</code></pre>
<p>This means</p>
<pre><code>if C is empty:
put doublequotes
else:
format date to yyyymmddhhmmss (adding 0s to hhmmss)
</code></pre>
<p>My code:</p>
<pre><code>df['C'] = np.where(df['C'].empty, df['C'].str.replace('', '""'), df['C'] + '000000')
</code></pre>
<p>but it gives me the following:</p>
<pre><code> A B C
0 1100 8718 2021-11-21
1 1104 21 0
</code></pre>
<p>I have tried another piece of code:</p>
<pre><code>if df['C'].empty:
df['C'] = df['C'].str.replace('', '""')
else:
df['C'] = df['C'].str.replace('-', '') + '000000'
</code></pre>
<p>OUTPUT:</p>
<pre><code> A B C
0 1100 8718 20211121000000
1 1104 21 0000000
</code></pre>
| 69,562,310
| 2021-10-13T20:51:43.923000
| 2
| 1
| 0
| 568
|
python|pandas
|
<p>Use <code>dt.strftime</code>:</p>
<pre><code>df = pd.read_csv('data.csv', sep='|', parse_dates=['C'])
df['C'] = df['C'].dt.strftime('%Y%m%d%H%M%S').fillna('""')
print(df)
# Output:
A B C
0 1100 8718 20211121000000
1 1104 21 ""
</code></pre>
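<p>If you would rather keep the original <code>np.where</code> approach than use <code>parse_dates</code>, the key change is to test each row with <code>isna()</code> instead of <code>.empty</code>, which describes the whole Series. A rough sketch, assuming the same <code>data.csv</code> layout:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.read_csv('data.csv', sep='|')  # C stays a string column

# Per-row test for missing/blank values (not .empty, which is Series-wide)
blank = df['C'].isna() | df['C'].astype(str).str.strip().eq('')
df['C'] = np.where(blank, '""', df['C'].astype(str).str.replace('-', '', regex=False) + '000000')
print(df)
</code></pre>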
| 2021-10-13T21:04:33.533000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.empty.html
|
pandas.DataFrame.empty#
pandas.DataFrame.empty#
property DataFrame.empty[source]#
Indicator whether Series/DataFrame is empty.
True if Series/DataFrame is entirely empty (no items), meaning any of the
axes are of length 0.
Returns
boolIf Series/DataFrame is empty, return True, if not return False.
See also
Series.dropnaReturn series without null values.
DataFrame.dropnaReturn DataFrame with labels on given axis omitted where (all or any) data are missing.
Notes
If Series/DataFrame contains only NaNs, it is still not considered empty. See
the example below.
Examples
An example of an actual empty DataFrame. Notice the index is empty:
>>> df_empty = pd.DataFrame({'A' : []})
>>> df_empty
Empty DataFrame
Columns: [A]
Index: []
>>> df_empty.empty
True
If we only have NaNs in our DataFrame, it is not considered empty! We
will need to drop the NaNs to make the DataFrame empty:
Use dt.strftime:
df = pd.read_csv('data.csv', sep='|', parse_dates=['C'])
df['C'] = df['C'].dt.strftime('%Y%m%d%H%M%S').fillna('""')
print(df)
# Output:
A B C
0 1100 8718 20211121000000
1 1104 21 ""
>>> df = pd.DataFrame({'A' : [np.nan]})
>>> df
A
0 NaN
>>> df.empty
False
>>> df.dropna().empty
True
>>> ser_empty = pd.Series({'A' : []})
>>> ser_empty
A []
dtype: object
>>> ser_empty.empty
False
>>> ser_empty = pd.Series()
>>> ser_empty.empty
True
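Related to the question above: to test whether a particular column is entirely blank (rather than whether the whole frame is empty), a per-column check is the usual tool; a short sketch:
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1100, 1104], "C": [np.nan, np.nan]})

print(df.empty)                # False: the frame itself has rows
print(df["C"].isna().all())    # True: every value in column C is missing
print(df["C"].dropna().empty)  # True: equivalent check via dropna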
| 900
| 1,144
|
check if column is blank in pandas dataframe
I have the following csv file:
A|B|C
1100|8718|2021-11-21
1104|21|
I want to create a dataframe that gives me the date output as follows:
A B C
0 1100 8718 20211121000000
1 1104 21 ""
This means
if C is empty:
put doublequotes
else:
format date to yyyymmddhhmmss (adding 0s to hhmmss)
My code:
df['C'] = np.where(df['C'].empty, df['C'].str.replace('', '""'), df['C'] + '000000')
but it gives me the following:
A B C
0 1100 8718 2021-11-21
1 1104 21 0
I have tried another piece of code:
if df['C'].empty:
df['C'] = df['C'].str.replace('', '""')
else:
df['C'] = df['C'].str.replace('-', '') + '000000'
OUTPUT:
A B C
0 1100 8718 20211121000000
1 1104 21 0000000
|
65,258,629
|
How to identify zones in a table using pandas?
|
<p>I have a file with a table (.csv file).
The table is composed of many sub-"areas", like this example:</p>
<p><a href="https://i.stack.imgur.com/TucnE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TucnE.png" alt="Image 1" /></a></p>
<p>As you can see, there are some data which can be grouped together (blue group, orange group, etc.)</p>
<p>Now, the color is just there to make the concept clear; in the .csv there is no color to identify the groups, and the groups' dimensions (number of rows) can change. There is no pattern to predict whether the next group has 1, 2, 3, 4 or more rows.</p>
<p>The problem is that I need to open the table and import it into a dataframe using pandas. In my algorithm, one group should be identified, copied to another dataframe and then saved.</p>
<p>How can I group data using pandas?</p>
<p>I was thinking to index the groups like the following table:</p>
<p><a href="https://i.stack.imgur.com/nLhCt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nLhCt.png" alt="Picture" /></a></p>
<p>but in this case I cannot access the cells with the same index sequentially.</p>
<p>Any idea?</p>
<p>EDIT: here the table from the .csv file:</p>
<pre><code>,X,Y,Z,mm,ff,cc
1,1,2,3,0.2,0.4,0.3
,,,,0.1,0.3,0.4
2,1,2,3,0.1,1.2,-1.2
,,,,0.12,-1.234,303.4
,,,,1.2,43.2,44.3
,,,,7.4,88.3,34.4
3,2,4,2,1.13,4.1,55.1
,,,,80.3,34.1,4.01
,,,,43.12,12.3,98.4
</code></pre>
| 65,258,664
| 2020-12-11T21:09:33.650000
| 2
| null | 0
| 57
|
python|pandas
|
<p>Try <code>groupby</code>:</p>
<pre><code>groups = df[['X','Y','Z']].notna().all(axis=1).cumsum()
for k, d in df.groupby(groups):
# do something with the groups
print(f'Group {k}')
print(d)
</code></pre>
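<p>Building on that, each detected group can be copied to its own dataframe and saved. A sketch assuming the table from the question has been saved as <code>groups.csv</code> (the file name is made up):</p>
<pre><code>import pandas as pd

df = pd.read_csv('groups.csv')

# A new group starts wherever X, Y and Z are all present
groups = df[['X','Y','Z']].notna().all(axis=1).cumsum()

for k, d in df.groupby(groups):
    d = d.copy()
    # Optionally carry the group's X/Y/Z values down to every row
    d[['X','Y','Z']] = d[['X','Y','Z']].ffill()
    d.to_csv(f'group_{k}.csv', index=False)
</code></pre>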
| 2020-12-11T21:13:07.197000
| 0
|
https://pandas.pydata.org/docs/user_guide/timeseries.html
|
Time series / date functionality#
Time series / date functionality#
pandas contains extensive capabilities and features for working with time series data for all domains.
Using the NumPy datetime64 and timedelta64 dtypes, pandas has consolidated a large number of
features from other Python libraries like scikits.timeseries as well as created
a tremendous amount of new functionality for manipulating time series data.
For example, pandas supports:
Parsing time series information from various sources and formats
In [1]: import datetime
In [2]: dti = pd.to_datetime(
...: ["1/1/2018", np.datetime64("2018-01-01"), datetime.datetime(2018, 1, 1)]
...: )
...:
In [3]: dti
Out[3]: DatetimeIndex(['2018-01-01', '2018-01-01', '2018-01-01'], dtype='datetime64[ns]', freq=None)
Generate sequences of fixed-frequency dates and time spans
In [4]: dti = pd.date_range("2018-01-01", periods=3, freq="H")
In [5]: dti
Out[5]:
DatetimeIndex(['2018-01-01 00:00:00', '2018-01-01 01:00:00',
Try groupby:
groups = df[['X','Y','Z']].notna().all(axis=1).cumsum()
for k, d in df.groupby(groups):
# do something with the groups
print(f'Group {k}')
print(d)
'2018-01-01 02:00:00'],
dtype='datetime64[ns]', freq='H')
Manipulating and converting date times with timezone information
In [6]: dti = dti.tz_localize("UTC")
In [7]: dti
Out[7]:
DatetimeIndex(['2018-01-01 00:00:00+00:00', '2018-01-01 01:00:00+00:00',
'2018-01-01 02:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq='H')
In [8]: dti.tz_convert("US/Pacific")
Out[8]:
DatetimeIndex(['2017-12-31 16:00:00-08:00', '2017-12-31 17:00:00-08:00',
'2017-12-31 18:00:00-08:00'],
dtype='datetime64[ns, US/Pacific]', freq='H')
Resampling or converting a time series to a particular frequency
In [9]: idx = pd.date_range("2018-01-01", periods=5, freq="H")
In [10]: ts = pd.Series(range(len(idx)), index=idx)
In [11]: ts
Out[11]:
2018-01-01 00:00:00 0
2018-01-01 01:00:00 1
2018-01-01 02:00:00 2
2018-01-01 03:00:00 3
2018-01-01 04:00:00 4
Freq: H, dtype: int64
In [12]: ts.resample("2H").mean()
Out[12]:
2018-01-01 00:00:00 0.5
2018-01-01 02:00:00 2.5
2018-01-01 04:00:00 4.0
Freq: 2H, dtype: float64
Performing date and time arithmetic with absolute or relative time increments
In [13]: friday = pd.Timestamp("2018-01-05")
In [14]: friday.day_name()
Out[14]: 'Friday'
# Add 1 day
In [15]: saturday = friday + pd.Timedelta("1 day")
In [16]: saturday.day_name()
Out[16]: 'Saturday'
# Add 1 business day (Friday --> Monday)
In [17]: monday = friday + pd.offsets.BDay()
In [18]: monday.day_name()
Out[18]: 'Monday'
pandas provides a relatively compact and self-contained set of tools for
performing the above tasks and more.
Overview#
pandas captures 4 general time related concepts:
Date times: A specific date and time with timezone support. Similar to datetime.datetime from the standard library.
Time deltas: An absolute time duration. Similar to datetime.timedelta from the standard library.
Time spans: A span of time defined by a point in time and its associated frequency.
Date offsets: A relative time duration that respects calendar arithmetic. Similar to dateutil.relativedelta.relativedelta from the dateutil package.
Concept | Scalar Class | Array Class | pandas Data Type | Primary Creation Method
Date times | Timestamp | DatetimeIndex | datetime64[ns] or datetime64[ns, tz] | to_datetime or date_range
Time deltas | Timedelta | TimedeltaIndex | timedelta64[ns] | to_timedelta or timedelta_range
Time spans | Period | PeriodIndex | period[freq] | Period or period_range
Date offsets | DateOffset | None | None | DateOffset
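As a quick orientation, one scalar of each concept can be constructed directly (a minimal sketch):
import pandas as pd

stamp = pd.Timestamp("2021-06-01 12:30")   # date time
delta = pd.Timedelta("1 days 2 hours")     # time delta
span = pd.Period("2021-06", freq="M")      # time span
offset = pd.DateOffset(months=1)           # date offset

print(stamp + delta)                   # absolute arithmetic
print(span.start_time, span.end_time)  # boundaries of the span
print(stamp + offset)                  # calendar-aware arithmetic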
For time series data, it’s conventional to represent the time component in the index of a Series or DataFrame
so manipulations can be performed with respect to the time element.
In [19]: pd.Series(range(3), index=pd.date_range("2000", freq="D", periods=3))
Out[19]:
2000-01-01 0
2000-01-02 1
2000-01-03 2
Freq: D, dtype: int64
However, Series and DataFrame can directly also support the time component as data itself.
In [20]: pd.Series(pd.date_range("2000", freq="D", periods=3))
Out[20]:
0 2000-01-01
1 2000-01-02
2 2000-01-03
dtype: datetime64[ns]
Series and DataFrame have extended data type support and functionality for datetime, timedelta
and Period data when passed into those constructors. DateOffset
data however will be stored as object data.
In [21]: pd.Series(pd.period_range("1/1/2011", freq="M", periods=3))
Out[21]:
0 2011-01
1 2011-02
2 2011-03
dtype: period[M]
In [22]: pd.Series([pd.DateOffset(1), pd.DateOffset(2)])
Out[22]:
0 <DateOffset>
1 <2 * DateOffsets>
dtype: object
In [23]: pd.Series(pd.date_range("1/1/2011", freq="M", periods=3))
Out[23]:
0 2011-01-31
1 2011-02-28
2 2011-03-31
dtype: datetime64[ns]
Lastly, pandas represents null date times, time deltas, and time spans as NaT which
is useful for representing missing or null date like values and behaves similar
as np.nan does for float data.
In [24]: pd.Timestamp(pd.NaT)
Out[24]: NaT
In [25]: pd.Timedelta(pd.NaT)
Out[25]: NaT
In [26]: pd.Period(pd.NaT)
Out[26]: NaT
# Equality acts as np.nan would
In [27]: pd.NaT == pd.NaT
Out[27]: False
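Because NaT never compares equal to itself, missing datetimes are usually detected with pd.isna / Series.isna rather than ==; a brief sketch:
import pandas as pd

s = pd.Series(pd.to_datetime(["2021-01-01", None]))

print(pd.isna(pd.NaT))      # True
print(s.isna().tolist())    # [False, True]
print((s == pd.NaT).any())  # False: equality never matches NaT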
Timestamps vs. time spans#
Timestamped data is the most basic type of time series data that associates
values with points in time. For pandas objects it means using the points in
time.
In [28]: pd.Timestamp(datetime.datetime(2012, 5, 1))
Out[28]: Timestamp('2012-05-01 00:00:00')
In [29]: pd.Timestamp("2012-05-01")
Out[29]: Timestamp('2012-05-01 00:00:00')
In [30]: pd.Timestamp(2012, 5, 1)
Out[30]: Timestamp('2012-05-01 00:00:00')
However, in many cases it is more natural to associate things like change
variables with a time span instead. The span represented by Period can be
specified explicitly, or inferred from datetime string format.
For example:
In [31]: pd.Period("2011-01")
Out[31]: Period('2011-01', 'M')
In [32]: pd.Period("2012-05", freq="D")
Out[32]: Period('2012-05-01', 'D')
Timestamp and Period can serve as an index. Lists of
Timestamp and Period are automatically coerced to DatetimeIndex
and PeriodIndex respectively.
In [33]: dates = [
....: pd.Timestamp("2012-05-01"),
....: pd.Timestamp("2012-05-02"),
....: pd.Timestamp("2012-05-03"),
....: ]
....:
In [34]: ts = pd.Series(np.random.randn(3), dates)
In [35]: type(ts.index)
Out[35]: pandas.core.indexes.datetimes.DatetimeIndex
In [36]: ts.index
Out[36]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
In [37]: ts
Out[37]:
2012-05-01 0.469112
2012-05-02 -0.282863
2012-05-03 -1.509059
dtype: float64
In [38]: periods = [pd.Period("2012-01"), pd.Period("2012-02"), pd.Period("2012-03")]
In [39]: ts = pd.Series(np.random.randn(3), periods)
In [40]: type(ts.index)
Out[40]: pandas.core.indexes.period.PeriodIndex
In [41]: ts.index
Out[41]: PeriodIndex(['2012-01', '2012-02', '2012-03'], dtype='period[M]')
In [42]: ts
Out[42]:
2012-01 -1.135632
2012-02 1.212112
2012-03 -0.173215
Freq: M, dtype: float64
pandas allows you to capture both representations and
convert between them. Under the hood, pandas represents timestamps using
instances of Timestamp and sequences of timestamps using instances of
DatetimeIndex. For regular time spans, pandas uses Period objects for
scalar values and PeriodIndex for sequences of spans. Better support for
irregular intervals with arbitrary start and end points is forthcoming in
future releases.
Converting to timestamps#
To convert a Series or list-like object of date-like objects e.g. strings,
epochs, or a mixture, you can use the to_datetime function. When passed
a Series, this returns a Series (with the same index), while a list-like
is converted to a DatetimeIndex:
In [43]: pd.to_datetime(pd.Series(["Jul 31, 2009", "2010-01-10", None]))
Out[43]:
0 2009-07-31
1 2010-01-10
2 NaT
dtype: datetime64[ns]
In [44]: pd.to_datetime(["2005/11/23", "2010.12.31"])
Out[44]: DatetimeIndex(['2005-11-23', '2010-12-31'], dtype='datetime64[ns]', freq=None)
If you use dates which start with the day first (i.e. European style),
you can pass the dayfirst flag:
In [45]: pd.to_datetime(["04-01-2012 10:00"], dayfirst=True)
Out[45]: DatetimeIndex(['2012-01-04 10:00:00'], dtype='datetime64[ns]', freq=None)
In [46]: pd.to_datetime(["14-01-2012", "01-14-2012"], dayfirst=True)
Out[46]: DatetimeIndex(['2012-01-14', '2012-01-14'], dtype='datetime64[ns]', freq=None)
Warning
You see in the above example that dayfirst isn’t strict. If a date
can’t be parsed with the day being first it will be parsed as if
dayfirst were False, and in the case of parsing delimited date strings
(e.g. 31-12-2012) then a warning will also be raised.
If you pass a single string to to_datetime, it returns a single Timestamp.
Timestamp can also accept string input, but it doesn’t accept string parsing
options like dayfirst or format, so use to_datetime if these are required.
In [47]: pd.to_datetime("2010/11/12")
Out[47]: Timestamp('2010-11-12 00:00:00')
In [48]: pd.Timestamp("2010/11/12")
Out[48]: Timestamp('2010-11-12 00:00:00')
You can also use the DatetimeIndex constructor directly:
In [49]: pd.DatetimeIndex(["2018-01-01", "2018-01-03", "2018-01-05"])
Out[49]: DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'], dtype='datetime64[ns]', freq=None)
The string ‘infer’ can be passed in order to set the frequency of the index as the
inferred frequency upon creation:
In [50]: pd.DatetimeIndex(["2018-01-01", "2018-01-03", "2018-01-05"], freq="infer")
Out[50]: DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'], dtype='datetime64[ns]', freq='2D')
Providing a format argument#
In addition to the required datetime string, a format argument can be passed to ensure specific parsing.
This could also potentially speed up the conversion considerably.
In [51]: pd.to_datetime("2010/11/12", format="%Y/%m/%d")
Out[51]: Timestamp('2010-11-12 00:00:00')
In [52]: pd.to_datetime("12-11-2010 00:00", format="%d-%m-%Y %H:%M")
Out[52]: Timestamp('2010-11-12 00:00:00')
For more information on the choices available when specifying the format
option, see the Python datetime documentation.
Assembling datetime from multiple DataFrame columns#
You can also pass a DataFrame of integer or string columns to assemble into a Series of Timestamps.
In [53]: df = pd.DataFrame(
....: {"year": [2015, 2016], "month": [2, 3], "day": [4, 5], "hour": [2, 3]}
....: )
....:
In [54]: pd.to_datetime(df)
Out[54]:
0 2015-02-04 02:00:00
1 2016-03-05 03:00:00
dtype: datetime64[ns]
You can pass only the columns that you need to assemble.
In [55]: pd.to_datetime(df[["year", "month", "day"]])
Out[55]:
0 2015-02-04
1 2016-03-05
dtype: datetime64[ns]
pd.to_datetime looks for standard designations of the datetime component in the column names, including:
required: year, month, day
optional: hour, minute, second, millisecond, microsecond, nanosecond
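For instance, optional components such as minute and second are picked up in the same way (a small sketch with made-up values):
import pandas as pd

df = pd.DataFrame(
    {"year": [2020, 2021], "month": [7, 8], "day": [1, 15],
     "hour": [9, 18], "minute": [30, 45], "second": [0, 5]}
)

# All recognized component columns are combined into one datetime per row
print(pd.to_datetime(df))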
Invalid data#
The default behavior, errors='raise', is to raise when unparsable:
In [2]: pd.to_datetime(['2009/07/31', 'asd'], errors='raise')
ValueError: Unknown string format
Pass errors='ignore' to return the original input when unparsable:
In [56]: pd.to_datetime(["2009/07/31", "asd"], errors="ignore")
Out[56]: Index(['2009/07/31', 'asd'], dtype='object')
Pass errors='coerce' to convert unparsable data to NaT (not a time):
In [57]: pd.to_datetime(["2009/07/31", "asd"], errors="coerce")
Out[57]: DatetimeIndex(['2009-07-31', 'NaT'], dtype='datetime64[ns]', freq=None)
Epoch timestamps#
pandas supports converting integer or float epoch times to Timestamp and
DatetimeIndex. The default unit is nanoseconds, since that is how Timestamp
objects are stored internally. However, epochs are often stored in another unit
which can be specified. These are computed from the starting point specified by the
origin parameter.
In [58]: pd.to_datetime(
....: [1349720105, 1349806505, 1349892905, 1349979305, 1350065705], unit="s"
....: )
....:
Out[58]:
DatetimeIndex(['2012-10-08 18:15:05', '2012-10-09 18:15:05',
'2012-10-10 18:15:05', '2012-10-11 18:15:05',
'2012-10-12 18:15:05'],
dtype='datetime64[ns]', freq=None)
In [59]: pd.to_datetime(
....: [1349720105100, 1349720105200, 1349720105300, 1349720105400, 1349720105500],
....: unit="ms",
....: )
....:
Out[59]:
DatetimeIndex(['2012-10-08 18:15:05.100000', '2012-10-08 18:15:05.200000',
'2012-10-08 18:15:05.300000', '2012-10-08 18:15:05.400000',
'2012-10-08 18:15:05.500000'],
dtype='datetime64[ns]', freq=None)
Note
The unit parameter does not use the same strings as the format parameter
that was discussed above. The
available units are listed on the documentation for pandas.to_datetime().
Changed in version 1.0.0.
Constructing a Timestamp or DatetimeIndex with an epoch timestamp
with the tz argument specified will raise a ValueError. If you have
epochs in wall time in another timezone, you can read the epochs
as timezone-naive timestamps and then localize to the appropriate timezone:
In [60]: pd.Timestamp(1262347200000000000).tz_localize("US/Pacific")
Out[60]: Timestamp('2010-01-01 12:00:00-0800', tz='US/Pacific')
In [61]: pd.DatetimeIndex([1262347200000000000]).tz_localize("US/Pacific")
Out[61]: DatetimeIndex(['2010-01-01 12:00:00-08:00'], dtype='datetime64[ns, US/Pacific]', freq=None)
Note
Epoch times will be rounded to the nearest nanosecond.
Warning
Conversion of float epoch times can lead to inaccurate and unexpected results.
Python floats have about 15 digits precision in
decimal. Rounding during conversion from float to high precision Timestamp is
unavoidable. The only way to achieve exact precision is to use a fixed-width
type (e.g. an int64).
In [62]: pd.to_datetime([1490195805.433, 1490195805.433502912], unit="s")
Out[62]: DatetimeIndex(['2017-03-22 15:16:45.433000088', '2017-03-22 15:16:45.433502913'], dtype='datetime64[ns]', freq=None)
In [63]: pd.to_datetime(1490195805433502912, unit="ns")
Out[63]: Timestamp('2017-03-22 15:16:45.433502912')
See also
Using the origin parameter
From timestamps to epoch#
To invert the operation from above, namely, to convert from a Timestamp to a ‘unix’ epoch:
In [64]: stamps = pd.date_range("2012-10-08 18:15:05", periods=4, freq="D")
In [65]: stamps
Out[65]:
DatetimeIndex(['2012-10-08 18:15:05', '2012-10-09 18:15:05',
'2012-10-10 18:15:05', '2012-10-11 18:15:05'],
dtype='datetime64[ns]', freq='D')
We subtract the epoch (midnight at January 1, 1970 UTC) and then floor divide by the
“unit” (1 second).
In [66]: (stamps - pd.Timestamp("1970-01-01")) // pd.Timedelta("1s")
Out[66]: Int64Index([1349720105, 1349806505, 1349892905, 1349979305], dtype='int64')
Using the origin parameter#
Using the origin parameter, one can specify an alternative starting point for creation
of a DatetimeIndex. For example, to use 1960-01-01 as the starting date:
In [67]: pd.to_datetime([1, 2, 3], unit="D", origin=pd.Timestamp("1960-01-01"))
Out[67]: DatetimeIndex(['1960-01-02', '1960-01-03', '1960-01-04'], dtype='datetime64[ns]', freq=None)
The default is set at origin='unix', which defaults to 1970-01-01 00:00:00.
Commonly called ‘unix epoch’ or POSIX time.
In [68]: pd.to_datetime([1, 2, 3], unit="D")
Out[68]: DatetimeIndex(['1970-01-02', '1970-01-03', '1970-01-04'], dtype='datetime64[ns]', freq=None)
Generating ranges of timestamps#
To generate an index with timestamps, you can use either the DatetimeIndex or
Index constructor and pass in a list of datetime objects:
In [69]: dates = [
....: datetime.datetime(2012, 5, 1),
....: datetime.datetime(2012, 5, 2),
....: datetime.datetime(2012, 5, 3),
....: ]
....:
# Note the frequency information
In [70]: index = pd.DatetimeIndex(dates)
In [71]: index
Out[71]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
# Automatically converted to DatetimeIndex
In [72]: index = pd.Index(dates)
In [73]: index
Out[73]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
In practice this becomes very cumbersome because we often need a very long
index with a large number of timestamps. If we need timestamps on a regular
frequency, we can use the date_range() and bdate_range() functions
to create a DatetimeIndex. The default frequency for date_range is a
calendar day while the default for bdate_range is a business day:
In [74]: start = datetime.datetime(2011, 1, 1)
In [75]: end = datetime.datetime(2012, 1, 1)
In [76]: index = pd.date_range(start, end)
In [77]: index
Out[77]:
DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03', '2011-01-04',
'2011-01-05', '2011-01-06', '2011-01-07', '2011-01-08',
'2011-01-09', '2011-01-10',
...
'2011-12-23', '2011-12-24', '2011-12-25', '2011-12-26',
'2011-12-27', '2011-12-28', '2011-12-29', '2011-12-30',
'2011-12-31', '2012-01-01'],
dtype='datetime64[ns]', length=366, freq='D')
In [78]: index = pd.bdate_range(start, end)
In [79]: index
Out[79]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07', '2011-01-10', '2011-01-11', '2011-01-12',
'2011-01-13', '2011-01-14',
...
'2011-12-19', '2011-12-20', '2011-12-21', '2011-12-22',
'2011-12-23', '2011-12-26', '2011-12-27', '2011-12-28',
'2011-12-29', '2011-12-30'],
dtype='datetime64[ns]', length=260, freq='B')
Convenience functions like date_range and bdate_range can utilize a
variety of frequency aliases:
In [80]: pd.date_range(start, periods=1000, freq="M")
Out[80]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-30',
'2011-05-31', '2011-06-30', '2011-07-31', '2011-08-31',
'2011-09-30', '2011-10-31',
...
'2093-07-31', '2093-08-31', '2093-09-30', '2093-10-31',
'2093-11-30', '2093-12-31', '2094-01-31', '2094-02-28',
'2094-03-31', '2094-04-30'],
dtype='datetime64[ns]', length=1000, freq='M')
In [81]: pd.bdate_range(start, periods=250, freq="BQS")
Out[81]:
DatetimeIndex(['2011-01-03', '2011-04-01', '2011-07-01', '2011-10-03',
'2012-01-02', '2012-04-02', '2012-07-02', '2012-10-01',
'2013-01-01', '2013-04-01',
...
'2071-01-01', '2071-04-01', '2071-07-01', '2071-10-01',
'2072-01-01', '2072-04-01', '2072-07-01', '2072-10-03',
'2073-01-02', '2073-04-03'],
dtype='datetime64[ns]', length=250, freq='BQS-JAN')
date_range and bdate_range make it easy to generate a range of dates
using various combinations of parameters like start, end, periods,
and freq. The start and end dates are strictly inclusive, so dates outside
of those specified will not be generated:
In [82]: pd.date_range(start, end, freq="BM")
Out[82]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')
In [83]: pd.date_range(start, end, freq="W")
Out[83]:
DatetimeIndex(['2011-01-02', '2011-01-09', '2011-01-16', '2011-01-23',
'2011-01-30', '2011-02-06', '2011-02-13', '2011-02-20',
'2011-02-27', '2011-03-06', '2011-03-13', '2011-03-20',
'2011-03-27', '2011-04-03', '2011-04-10', '2011-04-17',
'2011-04-24', '2011-05-01', '2011-05-08', '2011-05-15',
'2011-05-22', '2011-05-29', '2011-06-05', '2011-06-12',
'2011-06-19', '2011-06-26', '2011-07-03', '2011-07-10',
'2011-07-17', '2011-07-24', '2011-07-31', '2011-08-07',
'2011-08-14', '2011-08-21', '2011-08-28', '2011-09-04',
'2011-09-11', '2011-09-18', '2011-09-25', '2011-10-02',
'2011-10-09', '2011-10-16', '2011-10-23', '2011-10-30',
'2011-11-06', '2011-11-13', '2011-11-20', '2011-11-27',
'2011-12-04', '2011-12-11', '2011-12-18', '2011-12-25',
'2012-01-01'],
dtype='datetime64[ns]', freq='W-SUN')
In [84]: pd.bdate_range(end=end, periods=20)
Out[84]:
DatetimeIndex(['2011-12-05', '2011-12-06', '2011-12-07', '2011-12-08',
'2011-12-09', '2011-12-12', '2011-12-13', '2011-12-14',
'2011-12-15', '2011-12-16', '2011-12-19', '2011-12-20',
'2011-12-21', '2011-12-22', '2011-12-23', '2011-12-26',
'2011-12-27', '2011-12-28', '2011-12-29', '2011-12-30'],
dtype='datetime64[ns]', freq='B')
In [85]: pd.bdate_range(start=start, periods=20)
Out[85]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07', '2011-01-10', '2011-01-11', '2011-01-12',
'2011-01-13', '2011-01-14', '2011-01-17', '2011-01-18',
'2011-01-19', '2011-01-20', '2011-01-21', '2011-01-24',
'2011-01-25', '2011-01-26', '2011-01-27', '2011-01-28'],
dtype='datetime64[ns]', freq='B')
Specifying start, end, and periods will generate a range of evenly spaced
dates from start to end inclusively, with periods number of elements in the
resulting DatetimeIndex:
In [86]: pd.date_range("2018-01-01", "2018-01-05", periods=5)
Out[86]:
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05'],
dtype='datetime64[ns]', freq=None)
In [87]: pd.date_range("2018-01-01", "2018-01-05", periods=10)
Out[87]:
DatetimeIndex(['2018-01-01 00:00:00', '2018-01-01 10:40:00',
'2018-01-01 21:20:00', '2018-01-02 08:00:00',
'2018-01-02 18:40:00', '2018-01-03 05:20:00',
'2018-01-03 16:00:00', '2018-01-04 02:40:00',
'2018-01-04 13:20:00', '2018-01-05 00:00:00'],
dtype='datetime64[ns]', freq=None)
Custom frequency ranges#
bdate_range can also generate a range of custom frequency dates by using
the weekmask and holidays parameters. These parameters will only be
used if a custom frequency string is passed.
In [88]: weekmask = "Mon Wed Fri"
In [89]: holidays = [datetime.datetime(2011, 1, 5), datetime.datetime(2011, 3, 14)]
In [90]: pd.bdate_range(start, end, freq="C", weekmask=weekmask, holidays=holidays)
Out[90]:
DatetimeIndex(['2011-01-03', '2011-01-07', '2011-01-10', '2011-01-12',
'2011-01-14', '2011-01-17', '2011-01-19', '2011-01-21',
'2011-01-24', '2011-01-26',
...
'2011-12-09', '2011-12-12', '2011-12-14', '2011-12-16',
'2011-12-19', '2011-12-21', '2011-12-23', '2011-12-26',
'2011-12-28', '2011-12-30'],
dtype='datetime64[ns]', length=154, freq='C')
In [91]: pd.bdate_range(start, end, freq="CBMS", weekmask=weekmask)
Out[91]:
DatetimeIndex(['2011-01-03', '2011-02-02', '2011-03-02', '2011-04-01',
'2011-05-02', '2011-06-01', '2011-07-01', '2011-08-01',
'2011-09-02', '2011-10-03', '2011-11-02', '2011-12-02'],
dtype='datetime64[ns]', freq='CBMS')
See also
Custom business days
Timestamp limitations#
Since pandas represents timestamps in nanosecond resolution, the time span that
can be represented using a 64-bit integer is limited to approximately 584 years:
In [92]: pd.Timestamp.min
Out[92]: Timestamp('1677-09-21 00:12:43.145224193')
In [93]: pd.Timestamp.max
Out[93]: Timestamp('2262-04-11 23:47:16.854775807')
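Dates outside this window cannot be stored as nanosecond-resolution Timestamps; depending on the use case, the error can be caught or the data represented with Periods instead (a sketch):
import pandas as pd

try:
    pd.Timestamp("3000-01-01")
except pd.errors.OutOfBoundsDatetime as exc:
    print("out of bounds:", exc)

# A Period can represent dates beyond the Timestamp limits
print(pd.Period("3000-01-01", freq="D"))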
See also
Representing out-of-bounds spans
Indexing#
One of the main uses for DatetimeIndex is as an index for pandas objects.
The DatetimeIndex class contains many time series related optimizations:
A large range of dates for various offsets are pre-computed and cached
under the hood in order to make generating subsequent date ranges very fast
(just have to grab a slice).
Fast shifting using the shift method on pandas objects.
Unioning of overlapping DatetimeIndex objects with the same frequency is
very fast (important for fast data alignment).
Quick access to date fields via properties such as year, month, etc.
Regularization functions like snap and very fast asof logic.
DatetimeIndex objects have all the basic functionality of regular Index
objects, and a smorgasbord of advanced time series specific methods for easy
frequency processing.
See also
Reindexing methods
Note
While pandas does not force you to have a sorted date index, some of these
methods may have unexpected or incorrect behavior if the dates are unsorted.
DatetimeIndex can be used like a regular index and offers all of its
intelligent functionality like selection, slicing, etc.
In [94]: rng = pd.date_range(start, end, freq="BM")
In [95]: ts = pd.Series(np.random.randn(len(rng)), index=rng)
In [96]: ts.index
Out[96]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')
In [97]: ts[:5].index
Out[97]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31'],
dtype='datetime64[ns]', freq='BM')
In [98]: ts[::2].index
Out[98]:
DatetimeIndex(['2011-01-31', '2011-03-31', '2011-05-31', '2011-07-29',
'2011-09-30', '2011-11-30'],
dtype='datetime64[ns]', freq='2BM')
Partial string indexing#
Dates and strings that parse to timestamps can be passed as indexing parameters:
In [99]: ts["1/31/2011"]
Out[99]: 0.11920871129693428
In [100]: ts[datetime.datetime(2011, 12, 25):]
Out[100]:
2011-12-30 0.56702
Freq: BM, dtype: float64
In [101]: ts["10/31/2011":"12/31/2011"]
Out[101]:
2011-10-31 0.271860
2011-11-30 -0.424972
2011-12-30 0.567020
Freq: BM, dtype: float64
To provide convenience for accessing longer time series, you can also pass in
the year or year and month as strings:
In [102]: ts["2011"]
Out[102]:
2011-01-31 0.119209
2011-02-28 -1.044236
2011-03-31 -0.861849
2011-04-29 -2.104569
2011-05-31 -0.494929
2011-06-30 1.071804
2011-07-29 0.721555
2011-08-31 -0.706771
2011-09-30 -1.039575
2011-10-31 0.271860
2011-11-30 -0.424972
2011-12-30 0.567020
Freq: BM, dtype: float64
In [103]: ts["2011-6"]
Out[103]:
2011-06-30 1.071804
Freq: BM, dtype: float64
This type of slicing will work on a DataFrame with a DatetimeIndex as well. Since the
partial string selection is a form of label slicing, the endpoints will be included. This
would include matching times on an included date:
Warning
Indexing DataFrame rows with a single string with getitem (e.g. frame[dtstring])
is deprecated starting with pandas 1.2.0 (given the ambiguity whether it is indexing
the rows or selecting a column) and will be removed in a future version. The equivalent
with .loc (e.g. frame.loc[dtstring]) is still supported.
In [104]: dft = pd.DataFrame(
.....: np.random.randn(100000, 1),
.....: columns=["A"],
.....: index=pd.date_range("20130101", periods=100000, freq="T"),
.....: )
.....:
In [105]: dft
Out[105]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-03-11 10:35:00 -0.747967
2013-03-11 10:36:00 -0.034523
2013-03-11 10:37:00 -0.201754
2013-03-11 10:38:00 -1.509067
2013-03-11 10:39:00 -1.693043
[100000 rows x 1 columns]
In [106]: dft.loc["2013"]
Out[106]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-03-11 10:35:00 -0.747967
2013-03-11 10:36:00 -0.034523
2013-03-11 10:37:00 -0.201754
2013-03-11 10:38:00 -1.509067
2013-03-11 10:39:00 -1.693043
[100000 rows x 1 columns]
This starts on the very first time in the month, and includes the last date and
time for the month:
In [107]: dft["2013-1":"2013-2"]
Out[107]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-28 23:55:00 0.850929
2013-02-28 23:56:00 0.976712
2013-02-28 23:57:00 -2.693884
2013-02-28 23:58:00 -1.575535
2013-02-28 23:59:00 -1.573517
[84960 rows x 1 columns]
This specifies a stop time that includes all of the times on the last day:
In [108]: dft["2013-1":"2013-2-28"]
Out[108]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-28 23:55:00 0.850929
2013-02-28 23:56:00 0.976712
2013-02-28 23:57:00 -2.693884
2013-02-28 23:58:00 -1.575535
2013-02-28 23:59:00 -1.573517
[84960 rows x 1 columns]
This specifies an exact stop time (and is not the same as the above):
In [109]: dft["2013-1":"2013-2-28 00:00:00"]
Out[109]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-27 23:56:00 1.197749
2013-02-27 23:57:00 0.720521
2013-02-27 23:58:00 -0.072718
2013-02-27 23:59:00 -0.681192
2013-02-28 00:00:00 -0.557501
[83521 rows x 1 columns]
We are stopping on the included end-point as it is part of the index:
In [110]: dft["2013-1-15":"2013-1-15 12:30:00"]
Out[110]:
A
2013-01-15 00:00:00 -0.984810
2013-01-15 00:01:00 0.941451
2013-01-15 00:02:00 1.559365
2013-01-15 00:03:00 1.034374
2013-01-15 00:04:00 -1.480656
... ...
2013-01-15 12:26:00 0.371454
2013-01-15 12:27:00 -0.930806
2013-01-15 12:28:00 -0.069177
2013-01-15 12:29:00 0.066510
2013-01-15 12:30:00 -0.003945
[751 rows x 1 columns]
DatetimeIndex partial string indexing also works on a DataFrame with a MultiIndex:
In [111]: dft2 = pd.DataFrame(
.....: np.random.randn(20, 1),
.....: columns=["A"],
.....: index=pd.MultiIndex.from_product(
.....: [pd.date_range("20130101", periods=10, freq="12H"), ["a", "b"]]
.....: ),
.....: )
.....:
In [112]: dft2
Out[112]:
A
2013-01-01 00:00:00 a -0.298694
b 0.823553
2013-01-01 12:00:00 a 0.943285
b -1.479399
2013-01-02 00:00:00 a -1.643342
... ...
2013-01-04 12:00:00 b 0.069036
2013-01-05 00:00:00 a 0.122297
b 1.422060
2013-01-05 12:00:00 a 0.370079
b 1.016331
[20 rows x 1 columns]
In [113]: dft2.loc["2013-01-05"]
Out[113]:
A
2013-01-05 00:00:00 a 0.122297
b 1.422060
2013-01-05 12:00:00 a 0.370079
b 1.016331
In [114]: idx = pd.IndexSlice
In [115]: dft2 = dft2.swaplevel(0, 1).sort_index()
In [116]: dft2.loc[idx[:, "2013-01-05"], :]
Out[116]:
A
a 2013-01-05 00:00:00 0.122297
2013-01-05 12:00:00 0.370079
b 2013-01-05 00:00:00 1.422060
2013-01-05 12:00:00 1.016331
New in version 0.25.0.
Slicing with string indexing also honors UTC offset.
In [117]: df = pd.DataFrame([0], index=pd.DatetimeIndex(["2019-01-01"], tz="US/Pacific"))
In [118]: df
Out[118]:
0
2019-01-01 00:00:00-08:00 0
In [119]: df["2019-01-01 12:00:00+04:00":"2019-01-01 13:00:00+04:00"]
Out[119]:
0
2019-01-01 00:00:00-08:00 0
Slice vs. exact match#
The same string used as an indexing parameter can be treated either as a slice or as an exact match depending on the resolution of the index. If the string is less accurate than the index, it will be treated as a slice, otherwise as an exact match.
Consider a Series object with a minute resolution index:
In [120]: series_minute = pd.Series(
.....: [1, 2, 3],
.....: pd.DatetimeIndex(
.....: ["2011-12-31 23:59:00", "2012-01-01 00:00:00", "2012-01-01 00:02:00"]
.....: ),
.....: )
.....:
In [121]: series_minute.index.resolution
Out[121]: 'minute'
A timestamp string less accurate than a minute gives a Series object.
In [122]: series_minute["2011-12-31 23"]
Out[122]:
2011-12-31 23:59:00 1
dtype: int64
A timestamp string with minute resolution (or more accurate) gives a scalar instead, i.e. it is not cast to a slice.
In [123]: series_minute["2011-12-31 23:59"]
Out[123]: 1
In [124]: series_minute["2011-12-31 23:59:00"]
Out[124]: 1
If index resolution is second, then the minute-accurate timestamp gives a
Series.
In [125]: series_second = pd.Series(
.....: [1, 2, 3],
.....: pd.DatetimeIndex(
.....: ["2011-12-31 23:59:59", "2012-01-01 00:00:00", "2012-01-01 00:00:01"]
.....: ),
.....: )
.....:
In [126]: series_second.index.resolution
Out[126]: 'second'
In [127]: series_second["2011-12-31 23:59"]
Out[127]:
2011-12-31 23:59:59 1
dtype: int64
If the timestamp string is treated as a slice, it can be used to index DataFrame with .loc[] as well.
In [128]: dft_minute = pd.DataFrame(
.....: {"a": [1, 2, 3], "b": [4, 5, 6]}, index=series_minute.index
.....: )
.....:
In [129]: dft_minute.loc["2011-12-31 23"]
Out[129]:
a b
2011-12-31 23:59:00 1 4
Warning
However, if the string is treated as an exact match, the selection in DataFrame’s [] will be column-wise and not row-wise, see Indexing Basics. For example dft_minute['2011-12-31 23:59'] will raise KeyError as '2011-12-31 23:59' has the same resolution as the index and there is no column with such name:
To always have unambiguous selection, whether the row is treated as a slice or a single selection, use .loc.
In [130]: dft_minute.loc["2011-12-31 23:59"]
Out[130]:
a 1
b 4
Name: 2011-12-31 23:59:00, dtype: int64
Note also that DatetimeIndex resolution cannot be less precise than day.
In [131]: series_monthly = pd.Series(
.....: [1, 2, 3], pd.DatetimeIndex(["2011-12", "2012-01", "2012-02"])
.....: )
.....:
In [132]: series_monthly.index.resolution
Out[132]: 'day'
In [133]: series_monthly["2011-12"] # returns Series
Out[133]:
2011-12-01 1
dtype: int64
Exact indexing#
As discussed in previous section, indexing a DatetimeIndex with a partial string depends on the “accuracy” of the period, in other words how specific the interval is in relation to the resolution of the index. In contrast, indexing with Timestamp or datetime objects is exact, because the objects have exact meaning. These also follow the semantics of including both endpoints.
These Timestamp and datetime objects have exact hours, minutes, and seconds, even though they were not explicitly specified (they are 0).
In [134]: dft[datetime.datetime(2013, 1, 1): datetime.datetime(2013, 2, 28)]
Out[134]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-27 23:56:00 1.197749
2013-02-27 23:57:00 0.720521
2013-02-27 23:58:00 -0.072718
2013-02-27 23:59:00 -0.681192
2013-02-28 00:00:00 -0.557501
[83521 rows x 1 columns]
With no defaults.
In [135]: dft[
.....: datetime.datetime(2013, 1, 1, 10, 12, 0): datetime.datetime(
.....: 2013, 2, 28, 10, 12, 0
.....: )
.....: ]
.....:
Out[135]:
A
2013-01-01 10:12:00 0.565375
2013-01-01 10:13:00 0.068184
2013-01-01 10:14:00 0.788871
2013-01-01 10:15:00 -0.280343
2013-01-01 10:16:00 0.931536
... ...
2013-02-28 10:08:00 0.148098
2013-02-28 10:09:00 -0.388138
2013-02-28 10:10:00 0.139348
2013-02-28 10:11:00 0.085288
2013-02-28 10:12:00 0.950146
[83521 rows x 1 columns]
Truncating & fancy indexing#
A truncate() convenience function is provided that is similar
to slicing. Note that truncate assumes a 0 value for any unspecified date
component in a DatetimeIndex in contrast to slicing which returns any
partially matching dates:
In [136]: rng2 = pd.date_range("2011-01-01", "2012-01-01", freq="W")
In [137]: ts2 = pd.Series(np.random.randn(len(rng2)), index=rng2)
In [138]: ts2.truncate(before="2011-11", after="2011-12")
Out[138]:
2011-11-06 0.437823
2011-11-13 -0.293083
2011-11-20 -0.059881
2011-11-27 1.252450
Freq: W-SUN, dtype: float64
In [139]: ts2["2011-11":"2011-12"]
Out[139]:
2011-11-06 0.437823
2011-11-13 -0.293083
2011-11-20 -0.059881
2011-11-27 1.252450
2011-12-04 0.046611
2011-12-11 0.059478
2011-12-18 -0.286539
2011-12-25 0.841669
Freq: W-SUN, dtype: float64
Even complicated fancy indexing that breaks the DatetimeIndex frequency
regularity will result in a DatetimeIndex, although frequency is lost:
In [140]: ts2[[0, 2, 6]].index
Out[140]: DatetimeIndex(['2011-01-02', '2011-01-16', '2011-02-13'], dtype='datetime64[ns]', freq=None)
Time/date components#
There are several time/date properties that one can access from Timestamp or a collection of timestamps like a DatetimeIndex.
Property | Description
year | The year of the datetime
month | The month of the datetime
day | The days of the datetime
hour | The hour of the datetime
minute | The minutes of the datetime
second | The seconds of the datetime
microsecond | The microseconds of the datetime
nanosecond | The nanoseconds of the datetime
date | Returns datetime.date (does not contain timezone information)
time | Returns datetime.time (does not contain timezone information)
timetz | Returns datetime.time as local time with timezone information
dayofyear | The ordinal day of year
day_of_year | The ordinal day of year
weekofyear | The week ordinal of the year
week | The week ordinal of the year
dayofweek | The number of the day of the week with Monday=0, Sunday=6
day_of_week | The number of the day of the week with Monday=0, Sunday=6
weekday | The number of the day of the week with Monday=0, Sunday=6
quarter | Quarter of the date: Jan-Mar = 1, Apr-Jun = 2, etc.
days_in_month | The number of days in the month of the datetime
is_month_start | Logical indicating if first day of month (defined by frequency)
is_month_end | Logical indicating if last day of month (defined by frequency)
is_quarter_start | Logical indicating if first day of quarter (defined by frequency)
is_quarter_end | Logical indicating if last day of quarter (defined by frequency)
is_year_start | Logical indicating if first day of year (defined by frequency)
is_year_end | Logical indicating if last day of year (defined by frequency)
is_leap_year | Logical indicating if the date belongs to a leap year
Furthermore, if you have a Series with datetimelike values, then you can
access these properties via the .dt accessor, as detailed in the section
on .dt accessors.
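Through that accessor the same fields are available element-wise on a Series; a brief sketch:
import pandas as pd

s = pd.Series(pd.date_range("2021-01-30", periods=3, freq="D"))

print(s.dt.month.tolist())           # [1, 1, 2]
print(s.dt.day_name().tolist())      # ['Saturday', 'Sunday', 'Monday']
print(s.dt.is_month_start.tolist())  # [False, False, True]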
New in version 1.1.0.
You may obtain the year, week and day components of the ISO year from the ISO 8601 standard:
In [141]: idx = pd.date_range(start="2019-12-29", freq="D", periods=4)
In [142]: idx.isocalendar()
Out[142]:
year week day
2019-12-29 2019 52 7
2019-12-30 2020 1 1
2019-12-31 2020 1 2
2020-01-01 2020 1 3
In [143]: idx.to_series().dt.isocalendar()
Out[143]:
year week day
2019-12-29 2019 52 7
2019-12-30 2020 1 1
2019-12-31 2020 1 2
2020-01-01 2020 1 3
DateOffset objects#
In the preceding examples, frequency strings (e.g. 'D') were used to specify
a frequency that defined:
how the date times in DatetimeIndex were spaced when using date_range()
the frequency of a Period or PeriodIndex
These frequency strings map to a DateOffset object and its subclasses. A DateOffset
is similar to a Timedelta that represents a duration of time but follows specific calendar duration rules.
For example, a Timedelta day will always increment datetimes by 24 hours, while a DateOffset day
will increment datetimes to the same time the next day whether a day represents 23, 24 or 25 hours due to daylight
savings time. However, all DateOffset subclasses that are an hour or smaller
(Hour, Minute, Second, Milli, Micro, Nano) behave like
Timedelta and respect absolute time.
The basic DateOffset acts similar to dateutil.relativedelta (relativedelta documentation)
that shifts a date time by the corresponding calendar duration specified. The
arithmetic operator (+) can be used to perform the shift.
# This particular day contains a day light savings time transition
In [144]: ts = pd.Timestamp("2016-10-30 00:00:00", tz="Europe/Helsinki")
# Respects absolute time
In [145]: ts + pd.Timedelta(days=1)
Out[145]: Timestamp('2016-10-30 23:00:00+0200', tz='Europe/Helsinki')
# Respects calendar time
In [146]: ts + pd.DateOffset(days=1)
Out[146]: Timestamp('2016-10-31 00:00:00+0200', tz='Europe/Helsinki')
In [147]: friday = pd.Timestamp("2018-01-05")
In [148]: friday.day_name()
Out[148]: 'Friday'
# Add 2 business days (Friday --> Tuesday)
In [149]: two_business_days = 2 * pd.offsets.BDay()
In [150]: friday + two_business_days
Out[150]: Timestamp('2018-01-09 00:00:00')
In [151]: (friday + two_business_days).day_name()
Out[151]: 'Tuesday'
Most DateOffsets have associated frequency strings, or offset aliases, that can be passed
into freq keyword arguments. The available date offsets and associated frequency strings can be found below:
Date Offset | Frequency String | Description
DateOffset | None | Generic offset class, defaults to absolute 24 hours
BDay or BusinessDay | 'B' | business day (weekday)
CDay or CustomBusinessDay | 'C' | custom business day
Week | 'W' | one week, optionally anchored on a day of the week
WeekOfMonth | 'WOM' | the x-th day of the y-th week of each month
LastWeekOfMonth | 'LWOM' | the x-th day of the last week of each month
MonthEnd | 'M' | calendar month end
MonthBegin | 'MS' | calendar month begin
BMonthEnd or BusinessMonthEnd | 'BM' | business month end
BMonthBegin or BusinessMonthBegin | 'BMS' | business month begin
CBMonthEnd or CustomBusinessMonthEnd | 'CBM' | custom business month end
CBMonthBegin or CustomBusinessMonthBegin | 'CBMS' | custom business month begin
SemiMonthEnd | 'SM' | 15th (or other day_of_month) and calendar month end
SemiMonthBegin | 'SMS' | 15th (or other day_of_month) and calendar month begin
QuarterEnd | 'Q' | calendar quarter end
QuarterBegin | 'QS' | calendar quarter begin
BQuarterEnd | 'BQ' | business quarter end
BQuarterBegin | 'BQS' | business quarter begin
FY5253Quarter | 'REQ' | retail (aka 52-53 week) quarter
YearEnd | 'A' | calendar year end
YearBegin | 'AS' or 'BYS' | calendar year begin
BYearEnd | 'BA' | business year end
BYearBegin | 'BAS' | business year begin
FY5253 | 'RE' | retail (aka 52-53 week) year
Easter | None | Easter holiday
BusinessHour | 'BH' | business hour
CustomBusinessHour | 'CBH' | custom business hour
Day | 'D' | one absolute day
Hour | 'H' | one hour
Minute | 'T' or 'min' | one minute
Second | 'S' | one second
Milli | 'L' or 'ms' | one millisecond
Micro | 'U' or 'us' | one microsecond
Nano | 'N' | one nanosecond
DateOffsets additionally have rollforward() and rollback()
methods for moving a date forward or backward respectively to a valid offset
date relative to the offset. For example, business offsets will roll dates
that land on the weekends (Saturday and Sunday) forward to Monday since
business offsets operate on the weekdays.
In [152]: ts = pd.Timestamp("2018-01-06 00:00:00")
In [153]: ts.day_name()
Out[153]: 'Saturday'
# BusinessHour's valid offset dates are Monday through Friday
In [154]: offset = pd.offsets.BusinessHour(start="09:00")
# Bring the date to the closest offset date (Monday)
In [155]: offset.rollforward(ts)
Out[155]: Timestamp('2018-01-08 09:00:00')
# Date is brought to the closest offset date first and then the hour is added
In [156]: ts + offset
Out[156]: Timestamp('2018-01-08 10:00:00')
These operations preserve time (hour, minute, etc) information by default.
To reset time to midnight, use normalize() before or after applying
the operation (depending on whether you want the time information included
in the operation).
In [157]: ts = pd.Timestamp("2014-01-01 09:00")
In [158]: day = pd.offsets.Day()
In [159]: day + ts
Out[159]: Timestamp('2014-01-02 09:00:00')
In [160]: (day + ts).normalize()
Out[160]: Timestamp('2014-01-02 00:00:00')
In [161]: ts = pd.Timestamp("2014-01-01 22:00")
In [162]: hour = pd.offsets.Hour()
In [163]: hour + ts
Out[163]: Timestamp('2014-01-01 23:00:00')
In [164]: (hour + ts).normalize()
Out[164]: Timestamp('2014-01-01 00:00:00')
In [165]: (hour + pd.Timestamp("2014-01-01 23:30")).normalize()
Out[165]: Timestamp('2014-01-02 00:00:00')
Parametric offsets#
Some of the offsets can be “parameterized” when created to result in different
behaviors. For example, the Week offset for generating weekly data accepts a
weekday parameter which results in the generated dates always lying on a
particular day of the week:
In [166]: d = datetime.datetime(2008, 8, 18, 9, 0)
In [167]: d
Out[167]: datetime.datetime(2008, 8, 18, 9, 0)
In [168]: d + pd.offsets.Week()
Out[168]: Timestamp('2008-08-25 09:00:00')
In [169]: d + pd.offsets.Week(weekday=4)
Out[169]: Timestamp('2008-08-22 09:00:00')
In [170]: (d + pd.offsets.Week(weekday=4)).weekday()
Out[170]: 4
In [171]: d - pd.offsets.Week()
Out[171]: Timestamp('2008-08-11 09:00:00')
The normalize option will be effective for addition and subtraction.
In [172]: d + pd.offsets.Week(normalize=True)
Out[172]: Timestamp('2008-08-25 00:00:00')
In [173]: d - pd.offsets.Week(normalize=True)
Out[173]: Timestamp('2008-08-11 00:00:00')
Another example is parameterizing YearEnd with the specific ending month:
In [174]: d + pd.offsets.YearEnd()
Out[174]: Timestamp('2008-12-31 09:00:00')
In [175]: d + pd.offsets.YearEnd(month=6)
Out[175]: Timestamp('2009-06-30 09:00:00')
Using offsets with Series / DatetimeIndex#
Offsets can be used with either a Series or DatetimeIndex to
apply the offset to each element.
In [176]: rng = pd.date_range("2012-01-01", "2012-01-03")
In [177]: s = pd.Series(rng)
In [178]: rng
Out[178]: DatetimeIndex(['2012-01-01', '2012-01-02', '2012-01-03'], dtype='datetime64[ns]', freq='D')
In [179]: rng + pd.DateOffset(months=2)
Out[179]: DatetimeIndex(['2012-03-01', '2012-03-02', '2012-03-03'], dtype='datetime64[ns]', freq=None)
In [180]: s + pd.DateOffset(months=2)
Out[180]:
0 2012-03-01
1 2012-03-02
2 2012-03-03
dtype: datetime64[ns]
In [181]: s - pd.DateOffset(months=2)
Out[181]:
0 2011-11-01
1 2011-11-02
2 2011-11-03
dtype: datetime64[ns]
If the offset class maps directly to a Timedelta (Day, Hour,
Minute, Second, Micro, Milli, Nano) it can be
used exactly like a Timedelta - see the
Timedelta section for more examples.
In [182]: s - pd.offsets.Day(2)
Out[182]:
0 2011-12-30
1 2011-12-31
2 2012-01-01
dtype: datetime64[ns]
In [183]: td = s - pd.Series(pd.date_range("2011-12-29", "2011-12-31"))
In [184]: td
Out[184]:
0 3 days
1 3 days
2 3 days
dtype: timedelta64[ns]
In [185]: td + pd.offsets.Minute(15)
Out[185]:
0 3 days 00:15:00
1 3 days 00:15:00
2 3 days 00:15:00
dtype: timedelta64[ns]
Note that some offsets (such as BQuarterEnd) do not have a
vectorized implementation. They can still be used but may be significantly
slower to compute and will show a PerformanceWarning.
In [186]: rng + pd.offsets.BQuarterEnd()
Out[186]: DatetimeIndex(['2012-03-30', '2012-03-30', '2012-03-30'], dtype='datetime64[ns]', freq=None)
Custom business days#
The CDay or CustomBusinessDay class provides a parametric
BusinessDay class which can be used to create customized business day
calendars which account for local holidays and local weekend conventions.
As an interesting example, let’s look at Egypt where a Friday-Saturday weekend is observed.
In [187]: weekmask_egypt = "Sun Mon Tue Wed Thu"
# They also observe International Workers' Day so let's
# add that for a couple of years
In [188]: holidays = [
.....: "2012-05-01",
.....: datetime.datetime(2013, 5, 1),
.....: np.datetime64("2014-05-01"),
.....: ]
.....:
In [189]: bday_egypt = pd.offsets.CustomBusinessDay(
.....: holidays=holidays,
.....: weekmask=weekmask_egypt,
.....: )
.....:
In [190]: dt = datetime.datetime(2013, 4, 30)
In [191]: dt + 2 * bday_egypt
Out[191]: Timestamp('2013-05-05 00:00:00')
Let’s map to the weekday names:
In [192]: dts = pd.date_range(dt, periods=5, freq=bday_egypt)
In [193]: pd.Series(dts.weekday, dts).map(pd.Series("Mon Tue Wed Thu Fri Sat Sun".split()))
Out[193]:
2013-04-30 Tue
2013-05-02 Thu
2013-05-05 Sun
2013-05-06 Mon
2013-05-07 Tue
Freq: C, dtype: object
Holiday calendars can be used to provide the list of holidays. See the
holiday calendar section for more information.
In [194]: from pandas.tseries.holiday import USFederalHolidayCalendar
In [195]: bday_us = pd.offsets.CustomBusinessDay(calendar=USFederalHolidayCalendar())
# Friday before MLK Day
In [196]: dt = datetime.datetime(2014, 1, 17)
# Tuesday after MLK Day (Monday is skipped because it's a holiday)
In [197]: dt + bday_us
Out[197]: Timestamp('2014-01-21 00:00:00')
Monthly offsets that respect a certain holiday calendar can be defined
in the usual way.
In [198]: bmth_us = pd.offsets.CustomBusinessMonthBegin(calendar=USFederalHolidayCalendar())
# Skip new years
In [199]: dt = datetime.datetime(2013, 12, 17)
In [200]: dt + bmth_us
Out[200]: Timestamp('2014-01-02 00:00:00')
# Define date index with custom offset
In [201]: pd.date_range(start="20100101", end="20120101", freq=bmth_us)
Out[201]:
DatetimeIndex(['2010-01-04', '2010-02-01', '2010-03-01', '2010-04-01',
'2010-05-03', '2010-06-01', '2010-07-01', '2010-08-02',
'2010-09-01', '2010-10-01', '2010-11-01', '2010-12-01',
'2011-01-03', '2011-02-01', '2011-03-01', '2011-04-01',
'2011-05-02', '2011-06-01', '2011-07-01', '2011-08-01',
'2011-09-01', '2011-10-03', '2011-11-01', '2011-12-01'],
dtype='datetime64[ns]', freq='CBMS')
Note
The frequency string ‘C’ indicates that a CustomBusinessDay
DateOffset is used. Since CustomBusinessDay is a parameterised type,
two instances of CustomBusinessDay may behave differently, and this is
not detectable from the ‘C’ frequency string alone. The user therefore needs to
ensure that the ‘C’ frequency string is used consistently within their
application.
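For instance, both of the following instances report the ‘C’ alias even though they define different calendars; a minimal check (the restricted weekmask is an arbitrary choice for illustration):
import pandas as pd

cbd_default = pd.offsets.CustomBusinessDay()
cbd_short_week = pd.offsets.CustomBusinessDay(weekmask="Mon Tue Wed Thu")

# Both report the same frequency string despite behaving differently
cbd_default.freqstr     # 'C'
cbd_short_week.freqstr  # 'C'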
Business hour#
The BusinessHour class provides a business hour representation on BusinessDay,
allowing you to use specific start and end times.
By default, BusinessHour uses 9:00 - 17:00 as business hours.
Adding BusinessHour increments a Timestamp by hourly frequency.
If the target Timestamp is outside business hours, it is first moved to the next
business hour and then incremented. If the result exceeds the business hours end,
the remaining hours are added to the next business day.
In [202]: bh = pd.offsets.BusinessHour()
In [203]: bh
Out[203]: <BusinessHour: BH=09:00-17:00>
# 2014-08-01 is Friday
In [204]: pd.Timestamp("2014-08-01 10:00").weekday()
Out[204]: 4
In [205]: pd.Timestamp("2014-08-01 10:00") + bh
Out[205]: Timestamp('2014-08-01 11:00:00')
# Below example is the same as: pd.Timestamp('2014-08-01 09:00') + bh
In [206]: pd.Timestamp("2014-08-01 08:00") + bh
Out[206]: Timestamp('2014-08-01 10:00:00')
# If the result is on the end time, move to the next business day
In [207]: pd.Timestamp("2014-08-01 16:00") + bh
Out[207]: Timestamp('2014-08-04 09:00:00')
# The remaining hours are added to the next day
In [208]: pd.Timestamp("2014-08-01 16:30") + bh
Out[208]: Timestamp('2014-08-04 09:30:00')
# Adding 2 business hours
In [209]: pd.Timestamp("2014-08-01 10:00") + pd.offsets.BusinessHour(2)
Out[209]: Timestamp('2014-08-01 12:00:00')
# Subtracting 3 business hours
In [210]: pd.Timestamp("2014-08-01 10:00") + pd.offsets.BusinessHour(-3)
Out[210]: Timestamp('2014-07-31 15:00:00')
You can also specify the start and end times by keyword. The argument must
be a str with an hour:minute representation or a datetime.time
instance. Specifying seconds, microseconds or nanoseconds as part of the
business hours results in a ValueError.
In [211]: bh = pd.offsets.BusinessHour(start="11:00", end=datetime.time(20, 0))
In [212]: bh
Out[212]: <BusinessHour: BH=11:00-20:00>
In [213]: pd.Timestamp("2014-08-01 13:00") + bh
Out[213]: Timestamp('2014-08-01 14:00:00')
In [214]: pd.Timestamp("2014-08-01 09:00") + bh
Out[214]: Timestamp('2014-08-01 12:00:00')
In [215]: pd.Timestamp("2014-08-01 18:00") + bh
Out[215]: Timestamp('2014-08-01 19:00:00')
Passing a start time later than the end time represents business hours that span midnight.
In this case, the business hours extend past midnight and overlap into the next day.
Valid business hours are distinguished by whether they started from a valid BusinessDay.
In [216]: bh = pd.offsets.BusinessHour(start="17:00", end="09:00")
In [217]: bh
Out[217]: <BusinessHour: BH=17:00-09:00>
In [218]: pd.Timestamp("2014-08-01 17:00") + bh
Out[218]: Timestamp('2014-08-01 18:00:00')
In [219]: pd.Timestamp("2014-08-01 23:00") + bh
Out[219]: Timestamp('2014-08-02 00:00:00')
# Although 2014-08-02 is Saturday,
# it is valid because it starts from 08-01 (Friday).
In [220]: pd.Timestamp("2014-08-02 04:00") + bh
Out[220]: Timestamp('2014-08-02 05:00:00')
# Although 2014-08-04 is Monday,
# it is out of business hours because it starts from 08-03 (Sunday).
In [221]: pd.Timestamp("2014-08-04 04:00") + bh
Out[221]: Timestamp('2014-08-04 18:00:00')
Applying BusinessHour.rollforward and rollback to a timestamp outside business hours moves it to
the next business hour start or the previous day’s end, respectively. Unlike other offsets,
BusinessHour.rollforward may, by definition, output a different result from addition.
This is because one day’s business hour end is equal to the next day’s business hour start. For example,
under the default business hours (9:00 - 17:00), there is no gap (0 minutes) between 2014-08-01 17:00 and
2014-08-04 09:00.
# This adjusts a Timestamp to business hour edge
In [222]: pd.offsets.BusinessHour().rollback(pd.Timestamp("2014-08-02 15:00"))
Out[222]: Timestamp('2014-08-01 17:00:00')
In [223]: pd.offsets.BusinessHour().rollforward(pd.Timestamp("2014-08-02 15:00"))
Out[223]: Timestamp('2014-08-04 09:00:00')
# It is the same as BusinessHour() + pd.Timestamp('2014-08-01 17:00').
# And it is the same as BusinessHour() + pd.Timestamp('2014-08-04 09:00')
In [224]: pd.offsets.BusinessHour() + pd.Timestamp("2014-08-02 15:00")
Out[224]: Timestamp('2014-08-04 10:00:00')
# BusinessDay results (for reference)
In [225]: pd.offsets.BusinessHour().rollforward(pd.Timestamp("2014-08-02"))
Out[225]: Timestamp('2014-08-04 09:00:00')
# It is the same as BusinessDay() + pd.Timestamp('2014-08-01')
# The result is the same as rollforward because BusinessDay periods never overlap.
In [226]: pd.offsets.BusinessHour() + pd.Timestamp("2014-08-02")
Out[226]: Timestamp('2014-08-04 10:00:00')
BusinessHour regards Saturday and Sunday as holidays. To use arbitrary
holidays, you can use the CustomBusinessHour offset, as explained in the
following subsection.
Custom business hour#
The CustomBusinessHour is a mixture of BusinessHour and CustomBusinessDay which
allows you to specify arbitrary holidays. CustomBusinessHour works the same
as BusinessHour except that it skips the specified custom holidays.
In [227]: from pandas.tseries.holiday import USFederalHolidayCalendar
In [228]: bhour_us = pd.offsets.CustomBusinessHour(calendar=USFederalHolidayCalendar())
# Friday before MLK Day
In [229]: dt = datetime.datetime(2014, 1, 17, 15)
In [230]: dt + bhour_us
Out[230]: Timestamp('2014-01-17 16:00:00')
# Tuesday after MLK Day (Monday is skipped because it's a holiday)
In [231]: dt + bhour_us * 2
Out[231]: Timestamp('2014-01-21 09:00:00')
You can use keyword arguments supported by both BusinessHour and CustomBusinessDay.
In [232]: bhour_mon = pd.offsets.CustomBusinessHour(start="10:00", weekmask="Tue Wed Thu Fri")
# Monday is skipped because it's a holiday, business hour starts from 10:00
In [233]: dt + bhour_mon * 2
Out[233]: Timestamp('2014-01-21 10:00:00')
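As a rough sketch, the calendar, weekmask and business-hour keywords can all be combined in a single offset (the 10:00-18:00 hours and the Tue-Fri weekmask below are arbitrary choices for illustration):
import pandas as pd
from pandas.tseries.holiday import USFederalHolidayCalendar

bhour_combined = pd.offsets.CustomBusinessHour(
    calendar=USFederalHolidayCalendar(),  # skip US federal holidays
    weekmask="Tue Wed Thu Fri",           # and also skip Mondays
    start="10:00",
    end="18:00",
)

# Friday 2014-01-17 16:30 is inside the 10:00-18:00 window,
# so one business hour later is expected to be 2014-01-17 17:30
pd.Timestamp("2014-01-17 16:30") + bhour_combined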
Offset aliases#
A number of string aliases are given to useful common time series
frequencies. We will refer to these aliases as offset aliases.
Alias     Description
B         business day frequency
C         custom business day frequency
D         calendar day frequency
W         weekly frequency
M         month end frequency
SM        semi-month end frequency (15th and end of month)
BM        business month end frequency
CBM       custom business month end frequency
MS        month start frequency
SMS       semi-month start frequency (1st and 15th)
BMS       business month start frequency
CBMS      custom business month start frequency
Q         quarter end frequency
BQ        business quarter end frequency
QS        quarter start frequency
BQS       business quarter start frequency
A, Y      year end frequency
BA, BY    business year end frequency
AS, YS    year start frequency
BAS, BYS  business year start frequency
BH        business hour frequency
H         hourly frequency
T, min    minutely frequency
S         secondly frequency
L, ms     milliseconds
U, us     microseconds
N         nanoseconds
Note
When using the offset aliases above, it should be noted that functions
such as date_range() and bdate_range() will only return
timestamps that are in the interval defined by start_date and
end_date. If the start_date does not correspond to the frequency,
the returned timestamps will start at the next valid timestamp; likewise,
if the end_date does not correspond to the frequency, the returned
timestamps will stop at the previous valid timestamp.
For example, for the offset MS, if the start_date is not the first
of the month, the returned timestamps will start with the first day of the
next month. If end_date is not the first day of a month, the last
returned timestamp will be the first day of the corresponding month.
In [234]: dates_lst_1 = pd.date_range("2020-01-06", "2020-04-03", freq="MS")
In [235]: dates_lst_1
Out[235]: DatetimeIndex(['2020-02-01', '2020-03-01', '2020-04-01'], dtype='datetime64[ns]', freq='MS')
In [236]: dates_lst_2 = pd.date_range("2020-01-01", "2020-04-01", freq="MS")
In [237]: dates_lst_2
Out[237]: DatetimeIndex(['2020-01-01', '2020-02-01', '2020-03-01', '2020-04-01'], dtype='datetime64[ns]', freq='MS')
We can see in the above example that date_range() and
bdate_range() will only return the valid timestamps between the
start_date and end_date. If these are not valid timestamps for the
given frequency, it will roll forward to the next valid value for start_date
(and, respectively, back to the previous valid value for end_date).
Combining aliases#
As we have seen previously, the alias and the offset instance are fungible in
most functions:
In [238]: pd.date_range(start, periods=5, freq="B")
Out[238]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07'],
dtype='datetime64[ns]', freq='B')
In [239]: pd.date_range(start, periods=5, freq=pd.offsets.BDay())
Out[239]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07'],
dtype='datetime64[ns]', freq='B')
You can combine day and intraday offsets:
In [240]: pd.date_range(start, periods=10, freq="2h20min")
Out[240]:
DatetimeIndex(['2011-01-01 00:00:00', '2011-01-01 02:20:00',
'2011-01-01 04:40:00', '2011-01-01 07:00:00',
'2011-01-01 09:20:00', '2011-01-01 11:40:00',
'2011-01-01 14:00:00', '2011-01-01 16:20:00',
'2011-01-01 18:40:00', '2011-01-01 21:00:00'],
dtype='datetime64[ns]', freq='140T')
In [241]: pd.date_range(start, periods=10, freq="1D10U")
Out[241]:
DatetimeIndex([ '2011-01-01 00:00:00', '2011-01-02 00:00:00.000010',
'2011-01-03 00:00:00.000020', '2011-01-04 00:00:00.000030',
'2011-01-05 00:00:00.000040', '2011-01-06 00:00:00.000050',
'2011-01-07 00:00:00.000060', '2011-01-08 00:00:00.000070',
'2011-01-09 00:00:00.000080', '2011-01-10 00:00:00.000090'],
dtype='datetime64[ns]', freq='86400000010U')
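The mapping from an alias string (including combined aliases like those above) to an offset object is also available programmatically via to_offset, shown here as a quick check:
from pandas.tseries.frequencies import to_offset

to_offset("B")        # BusinessDay offset
to_offset("2h20min")  # a 140-minute offset, matching freq='140T' above
to_offset("1D10U")    # one day plus 10 microseconds, matching the freq above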
Anchored offsets#
For some frequencies you can specify an anchoring suffix:
Alias        Description
W-SUN        weekly frequency (Sundays). Same as ‘W’
W-MON        weekly frequency (Mondays)
W-TUE        weekly frequency (Tuesdays)
W-WED        weekly frequency (Wednesdays)
W-THU        weekly frequency (Thursdays)
W-FRI        weekly frequency (Fridays)
W-SAT        weekly frequency (Saturdays)
(B)Q(S)-DEC  quarterly frequency, year ends in December. Same as ‘Q’
(B)Q(S)-JAN  quarterly frequency, year ends in January
(B)Q(S)-FEB  quarterly frequency, year ends in February
(B)Q(S)-MAR  quarterly frequency, year ends in March
(B)Q(S)-APR  quarterly frequency, year ends in April
(B)Q(S)-MAY  quarterly frequency, year ends in May
(B)Q(S)-JUN  quarterly frequency, year ends in June
(B)Q(S)-JUL  quarterly frequency, year ends in July
(B)Q(S)-AUG  quarterly frequency, year ends in August
(B)Q(S)-SEP  quarterly frequency, year ends in September
(B)Q(S)-OCT  quarterly frequency, year ends in October
(B)Q(S)-NOV  quarterly frequency, year ends in November
(B)A(S)-DEC  annual frequency, anchored end of December. Same as ‘A’
(B)A(S)-JAN  annual frequency, anchored end of January
(B)A(S)-FEB  annual frequency, anchored end of February
(B)A(S)-MAR  annual frequency, anchored end of March
(B)A(S)-APR  annual frequency, anchored end of April
(B)A(S)-MAY  annual frequency, anchored end of May
(B)A(S)-JUN  annual frequency, anchored end of June
(B)A(S)-JUL  annual frequency, anchored end of July
(B)A(S)-AUG  annual frequency, anchored end of August
(B)A(S)-SEP  annual frequency, anchored end of September
(B)A(S)-OCT  annual frequency, anchored end of October
(B)A(S)-NOV  annual frequency, anchored end of November
These can be used as arguments to date_range, bdate_range, constructors
for DatetimeIndex, as well as various other timeseries-related functions
in pandas.
Anchored offset semantics#
For those offsets that are anchored to the start or end of a specific
frequency (MonthEnd, MonthBegin, WeekEnd, etc.), the following
rules apply to rolling forwards and backwards.
When n is not 0, if the given date is not on an anchor point, it is snapped to the next (previous)
anchor point, and then moved |n|-1 additional steps forwards or backwards.
In [242]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=1)
Out[242]: Timestamp('2014-02-01 00:00:00')
In [243]: pd.Timestamp("2014-01-02") + pd.offsets.MonthEnd(n=1)
Out[243]: Timestamp('2014-01-31 00:00:00')
In [244]: pd.Timestamp("2014-01-02") - pd.offsets.MonthBegin(n=1)
Out[244]: Timestamp('2014-01-01 00:00:00')
In [245]: pd.Timestamp("2014-01-02") - pd.offsets.MonthEnd(n=1)
Out[245]: Timestamp('2013-12-31 00:00:00')
In [246]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=4)
Out[246]: Timestamp('2014-05-01 00:00:00')
In [247]: pd.Timestamp("2014-01-02") - pd.offsets.MonthBegin(n=4)
Out[247]: Timestamp('2013-10-01 00:00:00')
If the given date is on an anchor point, it is moved |n| points forwards
or backwards.
In [248]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=1)
Out[248]: Timestamp('2014-02-01 00:00:00')
In [249]: pd.Timestamp("2014-01-31") + pd.offsets.MonthEnd(n=1)
Out[249]: Timestamp('2014-02-28 00:00:00')
In [250]: pd.Timestamp("2014-01-01") - pd.offsets.MonthBegin(n=1)
Out[250]: Timestamp('2013-12-01 00:00:00')
In [251]: pd.Timestamp("2014-01-31") - pd.offsets.MonthEnd(n=1)
Out[251]: Timestamp('2013-12-31 00:00:00')
In [252]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=4)
Out[252]: Timestamp('2014-05-01 00:00:00')
In [253]: pd.Timestamp("2014-01-31") - pd.offsets.MonthBegin(n=4)
Out[253]: Timestamp('2013-10-01 00:00:00')
For the case when n=0, the date is not moved if it is on an anchor point; otherwise
it is rolled forward to the next anchor point.
In [254]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=0)
Out[254]: Timestamp('2014-02-01 00:00:00')
In [255]: pd.Timestamp("2014-01-02") + pd.offsets.MonthEnd(n=0)
Out[255]: Timestamp('2014-01-31 00:00:00')
In [256]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=0)
Out[256]: Timestamp('2014-01-01 00:00:00')
In [257]: pd.Timestamp("2014-01-31") + pd.offsets.MonthEnd(n=0)
Out[257]: Timestamp('2014-01-31 00:00:00')
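The n=0 behaviour shown above matches rollforward; rollback is the mirror image, moving backwards only when the date is not already on an anchor point. A small check:
import pandas as pd

ts = pd.Timestamp("2014-01-02")

pd.offsets.MonthBegin().rollforward(ts)  # 2014-02-01, same as ts + MonthBegin(n=0)
pd.offsets.MonthBegin().rollback(ts)     # 2014-01-01, the previous anchor point
pd.offsets.MonthEnd().rollforward(ts)    # 2014-01-31, same as ts + MonthEnd(n=0)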
Holidays / holiday calendars#
Holidays and calendars provide a simple way to define holiday rules to be used
with CustomBusinessDay or in other analysis that requires a predefined
set of holidays. The AbstractHolidayCalendar class provides all the necessary
methods to return a list of holidays and only rules need to be defined
in a specific holiday calendar class. Furthermore, the start_date and end_date
class attributes determine over what date range holidays are generated. These
should be overwritten on the AbstractHolidayCalendar class to have the range
apply to all calendar subclasses. USFederalHolidayCalendar is the
only calendar that exists and primarily serves as an example for developing
other calendars.
For holidays that occur on fixed dates (e.g., US Independence Day on July 4th), an
observance rule determines when that holiday is observed if it falls on a weekend
or some other non-observed day. Defined observance rules are:
Rule                    Description
nearest_workday         move Saturday to Friday and Sunday to Monday
sunday_to_monday        move Sunday to following Monday
next_monday_or_tuesday  move Saturday to Monday and Sunday/Monday to Tuesday
previous_friday         move Saturday and Sunday to previous Friday
next_monday             move Saturday and Sunday to following Monday
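The observance functions can also be called directly on a date to see how a particular holiday would shift; for example (2021-07-04 falls on a Sunday):
import datetime
from pandas.tseries.holiday import nearest_workday, previous_friday

nearest_workday(datetime.datetime(2021, 7, 4))   # moved forward to Monday 2021-07-05
previous_friday(datetime.datetime(2021, 7, 4))   # moved back to Friday 2021-07-02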
An example of how holidays and holiday calendars are defined:
In [258]: from pandas.tseries.holiday import (
.....: Holiday,
.....: USMemorialDay,
.....: AbstractHolidayCalendar,
.....: nearest_workday,
.....: MO,
.....: )
.....:
In [259]: class ExampleCalendar(AbstractHolidayCalendar):
.....: rules = [
.....: USMemorialDay,
.....: Holiday("July 4th", month=7, day=4, observance=nearest_workday),
.....: Holiday(
.....: "Columbus Day",
.....: month=10,
.....: day=1,
.....: offset=pd.DateOffset(weekday=MO(2)),
.....: ),
.....: ]
.....:
In [260]: cal = ExampleCalendar()
In [261]: cal.holidays(datetime.datetime(2012, 1, 1), datetime.datetime(2012, 12, 31))
Out[261]: DatetimeIndex(['2012-05-28', '2012-07-04', '2012-10-08'], dtype='datetime64[ns]', freq=None)
Hint
weekday=MO(2) is the same as 2 * Week(weekday=2)
Using this calendar, creating an index or doing offset arithmetic skips weekends
and holidays (i.e., Memorial Day/July 4th). For example, the below defines
a custom business day offset using the ExampleCalendar. Like any other offset,
it can be used to create a DatetimeIndex or added to datetime
or Timestamp objects.
In [262]: pd.date_range(
.....: start="7/1/2012", end="7/10/2012", freq=pd.offsets.CDay(calendar=cal)
.....: ).to_pydatetime()
.....:
Out[262]:
array([datetime.datetime(2012, 7, 2, 0, 0),
datetime.datetime(2012, 7, 3, 0, 0),
datetime.datetime(2012, 7, 5, 0, 0),
datetime.datetime(2012, 7, 6, 0, 0),
datetime.datetime(2012, 7, 9, 0, 0),
datetime.datetime(2012, 7, 10, 0, 0)], dtype=object)
In [263]: offset = pd.offsets.CustomBusinessDay(calendar=cal)
In [264]: datetime.datetime(2012, 5, 25) + offset
Out[264]: Timestamp('2012-05-29 00:00:00')
In [265]: datetime.datetime(2012, 7, 3) + offset
Out[265]: Timestamp('2012-07-05 00:00:00')
In [266]: datetime.datetime(2012, 7, 3) + 2 * offset
Out[266]: Timestamp('2012-07-06 00:00:00')
In [267]: datetime.datetime(2012, 7, 6) + offset
Out[267]: Timestamp('2012-07-09 00:00:00')
Ranges are defined by the start_date and end_date class attributes
of AbstractHolidayCalendar. The defaults are shown below.
In [268]: AbstractHolidayCalendar.start_date
Out[268]: Timestamp('1970-01-01 00:00:00')
In [269]: AbstractHolidayCalendar.end_date
Out[269]: Timestamp('2200-12-31 00:00:00')
These dates can be overwritten by setting the attributes as
datetime/Timestamp/string.
In [270]: AbstractHolidayCalendar.start_date = datetime.datetime(2012, 1, 1)
In [271]: AbstractHolidayCalendar.end_date = datetime.datetime(2012, 12, 31)
In [272]: cal.holidays()
Out[272]: DatetimeIndex(['2012-05-28', '2012-07-04', '2012-10-08'], dtype='datetime64[ns]', freq=None)
Every calendar class is accessible by name using the get_calendar function
which returns a holiday class instance. Any imported calendar class will
automatically be available by this function. Also, HolidayCalendarFactory
provides an easy interface to create calendars that are combinations of calendars
or calendars with additional rules.
In [273]: from pandas.tseries.holiday import get_calendar, HolidayCalendarFactory, USLaborDay
In [274]: cal = get_calendar("ExampleCalendar")
In [275]: cal.rules
Out[275]:
[Holiday: Memorial Day (month=5, day=31, offset=<DateOffset: weekday=MO(-1)>),
Holiday: July 4th (month=7, day=4, observance=<function nearest_workday at 0x7f1e67138ee0>),
Holiday: Columbus Day (month=10, day=1, offset=<DateOffset: weekday=MO(+2)>)]
In [276]: new_cal = HolidayCalendarFactory("NewExampleCalendar", cal, USLaborDay)
In [277]: new_cal.rules
Out[277]:
[Holiday: Labor Day (month=9, day=1, offset=<DateOffset: weekday=MO(+1)>),
Holiday: Memorial Day (month=5, day=31, offset=<DateOffset: weekday=MO(-1)>),
Holiday: July 4th (month=7, day=4, observance=<function nearest_workday at 0x7f1e67138ee0>),
Holiday: Columbus Day (month=10, day=1, offset=<DateOffset: weekday=MO(+2)>)]
Time Series-related instance methods#
Shifting / lagging#
One may want to shift or lag the values in a time series back and forward in
time. The method for this is shift(), which is available on all of
the pandas objects.
In [278]: ts = pd.Series(range(len(rng)), index=rng)
In [279]: ts = ts[:5]
In [280]: ts.shift(1)
Out[280]:
2012-01-01 NaN
2012-01-02 0.0
2012-01-03 1.0
Freq: D, dtype: float64
The shift method accepts a freq argument which can be a
DateOffset class, another timedelta-like object, or an
offset alias.
When freq is specified, shift method changes all the dates in the index
rather than changing the alignment of the data and the index:
In [281]: ts.shift(5, freq="D")
Out[281]:
2012-01-06 0
2012-01-07 1
2012-01-08 2
Freq: D, dtype: int64
In [282]: ts.shift(5, freq=pd.offsets.BDay())
Out[282]:
2012-01-06 0
2012-01-09 1
2012-01-10 2
dtype: int64
In [283]: ts.shift(5, freq="BM")
Out[283]:
2012-05-31 0
2012-05-31 1
2012-05-31 2
dtype: int64
Note that when freq is specified, the leading entry is no longer NaN
because the data is not being realigned.
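When freq is not given and the data is realigned, the introduced gap can be filled with something other than NaN via the fill_value argument; a small sketch using the same ts as above (this also preserves the integer dtype):
ts.shift(1, fill_value=0)
# 2012-01-01    0
# 2012-01-02    0
# 2012-01-03    1
# Freq: D, dtype: int64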
Frequency conversion#
The primary function for changing frequencies is the asfreq()
method. For a DatetimeIndex, this is basically just a thin, but convenient
wrapper around reindex() which generates a date_range and
calls reindex.
In [284]: dr = pd.date_range("1/1/2010", periods=3, freq=3 * pd.offsets.BDay())
In [285]: ts = pd.Series(np.random.randn(3), index=dr)
In [286]: ts
Out[286]:
2010-01-01 1.494522
2010-01-06 -0.778425
2010-01-11 -0.253355
Freq: 3B, dtype: float64
In [287]: ts.asfreq(pd.offsets.BDay())
Out[287]:
2010-01-01 1.494522
2010-01-04 NaN
2010-01-05 NaN
2010-01-06 -0.778425
2010-01-07 NaN
2010-01-08 NaN
2010-01-11 -0.253355
Freq: B, dtype: float64
asfreq provides a further convenience so you can specify an interpolation
method for any gaps that may appear after the frequency conversion.
In [288]: ts.asfreq(pd.offsets.BDay(), method="pad")
Out[288]:
2010-01-01 1.494522
2010-01-04 1.494522
2010-01-05 1.494522
2010-01-06 -0.778425
2010-01-07 -0.778425
2010-01-08 -0.778425
2010-01-11 -0.253355
Freq: B, dtype: float64
Filling forward / backward#
Related to asfreq and reindex is fillna(), which is
documented in the missing data section.
Converting to Python datetimes#
DatetimeIndex can be converted to an array of Python native
datetime.datetime objects using the to_pydatetime method.
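A minimal example of the conversion:
import pandas as pd

idx = pd.date_range("2012-01-01", periods=2, freq="D")
idx.to_pydatetime()
# array([datetime.datetime(2012, 1, 1, 0, 0), datetime.datetime(2012, 1, 2, 0, 0)],
#       dtype=object)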
Resampling#
pandas has a simple, powerful, and efficient functionality for performing
resampling operations during frequency conversion (e.g., converting secondly
data into 5-minutely data). This is extremely common in, but not limited to,
financial applications.
resample() is a time-based groupby, followed by a reduction method
on each of its groups. See some cookbook examples for
some advanced strategies.
The resample() method can be used directly from DataFrameGroupBy objects,
see the groupby docs.
Basics#
In [289]: rng = pd.date_range("1/1/2012", periods=100, freq="S")
In [290]: ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
In [291]: ts.resample("5Min").sum()
Out[291]:
2012-01-01 25103
Freq: 5T, dtype: int64
The resample function is very flexible and allows you to specify many
different parameters to control the frequency conversion and resampling
operation.
Any function available via dispatching is available as
a method of the returned object, including sum, mean, std, sem,
max, min, median, first, last, ohlc:
In [292]: ts.resample("5Min").mean()
Out[292]:
2012-01-01 251.03
Freq: 5T, dtype: float64
In [293]: ts.resample("5Min").ohlc()
Out[293]:
open high low close
2012-01-01 308 460 9 205
In [294]: ts.resample("5Min").max()
Out[294]:
2012-01-01 460
Freq: 5T, dtype: int64
For downsampling, closed can be set to ‘left’ or ‘right’ to specify which
end of the interval is closed:
In [295]: ts.resample("5Min", closed="right").mean()
Out[295]:
2011-12-31 23:55:00 308.000000
2012-01-01 00:00:00 250.454545
Freq: 5T, dtype: float64
In [296]: ts.resample("5Min", closed="left").mean()
Out[296]:
2012-01-01 251.03
Freq: 5T, dtype: float64
Parameters like label are used to manipulate the resulting labels.
label specifies whether the result is labeled with the beginning or
the end of the interval.
In [297]: ts.resample("5Min").mean() # by default label='left'
Out[297]:
2012-01-01 251.03
Freq: 5T, dtype: float64
In [298]: ts.resample("5Min", label="left").mean()
Out[298]:
2012-01-01 251.03
Freq: 5T, dtype: float64
Warning
The default values for label and closed are ‘left’ for all
frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’, ‘BA’, ‘BQ’, and ‘W’,
which all have a default of ‘right’.
This might unintentionally lead to looking ahead, where the value for a later
time is pulled back to a previous time, as in the following example with
the BusinessDay frequency:
In [299]: s = pd.date_range("2000-01-01", "2000-01-05").to_series()
In [300]: s.iloc[2] = pd.NaT
In [301]: s.dt.day_name()
Out[301]:
2000-01-01 Saturday
2000-01-02 Sunday
2000-01-03 NaN
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: D, dtype: object
# default: label='left', closed='left'
In [302]: s.resample("B").last().dt.day_name()
Out[302]:
1999-12-31 Sunday
2000-01-03 NaN
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: B, dtype: object
Notice how the value for Sunday got pulled back to the previous Friday.
To get the behavior where the value for Sunday is pushed to Monday, use
instead
In [303]: s.resample("B", label="right", closed="right").last().dt.day_name()
Out[303]:
2000-01-03 Sunday
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: B, dtype: object
The axis parameter can be set to 0 or 1 and allows you to resample the
specified axis for a DataFrame.
kind can be set to ‘timestamp’ or ‘period’ to convert the resulting index
to/from timestamp and time span representations. By default resample
retains the input representation.
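For example, passing kind='period' labels the bins with periods instead of timestamps; a quick sketch reusing the secondly series from above (note that kind may be deprecated in newer pandas versions in favour of converting the resulting index explicitly):
# The result is indexed by a PeriodIndex at 5-minute frequency
# instead of a DatetimeIndex
ts.resample("5Min", kind="period").mean()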
convention can be set to ‘start’ or ‘end’ when resampling period data
(detail below). It specifies how low frequency periods are converted to higher
frequency periods.
Upsampling#
For upsampling, you can specify a way to upsample and the limit parameter to interpolate over the gaps that are created:
# from secondly to every 250 milliseconds
In [304]: ts[:2].resample("250L").asfreq()
Out[304]:
2012-01-01 00:00:00.000 308.0
2012-01-01 00:00:00.250 NaN
2012-01-01 00:00:00.500 NaN
2012-01-01 00:00:00.750 NaN
2012-01-01 00:00:01.000 204.0
Freq: 250L, dtype: float64
In [305]: ts[:2].resample("250L").ffill()
Out[305]:
2012-01-01 00:00:00.000 308
2012-01-01 00:00:00.250 308
2012-01-01 00:00:00.500 308
2012-01-01 00:00:00.750 308
2012-01-01 00:00:01.000 204
Freq: 250L, dtype: int64
In [306]: ts[:2].resample("250L").ffill(limit=2)
Out[306]:
2012-01-01 00:00:00.000 308.0
2012-01-01 00:00:00.250 308.0
2012-01-01 00:00:00.500 308.0
2012-01-01 00:00:00.750 NaN
2012-01-01 00:00:01.000 204.0
Freq: 250L, dtype: float64
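The gaps can also be filled by interpolation rather than by carrying values forward; with the two values shown above (308 and 204), linear interpolation gives evenly spaced steps:
ts[:2].resample("250L").interpolate()
# 2012-01-01 00:00:00.000    308.0
# 2012-01-01 00:00:00.250    282.0
# 2012-01-01 00:00:00.500    256.0
# 2012-01-01 00:00:00.750    230.0
# 2012-01-01 00:00:01.000    204.0
# Freq: 250L, dtype: float64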
Sparse resampling#
Sparse time series are those where you have far fewer points relative
to the amount of time you are looking to resample. Naively upsampling a sparse
series can potentially generate lots of intermediate values. When you don’t want
to use a method to fill these values (e.g. fill_method is None), the
intermediate values will be filled with NaN.
Since resample is a time-based groupby, the following is a method to efficiently
resample only the groups that are not all NaN.
In [307]: rng = pd.date_range("2014-1-1", periods=100, freq="D") + pd.Timedelta("1s")
In [308]: ts = pd.Series(range(100), index=rng)
If we want to resample to the full range of the series:
In [309]: ts.resample("3T").sum()
Out[309]:
2014-01-01 00:00:00 0
2014-01-01 00:03:00 0
2014-01-01 00:06:00 0
2014-01-01 00:09:00 0
2014-01-01 00:12:00 0
..
2014-04-09 23:48:00 0
2014-04-09 23:51:00 0
2014-04-09 23:54:00 0
2014-04-09 23:57:00 0
2014-04-10 00:00:00 99
Freq: 3T, Length: 47521, dtype: int64
We can instead only resample those groups where we have points as follows:
In [310]: from functools import partial
In [311]: from pandas.tseries.frequencies import to_offset
In [312]: def round(t, freq):
.....: freq = to_offset(freq)
.....: return pd.Timestamp((t.value // freq.delta.value) * freq.delta.value)
.....:
In [313]: ts.groupby(partial(round, freq="3T")).sum()
Out[313]:
2014-01-01 0
2014-01-02 1
2014-01-03 2
2014-01-04 3
2014-01-05 4
..
2014-04-06 95
2014-04-07 96
2014-04-08 97
2014-04-09 98
2014-04-10 99
Length: 100, dtype: int64
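The round helper above floors each timestamp to a multiple of the bin size, so an equivalent spelling (assuming the same fixed, epoch-anchored bins) uses DatetimeIndex.floor directly:
# Floor each timestamp to its 3-minute bin and aggregate only those bins
ts.groupby(ts.index.floor("3T")).sum()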
Aggregation#
Similar to the aggregating API, groupby API, and the window API,
a Resampler can be selectively resampled.
When resampling a DataFrame, the default is to act on all columns with the same function.
In [314]: df = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2012", freq="S", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [315]: r = df.resample("3T")
In [316]: r.mean()
Out[316]:
A B C
2012-01-01 00:00:00 -0.033823 -0.121514 -0.081447
2012-01-01 00:03:00 0.056909 0.146731 -0.024320
2012-01-01 00:06:00 -0.058837 0.047046 -0.052021
2012-01-01 00:09:00 0.063123 -0.026158 -0.066533
2012-01-01 00:12:00 0.186340 -0.003144 0.074752
2012-01-01 00:15:00 -0.085954 -0.016287 -0.050046
We can select a specific column or columns using standard getitem.
In [317]: r["A"].mean()
Out[317]:
2012-01-01 00:00:00 -0.033823
2012-01-01 00:03:00 0.056909
2012-01-01 00:06:00 -0.058837
2012-01-01 00:09:00 0.063123
2012-01-01 00:12:00 0.186340
2012-01-01 00:15:00 -0.085954
Freq: 3T, Name: A, dtype: float64
In [318]: r[["A", "B"]].mean()
Out[318]:
A B
2012-01-01 00:00:00 -0.033823 -0.121514
2012-01-01 00:03:00 0.056909 0.146731
2012-01-01 00:06:00 -0.058837 0.047046
2012-01-01 00:09:00 0.063123 -0.026158
2012-01-01 00:12:00 0.186340 -0.003144
2012-01-01 00:15:00 -0.085954 -0.016287
You can pass a list or dict of functions to do aggregation with, outputting a DataFrame:
In [319]: r["A"].agg([np.sum, np.mean, np.std])
Out[319]:
sum mean std
2012-01-01 00:00:00 -6.088060 -0.033823 1.043263
2012-01-01 00:03:00 10.243678 0.056909 1.058534
2012-01-01 00:06:00 -10.590584 -0.058837 0.949264
2012-01-01 00:09:00 11.362228 0.063123 1.028096
2012-01-01 00:12:00 33.541257 0.186340 0.884586
2012-01-01 00:15:00 -8.595393 -0.085954 1.035476
On a resampled DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [320]: r.agg([np.sum, np.mean])
Out[320]:
A ... C
sum mean ... sum mean
2012-01-01 00:00:00 -6.088060 -0.033823 ... -14.660515 -0.081447
2012-01-01 00:03:00 10.243678 0.056909 ... -4.377642 -0.024320
2012-01-01 00:06:00 -10.590584 -0.058837 ... -9.363825 -0.052021
2012-01-01 00:09:00 11.362228 0.063123 ... -11.975895 -0.066533
2012-01-01 00:12:00 33.541257 0.186340 ... 13.455299 0.074752
2012-01-01 00:15:00 -8.595393 -0.085954 ... -5.004580 -0.050046
[6 rows x 6 columns]
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [321]: r.agg({"A": np.sum, "B": lambda x: np.std(x, ddof=1)})
Out[321]:
A B
2012-01-01 00:00:00 -6.088060 1.001294
2012-01-01 00:03:00 10.243678 1.074597
2012-01-01 00:06:00 -10.590584 0.987309
2012-01-01 00:09:00 11.362228 0.944953
2012-01-01 00:12:00 33.541257 1.095025
2012-01-01 00:15:00 -8.595393 1.035312
The function names can also be strings. In order for a string to be valid it
must be implemented on the resampled object:
In [322]: r.agg({"A": "sum", "B": "std"})
Out[322]:
A B
2012-01-01 00:00:00 -6.088060 1.001294
2012-01-01 00:03:00 10.243678 1.074597
2012-01-01 00:06:00 -10.590584 0.987309
2012-01-01 00:09:00 11.362228 0.944953
2012-01-01 00:12:00 33.541257 1.095025
2012-01-01 00:15:00 -8.595393 1.035312
Furthermore, you can also specify multiple aggregation functions for each column separately.
In [323]: r.agg({"A": ["sum", "std"], "B": ["mean", "std"]})
Out[323]:
A B
sum std mean std
2012-01-01 00:00:00 -6.088060 1.043263 -0.121514 1.001294
2012-01-01 00:03:00 10.243678 1.058534 0.146731 1.074597
2012-01-01 00:06:00 -10.590584 0.949264 0.047046 0.987309
2012-01-01 00:09:00 11.362228 1.028096 -0.026158 0.944953
2012-01-01 00:12:00 33.541257 0.884586 -0.003144 1.095025
2012-01-01 00:15:00 -8.595393 1.035476 -0.016287 1.035312
If a DataFrame does not have a datetimelike index, but instead you want
to resample based on a datetimelike column in the frame, it can be passed to the
on keyword.
In [324]: df = pd.DataFrame(
.....: {"date": pd.date_range("2015-01-01", freq="W", periods=5), "a": np.arange(5)},
.....: index=pd.MultiIndex.from_arrays(
.....: [[1, 2, 3, 4, 5], pd.date_range("2015-01-01", freq="W", periods=5)],
.....: names=["v", "d"],
.....: ),
.....: )
.....:
In [325]: df
Out[325]:
date a
v d
1 2015-01-04 2015-01-04 0
2 2015-01-11 2015-01-11 1
3 2015-01-18 2015-01-18 2
4 2015-01-25 2015-01-25 3
5 2015-02-01 2015-02-01 4
In [326]: df.resample("M", on="date")[["a"]].sum()
Out[326]:
a
date
2015-01-31 6
2015-02-28 4
Similarly, if you instead want to resample by a datetimelike
level of a MultiIndex, its name or location can be passed to the
level keyword.
In [327]: df.resample("M", level="d")[["a"]].sum()
Out[327]:
a
d
2015-01-31 6
2015-02-28 4
Iterating through groups#
With the Resampler object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [328]: small = pd.Series(
.....: range(6),
.....: index=pd.to_datetime(
.....: [
.....: "2017-01-01T00:00:00",
.....: "2017-01-01T00:30:00",
.....: "2017-01-01T00:31:00",
.....: "2017-01-01T01:00:00",
.....: "2017-01-01T03:00:00",
.....: "2017-01-01T03:05:00",
.....: ]
.....: ),
.....: )
.....:
In [329]: resampled = small.resample("H")
In [330]: for name, group in resampled:
.....: print("Group: ", name)
.....: print("-" * 27)
.....: print(group, end="\n\n")
.....:
Group: 2017-01-01 00:00:00
---------------------------
2017-01-01 00:00:00 0
2017-01-01 00:30:00 1
2017-01-01 00:31:00 2
dtype: int64
Group: 2017-01-01 01:00:00
---------------------------
2017-01-01 01:00:00 3
dtype: int64
Group: 2017-01-01 02:00:00
---------------------------
Series([], dtype: int64)
Group: 2017-01-01 03:00:00
---------------------------
2017-01-01 03:00:00 4
2017-01-01 03:05:00 5
dtype: int64
See Iterating through groups or Resampler.__iter__ for more.
Use origin or offset to adjust the start of the bins#
New in version 1.1.0.
The bins of the grouping are adjusted based on the beginning of the day of the time series starting point. This works well with frequencies that are multiples of a day (like 30D) or that divide a day evenly (like 90s or 1min). This can create inconsistencies with some frequencies that do not meet this criterion. To change this behavior you can specify a fixed Timestamp with the argument origin.
For example:
In [331]: start, end = "2000-10-01 23:30:00", "2000-10-02 00:30:00"
In [332]: middle = "2000-10-02 00:00:00"
In [333]: rng = pd.date_range(start, end, freq="7min")
In [334]: ts = pd.Series(np.arange(len(rng)) * 3, index=rng)
In [335]: ts
Out[335]:
2000-10-01 23:30:00 0
2000-10-01 23:37:00 3
2000-10-01 23:44:00 6
2000-10-01 23:51:00 9
2000-10-01 23:58:00 12
2000-10-02 00:05:00 15
2000-10-02 00:12:00 18
2000-10-02 00:19:00 21
2000-10-02 00:26:00 24
Freq: 7T, dtype: int64
Here we can see that, when using origin with its default value ('start_day'), the results after '2000-10-02 00:00:00' differ depending on the start of the time series:
In [336]: ts.resample("17min", origin="start_day").sum()
Out[336]:
2000-10-01 23:14:00 0
2000-10-01 23:31:00 9
2000-10-01 23:48:00 21
2000-10-02 00:05:00 54
2000-10-02 00:22:00 24
Freq: 17T, dtype: int64
In [337]: ts[middle:end].resample("17min", origin="start_day").sum()
Out[337]:
2000-10-02 00:00:00 33
2000-10-02 00:17:00 45
Freq: 17T, dtype: int64
Here we can see that, when setting origin to 'epoch', the results after '2000-10-02 00:00:00' are identical regardless of the start of the time series:
In [338]: ts.resample("17min", origin="epoch").sum()
Out[338]:
2000-10-01 23:18:00 0
2000-10-01 23:35:00 18
2000-10-01 23:52:00 27
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
Freq: 17T, dtype: int64
In [339]: ts[middle:end].resample("17min", origin="epoch").sum()
Out[339]:
2000-10-01 23:52:00 15
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
Freq: 17T, dtype: int64
If needed you can use a custom timestamp for origin:
In [340]: ts.resample("17min", origin="2001-01-01").sum()
Out[340]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
In [341]: ts[middle:end].resample("17min", origin=pd.Timestamp("2001-01-01")).sum()
Out[341]:
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
If needed, you can just adjust the bins with an offset Timedelta that is added to the default origin.
These two examples are equivalent for this time series:
In [342]: ts.resample("17min", origin="start").sum()
Out[342]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
In [343]: ts.resample("17min", offset="23h30min").sum()
Out[343]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
Note the use of 'start' for origin in the last example. In that case, origin will be set to the first value of the time series.
Backward resample#
New in version 1.3.0.
Instead of adjusting the beginning of bins, sometimes we need to fix the end of the bins to make a backward resample with a given freq. The backward resample sets closed to 'right' by default since the last value should be considered as the edge point for the last bin.
We can set origin to 'end'. The value for a specific Timestamp index stands for the resample result from the current Timestamp minus freq to the current Timestamp with a right close.
In [344]: ts.resample('17min', origin='end').sum()
Out[344]:
2000-10-01 23:35:00 0
2000-10-01 23:52:00 18
2000-10-02 00:09:00 27
2000-10-02 00:26:00 63
Freq: 17T, dtype: int64
In addition, mirroring the 'start_day' option, 'end_day' is supported. This will set the origin to the ceiling midnight of the largest Timestamp.
In [345]: ts.resample('17min', origin='end_day').sum()
Out[345]:
2000-10-01 23:38:00 3
2000-10-01 23:55:00 15
2000-10-02 00:12:00 45
2000-10-02 00:29:00 45
Freq: 17T, dtype: int64
The above result uses 2000-10-02 00:29:00 as the last bin’s right edge, as shown by the following computation.
In [346]: ceil_mid = rng.max().ceil('D')
In [347]: freq = pd.offsets.Minute(17)
In [348]: bin_res = ceil_mid - freq * ((ceil_mid - rng.max()) // freq)
In [349]: bin_res
Out[349]: Timestamp('2000-10-02 00:29:00')
Time span representation#
Regular intervals of time are represented by Period objects in pandas while
sequences of Period objects are collected in a PeriodIndex, which can
be created with the convenience function period_range.
Period#
A Period represents a span of time (e.g., a day, a month, a quarter, etc.).
You can specify the span via the freq keyword using a frequency alias, as shown below.
Because freq represents a span of the Period, it cannot be negative, like “-3D”.
In [350]: pd.Period("2012", freq="A-DEC")
Out[350]: Period('2012', 'A-DEC')
In [351]: pd.Period("2012-1-1", freq="D")
Out[351]: Period('2012-01-01', 'D')
In [352]: pd.Period("2012-1-1 19:00", freq="H")
Out[352]: Period('2012-01-01 19:00', 'H')
In [353]: pd.Period("2012-1-1 19:00", freq="5H")
Out[353]: Period('2012-01-01 19:00', '5H')
Adding integers to and subtracting integers from periods shifts the period by its own
frequency. Arithmetic is not allowed between Period objects with different freq (span).
In [354]: p = pd.Period("2012", freq="A-DEC")
In [355]: p + 1
Out[355]: Period('2013', 'A-DEC')
In [356]: p - 3
Out[356]: Period('2009', 'A-DEC')
In [357]: p = pd.Period("2012-01", freq="2M")
In [358]: p + 2
Out[358]: Period('2012-05', '2M')
In [359]: p - 1
Out[359]: Period('2011-11', '2M')
In [360]: p == pd.Period("2012-01", freq="3M")
Out[360]: False
If the Period freq is daily or higher (D, H, T, S, L, U, N), offsets and timedelta-like objects can be added if the result can have the same freq. Otherwise, a ValueError will be raised.
In [361]: p = pd.Period("2014-07-01 09:00", freq="H")
In [362]: p + pd.offsets.Hour(2)
Out[362]: Period('2014-07-01 11:00', 'H')
In [363]: p + datetime.timedelta(minutes=120)
Out[363]: Period('2014-07-01 11:00', 'H')
In [364]: p + np.timedelta64(7200, "s")
Out[364]: Period('2014-07-01 11:00', 'H')
In [1]: p + pd.offsets.Minute(5)
Traceback
...
ValueError: Input has different freq from Period(freq=H)
If the Period has other frequencies, only offsets with the same frequency can be added. Otherwise, a ValueError will be raised.
In [365]: p = pd.Period("2014-07", freq="M")
In [366]: p + pd.offsets.MonthEnd(3)
Out[366]: Period('2014-10', 'M')
In [1]: p + pd.offsets.MonthBegin(3)
Traceback
...
ValueError: Input has different freq from Period(freq=M)
Taking the difference of Period instances with the same frequency will
return the number of frequency units between them:
In [367]: pd.Period("2012", freq="A-DEC") - pd.Period("2002", freq="A-DEC")
Out[367]: <10 * YearEnds: month=12>
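The same subtraction at an hourly frequency returns a multiple of Hour:
pd.Period("2014-07-01 11:00", freq="H") - pd.Period("2014-07-01 09:00", freq="H")
# <2 * Hours>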
PeriodIndex and period_range#
Regular sequences of Period objects can be collected in a PeriodIndex,
which can be constructed using the period_range convenience function:
In [368]: prng = pd.period_range("1/1/2011", "1/1/2012", freq="M")
In [369]: prng
Out[369]:
PeriodIndex(['2011-01', '2011-02', '2011-03', '2011-04', '2011-05', '2011-06',
'2011-07', '2011-08', '2011-09', '2011-10', '2011-11', '2011-12',
'2012-01'],
dtype='period[M]')
The PeriodIndex constructor can also be used directly:
In [370]: pd.PeriodIndex(["2011-1", "2011-2", "2011-3"], freq="M")
Out[370]: PeriodIndex(['2011-01', '2011-02', '2011-03'], dtype='period[M]')
Passing a multiplied frequency outputs a sequence of Period objects which
have a multiplied span.
In [371]: pd.period_range(start="2014-01", freq="3M", periods=4)
Out[371]: PeriodIndex(['2014-01', '2014-04', '2014-07', '2014-10'], dtype='period[3M]')
If start or end are Period objects, they will be used as anchor
endpoints for a PeriodIndex with frequency matching that of the
PeriodIndex constructor.
In [372]: pd.period_range(
.....: start=pd.Period("2017Q1", freq="Q"), end=pd.Period("2017Q2", freq="Q"), freq="M"
.....: )
.....:
Out[372]: PeriodIndex(['2017-03', '2017-04', '2017-05', '2017-06'], dtype='period[M]')
Just like DatetimeIndex, a PeriodIndex can also be used to index pandas
objects:
In [373]: ps = pd.Series(np.random.randn(len(prng)), prng)
In [374]: ps
Out[374]:
2011-01 -2.916901
2011-02 0.514474
2011-03 1.346470
2011-04 0.816397
2011-05 2.258648
2011-06 0.494789
2011-07 0.301239
2011-08 0.464776
2011-09 -1.393581
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
2012-01 -0.329583
Freq: M, dtype: float64
PeriodIndex supports addition and subtraction with the same rule as Period.
In [375]: idx = pd.period_range("2014-07-01 09:00", periods=5, freq="H")
In [376]: idx
Out[376]:
PeriodIndex(['2014-07-01 09:00', '2014-07-01 10:00', '2014-07-01 11:00',
'2014-07-01 12:00', '2014-07-01 13:00'],
dtype='period[H]')
In [377]: idx + pd.offsets.Hour(2)
Out[377]:
PeriodIndex(['2014-07-01 11:00', '2014-07-01 12:00', '2014-07-01 13:00',
'2014-07-01 14:00', '2014-07-01 15:00'],
dtype='period[H]')
In [378]: idx = pd.period_range("2014-07", periods=5, freq="M")
In [379]: idx
Out[379]: PeriodIndex(['2014-07', '2014-08', '2014-09', '2014-10', '2014-11'], dtype='period[M]')
In [380]: idx + pd.offsets.MonthEnd(3)
Out[380]: PeriodIndex(['2014-10', '2014-11', '2014-12', '2015-01', '2015-02'], dtype='period[M]')
PeriodIndex has its own dtype named period, refer to Period Dtypes.
Period dtypes#
PeriodIndex has a custom period dtype. This is a pandas extension
dtype similar to the timezone aware dtype (datetime64[ns, tz]).
The period dtype holds the freq attribute and is represented with
period[freq] like period[D] or period[M], using frequency strings.
In [381]: pi = pd.period_range("2016-01-01", periods=3, freq="M")
In [382]: pi
Out[382]: PeriodIndex(['2016-01', '2016-02', '2016-03'], dtype='period[M]')
In [383]: pi.dtype
Out[383]: period[M]
The period dtype can be used in .astype(...). It allows one to change the
freq of a PeriodIndex like .asfreq() and convert a
DatetimeIndex to PeriodIndex like to_period():
# change monthly freq to daily freq
In [384]: pi.astype("period[D]")
Out[384]: PeriodIndex(['2016-01-31', '2016-02-29', '2016-03-31'], dtype='period[D]')
# convert to DatetimeIndex
In [385]: pi.astype("datetime64[ns]")
Out[385]: DatetimeIndex(['2016-01-01', '2016-02-01', '2016-03-01'], dtype='datetime64[ns]', freq='MS')
# convert to PeriodIndex
In [386]: dti = pd.date_range("2011-01-01", freq="M", periods=3)
In [387]: dti
Out[387]: DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31'], dtype='datetime64[ns]', freq='M')
In [388]: dti.astype("period[M]")
Out[388]: PeriodIndex(['2011-01', '2011-02', '2011-03'], dtype='period[M]')
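The same conversion is available on a Series of timestamps through the .dt accessor, giving a Series with period dtype (a small sketch):
ser = pd.Series(pd.date_range("2011-01-31", freq="M", periods=3))
ser.dt.to_period("M")
# 0    2011-01
# 1    2011-02
# 2    2011-03
# dtype: period[M]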
PeriodIndex partial string indexing#
PeriodIndex now supports partial string slicing with non-monotonic indexes.
New in version 1.1.0.
You can pass in dates and strings to Series and DataFrame with PeriodIndex, in the same manner as DatetimeIndex. For details, refer to DatetimeIndex Partial String Indexing.
In [389]: ps["2011-01"]
Out[389]: -2.9169013294054507
In [390]: ps[datetime.datetime(2011, 12, 25):]
Out[390]:
2011-12 2.261385
2012-01 -0.329583
Freq: M, dtype: float64
In [391]: ps["10/31/2011":"12/31/2011"]
Out[391]:
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
Freq: M, dtype: float64
Passing a string representing a lower frequency than the PeriodIndex returns partially sliced data.
In [392]: ps["2011"]
Out[392]:
2011-01 -2.916901
2011-02 0.514474
2011-03 1.346470
2011-04 0.816397
2011-05 2.258648
2011-06 0.494789
2011-07 0.301239
2011-08 0.464776
2011-09 -1.393581
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
Freq: M, dtype: float64
In [393]: dfp = pd.DataFrame(
.....: np.random.randn(600, 1),
.....: columns=["A"],
.....: index=pd.period_range("2013-01-01 9:00", periods=600, freq="T"),
.....: )
.....:
In [394]: dfp
Out[394]:
A
2013-01-01 09:00 -0.538468
2013-01-01 09:01 -1.365819
2013-01-01 09:02 -0.969051
2013-01-01 09:03 -0.331152
2013-01-01 09:04 -0.245334
... ...
2013-01-01 18:55 0.522460
2013-01-01 18:56 0.118710
2013-01-01 18:57 0.167517
2013-01-01 18:58 0.922883
2013-01-01 18:59 1.721104
[600 rows x 1 columns]
In [395]: dfp.loc["2013-01-01 10H"]
Out[395]:
A
2013-01-01 10:00 -0.308975
2013-01-01 10:01 0.542520
2013-01-01 10:02 1.061068
2013-01-01 10:03 0.754005
2013-01-01 10:04 0.352933
... ...
2013-01-01 10:55 -0.865621
2013-01-01 10:56 -1.167818
2013-01-01 10:57 -2.081748
2013-01-01 10:58 -0.527146
2013-01-01 10:59 0.802298
[60 rows x 1 columns]
As with DatetimeIndex, the endpoints will be included in the result. The example below slices data starting from 10:00 to 11:59.
In [396]: dfp["2013-01-01 10H":"2013-01-01 11H"]
Out[396]:
A
2013-01-01 10:00 -0.308975
2013-01-01 10:01 0.542520
2013-01-01 10:02 1.061068
2013-01-01 10:03 0.754005
2013-01-01 10:04 0.352933
... ...
2013-01-01 11:55 -0.590204
2013-01-01 11:56 1.539990
2013-01-01 11:57 -1.224826
2013-01-01 11:58 0.578798
2013-01-01 11:59 -0.685496
[120 rows x 1 columns]
Frequency conversion and resampling with PeriodIndex#
The frequency of Period and PeriodIndex can be converted via the asfreq
method. Let’s start with the fiscal year 2011, ending in December:
In [397]: p = pd.Period("2011", freq="A-DEC")
In [398]: p
Out[398]: Period('2011', 'A-DEC')
We can convert it to a monthly frequency. Using the how parameter, we can
specify whether to return the starting or ending month:
In [399]: p.asfreq("M", how="start")
Out[399]: Period('2011-01', 'M')
In [400]: p.asfreq("M", how="end")
Out[400]: Period('2011-12', 'M')
The shorthands ‘s’ and ‘e’ are provided for convenience:
In [401]: p.asfreq("M", "s")
Out[401]: Period('2011-01', 'M')
In [402]: p.asfreq("M", "e")
Out[402]: Period('2011-12', 'M')
Converting to a “super-period” (e.g., annual frequency is a super-period of
quarterly frequency) automatically returns the super-period that includes the
input period:
In [403]: p = pd.Period("2011-12", freq="M")
In [404]: p.asfreq("A-NOV")
Out[404]: Period('2012', 'A-NOV')
Note that since we converted to an annual frequency that ends the year in
November, the monthly period of December 2011 is actually in the 2012 A-NOV
period.
Period conversions with anchored frequencies are particularly useful for
working with various quarterly data common to economics, business, and other
fields. Many organizations define quarters relative to the month in which their
fiscal year starts and ends. Thus, the first quarter of 2011 could start in 2010 or
a few months into 2011. Via anchored frequencies, pandas works with all quarterly
frequencies Q-JAN through Q-DEC.
Q-DEC defines regular calendar quarters:
In [405]: p = pd.Period("2012Q1", freq="Q-DEC")
In [406]: p.asfreq("D", "s")
Out[406]: Period('2012-01-01', 'D')
In [407]: p.asfreq("D", "e")
Out[407]: Period('2012-03-31', 'D')
Q-MAR defines fiscal year end in March:
In [408]: p = pd.Period("2011Q4", freq="Q-MAR")
In [409]: p.asfreq("D", "s")
Out[409]: Period('2011-01-01', 'D')
In [410]: p.asfreq("D", "e")
Out[410]: Period('2011-03-31', 'D')
Converting between representations#
Timestamped data can be converted to PeriodIndex-ed data using to_period
and vice-versa using to_timestamp:
In [411]: rng = pd.date_range("1/1/2012", periods=5, freq="M")
In [412]: ts = pd.Series(np.random.randn(len(rng)), index=rng)
In [413]: ts
Out[413]:
2012-01-31 1.931253
2012-02-29 -0.184594
2012-03-31 0.249656
2012-04-30 -0.978151
2012-05-31 -0.873389
Freq: M, dtype: float64
In [414]: ps = ts.to_period()
In [415]: ps
Out[415]:
2012-01 1.931253
2012-02 -0.184594
2012-03 0.249656
2012-04 -0.978151
2012-05 -0.873389
Freq: M, dtype: float64
In [416]: ps.to_timestamp()
Out[416]:
2012-01-01 1.931253
2012-02-01 -0.184594
2012-03-01 0.249656
2012-04-01 -0.978151
2012-05-01 -0.873389
Freq: MS, dtype: float64
Remember that ‘s’ and ‘e’ can be used to return the timestamps at the start or
end of the period:
In [417]: ps.to_timestamp("D", how="s")
Out[417]:
2012-01-01 1.931253
2012-02-01 -0.184594
2012-03-01 0.249656
2012-04-01 -0.978151
2012-05-01 -0.873389
Freq: MS, dtype: float64
Converting between period and timestamp enables some convenient arithmetic
functions to be used. In the following example, we convert a quarterly
frequency with year ending in November to 9am of the end of the month following
the quarter end:
In [418]: prng = pd.period_range("1990Q1", "2000Q4", freq="Q-NOV")
In [419]: ts = pd.Series(np.random.randn(len(prng)), prng)
In [420]: ts.index = (prng.asfreq("M", "e") + 1).asfreq("H", "s") + 9
In [421]: ts.head()
Out[421]:
1990-03-01 09:00 -0.109291
1990-06-01 09:00 -0.637235
1990-09-01 09:00 -1.735925
1990-12-01 09:00 2.096946
1991-03-01 09:00 -1.039926
Freq: H, dtype: float64
Representing out-of-bounds spans#
If you have data that is outside of the Timestamp bounds (see Timestamp limitations),
then you can use a PeriodIndex and/or a Series of Periods to do computations.
In [422]: span = pd.period_range("1215-01-01", "1381-01-01", freq="D")
In [423]: span
Out[423]:
PeriodIndex(['1215-01-01', '1215-01-02', '1215-01-03', '1215-01-04',
'1215-01-05', '1215-01-06', '1215-01-07', '1215-01-08',
'1215-01-09', '1215-01-10',
...
'1380-12-23', '1380-12-24', '1380-12-25', '1380-12-26',
'1380-12-27', '1380-12-28', '1380-12-29', '1380-12-30',
'1380-12-31', '1381-01-01'],
dtype='period[D]', length=60632)
To convert from an int64-based YYYYMMDD representation:
In [424]: s = pd.Series([20121231, 20141130, 99991231])
In [425]: s
Out[425]:
0 20121231
1 20141130
2 99991231
dtype: int64
In [426]: def conv(x):
.....: return pd.Period(year=x // 10000, month=x // 100 % 100, day=x % 100, freq="D")
.....:
In [427]: s.apply(conv)
Out[427]:
0 2012-12-31
1 2014-11-30
2 9999-12-31
dtype: period[D]
In [428]: s.apply(conv)[2]
Out[428]: Period('9999-12-31', 'D')
These can easily be converted to a PeriodIndex:
In [429]: span = pd.PeriodIndex(s.apply(conv))
In [430]: span
Out[430]: PeriodIndex(['2012-12-31', '2014-11-30', '9999-12-31'], dtype='period[D]')
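Datetime-like attributes still work on these out-of-bounds periods, which is what makes the Period representation useful where Timestamp cannot reach:
# Calendar attributes are computed from the periods themselves
span.year           # 2012, 2014, 9999
span.days_in_month  # 31, 30, 31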
Time zone handling#
pandas provides rich support for working with timestamps in different time
zones using the pytz and dateutil libraries or datetime.timezone
objects from the standard library.
Working with time zones#
By default, pandas objects are time zone unaware:
In [431]: rng = pd.date_range("3/6/2012 00:00", periods=15, freq="D")
In [432]: rng.tz is None
Out[432]: True
To localize these dates to a time zone (assign a particular time zone to a naive date),
you can use the tz_localize method or the tz keyword argument in
date_range(), Timestamp, or DatetimeIndex.
You can either pass pytz or dateutil time zone objects or Olson time zone database strings.
Olson time zone strings will return pytz time zone objects by default.
To return dateutil time zone objects, append dateutil/ before the string.
In pytz you can find a list of common (and less common) time zones using
from pytz import common_timezones, all_timezones.
dateutil uses the OS time zones so there isn’t a fixed list available. For
common zones, the names are the same as pytz.
In [433]: import dateutil
# pytz
In [434]: rng_pytz = pd.date_range("3/6/2012 00:00", periods=3, freq="D", tz="Europe/London")
In [435]: rng_pytz.tz
Out[435]: <DstTzInfo 'Europe/London' LMT-1 day, 23:59:00 STD>
# dateutil
In [436]: rng_dateutil = pd.date_range("3/6/2012 00:00", periods=3, freq="D")
In [437]: rng_dateutil = rng_dateutil.tz_localize("dateutil/Europe/London")
In [438]: rng_dateutil.tz
Out[438]: tzfile('/usr/share/zoneinfo/Europe/London')
# dateutil - utc special case
In [439]: rng_utc = pd.date_range(
.....: "3/6/2012 00:00",
.....: periods=3,
.....: freq="D",
.....: tz=dateutil.tz.tzutc(),
.....: )
.....:
In [440]: rng_utc.tz
Out[440]: tzutc()
New in version 0.25.0.
# datetime.timezone
In [441]: rng_utc = pd.date_range(
.....: "3/6/2012 00:00",
.....: periods=3,
.....: freq="D",
.....: tz=datetime.timezone.utc,
.....: )
.....:
In [442]: rng_utc.tz
Out[442]: datetime.timezone.utc
Note that the UTC time zone is a special case in dateutil and should be constructed explicitly
as an instance of dateutil.tz.tzutc. You can also construct other time
zone objects explicitly first.
In [443]: import pytz
# pytz
In [444]: tz_pytz = pytz.timezone("Europe/London")
In [445]: rng_pytz = pd.date_range("3/6/2012 00:00", periods=3, freq="D")
In [446]: rng_pytz = rng_pytz.tz_localize(tz_pytz)
In [447]: rng_pytz.tz == tz_pytz
Out[447]: True
# dateutil
In [448]: tz_dateutil = dateutil.tz.gettz("Europe/London")
In [449]: rng_dateutil = pd.date_range("3/6/2012 00:00", periods=3, freq="D", tz=tz_dateutil)
In [450]: rng_dateutil.tz == tz_dateutil
Out[450]: True
To convert a time zone aware pandas object from one time zone to another,
you can use the tz_convert method.
In [451]: rng_pytz.tz_convert("US/Eastern")
Out[451]:
DatetimeIndex(['2012-03-05 19:00:00-05:00', '2012-03-06 19:00:00-05:00',
'2012-03-07 19:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
Note
When using pytz time zones, DatetimeIndex will construct a different
time zone object than a Timestamp for the same time zone input. A DatetimeIndex
can hold a collection of Timestamp objects that may have different UTC offsets and cannot be
succinctly represented by one pytz time zone instance while one Timestamp
represents one point in time with a specific UTC offset.
In [452]: dti = pd.date_range("2019-01-01", periods=3, freq="D", tz="US/Pacific")
In [453]: dti.tz
Out[453]: <DstTzInfo 'US/Pacific' LMT-1 day, 16:07:00 STD>
In [454]: ts = pd.Timestamp("2019-01-01", tz="US/Pacific")
In [455]: ts.tz
Out[455]: <DstTzInfo 'US/Pacific' PST-1 day, 16:00:00 STD>
Warning
Be wary of conversions between libraries. For some time zones, pytz and dateutil have different
definitions of the zone. This is more of a problem for unusual time zones than for
‘standard’ zones like US/Eastern.
Warning
Be aware that a time zone definition may not be considered equal across versions
of time zone libraries. This may cause problems when working with stored data that
is localized using one version and operated on with a different version.
See here for how to handle such a situation.
Warning
For pytz time zones, it is incorrect to pass a time zone object directly into
the datetime.datetime constructor
(e.g., datetime.datetime(2011, 1, 1, tzinfo=pytz.timezone('US/Eastern')))).
Instead, the datetime needs to be localized using the localize method
on the pytz time zone object.
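A minimal sketch of the two patterns; the incorrect one typically ends up with the zone’s historical LMT offset rather than the expected standard offset:
import datetime
import pytz

eastern = pytz.timezone("US/Eastern")

# Correct: let pytz attach the proper UTC offset for that date
localized = eastern.localize(datetime.datetime(2011, 1, 1))

# Incorrect for pytz zones: the constructor picks up an arbitrary (LMT) offset
naive_attach = datetime.datetime(2011, 1, 1, tzinfo=eastern)

localized.utcoffset() == naive_attach.utcoffset()  # False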
Warning
Be aware that for times in the future, correct conversion between time zones
(and UTC) cannot be guaranteed by any time zone library because a timezone’s
offset from UTC may be changed by the respective government.
Warning
If you are using dates beyond 2038-01-18, due to current deficiencies
in the underlying libraries caused by the year 2038 problem, daylight saving time (DST) adjustments
to timezone aware dates will not be applied. If and when the underlying libraries are fixed,
the DST transitions will be applied.
For example, for two dates that are in British Summer Time (and so would normally be GMT+1), both the following asserts evaluate as true:
In [456]: d_2037 = "2037-03-31T010101"
In [457]: d_2038 = "2038-03-31T010101"
In [458]: DST = "Europe/London"
In [459]: assert pd.Timestamp(d_2037, tz=DST) != pd.Timestamp(d_2037, tz="GMT")
In [460]: assert pd.Timestamp(d_2038, tz=DST) == pd.Timestamp(d_2038, tz="GMT")
Under the hood, all timestamps are stored in UTC. Values from a time zone aware
DatetimeIndex or Timestamp will have their fields (day, hour, minute, etc.)
localized to the time zone. However, timestamps with the same UTC value are
still considered to be equal even if they are in different time zones:
In [461]: rng_eastern = rng_utc.tz_convert("US/Eastern")
In [462]: rng_berlin = rng_utc.tz_convert("Europe/Berlin")
In [463]: rng_eastern[2]
Out[463]: Timestamp('2012-03-07 19:00:00-0500', tz='US/Eastern', freq='D')
In [464]: rng_berlin[2]
Out[464]: Timestamp('2012-03-08 01:00:00+0100', tz='Europe/Berlin', freq='D')
In [465]: rng_eastern[2] == rng_berlin[2]
Out[465]: True
Operations between Series in different time zones will yield UTC
Series, aligning the data on the UTC timestamps:
In [466]: ts_utc = pd.Series(range(3), pd.date_range("20130101", periods=3, tz="UTC"))
In [467]: eastern = ts_utc.tz_convert("US/Eastern")
In [468]: berlin = ts_utc.tz_convert("Europe/Berlin")
In [469]: result = eastern + berlin
In [470]: result
Out[470]:
2013-01-01 00:00:00+00:00 0
2013-01-02 00:00:00+00:00 2
2013-01-03 00:00:00+00:00 4
Freq: D, dtype: int64
In [471]: result.index
Out[471]:
DatetimeIndex(['2013-01-01 00:00:00+00:00', '2013-01-02 00:00:00+00:00',
'2013-01-03 00:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq='D')
To remove time zone information, use tz_localize(None) or tz_convert(None).
tz_localize(None) will remove the time zone yielding the local time representation.
tz_convert(None) will remove the time zone after converting to UTC time.
In [472]: didx = pd.date_range(start="2014-08-01 09:00", freq="H", periods=3, tz="US/Eastern")
In [473]: didx
Out[473]:
DatetimeIndex(['2014-08-01 09:00:00-04:00', '2014-08-01 10:00:00-04:00',
'2014-08-01 11:00:00-04:00'],
dtype='datetime64[ns, US/Eastern]', freq='H')
In [474]: didx.tz_localize(None)
Out[474]:
DatetimeIndex(['2014-08-01 09:00:00', '2014-08-01 10:00:00',
'2014-08-01 11:00:00'],
dtype='datetime64[ns]', freq=None)
In [475]: didx.tz_convert(None)
Out[475]:
DatetimeIndex(['2014-08-01 13:00:00', '2014-08-01 14:00:00',
'2014-08-01 15:00:00'],
dtype='datetime64[ns]', freq='H')
# tz_convert(None) is identical to tz_convert('UTC').tz_localize(None)
In [476]: didx.tz_convert("UTC").tz_localize(None)
Out[476]:
DatetimeIndex(['2014-08-01 13:00:00', '2014-08-01 14:00:00',
'2014-08-01 15:00:00'],
dtype='datetime64[ns]', freq=None)
Fold#
New in version 1.1.0.
For ambiguous times, pandas supports explicitly specifying the keyword-only fold argument.
Due to daylight saving time, one wall clock time can occur twice when shifting
from summer to winter time; fold describes whether the datetime-like corresponds
to the first (0) or the second time (1) the wall clock hits the ambiguous time.
Fold is supported only for constructing from naive datetime.datetime
(see datetime documentation for details) or from Timestamp
or for constructing from components (see below). Only dateutil timezones are supported
(see dateutil documentation
for dateutil methods that deal with ambiguous datetimes) as pytz
timezones do not support fold (see pytz documentation
for details on how pytz deals with ambiguous datetimes). To localize an ambiguous datetime
with pytz, please use Timestamp.tz_localize(). In general, we recommend relying
on Timestamp.tz_localize() when localizing ambiguous datetimes if you need direct
control over how they are handled.
In [477]: pd.Timestamp(
.....: datetime.datetime(2019, 10, 27, 1, 30, 0, 0),
.....: tz="dateutil/Europe/London",
.....: fold=0,
.....: )
.....:
Out[477]: Timestamp('2019-10-27 01:30:00+0100', tz='dateutil//usr/share/zoneinfo/Europe/London')
In [478]: pd.Timestamp(
.....: year=2019,
.....: month=10,
.....: day=27,
.....: hour=1,
.....: minute=30,
.....: tz="dateutil/Europe/London",
.....: fold=1,
.....: )
.....:
Out[478]: Timestamp('2019-10-27 01:30:00+0000', tz='dateutil//usr/share/zoneinfo/Europe/London')
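For comparison, here is a small sketch (an addition, not part of the guide) of the pytz route recommended above, using Timestamp.tz_localize() with the ambiguous argument to choose between the two candidate offsets:
import pandas as pd

wall = pd.Timestamp("2019-10-27 01:30:00")

# ambiguous=True picks the DST side (+01:00), ambiguous=False the standard side (+00:00).
print(wall.tz_localize("Europe/London", ambiguous=True))
print(wall.tz_localize("Europe/London", ambiguous=False))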
Ambiguous times when localizing#
tz_localize may not be able to determine the UTC offset of a timestamp
because daylight savings time (DST) in a local time zone causes some times to occur
twice within one day (“clocks fall back”). The following options are available:
'raise': Raises a pytz.AmbiguousTimeError (the default behavior)
'infer': Attempt to determine the correct offset based on the monotonicity of the timestamps
'NaT': Replaces ambiguous times with NaT
bool: True represents a DST time, False represents non-DST time. An array-like of bool values is supported for a sequence of times.
In [479]: rng_hourly = pd.DatetimeIndex(
.....: ["11/06/2011 00:00", "11/06/2011 01:00", "11/06/2011 01:00", "11/06/2011 02:00"]
.....: )
.....:
This will fail as there are ambiguous times ('11/06/2011 01:00')
In [2]: rng_hourly.tz_localize('US/Eastern')
AmbiguousTimeError: Cannot infer dst time from Timestamp('2011-11-06 01:00:00'), try using the 'ambiguous' argument
Handle these ambiguous times by specifying the following.
In [480]: rng_hourly.tz_localize("US/Eastern", ambiguous="infer")
Out[480]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', '2011-11-06 01:00:00-04:00',
'2011-11-06 01:00:00-05:00', '2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
In [481]: rng_hourly.tz_localize("US/Eastern", ambiguous="NaT")
Out[481]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', 'NaT', 'NaT',
'2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
In [482]: rng_hourly.tz_localize("US/Eastern", ambiguous=[True, True, False, False])
Out[482]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', '2011-11-06 01:00:00-04:00',
'2011-11-06 01:00:00-05:00', '2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
Nonexistent times when localizing#
A DST transition may also shift the local time ahead by 1 hour creating nonexistent
local times (“clocks spring forward”). The behavior of localizing a timeseries with nonexistent times
can be controlled by the nonexistent argument. The following options are available:
'raise': Raises a pytz.NonExistentTimeError (the default behavior)
'NaT': Replaces nonexistent times with NaT
'shift_forward': Shifts nonexistent times forward to the closest real time
'shift_backward': Shifts nonexistent times backward to the closest real time
timedelta object: Shifts nonexistent times by the timedelta duration
In [483]: dti = pd.date_range(start="2015-03-29 02:30:00", periods=3, freq="H")
# 2:30 is a nonexistent time
Localization of nonexistent times will raise an error by default.
In [2]: dti.tz_localize('Europe/Warsaw')
NonExistentTimeError: 2015-03-29 02:30:00
Transform nonexistent times to NaT or shift the times.
In [484]: dti
Out[484]:
DatetimeIndex(['2015-03-29 02:30:00', '2015-03-29 03:30:00',
'2015-03-29 04:30:00'],
dtype='datetime64[ns]', freq='H')
In [485]: dti.tz_localize("Europe/Warsaw", nonexistent="shift_forward")
Out[485]:
DatetimeIndex(['2015-03-29 03:00:00+02:00', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [486]: dti.tz_localize("Europe/Warsaw", nonexistent="shift_backward")
Out[486]:
DatetimeIndex(['2015-03-29 01:59:59.999999999+01:00',
'2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [487]: dti.tz_localize("Europe/Warsaw", nonexistent=pd.Timedelta(1, unit="H"))
Out[487]:
DatetimeIndex(['2015-03-29 03:30:00+02:00', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [488]: dti.tz_localize("Europe/Warsaw", nonexistent="NaT")
Out[488]:
DatetimeIndex(['NaT', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
Time zone Series operations#
A Series with time zone naive values is
represented with a dtype of datetime64[ns].
In [489]: s_naive = pd.Series(pd.date_range("20130101", periods=3))
In [490]: s_naive
Out[490]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
dtype: datetime64[ns]
A Series with time zone aware values is
represented with a dtype of datetime64[ns, tz], where tz is the time zone.
In [491]: s_aware = pd.Series(pd.date_range("20130101", periods=3, tz="US/Eastern"))
In [492]: s_aware
Out[492]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
The time zone information of both of these Series
can be manipulated via the .dt accessor; see the dt accessor section.
For example, to localize a naive stamp and convert it to a time zone aware one:
In [493]: s_naive.dt.tz_localize("UTC").dt.tz_convert("US/Eastern")
Out[493]:
0 2012-12-31 19:00:00-05:00
1 2013-01-01 19:00:00-05:00
2 2013-01-02 19:00:00-05:00
dtype: datetime64[ns, US/Eastern]
Time zone information can also be manipulated using the astype method.
This method can convert between different timezone-aware dtypes.
# convert to a new time zone
In [494]: s_aware.astype("datetime64[ns, CET]")
Out[494]:
0 2013-01-01 06:00:00+01:00
1 2013-01-02 06:00:00+01:00
2 2013-01-03 06:00:00+01:00
dtype: datetime64[ns, CET]
Note
Using Series.to_numpy() on a Series returns a NumPy array of the data.
NumPy does not currently support time zones (even though it is printing in the local time zone!),
therefore an object array of Timestamps is returned for time zone aware data:
In [495]: s_naive.to_numpy()
Out[495]:
array(['2013-01-01T00:00:00.000000000', '2013-01-02T00:00:00.000000000',
'2013-01-03T00:00:00.000000000'], dtype='datetime64[ns]')
In [496]: s_aware.to_numpy()
Out[496]:
array([Timestamp('2013-01-01 00:00:00-0500', tz='US/Eastern'),
Timestamp('2013-01-02 00:00:00-0500', tz='US/Eastern'),
Timestamp('2013-01-03 00:00:00-0500', tz='US/Eastern')],
dtype=object)
By converting to an object array of Timestamps, it preserves the time zone
information. For example, when converting back to a Series:
In [497]: pd.Series(s_aware.to_numpy())
Out[497]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
However, if you want an actual NumPy datetime64[ns] array (with the values
converted to UTC) instead of an array of objects, you can specify the
dtype argument:
In [498]: s_aware.to_numpy(dtype="datetime64[ns]")
Out[498]:
array(['2013-01-01T05:00:00.000000000', '2013-01-02T05:00:00.000000000',
'2013-01-03T05:00:00.000000000'], dtype='datetime64[ns]')
| 997
| 1,171
|
How to identify zones in a table using pandas?
I have a file with a table (.csv file).
The table is composed of many sub "areas", like this example:
As you can see, there are some data which can be grouped together (blue group, orange group, etc.).
Now, the color is just to make the concept clear, but in the .csv there is no group identified by a color. In reality there is no color to identify the groups, and the group dimensions (rows) can change. There is no pattern to predict whether the next group has 1, 2, 3, 4 or more rows.
The problem is that I need to open the table and import it into a dataframe using pandas. In my algorithm one group should be identified, copied to another dataframe and then saved.
How can I group data using pandas?
I was thinking of indexing the groups like in the following table:
but in this case I cannot access the cells with the same index sequentially.
Any idea?
EDIT: here the table from the .csv file:
,X,Y,Z,mm,ff,cc
1,1,2,3,0.2,0.4,0.3
,,,,0.1,0.3,0.4
2,1,2,3,0.1,1.2,-1.2
,,,,0.12,-1.234,303.4
,,,,1.2,43.2,44.3
,,,,7.4,88.3,34.4
3,2,4,2,1.13,4.1,55.1
,,,,80.3,34.1,4.01
,,,,43.12,12.3,98.4
|
64,697,241
|
Replacing values of rows with same ID with max date
|
<p>Below is a script for a simplified version of the df in question:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'id': ['1', '1','2','2','3','3','4','4','5','6','7'],
'product1_expiry_date' : ['-','-','2020-11-28','2020-11-13','-',
'2020-11-13','2020-12-13','-','2020-11-16','-',
'2020-11-28'],
'product2_expiry_date' : ['2020-11-16','2020-11-19','-',
'-','2020-11-23','2020-11-13',
'2020-12-13','-','2020-12-01','2020-12-01',
'2020-12-14']
})
df
id product1_expiry_date product2_expiry_date
1 - 2020-11-16
1 - 2020-11-19
2 2020-11-28 -
2 2020-11-13 -
3 - 2020-11-23
3 2020-11-13 2020-11-13
4 2020-12-13 2020-12-13
4 - -
5 2020-11-16 2020-12-01
6 - 2020-12-01
7 2020-11-28 2020-12-14
</code></pre>
<p>I would like to have no duplicate IDs by, for each ID, dropping earlier dates and '-' values where applicable, as I am only interested in the later dates.</p>
<p>INTENDED DF:</p>
<pre><code> id product1_expiry_date product2_expiry_date
1 - 2020-11-19
2 2020-11-28 -
3 2020-11-13 2020-11-23
4 2020-11-13 2020-11-13
5 2020-12-13 2020-12-13
6 2020-11-16 2020-12-01
7 2020-11-28 2020-12-14
</code></pre>
<p>Any help would be greatly appreciated.</p>
| 64,697,271
| 2020-11-05T12:29:26.223000
| 1
| null | 1
| 57
|
python|pandas
|
<p>Convert <code>Id</code> to index, then convert all columns to datetimes and use <code>max</code> per index:</p>
<pre><code>f = lambda x: pd.to_datetime(x, errors='coerce')
df1 = df.set_index('id').apply(f).max(level=0)
print (df1)
product1_expiry_date product2_expiry_date
id
1 NaT 2020-11-19
2 2020-11-28 NaT
3 2020-11-13 2020-11-23
4 2020-12-13 2020-12-13
5 2020-11-16 2020-12-01
6 NaT 2020-12-01
7 2020-11-28 2020-12-14
</code></pre>
<p>If you want to replace <code>NaT</code> with <code>-</code>, it is possible, but you get datetimes mixed with strings, so further processing could be a problem:</p>
<pre><code>f = lambda x: pd.to_datetime(x, errors='coerce')
df1 = df.set_index('id').apply(f).max(level=0).fillna('-')
print (df1)
product1_expiry_date product2_expiry_date
id
1 - 2020-11-19 00:00:00
2 2020-11-28 00:00:00 -
3 2020-11-13 00:00:00 2020-11-23 00:00:00
4 2020-12-13 00:00:00 2020-12-13 00:00:00
5 2020-11-16 00:00:00 2020-12-01 00:00:00
6 - 2020-12-01 00:00:00
7 2020-11-28 00:00:00 2020-12-14 00:00:00
</code></pre>
<p>Last, if necessary, convert <code>id</code> back to a column:</p>
<pre><code>df1 = df1.reset_index()
</code></pre>
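<p>A possible variant (an added sketch, not part of the original answer, reusing the <code>df</code> from the question): in newer pandas releases where <code>max(level=0)</code> is deprecated, the same result can be obtained with <code>groupby</code> on the index level:</p>
<pre><code>f = lambda x: pd.to_datetime(x, errors='coerce')
df1 = df.set_index('id').apply(f).groupby(level=0).max()
</code></pre>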
| 2020-11-05T12:31:29.917000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.replace.html
|
pandas.Series.replace#
Convert Id to index, then convert all columns to datetimes and use max per index:
f = lambda x: pd.to_datetime(x, errors='coerce')
df1 = df.set_index('id').apply(f).max(level=0)
print (df1)
product1_expiry_date product2_expiry_date
id
1 NaT 2020-11-19
2 2020-11-28 NaT
3 2020-11-13 2020-11-23
4 2020-12-13 2020-12-13
5 2020-11-16 2020-12-01
6 NaT 2020-12-01
7 2020-11-28 2020-12-14
If you want to replace NaT with -, it is possible, but you get datetimes mixed with strings, so further processing could be a problem:
f = lambda x: pd.to_datetime(x, errors='coerce')
df1 = df.set_index('id').apply(f).max(level=0).fillna('-')
print (df1)
product1_expiry_date product2_expiry_date
id
1 - 2020-11-19 00:00:00
2 2020-11-28 00:00:00 -
3 2020-11-13 00:00:00 2020-11-23 00:00:00
4 2020-12-13 00:00:00 2020-12-13 00:00:00
5 2020-11-16 00:00:00 2020-12-01 00:00:00
6 - 2020-12-01 00:00:00
7 2020-11-28 00:00:00 2020-12-14 00:00:00
Last, if necessary, convert id back to a column:
df1 = df1.reset_index()
pandas.Series.replace#
Series.replace(to_replace=None, value=_NoDefault.no_default, *, inplace=False, limit=None, regex=False, method=_NoDefault.no_default)[source]#
Replace values given in to_replace with value.
Values of the Series are replaced with other values dynamically.
This differs from updating with .loc or .iloc, which require
you to specify a location to update with some value.
Parameters
to_replace : str, regex, list, dict, Series, int, float, or None. How to find the values that will be replaced.
numeric, str or regex:
numeric: numeric values equal to to_replace will be
replaced with value
str: string exactly matching to_replace will be replaced
with value
regex: regexs matching to_replace will be replaced with
value
list of str, regex, or numeric:
First, if to_replace and value are both lists, they
must be the same length.
Second, if regex=True then all of the strings in both
lists will be interpreted as regexs otherwise they will match
directly. This doesn’t matter much for value since there
are only a few possible substitution regexes you can use.
str, regex and numeric rules apply as above.
dict:
Dicts can be used to specify different replacement values
for different existing values. For example,
{'a': 'b', 'y': 'z'} replaces the value ‘a’ with ‘b’ and
‘y’ with ‘z’. To use a dict in this way, the optional value
parameter should not be given.
For a DataFrame a dict can specify that different values
should be replaced in different columns. For example,
{'a': 1, 'b': 'z'} looks for the value 1 in column ‘a’
and the value ‘z’ in column ‘b’ and replaces these values
with whatever is specified in value. The value parameter
should not be None in this case. You can treat this as a
special case of passing two lists except that you are
specifying the column to search in.
For a DataFrame nested dictionaries, e.g.,
{'a': {'b': np.nan}}, are read as follows: look in column
‘a’ for the value ‘b’ and replace it with NaN. The optional value
parameter should not be specified to use a nested dict in this
way. You can nest regular expressions as well. Note that
column names (the top-level dictionary keys in a nested
dictionary) cannot be regular expressions.
None:
This means that the regex argument must be a string,
compiled regular expression, or list, dict, ndarray or
Series of such elements. If value is also None then
this must be a nested dictionary or Series.
See the examples section for examples of each of these.
value : scalar, dict, list, str, regex, default None. Value to replace any values matching to_replace with.
For a DataFrame a dict of values can be used to specify which
value to use for each column (columns not in the dict will not be
filled). Regular expressions, strings and lists or dicts of such
objects are also allowed.
inplace : bool, default False. If True, performs operation inplace and returns None.
limit : int, default None. Maximum size gap to forward or backward fill.
regex : bool or same types as to_replace, default False. Whether to interpret to_replace and/or value as regular
expressions. If this is True then to_replace must be a
string. Alternatively, this could be a regular expression or a
list, dict, or array of regular expressions in which case
to_replace must be None.
method : {'pad', 'ffill', 'bfill'}. The method to use for replacement when to_replace is a
scalar, list or tuple and value is None.
Changed in version 0.23.0: Added to DataFrame.
Returns
Series. Object after replacement.
Raises
AssertionError
If regex is not a bool and to_replace is not
None.
TypeError
If to_replace is not a scalar, array-like, dict, or None
If to_replace is a dict and value is not a list,
dict, ndarray, or Series
If to_replace is None and regex is not compilable
into a regular expression or is a list, dict, ndarray, or
Series.
When replacing multiple bool or datetime64 objects and
the arguments to to_replace does not match the type of the
value being replaced
ValueError
If a list or an ndarray is passed to to_replace and
value but they are not the same length.
See also
Series.fillna : Fill NA values.
Series.where : Replace values based on boolean condition.
Series.str.replace : Simple string replacement.
Notes
Regex substitution is performed under the hood with re.sub. The
rules for substitution for re.sub are the same.
Regular expressions will only substitute on strings, meaning you
cannot provide, for example, a regular expression matching floating
point numbers and expect the columns in your frame that have a
numeric dtype to be matched. However, if those floating point
numbers are strings, then you can do this.
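A short sketch (an addition, not from the reference page) of that point: the regex below would match the float values if they were strings, but the numeric column is left untouched.
import pandas as pd

df = pd.DataFrame({"num": [1.5, 2.5], "txt": ["1.5", "2.5"]})

# Only the string column is affected by the regex replacement.
print(df.replace(regex=r"^\d\.\d$", value="matched"))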
This method has a lot of options. You are encouraged to experiment
and play with this method to gain intuition about how it works.
When dict is used as the to_replace value, it is like
key(s) in the dict are the to_replace part and
value(s) in the dict are the value parameter.
Examples
Scalar `to_replace` and `value`
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.replace(1, 5)
0 5
1 2
2 3
3 4
4 5
dtype: int64
>>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4],
... 'B': [5, 6, 7, 8, 9],
... 'C': ['a', 'b', 'c', 'd', 'e']})
>>> df.replace(0, 5)
A B C
0 5 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
List-like `to_replace`
>>> df.replace([0, 1, 2, 3], 4)
A B C
0 4 5 a
1 4 6 b
2 4 7 c
3 4 8 d
4 4 9 e
>>> df.replace([0, 1, 2, 3], [4, 3, 2, 1])
A B C
0 4 5 a
1 3 6 b
2 2 7 c
3 1 8 d
4 4 9 e
>>> s.replace([1, 2], method='bfill')
0 3
1 3
2 3
3 4
4 5
dtype: int64
dict-like `to_replace`
>>> df.replace({0: 10, 1: 100})
A B C
0 10 5 a
1 100 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace({'A': 0, 'B': 5}, 100)
A B C
0 100 100 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace({'A': {0: 100, 4: 400}})
A B C
0 100 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 400 9 e
Regular expression `to_replace`
>>> df = pd.DataFrame({'A': ['bat', 'foo', 'bait'],
... 'B': ['abc', 'bar', 'xyz']})
>>> df.replace(to_replace=r'^ba.$', value='new', regex=True)
A B
0 new abc
1 foo new
2 bait xyz
>>> df.replace({'A': r'^ba.$'}, {'A': 'new'}, regex=True)
A B
0 new abc
1 foo bar
2 bait xyz
>>> df.replace(regex=r'^ba.$', value='new')
A B
0 new abc
1 foo new
2 bait xyz
>>> df.replace(regex={r'^ba.$': 'new', 'foo': 'xyz'})
A B
0 new abc
1 xyz new
2 bait xyz
>>> df.replace(regex=[r'^ba.$', 'foo'], value='new')
A B
0 new abc
1 new new
2 bait xyz
Compare the behavior of s.replace({'a': None}) and
s.replace('a', None) to understand the peculiarities
of the to_replace parameter:
>>> s = pd.Series([10, 'a', 'a', 'b', 'a'])
When one uses a dict as the to_replace value, it is like the
value(s) in the dict are equal to the value parameter.
s.replace({'a': None}) is equivalent to
s.replace(to_replace={'a': None}, value=None, method=None):
>>> s.replace({'a': None})
0 10
1 None
2 None
3 b
4 None
dtype: object
When value is not explicitly passed and to_replace is a scalar, list
or tuple, replace uses the method parameter (default ‘pad’) to do the
replacement. So this is why the ‘a’ values are being replaced by 10
in rows 1 and 2 and ‘b’ in row 4 in this case.
>>> s.replace('a')
0 10
1 10
2 10
3 b
4 b
dtype: object
On the other hand, if None is explicitly passed for value, it will
be respected:
>>> s.replace('a', None)
0 10
1 None
2 None
3 b
4 None
dtype: object
Changed in version 1.4.0: Previously the explicit None was silently ignored.
| 24
| 1,319
|
Replacing values of rows with same ID with max date
Below is a script for a simplified version of the df in question:
import pandas as pd
df = pd.DataFrame({
'id': ['1', '1','2','2','3','3','4','4','5','6','7'],
'product1_expiry_date' : ['-','-','2020-11-28','2020-11-13','-',
'2020-11-13','2020-12-13','-','2020-11-16','-',
'2020-11-28'],
'product2_expiry_date' : ['2020-11-16','2020-11-19','-',
'-','2020-11-23','2020-11-13',
'2020-12-13','-','2020-12-01','2020-12-01',
'2020-12-14']
})
df
id product1_expiry_date product2_expiry_date
1 - 2020-11-16
1 - 2020-11-19
2 2020-11-28 -
2 2020-11-13 -
3 - 2020-11-23
3 2020-11-13 2020-11-13
4 2020-12-13 2020-12-13
4 - -
5 2020-11-16 2020-12-01
6 - 2020-12-01
7 2020-11-28 2020-12-14
I would like to have no duplicate IDs by, for each ID, dropping earlier dates and '-' values where applicable, as I am only interested in the later dates.
INTENDED DF:
id product1_expiry_date product2_expiry_date
1 - 2020-11-19
2 2020-11-28 -
3 2020-11-13 2020-11-23
4 2020-11-13 2020-11-13
5 2020-12-13 2020-12-13
6 2020-11-16 2020-12-01
7 2020-11-28 2020-12-14
Any help would be greatly appreciated.
|
59,670,885
|
Convert day fraction and year to Panda Python Datatime
|
<p>I need help to convert</p>
<pre><code> Year DayFraction
1 1979 2.47
2 1979 2.83
3 1979 2.96
</code></pre>
<p>to the format I need. I'm trying to have it in the <code>2019/02/02 8:30:00</code> format but in pandas. If I titled this wrong please let me know. I am still new to this. </p>
<p>The issue was resolved by the following (thank you all for helping):</p>
<pre><code>temptime = []
for i in np.arange(len(Year)):
    temp = pd.to_datetime(Year[i], format='%Y') + pd.Timedelta(days=DayF[i] - 2)
    temptime = np.append([temptime], temp)
</code></pre>
| 59,671,970
| 2020-01-09T19:29:36.253000
| 3
| 0
| 0
| 329
|
python|pandas
|
<p>I hope it helps. You can try this,</p>
<pre class="lang-py prettyprint-override"><code>from datetime import timedelta
import pandas as pd
data = {
'Year': [1979, 1979, 1979],
'DayFraction': [2.47, 2.83, 2.96]
}
df = pd.DataFrame(data)
df['new_date'] = (df
.apply(lambda x: pd.to_datetime(x['Year'], format='%Y') +
timedelta(days=x['DayFraction']),
axis=1))
print(df)
Year DayFraction new_date
0 1979 2.47 1979-01-03 11:16:48
1 1979 2.83 1979-01-03 19:55:12
2 1979 2.96 1979-01-03 23:02:24
</code></pre>
<p>If you get a <code>TypeError: 'float' object is unsliceable</code> error,</p>
<pre><code>df['Year'] = pd.to_datetime(df['Year'], format='%Y')
df['DayFraction'] = df['DayFraction'].apply(lambda x: timedelta(days=x))
df['new_date'] = df['DayFraction'] + df['Year']
</code></pre>
<p>If these are lists,</p>
<pre><code>Year = [1979, 1979, 1979]
DayF = [2.47, 2.83, 2.96]
new_dates = []
for y, d in zip(Year, DayF):
new = pd.to_datetime(y, format='%Y') + pd.Timedelta(days=d)
new_dates.append(new)
print(new_dates)
[Timestamp('1979-01-03 11:16:48'), Timestamp('1979-01-03 19:55:12'), Timestamp('1979-01-03 23:02:24')]
</code></pre>
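<p>A vectorized alternative (an added sketch, not part of the original answer, assuming the <code>df</code> built in the first snippet) converts the whole column at once instead of using <code>apply</code>:</p>
<pre><code># Same result as the apply-based version above, without a Python-level loop.
df['new_date'] = pd.to_datetime(df['Year'], format='%Y') + pd.to_timedelta(df['DayFraction'], unit='D')
</code></pre>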
| 2020-01-09T20:56:36.187000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.Timedelta.html
|
pandas.Timedelta#
pandas.Timedelta#
class pandas.Timedelta(value=<object object>, unit=None, **kwargs)#
I hope it helps. You can try this,
from datetime import timedelta
import pandas as pd
data = {
'Year': [1979, 1979, 1979],
'DayFraction': [2.47, 2.83, 2.96]
}
df = pd.DataFrame(data)
df['new_date'] = (df
.apply(lambda x: pd.to_datetime(x['Year'], format='%Y') +
timedelta(days=x['DayFraction']),
axis=1))
print(df)
Year DayFraction new_date
0 1979 2.47 1979-01-03 11:16:48
1 1979 2.83 1979-01-03 19:55:12
2 1979 2.96 1979-01-03 23:02:24
If you get a TypeError: 'float' object is unsliceable error,
df['Year'] = pd.to_datetime(df['Year'], format='%Y')
df['DayFraction'] = df['DayFraction'].apply(lambda x: timedelta(days=x))
df['new_date'] = df['DayFraction'] + df['Year']
If these are lists,
Year = [1979, 1979, 1979]
DayF = [2.47, 2.83, 2.96]
new_dates = []
for y, d in zip(Year, DayF):
new = pd.to_datetime(y, format='%Y') + pd.Timedelta(days=d)
new_dates.append(new)
print(new_dates)
[Timestamp('1979-01-03 11:16:48'), Timestamp('1979-01-03 19:55:12'), Timestamp('1979-01-03 23:02:24')]
Represents a duration, the difference between two dates or times.
Timedelta is the pandas equivalent of python’s datetime.timedelta
and is interchangeable with it in most cases.
Parameters
value : Timedelta, timedelta, np.timedelta64, str, or int
unit : str, default 'ns'. Denotes the unit of the input, if input is an integer.
Possible values:
‘W’, ‘D’, ‘T’, ‘S’, ‘L’, ‘U’, or ‘N’
‘days’ or ‘day’
‘hours’, ‘hour’, ‘hr’, or ‘h’
‘minutes’, ‘minute’, ‘min’, or ‘m’
‘seconds’, ‘second’, or ‘sec’
‘milliseconds’, ‘millisecond’, ‘millis’, or ‘milli’
‘microseconds’, ‘microsecond’, ‘micros’, or ‘micro’
‘nanoseconds’, ‘nanosecond’, ‘nanos’, ‘nano’, or ‘ns’.
**kwargs : Available kwargs: {days, seconds, microseconds,
milliseconds, minutes, hours, weeks}.
Values for construction in compat with datetime.timedelta.
Numpy ints and floats will be coerced to python ints and floats.
Notes
The constructor may take in either both values of value and unit or
kwargs as above. Either one of them must be used during initialization
The .value attribute is always in ns.
If the precision is higher than nanoseconds, the precision of the duration is
truncated to nanoseconds.
Examples
Here we initialize Timedelta object with both value and unit
>>> td = pd.Timedelta(1, "d")
>>> td
Timedelta('1 days 00:00:00')
Here we initialize the Timedelta object with kwargs
>>> td2 = pd.Timedelta(days=1)
>>> td2
Timedelta('1 days 00:00:00')
We see that either way we get the same result
Attributes
asm8
Return a numpy timedelta64 array scalar view.
components
Return a components namedtuple-like.
days
delta
(DEPRECATED) Return the timedelta in nanoseconds (ns), for internal compatibility.
freq
(DEPRECATED) Freq property.
is_populated
(DEPRECATED) Is_populated property.
microseconds
nanoseconds
Return the number of nanoseconds (n), where 0 <= n < 1 microsecond.
resolution_string
Return a string representing the lowest timedelta resolution.
seconds
value
Methods
ceil(freq)
Return a new Timedelta ceiled to this resolution.
floor(freq)
Return a new Timedelta floored to this resolution.
isoformat
Format the Timedelta as ISO 8601 Duration.
round(freq)
Round the Timedelta to the specified resolution.
to_numpy
Convert the Timedelta to a NumPy timedelta64.
to_pytimedelta
Convert a pandas Timedelta object into a python datetime.timedelta object.
to_timedelta64
Return a numpy.timedelta64 object with 'ns' precision.
total_seconds
Total seconds in the duration.
view
Array view compatibility.
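A small sketch (an addition, not from the reference page) exercising a few of the methods listed above:
import pandas as pd

td = pd.Timedelta("1 days 02:34:56")

print(td.round("min"))      # Timedelta('1 days 02:35:00')
print(td.floor("H"))        # Timedelta('1 days 02:00:00')
print(td.total_seconds())   # 95696.0
print(td.isoformat())       # 'P1DT2H34M56S'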
| 108
| 1,175
|
Convert day fraction and year to Panda Python Datatime
I need help to convert
Year DayFraction
1 1979 2.47
2 1979 2.83
3 1979 2.96
to the format I need. I'm trying to have it in the 2019/02/02 8:30:00 format but in pandas. If I titled this wrong please let me know. I am still new to this.
The issue was resolved by the following (thank you all for helping):
temptime = []
for i in np.arange(len(Year)):
    temp = pd.to_datetime(Year[i], format='%Y') + pd.Timedelta(days=DayF[i] - 2)
    temptime = np.append([temptime], temp)
|
69,718,783
|
Pandas generate report with titles and specific structure
|
<p>I have a pandas data frame like this (representing an investment portfolio):</p>
<pre><code>import pandas as pd

data = {'category': ['stock', 'bond', 'cash', 'stock', 'cash'],
        'name': ['AA', 'BB', 'CC', 'DD', 'EE'],
        'quantity': [2, 2, 10, 4, 3],
        'price': [10, 15, 4, 2, 4],
        'value': [20, 30, 40, 8, 12]}
df = pd.DataFrame(data)
</code></pre>
<p>I would like to generate a report in a text file that looks like this :</p>
<pre><code>Stock: Total: 60
Name quantity price value
AA 2 10 20
CC 10 4 40
Bond: Total: 60
Name quantity price value
BB 2 15 30
Cash: Total: 52
Name quantity price value
CC 10 4 40
EE 3 4 12
</code></pre>
<p>I found a way to do this by looping through a list of dataframes, but it is kind of ugly. I think there should be a way with <code>iterrows</code> or <code>iteritems</code>, but I can’t make it work.</p>
<p>Thank you for your help !</p>
| 69,719,151
| 2021-10-26T07:07:52.783000
| 1
| null | 1
| 76
|
python|pandas
|
<p>You can loop by <code>groupby</code> object and write custom header with data:</p>
<pre><code>for i, g in df.groupby('category', sort=False):
with open('out.csv', 'a') as f:
f.write(f'{i}: Total: {g["value"].sum()}\n')
(g.drop('category', axis=1)
.to_csv(f, index=False, mode='a', sep='\t', line_terminator='\n'))
f.write('\n')
</code></pre>
<p>Output:</p>
<pre><code>stock: Total: 28
name quantity price value
AA 2 10 20
DD 4 2 8
bond: Total: 30
name quantity price value
BB 2 15 30
cash: Total: 52
name quantity price value
CC 10 4 40
EE 3 4 12
</code></pre>
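<p>If you prefer to open the file once and overwrite it on each run (a variant sketch added here, not part of the original answer, reusing the question's <code>df</code>), move the <code>open</code> outside the loop:</p>
<pre><code>with open('out.csv', 'w') as f:
    for i, g in df.groupby('category', sort=False):
        f.write(f'{i}: Total: {g["value"].sum()}\n')
        g.drop('category', axis=1).to_csv(f, index=False, sep='\t', line_terminator='\n')
        f.write('\n')
</code></pre>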
| 2021-10-26T07:36:02.270000
| 0
|
https://pandas.pydata.org/docs/user_guide/dsintro.html
|
Intro to data structures#
Intro to data structures#
We’ll start with a quick, non-comprehensive overview of the fundamental data
structures in pandas to get you started. The fundamental behavior about data
types, indexing, axis labeling, and alignment apply across all of the
objects. To get started, import NumPy and load pandas into your namespace:
In [1]: import numpy as np
In [2]: import pandas as pd
Fundamentally, data alignment is intrinsic. The link
You can loop by groupby object and write custom header with data:
for i, g in df.groupby('category', sort=False):
with open('out.csv', 'a') as f:
f.write(f'{i}: Total: {g["value"].sum()}\n')
(g.drop('category', axis=1)
.to_csv(f, index=False, mode='a', sep='\t', line_terminator='\n'))
f.write('\n')
Output:
stock: Total: 28
name quantity price value
AA 2 10 20
DD 4 2 8
bond: Total: 30
name quantity price value
BB 2 15 30
cash: Total: 52
name quantity price value
CC 10 4 40
EE 3 4 12
between labels and data will not be broken unless done so explicitly by you.
We’ll give a brief intro to the data structures, then consider all of the broad
categories of functionality and methods in separate sections.
Series#
Series is a one-dimensional labeled array capable of holding any data
type (integers, strings, floating point numbers, Python objects, etc.). The axis
labels are collectively referred to as the index. The basic method to create a Series is to call:
>>> s = pd.Series(data, index=index)
Here, data can be many different things:
a Python dict
an ndarray
a scalar value (like 5)
The passed index is a list of axis labels. Thus, this separates into a few
cases depending on what data is:
From ndarray
If data is an ndarray, index must be the same length as data. If no
index is passed, one will be created having values [0, ..., len(data) - 1].
In [3]: s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])
In [4]: s
Out[4]:
a 0.469112
b -0.282863
c -1.509059
d -1.135632
e 1.212112
dtype: float64
In [5]: s.index
Out[5]: Index(['a', 'b', 'c', 'd', 'e'], dtype='object')
In [6]: pd.Series(np.random.randn(5))
Out[6]:
0 -0.173215
1 0.119209
2 -1.044236
3 -0.861849
4 -2.104569
dtype: float64
Note
pandas supports non-unique index values. If an operation
that does not support duplicate index values is attempted, an exception
will be raised at that time.
From dict
Series can be instantiated from dicts:
In [7]: d = {"b": 1, "a": 0, "c": 2}
In [8]: pd.Series(d)
Out[8]:
b 1
a 0
c 2
dtype: int64
If an index is passed, the values in data corresponding to the labels in the
index will be pulled out.
In [9]: d = {"a": 0.0, "b": 1.0, "c": 2.0}
In [10]: pd.Series(d)
Out[10]:
a 0.0
b 1.0
c 2.0
dtype: float64
In [11]: pd.Series(d, index=["b", "c", "d", "a"])
Out[11]:
b 1.0
c 2.0
d NaN
a 0.0
dtype: float64
Note
NaN (not a number) is the standard missing data marker used in pandas.
From scalar value
If data is a scalar value, an index must be
provided. The value will be repeated to match the length of index.
In [12]: pd.Series(5.0, index=["a", "b", "c", "d", "e"])
Out[12]:
a 5.0
b 5.0
c 5.0
d 5.0
e 5.0
dtype: float64
Series is ndarray-like#
Series acts very similarly to a ndarray and is a valid argument to most NumPy functions.
However, operations such as slicing will also slice the index.
In [13]: s[0]
Out[13]: 0.4691122999071863
In [14]: s[:3]
Out[14]:
a 0.469112
b -0.282863
c -1.509059
dtype: float64
In [15]: s[s > s.median()]
Out[15]:
a 0.469112
e 1.212112
dtype: float64
In [16]: s[[4, 3, 1]]
Out[16]:
e 1.212112
d -1.135632
b -0.282863
dtype: float64
In [17]: np.exp(s)
Out[17]:
a 1.598575
b 0.753623
c 0.221118
d 0.321219
e 3.360575
dtype: float64
Note
We will address array-based indexing like s[[4, 3, 1]]
in section on indexing.
Like a NumPy array, a pandas Series has a single dtype.
In [18]: s.dtype
Out[18]: dtype('float64')
This is often a NumPy dtype. However, pandas and 3rd-party libraries
extend NumPy’s type system in a few places, in which case the dtype would
be an ExtensionDtype. Some examples within
pandas are Categorical data and Nullable integer data type. See dtypes
for more.
If you need the actual array backing a Series, use Series.array.
In [19]: s.array
Out[19]:
<PandasArray>
[ 0.4691122999071863, -0.2828633443286633, -1.5090585031735124,
-1.1356323710171934, 1.2121120250208506]
Length: 5, dtype: float64
Accessing the array can be useful when you need to do some operation without the
index (to disable automatic alignment, for example).
Series.array will always be an ExtensionArray.
Briefly, an ExtensionArray is a thin wrapper around one or more concrete arrays like a
numpy.ndarray. pandas knows how to take an ExtensionArray and
store it in a Series or a column of a DataFrame.
See dtypes for more.
While Series is ndarray-like, if you need an actual ndarray, then use
Series.to_numpy().
In [20]: s.to_numpy()
Out[20]: array([ 0.4691, -0.2829, -1.5091, -1.1356, 1.2121])
Even if the Series is backed by a ExtensionArray,
Series.to_numpy() will return a NumPy ndarray.
Series is dict-like#
A Series is also like a fixed-size dict in that you can get and set values by index
label:
In [21]: s["a"]
Out[21]: 0.4691122999071863
In [22]: s["e"] = 12.0
In [23]: s
Out[23]:
a 0.469112
b -0.282863
c -1.509059
d -1.135632
e 12.000000
dtype: float64
In [24]: "e" in s
Out[24]: True
In [25]: "f" in s
Out[25]: False
If a label is not contained in the index, an exception is raised:
In [26]: s["f"]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File ~/work/pandas/pandas/pandas/core/indexes/base.py:3802, in Index.get_loc(self, key, method, tolerance)
3801 try:
-> 3802 return self._engine.get_loc(casted_key)
3803 except KeyError as err:
File ~/work/pandas/pandas/pandas/_libs/index.pyx:138, in pandas._libs.index.IndexEngine.get_loc()
File ~/work/pandas/pandas/pandas/_libs/index.pyx:165, in pandas._libs.index.IndexEngine.get_loc()
File ~/work/pandas/pandas/pandas/_libs/hashtable_class_helper.pxi:5745, in pandas._libs.hashtable.PyObjectHashTable.get_item()
File ~/work/pandas/pandas/pandas/_libs/hashtable_class_helper.pxi:5753, in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'f'
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
Cell In[26], line 1
----> 1 s["f"]
File ~/work/pandas/pandas/pandas/core/series.py:981, in Series.__getitem__(self, key)
978 return self._values[key]
980 elif key_is_scalar:
--> 981 return self._get_value(key)
983 if is_hashable(key):
984 # Otherwise index.get_value will raise InvalidIndexError
985 try:
986 # For labels that don't resolve as scalars like tuples and frozensets
File ~/work/pandas/pandas/pandas/core/series.py:1089, in Series._get_value(self, label, takeable)
1086 return self._values[label]
1088 # Similar to Index.get_value, but we do not fall back to positional
-> 1089 loc = self.index.get_loc(label)
1090 return self.index._get_values_for_loc(self, loc, label)
File ~/work/pandas/pandas/pandas/core/indexes/base.py:3804, in Index.get_loc(self, key, method, tolerance)
3802 return self._engine.get_loc(casted_key)
3803 except KeyError as err:
-> 3804 raise KeyError(key) from err
3805 except TypeError:
3806 # If we have a listlike key, _check_indexing_error will raise
3807 # InvalidIndexError. Otherwise we fall through and re-raise
3808 # the TypeError.
3809 self._check_indexing_error(key)
KeyError: 'f'
Using the Series.get() method, a missing label will return None or specified default:
In [27]: s.get("f")
In [28]: s.get("f", np.nan)
Out[28]: nan
These labels can also be accessed by attribute.
Vectorized operations and label alignment with Series#
When working with raw NumPy arrays, looping through value-by-value is usually
not necessary. The same is true when working with Series in pandas.
Series can also be passed into most NumPy methods expecting an ndarray.
In [29]: s + s
Out[29]:
a 0.938225
b -0.565727
c -3.018117
d -2.271265
e 24.000000
dtype: float64
In [30]: s * 2
Out[30]:
a 0.938225
b -0.565727
c -3.018117
d -2.271265
e 24.000000
dtype: float64
In [31]: np.exp(s)
Out[31]:
a 1.598575
b 0.753623
c 0.221118
d 0.321219
e 162754.791419
dtype: float64
A key difference between Series and ndarray is that operations between Series
automatically align the data based on label. Thus, you can write computations
without giving consideration to whether the Series involved have the same
labels.
In [32]: s[1:] + s[:-1]
Out[32]:
a NaN
b -0.565727
c -3.018117
d -2.271265
e NaN
dtype: float64
The result of an operation between unaligned Series will have the union of
the indexes involved. If a label is not found in one Series or the other, the
result will be marked as missing NaN. Being able to write code without doing
any explicit data alignment grants immense freedom and flexibility in
interactive data analysis and research. The integrated data alignment features
of the pandas data structures set pandas apart from the majority of related
tools for working with labeled data.
Note
In general, we chose to make the default result of operations between
differently indexed objects yield the union of the indexes in order to
avoid loss of information. Having an index label, though the data is
missing, is typically important information as part of a computation. You
of course have the option of dropping labels with missing data via the
dropna function.
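A tiny self-contained sketch (an addition, not from the guide) of dropping the non-overlapping labels:
import pandas as pd

s1 = pd.Series([1.0, 2.0, 3.0], index=["a", "b", "c"])
s2 = pd.Series([10.0, 20.0, 30.0], index=["b", "c", "d"])

# Labels "a" and "d" become NaN in the union; dropna() removes them.
print((s1 + s2).dropna())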
Name attribute#
Series also has a name attribute:
In [33]: s = pd.Series(np.random.randn(5), name="something")
In [34]: s
Out[34]:
0 -0.494929
1 1.071804
2 0.721555
3 -0.706771
4 -1.039575
Name: something, dtype: float64
In [35]: s.name
Out[35]: 'something'
The Series name can be assigned automatically in many cases, in particular,
when selecting a single column from a DataFrame, the name will be assigned
the column label.
You can rename a Series with the pandas.Series.rename() method.
In [36]: s2 = s.rename("different")
In [37]: s2.name
Out[37]: 'different'
Note that s and s2 refer to different objects.
DataFrame#
DataFrame is a 2-dimensional labeled data structure with columns of
potentially different types. You can think of it like a spreadsheet or SQL
table, or a dict of Series objects. It is generally the most commonly used
pandas object. Like Series, DataFrame accepts many different kinds of input:
Dict of 1D ndarrays, lists, dicts, or Series
2-D numpy.ndarray
Structured or record ndarray
A Series
Another DataFrame
Along with the data, you can optionally pass index (row labels) and
columns (column labels) arguments. If you pass an index and / or columns,
you are guaranteeing the index and / or columns of the resulting
DataFrame. Thus, a dict of Series plus a specific index will discard all data
not matching up to the passed index.
If axis labels are not passed, they will be constructed from the input data
based on common sense rules.
From dict of Series or dicts#
The resulting index will be the union of the indexes of the various
Series. If there are any nested dicts, these will first be converted to
Series. If no columns are passed, the columns will be the ordered list of dict
keys.
In [38]: d = {
....: "one": pd.Series([1.0, 2.0, 3.0], index=["a", "b", "c"]),
....: "two": pd.Series([1.0, 2.0, 3.0, 4.0], index=["a", "b", "c", "d"]),
....: }
....:
In [39]: df = pd.DataFrame(d)
In [40]: df
Out[40]:
one two
a 1.0 1.0
b 2.0 2.0
c 3.0 3.0
d NaN 4.0
In [41]: pd.DataFrame(d, index=["d", "b", "a"])
Out[41]:
one two
d NaN 4.0
b 2.0 2.0
a 1.0 1.0
In [42]: pd.DataFrame(d, index=["d", "b", "a"], columns=["two", "three"])
Out[42]:
two three
d 4.0 NaN
b 2.0 NaN
a 1.0 NaN
The row and column labels can be accessed respectively by accessing the
index and columns attributes:
Note
When a particular set of columns is passed along with a dict of data, the
passed columns override the keys in the dict.
In [43]: df.index
Out[43]: Index(['a', 'b', 'c', 'd'], dtype='object')
In [44]: df.columns
Out[44]: Index(['one', 'two'], dtype='object')
From dict of ndarrays / lists#
The ndarrays must all be the same length. If an index is passed, it must
also be the same length as the arrays. If no index is passed, the
result will be range(n), where n is the array length.
In [45]: d = {"one": [1.0, 2.0, 3.0, 4.0], "two": [4.0, 3.0, 2.0, 1.0]}
In [46]: pd.DataFrame(d)
Out[46]:
one two
0 1.0 4.0
1 2.0 3.0
2 3.0 2.0
3 4.0 1.0
In [47]: pd.DataFrame(d, index=["a", "b", "c", "d"])
Out[47]:
one two
a 1.0 4.0
b 2.0 3.0
c 3.0 2.0
d 4.0 1.0
From structured or record array#
This case is handled identically to a dict of arrays.
In [48]: data = np.zeros((2,), dtype=[("A", "i4"), ("B", "f4"), ("C", "a10")])
In [49]: data[:] = [(1, 2.0, "Hello"), (2, 3.0, "World")]
In [50]: pd.DataFrame(data)
Out[50]:
A B C
0 1 2.0 b'Hello'
1 2 3.0 b'World'
In [51]: pd.DataFrame(data, index=["first", "second"])
Out[51]:
A B C
first 1 2.0 b'Hello'
second 2 3.0 b'World'
In [52]: pd.DataFrame(data, columns=["C", "A", "B"])
Out[52]:
C A B
0 b'Hello' 1 2.0
1 b'World' 2 3.0
Note
DataFrame is not intended to work exactly like a 2-dimensional NumPy
ndarray.
From a list of dicts#
In [53]: data2 = [{"a": 1, "b": 2}, {"a": 5, "b": 10, "c": 20}]
In [54]: pd.DataFrame(data2)
Out[54]:
a b c
0 1 2 NaN
1 5 10 20.0
In [55]: pd.DataFrame(data2, index=["first", "second"])
Out[55]:
a b c
first 1 2 NaN
second 5 10 20.0
In [56]: pd.DataFrame(data2, columns=["a", "b"])
Out[56]:
a b
0 1 2
1 5 10
From a dict of tuples#
You can automatically create a MultiIndexed frame by passing a tuples
dictionary.
In [57]: pd.DataFrame(
....: {
....: ("a", "b"): {("A", "B"): 1, ("A", "C"): 2},
....: ("a", "a"): {("A", "C"): 3, ("A", "B"): 4},
....: ("a", "c"): {("A", "B"): 5, ("A", "C"): 6},
....: ("b", "a"): {("A", "C"): 7, ("A", "B"): 8},
....: ("b", "b"): {("A", "D"): 9, ("A", "B"): 10},
....: }
....: )
....:
Out[57]:
a b
b a c a b
A B 1.0 4.0 5.0 8.0 10.0
C 2.0 3.0 6.0 7.0 NaN
D NaN NaN NaN NaN 9.0
From a Series#
The result will be a DataFrame with the same index as the input Series, and
with one column whose name is the original name of the Series (only if no other
column name provided).
In [58]: ser = pd.Series(range(3), index=list("abc"), name="ser")
In [59]: pd.DataFrame(ser)
Out[59]:
ser
a 0
b 1
c 2
From a list of namedtuples#
The field names of the first namedtuple in the list determine the columns
of the DataFrame. The remaining namedtuples (or tuples) are simply unpacked
and their values are fed into the rows of the DataFrame. If any of those
tuples is shorter than the first namedtuple then the later columns in the
corresponding row are marked as missing values. If any are longer than the
first namedtuple, a ValueError is raised.
In [60]: from collections import namedtuple
In [61]: Point = namedtuple("Point", "x y")
In [62]: pd.DataFrame([Point(0, 0), Point(0, 3), (2, 3)])
Out[62]:
x y
0 0 0
1 0 3
2 2 3
In [63]: Point3D = namedtuple("Point3D", "x y z")
In [64]: pd.DataFrame([Point3D(0, 0, 0), Point3D(0, 3, 5), Point(2, 3)])
Out[64]:
x y z
0 0 0 0.0
1 0 3 5.0
2 2 3 NaN
From a list of dataclasses#
New in version 1.1.0.
Data Classes as introduced in PEP557,
can be passed into the DataFrame constructor.
Passing a list of dataclasses is equivalent to passing a list of dictionaries.
Please be aware, that all values in the list should be dataclasses, mixing
types in the list would result in a TypeError.
In [65]: from dataclasses import make_dataclass
In [66]: Point = make_dataclass("Point", [("x", int), ("y", int)])
In [67]: pd.DataFrame([Point(0, 0), Point(0, 3), Point(2, 3)])
Out[67]:
x y
0 0 0
1 0 3
2 2 3
Missing data
To construct a DataFrame with missing data, we use np.nan to
represent missing values. Alternatively, you may pass a numpy.MaskedArray
as the data argument to the DataFrame constructor, and its masked entries will
be considered missing. See Missing data for more.
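A minimal sketch (an addition, not from the guide) of the masked-array path:
import numpy as np
import pandas as pd

arr = np.ma.masked_array([[1.0, 2.0], [3.0, 4.0]], mask=[[False, True], [False, False]])

# The masked entry shows up as NaN in the resulting DataFrame.
print(pd.DataFrame(arr, columns=["a", "b"]))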
Alternate constructors#
DataFrame.from_dict
DataFrame.from_dict() takes a dict of dicts or a dict of array-like sequences
and returns a DataFrame. It operates like the DataFrame constructor except
for the orient parameter which is 'columns' by default, but which can be
set to 'index' in order to use the dict keys as row labels.
In [68]: pd.DataFrame.from_dict(dict([("A", [1, 2, 3]), ("B", [4, 5, 6])]))
Out[68]:
A B
0 1 4
1 2 5
2 3 6
If you pass orient='index', the keys will be the row labels. In this
case, you can also pass the desired column names:
In [69]: pd.DataFrame.from_dict(
....: dict([("A", [1, 2, 3]), ("B", [4, 5, 6])]),
....: orient="index",
....: columns=["one", "two", "three"],
....: )
....:
Out[69]:
one two three
A 1 2 3
B 4 5 6
DataFrame.from_records
DataFrame.from_records() takes a list of tuples or an ndarray with structured
dtype. It works analogously to the normal DataFrame constructor, except that
the resulting DataFrame index may be a specific field of the structured
dtype.
In [70]: data
Out[70]:
array([(1, 2., b'Hello'), (2, 3., b'World')],
dtype=[('A', '<i4'), ('B', '<f4'), ('C', 'S10')])
In [71]: pd.DataFrame.from_records(data, index="C")
Out[71]:
A B
C
b'Hello' 1 2.0
b'World' 2 3.0
Column selection, addition, deletion#
You can treat a DataFrame semantically like a dict of like-indexed Series
objects. Getting, setting, and deleting columns works with the same syntax as
the analogous dict operations:
In [72]: df["one"]
Out[72]:
a 1.0
b 2.0
c 3.0
d NaN
Name: one, dtype: float64
In [73]: df["three"] = df["one"] * df["two"]
In [74]: df["flag"] = df["one"] > 2
In [75]: df
Out[75]:
one two three flag
a 1.0 1.0 1.0 False
b 2.0 2.0 4.0 False
c 3.0 3.0 9.0 True
d NaN 4.0 NaN False
Columns can be deleted or popped like with a dict:
In [76]: del df["two"]
In [77]: three = df.pop("three")
In [78]: df
Out[78]:
one flag
a 1.0 False
b 2.0 False
c 3.0 True
d NaN False
When inserting a scalar value, it will naturally be propagated to fill the
column:
In [79]: df["foo"] = "bar"
In [80]: df
Out[80]:
one flag foo
a 1.0 False bar
b 2.0 False bar
c 3.0 True bar
d NaN False bar
When inserting a Series that does not have the same index as the DataFrame, it
will be conformed to the DataFrame’s index:
In [81]: df["one_trunc"] = df["one"][:2]
In [82]: df
Out[82]:
one flag foo one_trunc
a 1.0 False bar 1.0
b 2.0 False bar 2.0
c 3.0 True bar NaN
d NaN False bar NaN
You can insert raw ndarrays but their length must match the length of the
DataFrame’s index.
By default, columns get inserted at the end. DataFrame.insert()
inserts at a particular location in the columns:
In [83]: df.insert(1, "bar", df["one"])
In [84]: df
Out[84]:
one bar flag foo one_trunc
a 1.0 1.0 False bar 1.0
b 2.0 2.0 False bar 2.0
c 3.0 3.0 True bar NaN
d NaN NaN False bar NaN
Assigning new columns in method chains#
Inspired by dplyr’s
mutate verb, DataFrame has an assign()
method that allows you to easily create new columns that are potentially
derived from existing columns.
In [85]: iris = pd.read_csv("data/iris.data")
In [86]: iris.head()
Out[86]:
SepalLength SepalWidth PetalLength PetalWidth Name
0 5.1 3.5 1.4 0.2 Iris-setosa
1 4.9 3.0 1.4 0.2 Iris-setosa
2 4.7 3.2 1.3 0.2 Iris-setosa
3 4.6 3.1 1.5 0.2 Iris-setosa
4 5.0 3.6 1.4 0.2 Iris-setosa
In [87]: iris.assign(sepal_ratio=iris["SepalWidth"] / iris["SepalLength"]).head()
Out[87]:
SepalLength SepalWidth PetalLength PetalWidth Name sepal_ratio
0 5.1 3.5 1.4 0.2 Iris-setosa 0.686275
1 4.9 3.0 1.4 0.2 Iris-setosa 0.612245
2 4.7 3.2 1.3 0.2 Iris-setosa 0.680851
3 4.6 3.1 1.5 0.2 Iris-setosa 0.673913
4 5.0 3.6 1.4 0.2 Iris-setosa 0.720000
In the example above, we inserted a precomputed value. We can also pass in
a function of one argument to be evaluated on the DataFrame being assigned to.
In [88]: iris.assign(sepal_ratio=lambda x: (x["SepalWidth"] / x["SepalLength"])).head()
Out[88]:
SepalLength SepalWidth PetalLength PetalWidth Name sepal_ratio
0 5.1 3.5 1.4 0.2 Iris-setosa 0.686275
1 4.9 3.0 1.4 0.2 Iris-setosa 0.612245
2 4.7 3.2 1.3 0.2 Iris-setosa 0.680851
3 4.6 3.1 1.5 0.2 Iris-setosa 0.673913
4 5.0 3.6 1.4 0.2 Iris-setosa 0.720000
assign() always returns a copy of the data, leaving the original
DataFrame untouched.
Passing a callable, as opposed to an actual value to be inserted, is
useful when you don’t have a reference to the DataFrame at hand. This is
common when using assign() in a chain of operations. For example,
we can limit the DataFrame to just those observations with a Sepal Length
greater than 5, calculate the ratio, and plot:
In [89]: (
....: iris.query("SepalLength > 5")
....: .assign(
....: SepalRatio=lambda x: x.SepalWidth / x.SepalLength,
....: PetalRatio=lambda x: x.PetalWidth / x.PetalLength,
....: )
....: .plot(kind="scatter", x="SepalRatio", y="PetalRatio")
....: )
....:
Out[89]: <AxesSubplot: xlabel='SepalRatio', ylabel='PetalRatio'>
Since a function is passed in, the function is computed on the DataFrame
being assigned to. Importantly, this is the DataFrame that’s been filtered
to those rows with sepal length greater than 5. The filtering happens first,
and then the ratio calculations. This is an example where we didn’t
have a reference to the filtered DataFrame available.
The function signature for assign() is simply **kwargs. The keys
are the column names for the new fields, and the values are either a value
to be inserted (for example, a Series or NumPy array), or a function
of one argument to be called on the DataFrame. A copy of the original
DataFrame is returned, with the new values inserted.
The order of **kwargs is preserved. This allows
for dependent assignment, where an expression later in **kwargs can refer
to a column created earlier in the same assign().
In [90]: dfa = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
In [91]: dfa.assign(C=lambda x: x["A"] + x["B"], D=lambda x: x["A"] + x["C"])
Out[91]:
A B C D
0 1 4 5 6
1 2 5 7 9
2 3 6 9 12
In the second expression, x['C'] will refer to the newly created column,
that’s equal to dfa['A'] + dfa['B'].
Indexing / selection#
The basics of indexing are as follows:
Operation
Syntax
Result
Select column
df[col]
Series
Select row by label
df.loc[label]
Series
Select row by integer location
df.iloc[loc]
Series
Slice rows
df[5:10]
DataFrame
Select rows by boolean vector
df[bool_vec]
DataFrame
Row selection, for example, returns a Series whose index is the columns of the
DataFrame:
In [92]: df.loc["b"]
Out[92]:
one 2.0
bar 2.0
flag False
foo bar
one_trunc 2.0
Name: b, dtype: object
In [93]: df.iloc[2]
Out[93]:
one 3.0
bar 3.0
flag True
foo bar
one_trunc NaN
Name: c, dtype: object
For a more exhaustive treatment of sophisticated label-based indexing and
slicing, see the section on indexing. We will address the
fundamentals of reindexing / conforming to new sets of labels in the
section on reindexing.
Data alignment and arithmetic#
Data alignment between DataFrame objects automatically align on both the
columns and the index (row labels). Again, the resulting object will have the
union of the column and row labels.
In [94]: df = pd.DataFrame(np.random.randn(10, 4), columns=["A", "B", "C", "D"])
In [95]: df2 = pd.DataFrame(np.random.randn(7, 3), columns=["A", "B", "C"])
In [96]: df + df2
Out[96]:
A B C D
0 0.045691 -0.014138 1.380871 NaN
1 -0.955398 -1.501007 0.037181 NaN
2 -0.662690 1.534833 -0.859691 NaN
3 -2.452949 1.237274 -0.133712 NaN
4 1.414490 1.951676 -2.320422 NaN
5 -0.494922 -1.649727 -1.084601 NaN
6 -1.047551 -0.748572 -0.805479 NaN
7 NaN NaN NaN NaN
8 NaN NaN NaN NaN
9 NaN NaN NaN NaN
When doing an operation between DataFrame and Series, the default behavior is
to align the Series index on the DataFrame columns, thus broadcasting
row-wise. For example:
In [97]: df - df.iloc[0]
Out[97]:
A B C D
0 0.000000 0.000000 0.000000 0.000000
1 -1.359261 -0.248717 -0.453372 -1.754659
2 0.253128 0.829678 0.010026 -1.991234
3 -1.311128 0.054325 -1.724913 -1.620544
4 0.573025 1.500742 -0.676070 1.367331
5 -1.741248 0.781993 -1.241620 -2.053136
6 -1.240774 -0.869551 -0.153282 0.000430
7 -0.743894 0.411013 -0.929563 -0.282386
8 -1.194921 1.320690 0.238224 -1.482644
9 2.293786 1.856228 0.773289 -1.446531
For explicit control over the matching and broadcasting behavior, see the
section on flexible binary operations.
Arithmetic operations with scalars operate element-wise:
In [98]: df * 5 + 2
Out[98]:
A B C D
0 3.359299 -0.124862 4.835102 3.381160
1 -3.437003 -1.368449 2.568242 -5.392133
2 4.624938 4.023526 4.885230 -6.575010
3 -3.196342 0.146766 -3.789461 -4.721559
4 6.224426 7.378849 1.454750 10.217815
5 -5.346940 3.785103 -1.373001 -6.884519
6 -2.844569 -4.472618 4.068691 3.383309
7 -0.360173 1.930201 0.187285 1.969232
8 -2.615303 6.478587 6.026220 -4.032059
9 14.828230 9.156280 8.701544 -3.851494
In [99]: 1 / df
Out[99]:
A B C D
0 3.678365 -2.353094 1.763605 3.620145
1 -0.919624 -1.484363 8.799067 -0.676395
2 1.904807 2.470934 1.732964 -0.583090
3 -0.962215 -2.697986 -0.863638 -0.743875
4 1.183593 0.929567 -9.170108 0.608434
5 -0.680555 2.800959 -1.482360 -0.562777
6 -1.032084 -0.772485 2.416988 3.614523
7 -2.118489 -71.634509 -2.758294 -162.507295
8 -1.083352 1.116424 1.241860 -0.828904
9 0.389765 0.698687 0.746097 -0.854483
In [100]: df ** 4
Out[100]:
A B C D
0 0.005462 3.261689e-02 0.103370 5.822320e-03
1 1.398165 2.059869e-01 0.000167 4.777482e+00
2 0.075962 2.682596e-02 0.110877 8.650845e+00
3 1.166571 1.887302e-02 1.797515 3.265879e+00
4 0.509555 1.339298e+00 0.000141 7.297019e+00
5 4.661717 1.624699e-02 0.207103 9.969092e+00
6 0.881334 2.808277e+00 0.029302 5.858632e-03
7 0.049647 3.797614e-08 0.017276 1.433866e-09
8 0.725974 6.437005e-01 0.420446 2.118275e+00
9 43.329821 4.196326e+00 3.227153 1.875802e+00
Boolean operators operate element-wise as well:
In [101]: df1 = pd.DataFrame({"a": [1, 0, 1], "b": [0, 1, 1]}, dtype=bool)
In [102]: df2 = pd.DataFrame({"a": [0, 1, 1], "b": [1, 1, 0]}, dtype=bool)
In [103]: df1 & df2
Out[103]:
a b
0 False False
1 False True
2 True False
In [104]: df1 | df2
Out[104]:
a b
0 True True
1 True True
2 True True
In [105]: df1 ^ df2
Out[105]:
a b
0 True True
1 True False
2 False True
In [106]: -df1
Out[106]:
a b
0 False True
1 True False
2 False False
Transposing#
To transpose, access the T attribute or DataFrame.transpose(),
similar to an ndarray:
# only show the first 5 rows
In [107]: df[:5].T
Out[107]:
0 1 2 3 4
A 0.271860 -1.087401 0.524988 -1.039268 0.844885
B -0.424972 -0.673690 0.404705 -0.370647 1.075770
C 0.567020 0.113648 0.577046 -1.157892 -0.109050
D 0.276232 -1.478427 -1.715002 -1.344312 1.643563
DataFrame interoperability with NumPy functions#
Most NumPy functions can be called directly on Series and DataFrame.
In [108]: np.exp(df)
Out[108]:
A B C D
0 1.312403 0.653788 1.763006 1.318154
1 0.337092 0.509824 1.120358 0.227996
2 1.690438 1.498861 1.780770 0.179963
3 0.353713 0.690288 0.314148 0.260719
4 2.327710 2.932249 0.896686 5.173571
5 0.230066 1.429065 0.509360 0.169161
6 0.379495 0.274028 1.512461 1.318720
7 0.623732 0.986137 0.695904 0.993865
8 0.397301 2.449092 2.237242 0.299269
9 13.009059 4.183951 3.820223 0.310274
In [109]: np.asarray(df)
Out[109]:
array([[ 0.2719, -0.425 , 0.567 , 0.2762],
[-1.0874, -0.6737, 0.1136, -1.4784],
[ 0.525 , 0.4047, 0.577 , -1.715 ],
[-1.0393, -0.3706, -1.1579, -1.3443],
[ 0.8449, 1.0758, -0.109 , 1.6436],
[-1.4694, 0.357 , -0.6746, -1.7769],
[-0.9689, -1.2945, 0.4137, 0.2767],
[-0.472 , -0.014 , -0.3625, -0.0062],
[-0.9231, 0.8957, 0.8052, -1.2064],
[ 2.5656, 1.4313, 1.3403, -1.1703]])
DataFrame is not intended to be a drop-in replacement for ndarray as its
indexing semantics and data model are quite different in places from an n-dimensional
array.
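One small illustration of that difference (a sketch using only a throwaway array and frame defined here): plain [] indexing on an ndarray selects rows by position, while on a DataFrame it selects columns by label, and positional row access goes through .iloc instead.
import numpy as np
import pandas as pd

arr = np.arange(6).reshape(3, 2)
frame = pd.DataFrame(arr, columns=["a", "b"])

print(arr[0])         # first row of the ndarray
print(frame["a"])     # column "a" of the DataFrame, selected by label
print(frame.iloc[0])  # positional row access uses .iloc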
Series implements __array_ufunc__, which allows it to work with NumPy’s
universal functions.
The ufunc is applied to the underlying array in a Series.
In [110]: ser = pd.Series([1, 2, 3, 4])
In [111]: np.exp(ser)
Out[111]:
0 2.718282
1 7.389056
2 20.085537
3 54.598150
dtype: float64
Changed in version 0.25.0: When multiple Series are passed to a ufunc, they are aligned before
performing the operation.
Like other parts of the library, pandas will automatically align labeled inputs
as part of a ufunc with multiple inputs. For example, using numpy.remainder()
on two Series with differently ordered labels will align before the operation.
In [112]: ser1 = pd.Series([1, 2, 3], index=["a", "b", "c"])
In [113]: ser2 = pd.Series([1, 3, 5], index=["b", "a", "c"])
In [114]: ser1
Out[114]:
a 1
b 2
c 3
dtype: int64
In [115]: ser2
Out[115]:
b 1
a 3
c 5
dtype: int64
In [116]: np.remainder(ser1, ser2)
Out[116]:
a 1
b 0
c 3
dtype: int64
As usual, the union of the two indices is taken, and non-overlapping values are filled
with missing values.
In [117]: ser3 = pd.Series([2, 4, 6], index=["b", "c", "d"])
In [118]: ser3
Out[118]:
b 2
c 4
d 6
dtype: int64
In [119]: np.remainder(ser1, ser3)
Out[119]:
a NaN
b 0.0
c 3.0
d NaN
dtype: float64
When a binary ufunc is applied to a Series and Index, the Series
implementation takes precedence and a Series is returned.
In [120]: ser = pd.Series([1, 2, 3])
In [121]: idx = pd.Index([4, 5, 6])
In [122]: np.maximum(ser, idx)
Out[122]:
0 4
1 5
2 6
dtype: int64
NumPy ufuncs are safe to apply to Series backed by non-ndarray arrays,
for example arrays.SparseArray (see Sparse calculation). If possible,
the ufunc is applied without converting the underlying data to an ndarray.
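A small sketch of this (assuming only the sparse-backed Series constructed here): applying a ufunc to a Series built on arrays.SparseArray keeps the sparse dtype rather than densifying it first.
import numpy as np
import pandas as pd

sparse_ser = pd.Series(pd.arrays.SparseArray([1.0, 0.0, 0.0, -2.0]))
result = np.abs(sparse_ser)   # the ufunc is applied to the sparse data directly

print(result.dtype)           # still a Sparse[float64, ...] dtype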
Console display#
A very large DataFrame will be truncated when it is displayed in the console.
You can also get a summary using info().
(The baseball dataset is from the plyr R package):
In [123]: baseball = pd.read_csv("data/baseball.csv")
In [124]: print(baseball)
id player year stint team lg ... so ibb hbp sh sf gidp
0 88641 womacto01 2006 2 CHN NL ... 4.0 0.0 0.0 3.0 0.0 0.0
1 88643 schilcu01 2006 1 BOS AL ... 1.0 0.0 0.0 0.0 0.0 0.0
.. ... ... ... ... ... .. ... ... ... ... ... ... ...
98 89533 aloumo01 2007 1 NYN NL ... 30.0 5.0 2.0 0.0 3.0 13.0
99 89534 alomasa02 2007 1 NYN NL ... 3.0 0.0 0.0 0.0 0.0 0.0
[100 rows x 23 columns]
In [125]: baseball.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 100 entries, 0 to 99
Data columns (total 23 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 100 non-null int64
1 player 100 non-null object
2 year 100 non-null int64
3 stint 100 non-null int64
4 team 100 non-null object
5 lg 100 non-null object
6 g 100 non-null int64
7 ab 100 non-null int64
8 r 100 non-null int64
9 h 100 non-null int64
10 X2b 100 non-null int64
11 X3b 100 non-null int64
12 hr 100 non-null int64
13 rbi 100 non-null float64
14 sb 100 non-null float64
15 cs 100 non-null float64
16 bb 100 non-null int64
17 so 100 non-null float64
18 ibb 100 non-null float64
19 hbp 100 non-null float64
20 sh 100 non-null float64
21 sf 100 non-null float64
22 gidp 100 non-null float64
dtypes: float64(9), int64(11), object(3)
memory usage: 18.1+ KB
However, using DataFrame.to_string() will return a string representation of the
DataFrame in tabular form, though it won’t always fit the console width:
In [126]: print(baseball.iloc[-20:, :12].to_string())
id player year stint team lg g ab r h X2b X3b
80 89474 finlest01 2007 1 COL NL 43 94 9 17 3 0
81 89480 embreal01 2007 1 OAK AL 4 0 0 0 0 0
82 89481 edmonji01 2007 1 SLN NL 117 365 39 92 15 2
83 89482 easleda01 2007 1 NYN NL 76 193 24 54 6 0
84 89489 delgaca01 2007 1 NYN NL 139 538 71 139 30 0
85 89493 cormirh01 2007 1 CIN NL 6 0 0 0 0 0
86 89494 coninje01 2007 2 NYN NL 21 41 2 8 2 0
87 89495 coninje01 2007 1 CIN NL 80 215 23 57 11 1
88 89497 clemero02 2007 1 NYA AL 2 2 0 1 0 0
89 89498 claytro01 2007 2 BOS AL 8 6 1 0 0 0
90 89499 claytro01 2007 1 TOR AL 69 189 23 48 14 0
91 89501 cirilje01 2007 2 ARI NL 28 40 6 8 4 0
92 89502 cirilje01 2007 1 MIN AL 50 153 18 40 9 2
93 89521 bondsba01 2007 1 SFN NL 126 340 75 94 14 0
94 89523 biggicr01 2007 1 HOU NL 141 517 68 130 31 3
95 89525 benitar01 2007 2 FLO NL 34 0 0 0 0 0
96 89526 benitar01 2007 1 SFN NL 19 0 0 0 0 0
97 89530 ausmubr01 2007 1 HOU NL 117 349 38 82 16 3
98 89533 aloumo01 2007 1 NYN NL 87 328 51 112 19 1
99 89534 alomasa02 2007 1 NYN NL 8 22 1 3 1 0
Wide DataFrames will be printed across multiple rows by
default:
In [127]: pd.DataFrame(np.random.randn(3, 12))
Out[127]:
0 1 2 ... 9 10 11
0 -1.226825 0.769804 -1.281247 ... -1.110336 -0.619976 0.149748
1 -0.732339 0.687738 0.176444 ... 1.462696 -1.743161 -0.826591
2 -0.345352 1.314232 0.690579 ... 0.896171 -0.487602 -0.082240
[3 rows x 12 columns]
You can change how much to print on a single row by setting the display.width
option:
In [128]: pd.set_option("display.width", 40) # default is 80
In [129]: pd.DataFrame(np.random.randn(3, 12))
Out[129]:
0 1 2 ... 9 10 11
0 -2.182937 0.380396 0.084844 ... -0.023688 2.410179 1.450520
1 0.206053 -0.251905 -2.213588 ... -0.025747 -0.988387 0.094055
2 1.262731 1.289997 0.082423 ... -0.281461 0.030711 0.109121
[3 rows x 12 columns]
You can adjust the max width of the individual columns by setting display.max_colwidth
In [130]: datafile = {
.....: "filename": ["filename_01", "filename_02"],
.....: "path": [
.....: "media/user_name/storage/folder_01/filename_01",
.....: "media/user_name/storage/folder_02/filename_02",
.....: ],
.....: }
.....:
In [131]: pd.set_option("display.max_colwidth", 30)
In [132]: pd.DataFrame(datafile)
Out[132]:
filename path
0 filename_01 media/user_name/storage/fo...
1 filename_02 media/user_name/storage/fo...
In [133]: pd.set_option("display.max_colwidth", 100)
In [134]: pd.DataFrame(datafile)
Out[134]:
filename path
0 filename_01 media/user_name/storage/folder_01/filename_01
1 filename_02 media/user_name/storage/folder_02/filename_02
You can also disable this feature via the expand_frame_repr option.
This will print the table in one block.
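For example (a short sketch; display.expand_frame_repr is the standard option name, and the frame below is just a throwaway wide frame):
import numpy as np
import pandas as pd

wide = pd.DataFrame(np.random.randn(3, 12))

# With the option disabled, each row is printed on a single line
# instead of being wrapped across multiple blocks.
pd.set_option("display.expand_frame_repr", False)
print(wide)

pd.reset_option("display.expand_frame_repr")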
DataFrame column attribute access and IPython completion#
If a DataFrame column label is a valid Python variable name, the column can be
accessed like an attribute:
In [135]: df = pd.DataFrame({"foo1": np.random.randn(5), "foo2": np.random.randn(5)})
In [136]: df
Out[136]:
foo1 foo2
0 1.126203 0.781836
1 -0.977349 -1.071357
2 1.474071 0.441153
3 -0.064034 2.353925
4 -1.282782 0.583787
In [137]: df.foo1
Out[137]:
0 1.126203
1 -0.977349
2 1.474071
3 -0.064034
4 -1.282782
Name: foo1, dtype: float64
The columns are also connected to the IPython
completion mechanism so they can be tab-completed:
In [5]: df.foo<TAB> # noqa: E225, E999
df.foo1 df.foo2
| 464
| 1,046
|
Pandas generate report with titles and specific structure
I have a pandas data frame like this (it represents an investment portfolio):
data = {'category': ['stock', 'bond', 'cash', 'stock', 'cash'],
        'name': ['AA', 'BB', 'CC', 'DD', 'EE'],
        'quantity': [2, 2, 10, 4, 3],
        'price': [10, 15, 4, 2, 4],
        'value': [20, 30, 40, 8, 12]}
df = pd.DataFrame(data)
I would like to generate a report in a text file that looks like this:
Stock: Total: 28
Name quantity price value
AA 2 10 20
DD 4 2 8
Bond: Total: 30
Name quantity price value
BB 2 15 30
Cash: Total: 52
Name quantity price value
CC 10 4 40
EE 3 4 12
I found a way to do this by looping through a list of dataframes, but it is kind of ugly. I think there should be a way with iterrows or iteritems, but I can’t make it work.
Thank you for your help !
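One possible sketch of such a report writer, using groupby rather than iterrows (this assumes the corrected data above; report.txt is an arbitrary output path):
import pandas as pd

data = {'category': ['stock', 'bond', 'cash', 'stock', 'cash'],
        'name': ['AA', 'BB', 'CC', 'DD', 'EE'],
        'quantity': [2, 2, 10, 4, 3],
        'price': [10, 15, 4, 2, 4],
        'value': [20, 30, 40, 8, 12]}
df = pd.DataFrame(data)

with open('report.txt', 'w') as fh:
    # sort=False keeps the categories in first-appearance order.
    for category, group in df.groupby('category', sort=False):
        fh.write(f"{category.capitalize()}: Total: {group['value'].sum()}\n")
        fh.write(group[['name', 'quantity', 'price', 'value']]
                 .to_string(index=False) + "\n\n")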
|
69,775,658
|
How to compute the ratio of Recovered Cases to Confirmed Cases for each nation using pandas in 8 lines
|
<p>I have this dataset url and need to compute the ratio of Recovered cases to Confirmed cases for each nation in just <strong>7 to 8 lines max.</strong></p>
<p>I also need to extract the top 10 nations with the highest ratio of Recovered to Confirmed cases, and the code must be <strong>at most 8 lines long</strong>.</p>
<pre><code>df = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_daily_reports/01-01-2021.csv')
</code></pre>
<p>I would really appreciate the help, thanks :)</p>
| 69,778,283
| 2021-10-29T23:23:37.367000
| 1
| 0
| -6
| 87
|
python|pandas
|
<h2>Computing the ratio</h2>
<p>Since there are multiple regions in a country, there are duplicated values in the <code>Country_Region</code> column. Therefore, I use <code>groupby</code> to sum the total cases of a nation.</p>
<pre class="lang-py prettyprint-override"><code>ratio = df.groupby("Country_Region")[["Recovered", "Confirmed"]].sum()
ratio["Ratio"] = ratio["Recovered"] / ratio["Confirmed"]
</code></pre>
<p>Let's get the first five nations.</p>
<pre class="lang-py prettyprint-override"><code>>>> ratio.head()
Recovered Confirmed Ratio
Country_Region
Afghanistan 41727 52513 0.794603
Albania 33634 58316 0.576754
Algeria 67395 99897 0.674645
Andorra 7463 8117 0.919428
Angola 11146 17568 0.634449
</code></pre>
<h2>Getting the countries with the highest ratio</h2>
<p>Then, you can filter out the ten countries with the highest ratio with <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.nlargest.html" rel="nofollow noreferrer"><code>Series.nlargest</code></a>.</p>
<pre class="lang-py prettyprint-override"><code>>>> ratio.nlargest(10, "Ratio")
Recovered Confirmed Ratio
Country_Region
Marshall Islands 4 4 1.000000
Samoa 2 2 1.000000
Vanuatu 1 1 1.000000
Singapore 58449 58629 0.996930
El Salvador 45960 46515 0.988068
Qatar 141556 144042 0.982741
Djibouti 5735 5840 0.982021
Diamond Princess 699 712 0.981742
Gabon 9388 9571 0.980880
Ghana 53758 54930 0.978664
</code></pre>
| 2021-10-30T09:18:25.637000
| 0
|
https://pandas.pydata.org/docs/user_guide/io.html
|
Computing the ratio
Since there are multiple regions in a country, there are duplicated values in the Country_Region column. Therefore, I use groupby to sum the total cases of a nation.
ratio = df.groupby("Country_Region")[["Recovered", "Confirmed"]].sum()
ratio["Ratio"] = ratio["Recovered"] / ratio["Confirmed"]
Let's get the first five nations.
>>> ratio.head()
Recovered Confirmed Ratio
Country_Region
Afghanistan 41727 52513 0.794603
Albania 33634 58316 0.576754
Algeria 67395 99897 0.674645
Andorra 7463 8117 0.919428
Angola 11146 17568 0.634449
Getting the countries with the highest ratio
Then, you can filter out the ten countries with the highest ratio with Series.nlargest.
>>> ratio.nlargest(10, "Ratio")
Recovered Confirmed Ratio
Country_Region
Marshall Islands 4 4 1.000000
Samoa 2 2 1.000000
Vanuatu 1 1 1.000000
Singapore 58449 58629 0.996930
El Salvador 45960 46515 0.988068
Qatar 141556 144042 0.982741
Djibouti 5735 5840 0.982021
Diamond Princess 699 712 0.981742
Gabon 9388 9571 0.980880
Ghana 53758 54930 0.978664
| 0
| 1,450
|
How to compute the ratio of Recovered Cases to Confirmed Cases for each nation using pandas in 8 lines
I have this dataset url and need to compute the ratio of Recovered cases to Confirmed cases for each nation in just 7 to 8 lines max.
I also need to extract the top 10 nations with the highest ratio of Recovered to Confirmed cases, and the code must be at most 8 lines long.
df = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_daily_reports/01-01-2021.csv')
I would really appreciate the help, thanks :)
|
69,339,880
|
Pandas data frame - apply function with lambda with multiple 'if else' statements
|
<p>I am hoping someone could help point out what I may be doing wrong in the following piece of code:</p>
<pre><code>master_output['tm_override'] = master_output.apply(lambda row: row['nrec_tm_lb'].astype(str) + '-' + row['nrec_tm_ub'].astype(str) if row['det_tw_fact'].isin([4, 5]) else row['tw2Open'] + dt.timedelta(hours=3).time() if (row['det_tw_fact'].isin([1, 2, 3]) and (~row['tw2Open'].isna()))
else row['tw1Open'] + dt.timedelta(hours=3).time() if (row['det_tw_fact'].isin([1, 2, 3]) and (~row['tw2Open'].isna())), axis=1)
</code></pre>
<p>I have a feeling that I may be doing something fundamentally silly here. The issue it seems may be coming from the last set of brackets ( ')))' ) before the 'axis=1' argument.</p>
<p>Thanks in advance for your help!</p>
| 69,340,095
| 2021-09-27T00:02:51.553000
| 1
| null | 1
| 89
|
python|pandas
|
<p>Nick ODell's comment is correct. I reformatted your original as:</p>
<pre><code>master_output['tm_override'] = (
master_output.apply(lambda row: row['nrec_tm_lb'].astype(str) + '-' + row['nrec_tm_ub'].astype(str)
if row['det_tw_fact'].isin([4, 5])
else row['tw2Open'] + dt.timedelta(hours=3).time()
if (row['det_tw_fact'].isin([1, 2, 3]) and (~row['tw2Open'].isna()))
else row['tw1Open']
+ dt.timedelta(hours=3).time()
if (row['det_tw_fact'].isin([1, 2, 3]) and (~row['tw2Open'].isna())), axis=1)
)
</code></pre>
<p>If you look at your last if, there is no matching else. What you are dealing with, I believe, is a DataFrame. You are trying to assign values to a column, but if you only have an if without an else, then there are no values to fill the column when the if condition is not met.</p>
<p>I don't know what values you want to fill in for the else part, but I've tried filling with ''. The syntax error goes away.</p>
<pre><code>master_output['tm_override'] = (
master_output.apply(lambda row: row['nrec_tm_lb'].astype(str) + '-' + row['nrec_tm_ub'].astype(str)
if row['det_tw_fact'].isin([4, 5])
else row['tw2Open'] + dt.timedelta(hours=3).time()
if (row['det_tw_fact'].isin([1, 2, 3]) and (~row['tw2Open'].isna()))
else row['tw1Open']
+ dt.timedelta(hours=3).time()
if (row['det_tw_fact'].isin([1, 2, 3]) and (~row['tw2Open'].isna()))
else '', axis=1)
)
</code></pre>
<p>What I get now is a different error because I don't have the DataFrame, but your syntax error is resolved.</p>
<pre><code>---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-251-72ce849871e0> in <module>
1 master_output['tm_override'] = (
----> 2 master_output.apply(lambda row: row['nrec_tm_lb'].astype(str) + '-' + row['nrec_tm_ub'].astype(str)
3 if row['det_tw_fact'].isin([4, 5])
4 else row['tw2Open'] + dt.timedelta(hours=3).time()
5 if (row['det_tw_fact'].isin([1, 2, 3]) and (~row['tw2Open'].isna()))
NameError: name 'master_output' is not defined
</code></pre>
| 2021-09-27T00:51:52.250000
| 0
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Nick ODell's comment is correct. I reformatted your original as:
master_output['tm_override'] = (
master_output.apply(lambda row: row['nrec_tm_lb'].astype(str) + '-' + row['nrec_tm_ub'].astype(str)
if row['det_tw_fact'].isin([4, 5])
else row['tw2Open'] + dt.timedelta(hours=3).time()
if (row['det_tw_fact'].isin([1, 2, 3]) and (~row['tw2Open'].isna()))
else row['tw1Open']
+ dt.timedelta(hours=3).time()
if (row['det_tw_fact'].isin([1, 2, 3]) and (~row['tw2Open'].isna())), axis=1)
)
If you look at your last if, there is no matching else. What you are dealing with, I believe, is a DataFrame. You are trying to assign values to a column, but if you only have an if without an else, then there are no values to fill the column when the if condition is not met.
I don't know what values you want to fill in for the else part, but I've tried filling with ''. The syntax error goes away.
master_output['tm_override'] = (
master_output.apply(lambda row: row['nrec_tm_lb'].astype(str) + '-' + row['nrec_tm_ub'].astype(str)
if row['det_tw_fact'].isin([4, 5])
else row['tw2Open'] + dt.timedelta(hours=3).time()
if (row['det_tw_fact'].isin([1, 2, 3]) and (~row['tw2Open'].isna()))
else row['tw1Open']
+ dt.timedelta(hours=3).time()
if (row['det_tw_fact'].isin([1, 2, 3]) and (~row['tw2Open'].isna()))
else '', axis=1)
)
What I get now is a different error because I don't have the DataFrame, but your syntax error is resolved.
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-251-72ce849871e0> in <module>
1 master_output['tm_override'] = (
----> 2 master_output.apply(lambda row: row['nrec_tm_lb'].astype(str) + '-' + row['nrec_tm_ub'].astype(str)
3 if row['det_tw_fact'].isin([4, 5])
4 else row['tw2Open'] + dt.timedelta(hours=3).time()
5 if (row['det_tw_fact'].isin([1, 2, 3]) and (~row['tw2Open'].isna()))
NameError: name 'master_output' is not defined
| 0
| 2,136
|
Pandas data frame - apply function with lambda with multiple 'if else' statements
I am hoping someone could help point out what I may be doing wrong in the following piece of code:
master_output['tm_override'] = master_output.apply(lambda row: row['nrec_tm_lb'].astype(str) + '-' + row['nrec_tm_ub'].astype(str) if row['det_tw_fact'].isin([4, 5]) else row['tw2Open'] + dt.timedelta(hours=3).time() if (row['det_tw_fact'].isin([1, 2, 3]) and (~row['tw2Open'].isna()))
else row['tw1Open'] + dt.timedelta(hours=3).time() if (row['det_tw_fact'].isin([1, 2, 3]) and (~row['tw2Open'].isna())), axis=1)
I have a feeling that I may be doing something fundamentally silly here. The issue it seems may be coming from the last set of brackets ( ')))' ) before the 'axis=1' argument.
Thanks in advance for your help!
|
69,586,786
|
Reference Column Name with Spaces
|
<p>Beginner question: I am trying to use the following line of code but am getting syntax errors:</p>
<pre><code>db = db.drop('Aggregated Alliance Products', axis=1).join(db.Aggregated Alliance Products.str.split(', ', expand=True).stack().to_frame('Aggregated Alliance Products').reset_index(1, drop=True))
</code></pre>
<p>When I use <code>db.Aggregated Alliance Products.str.split(', ', expand=True)</code>, how do I adjust the column name Aggregated Alliance Products to accommodate the spaces in it?</p>
<p>Sample:</p>
<pre><code>A  B  Aggregated Alliance Products
1  2  "1,2,4"
3  4  "5,6"
</code></pre>
<p>Desired Output:</p>
<pre><code>A  B  Aggregated Alliance Products
1  2  1
1  2  2
1  2  4
3  4  5
3  4  6
</code></pre>
| 69,587,195
| 2021-10-15T15:02:34.983000
| 2
| null | 0
| 89
|
python|pandas
|
<p><strong>EDIT</strong></p>
<p>Or you can use <code>assign</code> to achieve your goal without modifying the original data, as follows:</p>
<p><a href="https://i.stack.imgur.com/KkmQK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KkmQK.png" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>db.assign(**{'Aggregated Alliance Products': db['Aggregated Alliance Products'].str.split(',')}).explode('Aggregated Alliance Products')
</code></pre>
<hr />
<p>if you can modify db itself, you can use <code>explode</code> func like as follows:</p>
<pre><code>db = pd.DataFrame([(1, 2, '1,2,4'), (3, 4, '5,6')], columns=['A', 'B', 'Aggregated Alliance Products'])
db['Aggregated Alliance Products'] = db['Aggregated Alliance Products'].apply(lambda x: x.split(','))
db.explode('Aggregated Alliance Products')
</code></pre>
| 2021-10-15T15:33:08.950000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.query.html
|
pandas.DataFrame.query#
pandas.DataFrame.query#
DataFrame.query(expr, *, inplace=False, **kwargs)[source]#
Query the columns of a DataFrame with a boolean expression.
Parameters
exprstrThe query string to evaluate.
You can refer to variables
EDIT
Or you can use assign to achieve your goal without modifying the original data, as follows:
db.assign(**{'Aggregated Alliance Products': db['Aggregated Alliance Products'].str.split(',')}).explode('Aggregated Alliance Products')
If you can modify db itself, you can use the explode function as follows:
db = pd.DataFrame([(1, 2, '1,2,4'), (3, 4, '5,6')], columns=['A', 'B', 'Aggregated Alliance Products'])
db['Aggregated Alliance Products'] = db['Aggregated Alliance Products'].apply(lambda x: x.split(','))
db.explode('Aggregated Alliance Products')
in the environment by prefixing them with an ‘@’ character like
@a + b.
You can refer to column names that are not valid Python variable names
by surrounding them in backticks. Thus, column names containing spaces
or punctuations (besides underscores) or starting with digits must be
surrounded by backticks. (For example, a column named “Area (cm^2)” would
be referenced as `Area (cm^2)`). Column names which are Python keywords
(like “list”, “for”, “import”, etc) cannot be used.
For example, if one of your columns is called a a and you want
to sum it with b, your query should be `a a` + b.
New in version 0.25.0: Backtick quoting introduced.
New in version 1.0.0: Expanding functionality of backtick quoting for more than only spaces.
inplaceboolWhether to modify the DataFrame rather than creating a new one.
**kwargsSee the documentation for eval() for complete details
on the keyword arguments accepted by DataFrame.query().
Returns
DataFrame or NoneDataFrame resulting from the provided query expression or
None if inplace=True.
See also
evalEvaluate a string describing operations on DataFrame columns.
DataFrame.evalEvaluate a string describing operations on DataFrame columns.
Notes
The result of the evaluation of this expression is first passed to
DataFrame.loc and if that fails because of a
multidimensional key (e.g., a DataFrame) then the result will be passed
to DataFrame.__getitem__().
This method uses the top-level eval() function to
evaluate the passed query.
The query() method uses a slightly
modified Python syntax by default. For example, the & and |
(bitwise) operators have the precedence of their boolean cousins,
and and or. This is syntactically valid Python,
however the semantics are different.
You can change the semantics of the expression by passing the keyword
argument parser='python'. This enforces the same semantics as
evaluation in Python space. Likewise, you can pass engine='python'
to evaluate an expression using Python itself as a backend. This is not
recommended as it is inefficient compared to using numexpr as the
engine.
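A brief, hedged sketch of that difference (using only the small frame defined here): with the default parser, & already has the low precedence of and, while with parser='python' the comparisons must be parenthesized explicitly to keep the same meaning.
import pandas as pd

frame = pd.DataFrame({"A": [1, 2, 3, 4, 5], "B": [10, 8, 6, 4, 2]})

# Default pandas parser/engine: no extra parentheses needed.
res_default = frame.query("A > 2 & B > 2")

# Plain-Python semantics and a plain-Python evaluation backend.
res_python = frame.query("(A > 2) & (B > 2)", parser="python", engine="python")

assert res_default.equals(res_python)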
The DataFrame.index and
DataFrame.columns attributes of the
DataFrame instance are placed in the query namespace
by default, which allows you to treat both the index and columns of the
frame as a column in the frame.
The identifier index is used for the frame index; you can also
use the name of the index to identify it in a query. Please note that
Python keywords may not be used as identifiers.
For further details and examples see the query documentation in
indexing.
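As a small sketch of the index being available in the query namespace (assuming only the toy frame below):
import pandas as pd

frame = pd.DataFrame({"A": range(5)}, index=pd.Index(range(10, 15), name="idx"))

print(frame.query("index > 12"))  # the generic identifier `index`
print(frame.query("idx > 12"))    # or the index's own name, when it has one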
Backtick quoted variables
Backtick quoted variables are parsed as literal Python code and
are converted internally to a Python valid identifier.
This can lead to the following problems.
During parsing a number of disallowed characters inside the backtick
quoted string are replaced by strings that are allowed as a Python identifier.
These characters include all operators in Python, the space character, the
question mark, the exclamation mark, the dollar sign, and the euro sign.
For other characters that fall outside the ASCII range (U+0001..U+007F)
and those that are not further specified in PEP 3131,
the query parser will raise an error.
This excludes whitespace different than the space character,
but also the hashtag (as it is used for comments) and the backtick
itself (backtick can also not be escaped).
In a special case, quotes that make a pair around a backtick can
confuse the parser.
For example, `it's` > `that's` will raise an error,
as it forms a quoted string ('s > `that') with a backtick inside.
See also the Python documentation about lexical analysis
(https://docs.python.org/3/reference/lexical_analysis.html)
in combination with the source code in pandas.core.computation.parsing.
Examples
>>> df = pd.DataFrame({'A': range(1, 6),
... 'B': range(10, 0, -2),
... 'C C': range(10, 5, -1)})
>>> df
A B C C
0 1 10 10
1 2 8 9
2 3 6 8
3 4 4 7
4 5 2 6
>>> df.query('A > B')
A B C C
4 5 2 6
The previous expression is equivalent to
>>> df[df.A > df.B]
A B C C
4 5 2 6
For columns with spaces in their name, you can use backtick quoting.
>>> df.query('B == `C C`')
A B C C
0 1 10 10
The previous expression is equivalent to
>>> df[df.B == df['C C']]
A B C C
0 1 10 10
| 248
| 800
|
Reference Column Name with Spaces
Beginner question: I am trying to use the following line of code but am getting syntax errors:
db = db.drop('Aggregated Alliance Products', axis=1).join(db.Aggregated Alliance Products.str.split(', ', expand=True).stack().to_frame('Aggregated Alliance Products').reset_index(1, drop=True))
When I use db.Aggregated Alliance Products.str.split(', ', expand=True), how do I adjust the column name Aggregated Alliance Products to accommodate the spaces in it?
Sample:
A  B  Aggregated Alliance Products
1  2  "1,2,4"
3  4  "5,6"
Desired Output:
A  B  Aggregated Alliance Products
1  2  1
1  2  2
1  2  4
3  4  5
3  4  6
|
69,822,423
|
Pandas how to replace NaN in rows with duplicate keys
|
<p>I have the following dataframe:</p>
<pre><code> id item item_cost order_total
1 A 6 10
1 B 4 NaN
2 A 5 5
3 C 12 12
</code></pre>
<p>There are duplicate keys (column 'id') which relate to a specific order. order_total is a sum of each item_cost with the same id. I would now like to duplicate the order_total into each row of the same order. E.g. both rows with id = 1 should have an order_total of 10. One of them has NaN.</p>
<p>This dataframe is simply read in from a csv so I have done no calculations on any of these columns.</p>
<p>The simplified logic I am trying to achieve is: if column id is a duplicate, fill NaN values with the non-NaN value from a row with the same id.</p>
<p>I have tried the following code:</p>
<pre><code>print(df.groupby('id',as_index=False).sum())
</code></pre>
<p>However, the issue here is that I lose the item name which I need to use to perform further analysis.</p>
| 69,822,814
| 2021-11-03T09:25:59.340000
| 1
| null | 0
| 91
|
python|pandas
|
<p>Try this:</p>
<pre><code>df['order_total'] = df.groupby('id').order_total.transform('first')
print(df)
id item item_cost order_total
0 1 A 6 10.0
1 1 B 4 10.0
2 2 A 5 5.0
3 3 C 12 12.0
</code></pre>
| 2021-11-03T09:55:18.717000
| 0
|
https://pandas.pydata.org/docs/dev/user_guide/merging.html
|
Merge, join, concatenate and compare#
Merge, join, concatenate and compare#
pandas provides various facilities for easily combining together Series or
DataFrame with various kinds of set logic for the indexes
and relational algebra functionality in the case of join / merge-type
Try this:
df['order_total'] = df.groupby('id').order_total.transform('first')
print(df)
id item item_cost order_total
0 1 A 6 10.0
1 1 B 4 10.0
2 2 A 5 5.0
3 3 C 12 12.0
operations.
In addition, pandas also provides utilities to compare two Series or DataFrame
and summarize their differences.
Concatenating objects#
The concat() function (in the main pandas namespace) does all of
the heavy lifting of performing concatenation operations along an axis while
performing optional set logic (union or intersection) of the indexes (if any) on
the other axes. Note that I say “if any” because there is only a single possible
axis of concatenation for Series.
Before diving into all of the details of concat and what it can do, here is
a simple example:
In [1]: df1 = pd.DataFrame(
...: {
...: "A": ["A0", "A1", "A2", "A3"],
...: "B": ["B0", "B1", "B2", "B3"],
...: "C": ["C0", "C1", "C2", "C3"],
...: "D": ["D0", "D1", "D2", "D3"],
...: },
...: index=[0, 1, 2, 3],
...: )
...:
In [2]: df2 = pd.DataFrame(
...: {
...: "A": ["A4", "A5", "A6", "A7"],
...: "B": ["B4", "B5", "B6", "B7"],
...: "C": ["C4", "C5", "C6", "C7"],
...: "D": ["D4", "D5", "D6", "D7"],
...: },
...: index=[4, 5, 6, 7],
...: )
...:
In [3]: df3 = pd.DataFrame(
...: {
...: "A": ["A8", "A9", "A10", "A11"],
...: "B": ["B8", "B9", "B10", "B11"],
...: "C": ["C8", "C9", "C10", "C11"],
...: "D": ["D8", "D9", "D10", "D11"],
...: },
...: index=[8, 9, 10, 11],
...: )
...:
In [4]: frames = [df1, df2, df3]
In [5]: result = pd.concat(frames)
Like its sibling function on ndarrays, numpy.concatenate, pandas.concat
takes a list or dict of homogeneously-typed objects and concatenates them with
some configurable handling of “what to do with the other axes”:
pd.concat(
objs,
axis=0,
join="outer",
ignore_index=False,
keys=None,
levels=None,
names=None,
verify_integrity=False,
copy=True,
)
objs : a sequence or mapping of Series or DataFrame objects. If a
dict is passed, the sorted keys will be used as the keys argument, unless
it is passed, in which case the values will be selected (see below). Any None
objects will be dropped silently unless they are all None in which case a
ValueError will be raised.
axis : {0, 1, …}, default 0. The axis to concatenate along.
join : {‘inner’, ‘outer’}, default ‘outer’. How to handle indexes on
other axis(es). Outer for union and inner for intersection.
ignore_index : boolean, default False. If True, do not use the index
values on the concatenation axis. The resulting axis will be labeled 0, …,
n - 1. This is useful if you are concatenating objects where the
concatenation axis does not have meaningful indexing information. Note
the index values on the other axes are still respected in the join.
keys : sequence, default None. Construct hierarchical index using the
passed keys as the outermost level. If multiple levels passed, should
contain tuples.
levels : list of sequences, default None. Specific levels (unique values)
to use for constructing a MultiIndex. Otherwise they will be inferred from the
keys.
names : list, default None. Names for the levels in the resulting
hierarchical index.
verify_integrity : boolean, default False. Check whether the new
concatenated axis contains duplicates. This can be very expensive relative
to the actual data concatenation.
copy : boolean, default True. If False, do not copy data unnecessarily.
Without a little bit of context many of these arguments don’t make much sense.
Let’s revisit the above example. Suppose we wanted to associate specific keys
with each of the pieces of the chopped up DataFrame. We can do this using the
keys argument:
In [6]: result = pd.concat(frames, keys=["x", "y", "z"])
As you can see (if you’ve read the rest of the documentation), the resulting
object’s index has a hierarchical index. This
means that we can now select out each chunk by key:
In [7]: result.loc["y"]
Out[7]:
A B C D
4 A4 B4 C4 D4
5 A5 B5 C5 D5
6 A6 B6 C6 D6
7 A7 B7 C7 D7
It’s not a stretch to see how this can be very useful. More detail on this
functionality below.
Note
It is worth noting that concat() makes a full copy of the data, and that constantly
reusing this function can create a significant performance hit. If you need
to use the operation over several datasets, use a list comprehension.
frames = [ process_your_file(f) for f in files ]
result = pd.concat(frames)
Note
When concatenating DataFrames with named axes, pandas will attempt to preserve
these index/column names whenever possible. In the case where all inputs share a
common name, this name will be assigned to the result. When the input names do
not all agree, the result will be unnamed. The same is true for MultiIndex,
but the logic is applied separately on a level-by-level basis.
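A short sketch of that name handling (the frames below are throwaway examples): when both inputs carry the same index name it survives the concatenation, and when the names disagree the result is left unnamed.
import pandas as pd

a = pd.DataFrame({"x": [1, 2]}, index=pd.Index(["r0", "r1"], name="row"))
b = pd.DataFrame({"x": [3, 4]}, index=pd.Index(["r2", "r3"], name="row"))

print(pd.concat([a, b]).index.name)   # 'row' -- the shared name is kept

b.index.name = "other"
print(pd.concat([a, b]).index.name)   # None -- names disagree, so no name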
Set logic on the other axes#
When gluing together multiple DataFrames, you have a choice of how to handle
the other axes (other than the one being concatenated). This can be done in
the following two ways:
Take the union of them all, join='outer'. This is the default
option as it results in zero information loss.
Take the intersection, join='inner'.
Here is an example of each of these methods. First, the default join='outer'
behavior:
In [8]: df4 = pd.DataFrame(
...: {
...: "B": ["B2", "B3", "B6", "B7"],
...: "D": ["D2", "D3", "D6", "D7"],
...: "F": ["F2", "F3", "F6", "F7"],
...: },
...: index=[2, 3, 6, 7],
...: )
...:
In [9]: result = pd.concat([df1, df4], axis=1)
Here is the same thing with join='inner':
In [10]: result = pd.concat([df1, df4], axis=1, join="inner")
Lastly, suppose we just wanted to reuse the exact index from the original
DataFrame:
In [11]: result = pd.concat([df1, df4], axis=1).reindex(df1.index)
Similarly, we could index before the concatenation:
In [12]: pd.concat([df1, df4.reindex(df1.index)], axis=1)
Out[12]:
A B C D B D F
0 A0 B0 C0 D0 NaN NaN NaN
1 A1 B1 C1 D1 NaN NaN NaN
2 A2 B2 C2 D2 B2 D2 F2
3 A3 B3 C3 D3 B3 D3 F3
Ignoring indexes on the concatenation axis#
For DataFrame objects which don’t have a meaningful index, you may wish
to append them and ignore the fact that they may have overlapping indexes. To
do this, use the ignore_index argument:
In [13]: result = pd.concat([df1, df4], ignore_index=True, sort=False)
Concatenating with mixed ndims#
You can concatenate a mix of Series and DataFrame objects. The
Series will be transformed to DataFrame with the column name as
the name of the Series.
In [14]: s1 = pd.Series(["X0", "X1", "X2", "X3"], name="X")
In [15]: result = pd.concat([df1, s1], axis=1)
Note
Since we’re concatenating a Series to a DataFrame, we could have
achieved the same result with DataFrame.assign(). To concatenate an
arbitrary number of pandas objects (DataFrame or Series), use
concat.
If unnamed Series are passed they will be numbered consecutively.
In [16]: s2 = pd.Series(["_0", "_1", "_2", "_3"])
In [17]: result = pd.concat([df1, s2, s2, s2], axis=1)
Passing ignore_index=True will drop all name references.
In [18]: result = pd.concat([df1, s1], axis=1, ignore_index=True)
More concatenating with group keys#
A fairly common use of the keys argument is to override the column names
when creating a new DataFrame based on existing Series.
Notice how the default behaviour consists of letting the resulting DataFrame
inherit the parent Series’ name, when it exists.
In [19]: s3 = pd.Series([0, 1, 2, 3], name="foo")
In [20]: s4 = pd.Series([0, 1, 2, 3])
In [21]: s5 = pd.Series([0, 1, 4, 5])
In [22]: pd.concat([s3, s4, s5], axis=1)
Out[22]:
foo 0 1
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Through the keys argument we can override the existing column names.
In [23]: pd.concat([s3, s4, s5], axis=1, keys=["red", "blue", "yellow"])
Out[23]:
red blue yellow
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Let’s consider a variation of the very first example presented:
In [24]: result = pd.concat(frames, keys=["x", "y", "z"])
You can also pass a dict to concat in which case the dict keys will be used
for the keys argument (unless other keys are specified):
In [25]: pieces = {"x": df1, "y": df2, "z": df3}
In [26]: result = pd.concat(pieces)
In [27]: result = pd.concat(pieces, keys=["z", "y"])
The MultiIndex created has levels that are constructed from the passed keys and
the index of the DataFrame pieces:
In [28]: result.index.levels
Out[28]: FrozenList([['z', 'y'], [4, 5, 6, 7, 8, 9, 10, 11]])
If you wish to specify other levels (as will occasionally be the case), you can
do so using the levels argument:
In [29]: result = pd.concat(
....: pieces, keys=["x", "y", "z"], levels=[["z", "y", "x", "w"]], names=["group_key"]
....: )
....:
In [30]: result.index.levels
Out[30]: FrozenList([['z', 'y', 'x', 'w'], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]])
This is fairly esoteric, but it is actually necessary for implementing things
like GroupBy where the order of a categorical variable is meaningful.
Appending rows to a DataFrame#
If you have a series that you want to append as a single row to a DataFrame, you can convert the row into a
DataFrame and use concat
In [31]: s2 = pd.Series(["X0", "X1", "X2", "X3"], index=["A", "B", "C", "D"])
In [32]: result = pd.concat([df1, s2.to_frame().T], ignore_index=True)
You should use ignore_index with this method to instruct DataFrame to
discard its index. If you wish to preserve the index, you should construct an
appropriately-indexed DataFrame and append or concatenate those objects.
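A small sketch of the index-preserving variant (the row label 2 below is arbitrary): give the Series a name, which becomes its row label, and concatenate without ignore_index.
import pandas as pd

df1 = pd.DataFrame({"A": ["A0", "A1"]}, index=[0, 1])
row = pd.Series({"A": "A2"}, name=2)       # the name becomes the row label

out = pd.concat([df1, row.to_frame().T])   # index [0, 1, 2] is preserved
print(out)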
Database-style DataFrame or named Series joining/merging#
pandas has full-featured, high performance in-memory join operations
idiomatically very similar to relational databases like SQL. These methods
perform significantly better (in some cases well over an order of magnitude
better) than other open source implementations (like base::merge.data.frame
in R). The reason for this is careful algorithmic design and the internal layout
of the data in DataFrame.
See the cookbook for some advanced strategies.
Users who are familiar with SQL but new to pandas might be interested in a
comparison with SQL.
pandas provides a single function, merge(), as the entry point for
all standard database join operations between DataFrame or named Series objects:
pd.merge(
left,
right,
how="inner",
on=None,
left_on=None,
right_on=None,
left_index=False,
right_index=False,
sort=True,
suffixes=("_x", "_y"),
copy=True,
indicator=False,
validate=None,
)
left: A DataFrame or named Series object.
right: Another DataFrame or named Series object.
on: Column or index level names to join on. Must be found in both the left
and right DataFrame and/or Series objects. If not passed and left_index and
right_index are False, the intersection of the columns in the
DataFrames and/or Series will be inferred to be the join keys.
left_on: Columns or index levels from the left DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
right_on: Columns or index levels from the right DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
left_index: If True, use the index (row labels) from the left
DataFrame or Series as its join key(s). In the case of a DataFrame or Series with a MultiIndex
(hierarchical), the number of levels must match the number of join keys
from the right DataFrame or Series.
right_index: Same usage as left_index for the right DataFrame or Series
how: One of 'left', 'right', 'outer', 'inner', 'cross'. Defaults
to inner. See below for more detailed description of each method.
sort: Sort the result DataFrame by the join keys in lexicographical
order. Defaults to True, setting to False will improve performance
substantially in many cases.
suffixes: A tuple of string suffixes to apply to overlapping
columns. Defaults to ('_x', '_y').
copy: Always copy data (default True) from the passed DataFrame or named Series
objects, even when reindexing is not necessary. Cannot be avoided in many
cases but may improve performance / memory usage. The cases where copying
can be avoided are somewhat pathological but this option is provided
nonetheless.
indicator: Add a column to the output DataFrame called _merge
with information on the source of each row. _merge is Categorical-type
and takes on a value of left_only for observations whose merge key
only appears in 'left' DataFrame or Series, right_only for observations whose
merge key only appears in 'right' DataFrame or Series, and both if the
observation’s merge key is found in both.
validate : string, default None.
If specified, checks if merge is of specified type.
“one_to_one” or “1:1”: checks if merge keys are unique in both
left and right datasets.
“one_to_many” or “1:m”: checks if merge keys are unique in left
dataset.
“many_to_one” or “m:1”: checks if merge keys are unique in right
dataset.
“many_to_many” or “m:m”: allowed, but does not result in checks.
Note
Support for specifying index levels as the on, left_on, and
right_on parameters was added in version 0.23.0.
Support for merging named Series objects was added in version 0.24.0.
The return type will be the same as left. If left is a DataFrame or named Series
and right is a subclass of DataFrame, the return type will still be DataFrame.
merge is a function in the pandas namespace, and it is also available as a
DataFrame instance method merge(), with the calling
DataFrame being implicitly considered the left object in the join.
The related join() method, uses merge internally for the
index-on-index (by default) and column(s)-on-index join. If you are joining on
index only, you may wish to use DataFrame.join to save yourself some typing.
Brief primer on merge methods (relational algebra)#
Experienced users of relational databases like SQL will be familiar with the
terminology used to describe join operations between two SQL-table like
structures (DataFrame objects). There are several cases to consider which
are very important to understand:
one-to-one joins: for example when joining two DataFrame objects on
their indexes (which must contain unique values).
many-to-one joins: for example when joining an index (unique) to one or
more columns in a different DataFrame.
many-to-many joins: joining columns on columns.
Note
When joining columns on columns (potentially a many-to-many join), any
indexes on the passed DataFrame objects will be discarded.
It is worth spending some time understanding the result of the many-to-many
join case. In SQL / standard relational algebra, if a key combination appears
more than once in both tables, the resulting table will have the Cartesian
product of the associated data. Here is a very basic example with one unique
key combination:
In [33]: left = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [34]: right = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [35]: result = pd.merge(left, right, on="key")
Here is a more complicated example with multiple join keys. Only the keys
appearing in left and right are present (the intersection), since
how='inner' by default.
In [36]: left = pd.DataFrame(
....: {
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [37]: right = pd.DataFrame(
....: {
....: "key1": ["K0", "K1", "K1", "K2"],
....: "key2": ["K0", "K0", "K0", "K0"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [38]: result = pd.merge(left, right, on=["key1", "key2"])
The how argument to merge specifies how to determine which keys are to
be included in the resulting table. If a key combination does not appear in
either the left or right tables, the values in the joined table will be
NA. Here is a summary of the how options and their SQL equivalent names:
Merge method   SQL Join Name      Description
left           LEFT OUTER JOIN    Use keys from left frame only
right          RIGHT OUTER JOIN   Use keys from right frame only
outer          FULL OUTER JOIN    Use union of keys from both frames
inner          INNER JOIN         Use intersection of keys from both frames
cross          CROSS JOIN         Create the cartesian product of rows of both frames
In [39]: result = pd.merge(left, right, how="left", on=["key1", "key2"])
In [40]: result = pd.merge(left, right, how="right", on=["key1", "key2"])
In [41]: result = pd.merge(left, right, how="outer", on=["key1", "key2"])
In [42]: result = pd.merge(left, right, how="inner", on=["key1", "key2"])
In [43]: result = pd.merge(left, right, how="cross")
You can merge a multi-indexed Series and a DataFrame, if the names of
the MultiIndex correspond to the columns from the DataFrame. Transform
the Series to a DataFrame using Series.reset_index() before merging,
as shown in the following example.
In [44]: df = pd.DataFrame({"Let": ["A", "B", "C"], "Num": [1, 2, 3]})
In [45]: df
Out[45]:
Let Num
0 A 1
1 B 2
2 C 3
In [46]: ser = pd.Series(
....: ["a", "b", "c", "d", "e", "f"],
....: index=pd.MultiIndex.from_arrays(
....: [["A", "B", "C"] * 2, [1, 2, 3, 4, 5, 6]], names=["Let", "Num"]
....: ),
....: )
....:
In [47]: ser
Out[47]:
Let Num
A 1 a
B 2 b
C 3 c
A 4 d
B 5 e
C 6 f
dtype: object
In [48]: pd.merge(df, ser.reset_index(), on=["Let", "Num"])
Out[48]:
Let Num 0
0 A 1 a
1 B 2 b
2 C 3 c
Here is another example with duplicate join keys in DataFrames:
In [49]: left = pd.DataFrame({"A": [1, 2], "B": [2, 2]})
In [50]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [51]: result = pd.merge(left, right, on="B", how="outer")
Warning
Joining / merging on duplicate keys can cause a returned frame that is the multiplication of the row dimensions, which may result in memory overflow. It is the user's responsibility to manage duplicate values in keys before joining large DataFrames.
Checking for duplicate keys#
Users can use the validate argument to automatically check whether there
are unexpected duplicates in their merge keys. Key uniqueness is checked before
merge operations and so should protect against memory overflows. Checking key
uniqueness is also a good way to ensure user data structures are as expected.
In the following example, there are duplicate values of B in the right
DataFrame. As this is not a one-to-one merge – as specified in the
validate argument – an exception will be raised.
In [52]: left = pd.DataFrame({"A": [1, 2], "B": [1, 2]})
In [53]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [53]: result = pd.merge(left, right, on="B", how="outer", validate="one_to_one")
...
MergeError: Merge keys are not unique in right dataset; not a one-to-one merge
If the user is aware of the duplicates in the right DataFrame but wants to
ensure there are no duplicates in the left DataFrame, one can use the
validate='one_to_many' argument instead, which will not raise an exception.
In [54]: pd.merge(left, right, on="B", how="outer", validate="one_to_many")
Out[54]:
A_x B A_y
0 1 1 NaN
1 2 2 4.0
2 2 2 5.0
3 2 2 6.0
The merge indicator#
merge() accepts the argument indicator. If True, a
Categorical-type column called _merge will be added to the output object
that takes on values:
Observation Origin               _merge value
Merge key only in 'left' frame   left_only
Merge key only in 'right' frame  right_only
Merge key in both frames         both
In [55]: df1 = pd.DataFrame({"col1": [0, 1], "col_left": ["a", "b"]})
In [56]: df2 = pd.DataFrame({"col1": [1, 2, 2], "col_right": [2, 2, 2]})
In [57]: pd.merge(df1, df2, on="col1", how="outer", indicator=True)
Out[57]:
col1 col_left col_right _merge
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
The indicator argument will also accept string arguments, in which case the indicator function will use the value of the passed string as the name for the indicator column.
In [58]: pd.merge(df1, df2, on="col1", how="outer", indicator="indicator_column")
Out[58]:
col1 col_left col_right indicator_column
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
Merge dtypes#
Merging will preserve the dtype of the join keys.
In [59]: left = pd.DataFrame({"key": [1], "v1": [10]})
In [60]: left
Out[60]:
key v1
0 1 10
In [61]: right = pd.DataFrame({"key": [1, 2], "v1": [20, 30]})
In [62]: right
Out[62]:
key v1
0 1 20
1 2 30
We are able to preserve the join keys:
In [63]: pd.merge(left, right, how="outer")
Out[63]:
key v1
0 1 10
1 1 20
2 2 30
In [64]: pd.merge(left, right, how="outer").dtypes
Out[64]:
key int64
v1 int64
dtype: object
Of course if you have missing values that are introduced, then the
resulting dtype will be upcast.
In [65]: pd.merge(left, right, how="outer", on="key")
Out[65]:
key v1_x v1_y
0 1 10.0 20
1 2 NaN 30
In [66]: pd.merge(left, right, how="outer", on="key").dtypes
Out[66]:
key int64
v1_x float64
v1_y int64
dtype: object
Merging will preserve category dtypes of the mergands. See also the section on categoricals.
The left frame.
In [67]: from pandas.api.types import CategoricalDtype
In [68]: X = pd.Series(np.random.choice(["foo", "bar"], size=(10,)))
In [69]: X = X.astype(CategoricalDtype(categories=["foo", "bar"]))
In [70]: left = pd.DataFrame(
....: {"X": X, "Y": np.random.choice(["one", "two", "three"], size=(10,))}
....: )
....:
In [71]: left
Out[71]:
X Y
0 bar one
1 foo one
2 foo three
3 bar three
4 foo one
5 bar one
6 bar three
7 bar three
8 bar three
9 foo three
In [72]: left.dtypes
Out[72]:
X category
Y object
dtype: object
The right frame.
In [73]: right = pd.DataFrame(
....: {
....: "X": pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar"])),
....: "Z": [1, 2],
....: }
....: )
....:
In [74]: right
Out[74]:
X Z
0 foo 1
1 bar 2
In [75]: right.dtypes
Out[75]:
X category
Z int64
dtype: object
The merged result:
In [76]: result = pd.merge(left, right, how="outer")
In [77]: result
Out[77]:
X Y Z
0 bar one 2
1 bar three 2
2 bar one 2
3 bar three 2
4 bar three 2
5 bar three 2
6 foo one 1
7 foo three 1
8 foo one 1
9 foo three 1
In [78]: result.dtypes
Out[78]:
X category
Y object
Z int64
dtype: object
Note
The category dtypes must be exactly the same, meaning the same categories and the ordered attribute.
Otherwise the result will coerce to the categories’ dtype.
Note
Merging on category dtypes that are the same can be quite performant compared to object dtype merging.
Joining on index#
DataFrame.join() is a convenient method for combining the columns of two
potentially differently-indexed DataFrames into a single result
DataFrame. Here is a very basic example:
In [79]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=["K0", "K1", "K2"]
....: )
....:
In [80]: right = pd.DataFrame(
....: {"C": ["C0", "C2", "C3"], "D": ["D0", "D2", "D3"]}, index=["K0", "K2", "K3"]
....: )
....:
In [81]: result = left.join(right)
In [82]: result = left.join(right, how="outer")
The same as above, but with how='inner'.
In [83]: result = left.join(right, how="inner")
The data alignment here is on the indexes (row labels). This same behavior can
be achieved using merge plus additional arguments instructing it to use the
indexes:
In [84]: result = pd.merge(left, right, left_index=True, right_index=True, how="outer")
In [85]: result = pd.merge(left, right, left_index=True, right_index=True, how="inner")
Joining key columns on an index#
join() takes an optional on argument which may be a column
or multiple column names, which specifies that the passed DataFrame is to be
aligned on that column in the DataFrame. These two function calls are
completely equivalent:
left.join(right, on=key_or_keys)
pd.merge(
left, right, left_on=key_or_keys, right_index=True, how="left", sort=False
)
Obviously you can choose whichever form you find more convenient. For
many-to-one joins (where one of the DataFrame’s is already indexed by the
join key), using join may be more convenient. Here is a simple example:
In [86]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [87]: right = pd.DataFrame({"C": ["C0", "C1"], "D": ["D0", "D1"]}, index=["K0", "K1"])
In [88]: result = left.join(right, on="key")
In [89]: result = pd.merge(
....: left, right, left_on="key", right_index=True, how="left", sort=False
....: )
....:
To join on multiple keys, the passed DataFrame must have a MultiIndex:
In [90]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [91]: index = pd.MultiIndex.from_tuples(
....: [("K0", "K0"), ("K1", "K0"), ("K2", "K0"), ("K2", "K1")]
....: )
....:
In [92]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=index
....: )
....:
Now this can be joined by passing the two key column names:
In [93]: result = left.join(right, on=["key1", "key2"])
The default for DataFrame.join is to perform a left join (essentially a
“VLOOKUP” operation, for Excel users), which uses only the keys found in the
calling DataFrame. Other join types, for example inner join, can be just as
easily performed:
In [94]: result = left.join(right, on=["key1", "key2"], how="inner")
As you can see, this drops any rows where there was no match.
Joining a single Index to a MultiIndex#
You can join a singly-indexed DataFrame with a level of a MultiIndexed DataFrame.
The level will match on the name of the index of the singly-indexed frame against
a level name of the MultiIndexed frame.
In [95]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]},
....: index=pd.Index(["K0", "K1", "K2"], name="key"),
....: )
....:
In [96]: index = pd.MultiIndex.from_tuples(
....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")],
....: names=["key", "Y"],
....: )
....:
In [97]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]},
....: index=index,
....: )
....:
In [98]: result = left.join(right, how="inner")
This is equivalent to, but less verbose and more memory efficient / faster than, the following.
In [99]: result = pd.merge(
....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
....: ).set_index(["key","Y"])
....:
Joining with two MultiIndexes#
This is supported in a limited way, provided that the index for the right
argument is completely used in the join, and is a subset of the indices in
the left argument, as in this example:
In [100]: leftindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy"), [1, 2]], names=["abc", "xy", "num"]
.....: )
.....:
In [101]: left = pd.DataFrame({"v1": range(12)}, index=leftindex)
In [102]: left
Out[102]:
v1
abc xy num
a x 1 0
2 1
y 1 2
2 3
b x 1 4
2 5
y 1 6
2 7
c x 1 8
2 9
y 1 10
2 11
In [103]: rightindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy")], names=["abc", "xy"]
.....: )
.....:
In [104]: right = pd.DataFrame({"v2": [100 * i for i in range(1, 7)]}, index=rightindex)
In [105]: right
Out[105]:
v2
abc xy
a x 100
y 200
b x 300
y 400
c x 500
y 600
In [106]: left.join(right, on=["abc", "xy"], how="inner")
Out[106]:
v1 v2
abc xy num
a x 1 0 100
2 1 100
y 1 2 200
2 3 200
b x 1 4 300
2 5 300
y 1 6 400
2 7 400
c x 1 8 500
2 9 500
y 1 10 600
2 11 600
If that condition is not satisfied, a join with two multi-indexes can be
done using the following code.
In [107]: leftindex = pd.MultiIndex.from_tuples(
.....: [("K0", "X0"), ("K0", "X1"), ("K1", "X2")], names=["key", "X"]
.....: )
.....:
In [108]: left = pd.DataFrame(
.....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=leftindex
.....: )
.....:
In [109]: rightindex = pd.MultiIndex.from_tuples(
.....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")], names=["key", "Y"]
.....: )
.....:
In [110]: right = pd.DataFrame(
.....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=rightindex
.....: )
.....:
In [111]: result = pd.merge(
.....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
.....: ).set_index(["key", "X", "Y"])
.....:
Merging on a combination of columns and index levels#
Strings passed as the on, left_on, and right_on parameters
may refer to either column names or index level names. This enables merging
DataFrame instances on a combination of index levels and columns without
resetting indexes.
In [112]: left_index = pd.Index(["K0", "K0", "K1", "K2"], name="key1")
In [113]: left = pd.DataFrame(
.....: {
.....: "A": ["A0", "A1", "A2", "A3"],
.....: "B": ["B0", "B1", "B2", "B3"],
.....: "key2": ["K0", "K1", "K0", "K1"],
.....: },
.....: index=left_index,
.....: )
.....:
In [114]: right_index = pd.Index(["K0", "K1", "K2", "K2"], name="key1")
In [115]: right = pd.DataFrame(
.....: {
.....: "C": ["C0", "C1", "C2", "C3"],
.....: "D": ["D0", "D1", "D2", "D3"],
.....: "key2": ["K0", "K0", "K0", "K1"],
.....: },
.....: index=right_index,
.....: )
.....:
In [116]: result = left.merge(right, on=["key1", "key2"])
Note
When DataFrames are merged on a string that matches an index level in both
frames, the index level is preserved as an index level in the resulting
DataFrame.
Note
When DataFrames are merged using only some of the levels of a MultiIndex,
the extra levels will be dropped from the resulting merge. In order to
preserve those levels, use reset_index on those level names to move
those levels to columns prior to doing the merge.
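A minimal sketch of that pattern (the frames and names here are illustrative, not taken from the examples above): move the extra level to a column, merge, and then restore it.
import pandas as pd
# hypothetical frames, just to illustrate the reset_index pattern
left = pd.DataFrame(
    {"v1": range(4)},
    index=pd.MultiIndex.from_product([["a", "b"], [1, 2]], names=["key", "num"]),
)
right = pd.DataFrame({"v2": [10, 20]}, index=pd.Index(["a", "b"], name="key"))
# merging on "key" alone would drop the "num" level from the result;
# moving "num" to a column first and restoring it afterwards preserves it
result = (
    pd.merge(left.reset_index("num"), right, on="key")
    .set_index("num", append=True)
)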
Note
If a string matches both a column name and an index level name, then a
warning is issued and the column takes precedence. This will result in an
ambiguity error in a future version.
Overlapping value columns#
The merge suffixes argument takes a tuple or list of strings to append to
overlapping column names in the input DataFrames to disambiguate the result
columns:
In [117]: left = pd.DataFrame({"k": ["K0", "K1", "K2"], "v": [1, 2, 3]})
In [118]: right = pd.DataFrame({"k": ["K0", "K0", "K3"], "v": [4, 5, 6]})
In [119]: result = pd.merge(left, right, on="k")
In [120]: result = pd.merge(left, right, on="k", suffixes=("_l", "_r"))
DataFrame.join() has lsuffix and rsuffix arguments which behave
similarly.
In [121]: left = left.set_index("k")
In [122]: right = right.set_index("k")
In [123]: result = left.join(right, lsuffix="_l", rsuffix="_r")
Joining multiple DataFrames#
A list or tuple of DataFrames can also be passed to join()
to join them together on their indexes.
In [124]: right2 = pd.DataFrame({"v": [7, 8, 9]}, index=["K1", "K1", "K2"])
In [125]: result = left.join([right, right2])
Merging together values within Series or DataFrame columns#
Another fairly common situation is to have two like-indexed (or similarly
indexed) Series or DataFrame objects and wanting to “patch” values in
one object from values for matching indices in the other. Here is an example:
In [126]: df1 = pd.DataFrame(
.....: [[np.nan, 3.0, 5.0], [-4.6, np.nan, np.nan], [np.nan, 7.0, np.nan]]
.....: )
.....:
In [127]: df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5.0, 1.6, 4]], index=[1, 2])
For this, use the combine_first() method:
In [128]: result = df1.combine_first(df2)
Note that this method only takes values from the right DataFrame if they are
missing in the left DataFrame. A related method, update(),
alters non-NA values in place:
In [129]: df1.update(df2)
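As a small, self-contained sketch of the difference (re-creating the two frames above): combine_first() only fills the holes in the calling frame, whereas update() overwrites it in place wherever the other frame has a non-NA value.
import numpy as np
import pandas as pd
df1 = pd.DataFrame([[np.nan, 3.0, 5.0], [-4.6, np.nan, np.nan], [np.nan, 7.0, np.nan]])
df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5.0, 1.6, 4]], index=[1, 2])
patched = df1.combine_first(df2)  # fills only the NaN slots of df1 that df2 covers
df1.update(df2)                   # overwrites df1 in place wherever df2 has a non-NA value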
Timeseries friendly merging#
Merging ordered data#
A merge_ordered() function allows combining time series and other
ordered data. In particular it has an optional fill_method keyword to
fill/interpolate missing data:
In [130]: left = pd.DataFrame(
.....: {"k": ["K0", "K1", "K1", "K2"], "lv": [1, 2, 3, 4], "s": ["a", "b", "c", "d"]}
.....: )
.....:
In [131]: right = pd.DataFrame({"k": ["K1", "K2", "K4"], "rv": [1, 2, 3]})
In [132]: pd.merge_ordered(left, right, fill_method="ffill", left_by="s")
Out[132]:
k lv s rv
0 K0 1.0 a NaN
1 K1 1.0 a 1.0
2 K2 1.0 a 2.0
3 K4 1.0 a 3.0
4 K1 2.0 b 1.0
5 K2 2.0 b 2.0
6 K4 2.0 b 3.0
7 K1 3.0 c 1.0
8 K2 3.0 c 2.0
9 K4 3.0 c 3.0
10 K1 NaN d 1.0
11 K2 4.0 d 2.0
12 K4 4.0 d 3.0
Merging asof#
A merge_asof() is similar to an ordered left-join except that we match on
nearest key rather than equal keys. For each row in the left DataFrame,
we select the last row in the right DataFrame whose on key is less
than the left’s key. Both DataFrames must be sorted by the key.
Optionally an asof merge can perform a group-wise merge. This matches the
by key equally, in addition to the nearest match on the on key.
For example; we might have trades and quotes and we want to asof
merge them.
In [133]: trades = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.038",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: ]
.....: ),
.....: "ticker": ["MSFT", "MSFT", "GOOG", "GOOG", "AAPL"],
.....: "price": [51.95, 51.95, 720.77, 720.92, 98.00],
.....: "quantity": [75, 155, 100, 100, 100],
.....: },
.....: columns=["time", "ticker", "price", "quantity"],
.....: )
.....:
In [134]: quotes = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.030",
.....: "20160525 13:30:00.041",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.049",
.....: "20160525 13:30:00.072",
.....: "20160525 13:30:00.075",
.....: ]
.....: ),
.....: "ticker": ["GOOG", "MSFT", "MSFT", "MSFT", "GOOG", "AAPL", "GOOG", "MSFT"],
.....: "bid": [720.50, 51.95, 51.97, 51.99, 720.50, 97.99, 720.50, 52.01],
.....: "ask": [720.93, 51.96, 51.98, 52.00, 720.93, 98.01, 720.88, 52.03],
.....: },
.....: columns=["time", "ticker", "bid", "ask"],
.....: )
.....:
In [135]: trades
Out[135]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
In [136]: quotes
Out[136]:
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
By default we are taking the asof of the quotes.
In [137]: pd.merge_asof(trades, quotes, on="time", by="ticker")
Out[137]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 2ms between the quote time and the trade time.
In [138]: pd.merge_asof(trades, quotes, on="time", by="ticker", tolerance=pd.Timedelta("2ms"))
Out[138]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 10ms between the quote time and the trade time and we
exclude exact matches on time. Note that though we exclude the exact matches
(of the quotes), prior quotes do propagate to that point in time.
In [139]: pd.merge_asof(
.....: trades,
.....: quotes,
.....: on="time",
.....: by="ticker",
.....: tolerance=pd.Timedelta("10ms"),
.....: allow_exact_matches=False,
.....: )
.....:
Out[139]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN
3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
Comparing objects#
The DataFrame.compare() and Series.compare() methods allow you to
compare two DataFrames or two Series, respectively, and summarize their differences.
This feature was added in version 1.1.0.
For example, you might want to compare two DataFrame and stack their differences
side by side.
In [140]: df = pd.DataFrame(
.....: {
.....: "col1": ["a", "a", "b", "b", "a"],
.....: "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
.....: "col3": [1.0, 2.0, 3.0, 4.0, 5.0],
.....: },
.....: columns=["col1", "col2", "col3"],
.....: )
.....:
In [141]: df
Out[141]:
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
In [142]: df2 = df.copy()
In [143]: df2.loc[0, "col1"] = "c"
In [144]: df2.loc[2, "col3"] = 4.0
In [145]: df2
Out[145]:
col1 col2 col3
0 c 1.0 1.0
1 a 2.0 2.0
2 b 3.0 4.0
3 b NaN 4.0
4 a 5.0 5.0
In [146]: df.compare(df2)
Out[146]:
col1 col3
self other self other
0 a c NaN NaN
2 NaN NaN 3.0 4.0
By default, if two corresponding values are equal, they will be shown as NaN.
Furthermore, if all values in an entire row / column are equal, the row / column will be
omitted from the result. The remaining differences will be aligned on columns.
If you wish, you may choose to stack the differences on rows.
In [147]: df.compare(df2, align_axis=0)
Out[147]:
col1 col3
0 self a NaN
other c NaN
2 self NaN 3.0
other NaN 4.0
If you wish to keep all original rows and columns, set keep_shape argument
to True.
In [148]: df.compare(df2, keep_shape=True)
Out[148]:
col1 col2 col3
self other self other self other
0 a c NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3.0 4.0
3 NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN
You may also keep all the original values even if they are equal.
In [149]: df.compare(df2, keep_shape=True, keep_equal=True)
Out[149]:
col1 col2 col3
self other self other self other
0 a c 1.0 1.0 1.0 1.0
1 a a 2.0 2.0 2.0 2.0
2 b b 3.0 3.0 3.0 4.0
3 b b NaN NaN 4.0 4.0
4 a a 5.0 5.0 5.0 5.0
| 281
| 546
|
Pandas how to replace NaN in rows with duplicate keys
I have the following dataframe:
id item item_cost order_total
1 A 6 10
1 B 4 NaN
2 A 5 5
3 C 12 12
There are duplicate keys (column 'id') which relate to a specific order. order_total is a sum of each item_cost with the same id. I would now like to duplicate the order_total into each row of the same order. E.g. both rows with id = 1 should have an order_total of 10. One of them has NaN.
This dataframe is simply read in from a csv so I have done no calculations on any of these columns.
The simplified logic I am trying to achieve is: if column id is a duplicate, fill NaN values with the non-NaN value from a row with the same id.
I have tried the following code:
print(df.groupby('id',as_index=False).sum())
However, the issue here is that I lose the item name which I need to use to perform further analysis.
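One possible approach (a sketch only, not necessarily the accepted answer) is to broadcast the group-wise total back onto every row with transform instead of aggregating, which leaves the item column untouched; this assumes each id has exactly one non-missing order_total:
import numpy as np
import pandas as pd
df = pd.DataFrame(
    {
        "id": [1, 1, 2, 3],
        "item": ["A", "B", "A", "C"],
        "item_cost": [6, 4, 5, 12],
        "order_total": [10, np.nan, 5, 12],
    }
)
# transform returns a value for every row, so no rows are collapsed
df["order_total"] = df.groupby("id")["order_total"].transform("max")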
|
67,493,769
|
Is pandas .between() faster than using &?
|
<p>I have a dataframe that the user can apply a variety of filters on using sliders to specify a min and max value. Right now there are seven filters, but there may be more added in the future.</p>
<p>I currently have the filter definition as:</p>
<pre><code>filt = ( (df['A']>= sliderA[0]) & (df['A']<sliderA[1]) &
(df['B']>= sliderB[0]) & (df['B']<sliderB[1]) &
etc...)
</code></pre>
<p>Would it be computationally faster to use pandas' built-in <code>.between()</code> operator?</p>
<pre><code>filt = ( df['A'].between(sliderA[0], sliderA[1]) &
...)
</code></pre>
<p>My gut tells me no, since it would be going out and executing a separate function as opposed to writing out the evaluation in lower level. But my gut is also very hungry.</p>
<p>I don't think the speed is a big issue yet, but I can see in the future where it might become more important.</p>
| 67,591,776
| 2021-05-11T20:15:29.943000
| 1
| null | 0
| 91
|
python|pandas
|
<p>Using the <code>%timeit</code> function, I got the following results:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">Filter</th>
<th style="text-align: center;">operator</th>
<th style="text-align: center;">mean</th>
<th style="text-align: center;">st dev</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">1</td>
<td style="text-align: center;"><code>.between()</code></td>
<td style="text-align: center;">274us</td>
<td style="text-align: center;">19.8us</td>
</tr>
<tr>
<td style="text-align: center;">1</td>
<td style="text-align: center;"><code>&</code></td>
<td style="text-align: center;">282us</td>
<td style="text-align: center;">11.3us</td>
</tr>
<tr>
<td style="text-align: center;">2</td>
<td style="text-align: center;"><code>.between()</code></td>
<td style="text-align: center;">265us</td>
<td style="text-align: center;">2.64us</td>
</tr>
<tr>
<td style="text-align: center;">2</td>
<td style="text-align: center;"><code>&</code></td>
<td style="text-align: center;">265us</td>
<td style="text-align: center;">9.66us</td>
</tr>
</tbody>
</table>
</div>
<p>Filter 1 example:</p>
<pre><code>%timeit df['cpu_rank'].between(0,222)
%timeit (df['cpu_rank']>=0) & (df['cpu_rank']<=222)
</code></pre>
<p>Overall, not a great deal of difference, or at least not enough to warrant the work required to convert from <code>&</code> to <code>.between()</code></p>
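<p>One caveat worth adding (not part of the timing question itself): the two forms only give identical masks if the bounds are treated the same way. <code>.between()</code> is inclusive on both ends by default, while the mask in the question uses <code>&gt;=</code> and <code>&lt;</code>. A quick equivalence check, assuming pandas &gt;= 1.3 where <code>inclusive</code> accepts strings:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({"A": np.random.rand(1_000)})
lo, hi = 0.2, 0.8

mask_ops = (df["A"] &gt;= lo) &amp; (df["A"] &lt; hi)
mask_between = df["A"].between(lo, hi, inclusive="left")  # match &gt;= lo and &lt; hi

assert mask_ops.equals(mask_between)
</code></pre>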
| 2021-05-18T18:09:14.503000
| 0
|
https://pandas.pydata.org/docs/user_guide/enhancingperf.html
|
Enhancing performance#
Enhancing performance#
In this part of the tutorial, we will investigate how to speed up certain
functions operating on pandas DataFrame using three different techniques:
Cython, Numba and pandas.eval(). We will see a speed improvement of roughly 200x
when we use Cython and Numba on a test function operating row-wise on the
DataFrame. Using pandas.eval() we will speed up a sum by roughly 2x.
Note
In addition to following the steps in this tutorial, users interested in enhancing
performance are highly encouraged to install the
recommended dependencies for pandas.
These dependencies are often not installed by default, but will offer speed
improvements if present.
Cython (writing C extensions for pandas)#
For many use cases writing pandas in pure Python and NumPy is sufficient. In some
computationally heavy applications however, it can be possible to achieve sizable
speed-ups by offloading work to cython.
This tutorial assumes you have refactored as much as possible in Python, for example
by trying to remove for-loops and making use of NumPy vectorization. It’s always worth
optimising in Python first.
This tutorial walks through a “typical” process of cythonizing a slow computation.
We use an example from the Cython documentation
but in the context of pandas. Our final cythonized solution is around 100 times
faster than the pure Python solution.
Pure Python#
We have a DataFrame to which we want to apply a function row-wise.
In [1]: df = pd.DataFrame(
...: {
...: "a": np.random.randn(1000),
...: "b": np.random.randn(1000),
...: "N": np.random.randint(100, 1000, (1000)),
...: "x": "x",
...: }
...: )
...:
In [2]: df
Out[2]:
a b N x
0 0.469112 -0.218470 585 x
1 -0.282863 -0.061645 841 x
2 -1.509059 -0.723780 251 x
3 -1.135632 0.551225 972 x
4 1.212112 -0.497767 181 x
.. ... ... ... ..
995 -1.512743 0.874737 374 x
996 0.933753 1.120790 246 x
997 -0.308013 0.198768 157 x
998 -0.079915 1.757555 977 x
999 -1.010589 -1.115680 770 x
[1000 rows x 4 columns]
Here’s the function in pure Python:
In [3]: def f(x):
...: return x * (x - 1)
...:
In [4]: def integrate_f(a, b, N):
...: s = 0
...: dx = (b - a) / N
...: for i in range(N):
...: s += f(a + i * dx)
...: return s * dx
...:
We achieve our result by using DataFrame.apply() (row-wise):
In [5]: %timeit df.apply(lambda x: integrate_f(x["a"], x["b"], x["N"]), axis=1)
86 ms +- 1.44 ms per loop (mean +- std. dev. of 7 runs, 10 loops each)
But clearly this isn’t fast enough for us. Let’s take a look and see where the
time is spent during this operation (limited to the most time consuming
four calls) using the prun ipython magic function:
In [6]: %prun -l 4 df.apply(lambda x: integrate_f(x["a"], x["b"], x["N"]), axis=1) # noqa E999
621327 function calls (621307 primitive calls) in 0.168 seconds
Ordered by: internal time
List reduced from 225 to 4 due to restriction <4>
ncalls tottime percall cumtime percall filename:lineno(function)
1000 0.093 0.000 0.143 0.000 <ipython-input-4-c2a74e076cf0>:1(integrate_f)
552423 0.050 0.000 0.050 0.000 <ipython-input-3-c138bdd570e3>:1(f)
3000 0.004 0.000 0.018 0.000 series.py:966(__getitem__)
3000 0.002 0.000 0.009 0.000 series.py:1072(_get_value)
By far the majority of time is spent inside either integrate_f or f,
hence we’ll concentrate our efforts on cythonizing these two functions.
Plain Cython#
First we’re going to need to import the Cython magic function to IPython:
In [7]: %load_ext Cython
Now, let’s simply copy our functions over to Cython as is (the _plain suffix
is here to distinguish between function versions):
In [8]: %%cython
...: def f_plain(x):
...: return x * (x - 1)
...: def integrate_f_plain(a, b, N):
...: s = 0
...: dx = (b - a) / N
...: for i in range(N):
...: s += f_plain(a + i * dx)
...: return s * dx
...:
Note
If you’re having trouble pasting the above into your ipython, you may need
to be using bleeding edge IPython for paste to play well with cell magics.
In [9]: %timeit df.apply(lambda x: integrate_f_plain(x["a"], x["b"], x["N"]), axis=1)
50.9 ms +- 160 us per loop (mean +- std. dev. of 7 runs, 10 loops each)
Already this has shaved a third off, not too bad for a simple copy and paste.
Adding type#
We get another huge improvement simply by providing type information:
In [10]: %%cython
....: cdef double f_typed(double x) except? -2:
....: return x * (x - 1)
....: cpdef double integrate_f_typed(double a, double b, int N):
....: cdef int i
....: cdef double s, dx
....: s = 0
....: dx = (b - a) / N
....: for i in range(N):
....: s += f_typed(a + i * dx)
....: return s * dx
....:
In [11]: %timeit df.apply(lambda x: integrate_f_typed(x["a"], x["b"], x["N"]), axis=1)
9.47 ms +- 279 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
Now, we’re talking! It’s now over ten times faster than the original Python
implementation, and we haven’t really modified the code. Let’s have another
look at what’s eating up time:
In [12]: %prun -l 4 df.apply(lambda x: integrate_f_typed(x["a"], x["b"], x["N"]), axis=1)
68904 function calls (68884 primitive calls) in 0.026 seconds
Ordered by: internal time
List reduced from 224 to 4 due to restriction <4>
ncalls tottime percall cumtime percall filename:lineno(function)
3000 0.004 0.000 0.018 0.000 series.py:966(__getitem__)
3000 0.002 0.000 0.009 0.000 series.py:1072(_get_value)
16174 0.002 0.000 0.003 0.000 {built-in method builtins.isinstance}
3000 0.002 0.000 0.003 0.000 base.py:3754(get_loc)
Using ndarray#
It’s calling series a lot! It’s creating a Series from each row, and calling get from both
the index and the series (three times for each row). Function calls are expensive
in Python, so maybe we could minimize these by cythonizing the apply part.
Note
We are now passing ndarrays into the Cython function, fortunately Cython plays
very nicely with NumPy.
In [13]: %%cython
....: cimport numpy as np
....: import numpy as np
....: cdef double f_typed(double x) except? -2:
....: return x * (x - 1)
....: cpdef double integrate_f_typed(double a, double b, int N):
....: cdef int i
....: cdef double s, dx
....: s = 0
....: dx = (b - a) / N
....: for i in range(N):
....: s += f_typed(a + i * dx)
....: return s * dx
....: cpdef np.ndarray[double] apply_integrate_f(np.ndarray col_a, np.ndarray col_b,
....: np.ndarray col_N):
....: assert (col_a.dtype == np.float_
....: and col_b.dtype == np.float_ and col_N.dtype == np.int_)
....: cdef Py_ssize_t i, n = len(col_N)
....: assert (len(col_a) == len(col_b) == n)
....: cdef np.ndarray[double] res = np.empty(n)
....: for i in range(len(col_a)):
....: res[i] = integrate_f_typed(col_a[i], col_b[i], col_N[i])
....: return res
....:
The implementation is simple: it allocates an empty result array and loops over
the rows, applying our integrate_f_typed and storing each value in that array.
Warning
You can not pass a Series directly as a ndarray typed parameter
to a Cython function. Instead pass the actual ndarray using the
Series.to_numpy(). The reason is that the Cython
definition is specific to an ndarray and not the passed Series.
So, do not do this:
apply_integrate_f(df["a"], df["b"], df["N"])
But rather, use Series.to_numpy() to get the underlying ndarray:
apply_integrate_f(df["a"].to_numpy(), df["b"].to_numpy(), df["N"].to_numpy())
Note
Loops like this would be extremely slow in Python, but in Cython looping
over NumPy arrays is fast.
In [14]: %timeit apply_integrate_f(df["a"].to_numpy(), df["b"].to_numpy(), df["N"].to_numpy())
854 us +- 2.62 us per loop (mean +- std. dev. of 7 runs, 1,000 loops each)
We’ve gotten another big improvement. Let’s check again where the time is spent:
In [15]: %prun -l 4 apply_integrate_f(df["a"].to_numpy(), df["b"].to_numpy(), df["N"].to_numpy())
85 function calls in 0.001 seconds
Ordered by: internal time
List reduced from 24 to 4 due to restriction <4>
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.001 0.001 0.001 0.001 {built-in method _cython_magic_6991e1e67eedbb03acaf53f278f60013.apply_integrate_f}
1 0.000 0.000 0.001 0.001 {built-in method builtins.exec}
3 0.000 0.000 0.000 0.000 frame.py:3758(__getitem__)
3 0.000 0.000 0.000 0.000 base.py:5254(__contains__)
As one might expect, the majority of the time is now spent in apply_integrate_f,
so if we wanted to make any further efficiency gains we must continue to concentrate our
efforts here.
More advanced techniques#
There is still hope for improvement. Here’s an example of using some more
advanced Cython techniques:
In [16]: %%cython
....: cimport cython
....: cimport numpy as np
....: import numpy as np
....: cdef np.float64_t f_typed(np.float64_t x) except? -2:
....: return x * (x - 1)
....: cpdef np.float64_t integrate_f_typed(np.float64_t a, np.float64_t b, np.int64_t N):
....: cdef np.int64_t i
....: cdef np.float64_t s = 0.0, dx
....: dx = (b - a) / N
....: for i in range(N):
....: s += f_typed(a + i * dx)
....: return s * dx
....: @cython.boundscheck(False)
....: @cython.wraparound(False)
....: cpdef np.ndarray[np.float64_t] apply_integrate_f_wrap(
....: np.ndarray[np.float64_t] col_a,
....: np.ndarray[np.float64_t] col_b,
....: np.ndarray[np.int64_t] col_N
....: ):
....: cdef np.int64_t i, n = len(col_N)
....: assert len(col_a) == len(col_b) == n
....: cdef np.ndarray[np.float64_t] res = np.empty(n, dtype=np.float64)
....: for i in range(n):
....: res[i] = integrate_f_typed(col_a[i], col_b[i], col_N[i])
....: return res
....:
In [17]: %timeit apply_integrate_f_wrap(df["a"].to_numpy(), df["b"].to_numpy(), df["N"].to_numpy())
723 us +- 2.91 us per loop (mean +- std. dev. of 7 runs, 1,000 loops each)
Even faster, with the caveat that a bug in our Cython code (an off-by-one error,
for example) might cause a segfault because memory access isn’t checked.
For more about boundscheck and wraparound, see the Cython docs on
compiler directives.
Numba (JIT compilation)#
An alternative to statically compiling Cython code is to use a dynamic just-in-time (JIT) compiler with Numba.
Numba allows you to write a pure Python function which can be JIT compiled to native machine instructions, similar in performance to C, C++ and Fortran,
by decorating your function with @jit.
Numba works by generating optimized machine code using the LLVM compiler infrastructure at import time, runtime, or statically (using the included pycc tool).
Numba supports compilation of Python to run on either CPU or GPU hardware and is designed to integrate with the Python scientific software stack.
Note
The @jit compilation will add overhead to the runtime of the function, so performance benefits may not be realized especially when using small data sets.
Consider caching your function to avoid compilation overhead each time your function is run.
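One way to act on that caching advice is Numba’s own cache=True option, a sketch of which follows (this is an assumption about the kind of caching meant here, not something pandas itself provides):
import numba
import numpy as np
@numba.jit(nopython=True, cache=True)  # cache=True stores the compiled machine code on disk
def summed(values):
    total = 0.0
    for v in values:
        total += v
    return total
arr = np.arange(1_000_000, dtype=np.float64)
summed(arr)  # first call compiles and writes the cache; later calls, even in a new session, skip compilation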
Numba can be used in 2 ways with pandas:
Specify the engine="numba" keyword in select pandas methods
Define your own Python function decorated with @jit and pass the underlying NumPy array of Series or DataFrame (using to_numpy()) into the function
pandas Numba Engine#
If Numba is installed, one can specify engine="numba" in select pandas methods to execute the method using Numba.
Methods that support engine="numba" will also have an engine_kwargs keyword that accepts a dictionary that allows one to specify
"nogil", "nopython" and "parallel" keys with boolean values to pass into the @jit decorator.
If engine_kwargs is not specified, it defaults to {"nogil": False, "nopython": True, "parallel": False} unless otherwise specified.
In terms of performance, the first time a function is run using the Numba engine will be slow
as Numba will have some function compilation overhead. However, the JIT compiled functions are cached,
and subsequent calls will be fast. In general, the Numba engine is performant with
a larger amount of data points (e.g. 1+ million).
In [1]: data = pd.Series(range(1_000_000)) # noqa: E225
In [2]: roll = data.rolling(10)
In [3]: def f(x):
...: return np.sum(x) + 5
# Run the first time, compilation time will affect performance
In [4]: %timeit -r 1 -n 1 roll.apply(f, engine='numba', raw=True)
1.23 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
# Function is cached and performance will improve
In [5]: %timeit roll.apply(f, engine='numba', raw=True)
188 ms ± 1.93 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [6]: %timeit roll.apply(f, engine='cython', raw=True)
3.92 s ± 59 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
If your compute hardware contains multiple CPUs, the largest performance gain can be realized by setting parallel to True
to leverage more than 1 CPU. Internally, pandas leverages numba to parallelize computations over the columns of a DataFrame;
therefore, this is only beneficial for a DataFrame with a large number of columns.
In [1]: import numba
In [2]: numba.set_num_threads(1)
In [3]: df = pd.DataFrame(np.random.randn(10_000, 100))
In [4]: roll = df.rolling(100)
In [5]: %timeit roll.mean(engine="numba", engine_kwargs={"parallel": True})
347 ms ± 26 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [6]: numba.set_num_threads(2)
In [7]: %timeit roll.mean(engine="numba", engine_kwargs={"parallel": True})
201 ms ± 2.97 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Custom Function Examples#
A custom Python function decorated with @jit can be used with pandas objects by passing their NumPy array
representations with to_numpy().
import numba
@numba.jit
def f_plain(x):
return x * (x - 1)
@numba.jit
def integrate_f_numba(a, b, N):
s = 0
dx = (b - a) / N
for i in range(N):
s += f_plain(a + i * dx)
return s * dx
@numba.jit
def apply_integrate_f_numba(col_a, col_b, col_N):
n = len(col_N)
result = np.empty(n, dtype="float64")
assert len(col_a) == len(col_b) == n
for i in range(n):
result[i] = integrate_f_numba(col_a[i], col_b[i], col_N[i])
return result
def compute_numba(df):
result = apply_integrate_f_numba(
df["a"].to_numpy(), df["b"].to_numpy(), df["N"].to_numpy()
)
return pd.Series(result, index=df.index, name="result")
In [4]: %timeit compute_numba(df)
1000 loops, best of 3: 798 us per loop
In this example, using Numba was faster than Cython.
Numba can also be used to write vectorized functions that do not require the user to explicitly
loop over the observations of a vector; a vectorized function will be applied to each row automatically.
Consider the following example of doubling each observation:
import numba
def double_every_value_nonumba(x):
return x * 2
@numba.vectorize
def double_every_value_withnumba(x): # noqa E501
return x * 2
# Custom function without numba
In [5]: %timeit df["col1_doubled"] = df["a"].apply(double_every_value_nonumba) # noqa E501
1000 loops, best of 3: 797 us per loop
# Standard implementation (faster than a custom function)
In [6]: %timeit df["col1_doubled"] = df["a"] * 2
1000 loops, best of 3: 233 us per loop
# Custom function with numba
In [7]: %timeit df["col1_doubled"] = double_every_value_withnumba(df["a"].to_numpy())
1000 loops, best of 3: 145 us per loop
Caveats#
Numba is best at accelerating functions that apply numerical functions to NumPy
arrays. If you try to @jit a function that contains unsupported Python
or NumPy
code, compilation will revert to object mode, which
will most likely not speed up your function. If you would
prefer that Numba throw an error if it cannot compile a function in a way that
speeds up your code, pass Numba the argument
nopython=True (e.g. @jit(nopython=True)). For more on
troubleshooting Numba modes, see the Numba troubleshooting page.
Using parallel=True (e.g. @jit(parallel=True)) may result in a SIGABRT if the threading layer leads to unsafe
behavior. You can first specify a safe threading layer
before running a JIT function with parallel=True.
Generally, if you encounter a segfault (SIGSEGV) while using Numba, please report the issue
to the Numba issue tracker.
Expression evaluation via eval()#
The top-level function pandas.eval() implements expression evaluation of
Series and DataFrame objects.
Note
To benefit from using eval() you need to
install numexpr. See the recommended dependencies section for more details.
The point of using eval() for expression evaluation rather than
plain Python is two-fold: 1) large DataFrame objects are
evaluated more efficiently and 2) large arithmetic and boolean expressions are
evaluated all at once by the underlying engine (by default numexpr is used
for evaluation).
Note
You should not use eval() for simple
expressions or for expressions involving small DataFrames. In fact,
eval() is many orders of magnitude slower for
smaller expressions/objects than plain ol’ Python. A good rule of thumb is
to only use eval() when you have a
DataFrame with more than 10,000 rows.
eval() supports all arithmetic expressions supported by the
engine in addition to some extensions available only in pandas.
Note
The larger the frame and the larger the expression the more speedup you will
see from using eval().
Supported syntax#
These operations are supported by pandas.eval():
Arithmetic operations except for the left shift (<<) and right shift
(>>) operators, e.g., df + 2 * pi / s ** 4 % 42 - the_golden_ratio
Comparison operations, including chained comparisons, e.g., 2 < df < df2
Boolean operations, e.g., df < df2 and df3 < df4 or not df_bool
list and tuple literals, e.g., [1, 2] or (1, 2)
Attribute access, e.g., df.a
Subscript expressions, e.g., df[0]
Simple variable evaluation, e.g., pd.eval("df") (this is not very useful)
Math functions: sin, cos, exp, log, expm1, log1p,
sqrt, sinh, cosh, tanh, arcsin, arccos, arctan, arccosh,
arcsinh, arctanh, abs, arctan2 and log10.
This Python syntax is not allowed:
Expressions
Function calls other than math functions.
is/is not operations
if expressions
lambda expressions
list/set/dict comprehensions
Literal dict and set expressions
yield expressions
Generator expressions
Boolean expressions consisting of only scalar values
Statements
Neither simple
nor compound
statements are allowed. This includes things like for, while, and
if.
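A small sketch exercising a couple of the supported constructs listed above (df is just an illustrative frame):
import numpy as np
import pandas as pd
df = pd.DataFrame({"a": np.arange(5), "b": np.arange(5, 10)})
pd.eval("df.a + df.b * 2")  # arithmetic and attribute access
pd.eval("1 < df.a < 4")     # chained comparisons are supported and rewritten for you
# an if expression or a comprehension inside the string would raise instead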
eval() examples#
pandas.eval() works well with expressions containing large arrays.
First let’s create a few decent-sized arrays to play with:
In [18]: nrows, ncols = 20000, 100
In [19]: df1, df2, df3, df4 = [pd.DataFrame(np.random.randn(nrows, ncols)) for _ in range(4)]
Now let’s compare adding them together using plain ol’ Python versus
eval():
In [20]: %timeit df1 + df2 + df3 + df4
18.3 ms +- 251 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [21]: %timeit pd.eval("df1 + df2 + df3 + df4")
9.56 ms +- 588 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
Now let’s do the same thing but with comparisons:
In [22]: %timeit (df1 > 0) & (df2 > 0) & (df3 > 0) & (df4 > 0)
15.9 ms +- 225 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [23]: %timeit pd.eval("(df1 > 0) & (df2 > 0) & (df3 > 0) & (df4 > 0)")
27.9 ms +- 2.34 ms per loop (mean +- std. dev. of 7 runs, 10 loops each)
eval() also works with unaligned pandas objects:
In [24]: s = pd.Series(np.random.randn(50))
In [25]: %timeit df1 + df2 + df3 + df4 + s
30.1 ms +- 949 us per loop (mean +- std. dev. of 7 runs, 10 loops each)
In [26]: %timeit pd.eval("df1 + df2 + df3 + df4 + s")
12.4 ms +- 270 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
Note
Operations such as
1 and 2 # would parse to 1 & 2, but should evaluate to 2
3 or 4 # would parse to 3 | 4, but should evaluate to 3
~1 # this is okay, but slower when using eval
should be performed in Python. An exception will be raised if you try to
perform any boolean/bitwise operations with scalar operands that are not
of type bool or np.bool_. Again, you should perform these kinds of
operations in plain Python.
The DataFrame.eval() method#
In addition to the top level pandas.eval() function you can also
evaluate an expression in the “context” of a DataFrame.
In [27]: df = pd.DataFrame(np.random.randn(5, 2), columns=["a", "b"])
In [28]: df.eval("a + b")
Out[28]:
0 -0.246747
1 0.867786
2 -1.626063
3 -1.134978
4 -1.027798
dtype: float64
Any expression that is a valid pandas.eval() expression is also a valid
DataFrame.eval() expression, with the added benefit that you don’t have to
prefix the name of the DataFrame to the column(s) you’re
interested in evaluating.
In addition, you can perform assignment of columns within an expression.
This allows for formulaic evaluation. The assignment target can be a
new column name or an existing column name, and it must be a valid Python
identifier.
The inplace keyword determines whether this assignment will be performed
on the original DataFrame or return a copy with the new column.
In [29]: df = pd.DataFrame(dict(a=range(5), b=range(5, 10)))
In [30]: df.eval("c = a + b", inplace=True)
In [31]: df.eval("d = a + b + c", inplace=True)
In [32]: df.eval("a = 1", inplace=True)
In [33]: df
Out[33]:
a b c d
0 1 5 5 10
1 1 6 7 14
2 1 7 9 18
3 1 8 11 22
4 1 9 13 26
When inplace is set to False, the default, a copy of the DataFrame with the
new or modified columns is returned and the original frame is unchanged.
In [34]: df
Out[34]:
a b c d
0 1 5 5 10
1 1 6 7 14
2 1 7 9 18
3 1 8 11 22
4 1 9 13 26
In [35]: df.eval("e = a - c", inplace=False)
Out[35]:
a b c d e
0 1 5 5 10 -4
1 1 6 7 14 -6
2 1 7 9 18 -8
3 1 8 11 22 -10
4 1 9 13 26 -12
In [36]: df
Out[36]:
a b c d
0 1 5 5 10
1 1 6 7 14
2 1 7 9 18
3 1 8 11 22
4 1 9 13 26
As a convenience, multiple assignments can be performed by using a
multi-line string.
In [37]: df.eval(
....: """
....: c = a + b
....: d = a + b + c
....: a = 1""",
....: inplace=False,
....: )
....:
Out[37]:
a b c d
0 1 5 6 12
1 1 6 7 14
2 1 7 8 16
3 1 8 9 18
4 1 9 10 20
The equivalent in standard Python would be
In [38]: df = pd.DataFrame(dict(a=range(5), b=range(5, 10)))
In [39]: df["c"] = df["a"] + df["b"]
In [40]: df["d"] = df["a"] + df["b"] + df["c"]
In [41]: df["a"] = 1
In [42]: df
Out[42]:
a b c d
0 1 5 5 10
1 1 6 7 14
2 1 7 9 18
3 1 8 11 22
4 1 9 13 26
The DataFrame.query() method has an inplace keyword which determines
whether the query modifies the original frame.
In [43]: df = pd.DataFrame(dict(a=range(5), b=range(5, 10)))
In [44]: df.query("a > 2")
Out[44]:
a b
3 3 8
4 4 9
In [45]: df.query("a > 2", inplace=True)
In [46]: df
Out[46]:
a b
3 3 8
4 4 9
Local variables#
You must explicitly reference any local variable that you want to use in an
expression by placing the @ character in front of the name. For example,
In [47]: df = pd.DataFrame(np.random.randn(5, 2), columns=list("ab"))
In [48]: newcol = np.random.randn(len(df))
In [49]: df.eval("b + @newcol")
Out[49]:
0 -0.173926
1 2.493083
2 -0.881831
3 -0.691045
4 1.334703
dtype: float64
In [50]: df.query("b < @newcol")
Out[50]:
a b
0 0.863987 -0.115998
2 -2.621419 -1.297879
If you don’t prefix the local variable with @, pandas will raise an
exception telling you the variable is undefined.
When using DataFrame.eval() and DataFrame.query(), this allows you
to have a local variable and a DataFrame column with the same
name in an expression.
In [51]: a = np.random.randn()
In [52]: df.query("@a < a")
Out[52]:
a b
0 0.863987 -0.115998
In [53]: df.loc[a < df["a"]] # same as the previous expression
Out[53]:
a b
0 0.863987 -0.115998
With pandas.eval() you cannot use the @ prefix at all, because it
isn’t defined in that context. pandas will let you know this if you try to
use @ in a top-level call to pandas.eval(). For example,
In [54]: a, b = 1, 2
In [55]: pd.eval("@a + b")
Traceback (most recent call last):
File ~/micromamba/envs/test/lib/python3.8/site-packages/IPython/core/interactiveshell.py:3442 in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
Cell In[55], line 1
pd.eval("@a + b")
File ~/work/pandas/pandas/pandas/core/computation/eval.py:342 in eval
_check_for_locals(expr, level, parser)
File ~/work/pandas/pandas/pandas/core/computation/eval.py:167 in _check_for_locals
raise SyntaxError(msg)
File <string>
SyntaxError: The '@' prefix is not allowed in top-level eval calls.
please refer to your variables by name without the '@' prefix.
In this case, you should simply refer to the variables like you would in
standard Python.
In [56]: pd.eval("a + b")
Out[56]: 3
pandas.eval() parsers#
There are two different parsers and two different engines you can use as
the backend.
The default 'pandas' parser allows a more intuitive syntax for expressing
query-like operations (comparisons, conjunctions and disjunctions). In
particular, the precedence of the & and | operators is made equal to
the precedence of the corresponding boolean operations and and or.
For example, the above conjunction can be written without parentheses.
Alternatively, you can use the 'python' parser to enforce strict Python
semantics.
In [57]: expr = "(df1 > 0) & (df2 > 0) & (df3 > 0) & (df4 > 0)"
In [58]: x = pd.eval(expr, parser="python")
In [59]: expr_no_parens = "df1 > 0 & df2 > 0 & df3 > 0 & df4 > 0"
In [60]: y = pd.eval(expr_no_parens, parser="pandas")
In [61]: np.all(x == y)
Out[61]: True
The same expression can be “anded” together with the word and as
well:
In [62]: expr = "(df1 > 0) & (df2 > 0) & (df3 > 0) & (df4 > 0)"
In [63]: x = pd.eval(expr, parser="python")
In [64]: expr_with_ands = "df1 > 0 and df2 > 0 and df3 > 0 and df4 > 0"
In [65]: y = pd.eval(expr_with_ands, parser="pandas")
In [66]: np.all(x == y)
Out[66]: True
The and and or operators here have the same precedence that they would
in vanilla Python.
pandas.eval() backends#
There’s also the option to make eval() operate identically to plain
ol’ Python.
Note
Using the 'python' engine is generally not useful, except for testing
other evaluation engines against it. You will achieve no performance
benefits using eval() with engine='python' and in fact may
incur a performance hit.
You can see this by using pandas.eval() with the 'python' engine. It
is a bit slower (not by much) than evaluating the same expression in Python
In [67]: %timeit df1 + df2 + df3 + df4
17.9 ms +- 228 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [68]: %timeit pd.eval("df1 + df2 + df3 + df4", engine="python")
19 ms +- 375 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
pandas.eval() performance#
eval() is intended to speed up certain kinds of operations. In
particular, those operations involving complex expressions with large
DataFrame/Series objects should see a
significant performance benefit. Here is a plot showing the running time of
pandas.eval() as function of the size of the frame involved in the
computation. The two lines are two different engines.
Note
Operations with smallish objects (around 15k-20k rows) are faster using
plain Python:
This plot was created using a DataFrame with 3 columns each containing
floating point values generated using numpy.random.randn().
Technical minutia regarding expression evaluation#
Expressions that would result in an object dtype or involve datetime operations
(because of NaT) must be evaluated in Python space. The main reason for
this behavior is to maintain backwards compatibility with versions of NumPy <
1.7. In those versions of NumPy a call to ndarray.astype(str) will
truncate any strings that are more than 60 characters in length. Second, we
can’t pass object arrays to numexpr, so string comparisons must be
evaluated in Python space.
The upshot is that this only applies to object-dtype expressions. So, if
you have an expression–for example
In [69]: df = pd.DataFrame(
....: {"strings": np.repeat(list("cba"), 3), "nums": np.repeat(range(3), 3)}
....: )
....:
In [70]: df
Out[70]:
strings nums
0 c 0
1 c 0
2 c 0
3 b 1
4 b 1
5 b 1
6 a 2
7 a 2
8 a 2
In [71]: df.query("strings == 'a' and nums == 1")
Out[71]:
Empty DataFrame
Columns: [strings, nums]
Index: []
the numeric part of the comparison (nums == 1) will be evaluated by
numexpr.
In general, DataFrame.query()/pandas.eval() will
evaluate the subexpressions that can be evaluated by numexpr and those
that must be evaluated in Python space transparently to the user. This is done
by inferring the result type of an expression from its arguments and operators.
| 591
| 1,013
|
Is pandas .between() faster than using &?
I have a dataframe that the user can apply a variety of filters on using sliders to specify a min and max value. Right now there are seven filters, but there may be more added in the future.
I currently have the filter definition as:
filt = ( (df['A']>= sliderA[0]) & (df['A']<sliderA[1]) &
(df['B']>= sliderB[0]) & (df['B']<sliderB[1]) &
etc...)
Would it be computationally faster to use pandas' built-in .between() operator?
filt = ( df['A'].between(sliderA[0], sliderA[1]) &
...)
My gut tells me no, since it would be going out and executing a separate function as opposed to writing out the evaluation in lower level. But my gut is also very hungry.
I don't think the speed is a big issue yet, but I can see in the future where it might become more important.
|
69,575,180
|
Pandas can't read in excel file
|
<p>Something is wrong with my pandas module. I tried to read in an excel file using the following code, which works on my classmate's computer, but it's giving me an error on my computer:</p>
<pre><code>
FFT1=pd.read_excel('FFT1.xlsx', sheet_name='sheet1')
</code></pre>
<p>The file named 'FFT1.xlsx' is in the same directory as my jupyter notebook. The error message says:</p>
<pre><code>XLRDError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_7436/2793485739.py in <module>
----> 1 FFT1=pd.read_excel('FFT1.xlsx', sheet_name='sheet1')
D:\Softwares\Anaconda\lib\site-packages\pandas\io\excel\_base.py in read_excel(io, sheet_name, header, names, index_col, usecols, squeeze, dtype, engine, converters, true_values, false_values, skiprows, nrows, na_values, keep_default_na, verbose, parse_dates, date_parser, thousands, comment, skipfooter, convert_float, mangle_dupe_cols, **kwds)
302
303 if not isinstance(io, ExcelFile):
--> 304 io = ExcelFile(io, engine=engine)
305 elif engine and engine != io.engine:
306 raise ValueError(
D:\Softwares\Anaconda\lib\site-packages\pandas\io\excel\_base.py in __init__(self, io, engine)
819 self._io = stringify_path(io)
820
--> 821 self._reader = self._engines[engine](self._io)
822
823 def __fspath__(self):
D:\Softwares\Anaconda\lib\site-packages\pandas\io\excel\_xlrd.py in __init__(self, filepath_or_buffer)
19 err_msg = "Install xlrd >= 1.0.0 for Excel support"
20 import_optional_dependency("xlrd", extra=err_msg)
---> 21 super().__init__(filepath_or_buffer)
22
23 @property
D:\Softwares\Anaconda\lib\site-packages\pandas\io\excel\_base.py in __init__(self, filepath_or_buffer)
351 self.book = self.load_workbook(filepath_or_buffer)
352 elif isinstance(filepath_or_buffer, str):
--> 353 self.book = self.load_workbook(filepath_or_buffer)
354 elif isinstance(filepath_or_buffer, bytes):
355 self.book = self.load_workbook(BytesIO(filepath_or_buffer))
D:\Softwares\Anaconda\lib\site-packages\pandas\io\excel\_xlrd.py in load_workbook(self, filepath_or_buffer)
34 return open_workbook(file_contents=data)
35 else:
---> 36 return open_workbook(filepath_or_buffer)
37
38 @property
D:\Softwares\Anaconda\lib\site-packages\xlrd\__init__.py in open_workbook(filename, logfile, verbosity, use_mmap, file_contents, encoding_override, formatting_info, on_demand, ragged_rows, ignore_workbook_corruption)
168 # files that xlrd can parse don't start with the expected signature.
169 if file_format and file_format != 'xls':
--> 170 raise XLRDError(FILE_FORMAT_DESCRIPTIONS[file_format]+'; not supported')
171
172 bk = open_workbook_xls(
XLRDError: Excel xlsx file; not supported
</code></pre>
<p>How should I fix this?</p>
| 69,575,448
| 2021-10-14T17:45:35.467000
| 1
| null | 0
| 347
|
pandas
|
<ol>
<li>Make sure that you have already installed openpyxl; if you haven't, try</li>
</ol>
<p><code>pip install openpyxl</code></p>
<ol start="2">
<li>Change your code to</li>
</ol>
<p><code>FFT1=pd.read_excel('FFT1.xlsx', sheet_name='sheet1',engine='openpyxl')</code></p>
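<p>For context, an educated guess at the root cause rather than part of the original answer: <code>xlrd</code> 2.0 dropped support for everything except legacy <code>.xls</code> files, so an <code>.xlsx</code> file routed through it fails with exactly this error. Installing <code>openpyxl</code> and selecting it explicitly sidesteps the issue:</p>
<pre><code># pip install openpyxl
import pandas as pd

FFT1 = pd.read_excel('FFT1.xlsx', sheet_name='sheet1', engine='openpyxl')
</code></pre>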
| 2021-10-14T18:08:07.003000
| 0
|
https://pandas.pydata.org/docs/user_guide/io.html
|
IO tools (text, CSV, HDF5, …)#
IO tools (text, CSV, HDF5, …)#
The pandas I/O API is a set of top level reader functions accessed like
pandas.read_csv() that generally return a pandas object. The corresponding
writer functions are object methods that are accessed like
DataFrame.to_csv(). Below is a table containing available readers and
writers.
Format Type
Data Description
Reader
Writer
text
CSV
read_csv
to_csv
text
Fixed-Width Text File
read_fwf
text
JSON
read_json
to_json
text
HTML
read_html
to_html
text
LaTeX
Styler.to_latex
text
XML
read_xml
to_xml
text
Local clipboard
read_clipboard
to_clipboard
binary
MS Excel
read_excel
to_excel
binary
OpenDocument
read_excel
binary
HDF5 Format
read_hdf
to_hdf
binary
Feather Format
read_feather
to_feather
binary
Parquet Format
read_parquet
to_parquet
binary
ORC Format
read_orc
to_orc
binary
Stata
read_stata
to_stata
binary
SAS
read_sas
binary
SPSS
read_spss
binary
Python Pickle Format
read_pickle
to_pickle
SQL
SQL
read_sql
to_sql
SQL
Google BigQuery
read_gbq
to_gbq
Here is an informal performance comparison for some of these IO methods.
Note
For examples that use the StringIO class, make sure you import it
with from io import StringIO for Python 3.
CSV & text files#
The workhorse function for reading text files (a.k.a. flat files) is
read_csv(). See the cookbook for some advanced strategies.
Parsing options#
read_csv() accepts the following common arguments:
Basic#
filepath_or_buffervariousEither a path to a file (a str, pathlib.Path,
or py:py._path.local.LocalPath), URL (including http, ftp, and S3
locations), or any object with a read() method (such as an open file or
StringIO).
sepstr, defaults to ',' for read_csv(), \t for read_table()Delimiter to use. If sep is None, the C engine cannot automatically detect
the separator, but the Python parsing engine can, meaning the latter will be
used and automatically detect the separator by Python’s builtin sniffer tool,
csv.Sniffer. In addition, separators longer than 1 character and
different from '\s+' will be interpreted as regular expressions and
will also force the use of the Python parsing engine. Note that regex
delimiters are prone to ignoring quoted data. Regex example: '\\r\\t'.
delimiterstr, default NoneAlternative argument name for sep.
delim_whitespaceboolean, default FalseSpecifies whether or not whitespace (e.g. ' ' or '\t')
will be used as the delimiter. Equivalent to setting sep='\s+'.
If this option is set to True, nothing should be passed in for the
delimiter parameter.
Column and index locations and names#
headerint or list of ints, default 'infer'Row number(s) to use as the column names, and the start of the
data. Default behavior is to infer the column names: if no names are
passed the behavior is identical to header=0 and column names
are inferred from the first line of the file, if column names are
passed explicitly then the behavior is identical to
header=None. Explicitly pass header=0 to be able to replace
existing names.
The header can be a list of ints that specify row locations
for a MultiIndex on the columns e.g. [0,1,3]. Intervening rows
that are not specified will be skipped (e.g. 2 in this example is
skipped). Note that this parameter ignores commented lines and empty
lines if skip_blank_lines=True, so header=0 denotes the first
line of data rather than the first line of the file.
namesarray-like, default NoneList of column names to use. If file contains no header row, then you should
explicitly pass header=None. Duplicates in this list are not allowed.
index_colint, str, sequence of int / str, or False, optional, default NoneColumn(s) to use as the row labels of the DataFrame, either given as
string name or column index. If a sequence of int / str is given, a
MultiIndex is used.
Note
index_col=False can be used to force pandas to not use the first
column as the index, e.g. when you have a malformed file with delimiters at
the end of each line.
The default value of None instructs pandas to guess. If the number of
fields in the column header row is equal to the number of fields in the body
of the data file, then a default index is used. If it is larger, then
the first columns are used as index so that the remaining number of fields in
the body are equal to the number of fields in the header.
The first row after the header is used to determine the number of columns,
which will go into the index. If the subsequent rows contain less columns
than the first row, they are filled with NaN.
This can be avoided through usecols. This ensures that the columns are
taken as is and the trailing data are ignored.
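A sketch of that last point, using a deliberately malformed CSV with a trailing delimiter on each data row (the data string is only illustrative):
from io import StringIO
import pandas as pd
data = "a,b,c\n4,apple,bat,\n8,orange,cow,"  # trailing comma on every data row
pd.read_csv(StringIO(data))                   # guesses that the first column is the index
pd.read_csv(StringIO(data), index_col=False)  # keeps a default RangeIndex and ignores the trailing field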
usecolslist-like or callable, default NoneReturn a subset of the columns. If list-like, all elements must either
be positional (i.e. integer indices into the document columns) or strings
that correspond to column names provided either by the user in names or
inferred from the document header row(s). If names are given, the document
header row(s) are not taken into account. For example, a valid list-like
usecols parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].
Element order is ignored, so usecols=[0, 1] is the same as [1, 0]. To
instantiate a DataFrame from data with element order preserved use
pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']] for columns
in ['foo', 'bar'] order or
pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']] for
['bar', 'foo'] order.
If callable, the callable function will be evaluated against the column names,
returning names where the callable function evaluates to True:
In [1]: import pandas as pd
In [2]: from io import StringIO
In [3]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
In [4]: pd.read_csv(StringIO(data))
Out[4]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [5]: pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ["COL1", "COL3"])
Out[5]:
col1 col3
0 a 1
1 a 2
2 c 3
Using this parameter results in much faster parsing time and lower memory usage
when using the c engine. The Python engine loads the data first before deciding
which columns to drop.
squeezeboolean, default FalseIf the parsed data only contains one column then return a Series.
Deprecated since version 1.4.0: Append .squeeze("columns") to the call to {func_name} to squeeze
the data.
prefixstr, default NonePrefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, …
Deprecated since version 1.4.0: Use a list comprehension on the DataFrame’s columns after calling read_csv.
In [6]: data = "col1,col2,col3\na,b,1"
In [7]: df = pd.read_csv(StringIO(data))
In [8]: df.columns = [f"pre_{col}" for col in df.columns]
In [9]: df
Out[9]:
pre_col1 pre_col2 pre_col3
0 a b 1
mangle_dupe_colsboolean, default TrueDuplicate columns will be specified as ‘X’, ‘X.1’…’X.N’, rather than ‘X’…’X’.
Passing in False will cause data to be overwritten if there are duplicate
names in the columns.
Deprecated since version 1.5.0: The argument was never implemented, and a new argument where the
renaming pattern can be specified will be added instead.
General parsing configuration#
dtypeType name or dict of column -> type, default NoneData type for data or columns. E.g. {'a': np.float64, 'b': np.int32, 'c': 'Int64'}
Use str or object together with suitable na_values settings to preserve
and not interpret dtype. If converters are specified, they will be applied INSTEAD
of dtype conversion.
New in version 1.5.0: Support for defaultdict was added. Specify a defaultdict as input where
the default determines the dtype of the columns which are not explicitly
listed.
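A small sketch of the dtype argument, reusing the toy CSV from the usecols example above (the chosen dtypes are arbitrary):
from io import StringIO
import pandas as pd
data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
df = pd.read_csv(StringIO(data), dtype={"col1": "category", "col3": "float64"})
df.dtypes  # col1 -> category, col2 -> object, col3 -> float64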
engine{'c', 'python', 'pyarrow'}Parser engine to use. The C and pyarrow engines are faster, while the python engine
is currently more feature-complete. Multithreading is currently only supported by
the pyarrow engine.
New in version 1.4.0: The “pyarrow” engine was added as an experimental engine, and some features
are unsupported, or may not work correctly, with this engine.
convertersdict, default NoneDict of functions for converting values in certain columns. Keys can either be
integers or column labels.
true_valueslist, default NoneValues to consider as True.
false_valueslist, default NoneValues to consider as False.
skipinitialspaceboolean, default FalseSkip spaces after delimiter.
skiprowslist-like or integer, default NoneLine numbers to skip (0-indexed) or number of lines to skip (int) at the start
of the file.
If callable, the callable function will be evaluated against the row
indices, returning True if the row should be skipped and False otherwise:
In [10]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
In [11]: pd.read_csv(StringIO(data))
Out[11]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [12]: pd.read_csv(StringIO(data), skiprows=lambda x: x % 2 != 0)
Out[12]:
col1 col2 col3
0 a b 2
skipfooterint, default 0Number of lines at bottom of file to skip (unsupported with engine=’c’).
nrowsint, default NoneNumber of rows of file to read. Useful for reading pieces of large files.
low_memoryboolean, default TrueInternally process the file in chunks, resulting in lower memory use
while parsing, but possibly mixed type inference. To ensure no mixed
types either set False, or specify the type with the dtype parameter.
Note that the entire file is read into a single DataFrame regardless,
use the chunksize or iterator parameter to return the data in chunks.
(Only valid with C parser)
memory_mapboolean, default FalseIf a filepath is provided for filepath_or_buffer, map the file object
directly onto memory and access the data directly from there. Using this
option can improve performance because there is no longer any I/O overhead.
NA and missing data handling#
na_valuesscalar, str, list-like, or dict, default NoneAdditional strings to recognize as NA/NaN. If dict passed, specific per-column
NA values. See na values const below
for a list of the values interpreted as NaN by default.
keep_default_naboolean, default TrueWhether or not to include the default NaN values when parsing the data.
Depending on whether na_values is passed in, the behavior is as follows:
If keep_default_na is True, and na_values are specified, na_values
is appended to the default NaN values used for parsing.
If keep_default_na is True, and na_values are not specified, only
the default NaN values are used for parsing.
If keep_default_na is False, and na_values are specified, only
the NaN values specified na_values are used for parsing.
If keep_default_na is False, and na_values are not specified, no
strings will be parsed as NaN.
Note that if na_filter is passed in as False, the keep_default_na and
na_values parameters will be ignored.
na_filter : boolean, default True
Detect missing value markers (empty strings and the value of na_values). In
data without any NAs, passing na_filter=False can improve the performance
of reading a large file.
verbose : boolean, default False
Indicate number of NA values placed in non-numeric columns.
skip_blank_lines : boolean, default True
If True, skip over blank lines rather than interpreting as NaN values.
Datetime handling#
parse_dates : boolean or list of ints or names or list of lists or dict, default False.
If True -> try parsing the index.
If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date
column.
If [[1, 3]] -> combine columns 1 and 3 and parse as a single date
column.
If {'foo': [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’.
Note
A fast-path exists for iso8601-formatted dates.
infer_datetime_format : boolean, default False
If True and parse_dates is enabled for a column, attempt to infer the
datetime format to speed up the processing.
keep_date_col : boolean, default False
If True and parse_dates specifies combining multiple columns then keep the
original columns.
date_parser : function, default None
Function to use for converting a sequence of string columns to an array of
datetime instances. The default uses dateutil.parser.parser to do the
conversion. pandas will try to call date_parser in three different ways,
advancing to the next if an exception occurs: 1) Pass one or more arrays (as
defined by parse_dates) as arguments; 2) concatenate (row-wise) the string
values from the columns defined by parse_dates into a single array and pass
that; and 3) call date_parser once for each row using one or more strings
(corresponding to the columns defined by parse_dates) as arguments.
dayfirst : boolean, default False
DD/MM format dates, international and European format.
cache_dates : boolean, default True
If True, use a cache of unique, converted dates to apply the datetime
conversion. May produce significant speed-up when parsing duplicate
date strings, especially ones with timezone offsets.
New in version 0.25.0.
Iteration#
iterator : boolean, default False
Return TextFileReader object for iteration or getting chunks with
get_chunk().
chunksize : int, default None
Return TextFileReader object for iteration. See iterating and chunking below.
Quoting, compression, and file format#
compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', 'zstd', None, dict}, default 'infer'
For on-the-fly decompression of on-disk data. If 'infer', then use gzip,
bz2, zip, xz, or zstandard if filepath_or_buffer is path-like ending in ‘.gz’, ‘.bz2’,
‘.zip’, ‘.xz’, ‘.zst’, respectively, and no decompression otherwise. If using ‘zip’,
the ZIP file must contain only one data file to be read in.
Set to None for no decompression. Can also be a dict with key 'method'
set to one of {'zip', 'gzip', 'bz2', 'zstd'} and other key-value pairs are
forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, or zstandard.ZstdDecompressor.
As an example, the following could be passed for faster compression and to
create a reproducible gzip archive:
compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}.
Changed in version 1.1.0: dict option extended to support gzip and bz2.
Changed in version 1.2.0: Previous versions forwarded dict entries for ‘gzip’ to gzip.open.
thousands : str, default None
Thousands separator.
decimal : str, default '.'
Character to recognize as decimal point. E.g. use ',' for European data.
float_precision : string, default None
Specifies which converter the C engine should use for floating-point values.
The options are None for the ordinary converter, high for the
high-precision converter, and round_trip for the round-trip converter.
lineterminator : str (length 1), default None
Character to break file into lines. Only valid with C parser.
quotechar : str (length 1)
The character used to denote the start and end of a quoted item. Quoted items
can include the delimiter and it will be ignored.
quoting : int or csv.QUOTE_* instance, default 0
Control field quoting behavior per csv.QUOTE_* constants. Use one of
QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or
QUOTE_NONE (3).
doublequote : boolean, default True
When quotechar is specified and quoting is not QUOTE_NONE,
indicate whether or not to interpret two consecutive quotechar elements
inside a field as a single quotechar element.
escapechar : str (length 1), default None
One-character string used to escape delimiter when quoting is QUOTE_NONE.
comment : str, default None
Indicates remainder of line should not be parsed. If found at the beginning of
a line, the line will be ignored altogether. This parameter must be a single
character. Like empty lines (as long as skip_blank_lines=True), fully
commented lines are ignored by the parameter header but not by skiprows.
For example, if comment='#', parsing ‘#empty\na,b,c\n1,2,3’ with
header=0 will result in ‘a,b,c’ being treated as the header.
encoding : str, default None
Encoding to use for UTF when reading/writing (e.g. 'utf-8'). List of
Python standard encodings.
dialect : str or csv.Dialect instance, default None
If provided, this parameter will override values (default or not) for the
following parameters: delimiter, doublequote, escapechar,
skipinitialspace, quotechar, and quoting. If it is necessary to
override values, a ParserWarning will be issued. See csv.Dialect
documentation for more details.
Error handling#
error_bad_lines : boolean, optional, default None
Lines with too many fields (e.g. a csv line with too many commas) will by
default cause an exception to be raised, and no DataFrame will be
returned. If False, then these “bad lines” will be dropped from the
DataFrame that is returned. See bad lines
below.
Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon
encountering a bad line.
warn_bad_lines : boolean, optional, default None
If error_bad_lines is False, and warn_bad_lines is True, a warning for
each “bad line” will be output.
Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon
encountering a bad line.
on_bad_lines : ('error', 'warn', 'skip'), default 'error'
Specifies what to do upon encountering a bad line (a line with too many fields).
Allowed values are:
'error', raise a ParserError when a bad line is encountered.
'warn', print a warning when a bad line is encountered and skip that line.
'skip', skip bad lines without raising or warning when they are encountered.
New in version 1.3.0.
Specifying column data types#
You can indicate the data type for the whole DataFrame or individual
columns:
In [13]: import numpy as np
In [14]: data = "a,b,c,d\n1,2,3,4\n5,6,7,8\n9,10,11"
In [15]: print(data)
a,b,c,d
1,2,3,4
5,6,7,8
9,10,11
In [16]: df = pd.read_csv(StringIO(data), dtype=object)
In [17]: df
Out[17]:
a b c d
0 1 2 3 4
1 5 6 7 8
2 9 10 11 NaN
In [18]: df["a"][0]
Out[18]: '1'
In [19]: df = pd.read_csv(StringIO(data), dtype={"b": object, "c": np.float64, "d": "Int64"})
In [20]: df.dtypes
Out[20]:
a int64
b object
c float64
d Int64
dtype: object
Fortunately, pandas offers more than one way to ensure that your column(s)
contain only one dtype. If you’re unfamiliar with these concepts, you can
see here to learn more about dtypes, and
here to learn more about object conversion in
pandas.
For instance, you can use the converters argument
of read_csv():
In [21]: data = "col_1\n1\n2\n'A'\n4.22"
In [22]: df = pd.read_csv(StringIO(data), converters={"col_1": str})
In [23]: df
Out[23]:
col_1
0 1
1 2
2 'A'
3 4.22
In [24]: df["col_1"].apply(type).value_counts()
Out[24]:
<class 'str'> 4
Name: col_1, dtype: int64
Or you can use the to_numeric() function to coerce the
dtypes after reading in the data,
In [25]: df2 = pd.read_csv(StringIO(data))
In [26]: df2["col_1"] = pd.to_numeric(df2["col_1"], errors="coerce")
In [27]: df2
Out[27]:
col_1
0 1.00
1 2.00
2 NaN
3 4.22
In [28]: df2["col_1"].apply(type).value_counts()
Out[28]:
<class 'float'> 4
Name: col_1, dtype: int64
which will convert all valid parsing to floats, leaving the invalid parsing
as NaN.
Ultimately, how you deal with reading in columns containing mixed dtypes
depends on your specific needs. In the case above, if you wanted to NaN out
the data anomalies, then to_numeric() is probably your best option.
However, if you wanted all the data to be coerced, no matter the type, then
using the converters argument of read_csv() would certainly be
worth trying.
Note
In some cases, reading in abnormal data with columns containing mixed dtypes
will result in an inconsistent dataset. If you rely on pandas to infer the
dtypes of your columns, the parsing engine will go and infer the dtypes for
different chunks of the data, rather than the whole dataset at once. Consequently,
you can end up with column(s) with mixed dtypes. For example,
In [29]: col_1 = list(range(500000)) + ["a", "b"] + list(range(500000))
In [30]: df = pd.DataFrame({"col_1": col_1})
In [31]: df.to_csv("foo.csv")
In [32]: mixed_df = pd.read_csv("foo.csv")
In [33]: mixed_df["col_1"].apply(type).value_counts()
Out[33]:
<class 'int'> 737858
<class 'str'> 262144
Name: col_1, dtype: int64
In [34]: mixed_df["col_1"].dtype
Out[34]: dtype('O')
will result in mixed_df containing an int dtype for certain chunks
of the column, and str for others due to the mixed dtypes from the
data that was read in. It is important to note that the overall column will be
marked with a dtype of object, which is used for columns with mixed dtypes.
Specifying categorical dtype#
Categorical columns can be parsed directly by specifying dtype='category' or
dtype=CategoricalDtype(categories, ordered).
In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
In [36]: pd.read_csv(StringIO(data))
Out[36]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [37]: pd.read_csv(StringIO(data)).dtypes
Out[37]:
col1 object
col2 object
col3 int64
dtype: object
In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
Out[38]:
col1 category
col2 category
col3 category
dtype: object
Individual columns can be parsed as a Categorical using a dict
specification:
In [39]: pd.read_csv(StringIO(data), dtype={"col1": "category"}).dtypes
Out[39]:
col1 category
col2 object
col3 int64
dtype: object
Specifying dtype='category' will result in an unordered Categorical
whose categories are the unique values observed in the data. For more
control on the categories and order, create a
CategoricalDtype ahead of time, and pass that for
that column’s dtype.
In [40]: from pandas.api.types import CategoricalDtype
In [41]: dtype = CategoricalDtype(["d", "c", "b", "a"], ordered=True)
In [42]: pd.read_csv(StringIO(data), dtype={"col1": dtype}).dtypes
Out[42]:
col1 category
col2 object
col3 int64
dtype: object
When using dtype=CategoricalDtype, “unexpected” values outside of
dtype.categories are treated as missing values.
In [43]: dtype = CategoricalDtype(["a", "b", "d"]) # No 'c'
In [44]: pd.read_csv(StringIO(data), dtype={"col1": dtype}).col1
Out[44]:
0 a
1 a
2 NaN
Name: col1, dtype: category
Categories (3, object): ['a', 'b', 'd']
This matches the behavior of Categorical.set_categories().
Note
With dtype='category', the resulting categories will always be parsed
as strings (object dtype). If the categories are numeric they can be
converted using the to_numeric() function, or as appropriate, another
converter such as to_datetime().
When dtype is a CategoricalDtype with homogeneous categories (
all numeric, all datetimes, etc.), the conversion is done automatically.
In [45]: df = pd.read_csv(StringIO(data), dtype="category")
In [46]: df.dtypes
Out[46]:
col1 category
col2 category
col3 category
dtype: object
In [47]: df["col3"]
Out[47]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, object): ['1', '2', '3']
In [48]: new_categories = pd.to_numeric(df["col3"].cat.categories)
In [49]: df["col3"] = df["col3"].cat.rename_categories(new_categories)
In [50]: df["col3"]
Out[50]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, int64): [1, 2, 3]
Naming and using columns#
Handling column names#
A file may or may not have a header row. pandas assumes the first row should be
used as the column names:
In [51]: data = "a,b,c\n1,2,3\n4,5,6\n7,8,9"
In [52]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [53]: pd.read_csv(StringIO(data))
Out[53]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
By specifying the names argument in conjunction with header you can
indicate other names to use and whether or not to throw away the header row (if
any):
In [54]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [55]: pd.read_csv(StringIO(data), names=["foo", "bar", "baz"], header=0)
Out[55]:
foo bar baz
0 1 2 3
1 4 5 6
2 7 8 9
In [56]: pd.read_csv(StringIO(data), names=["foo", "bar", "baz"], header=None)
Out[56]:
foo bar baz
0 a b c
1 1 2 3
2 4 5 6
3 7 8 9
If the header is in a row other than the first, pass the row number to
header. This will skip the preceding rows:
In [57]: data = "skip this skip it\na,b,c\n1,2,3\n4,5,6\n7,8,9"
In [58]: pd.read_csv(StringIO(data), header=1)
Out[58]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
Note
Default behavior is to infer the column names: if no names are
passed the behavior is identical to header=0 and column names
are inferred from the first non-blank line of the file, if column
names are passed explicitly then the behavior is identical to
header=None.
Duplicate names parsing#
Deprecated since version 1.5.0: mangle_dupe_cols was never implemented, and a new argument where the
renaming pattern can be specified will be added instead.
If the file or header contains duplicate names, pandas will by default
distinguish between them so as to prevent overwriting data:
In [59]: data = "a,b,a\n0,1,2\n3,4,5"
In [60]: pd.read_csv(StringIO(data))
Out[60]:
a b a.1
0 0 1 2
1 3 4 5
There is no more duplicate data because mangle_dupe_cols=True by default,
which modifies a series of duplicate columns ‘X’, …, ‘X’ to become
‘X’, ‘X.1’, …, ‘X.N’.
Filtering columns (usecols)#
The usecols argument allows you to select any subset of the columns in a
file, either using the column names, position numbers or a callable:
In [61]: data = "a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz"
In [62]: pd.read_csv(StringIO(data))
Out[62]:
a b c d
0 1 2 3 foo
1 4 5 6 bar
2 7 8 9 baz
In [63]: pd.read_csv(StringIO(data), usecols=["b", "d"])
Out[63]:
b d
0 2 foo
1 5 bar
2 8 baz
In [64]: pd.read_csv(StringIO(data), usecols=[0, 2, 3])
Out[64]:
a c d
0 1 3 foo
1 4 6 bar
2 7 9 baz
In [65]: pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ["A", "C"])
Out[65]:
a c
0 1 3
1 4 6
2 7 9
The usecols argument can also be used to specify which columns not to
use in the final result:
In [66]: pd.read_csv(StringIO(data), usecols=lambda x: x not in ["a", "c"])
Out[66]:
b d
0 2 foo
1 5 bar
2 8 baz
In this case, the callable is specifying that we exclude the “a” and “c”
columns from the output.
Comments and empty lines#
Ignoring line comments and empty lines#
If the comment parameter is specified, then completely commented lines will
be ignored. By default, completely blank lines will be ignored as well.
In [67]: data = "\na,b,c\n \n# commented line\n1,2,3\n\n4,5,6"
In [68]: print(data)
a,b,c
# commented line
1,2,3
4,5,6
In [69]: pd.read_csv(StringIO(data), comment="#")
Out[69]:
a b c
0 1 2 3
1 4 5 6
If skip_blank_lines=False, then read_csv will not ignore blank lines:
In [70]: data = "a,b,c\n\n1,2,3\n\n\n4,5,6"
In [71]: pd.read_csv(StringIO(data), skip_blank_lines=False)
Out[71]:
a b c
0 NaN NaN NaN
1 1.0 2.0 3.0
2 NaN NaN NaN
3 NaN NaN NaN
4 4.0 5.0 6.0
Warning
The presence of ignored lines might create ambiguities involving line numbers;
the parameter header uses row numbers (ignoring commented/empty
lines), while skiprows uses line numbers (including commented/empty lines):
In [72]: data = "#comment\na,b,c\nA,B,C\n1,2,3"
In [73]: pd.read_csv(StringIO(data), comment="#", header=1)
Out[73]:
A B C
0 1 2 3
In [74]: data = "A,B,C\n#comment\na,b,c\n1,2,3"
In [75]: pd.read_csv(StringIO(data), comment="#", skiprows=2)
Out[75]:
a b c
0 1 2 3
If both header and skiprows are specified, header will be
relative to the end of skiprows. For example:
In [76]: data = (
....: "# empty\n"
....: "# second empty line\n"
....: "# third emptyline\n"
....: "X,Y,Z\n"
....: "1,2,3\n"
....: "A,B,C\n"
....: "1,2.,4.\n"
....: "5.,NaN,10.0\n"
....: )
....:
In [77]: print(data)
# empty
# second empty line
# third emptyline
X,Y,Z
1,2,3
A,B,C
1,2.,4.
5.,NaN,10.0
In [78]: pd.read_csv(StringIO(data), comment="#", skiprows=4, header=1)
Out[78]:
A B C
0 1.0 2.0 4.0
1 5.0 NaN 10.0
Comments#
Sometimes comments or meta data may be included in a file:
In [79]: print(open("tmp.csv").read())
ID,level,category
Patient1,123000,x # really unpleasant
Patient2,23000,y # wouldn't take his medicine
Patient3,1234018,z # awesome
By default, the parser includes the comments in the output:
In [80]: df = pd.read_csv("tmp.csv")
In [81]: df
Out[81]:
ID level category
0 Patient1 123000 x # really unpleasant
1 Patient2 23000 y # wouldn't take his medicine
2 Patient3 1234018 z # awesome
We can suppress the comments using the comment keyword:
In [82]: df = pd.read_csv("tmp.csv", comment="#")
In [83]: df
Out[83]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
Dealing with Unicode data#
The encoding argument should be used for encoded unicode data, which will
result in byte strings being decoded to unicode in the result:
In [84]: from io import BytesIO
In [85]: data = b"word,length\n" b"Tr\xc3\xa4umen,7\n" b"Gr\xc3\xbc\xc3\x9fe,5"
In [86]: data = data.decode("utf8").encode("latin-1")
In [87]: df = pd.read_csv(BytesIO(data), encoding="latin-1")
In [88]: df
Out[88]:
word length
0 Träumen 7
1 Grüße 5
In [89]: df["word"][1]
Out[89]: 'Grüße'
Some formats which encode all characters as multiple bytes, like UTF-16, won’t
parse correctly at all without specifying the encoding. Full list of Python
standard encodings.
Index columns and trailing delimiters#
If a file has one more column of data than the number of column names, the
first column will be used as the DataFrame’s row names:
In [90]: data = "a,b,c\n4,apple,bat,5.7\n8,orange,cow,10"
In [91]: pd.read_csv(StringIO(data))
Out[91]:
a b c
4 apple bat 5.7
8 orange cow 10.0
In [92]: data = "index,a,b,c\n4,apple,bat,5.7\n8,orange,cow,10"
In [93]: pd.read_csv(StringIO(data), index_col=0)
Out[93]:
a b c
index
4 apple bat 5.7
8 orange cow 10.0
Ordinarily, you can achieve this behavior using the index_col option.
There are some exception cases when a file has been prepared with delimiters at
the end of each data line, confusing the parser. To explicitly disable the
index column inference and discard the last column, pass index_col=False:
In [94]: data = "a,b,c\n4,apple,bat,\n8,orange,cow,"
In [95]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,
In [96]: pd.read_csv(StringIO(data))
Out[96]:
a b c
4 apple bat NaN
8 orange cow NaN
In [97]: pd.read_csv(StringIO(data), index_col=False)
Out[97]:
a b c
0 4 apple bat
1 8 orange cow
If a subset of data is being parsed using the usecols option, the
index_col specification is based on that subset, not the original data.
In [98]: data = "a,b,c\n4,apple,bat,\n8,orange,cow,"
In [99]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,
In [100]: pd.read_csv(StringIO(data), usecols=["b", "c"])
Out[100]:
b c
4 bat NaN
8 cow NaN
In [101]: pd.read_csv(StringIO(data), usecols=["b", "c"], index_col=0)
Out[101]:
b c
4 bat NaN
8 cow NaN
Date Handling#
Specifying date columns#
To better facilitate working with datetime data, read_csv()
uses the keyword arguments parse_dates and date_parser
to allow users to specify a variety of columns and date/time formats to turn the
input text data into datetime objects.
The simplest case is to just pass in parse_dates=True:
In [102]: with open("foo.csv", mode="w") as f:
.....: f.write("date,A,B,C\n20090101,a,1,2\n20090102,b,3,4\n20090103,c,4,5")
.....:
# Use a column as an index, and parse it as dates.
In [103]: df = pd.read_csv("foo.csv", index_col=0, parse_dates=True)
In [104]: df
Out[104]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
# These are Python datetime objects
In [105]: df.index
Out[105]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype='datetime64[ns]', name='date', freq=None)
It is often the case that we may want to store date and time data separately,
or store various date fields separately. The parse_dates keyword can be
used to specify a combination of columns to parse the dates and/or times from.
You can specify a list of column lists to parse_dates, the resulting date
columns will be prepended to the output (so as to not affect the existing column
order) and the new column names will be the concatenation of the component
column names:
In [106]: data = (
.....: "KORD,19990127, 19:00:00, 18:56:00, 0.8100\n"
.....: "KORD,19990127, 20:00:00, 19:56:00, 0.0100\n"
.....: "KORD,19990127, 21:00:00, 20:56:00, -0.5900\n"
.....: "KORD,19990127, 21:00:00, 21:18:00, -0.9900\n"
.....: "KORD,19990127, 22:00:00, 21:56:00, -0.5900\n"
.....: "KORD,19990127, 23:00:00, 22:56:00, -0.5900"
.....: )
.....:
In [107]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [108]: df = pd.read_csv("tmp.csv", header=None, parse_dates=[[1, 2], [1, 3]])
In [109]: df
Out[109]:
1_2 1_3 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
By default the parser removes the component date columns, but you can choose
to retain them via the keep_date_col keyword:
In [110]: df = pd.read_csv(
.....: "tmp.csv", header=None, parse_dates=[[1, 2], [1, 3]], keep_date_col=True
.....: )
.....:
In [111]: df
Out[111]:
1_2 1_3 0 ... 2 3 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD ... 19:00:00 18:56:00 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD ... 20:00:00 19:56:00 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD ... 21:00:00 20:56:00 -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD ... 21:00:00 21:18:00 -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD ... 22:00:00 21:56:00 -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD ... 23:00:00 22:56:00 -0.59
[6 rows x 7 columns]
Note that if you wish to combine multiple columns into a single date column, a
nested list must be used. In other words, parse_dates=[1, 2] indicates that
the second and third columns should each be parsed as separate date columns
while parse_dates=[[1, 2]] means the two columns should be parsed into a
single column.
You can also use a dict to specify custom name columns:
In [112]: date_spec = {"nominal": [1, 2], "actual": [1, 3]}
In [113]: df = pd.read_csv("tmp.csv", header=None, parse_dates=date_spec)
In [114]: df
Out[114]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
It is important to remember that if multiple text columns are to be parsed into
a single date column, then a new column is prepended to the data. The index_col
specification is based off of this new set of columns rather than the original
data columns:
In [115]: date_spec = {"nominal": [1, 2], "actual": [1, 3]}
In [116]: df = pd.read_csv(
.....: "tmp.csv", header=None, parse_dates=date_spec, index_col=0
.....: ) # index is the nominal column
.....:
In [117]: df
Out[117]:
actual 0 4
nominal
1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
Note
If a column or index contains an unparsable date, the entire column or
index will be returned unaltered as an object data type. For non-standard
datetime parsing, use to_datetime() after pd.read_csv.
Note
read_csv has a fast_path for parsing datetime strings in iso8601 format,
e.g “2000-01-01T00:01:02+00:00” and similar variations. If you can arrange
for your data to store datetimes in this format, load times will be
significantly faster, ~20x has been observed.
Date parsing functions#
Finally, the parser allows you to specify a custom date_parser function to
take full advantage of the flexibility of the date parsing API:
In [118]: df = pd.read_csv(
.....: "tmp.csv", header=None, parse_dates=date_spec, date_parser=pd.to_datetime
.....: )
.....:
In [119]: df
Out[119]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
pandas will try to call the date_parser function in three different ways. If
an exception is raised, the next one is tried:
date_parser is first called with one or more arrays as arguments,
as defined using parse_dates (e.g., date_parser(['2013', '2013'], ['1', '2'])).
If #1 fails, date_parser is called with all the columns
concatenated row-wise into a single array (e.g., date_parser(['2013 1', '2013 2'])).
If #2 fails, date_parser is called once for every row with one or more string
arguments from the columns indicated with parse_dates (e.g., date_parser('2013', '1')
for the first row, date_parser('2013', '2') for the second, and so on).
Note that performance-wise, you should try these methods of parsing dates in order:
Try to infer the format using infer_datetime_format=True (see section below).
If you know the format, use pd.to_datetime():
date_parser=lambda x: pd.to_datetime(x, format=...) (see the sketch after this list).
If you have a really non-standard format, use a custom date_parser function.
For optimal performance, this should be vectorized, i.e., it should accept arrays
as arguments.
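For the second option above, a minimal sketch (the data and the %d.%m.%Y format are made up for illustration):
from io import StringIO

import pandas as pd

data = "date,value\n31.12.2011,1\n01.01.2012,2"

df = pd.read_csv(
    StringIO(data),
    parse_dates=["date"],
    date_parser=lambda x: pd.to_datetime(x, format="%d.%m.%Y"),
)
# df["date"] now has dtype datetime64[ns]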
Parsing a CSV with mixed timezones#
pandas cannot natively represent a column or index with mixed timezones. If your CSV
file contains columns with a mixture of timezones, the default result will be
an object-dtype column with strings, even with parse_dates.
In [120]: content = """\
.....: a
.....: 2000-01-01T00:00:00+05:00
.....: 2000-01-01T00:00:00+06:00"""
.....:
In [121]: df = pd.read_csv(StringIO(content), parse_dates=["a"])
In [122]: df["a"]
Out[122]:
0 2000-01-01 00:00:00+05:00
1 2000-01-01 00:00:00+06:00
Name: a, dtype: object
To parse the mixed-timezone values as a datetime column, pass a partially-applied
to_datetime() with utc=True as the date_parser.
In [123]: df = pd.read_csv(
.....: StringIO(content),
.....: parse_dates=["a"],
.....: date_parser=lambda col: pd.to_datetime(col, utc=True),
.....: )
.....:
In [124]: df["a"]
Out[124]:
0 1999-12-31 19:00:00+00:00
1 1999-12-31 18:00:00+00:00
Name: a, dtype: datetime64[ns, UTC]
Inferring datetime format#
If you have parse_dates enabled for some or all of your columns, and your
datetime strings are all formatted the same way, you may get a large speed
up by setting infer_datetime_format=True. If set, pandas will attempt
to guess the format of your datetime strings, and then use a faster means
of parsing the strings. 5-10x parsing speeds have been observed. pandas
will fallback to the usual parsing if either the format cannot be guessed
or the format that was guessed cannot properly parse the entire column
of strings. So in general, infer_datetime_format should not have any
negative consequences if enabled.
Here are some examples of datetime strings that can be guessed (All
representing December 30th, 2011 at 00:00:00):
“20111230”
“2011/12/30”
“20111230 00:00:00”
“12/30/2011 00:00:00”
“30/Dec/2011 00:00:00”
“30/December/2011 00:00:00”
Note that infer_datetime_format is sensitive to dayfirst. With
dayfirst=True, it will guess “01/12/2011” to be December 1st. With
dayfirst=False (default) it will guess “01/12/2011” to be January 12th.
# Try to infer the format for the index column
In [125]: df = pd.read_csv(
.....: "foo.csv",
.....: index_col=0,
.....: parse_dates=True,
.....: infer_datetime_format=True,
.....: )
.....:
In [126]: df
Out[126]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
International date formats#
While US date formats tend to be MM/DD/YYYY, many international formats use
DD/MM/YYYY instead. For convenience, a dayfirst keyword is provided:
In [127]: data = "date,value,cat\n1/6/2000,5,a\n2/6/2000,10,b\n3/6/2000,15,c"
In [128]: print(data)
date,value,cat
1/6/2000,5,a
2/6/2000,10,b
3/6/2000,15,c
In [129]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [130]: pd.read_csv("tmp.csv", parse_dates=[0])
Out[130]:
date value cat
0 2000-01-06 5 a
1 2000-02-06 10 b
2 2000-03-06 15 c
In [131]: pd.read_csv("tmp.csv", dayfirst=True, parse_dates=[0])
Out[131]:
date value cat
0 2000-06-01 5 a
1 2000-06-02 10 b
2 2000-06-03 15 c
Writing CSVs to binary file objects#
New in version 1.2.0.
df.to_csv(..., mode="wb") allows writing a CSV to a file object
opened in binary mode. In most cases, it is not necessary to specify
mode as pandas will auto-detect whether the file object is
opened in text or binary mode.
In [132]: import io
In [133]: data = pd.DataFrame([0, 1, 2])
In [134]: buffer = io.BytesIO()
In [135]: data.to_csv(buffer, encoding="utf-8", compression="gzip")
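As a rough follow-up (not part of the original example), the buffer now holds gzip-compressed CSV bytes and can be read back after rewinding it:
# Rewind the in-memory buffer and read the compressed CSV back
buffer.seek(0)
roundtrip = pd.read_csv(buffer, compression="gzip", index_col=0)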
Specifying method for floating-point conversion#
The parameter float_precision can be specified in order to use
a specific floating-point converter during parsing with the C engine.
The options are the ordinary converter, the high-precision converter, and
the round-trip converter (which is guaranteed to round-trip values after
writing to a file). For example:
In [136]: val = "0.3066101993807095471566981359501369297504425048828125"
In [137]: data = "a,b,c\n1,2,{0}".format(val)
In [138]: abs(
.....: pd.read_csv(
.....: StringIO(data),
.....: engine="c",
.....: float_precision=None,
.....: )["c"][0] - float(val)
.....: )
.....:
Out[138]: 5.551115123125783e-17
In [139]: abs(
.....: pd.read_csv(
.....: StringIO(data),
.....: engine="c",
.....: float_precision="high",
.....: )["c"][0] - float(val)
.....: )
.....:
Out[139]: 5.551115123125783e-17
In [140]: abs(
.....: pd.read_csv(StringIO(data), engine="c", float_precision="round_trip")["c"][0]
.....: - float(val)
.....: )
.....:
Out[140]: 0.0
Thousand separators#
For large numbers that have been written with a thousands separator, you can
set the thousands keyword to a string of length 1 so that integers will be parsed
correctly:
By default, numbers with a thousands separator will be parsed as strings:
In [141]: data = (
.....: "ID|level|category\n"
.....: "Patient1|123,000|x\n"
.....: "Patient2|23,000|y\n"
.....: "Patient3|1,234,018|z"
.....: )
.....:
In [142]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [143]: df = pd.read_csv("tmp.csv", sep="|")
In [144]: df
Out[144]:
ID level category
0 Patient1 123,000 x
1 Patient2 23,000 y
2 Patient3 1,234,018 z
In [145]: df.level.dtype
Out[145]: dtype('O')
The thousands keyword allows integers to be parsed correctly:
In [146]: df = pd.read_csv("tmp.csv", sep="|", thousands=",")
In [147]: df
Out[147]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
In [148]: df.level.dtype
Out[148]: dtype('int64')
NA values#
To control which values are parsed as missing values (which are signified by
NaN), specify a string in na_values. If you specify a list of strings,
then all values in it are considered to be missing values. If you specify a
number (a float, like 5.0 or an integer like 5), the
corresponding equivalent values will also imply a missing value (in this case
effectively [5.0, 5] are recognized as NaN).
To completely override the default values that are recognized as missing, specify keep_default_na=False.
The default NaN recognized values are ['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', 'N/A',
'n/a', 'NA', '<NA>', '#NA', 'NULL', 'null', 'NaN', '-NaN', 'nan', '-nan', ''].
Let us consider some examples:
pd.read_csv("path_to_file.csv", na_values=[5])
In the example above 5 and 5.0 will be recognized as NaN, in
addition to the defaults. A string will first be interpreted as a numerical
5, then as a NaN.
pd.read_csv("path_to_file.csv", keep_default_na=False, na_values=[""])
Above, only an empty field will be recognized as NaN.
pd.read_csv("path_to_file.csv", keep_default_na=False, na_values=["NA", "0"])
Above, both NA and 0 as strings are NaN.
pd.read_csv("path_to_file.csv", na_values=["Nope"])
The default values, in addition to the string "Nope" are recognized as
NaN.
Infinity#
inf like values will be parsed as np.inf (positive infinity), and -inf as -np.inf (negative infinity).
These will ignore the case of the value, meaning Inf will also be parsed as np.inf.
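A minimal sketch (the data is made up for illustration):
from io import StringIO

import pandas as pd

data = "a\ninf\n-Inf\nINF"

pd.read_csv(StringIO(data))["a"].tolist()
# [inf, -inf, inf] -- the column is parsed as float64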
Returning Series#
Using the squeeze keyword, the parser will return output with a single column
as a Series:
Deprecated since version 1.4.0: Users should append .squeeze("columns") to the DataFrame returned by
read_csv instead.
In [149]: data = "level\nPatient1,123000\nPatient2,23000\nPatient3,1234018"
In [150]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [151]: print(open("tmp.csv").read())
level
Patient1,123000
Patient2,23000
Patient3,1234018
In [152]: output = pd.read_csv("tmp.csv", squeeze=True)
In [153]: output
Out[153]:
Patient1 123000
Patient2 23000
Patient3 1234018
Name: level, dtype: int64
In [154]: type(output)
Out[154]: pandas.core.series.Series
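A sketch of the recommended replacement, which produces the same Series without the deprecated keyword (assuming pandas >= 1.4):
# Read as a DataFrame, then collapse the single column to a Series
output = pd.read_csv("tmp.csv").squeeze("columns")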
Boolean values#
The common values True, False, TRUE, and FALSE are all
recognized as boolean. Occasionally you might want to recognize other values
as being boolean. To do this, use the true_values and false_values
options as follows:
In [155]: data = "a,b,c\n1,Yes,2\n3,No,4"
In [156]: print(data)
a,b,c
1,Yes,2
3,No,4
In [157]: pd.read_csv(StringIO(data))
Out[157]:
a b c
0 1 Yes 2
1 3 No 4
In [158]: pd.read_csv(StringIO(data), true_values=["Yes"], false_values=["No"])
Out[158]:
a b c
0 1 True 2
1 3 False 4
Handling “bad” lines#
Some files may have malformed lines with too few fields or too many. Lines with
too few fields will have NA values filled in the trailing fields. Lines with
too many fields will raise an error by default:
In [159]: data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"
In [160]: pd.read_csv(StringIO(data))
---------------------------------------------------------------------------
ParserError Traceback (most recent call last)
Cell In[160], line 1
----> 1 pd.read_csv(StringIO(data))
File ~/work/pandas/pandas/pandas/util/_decorators.py:211, in deprecate_kwarg.<locals>._deprecate_kwarg.<locals>.wrapper(*args, **kwargs)
209 else:
210 kwargs[new_arg_name] = new_arg_value
--> 211 return func(*args, **kwargs)
File ~/work/pandas/pandas/pandas/util/_decorators.py:331, in deprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper(*args, **kwargs)
325 if len(args) > num_allow_args:
326 warnings.warn(
327 msg.format(arguments=_format_argument_list(allow_args)),
328 FutureWarning,
329 stacklevel=find_stack_level(),
330 )
--> 331 return func(*args, **kwargs)
File ~/work/pandas/pandas/pandas/io/parsers/readers.py:950, in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, encoding_errors, dialect, error_bad_lines, warn_bad_lines, on_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options)
935 kwds_defaults = _refine_defaults_read(
936 dialect,
937 delimiter,
(...)
946 defaults={"delimiter": ","},
947 )
948 kwds.update(kwds_defaults)
--> 950 return _read(filepath_or_buffer, kwds)
File ~/work/pandas/pandas/pandas/io/parsers/readers.py:611, in _read(filepath_or_buffer, kwds)
608 return parser
610 with parser:
--> 611 return parser.read(nrows)
File ~/work/pandas/pandas/pandas/io/parsers/readers.py:1778, in TextFileReader.read(self, nrows)
1771 nrows = validate_integer("nrows", nrows)
1772 try:
1773 # error: "ParserBase" has no attribute "read"
1774 (
1775 index,
1776 columns,
1777 col_dict,
-> 1778 ) = self._engine.read( # type: ignore[attr-defined]
1779 nrows
1780 )
1781 except Exception:
1782 self.close()
File ~/work/pandas/pandas/pandas/io/parsers/c_parser_wrapper.py:230, in CParserWrapper.read(self, nrows)
228 try:
229 if self.low_memory:
--> 230 chunks = self._reader.read_low_memory(nrows)
231 # destructive to chunks
232 data = _concatenate_chunks(chunks)
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:808, in pandas._libs.parsers.TextReader.read_low_memory()
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:866, in pandas._libs.parsers.TextReader._read_rows()
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:852, in pandas._libs.parsers.TextReader._tokenize_rows()
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:1973, in pandas._libs.parsers.raise_parser_error()
ParserError: Error tokenizing data. C error: Expected 3 fields in line 3, saw 4
You can elect to skip bad lines:
In [29]: pd.read_csv(StringIO(data), on_bad_lines="warn")
Skipping line 3: expected 3 fields, saw 4
Out[29]:
a b c
0 1 2 3
1 8 9 10
Or pass a callable function to handle the bad line if engine="python".
The bad line will be a list of strings that was split by the sep:
In [29]: external_list = []
In [30]: def bad_lines_func(line):
...: external_list.append(line)
...: return line[-3:]
In [31]: pd.read_csv(StringIO(data), on_bad_lines=bad_lines_func, engine="python")
Out[31]:
a b c
0 1 2 3
1 5 6 7
2 8 9 10
In [32]: external_list
Out[32]: [4, 5, 6, 7]
New in version 1.4.0.
You can also use the usecols parameter to eliminate extraneous column
data that appear in some lines but not others:
In [33]: pd.read_csv(StringIO(data), usecols=[0, 1, 2])
Out[33]:
a b c
0 1 2 3
1 4 5 6
2 8 9 10
In case you want to keep all data including the lines with too many fields, you can
specify a sufficient number of names. This ensures that lines with not enough
fields are filled with NaN.
In [34]: pd.read_csv(StringIO(data), names=['a', 'b', 'c', 'd'])
Out[34]:
a b c d
0 1 2 3 NaN
1 4 5 6 7
2 8 9 10 NaN
Dialect#
The dialect keyword gives greater flexibility in specifying the file format.
By default it uses the Excel dialect but you can specify either the dialect name
or a csv.Dialect instance.
Suppose you had data with unenclosed quotes:
In [161]: data = "label1,label2,label3\n" 'index1,"a,c,e\n' "index2,b,d,f"
In [162]: print(data)
label1,label2,label3
index1,"a,c,e
index2,b,d,f
By default, read_csv uses the Excel dialect and treats the double quote as
the quote character, which causes it to fail when it finds a newline before it
finds the closing double quote.
We can get around this using dialect:
In [163]: import csv
In [164]: dia = csv.excel()
In [165]: dia.quoting = csv.QUOTE_NONE
In [166]: pd.read_csv(StringIO(data), dialect=dia)
Out[166]:
label1 label2 label3
index1 "a c e
index2 b d f
All of the dialect options can be specified separately by keyword arguments:
In [167]: data = "a,b,c~1,2,3~4,5,6"
In [168]: pd.read_csv(StringIO(data), lineterminator="~")
Out[168]:
a b c
0 1 2 3
1 4 5 6
Another common dialect option is skipinitialspace, to skip any whitespace
after a delimiter:
In [169]: data = "a, b, c\n1, 2, 3\n4, 5, 6"
In [170]: print(data)
a, b, c
1, 2, 3
4, 5, 6
In [171]: pd.read_csv(StringIO(data), skipinitialspace=True)
Out[171]:
a b c
0 1 2 3
1 4 5 6
The parsers make every attempt to “do the right thing” and not be fragile. Type
inference is a pretty big deal. If a column can be coerced to integer dtype
without altering the contents, the parser will do so. Any non-numeric
columns will come through as object dtype as with the rest of pandas objects.
Quoting and Escape Characters#
Quotes (and other escape characters) in embedded fields can be handled in any
number of ways. One way is to use backslashes; to properly parse this data, you
should pass the escapechar option:
In [172]: data = 'a,b\n"hello, \\"Bob\\", nice to see you",5'
In [173]: print(data)
a,b
"hello, \"Bob\", nice to see you",5
In [174]: pd.read_csv(StringIO(data), escapechar="\\")
Out[174]:
a b
0 hello, "Bob", nice to see you 5
Files with fixed width columns#
While read_csv() reads delimited data, the read_fwf() function works
with data files that have known and fixed column widths. The function parameters
to read_fwf are largely the same as read_csv with two extra parameters, and
a different usage of the delimiter parameter:
colspecs: A list of pairs (tuples) giving the extents of the
fixed-width fields of each line as half-open intervals (i.e., [from, to[ ).
String value ‘infer’ can be used to instruct the parser to try detecting
the column specifications from the first 100 rows of the data. Default
behavior, if not specified, is to infer.
widths: A list of field widths which can be used instead of ‘colspecs’
if the intervals are contiguous.
delimiter: Characters to consider as filler characters in the fixed-width file.
Can be used to specify the filler character of the fields
if it is not spaces (e.g., ‘~’).
Consider a typical fixed-width data file:
In [175]: data1 = (
.....: "id8141 360.242940 149.910199 11950.7\n"
.....: "id1594 444.953632 166.985655 11788.4\n"
.....: "id1849 364.136849 183.628767 11806.2\n"
.....: "id1230 413.836124 184.375703 11916.8\n"
.....: "id1948 502.953953 173.237159 12468.3"
.....: )
.....:
In [176]: with open("bar.csv", "w") as f:
.....: f.write(data1)
.....:
In order to parse this file into a DataFrame, we simply need to supply the
column specifications to the read_fwf function along with the file name:
# Column specifications are a list of half-intervals
In [177]: colspecs = [(0, 6), (8, 20), (21, 33), (34, 43)]
In [178]: df = pd.read_fwf("bar.csv", colspecs=colspecs, header=None, index_col=0)
In [179]: df
Out[179]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
Note how the parser automatically picks column names X.<column number> when
the header=None argument is specified. Alternatively, you can supply just the
column widths for contiguous columns:
# Widths are a list of integers
In [180]: widths = [6, 14, 13, 10]
In [181]: df = pd.read_fwf("bar.csv", widths=widths, header=None)
In [182]: df
Out[182]:
0 1 2 3
0 id8141 360.242940 149.910199 11950.7
1 id1594 444.953632 166.985655 11788.4
2 id1849 364.136849 183.628767 11806.2
3 id1230 413.836124 184.375703 11916.8
4 id1948 502.953953 173.237159 12468.3
The parser will take care of extra whitespace around the columns,
so it's OK to have extra separation between the columns in the file.
By default, read_fwf will try to infer the file’s colspecs by using the
first 100 rows of the file. It can do it only in cases when the columns are
aligned and correctly separated by the provided delimiter (default delimiter
is whitespace).
In [183]: df = pd.read_fwf("bar.csv", header=None, index_col=0)
In [184]: df
Out[184]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
read_fwf supports the dtype parameter for specifying the types of
parsed columns to be different from the inferred type.
In [185]: pd.read_fwf("bar.csv", header=None, index_col=0).dtypes
Out[185]:
1 float64
2 float64
3 float64
dtype: object
In [186]: pd.read_fwf("bar.csv", header=None, dtype={2: "object"}).dtypes
Out[186]:
0 object
1 float64
2 object
3 float64
dtype: object
Indexes#
Files with an “implicit” index column#
Consider a file with one less entry in the header than the number of data
columns:
In [187]: data = "A,B,C\n20090101,a,1,2\n20090102,b,3,4\n20090103,c,4,5"
In [188]: print(data)
A,B,C
20090101,a,1,2
20090102,b,3,4
20090103,c,4,5
In [189]: with open("foo.csv", "w") as f:
.....: f.write(data)
.....:
In this special case, read_csv assumes that the first column is to be used
as the index of the DataFrame:
In [190]: pd.read_csv("foo.csv")
Out[190]:
A B C
20090101 a 1 2
20090102 b 3 4
20090103 c 4 5
Note that the dates weren’t automatically parsed. In that case you would need
to do as before:
In [191]: df = pd.read_csv("foo.csv", parse_dates=True)
In [192]: df.index
Out[192]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype='datetime64[ns]', freq=None)
Reading an index with a MultiIndex#
Suppose you have data indexed by two columns:
In [193]: data = 'year,indiv,zit,xit\n1977,"A",1.2,.6\n1977,"B",1.5,.5'
In [194]: print(data)
year,indiv,zit,xit
1977,"A",1.2,.6
1977,"B",1.5,.5
In [195]: with open("mindex_ex.csv", mode="w") as f:
.....: f.write(data)
.....:
The index_col argument to read_csv can take a list of
column numbers to turn multiple columns into a MultiIndex for the index of the
returned object:
In [196]: df = pd.read_csv("mindex_ex.csv", index_col=[0, 1])
In [197]: df
Out[197]:
zit xit
year indiv
1977 A 1.2 0.6
B 1.5 0.5
In [198]: df.loc[1977]
Out[198]:
zit xit
indiv
A 1.2 0.6
B 1.5 0.5
Reading columns with a MultiIndex#
By specifying list of row locations for the header argument, you
can read in a MultiIndex for the columns. Specifying non-consecutive
rows will skip the intervening rows.
In [199]: from pandas._testing import makeCustomDataframe as mkdf
In [200]: df = mkdf(5, 3, r_idx_nlevels=2, c_idx_nlevels=4)
In [201]: df.to_csv("mi.csv")
In [202]: print(open("mi.csv").read())
C0,,C_l0_g0,C_l0_g1,C_l0_g2
C1,,C_l1_g0,C_l1_g1,C_l1_g2
C2,,C_l2_g0,C_l2_g1,C_l2_g2
C3,,C_l3_g0,C_l3_g1,C_l3_g2
R0,R1,,,
R_l0_g0,R_l1_g0,R0C0,R0C1,R0C2
R_l0_g1,R_l1_g1,R1C0,R1C1,R1C2
R_l0_g2,R_l1_g2,R2C0,R2C1,R2C2
R_l0_g3,R_l1_g3,R3C0,R3C1,R3C2
R_l0_g4,R_l1_g4,R4C0,R4C1,R4C2
In [203]: pd.read_csv("mi.csv", header=[0, 1, 2, 3], index_col=[0, 1])
Out[203]:
C0 C_l0_g0 C_l0_g1 C_l0_g2
C1 C_l1_g0 C_l1_g1 C_l1_g2
C2 C_l2_g0 C_l2_g1 C_l2_g2
C3 C_l3_g0 C_l3_g1 C_l3_g2
R0 R1
R_l0_g0 R_l1_g0 R0C0 R0C1 R0C2
R_l0_g1 R_l1_g1 R1C0 R1C1 R1C2
R_l0_g2 R_l1_g2 R2C0 R2C1 R2C2
R_l0_g3 R_l1_g3 R3C0 R3C1 R3C2
R_l0_g4 R_l1_g4 R4C0 R4C1 R4C2
read_csv is also able to interpret a more common format
of multi-column indices.
In [204]: data = ",a,a,a,b,c,c\n,q,r,s,t,u,v\none,1,2,3,4,5,6\ntwo,7,8,9,10,11,12"
In [205]: print(data)
,a,a,a,b,c,c
,q,r,s,t,u,v
one,1,2,3,4,5,6
two,7,8,9,10,11,12
In [206]: with open("mi2.csv", "w") as fh:
.....: fh.write(data)
.....:
In [207]: pd.read_csv("mi2.csv", header=[0, 1], index_col=0)
Out[207]:
a b c
q r s t u v
one 1 2 3 4 5 6
two 7 8 9 10 11 12
Note
If an index_col is not specified (e.g. you don’t have an index, or wrote it
with df.to_csv(..., index=False)), then any names on the columns index will
be lost.
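A minimal sketch of this note (the file name is hypothetical):
import pandas as pd

df = pd.DataFrame({"a": [1], "b": [2]})
df.columns.name = "cols"  # a name on the columns index

df.to_csv("no_index.csv", index=False)
pd.read_csv("no_index.csv").columns.name  # None: the name is lost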
Automatically “sniffing” the delimiter#
read_csv is capable of inferring delimited (not necessarily
comma-separated) files, as pandas uses the csv.Sniffer
class of the csv module. For this, you have to specify sep=None.
In [208]: df = pd.DataFrame(np.random.randn(10, 4))
In [209]: df.to_csv("tmp.csv", sep="|")
In [210]: df.to_csv("tmp2.csv", sep=":")
In [211]: pd.read_csv("tmp2.csv", sep=None, engine="python")
Out[211]:
Unnamed: 0 0 1 2 3
0 0 0.469112 -0.282863 -1.509059 -1.135632
1 1 1.212112 -0.173215 0.119209 -1.044236
2 2 -0.861849 -2.104569 -0.494929 1.071804
3 3 0.721555 -0.706771 -1.039575 0.271860
4 4 -0.424972 0.567020 0.276232 -1.087401
5 5 -0.673690 0.113648 -1.478427 0.524988
6 6 0.404705 0.577046 -1.715002 -1.039268
7 7 -0.370647 -1.157892 -1.344312 0.844885
8 8 1.075770 -0.109050 1.643563 -1.469388
9 9 0.357021 -0.674600 -1.776904 -0.968914
Reading multiple files to create a single DataFrame#
It’s best to use concat() to combine multiple files.
See the cookbook for an example.
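A minimal sketch, assuming the files share the same columns (the glob pattern is hypothetical):
import glob

import pandas as pd

files = sorted(glob.glob("data_part*.csv"))
df = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)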
Iterating through files chunk by chunk#
Suppose you wish to iterate through a (potentially very large) file lazily
rather than reading the entire file into memory, such as the following:
In [212]: df = pd.DataFrame(np.random.randn(10, 4))
In [213]: df.to_csv("tmp.csv", sep="|")
In [214]: table = pd.read_csv("tmp.csv", sep="|")
In [215]: table
Out[215]:
Unnamed: 0 0 1 2 3
0 0 -1.294524 0.413738 0.276662 -0.472035
1 1 -0.013960 -0.362543 -0.006154 -0.923061
2 2 0.895717 0.805244 -1.206412 2.565646
3 3 1.431256 1.340309 -1.170299 -0.226169
4 4 0.410835 0.813850 0.132003 -0.827317
5 5 -0.076467 -1.187678 1.130127 -1.436737
6 6 -1.413681 1.607920 1.024180 0.569605
7 7 0.875906 -2.211372 0.974466 -2.006747
8 8 -0.410001 -0.078638 0.545952 -1.219217
9 9 -1.226825 0.769804 -1.281247 -0.727707
By specifying a chunksize to read_csv, the return
value will be an iterable object of type TextFileReader:
In [216]: with pd.read_csv("tmp.csv", sep="|", chunksize=4) as reader:
.....: reader
.....: for chunk in reader:
.....: print(chunk)
.....:
Unnamed: 0 0 1 2 3
0 0 -1.294524 0.413738 0.276662 -0.472035
1 1 -0.013960 -0.362543 -0.006154 -0.923061
2 2 0.895717 0.805244 -1.206412 2.565646
3 3 1.431256 1.340309 -1.170299 -0.226169
Unnamed: 0 0 1 2 3
4 4 0.410835 0.813850 0.132003 -0.827317
5 5 -0.076467 -1.187678 1.130127 -1.436737
6 6 -1.413681 1.607920 1.024180 0.569605
7 7 0.875906 -2.211372 0.974466 -2.006747
Unnamed: 0 0 1 2 3
8 8 -0.410001 -0.078638 0.545952 -1.219217
9 9 -1.226825 0.769804 -1.281247 -0.727707
Changed in version 1.2: read_csv/json/sas return a context-manager when iterating through a file.
Specifying iterator=True will also return the TextFileReader object:
In [217]: with pd.read_csv("tmp.csv", sep="|", iterator=True) as reader:
.....: reader.get_chunk(5)
.....:
Specifying the parser engine#
pandas currently supports three engines: the C engine, the python engine, and an experimental
pyarrow engine (requires the pyarrow package). In general, the pyarrow engine is fastest
on larger workloads and is equivalent in speed to the C engine on most other workloads.
The python engine tends to be slower than the pyarrow and C engines on most workloads. However,
the pyarrow engine is much less robust than the C engine, and the C engine lacks a few features
compared to the Python engine.
Where possible, pandas uses the C parser (specified as engine='c'), but it may fall
back to Python if C-unsupported options are specified.
Currently, options unsupported by the C and pyarrow engines include:
sep other than a single character (e.g. regex separators)
skipfooter
sep=None with delim_whitespace=False
Specifying any of the above options will produce a ParserWarning unless the
python engine is selected explicitly using engine='python'.
Options that are unsupported by the pyarrow engine which are not covered by the list above include:
float_precision
chunksize
comment
nrows
thousands
memory_map
dialect
warn_bad_lines
error_bad_lines
on_bad_lines
delim_whitespace
quoting
lineterminator
converters
decimal
iterator
dayfirst
infer_datetime_format
verbose
skipinitialspace
low_memory
Specifying these options with engine='pyarrow' will raise a ValueError.
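As an illustration of the fallback rules above, a regex separator is unsupported by the C and pyarrow engines, so the python engine is selected explicitly to avoid the ParserWarning (the data is made up for this sketch):
from io import StringIO

import pandas as pd

data = "a| b|  c\n1| 2|  3\n4| 5|  6"

pd.read_csv(StringIO(data), sep=r"\|\s*", engine="python")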
Reading/writing remote files#
You can pass in a URL to read or write remote files to many of pandas’ IO
functions - the following example shows reading a CSV file:
df = pd.read_csv("https://download.bls.gov/pub/time.series/cu/cu.item", sep="\t")
New in version 1.3.0.
A custom header can be sent alongside HTTP(s) requests by passing a dictionary
of header key value mappings to the storage_options keyword argument as shown below:
headers = {"User-Agent": "pandas"}
df = pd.read_csv(
"https://download.bls.gov/pub/time.series/cu/cu.item",
sep="\t",
storage_options=headers
)
All URLs which are not local files or HTTP(s) are handled by
fsspec, if installed, and its various filesystem implementations
(including Amazon S3, Google Cloud, SSH, FTP, webHDFS…).
Some of these implementations will require additional packages to be
installed, for example
S3 URLs require the s3fs library:
df = pd.read_json("s3://pandas-test/adatafile.json")
When dealing with remote storage systems, you might need
extra configuration with environment variables or config files in
special locations. For example, to access data in your S3 bucket,
you will need to define credentials in one of the several ways listed in
the S3Fs documentation. The same is true
for several of the storage backends, and you should follow the links
at fsimpl1 for implementations built into fsspec and fsimpl2
for those not included in the main fsspec
distribution.
You can also pass parameters directly to the backend driver. For example,
if you do not have S3 credentials, you can still access public data by
specifying an anonymous connection, such as
New in version 1.2.0.
pd.read_csv(
"s3://ncei-wcsd-archive/data/processed/SH1305/18kHz/SaKe2013"
"-D20130523-T080854_to_SaKe2013-D20130523-T085643.csv",
storage_options={"anon": True},
)
fsspec also allows complex URLs, for accessing data in compressed
archives, local caching of files, and more. To locally cache the above
example, you would modify the call to
pd.read_csv(
"simplecache::s3://ncei-wcsd-archive/data/processed/SH1305/18kHz/"
"SaKe2013-D20130523-T080854_to_SaKe2013-D20130523-T085643.csv",
storage_options={"s3": {"anon": True}},
)
where we specify that the “anon” parameter is meant for the “s3” part of
the implementation, not to the caching implementation. Note that this caches to a temporary
directory for the duration of the session only, but you can also specify
a permanent store.
Writing out data#
Writing to CSV format#
The Series and DataFrame objects have an instance method to_csv which
allows storing the contents of the object as a comma-separated-values file. The
function takes a number of arguments. Only the first is required.
path_or_buf: A string path to the file to write or a file object. If a file object it must be opened with newline=''
sep : Field delimiter for the output file (default “,”)
na_rep: A string representation of a missing value (default ‘’)
float_format: Format string for floating point numbers
columns: Columns to write (default None)
header: Whether to write out the column names (default True)
index: whether to write row (index) names (default True)
index_label: Column label(s) for index column(s) if desired. If None
(default), and header and index are True, then the index names are
used. (A sequence should be given if the DataFrame uses MultiIndex).
mode : Python write mode, default ‘w’
encoding: a string representing the encoding to use if the contents are
non-ASCII, for Python versions prior to 3
lineterminator: Character sequence denoting line end (default os.linesep)
quoting: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL). Note that if you have set a float_format then floats are converted to strings and csv.QUOTE_NONNUMERIC will treat them as non-numeric
quotechar: Character used to quote fields (default ‘”’)
doublequote: Control quoting of quotechar in fields (default True)
escapechar: Character used to escape sep and quotechar when
appropriate (default None)
chunksize: Number of rows to write at a time
date_format: Format string for datetime objects
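A short sketch combining a few of these options (the output file name is hypothetical):
import pandas as pd

df = pd.DataFrame({"A": [1.5, None], "B": ["x", "y"]})

# Semicolon-delimited, missing values written as "NA",
# floats with two decimals, and no index column.
df.to_csv("out.csv", sep=";", na_rep="NA", float_format="%.2f", index=False)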
Writing a formatted string#
The DataFrame object has an instance method to_string which allows control
over the string representation of the object. All arguments are optional:
buf default None, for example a StringIO object
columns default None, which columns to write
col_space default None, minimum width of each column.
na_rep default NaN, representation of NA value
formatters default None, a dictionary (by column) of functions each of
which takes a single argument and returns a formatted string
float_format default None, a function which takes a single (float)
argument and returns a formatted string; to be applied to floats in the
DataFrame.
sparsify default True, set to False for a DataFrame with a hierarchical
index to print every MultiIndex key at each row.
index_names default True, will print the names of the indices
index default True, will print the index (ie, row labels)
header default True, will print the column labels
justify default left, will print column headers left- or
right-justified
The Series object also has a to_string method, but with only the buf,
na_rep, float_format arguments. There is also a length argument
which, if set to True, will additionally output the length of the Series.
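A minimal sketch of to_string with a few of these options (the data is made up for illustration):
import pandas as pd

df = pd.DataFrame({"A": [1.23456, 2.5], "B": [None, "text"]})

print(
    df.to_string(
        na_rep="-",  # show missing values as "-"
        float_format=lambda x: f"{x:.2f}",  # two decimals for floats
        index=False,  # omit the row labels
    )
)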
JSON#
Read and write JSON format files and strings.
Writing JSON#
A Series or DataFrame can be converted to a valid JSON string. Use to_json
with optional parameters:
path_or_buf : the pathname or buffer to write the output
This can be None in which case a JSON string is returned
orient :
Series:
default is index
allowed values are {split, records, index}
DataFrame:
default is columns
allowed values are {split, records, index, columns, values, table}
The format of the JSON string
split
dict like {index -> [index], columns -> [columns], data -> [values]}
records
list like [{column -> value}, … , {column -> value}]
index
dict like {index -> {column -> value}}
columns
dict like {column -> {index -> value}}
values
just the values array
table
adhering to the JSON Table Schema
date_format : string, type of date conversion, ‘epoch’ for timestamp, ‘iso’ for ISO8601.
double_precision : The number of decimal places to use when encoding floating point values, default 10.
force_ascii : force encoded string to be ASCII, default True.
date_unit : The time unit to encode to, governs timestamp and ISO8601 precision. One of ‘s’, ‘ms’, ‘us’ or ‘ns’ for seconds, milliseconds, microseconds and nanoseconds respectively. Default ‘ms’.
default_handler : The handler to call if an object cannot otherwise be converted to a suitable format for JSON. Takes a single argument, which is the object to convert, and returns a serializable object.
lines : If records orient, then will write each record per line as json.
Note NaN’s, NaT’s and None will be converted to null and datetime objects will be converted based on the date_format and date_unit parameters.
In [218]: dfj = pd.DataFrame(np.random.randn(5, 2), columns=list("AB"))
In [219]: json = dfj.to_json()
In [220]: json
Out[220]: '{"A":{"0":-0.1213062281,"1":0.6957746499,"2":0.9597255933,"3":-0.6199759194,"4":-0.7323393705},"B":{"0":-0.0978826728,"1":0.3417343559,"2":-1.1103361029,"3":0.1497483186,"4":0.6877383895}}'
Orient options#
There are a number of different options for the format of the resulting JSON
file / string. Consider the following DataFrame and Series:
In [221]: dfjo = pd.DataFrame(
.....: dict(A=range(1, 4), B=range(4, 7), C=range(7, 10)),
.....: columns=list("ABC"),
.....: index=list("xyz"),
.....: )
.....:
In [222]: dfjo
Out[222]:
A B C
x 1 4 7
y 2 5 8
z 3 6 9
In [223]: sjo = pd.Series(dict(x=15, y=16, z=17), name="D")
In [224]: sjo
Out[224]:
x 15
y 16
z 17
Name: D, dtype: int64
Column oriented (the default for DataFrame) serializes the data as
nested JSON objects with column labels acting as the primary index:
In [225]: dfjo.to_json(orient="columns")
Out[225]: '{"A":{"x":1,"y":2,"z":3},"B":{"x":4,"y":5,"z":6},"C":{"x":7,"y":8,"z":9}}'
# Not available for Series
Index oriented (the default for Series) similar to column oriented
but the index labels are now primary:
In [226]: dfjo.to_json(orient="index")
Out[226]: '{"x":{"A":1,"B":4,"C":7},"y":{"A":2,"B":5,"C":8},"z":{"A":3,"B":6,"C":9}}'
In [227]: sjo.to_json(orient="index")
Out[227]: '{"x":15,"y":16,"z":17}'
Record oriented serializes the data to a JSON array of column -> value records,
index labels are not included. This is useful for passing DataFrame data to plotting
libraries, for example the JavaScript library d3.js:
In [228]: dfjo.to_json(orient="records")
Out[228]: '[{"A":1,"B":4,"C":7},{"A":2,"B":5,"C":8},{"A":3,"B":6,"C":9}]'
In [229]: sjo.to_json(orient="records")
Out[229]: '[15,16,17]'
Value oriented is a bare-bones option which serializes to nested JSON arrays of
values only, column and index labels are not included:
In [230]: dfjo.to_json(orient="values")
Out[230]: '[[1,4,7],[2,5,8],[3,6,9]]'
# Not available for Series
Split oriented serializes to a JSON object containing separate entries for
values, index and columns. Name is also included for Series:
In [231]: dfjo.to_json(orient="split")
Out[231]: '{"columns":["A","B","C"],"index":["x","y","z"],"data":[[1,4,7],[2,5,8],[3,6,9]]}'
In [232]: sjo.to_json(orient="split")
Out[232]: '{"name":"D","index":["x","y","z"],"data":[15,16,17]}'
Table oriented serializes to the JSON Table Schema, allowing for the
preservation of metadata including but not limited to dtypes and index names.
Note
Any orient option that encodes to a JSON object will not preserve the ordering of
index and column labels during round-trip serialization. If you wish to preserve
label ordering use the split option as it uses ordered containers.
Date handling#
Writing in ISO date format:
In [233]: dfd = pd.DataFrame(np.random.randn(5, 2), columns=list("AB"))
In [234]: dfd["date"] = pd.Timestamp("20130101")
In [235]: dfd = dfd.sort_index(axis=1, ascending=False)
In [236]: json = dfd.to_json(date_format="iso")
In [237]: json
Out[237]: '{"date":{"0":"2013-01-01T00:00:00.000","1":"2013-01-01T00:00:00.000","2":"2013-01-01T00:00:00.000","3":"2013-01-01T00:00:00.000","4":"2013-01-01T00:00:00.000"},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'
Writing in ISO date format, with microseconds:
In [238]: json = dfd.to_json(date_format="iso", date_unit="us")
In [239]: json
Out[239]: '{"date":{"0":"2013-01-01T00:00:00.000000","1":"2013-01-01T00:00:00.000000","2":"2013-01-01T00:00:00.000000","3":"2013-01-01T00:00:00.000000","4":"2013-01-01T00:00:00.000000"},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'
Epoch timestamps, in seconds:
In [240]: json = dfd.to_json(date_format="epoch", date_unit="s")
In [241]: json
Out[241]: '{"date":{"0":1356998400,"1":1356998400,"2":1356998400,"3":1356998400,"4":1356998400},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'
Writing to a file, with a date index and a date column:
In [242]: dfj2 = dfj.copy()
In [243]: dfj2["date"] = pd.Timestamp("20130101")
In [244]: dfj2["ints"] = list(range(5))
In [245]: dfj2["bools"] = True
In [246]: dfj2.index = pd.date_range("20130101", periods=5)
In [247]: dfj2.to_json("test.json")
In [248]: with open("test.json") as fh:
.....: print(fh.read())
.....:
{"A":{"1356998400000":-0.1213062281,"1357084800000":0.6957746499,"1357171200000":0.9597255933,"1357257600000":-0.6199759194,"1357344000000":-0.7323393705},"B":{"1356998400000":-0.0978826728,"1357084800000":0.3417343559,"1357171200000":-1.1103361029,"1357257600000":0.1497483186,"1357344000000":0.6877383895},"date":{"1356998400000":1356998400000,"1357084800000":1356998400000,"1357171200000":1356998400000,"1357257600000":1356998400000,"1357344000000":1356998400000},"ints":{"1356998400000":0,"1357084800000":1,"1357171200000":2,"1357257600000":3,"1357344000000":4},"bools":{"1356998400000":true,"1357084800000":true,"1357171200000":true,"1357257600000":true,"1357344000000":true}}
Fallback behavior#
If the JSON serializer cannot handle the container contents directly it will
fall back in the following manner:
if the dtype is unsupported (e.g. np.complex_) then the default_handler, if provided, will be called
for each value, otherwise an exception is raised.
if an object is unsupported it will attempt the following:
check if the object has defined a toDict method and call it.
A toDict method should return a dict which will then be JSON serialized.
invoke the default_handler if one was provided.
convert the object to a dict by traversing its contents. However this will often fail
with an OverflowError or give unexpected results.
In general the best approach for unsupported objects or dtypes is to provide a default_handler.
For example:
>>> DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json() # raises
RuntimeError: Unhandled numpy dtype 15
can be dealt with by specifying a simple default_handler:
In [249]: pd.DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json(default_handler=str)
Out[249]: '{"0":{"0":"(1+0j)","1":"(2+0j)","2":"(1+2j)"}}'
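The toDict fallback described above can be sketched roughly as follows; the Point class is purely illustrative and the exact JSON output is not shown here:
# An otherwise unsupported object exposing a toDict method; the serializer
# should fall back to the dict it returns.
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def toDict(self):
        return {"x": self.x, "y": self.y}

pd.Series([Point(1, 2), Point(3, 4)]).to_json()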
Reading JSON#
Reading a JSON string to pandas object can take a number of parameters.
The parser will try to parse a DataFrame if typ is not supplied or
is None. To explicitly force Series parsing, pass typ=series
filepath_or_buffer : a VALID JSON string or file handle / StringIO. The string could be
a URL. Valid URL schemes include http, ftp, S3, and file. For file URLs, a host
is expected. For instance, a local file could be
file://localhost/path/to/table.json
typ : type of object to recover (series or frame), default ‘frame’
orient :
Series :
default is index
allowed values are {split, records, index}
DataFrame
default is columns
allowed values are {split, records, index, columns, values, table}
The format of the JSON string
split
dict like {index -> [index], columns -> [columns], data -> [values]}
records
list like [{column -> value}, … , {column -> value}]
index
dict like {index -> {column -> value}}
columns
dict like {column -> {index -> value}}
values
just the values array
table
adhering to the JSON Table Schema
dtype : if True, infer dtypes; if a dict of column to dtype, then use those; if False, then don’t infer dtypes at all. Default is True; applies only to the data.
convert_axes : boolean, try to convert the axes to the proper dtypes, default is True
convert_dates : a list of columns to parse for dates; If True, then try to parse date-like columns, default is True.
keep_default_dates : boolean, default True. If parsing dates, then parse the default date-like columns.
numpy : direct decoding to NumPy arrays. default is False;
Supports numeric data only, although labels may be non-numeric. Also note that the JSON ordering MUST be the same for each term if numpy=True.
precise_float : boolean, default False. Set to enable usage of higher precision (strtod) function when decoding string to double values. Default (False) is to use fast but less precise builtin functionality.
date_unit : string, the timestamp unit to detect if converting dates. Default
None. By default the timestamp precision will be detected, if this is not desired
then pass one of ‘s’, ‘ms’, ‘us’ or ‘ns’ to force timestamp precision to
seconds, milliseconds, microseconds or nanoseconds respectively.
lines : reads file as one json object per line.
encoding : The encoding to use to decode py3 bytes.
chunksize : when used in combination with lines=True, return a JsonReader which reads in chunksize lines per iteration.
The parser will raise one of ValueError/TypeError/AssertionError if the JSON is not parseable.
If a non-default orient was used when encoding to JSON, be sure to pass the same
option here so that decoding produces sensible results; see Orient Options for an
overview.
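For instance, a Series written with a non-default orient should be read back with the same orient and with typ="series" (a minimal sketch):
# Round trip a Series through the "split" orient
s = pd.Series([1, 2, 3], index=list("abc"))
json_str = s.to_json(orient="split")
s2 = pd.read_json(json_str, typ="series", orient="split")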
Data conversion#
The default of convert_axes=True, dtype=True, and convert_dates=True
will try to parse the axes, and all of the data into appropriate types,
including dates. If you need to override specific dtypes, pass a dict to
dtype. convert_axes should only be set to False if you need to
preserve string-like numbers (e.g. ‘1’, ‘2’) in an axis.
Note
Large integer values may be converted to dates if convert_dates=True and the data and / or column labels appear ‘date-like’. The exact threshold depends on the date_unit specified. ‘date-like’ means that the column label meets one of the following criteria:
it ends with '_at'
it ends with '_time'
it begins with 'timestamp'
it is 'modified'
it is 'date'
Warning
When reading JSON data, automatic coercing into dtypes has some quirks:
an index can be reconstructed in a different order from serialization, that is, the returned order is not guaranteed to be the same as before serialization
a column that was float data will be converted to integer if it can be done safely, e.g. a column of 1.
bool columns will be converted to integer on reconstruction
Thus there are times where you may want to specify specific dtypes via the dtype keyword argument.
Reading from a JSON string:
In [250]: pd.read_json(json)
Out[250]:
date B A
0 2013-01-01 0.403310 0.176444
1 2013-01-01 0.301624 -0.154951
2 2013-01-01 -1.369849 -2.179861
3 2013-01-01 1.462696 -0.954208
4 2013-01-01 -0.826591 -1.743161
Reading from a file:
In [251]: pd.read_json("test.json")
Out[251]:
A B date ints bools
2013-01-01 -0.121306 -0.097883 2013-01-01 0 True
2013-01-02 0.695775 0.341734 2013-01-01 1 True
2013-01-03 0.959726 -1.110336 2013-01-01 2 True
2013-01-04 -0.619976 0.149748 2013-01-01 3 True
2013-01-05 -0.732339 0.687738 2013-01-01 4 True
Don’t convert any data (but still convert axes and dates):
In [252]: pd.read_json("test.json", dtype=object).dtypes
Out[252]:
A object
B object
date object
ints object
bools object
dtype: object
Specify dtypes for conversion:
In [253]: pd.read_json("test.json", dtype={"A": "float32", "bools": "int8"}).dtypes
Out[253]:
A float32
B float64
date datetime64[ns]
ints int64
bools int8
dtype: object
Preserve string indices:
In [254]: si = pd.DataFrame(
.....: np.zeros((4, 4)), columns=list(range(4)), index=[str(i) for i in range(4)]
.....: )
.....:
In [255]: si
Out[255]:
0 1 2 3
0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0
In [256]: si.index
Out[256]: Index(['0', '1', '2', '3'], dtype='object')
In [257]: si.columns
Out[257]: Int64Index([0, 1, 2, 3], dtype='int64')
In [258]: json = si.to_json()
In [259]: sij = pd.read_json(json, convert_axes=False)
In [260]: sij
Out[260]:
0 1 2 3
0 0 0 0 0
1 0 0 0 0
2 0 0 0 0
3 0 0 0 0
In [261]: sij.index
Out[261]: Index(['0', '1', '2', '3'], dtype='object')
In [262]: sij.columns
Out[262]: Index(['0', '1', '2', '3'], dtype='object')
Dates written in nanoseconds need to be read back in nanoseconds:
In [263]: json = dfj2.to_json(date_unit="ns")
# Try to parse timestamps as milliseconds -> Won't Work
In [264]: dfju = pd.read_json(json, date_unit="ms")
In [265]: dfju
Out[265]:
A B date ints bools
1356998400000000000 -0.121306 -0.097883 1356998400000000000 0 True
1357084800000000000 0.695775 0.341734 1356998400000000000 1 True
1357171200000000000 0.959726 -1.110336 1356998400000000000 2 True
1357257600000000000 -0.619976 0.149748 1356998400000000000 3 True
1357344000000000000 -0.732339 0.687738 1356998400000000000 4 True
# Let pandas detect the correct precision
In [266]: dfju = pd.read_json(json)
In [267]: dfju
Out[267]:
A B date ints bools
2013-01-01 -0.121306 -0.097883 2013-01-01 0 True
2013-01-02 0.695775 0.341734 2013-01-01 1 True
2013-01-03 0.959726 -1.110336 2013-01-01 2 True
2013-01-04 -0.619976 0.149748 2013-01-01 3 True
2013-01-05 -0.732339 0.687738 2013-01-01 4 True
# Or specify that all timestamps are in nanoseconds
In [268]: dfju = pd.read_json(json, date_unit="ns")
In [269]: dfju
Out[269]:
A B date ints bools
2013-01-01 -0.121306 -0.097883 2013-01-01 0 True
2013-01-02 0.695775 0.341734 2013-01-01 1 True
2013-01-03 0.959726 -1.110336 2013-01-01 2 True
2013-01-04 -0.619976 0.149748 2013-01-01 3 True
2013-01-05 -0.732339 0.687738 2013-01-01 4 True
The Numpy parameter#
Note
This param has been deprecated as of version 1.0.0 and will raise a FutureWarning.
This supports numeric data only. Index and column labels may be non-numeric, e.g. strings, dates etc.
If numpy=True is passed to read_json an attempt will be made to sniff
an appropriate dtype during deserialization and to subsequently decode directly
to NumPy arrays, bypassing the need for intermediate Python objects.
This can provide speedups if you are deserialising a large amount of numeric
data:
In [270]: randfloats = np.random.uniform(-100, 1000, 10000)
In [271]: randfloats.shape = (1000, 10)
In [272]: dffloats = pd.DataFrame(randfloats, columns=list("ABCDEFGHIJ"))
In [273]: jsonfloats = dffloats.to_json()
In [274]: %timeit pd.read_json(jsonfloats)
7.91 ms +- 77.3 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [275]: %timeit pd.read_json(jsonfloats, numpy=True)
5.71 ms +- 333 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
The speedup is less noticeable for smaller datasets:
In [276]: jsonfloats = dffloats.head(100).to_json()
In [277]: %timeit pd.read_json(jsonfloats)
4.46 ms +- 25.9 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [278]: %timeit pd.read_json(jsonfloats, numpy=True)
4.09 ms +- 32.3 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
Warning
Direct NumPy decoding makes a number of assumptions and may fail or produce
unexpected output if these assumptions are not satisfied:
data is numeric.
data is uniform. The dtype is sniffed from the first value decoded.
A ValueError may be raised, or incorrect output may be produced
if this condition is not satisfied.
labels are ordered. Labels are only read from the first container, it is assumed
that each subsequent row / column has been encoded in the same order. This should be satisfied if the
data was encoded using to_json but may not be the case if the JSON
is from another source.
Normalization#
pandas provides a utility function to take a dict or list of dicts and normalize this semi-structured data
into a flat table.
In [279]: data = [
.....: {"id": 1, "name": {"first": "Coleen", "last": "Volk"}},
.....: {"name": {"given": "Mark", "family": "Regner"}},
.....: {"id": 2, "name": "Faye Raker"},
.....: ]
.....:
In [280]: pd.json_normalize(data)
Out[280]:
id name.first name.last name.given name.family name
0 1.0 Coleen Volk NaN NaN NaN
1 NaN NaN NaN Mark Regner NaN
2 2.0 NaN NaN NaN NaN Faye Raker
In [281]: data = [
.....: {
.....: "state": "Florida",
.....: "shortname": "FL",
.....: "info": {"governor": "Rick Scott"},
.....: "county": [
.....: {"name": "Dade", "population": 12345},
.....: {"name": "Broward", "population": 40000},
.....: {"name": "Palm Beach", "population": 60000},
.....: ],
.....: },
.....: {
.....: "state": "Ohio",
.....: "shortname": "OH",
.....: "info": {"governor": "John Kasich"},
.....: "county": [
.....: {"name": "Summit", "population": 1234},
.....: {"name": "Cuyahoga", "population": 1337},
.....: ],
.....: },
.....: ]
.....:
In [282]: pd.json_normalize(data, "county", ["state", "shortname", ["info", "governor"]])
Out[282]:
name population state shortname info.governor
0 Dade 12345 Florida FL Rick Scott
1 Broward 40000 Florida FL Rick Scott
2 Palm Beach 60000 Florida FL Rick Scott
3 Summit 1234 Ohio OH John Kasich
4 Cuyahoga 1337 Ohio OH John Kasich
The max_level parameter provides more control over which level to end normalization.
With max_level=1 the following snippet normalizes until 1st nesting level of the provided dict.
In [283]: data = [
.....: {
.....: "CreatedBy": {"Name": "User001"},
.....: "Lookup": {
.....: "TextField": "Some text",
.....: "UserField": {"Id": "ID001", "Name": "Name001"},
.....: },
.....: "Image": {"a": "b"},
.....: }
.....: ]
.....:
In [284]: pd.json_normalize(data, max_level=1)
Out[284]:
CreatedBy.Name Lookup.TextField Lookup.UserField Image.a
0 User001 Some text {'Id': 'ID001', 'Name': 'Name001'} b
Line delimited json#
pandas is able to read and write line-delimited json files that are common in data processing pipelines
using Hadoop or Spark.
For line-delimited json files, pandas can also return an iterator which reads in chunksize lines at a time. This can be useful for large files or to read from a stream.
In [285]: jsonl = """
.....: {"a": 1, "b": 2}
.....: {"a": 3, "b": 4}
.....: """
.....:
In [286]: df = pd.read_json(jsonl, lines=True)
In [287]: df
Out[287]:
a b
0 1 2
1 3 4
In [288]: df.to_json(orient="records", lines=True)
Out[288]: '{"a":1,"b":2}\n{"a":3,"b":4}\n'
# reader is an iterator that returns ``chunksize`` lines each iteration
In [289]: with pd.read_json(StringIO(jsonl), lines=True, chunksize=1) as reader:
.....: reader
.....: for chunk in reader:
.....: print(chunk)
.....:
Empty DataFrame
Columns: []
Index: []
a b
0 1 2
a b
1 3 4
Table schema#
Table Schema is a spec for describing tabular datasets as a JSON
object. The JSON includes information on the field names, types, and
other attributes. You can use the orient table to build
a JSON string with two fields, schema and data.
In [290]: df = pd.DataFrame(
.....: {
.....: "A": [1, 2, 3],
.....: "B": ["a", "b", "c"],
.....: "C": pd.date_range("2016-01-01", freq="d", periods=3),
.....: },
.....: index=pd.Index(range(3), name="idx"),
.....: )
.....:
In [291]: df
Out[291]:
A B C
idx
0 1 a 2016-01-01
1 2 b 2016-01-02
2 3 c 2016-01-03
In [292]: df.to_json(orient="table", date_format="iso")
Out[292]: '{"schema":{"fields":[{"name":"idx","type":"integer"},{"name":"A","type":"integer"},{"name":"B","type":"string"},{"name":"C","type":"datetime"}],"primaryKey":["idx"],"pandas_version":"1.4.0"},"data":[{"idx":0,"A":1,"B":"a","C":"2016-01-01T00:00:00.000"},{"idx":1,"A":2,"B":"b","C":"2016-01-02T00:00:00.000"},{"idx":2,"A":3,"B":"c","C":"2016-01-03T00:00:00.000"}]}'
The schema field contains the fields key, which itself contains
a list of column name to type pairs, including the Index or MultiIndex
(see below for a list of types).
The schema field also contains a primaryKey field if the (Multi)index
is unique.
The second field, data, contains the serialized data with the records
orient.
The index is included, and any datetimes are ISO 8601 formatted, as required
by the Table Schema spec.
The full list of supported types is described in the Table Schema
spec. This table shows the mapping from pandas types:
pandas type         Table Schema type
int64               integer
float64             number
bool                boolean
datetime64[ns]      datetime
timedelta64[ns]     duration
categorical         any
object              str
A few notes on the generated table schema:
The schema object contains a pandas_version field. This contains
the version of pandas’ dialect of the schema, and will be incremented
with each revision.
All dates are converted to UTC when serializing; timezone-naive values
are treated as UTC with an offset of 0.
In [293]: from pandas.io.json import build_table_schema
In [294]: s = pd.Series(pd.date_range("2016", periods=4))
In [295]: build_table_schema(s)
Out[295]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime'}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
datetimes with a time zone (before serializing) include an additional field
tz with the time zone name (e.g. 'US/Central').
In [296]: s_tz = pd.Series(pd.date_range("2016", periods=12, tz="US/Central"))
In [297]: build_table_schema(s_tz)
Out[297]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime', 'tz': 'US/Central'}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
Periods are converted to timestamps before serialization, and so have the
same behavior of being converted to UTC. In addition, periods will contain
an additional field freq with the period’s frequency, e.g. 'A-DEC'.
In [298]: s_per = pd.Series(1, index=pd.period_range("2016", freq="A-DEC", periods=4))
In [299]: build_table_schema(s_per)
Out[299]:
{'fields': [{'name': 'index', 'type': 'datetime', 'freq': 'A-DEC'},
{'name': 'values', 'type': 'integer'}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
Categoricals use the any type and an enum constraint listing
the set of possible values. Additionally, an ordered field is included:
In [300]: s_cat = pd.Series(pd.Categorical(["a", "b", "a"]))
In [301]: build_table_schema(s_cat)
Out[301]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values',
'type': 'any',
'constraints': {'enum': ['a', 'b']},
'ordered': False}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
A primaryKey field, containing an array of labels, is included
if the index is unique:
In [302]: s_dupe = pd.Series([1, 2], index=[1, 1])
In [303]: build_table_schema(s_dupe)
Out[303]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'pandas_version': '1.4.0'}
The primaryKey behavior is the same with MultiIndexes, but in this
case the primaryKey is an array:
In [304]: s_multi = pd.Series(1, index=pd.MultiIndex.from_product([("a", "b"), (0, 1)]))
In [305]: build_table_schema(s_multi)
Out[305]:
{'fields': [{'name': 'level_0', 'type': 'string'},
{'name': 'level_1', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'primaryKey': FrozenList(['level_0', 'level_1']),
'pandas_version': '1.4.0'}
The default naming roughly follows these rules:
For Series, the object.name is used. If that is None, then the
name is values
For DataFrames, the stringified version of the column name is used
For Index (not MultiIndex), index.name is used, with a
fallback to index if that is None.
For MultiIndex, mi.names is used. If any level has no name,
then level_<i> is used.
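For example, a named Series uses its name rather than values in the generated schema (a small sketch):
# The field name follows the Series name instead of the default "values"
s_named = pd.Series([1, 2, 3], name="population")
build_table_schema(s_named)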
read_json also accepts orient='table' as an argument. This allows for
the preservation of metadata such as dtypes and index names in a
round-trippable manner.
In [306]: df = pd.DataFrame(
.....: {
.....: "foo": [1, 2, 3, 4],
.....: "bar": ["a", "b", "c", "d"],
.....: "baz": pd.date_range("2018-01-01", freq="d", periods=4),
.....: "qux": pd.Categorical(["a", "b", "c", "c"]),
.....: },
.....: index=pd.Index(range(4), name="idx"),
.....: )
.....:
In [307]: df
Out[307]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
In [308]: df.dtypes
Out[308]:
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object
In [309]: df.to_json("test.json", orient="table")
In [310]: new_df = pd.read_json("test.json", orient="table")
In [311]: new_df
Out[311]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
In [312]: new_df.dtypes
Out[312]:
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object
Please note that the literal string ‘index’ as the name of an Index
is not round-trippable, nor are any names beginning with 'level_' within a
MultiIndex. These are used by default in DataFrame.to_json() to
indicate missing values and the subsequent read cannot distinguish the intent.
In [313]: df.index.name = "index"
In [314]: df.to_json("test.json", orient="table")
In [315]: new_df = pd.read_json("test.json", orient="table")
In [316]: print(new_df.index.name)
None
When using orient='table' along with user-defined ExtensionArray,
the generated schema will contain an additional extDtype key in the respective
fields element. This extra key is not standard but does enable JSON roundtrips
for extension types (e.g. read_json(df.to_json(orient="table"), orient="table")).
The extDtype key carries the name of the extension; if you have properly registered
the ExtensionDtype, pandas will use that name to perform a lookup into the registry
and re-convert the serialized data into your custom dtype.
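As a quick sketch, the built-in nullable Int64 dtype (an extension dtype registered by default) should round-trip this way:
# The generated schema should carry extDtype for column "a",
# so the dtype survives the round trip
df_ext = pd.DataFrame({"a": pd.array([1, 2, None], dtype="Int64")})
json_str = df_ext.to_json(orient="table")
pd.read_json(json_str, orient="table").dtypes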
HTML#
Reading HTML content#
Warning
We highly encourage you to read the HTML Table Parsing gotchas
below regarding the issues surrounding the BeautifulSoup4/html5lib/lxml parsers.
The top-level read_html() function can accept an HTML
string/file/URL and will parse HTML tables into a list of pandas DataFrames.
Let’s look at a few examples.
Note
read_html returns a list of DataFrame objects, even if there is
only a single table contained in the HTML content.
Read a URL with no options:
In [320]: url = "https://www.fdic.gov/resources/resolutions/bank-failures/failed-bank-list"
In [321]: pd.read_html(url)
Out[321]:
[ Bank NameBank CityCity StateSt ... Acquiring InstitutionAI Closing DateClosing FundFund
0 Almena State Bank Almena KS ... Equity Bank October 23, 2020 10538
1 First City Bank of Florida Fort Walton Beach FL ... United Fidelity Bank, fsb October 16, 2020 10537
2 The First State Bank Barboursville WV ... MVB Bank, Inc. April 3, 2020 10536
3 Ericson State Bank Ericson NE ... Farmers and Merchants Bank February 14, 2020 10535
4 City National Bank of New Jersey Newark NJ ... Industrial Bank November 1, 2019 10534
.. ... ... ... ... ... ... ...
558 Superior Bank, FSB Hinsdale IL ... Superior Federal, FSB July 27, 2001 6004
559 Malta National Bank Malta OH ... North Valley Bank May 3, 2001 4648
560 First Alliance Bank & Trust Co. Manchester NH ... Southern New Hampshire Bank & Trust February 2, 2001 4647
561 National State Bank of Metropolis Metropolis IL ... Banterra Bank of Marion December 14, 2000 4646
562 Bank of Honolulu Honolulu HI ... Bank of the Orient October 13, 2000 4645
[563 rows x 7 columns]]
Note
The data from the above URL changes every Monday so the resulting data above may be slightly different.
The following example writes HTML content to a file and passes the file path to
read_html:
In [317]: html_str = """
.....: <table>
.....: <tr>
.....: <th>A</th>
.....: <th colspan="1">B</th>
.....: <th rowspan="1">C</th>
.....: </tr>
.....: <tr>
.....: <td>a</td>
.....: <td>b</td>
.....: <td>c</td>
.....: </tr>
.....: </table>
.....: """
.....:
In [318]: with open("tmp.html", "w") as f:
.....: f.write(html_str)
.....:
In [319]: df = pd.read_html("tmp.html")
In [320]: df[0]
Out[320]:
A B C
0 a b c
You can even pass in an instance of StringIO if you so desire:
In [321]: dfs = pd.read_html(StringIO(html_str))
In [322]: dfs[0]
Out[322]:
A B C
0 a b c
Note
The following examples are not run by the IPython evaluator due to the fact
that having so many network-accessing functions slows down the documentation
build. If you spot an error or an example that doesn’t run, please do not
hesitate to report it over on pandas GitHub issues page.
Read a URL and match a table that contains specific text:
match = "Metcalf Bank"
df_list = pd.read_html(url, match=match)
Specify a header row (by default <th> or <td> elements located within a
<thead> are used to form the column index; if multiple rows are contained within
<thead>, then a MultiIndex is created); if specified, the header row is taken
from the data minus the parsed header elements (<th> elements).
dfs = pd.read_html(url, header=0)
Specify an index column:
dfs = pd.read_html(url, index_col=0)
Specify a number of rows to skip:
dfs = pd.read_html(url, skiprows=0)
Specify a number of rows to skip using a list (range works
as well):
dfs = pd.read_html(url, skiprows=range(2))
Specify an HTML attribute:
dfs1 = pd.read_html(url, attrs={"id": "table"})
dfs2 = pd.read_html(url, attrs={"class": "sortable"})
print(np.array_equal(dfs1[0], dfs2[0])) # Should be True
Specify values that should be converted to NaN:
dfs = pd.read_html(url, na_values=["No Acquirer"])
Specify whether to keep the default set of NaN values:
dfs = pd.read_html(url, keep_default_na=False)
Specify converters for columns. This is useful for numerical text data that has
leading zeros. By default columns that are numerical are cast to numeric
types and the leading zeros are lost. To avoid this, we can convert these
columns to strings.
url_mcc = "https://en.wikipedia.org/wiki/Mobile_country_code"
dfs = pd.read_html(
url_mcc,
match="Telekom Albania",
header=0,
converters={"MNC": str},
)
Use some combination of the above:
dfs = pd.read_html(url, match="Metcalf Bank", index_col=0)
Read in pandas to_html output (with some loss of floating point precision):
df = pd.DataFrame(np.random.randn(2, 2))
s = df.to_html(float_format="{0:.40g}".format)
dfin = pd.read_html(s, index_col=0)
The lxml backend will raise an error on a failed parse if that is the only
parser you provide. If you only have a single parser you can provide just a
string, but it is considered good practice to pass a list with one string if,
for example, the function expects a sequence of strings. You may use:
dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor=["lxml"])
Or you could pass flavor='lxml' without a list:
dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor="lxml")
However, if you have bs4 and html5lib installed and pass None or ['lxml',
'bs4'] then the parse will most likely succeed. Note that as soon as a parse
succeeds, the function will return.
dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor=["lxml", "bs4"])
Links can be extracted from cells along with the text using extract_links="all".
In [323]: html_table = """
.....: <table>
.....: <tr>
.....: <th>GitHub</th>
.....: </tr>
.....: <tr>
.....: <td><a href="https://github.com/pandas-dev/pandas">pandas</a></td>
.....: </tr>
.....: </table>
.....: """
.....:
In [324]: df = pd.read_html(
.....: html_table,
.....: extract_links="all"
.....: )[0]
.....:
In [325]: df
Out[325]:
(GitHub, None)
0 (pandas, https://github.com/pandas-dev/pandas)
In [326]: df[("GitHub", None)]
Out[326]:
0 (pandas, https://github.com/pandas-dev/pandas)
Name: (GitHub, None), dtype: object
In [327]: df[("GitHub", None)].str[1]
Out[327]:
0 https://github.com/pandas-dev/pandas
Name: (GitHub, None), dtype: object
New in version 1.5.0.
Writing to HTML files#
DataFrame objects have an instance method to_html which renders the
contents of the DataFrame as an HTML table. The function arguments are as
in the method to_string described above.
Note
Not all of the possible options for DataFrame.to_html are shown here for
brevity’s sake. See to_html() for the
full set of options.
Note
In an environment that supports HTML rendering, such as a Jupyter Notebook, display(HTML(...))
will render the raw HTML into the environment.
In [328]: from IPython.display import display, HTML
In [329]: df = pd.DataFrame(np.random.randn(2, 2))
In [330]: df
Out[330]:
0 1
0 0.070319 1.773907
1 0.253908 0.414581
In [331]: html = df.to_html()
In [332]: print(html) # raw html
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.070319</td>
<td>1.773907</td>
</tr>
<tr>
<th>1</th>
<td>0.253908</td>
<td>0.414581</td>
</tr>
</tbody>
</table>
In [333]: display(HTML(html))
<IPython.core.display.HTML object>
The columns argument will limit the columns shown:
In [334]: html = df.to_html(columns=[0])
In [335]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.070319</td>
</tr>
<tr>
<th>1</th>
<td>0.253908</td>
</tr>
</tbody>
</table>
In [336]: display(HTML(html))
<IPython.core.display.HTML object>
float_format takes a Python callable to control the precision of floating
point values:
In [337]: html = df.to_html(float_format="{0:.10f}".format)
In [338]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.0703192665</td>
<td>1.7739074228</td>
</tr>
<tr>
<th>1</th>
<td>0.2539083433</td>
<td>0.4145805920</td>
</tr>
</tbody>
</table>
In [339]: display(HTML(html))
<IPython.core.display.HTML object>
bold_rows will make the row labels bold by default, but you can turn that
off:
In [340]: html = df.to_html(bold_rows=False)
In [341]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0.070319</td>
<td>1.773907</td>
</tr>
<tr>
<td>1</td>
<td>0.253908</td>
<td>0.414581</td>
</tr>
</tbody>
</table>
In [342]: display(HTML(html))
<IPython.core.display.HTML object>
The classes argument provides the ability to give the resulting HTML
table CSS classes. Note that these classes are appended to the existing
'dataframe' class.
In [343]: print(df.to_html(classes=["awesome_table_class", "even_more_awesome_class"]))
<table border="1" class="dataframe awesome_table_class even_more_awesome_class">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.070319</td>
<td>1.773907</td>
</tr>
<tr>
<th>1</th>
<td>0.253908</td>
<td>0.414581</td>
</tr>
</tbody>
</table>
The render_links argument provides the ability to add hyperlinks to cells
that contain URLs.
In [344]: url_df = pd.DataFrame(
.....: {
.....: "name": ["Python", "pandas"],
.....: "url": ["https://www.python.org/", "https://pandas.pydata.org"],
.....: }
.....: )
.....:
In [345]: html = url_df.to_html(render_links=True)
In [346]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>url</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Python</td>
<td><a href="https://www.python.org/" target="_blank">https://www.python.org/</a></td>
</tr>
<tr>
<th>1</th>
<td>pandas</td>
<td><a href="https://pandas.pydata.org" target="_blank">https://pandas.pydata.org</a></td>
</tr>
</tbody>
</table>
In [347]: display(HTML(html))
<IPython.core.display.HTML object>
Finally, the escape argument allows you to control whether the
“<”, “>” and “&” characters are escaped in the resulting HTML (by default it is
True). So to get the HTML without escaped characters, pass escape=False
In [348]: df = pd.DataFrame({"a": list("&<>"), "b": np.random.randn(3)})
Escaped:
In [349]: html = df.to_html()
In [350]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&</td>
<td>0.842321</td>
</tr>
<tr>
<th>1</th>
<td><</td>
<td>0.211337</td>
</tr>
<tr>
<th>2</th>
<td>></td>
<td>-1.055427</td>
</tr>
</tbody>
</table>
In [351]: display(HTML(html))
<IPython.core.display.HTML object>
Not escaped:
In [352]: html = df.to_html(escape=False)
In [353]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&</td>
<td>0.842321</td>
</tr>
<tr>
<th>1</th>
<td><</td>
<td>0.211337</td>
</tr>
<tr>
<th>2</th>
<td>></td>
<td>-1.055427</td>
</tr>
</tbody>
</table>
In [354]: display(HTML(html))
<IPython.core.display.HTML object>
Note
Some browsers may not show a difference in the rendering of the previous two
HTML tables.
HTML Table Parsing Gotchas#
There are some versioning issues surrounding the libraries that are used to
parse HTML tables in the top-level pandas io function read_html.
Issues with lxml
Benefits
lxml is very fast.
lxml requires Cython to install correctly.
Drawbacks
lxml does not make any guarantees about the results of its parse
unless it is given strictly valid markup.
In light of the above, we have chosen to allow you, the user, to use the
lxml backend, but this backend will use html5lib if lxml
fails to parse.
It is therefore highly recommended that you install both
BeautifulSoup4 and html5lib, so that you will still get a valid
result (provided everything else is valid) even if lxml fails.
Issues with BeautifulSoup4 using lxml as a backend
The above issues hold here as well since BeautifulSoup4 is essentially
just a wrapper around a parser backend.
Issues with BeautifulSoup4 using html5lib as a backend
Benefits
html5lib is far more lenient than lxml and consequently deals
with real-life markup in a much saner way rather than just, e.g.,
dropping an element without notifying you.
html5lib generates valid HTML5 markup from invalid markup
automatically. This is extremely important for parsing HTML tables,
since it guarantees a valid document. However, that does NOT mean that
it is “correct”, since the process of fixing markup does not have a
single definition.
html5lib is pure Python and requires no additional build steps beyond
its own installation.
Drawbacks
The biggest drawback to using html5lib is that it is slow as
molasses. However consider the fact that many tables on the web are not
big enough for the parsing algorithm runtime to matter. It is more
likely that the bottleneck will be in the process of reading the raw
text from the URL over the web, i.e., IO (input-output). For very large
tables, this might not be true.
LaTeX#
New in version 1.3.0.
Currently there are no methods to read from LaTeX, only output methods.
Writing to LaTeX files#
Note
DataFrame and Styler objects currently have a to_latex method. We recommend
using the Styler.to_latex() method
over DataFrame.to_latex() due to the former’s greater flexibility with
conditional styling, and the latter’s possible future deprecation.
Review the documentation for Styler.to_latex,
which gives examples of conditional styling and explains the operation of its keyword
arguments.
For simple application the following pattern is sufficient.
In [355]: df = pd.DataFrame([[1, 2], [3, 4]], index=["a", "b"], columns=["c", "d"])
In [356]: print(df.style.to_latex())
\begin{tabular}{lrr}
& c & d \\
a & 1 & 2 \\
b & 3 & 4 \\
\end{tabular}
To format values before output, chain the Styler.format
method.
In [357]: print(df.style.format("€ {}").to_latex())
\begin{tabular}{lrr}
& c & d \\
a & € 1 & € 2 \\
b & € 3 & € 4 \\
\end{tabular}
XML#
Reading XML#
New in version 1.3.0.
The top-level read_xml() function can accept an XML
string/file/URL and will parse nodes and attributes into a pandas DataFrame.
Note
Since there is no standard XML structure and design types can vary in
many ways, read_xml works best with flatter, shallower versions. If
an XML document is deeply nested, use the stylesheet feature to
transform the XML into a flatter version.
Let’s look at a few examples.
Read an XML string:
In [358]: xml = """<?xml version="1.0" encoding="UTF-8"?>
.....: <bookstore>
.....: <book category="cooking">
.....: <title lang="en">Everyday Italian</title>
.....: <author>Giada De Laurentiis</author>
.....: <year>2005</year>
.....: <price>30.00</price>
.....: </book>
.....: <book category="children">
.....: <title lang="en">Harry Potter</title>
.....: <author>J K. Rowling</author>
.....: <year>2005</year>
.....: <price>29.99</price>
.....: </book>
.....: <book category="web">
.....: <title lang="en">Learning XML</title>
.....: <author>Erik T. Ray</author>
.....: <year>2003</year>
.....: <price>39.95</price>
.....: </book>
.....: </bookstore>"""
.....:
In [359]: df = pd.read_xml(xml)
In [360]: df
Out[360]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
Read a URL with no options:
In [361]: df = pd.read_xml("https://www.w3schools.com/xml/books.xml")
In [362]: df
Out[362]:
category title author year price cover
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00 None
1 children Harry Potter J K. Rowling 2005 29.99 None
2 web XQuery Kick Start Vaidyanathan Nagarajan 2003 49.99 None
3 web Learning XML Erik T. Ray 2003 39.95 paperback
Read in the content of the “books.xml” file and pass it to read_xml
as a string:
In [363]: file_path = "books.xml"
In [364]: with open(file_path, "w") as f:
.....: f.write(xml)
.....:
In [365]: with open(file_path, "r") as f:
.....: df = pd.read_xml(f.read())
.....:
In [366]: df
Out[366]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
Read in the content of the “books.xml” as instance of StringIO or
BytesIO and pass it to read_xml:
In [367]: with open(file_path, "r") as f:
.....: sio = StringIO(f.read())
.....:
In [368]: df = pd.read_xml(sio)
In [369]: df
Out[369]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
In [370]: with open(file_path, "rb") as f:
.....: bio = BytesIO(f.read())
.....:
In [371]: df = pd.read_xml(bio)
In [372]: df
Out[372]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
You can even read XML from AWS S3 buckets such as the NIH NCBI PMC Article Datasets providing
Biomedical and Life Science Journals:
In [373]: df = pd.read_xml(
.....: "s3://pmc-oa-opendata/oa_comm/xml/all/PMC1236943.xml",
.....: xpath=".//journal-meta",
.....: )
.....:
In [374]: df
Out[374]:
journal-id journal-title issn publisher
0 Cardiovasc Ultrasound Cardiovascular Ultrasound 1476-7120 NaN
With lxml as the default parser, you access the full-featured XML library
that extends Python’s ElementTree API. One powerful tool is the ability to query
nodes selectively or conditionally with more expressive XPath:
In [375]: df = pd.read_xml(file_path, xpath="//book[year=2005]")
In [376]: df
Out[376]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
Specify only elements or only attributes to parse:
In [377]: df = pd.read_xml(file_path, elems_only=True)
In [378]: df
Out[378]:
title author year price
0 Everyday Italian Giada De Laurentiis 2005 30.00
1 Harry Potter J K. Rowling 2005 29.99
2 Learning XML Erik T. Ray 2003 39.95
In [379]: df = pd.read_xml(file_path, attrs_only=True)
In [380]: df
Out[380]:
category
0 cooking
1 children
2 web
XML documents can have namespaces with prefixes and default namespaces without
prefixes, both of which are denoted with the special attribute xmlns. In order
to parse by node under a namespace context, xpath must reference a prefix.
For example, the XML below contains a namespace with the prefix doc and the URI
https://example.com. In order to parse doc:row nodes,
namespaces must be used.
In [381]: xml = """<?xml version='1.0' encoding='utf-8'?>
.....: <doc:data xmlns:doc="https://example.com">
.....: <doc:row>
.....: <doc:shape>square</doc:shape>
.....: <doc:degrees>360</doc:degrees>
.....: <doc:sides>4.0</doc:sides>
.....: </doc:row>
.....: <doc:row>
.....: <doc:shape>circle</doc:shape>
.....: <doc:degrees>360</doc:degrees>
.....: <doc:sides/>
.....: </doc:row>
.....: <doc:row>
.....: <doc:shape>triangle</doc:shape>
.....: <doc:degrees>180</doc:degrees>
.....: <doc:sides>3.0</doc:sides>
.....: </doc:row>
.....: </doc:data>"""
.....:
In [382]: df = pd.read_xml(xml,
.....: xpath="//doc:row",
.....: namespaces={"doc": "https://example.com"})
.....:
In [383]: df
Out[383]:
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0
Similarly, an XML document can have a default namespace without a prefix. Failing
to assign a temporary prefix will return no nodes and raise a ValueError,
but assigning any temporary name to the correct URI allows parsing by nodes.
In [384]: xml = """<?xml version='1.0' encoding='utf-8'?>
.....: <data xmlns="https://example.com">
.....: <row>
.....: <shape>square</shape>
.....: <degrees>360</degrees>
.....: <sides>4.0</sides>
.....: </row>
.....: <row>
.....: <shape>circle</shape>
.....: <degrees>360</degrees>
.....: <sides/>
.....: </row>
.....: <row>
.....: <shape>triangle</shape>
.....: <degrees>180</degrees>
.....: <sides>3.0</sides>
.....: </row>
.....: </data>"""
.....:
In [385]: df = pd.read_xml(xml,
.....: xpath="//pandas:row",
.....: namespaces={"pandas": "https://example.com"})
.....:
In [386]: df
Out[386]:
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0
However, if XPath does not reference node names such as default, /*, then
namespaces is not required.
With lxml as the parser, you can flatten nested XML documents with an XSLT
script, which can also be a string/file/URL type. As background, XSLT is
a special-purpose language written in a special XML file that can transform
original XML documents into other XML, HTML, or even text (CSV, JSON, etc.)
using an XSLT processor.
For example, consider this somewhat nested structure of Chicago “L” rides
where station and rides elements encapsulate data in their own sections.
With the XSLT below, lxml can transform the original nested document into a flatter
output (as shown below for demonstration) that parses more easily into a DataFrame:
In [387]: xml = """<?xml version='1.0' encoding='utf-8'?>
.....: <response>
.....: <row>
.....: <station id="40850" name="Library"/>
.....: <month>2020-09-01T00:00:00</month>
.....: <rides>
.....: <avg_weekday_rides>864.2</avg_weekday_rides>
.....: <avg_saturday_rides>534</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>417.2</avg_sunday_holiday_rides>
.....: </rides>
.....: </row>
.....: <row>
.....: <station id="41700" name="Washington/Wabash"/>
.....: <month>2020-09-01T00:00:00</month>
.....: <rides>
.....: <avg_weekday_rides>2707.4</avg_weekday_rides>
.....: <avg_saturday_rides>1909.8</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1438.6</avg_sunday_holiday_rides>
.....: </rides>
.....: </row>
.....: <row>
.....: <station id="40380" name="Clark/Lake"/>
.....: <month>2020-09-01T00:00:00</month>
.....: <rides>
.....: <avg_weekday_rides>2949.6</avg_weekday_rides>
.....: <avg_saturday_rides>1657</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1453.8</avg_sunday_holiday_rides>
.....: </rides>
.....: </row>
.....: </response>"""
.....:
In [388]: xsl = """<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
.....: <xsl:output method="xml" omit-xml-declaration="no" indent="yes"/>
.....: <xsl:strip-space elements="*"/>
.....: <xsl:template match="/response">
.....: <xsl:copy>
.....: <xsl:apply-templates select="row"/>
.....: </xsl:copy>
.....: </xsl:template>
.....: <xsl:template match="row">
.....: <xsl:copy>
.....: <station_id><xsl:value-of select="station/@id"/></station_id>
.....: <station_name><xsl:value-of select="station/@name"/></station_name>
.....: <xsl:copy-of select="month|rides/*"/>
.....: </xsl:copy>
.....: </xsl:template>
.....: </xsl:stylesheet>"""
.....:
In [389]: output = """<?xml version='1.0' encoding='utf-8'?>
.....: <response>
.....: <row>
.....: <station_id>40850</station_id>
.....: <station_name>Library</station_name>
.....: <month>2020-09-01T00:00:00</month>
.....: <avg_weekday_rides>864.2</avg_weekday_rides>
.....: <avg_saturday_rides>534</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>417.2</avg_sunday_holiday_rides>
.....: </row>
.....: <row>
.....: <station_id>41700</station_id>
.....: <station_name>Washington/Wabash</station_name>
.....: <month>2020-09-01T00:00:00</month>
.....: <avg_weekday_rides>2707.4</avg_weekday_rides>
.....: <avg_saturday_rides>1909.8</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1438.6</avg_sunday_holiday_rides>
.....: </row>
.....: <row>
.....: <station_id>40380</station_id>
.....: <station_name>Clark/Lake</station_name>
.....: <month>2020-09-01T00:00:00</month>
.....: <avg_weekday_rides>2949.6</avg_weekday_rides>
.....: <avg_saturday_rides>1657</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1453.8</avg_sunday_holiday_rides>
.....: </row>
.....: </response>"""
.....:
In [390]: df = pd.read_xml(xml, stylesheet=xsl)
In [391]: df
Out[391]:
station_id station_name ... avg_saturday_rides avg_sunday_holiday_rides
0 40850 Library ... 534.0 417.2
1 41700 Washington/Wabash ... 1909.8 1438.6
2 40380 Clark/Lake ... 1657.0 1453.8
[3 rows x 6 columns]
For very large XML files that can range from hundreds of megabytes to gigabytes, pandas.read_xml()
supports parsing such sizeable files using lxml’s iterparse and etree’s iterparse,
which are memory-efficient methods to iterate through an XML tree and extract specific elements and attributes
without holding the entire tree in memory.
New in version 1.5.0.
To use this feature, you must pass a physical XML file path into read_xml and use the iterparse argument.
Files should not be compressed or point to online sources but should be stored on local disk. Also, iterparse should be
a dictionary where the key is the repeating node in the document (which becomes the rows) and the value is a list of
any element or attribute that is a descendant (i.e., child, grandchild) of the repeating node. Since XPath is not
used in this method, descendants do not need to share the same relationship with one another. Below is an example
of reading in Wikipedia’s very large (12 GB+) latest article data dump.
In [1]: df = pd.read_xml(
... "/path/to/downloaded/enwikisource-latest-pages-articles.xml",
... iterparse = {"page": ["title", "ns", "id"]}
... )
... df
Out[2]:
title ns id
0 Gettysburg Address 0 21450
1 Main Page 0 42950
2 Declaration by United Nations 0 8435
3 Constitution of the United States of America 0 8435
4 Declaration of Independence (Israel) 0 17858
... ... ... ...
3578760 Page:Black cat 1897 07 v2 n10.pdf/17 104 219649
3578761 Page:Black cat 1897 07 v2 n10.pdf/43 104 219649
3578762 Page:Black cat 1897 07 v2 n10.pdf/44 104 219649
3578763 The History of Tom Jones, a Foundling/Book IX 0 12084291
3578764 Page:Shakespeare of Stratford (1926) Yale.djvu/91 104 21450
[3578765 rows x 3 columns]
Writing XML#
New in version 1.3.0.
DataFrame objects have an instance method to_xml which renders the
contents of the DataFrame as an XML document.
Note
This method does not support special properties of XML including DTD,
CData, XSD schemas, processing instructions, comments, and others.
Only namespaces at the root level are supported. However, stylesheet
allows design changes after initial output.
Let’s look at a few examples.
Write an XML without options:
In [392]: geom_df = pd.DataFrame(
.....: {
.....: "shape": ["square", "circle", "triangle"],
.....: "degrees": [360, 360, 180],
.....: "sides": [4, np.nan, 3],
.....: }
.....: )
.....:
In [393]: print(geom_df.to_xml())
<?xml version='1.0' encoding='utf-8'?>
<data>
<row>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</row>
<row>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
Write an XML with new root and row name:
In [394]: print(geom_df.to_xml(root_name="geometry", row_name="objects"))
<?xml version='1.0' encoding='utf-8'?>
<geometry>
<objects>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</objects>
<objects>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</objects>
<objects>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</objects>
</geometry>
Write an attribute-centric XML:
In [395]: print(geom_df.to_xml(attr_cols=geom_df.columns.tolist()))
<?xml version='1.0' encoding='utf-8'?>
<data>
<row index="0" shape="square" degrees="360" sides="4.0"/>
<row index="1" shape="circle" degrees="360"/>
<row index="2" shape="triangle" degrees="180" sides="3.0"/>
</data>
Write a mix of elements and attributes:
In [396]: print(
.....: geom_df.to_xml(
.....: index=False,
.....: attr_cols=['shape'],
.....: elem_cols=['degrees', 'sides'])
.....: )
.....:
<?xml version='1.0' encoding='utf-8'?>
<data>
<row shape="square">
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row shape="circle">
<degrees>360</degrees>
<sides/>
</row>
<row shape="triangle">
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
Any DataFrames with hierarchical columns will be flattened for XML element names
with levels delimited by underscores:
In [397]: ext_geom_df = pd.DataFrame(
.....: {
.....: "type": ["polygon", "other", "polygon"],
.....: "shape": ["square", "circle", "triangle"],
.....: "degrees": [360, 360, 180],
.....: "sides": [4, np.nan, 3],
.....: }
.....: )
.....:
In [398]: pvt_df = ext_geom_df.pivot_table(index='shape',
.....: columns='type',
.....: values=['degrees', 'sides'],
.....: aggfunc='sum')
.....:
In [399]: pvt_df
Out[399]:
degrees sides
type other polygon other polygon
shape
circle 360.0 NaN 0.0 NaN
square NaN 360.0 NaN 4.0
triangle NaN 180.0 NaN 3.0
In [400]: print(pvt_df.to_xml())
<?xml version='1.0' encoding='utf-8'?>
<data>
<row>
<shape>circle</shape>
<degrees_other>360.0</degrees_other>
<degrees_polygon/>
<sides_other>0.0</sides_other>
<sides_polygon/>
</row>
<row>
<shape>square</shape>
<degrees_other/>
<degrees_polygon>360.0</degrees_polygon>
<sides_other/>
<sides_polygon>4.0</sides_polygon>
</row>
<row>
<shape>triangle</shape>
<degrees_other/>
<degrees_polygon>180.0</degrees_polygon>
<sides_other/>
<sides_polygon>3.0</sides_polygon>
</row>
</data>
Write an XML with default namespace:
In [401]: print(geom_df.to_xml(namespaces={"": "https://example.com"}))
<?xml version='1.0' encoding='utf-8'?>
<data xmlns="https://example.com">
<row>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</row>
<row>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
Write an XML with namespace prefix:
In [402]: print(
.....: geom_df.to_xml(namespaces={"doc": "https://example.com"},
.....: prefix="doc")
.....: )
.....:
<?xml version='1.0' encoding='utf-8'?>
<doc:data xmlns:doc="https://example.com">
<doc:row>
<doc:index>0</doc:index>
<doc:shape>square</doc:shape>
<doc:degrees>360</doc:degrees>
<doc:sides>4.0</doc:sides>
</doc:row>
<doc:row>
<doc:index>1</doc:index>
<doc:shape>circle</doc:shape>
<doc:degrees>360</doc:degrees>
<doc:sides/>
</doc:row>
<doc:row>
<doc:index>2</doc:index>
<doc:shape>triangle</doc:shape>
<doc:degrees>180</doc:degrees>
<doc:sides>3.0</doc:sides>
</doc:row>
</doc:data>
Write an XML without declaration or pretty print:
In [403]: print(
.....: geom_df.to_xml(xml_declaration=False,
.....: pretty_print=False)
.....: )
.....:
<data><row><index>0</index><shape>square</shape><degrees>360</degrees><sides>4.0</sides></row><row><index>1</index><shape>circle</shape><degrees>360</degrees><sides/></row><row><index>2</index><shape>triangle</shape><degrees>180</degrees><sides>3.0</sides></row></data>
Write an XML and transform with stylesheet:
In [404]: xsl = """<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
.....: <xsl:output method="xml" omit-xml-declaration="no" indent="yes"/>
.....: <xsl:strip-space elements="*"/>
.....: <xsl:template match="/data">
.....: <geometry>
.....: <xsl:apply-templates select="row"/>
.....: </geometry>
.....: </xsl:template>
.....: <xsl:template match="row">
.....: <object index="{index}">
.....: <xsl:if test="shape!='circle'">
.....: <xsl:attribute name="type">polygon</xsl:attribute>
.....: </xsl:if>
.....: <xsl:copy-of select="shape"/>
.....: <property>
.....: <xsl:copy-of select="degrees|sides"/>
.....: </property>
.....: </object>
.....: </xsl:template>
.....: </xsl:stylesheet>"""
.....:
In [405]: print(geom_df.to_xml(stylesheet=xsl))
<?xml version="1.0"?>
<geometry>
<object index="0" type="polygon">
<shape>square</shape>
<property>
<degrees>360</degrees>
<sides>4.0</sides>
</property>
</object>
<object index="1">
<shape>circle</shape>
<property>
<degrees>360</degrees>
<sides/>
</property>
</object>
<object index="2" type="polygon">
<shape>triangle</shape>
<property>
<degrees>180</degrees>
<sides>3.0</sides>
</property>
</object>
</geometry>
XML Final Notes#
All XML documents adhere to W3C specifications. Both the etree and lxml
parsers will fail to parse any markup document that is not well-formed or
does not follow XML syntax rules. Be aware that HTML is not an XML document
unless it follows the XHTML specification. However, other popular markup types
including KML, XAML, RSS, MusicML, and MathML are compliant XML schemas.
For the above reason, if your application builds XML prior to pandas operations,
use appropriate DOM libraries like etree and lxml to build the necessary
document rather than string concatenation or regex adjustments. Always remember
that XML is a special text file with markup rules.
With very large XML files (several hundred MBs to GBs), XPath and XSLT
can become memory-intensive operations. Be sure to have enough available
RAM for reading and writing large XML files (roughly about 5 times the
size of the text).
Because XSLT is a programming language, use it with caution: such scripts
can pose a security risk in your environment and can run large or infinite
recursive operations. Always test scripts on small fragments before a full run.
The etree parser supports all functionality of both read_xml and
to_xml except for complex XPath and any XSLT. Though limited in features,
etree is still a reliable and capable parser and tree builder. Its
performance may trail lxml to a certain degree for larger files, but the
difference is relatively unnoticeable on small to medium size files.
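For example, the standard library parser can be selected explicitly (a minimal sketch using the geom_df frame from above; both read_xml and to_xml accept the parser keyword):
# etree avoids the lxml dependency at the cost of complex XPath and XSLT support
print(geom_df.to_xml(parser="etree"))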
Excel files#
The read_excel() method can read Excel 2007+ (.xlsx) files
using the openpyxl Python module. Excel 2003 (.xls) files
can be read using xlrd. Binary Excel (.xlsb)
files can be read using pyxlsb.
The to_excel() instance method is used for
saving a DataFrame to Excel. Generally the semantics are
similar to working with csv data.
See the cookbook for some advanced strategies.
Warning
The xlwt package for writing old-style .xls
excel files is no longer maintained.
The xlrd package is now only for reading
old-style .xls files.
Before pandas 1.3.0, the default argument engine=None to read_excel()
would result in using the xlrd engine in many cases, including new
Excel 2007+ (.xlsx) files. pandas will now default to using the
openpyxl engine.
It is strongly encouraged to install openpyxl to read Excel 2007+
(.xlsx) files.
Please do not report issues when using xlrd to read .xlsx files.
This is no longer supported; switch to using openpyxl instead.
Attempting to use the xlwt engine will raise a FutureWarning
unless the option io.excel.xls.writer is set to "xlwt".
While this option is now deprecated and will also raise a FutureWarning,
it can be globally set and the warning suppressed. Users are recommended to
write .xlsx files using the openpyxl engine instead.
Reading Excel files#
In the most basic use-case, read_excel takes a path to an Excel
file, and the sheet_name indicating which sheet to parse.
# Returns a DataFrame
pd.read_excel("path_to_file.xls", sheet_name="Sheet1")
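The reader engine can also be selected explicitly (a minimal sketch; the file path is a placeholder):
# Request openpyxl rather than relying on the default engine resolution
pd.read_excel("path_to_file.xlsx", sheet_name="Sheet1", engine="openpyxl")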
ExcelFile class#
To facilitate working with multiple sheets from the same file, the ExcelFile
class can be used to wrap the file and can be passed into read_excel.
There will be a performance benefit when reading multiple sheets, as the file is
read into memory only once.
xlsx = pd.ExcelFile("path_to_file.xls")
df = pd.read_excel(xlsx, "Sheet1")
The ExcelFile class can also be used as a context manager.
with pd.ExcelFile("path_to_file.xls") as xls:
df1 = pd.read_excel(xls, "Sheet1")
df2 = pd.read_excel(xls, "Sheet2")
The sheet_names property will generate
a list of the sheet names in the file.
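For example (a minimal sketch; the file path is a placeholder):
with pd.ExcelFile("path_to_file.xls") as xls:
    # e.g. ['Sheet1', 'Sheet2']
    print(xls.sheet_names)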
The primary use-case for an ExcelFile is parsing multiple sheets with
different parameters:
data = {}
# For when Sheet1's format differs from Sheet2
with pd.ExcelFile("path_to_file.xls") as xls:
data["Sheet1"] = pd.read_excel(xls, "Sheet1", index_col=None, na_values=["NA"])
data["Sheet2"] = pd.read_excel(xls, "Sheet2", index_col=1)
Note that if the same parsing parameters are used for all sheets, a list
of sheet names can simply be passed to read_excel with no loss in performance.
# using the ExcelFile class
data = {}
with pd.ExcelFile("path_to_file.xls") as xls:
data["Sheet1"] = pd.read_excel(xls, "Sheet1", index_col=None, na_values=["NA"])
data["Sheet2"] = pd.read_excel(xls, "Sheet2", index_col=None, na_values=["NA"])
# equivalent using the read_excel function
data = pd.read_excel(
"path_to_file.xls", ["Sheet1", "Sheet2"], index_col=None, na_values=["NA"]
)
ExcelFile can also be called with an xlrd.book.Book object
as a parameter. This allows the user to control how the Excel file is read.
For example, sheets can be loaded on demand by calling xlrd.open_workbook()
with on_demand=True.
import xlrd
xlrd_book = xlrd.open_workbook("path_to_file.xls", on_demand=True)
with pd.ExcelFile(xlrd_book) as xls:
df1 = pd.read_excel(xls, "Sheet1")
df2 = pd.read_excel(xls, "Sheet2")
Specifying sheets#
Note
The second argument is sheet_name, not to be confused with ExcelFile.sheet_names.
Note
An ExcelFile’s attribute sheet_names provides access to a list of sheets.
The argument sheet_name allows specifying the sheet or sheets to read.
The default value for sheet_name is 0, indicating to read the first sheet.
Pass a string to refer to the name of a particular sheet in the workbook.
Pass an integer to refer to the index of a sheet. Indices follow Python
convention, beginning at 0.
Pass a list of either strings or integers, to return a dictionary of specified sheets.
Pass None to return a dictionary of all available sheets.
# Returns a DataFrame
pd.read_excel("path_to_file.xls", "Sheet1", index_col=None, na_values=["NA"])
Using the sheet index:
# Returns a DataFrame
pd.read_excel("path_to_file.xls", 0, index_col=None, na_values=["NA"])
Using all default values:
# Returns a DataFrame
pd.read_excel("path_to_file.xls")
Using None to get all sheets:
# Returns a dictionary of DataFrames
pd.read_excel("path_to_file.xls", sheet_name=None)
Using a list to get multiple sheets:
# Returns the 1st and 4th sheet, as a dictionary of DataFrames.
pd.read_excel("path_to_file.xls", sheet_name=["Sheet1", 3])
read_excel can read more than one sheet, by setting sheet_name to either
a list of sheet names, a list of sheet positions, or None to read all sheets.
Sheets can be specified by sheet index or sheet name, using an integer or string,
respectively.
Reading a MultiIndex#
read_excel can read a MultiIndex index, by passing a list of columns to index_col
and a MultiIndex column by passing a list of rows to header. If either the index
or columns have serialized level names those will be read in as well by specifying
the rows/columns that make up the levels.
For example, to read in a MultiIndex index without names:
In [406]: df = pd.DataFrame(
.....: {"a": [1, 2, 3, 4], "b": [5, 6, 7, 8]},
.....: index=pd.MultiIndex.from_product([["a", "b"], ["c", "d"]]),
.....: )
.....:
In [407]: df.to_excel("path_to_file.xlsx")
In [408]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1])
In [409]: df
Out[409]:
a b
a c 1 5
d 2 6
b c 3 7
d 4 8
If the index has level names, they will be parsed as well, using the same
parameters.
In [410]: df.index = df.index.set_names(["lvl1", "lvl2"])
In [411]: df.to_excel("path_to_file.xlsx")
In [412]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1])
In [413]: df
Out[413]:
a b
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
If the source file has both MultiIndex index and columns, lists specifying each
should be passed to index_col and header:
In [414]: df.columns = pd.MultiIndex.from_product([["a"], ["b", "d"]], names=["c1", "c2"])
In [415]: df.to_excel("path_to_file.xlsx")
In [416]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1], header=[0, 1])
In [417]: df
Out[417]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
Missing values in columns specified in index_col will be forward filled to
allow roundtripping with to_excel for merged_cells=True. To avoid forward
filling the missing values, use set_index after reading the data instead of
index_col.
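A minimal sketch of that alternative, assuming a file with ordinary single-level columns where the index levels were written out as columns named lvl1 and lvl2:
# Read the level columns as regular columns, then build the MultiIndex explicitly;
# missing values in those columns are preserved rather than forward filled.
df = pd.read_excel("path_to_file.xlsx")
df = df.set_index(["lvl1", "lvl2"])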
Parsing specific columns#
It is often the case that users will insert columns to do temporary computations
in Excel and you may not want to read in those columns. read_excel takes
a usecols keyword to allow you to specify a subset of columns to parse.
Changed in version 1.0.0.
Passing in an integer for usecols will no longer work. Please pass in a list
of ints from 0 to usecols inclusive instead.
You can specify a comma-delimited set of Excel columns and ranges as a string:
pd.read_excel("path_to_file.xls", "Sheet1", usecols="A,C:E")
If usecols is a list of integers, then it is assumed to be the file column
indices to be parsed.
pd.read_excel("path_to_file.xls", "Sheet1", usecols=[0, 2, 3])
Element order is ignored, so usecols=[0, 1] is the same as [1, 0].
If usecols is a list of strings, it is assumed that each string corresponds
to a column name provided either by the user in names or inferred from the
document header row(s). Those strings define which columns will be parsed:
pd.read_excel("path_to_file.xls", "Sheet1", usecols=["foo", "bar"])
Element order is ignored, so usecols=['baz', 'joe'] is the same as ['joe', 'baz'].
If usecols is callable, the callable function will be evaluated against
the column names, returning names where the callable function evaluates to True.
pd.read_excel("path_to_file.xls", "Sheet1", usecols=lambda x: x.isalpha())
Parsing dates#
Datetime-like values are normally automatically converted to the appropriate
dtype when reading the excel file. But if you have a column of strings that
look like dates (but are not actually formatted as dates in excel), you can
use the parse_dates keyword to parse those strings to datetimes:
pd.read_excel("path_to_file.xls", "Sheet1", parse_dates=["date_strings"])
Cell converters#
It is possible to transform the contents of Excel cells via the converters
option. For instance, to convert a column to boolean:
pd.read_excel("path_to_file.xls", "Sheet1", converters={"MyBools": bool})
This option handles missing values and treats exceptions in the converters
as missing data. Transformations are applied cell by cell rather than to the
column as a whole, so the array dtype is not guaranteed. For instance, a
column of integers with missing values cannot be transformed to an array
with integer dtype, because NaN is strictly a float. You can manually mask
missing data to recover integer dtype:
def cfun(x):
return int(x) if x else -1
pd.read_excel("path_to_file.xls", "Sheet1", converters={"MyInts": cfun})
Dtype specifications#
As an alternative to converters, the type for an entire column can
be specified using the dtype keyword, which takes a dictionary
mapping column names to types. To interpret data with
no type inference, use the type str or object.
pd.read_excel("path_to_file.xls", dtype={"MyInts": "int64", "MyText": str})
Writing Excel files#
Writing Excel files to disk#
To write a DataFrame object to a sheet of an Excel file, you can use the
to_excel instance method. The arguments are largely the same as to_csv
described above, the first argument being the name of the excel file, and the
optional second argument the name of the sheet to which the DataFrame should be
written. For example:
df.to_excel("path_to_file.xlsx", sheet_name="Sheet1")
Files with a .xls extension will be written using xlwt and those with a
.xlsx extension will be written using xlsxwriter (if available) or
openpyxl.
The DataFrame will be written in a way that tries to mimic the REPL output.
The index_label will be placed in the second
row instead of the first. You can place it in the first row by setting the
merge_cells option in to_excel() to False:
df.to_excel("path_to_file.xlsx", index_label="label", merge_cells=False)
In order to write separate DataFrames to separate sheets in a single Excel file,
one can pass an ExcelWriter.
with pd.ExcelWriter("path_to_file.xlsx") as writer:
df1.to_excel(writer, sheet_name="Sheet1")
df2.to_excel(writer, sheet_name="Sheet2")
Writing Excel files to memory#
pandas supports writing Excel files to buffer-like objects such as StringIO or
BytesIO using ExcelWriter.
from io import BytesIO
bio = BytesIO()
# By setting the 'engine' in the ExcelWriter constructor.
writer = pd.ExcelWriter(bio, engine="xlsxwriter")
df.to_excel(writer, sheet_name="Sheet1")
# Save the workbook
writer.save()
# Seek to the beginning and read to copy the workbook to a variable in memory
bio.seek(0)
workbook = bio.read()
Note
engine is optional but recommended. Setting the engine determines
the version of workbook produced. Setting engine='xlwt' will produce an
Excel 2003-format workbook (xls). Using either 'openpyxl' or
'xlsxwriter' will produce an Excel 2007-format workbook (xlsx). If
omitted, an Excel 2007-formatted workbook is produced.
Excel writer engines#
Deprecated since version 1.2.0: As the xlwt package is no longer
maintained, the xlwt engine will be removed from a future version
of pandas. This is the only engine in pandas that supports writing to
.xls files.
pandas chooses an Excel writer via two methods:
the engine keyword argument
the filename extension (via the default specified in config options)
By default, pandas uses the XlsxWriter for .xlsx, openpyxl
for .xlsm, and xlwt for .xls files. If you have multiple
engines installed, you can set the default engine through setting the
config options io.excel.xlsx.writer and
io.excel.xls.writer. pandas will fall back on openpyxl for .xlsx
files if Xlsxwriter is not available.
To specify which writer you want to use, you can pass an engine keyword
argument to to_excel and to ExcelWriter. The built-in engines are:
openpyxl: version 2.4 or higher is required
xlsxwriter
xlwt
# By setting the 'engine' in the DataFrame 'to_excel()' methods.
df.to_excel("path_to_file.xlsx", sheet_name="Sheet1", engine="xlsxwriter")
# By setting the 'engine' in the ExcelWriter constructor.
writer = pd.ExcelWriter("path_to_file.xlsx", engine="xlsxwriter")
# Or via pandas configuration.
from pandas import options # noqa: E402
options.io.excel.xlsx.writer = "xlsxwriter"
df.to_excel("path_to_file.xlsx", sheet_name="Sheet1")
Style and formatting#
The look and feel of Excel worksheets created from pandas can be modified using the following parameters on the DataFrame’s to_excel method.
float_format : Format string for floating point numbers (default None).
freeze_panes : A tuple of two integers representing the bottommost row and rightmost column to freeze. Each of these parameters is one-based, so (1, 1) will freeze the first row and first column (default None).
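For example (a minimal sketch):
# Two decimal places for floats; freeze the header row and the first column
df.to_excel(
    "path_to_file.xlsx",
    sheet_name="Sheet1",
    float_format="%.2f",
    freeze_panes=(1, 1),
)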
Using the Xlsxwriter engine provides many options for controlling the
format of an Excel worksheet created with the to_excel method. Excellent examples can be found in the
Xlsxwriter documentation here: https://xlsxwriter.readthedocs.io/working_with_pandas.html
OpenDocument Spreadsheets#
New in version 0.25.
The read_excel() method can also read OpenDocument spreadsheets
using the odfpy module. The semantics and features for reading
OpenDocument spreadsheets match what can be done for Excel files using
engine='odf'.
# Returns a DataFrame
pd.read_excel("path_to_file.ods", engine="odf")
Note
Currently pandas only supports reading OpenDocument spreadsheets. Writing
is not implemented.
Binary Excel (.xlsb) files#
New in version 1.0.0.
The read_excel() method can also read binary Excel files
using the pyxlsb module. The semantics and features for reading
binary Excel files mostly match what can be done for Excel files using
engine='pyxlsb'. pyxlsb does not recognize datetime types
in files and will return floats instead.
# Returns a DataFrame
pd.read_excel("path_to_file.xlsb", engine="pyxlsb")
Note
Currently pandas only supports reading binary Excel files. Writing
is not implemented.
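As a minimal sketch of working around the datetime limitation, assuming a hypothetical column date_col that holds Excel serial date numbers:
df = pd.read_excel("path_to_file.xlsb", engine="pyxlsb")
# Excel serial dates count days from the 1900 date system origin
df["date_col"] = pd.to_datetime(df["date_col"], unit="D", origin="1899-12-30")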
Clipboard#
A handy way to grab data is to use the read_clipboard() method,
which takes the contents of the clipboard buffer and passes them to the
read_csv method. For instance, you can copy the following text to the
clipboard (CTRL-C on many operating systems):
A B C
x 1 4 p
y 2 5 q
z 3 6 r
And then import the data directly to a DataFrame by calling:
>>> clipdf = pd.read_clipboard()
>>> clipdf
A B C
x 1 4 p
y 2 5 q
z 3 6 r
The to_clipboard method can be used to write the contents of a DataFrame to
the clipboard, after which you can paste the clipboard contents into other
applications (CTRL-V on many operating systems). Here we illustrate writing a
DataFrame into the clipboard and reading it back.
>>> df = pd.DataFrame(
... {"A": [1, 2, 3], "B": [4, 5, 6], "C": ["p", "q", "r"]}, index=["x", "y", "z"]
... )
>>> df
A B C
x 1 4 p
y 2 5 q
z 3 6 r
>>> df.to_clipboard()
>>> pd.read_clipboard()
A B C
x 1 4 p
y 2 5 q
z 3 6 r
We can see that we got the same content back, which we had earlier written to the clipboard.
Note
You may need to install xclip or xsel (with PyQt5, PyQt4 or qtpy) on Linux to use these methods.
Pickling#
All pandas objects are equipped with to_pickle methods which use Python’s
pickle module to save data structures to disk using the pickle format.
In [418]: df
Out[418]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
In [419]: df.to_pickle("foo.pkl")
The read_pickle function in the pandas namespace can be used to load
any pickled pandas object (or any other pickled object) from file:
In [420]: pd.read_pickle("foo.pkl")
Out[420]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
Warning
Loading pickled data received from untrusted sources can be unsafe.
See: https://docs.python.org/3/library/pickle.html
Warning
read_pickle() is only guaranteed backwards compatible back to pandas version 0.20.3
Compressed pickle files#
read_pickle(), DataFrame.to_pickle() and Series.to_pickle() can read
and write compressed pickle files. The compression types of gzip, bz2, xz, zstd are supported for reading and writing.
The zip file format only supports reading and must contain only one data file
to be read.
The compression type can be an explicit parameter or be inferred from the file extension.
If ‘infer’, then use gzip, bz2, zip, xz, zstd if filename ends in '.gz', '.bz2', '.zip',
'.xz', or '.zst', respectively.
The compression parameter can also be a dict in order to pass options to the
compression protocol. It must have a 'method' key set to the name
of the compression protocol, which must be one of
{'zip', 'gzip', 'bz2', 'xz', 'zstd'}. All other key-value pairs are passed to
the underlying compression library.
In [421]: df = pd.DataFrame(
.....: {
.....: "A": np.random.randn(1000),
.....: "B": "foo",
.....: "C": pd.date_range("20130101", periods=1000, freq="s"),
.....: }
.....: )
.....:
In [422]: df
Out[422]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
Using an explicit compression type:
In [423]: df.to_pickle("data.pkl.compress", compression="gzip")
In [424]: rt = pd.read_pickle("data.pkl.compress", compression="gzip")
In [425]: rt
Out[425]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
Inferring compression type from the extension:
In [426]: df.to_pickle("data.pkl.xz", compression="infer")
In [427]: rt = pd.read_pickle("data.pkl.xz", compression="infer")
In [428]: rt
Out[428]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
The default is to ‘infer’:
In [429]: df.to_pickle("data.pkl.gz")
In [430]: rt = pd.read_pickle("data.pkl.gz")
In [431]: rt
Out[431]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
In [432]: df["A"].to_pickle("s1.pkl.bz2")
In [433]: rt = pd.read_pickle("s1.pkl.bz2")
In [434]: rt
Out[434]:
0 -0.828876
1 -0.110383
2 2.357598
3 -1.620073
4 0.440903
...
995 -1.177365
996 1.236988
997 0.743946
998 -0.533097
999 -0.140850
Name: A, Length: 1000, dtype: float64
Passing options to the compression protocol in order to speed up compression:
In [435]: df.to_pickle("data.pkl.gz", compression={"method": "gzip", "compresslevel": 1})
msgpack#
pandas support for msgpack has been removed in version 1.0.0. It is
recommended to use pickle instead.
Alternatively, you can also use the Arrow IPC serialization format for on-the-wire
transmission of pandas objects. For documentation on pyarrow, see
here.
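A minimal sketch of round-tripping a DataFrame through the Arrow IPC file format, assuming pyarrow is installed (the file name is a placeholder):
import pyarrow as pa
import pyarrow.ipc

table = pa.Table.from_pandas(df)
# Write an Arrow IPC (random access) file
with pa.OSFile("data.arrow", "wb") as sink:
    with pa.ipc.new_file(sink, table.schema) as writer:
        writer.write_table(table)
# Read it back into a DataFrame
with pa.memory_map("data.arrow", "r") as source:
    df_roundtrip = pa.ipc.open_file(source).read_all().to_pandas()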
HDF5 (PyTables)#
HDFStore is a dict-like object which reads and writes pandas using
the high performance HDF5 format using the excellent PyTables library. See the cookbook
for some advanced strategies.
Warning
pandas uses PyTables for reading and writing HDF5 files, which allows
serializing object-dtype data with pickle. Loading pickled data received from
untrusted sources can be unsafe.
See: https://docs.python.org/3/library/pickle.html for more.
In [436]: store = pd.HDFStore("store.h5")
In [437]: print(store)
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
Objects can be written to the file just like adding key-value pairs to a
dict:
In [438]: index = pd.date_range("1/1/2000", periods=8)
In [439]: s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])
In [440]: df = pd.DataFrame(np.random.randn(8, 3), index=index, columns=["A", "B", "C"])
# store.put('s', s) is an equivalent method
In [441]: store["s"] = s
In [442]: store["df"] = df
In [443]: store
Out[443]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In a current or later Python session, you can retrieve stored objects:
# store.get('df') is an equivalent method
In [444]: store["df"]
Out[444]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
# dotted (attribute) access provides get as well
In [445]: store.df
Out[445]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Deletion of the object specified by the key:
# store.remove('df') is an equivalent method
In [446]: del store["df"]
In [447]: store
Out[447]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
Closing a Store and using a context manager:
In [448]: store.close()
In [449]: store
Out[449]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In [450]: store.is_open
Out[450]: False
# Working with, and automatically closing the store using a context manager
In [451]: with pd.HDFStore("store.h5") as store:
.....: store.keys()
.....:
Read/write API#
HDFStore supports a top-level API using read_hdf for reading and to_hdf for writing,
similar to how read_csv and to_csv work.
In [452]: df_tl = pd.DataFrame({"A": list(range(5)), "B": list(range(5))})
In [453]: df_tl.to_hdf("store_tl.h5", "table", append=True)
In [454]: pd.read_hdf("store_tl.h5", "table", where=["index>2"])
Out[454]:
A B
3 3 3
4 4 4
HDFStore will by default not drop rows that are all missing. This behavior can be changed by setting dropna=True.
In [455]: df_with_missing = pd.DataFrame(
.....: {
.....: "col1": [0, np.nan, 2],
.....: "col2": [1, np.nan, np.nan],
.....: }
.....: )
.....:
In [456]: df_with_missing
Out[456]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN
In [457]: df_with_missing.to_hdf("file.h5", "df_with_missing", format="table", mode="w")
In [458]: pd.read_hdf("file.h5", "df_with_missing")
Out[458]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN
In [459]: df_with_missing.to_hdf(
.....: "file.h5", "df_with_missing", format="table", mode="w", dropna=True
.....: )
.....:
In [460]: pd.read_hdf("file.h5", "df_with_missing")
Out[460]:
col1 col2
0 0.0 1.0
2 2.0 NaN
Fixed format#
The examples above show storing using put, which writes the HDF5 to PyTables in a fixed array format, called
the fixed format. These types of stores are not appendable once written (though you can simply
remove them and rewrite). Nor are they queryable; they must be
retrieved in their entirety. They also do not support dataframes with non-unique column names.
The fixed format stores offer very fast writing and slightly faster reading than table stores.
This format is specified by default when using put or to_hdf or by format='fixed' or format='f'.
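For example (a minimal sketch; 'fixed' is the default, so the keyword is shown only for clarity):
df.to_hdf("store_fixed.h5", "df", format="fixed")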
Warning
A fixed format will raise a TypeError if you try to retrieve using a where:
>>> pd.DataFrame(np.random.randn(10, 2)).to_hdf("test_fixed.h5", "df")
>>> pd.read_hdf("test_fixed.h5", "df", where="index>5")
TypeError: cannot pass a where specification when reading a fixed format.
this store must be selected in its entirety
Table format#
HDFStore supports another PyTables format on disk, the table
format. Conceptually a table is shaped very much like a DataFrame,
with rows and columns. A table may be appended to in the same or
other sessions. In addition, delete and query type operations are
supported. This format is specified by format='table' or format='t'
to append or put or to_hdf.
This format can also be set as an option, pd.set_option('io.hdf.default_format', 'table'), to
enable put/append/to_hdf to store in the table format by default.
In [461]: store = pd.HDFStore("store.h5")
In [462]: df1 = df[0:4]
In [463]: df2 = df[4:]
# append data (creates a table automatically)
In [464]: store.append("df", df1)
In [465]: store.append("df", df2)
In [466]: store
Out[466]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
# select the entire object
In [467]: store.select("df")
Out[467]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
# the type of stored data
In [468]: store.root.df._v_attrs.pandas_type
Out[468]: 'frame_table'
Note
You can also create a table by passing format='table' or format='t' to a put operation.
Hierarchical keys#
Keys to a store can be specified as a string. These can be in a
hierarchical path-name like format (e.g. foo/bar/bah), which will
generate a hierarchy of sub-stores (or Groups in PyTables
parlance). Keys can be specified without the leading ‘/’ and are always
absolute (e.g. ‘foo’ refers to ‘/foo’). Removal operations can remove
everything in the sub-store and below, so be careful.
In [469]: store.put("foo/bar/bah", df)
In [470]: store.append("food/orange", df)
In [471]: store.append("food/apple", df)
In [472]: store
Out[472]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
# a list of keys are returned
In [473]: store.keys()
Out[473]: ['/df', '/food/apple', '/food/orange', '/foo/bar/bah']
# remove all nodes under this level
In [474]: store.remove("food")
In [475]: store
Out[475]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
You can walk through the group hierarchy using the walk method which
will yield a tuple for each group key along with the relative keys of its contents.
In [476]: for (path, subgroups, subkeys) in store.walk():
.....: for subgroup in subgroups:
.....: print("GROUP: {}/{}".format(path, subgroup))
.....: for subkey in subkeys:
.....: key = "/".join([path, subkey])
.....: print("KEY: {}".format(key))
.....: print(store.get(key))
.....:
GROUP: /foo
KEY: /df
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
GROUP: /foo/bar
KEY: /foo/bar/bah
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Warning
Hierarchical keys cannot be retrieved as dotted (attribute) access as described above for items stored under the root node.
In [8]: store.foo.bar.bah
AttributeError: 'HDFStore' object has no attribute 'foo'
# you can directly access the actual PyTables node but using the root node
In [9]: store.root.foo.bar.bah
Out[9]:
/foo/bar/bah (Group) ''
children := ['block0_items' (Array), 'block0_values' (Array), 'axis0' (Array), 'axis1' (Array)]
Instead, use explicit string based keys:
In [477]: store["foo/bar/bah"]
Out[477]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Storing types#
Storing mixed types in a table#
Storing mixed-dtype data is supported. Strings are stored as
fixed-width using the maximum size of the appended column. Subsequent attempts
at appending longer strings will raise a ValueError.
Passing min_itemsize={'values': size} as a parameter to append
will set a larger minimum for the string columns. Storing floats,
strings, ints, bools, and datetime64 is currently supported. For string
columns, passing nan_rep = 'nan' to append will change the default
nan representation on disk (which converts to/from np.nan); this
defaults to nan.
In [478]: df_mixed = pd.DataFrame(
.....: {
.....: "A": np.random.randn(8),
.....: "B": np.random.randn(8),
.....: "C": np.array(np.random.randn(8), dtype="float32"),
.....: "string": "string",
.....: "int": 1,
.....: "bool": True,
.....: "datetime64": pd.Timestamp("20010102"),
.....: },
.....: index=list(range(8)),
.....: )
.....:
In [479]: df_mixed.loc[df_mixed.index[3:5], ["A", "B", "string", "datetime64"]] = np.nan
In [480]: store.append("df_mixed", df_mixed, min_itemsize={"values": 50})
In [481]: df_mixed1 = store.select("df_mixed")
In [482]: df_mixed1
Out[482]:
A B C string int bool datetime64
0 1.778161 -0.898283 -0.263043 string 1 True 2001-01-02
1 -0.913867 -0.218499 -0.639244 string 1 True 2001-01-02
2 -0.030004 1.408028 -0.866305 string 1 True 2001-01-02
3 NaN NaN -0.225250 NaN 1 True NaT
4 NaN NaN -0.890978 NaN 1 True NaT
5 0.081323 0.520995 -0.553839 string 1 True 2001-01-02
6 -0.268494 0.620028 -2.762875 string 1 True 2001-01-02
7 0.168016 0.159416 -1.244763 string 1 True 2001-01-02
In [483]: df_mixed1.dtypes.value_counts()
Out[483]:
float64 2
float32 1
object 1
int64 1
bool 1
datetime64[ns] 1
dtype: int64
# we have provided a minimum string column size
In [484]: store.root.df_mixed.table
Out[484]:
/df_mixed/table (Table(8,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(2,), dflt=0.0, pos=1),
"values_block_1": Float32Col(shape=(1,), dflt=0.0, pos=2),
"values_block_2": StringCol(itemsize=50, shape=(1,), dflt=b'', pos=3),
"values_block_3": Int64Col(shape=(1,), dflt=0, pos=4),
"values_block_4": BoolCol(shape=(1,), dflt=False, pos=5),
"values_block_5": Int64Col(shape=(1,), dflt=0, pos=6)}
byteorder := 'little'
chunkshape := (689,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False}
Storing MultiIndex DataFrames#
Storing MultiIndex DataFrames as tables is very similar to
storing/selecting from homogeneous index DataFrames.
In [485]: index = pd.MultiIndex(
.....: levels=[["foo", "bar", "baz", "qux"], ["one", "two", "three"]],
.....: codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
.....: names=["foo", "bar"],
.....: )
.....:
In [486]: df_mi = pd.DataFrame(np.random.randn(10, 3), index=index, columns=["A", "B", "C"])
In [487]: df_mi
Out[487]:
A B C
foo bar
foo one -1.280289 0.692545 -0.536722
two 1.005707 0.296917 0.139796
three -1.083889 0.811865 1.648435
bar one -0.164377 -0.402227 1.618922
two -1.424723 -0.023232 0.948196
baz two 0.183573 0.145277 0.308146
three -1.043530 -0.708145 1.430905
qux one -0.850136 0.813949 1.508891
two -1.556154 0.187597 1.176488
three -1.246093 -0.002726 -0.444249
In [488]: store.append("df_mi", df_mi)
In [489]: store.select("df_mi")
Out[489]:
A B C
foo bar
foo one -1.280289 0.692545 -0.536722
two 1.005707 0.296917 0.139796
three -1.083889 0.811865 1.648435
bar one -0.164377 -0.402227 1.618922
two -1.424723 -0.023232 0.948196
baz two 0.183573 0.145277 0.308146
three -1.043530 -0.708145 1.430905
qux one -0.850136 0.813949 1.508891
two -1.556154 0.187597 1.176488
three -1.246093 -0.002726 -0.444249
# the levels are automatically included as data columns
In [490]: store.select("df_mi", "foo=bar")
Out[490]:
A B C
foo bar
bar one -0.164377 -0.402227 1.618922
two -1.424723 -0.023232 0.948196
Note
The index keyword is reserved and cannot be used as a level name.
Querying#
Querying a table#
select and delete operations have an optional criterion that can
be specified to select/delete only a subset of the data. This allows one
to have a very large on-disk table and retrieve only a portion of the
data.
A query is specified using the Term class under the hood, as a boolean expression.
index and columns are supported indexers of DataFrames.
if data_columns are specified, these can be used as additional indexers.
level name in a MultiIndex, with default name level_0, level_1, … if not provided.
Valid comparison operators are:
=, ==, !=, >, >=, <, <=
Valid boolean expressions are combined with:
| : or
& : and
( and ) : for grouping
These rules are similar to how boolean expressions are used in pandas for indexing.
Note
= will be automatically expanded to the comparison operator ==
~ is the not operator, but can only be used in very limited
circumstances
If a list/tuple of expressions is passed they will be combined via &
The following are valid expressions:
'index >= date'
"columns = ['A', 'D']"
"columns in ['A', 'D']"
'columns = A'
'columns == A'
"~(columns = ['A', 'B'])"
'index > df.index[3] & string = "bar"'
'(index > df.index[3] & index <= df.index[6]) | string = "bar"'
"ts >= Timestamp('2012-02-01')"
"major_axis>=20130101"
The indexers are on the left-hand side of the sub-expression:
columns, major_axis, ts
The right-hand side of the sub-expression (after a comparison operator) can be:
functions that will be evaluated, e.g. Timestamp('2012-02-01')
strings, e.g. "bar"
date-like, e.g. 20130101, or "20130101"
lists, e.g. "['A', 'B']"
variables that are defined in the local names space, e.g. date
Note
Passing a string to a query by interpolating it into the query
expression is not recommended. Simply assign the string of interest to a
variable and use that variable in an expression. For example, do this
string = "HolyMoly'"
store.select("df", "index == string")
instead of this
string = "HolyMoly'"
store.select('df', f'index == {string}')
The latter will not work and will raise a SyntaxError. Note that
there’s a single quote followed by a double quote in the string
variable.
If you must interpolate, use the '%r' format specifier
store.select("df", "index == %r" % string)
which will quote string.
Here are some examples:
In [491]: dfq = pd.DataFrame(
.....: np.random.randn(10, 4),
.....: columns=list("ABCD"),
.....: index=pd.date_range("20130101", periods=10),
.....: )
.....:
In [492]: store.append("dfq", dfq, format="table", data_columns=True)
Use boolean expressions, with in-line function evaluation.
In [493]: store.select("dfq", "index>pd.Timestamp('20130104') & columns=['A', 'B']")
Out[493]:
A B
2013-01-05 1.366810 1.073372
2013-01-06 2.119746 -2.628174
2013-01-07 0.337920 -0.634027
2013-01-08 1.053434 1.109090
2013-01-09 -0.772942 -0.269415
2013-01-10 0.048562 -0.285920
Use inline column reference.
In [494]: store.select("dfq", where="A>0 or C>0")
Out[494]:
A B C D
2013-01-01 0.856838 1.491776 0.001283 0.701816
2013-01-02 -1.097917 0.102588 0.661740 0.443531
2013-01-03 0.559313 -0.459055 -1.222598 -0.455304
2013-01-05 1.366810 1.073372 -0.994957 0.755314
2013-01-06 2.119746 -2.628174 -0.089460 -0.133636
2013-01-07 0.337920 -0.634027 0.421107 0.604303
2013-01-08 1.053434 1.109090 -0.367891 -0.846206
2013-01-10 0.048562 -0.285920 1.334100 0.194462
The columns keyword can be supplied to select a list of columns to be
returned; this is equivalent to passing
'columns=list_of_columns_to_filter':
In [495]: store.select("df", "columns=['A', 'B']")
Out[495]:
A B
2000-01-01 -0.398501 -0.677311
2000-01-02 -1.167564 -0.593353
2000-01-03 -0.131959 0.089012
2000-01-04 0.169405 -1.358046
2000-01-05 0.492195 0.076693
2000-01-06 -0.285283 -1.210529
2000-01-07 0.941577 -0.342447
2000-01-08 0.052607 2.093214
start and stop parameters can be specified to limit the total search
space. These are in terms of the total number of rows in a table.
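For example (a minimal sketch):
# Return only the first three rows of the stored table
store.select("df", start=0, stop=3)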
Note
select will raise a ValueError if the query expression has an unknown
variable reference. Usually this means that you are trying to select on a column
that is not a data_column.
select will raise a SyntaxError if the query expression is not valid.
Query timedelta64[ns]#
You can store and query using the timedelta64[ns] type. Terms can be
specified in the format: <float>(<unit>), where float may be signed (and fractional), and unit can be
D,s,ms,us,ns for the timedelta. Here’s an example:
In [496]: from datetime import timedelta
In [497]: dftd = pd.DataFrame(
.....: {
.....: "A": pd.Timestamp("20130101"),
.....: "B": [
.....: pd.Timestamp("20130101") + timedelta(days=i, seconds=10)
.....: for i in range(10)
.....: ],
.....: }
.....: )
.....:
In [498]: dftd["C"] = dftd["A"] - dftd["B"]
In [499]: dftd
Out[499]:
A B C
0 2013-01-01 2013-01-01 00:00:10 -1 days +23:59:50
1 2013-01-01 2013-01-02 00:00:10 -2 days +23:59:50
2 2013-01-01 2013-01-03 00:00:10 -3 days +23:59:50
3 2013-01-01 2013-01-04 00:00:10 -4 days +23:59:50
4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50
5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50
6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50
7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50
8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50
9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50
In [500]: store.append("dftd", dftd, data_columns=True)
In [501]: store.select("dftd", "C<'-3.5D'")
Out[501]:
A B C
4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50
5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50
6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50
7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50
8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50
9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50
Query MultiIndex#
Selecting from a MultiIndex can be achieved by using the name of the level.
In [502]: df_mi.index.names
Out[502]: FrozenList(['foo', 'bar'])
In [503]: store.select("df_mi", "foo=baz and bar=two")
Out[503]:
A B C
foo bar
baz two 0.183573 0.145277 0.308146
If the MultiIndex level names are None, the levels are automatically made available via
the level_n keyword with n the level of the MultiIndex you want to select from.
In [504]: index = pd.MultiIndex(
.....: levels=[["foo", "bar", "baz", "qux"], ["one", "two", "three"]],
.....: codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
.....: )
.....:
In [505]: df_mi_2 = pd.DataFrame(np.random.randn(10, 3), index=index, columns=["A", "B", "C"])
In [506]: df_mi_2
Out[506]:
A B C
foo one -0.646538 1.210676 -0.315409
two 1.528366 0.376542 0.174490
three 1.247943 -0.742283 0.710400
bar one 0.434128 -1.246384 1.139595
two 1.388668 -0.413554 -0.666287
baz two 0.010150 -0.163820 -0.115305
three 0.216467 0.633720 0.473945
qux one -0.155446 1.287082 0.320201
two -1.256989 0.874920 0.765944
three 0.025557 -0.729782 -0.127439
In [507]: store.append("df_mi_2", df_mi_2)
# the levels are automatically included as data columns with keyword level_n
In [508]: store.select("df_mi_2", "level_0=foo and level_1=two")
Out[508]:
A B C
foo two 1.528366 0.376542 0.17449
Indexing#
You can create/modify an index for a table with create_table_index
after data is already in the table (after an append/put
operation). Creating a table index is highly encouraged. This will
speed your queries a great deal when you use a select with the
indexed dimension as the where.
Note
Indexes are automagically created on the indexables
and any data columns you specify. This behavior can be turned off by passing
index=False to append.
# we have automagically already created an index (in the first section)
In [509]: i = store.root.df.table.cols.index.index
In [510]: i.optlevel, i.kind
Out[510]: (6, 'medium')
# change an index by passing new parameters
In [511]: store.create_table_index("df", optlevel=9, kind="full")
In [512]: i = store.root.df.table.cols.index.index
In [513]: i.optlevel, i.kind
Out[513]: (9, 'full')
Oftentimes when appending large amounts of data to a store, it is useful to turn off index creation for each append, then recreate at the end.
In [514]: df_1 = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
In [515]: df_2 = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
In [516]: st = pd.HDFStore("appends.h5", mode="w")
In [517]: st.append("df", df_1, data_columns=["B"], index=False)
In [518]: st.append("df", df_2, data_columns=["B"], index=False)
In [519]: st.get_storer("df").table
Out[519]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)
Then create the index when finished appending.
In [520]: st.create_table_index("df", columns=["B"], optlevel=9, kind="full")
In [521]: st.get_storer("df").table
Out[521]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)
autoindex := True
colindexes := {
"B": Index(9, fullshuffle, zlib(1)).is_csi=True}
In [522]: st.close()
See here for how to create a completely-sorted-index (CSI) on an existing store.
Query via data columns#
You can designate (and index) certain columns that you want to be able
to perform queries on (other than the indexable columns, which you can
always query). For instance, say you want to perform this common
operation, on-disk, and return just the frame that matches this
query. You can specify data_columns = True to force all columns to
be data_columns.
In [523]: df_dc = df.copy()
In [524]: df_dc["string"] = "foo"
In [525]: df_dc.loc[df_dc.index[4:6], "string"] = np.nan
In [526]: df_dc.loc[df_dc.index[7:9], "string"] = "bar"
In [527]: df_dc["string2"] = "cool"
In [528]: df_dc.loc[df_dc.index[1:3], ["B", "C"]] = 1.0
In [529]: df_dc
Out[529]:
A B C string string2
2000-01-01 -0.398501 -0.677311 -0.874991 foo cool
2000-01-02 -1.167564 1.000000 1.000000 foo cool
2000-01-03 -0.131959 1.000000 1.000000 foo cool
2000-01-04 0.169405 -1.358046 -0.105563 foo cool
2000-01-05 0.492195 0.076693 0.213685 NaN cool
2000-01-06 -0.285283 -1.210529 -1.408386 NaN cool
2000-01-07 0.941577 -0.342447 0.222031 foo cool
2000-01-08 0.052607 2.093214 1.064908 bar cool
# on-disk operations
In [530]: store.append("df_dc", df_dc, data_columns=["B", "C", "string", "string2"])
In [531]: store.select("df_dc", where="B > 0")
Out[531]:
A B C string string2
2000-01-02 -1.167564 1.000000 1.000000 foo cool
2000-01-03 -0.131959 1.000000 1.000000 foo cool
2000-01-05 0.492195 0.076693 0.213685 NaN cool
2000-01-08 0.052607 2.093214 1.064908 bar cool
# getting creative
In [532]: store.select("df_dc", "B > 0 & C > 0 & string == foo")
Out[532]:
A B C string string2
2000-01-02 -1.167564 1.0 1.0 foo cool
2000-01-03 -0.131959 1.0 1.0 foo cool
# this is in-memory version of this type of selection
In [533]: df_dc[(df_dc.B > 0) & (df_dc.C > 0) & (df_dc.string == "foo")]
Out[533]:
A B C string string2
2000-01-02 -1.167564 1.0 1.0 foo cool
2000-01-03 -0.131959 1.0 1.0 foo cool
# we have automagically created this index and the B/C/string/string2
# columns are stored separately as ``PyTables`` columns
In [534]: store.root.df_dc.table
Out[534]:
/df_dc/table (Table(8,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2),
"C": Float64Col(shape=(), dflt=0.0, pos=3),
"string": StringCol(itemsize=3, shape=(), dflt=b'', pos=4),
"string2": StringCol(itemsize=4, shape=(), dflt=b'', pos=5)}
byteorder := 'little'
chunkshape := (1680,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"B": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"C": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"string": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"string2": Index(6, mediumshuffle, zlib(1)).is_csi=False}
There is some performance degradation when making lots of columns into
data columns, so it is up to the user to designate these. In addition,
you cannot change data columns (nor indexables) after the first
append/put operation (of course you can simply read in the data and
create a new table!).
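A minimal sketch of that rewrite, written to a new key so the original table is left untouched:
# Read the stored frame back and re-append it with a different set of data columns
tmp = store.select("df_dc")
store.append("df_dc_v2", tmp, data_columns=True)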
Iterator#
You can pass iterator=True or chunksize=number_in_a_chunk
to select and select_as_multiple to return an iterator on the results.
The default is 50,000 rows returned in a chunk.
In [535]: for df in store.select("df", chunksize=3):
.....: print(df)
.....:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
A B C
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
A B C
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Note
You can also use the iterator with read_hdf which will open, then
automatically close the store when finished iterating.
for df in pd.read_hdf("store.h5", "df", chunksize=3):
print(df)
Note that the chunksize keyword applies to the source rows. So if you
are doing a query, the chunksize will subdivide the total rows in the table
with the query applied to each chunk, returning an iterator over potentially unequal sized chunks.
Here is a recipe for generating a query and using it to create equal sized return
chunks.
In [536]: dfeq = pd.DataFrame({"number": np.arange(1, 11)})
In [537]: dfeq
Out[537]:
number
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 10
In [538]: store.append("dfeq", dfeq, data_columns=["number"])
In [539]: def chunks(l, n):
.....: return [l[i: i + n] for i in range(0, len(l), n)]
.....:
In [540]: evens = [2, 4, 6, 8, 10]
In [541]: coordinates = store.select_as_coordinates("dfeq", "number=evens")
In [542]: for c in chunks(coordinates, 2):
.....: print(store.select("dfeq", where=c))
.....:
number
1 2
3 4
number
5 6
7 8
number
9 10
Advanced queries#
Select a single column#
To retrieve a single indexable or data column, use the
method select_column. This will, for example, enable you to get the index
very quickly. These return a Series of the result, indexed by the row number.
These do not currently accept the where selector.
In [543]: store.select_column("df_dc", "index")
Out[543]:
0 2000-01-01
1 2000-01-02
2 2000-01-03
3 2000-01-04
4 2000-01-05
5 2000-01-06
6 2000-01-07
7 2000-01-08
Name: index, dtype: datetime64[ns]
In [544]: store.select_column("df_dc", "string")
Out[544]:
0 foo
1 foo
2 foo
3 foo
4 NaN
5 NaN
6 foo
7 bar
Name: string, dtype: object
Selecting coordinates#
Sometimes you want to get the coordinates (a.k.a. the index locations) of your query. This returns an
Int64Index of the resulting locations. These coordinates can also be passed to subsequent
where operations.
In [545]: df_coord = pd.DataFrame(
.....: np.random.randn(1000, 2), index=pd.date_range("20000101", periods=1000)
.....: )
.....:
In [546]: store.append("df_coord", df_coord)
In [547]: c = store.select_as_coordinates("df_coord", "index > 20020101")
In [548]: c
Out[548]:
Int64Index([732, 733, 734, 735, 736, 737, 738, 739, 740, 741,
...
990, 991, 992, 993, 994, 995, 996, 997, 998, 999],
dtype='int64', length=268)
In [549]: store.select("df_coord", where=c)
Out[549]:
0 1
2002-01-02 0.009035 0.921784
2002-01-03 -1.476563 -1.376375
2002-01-04 1.266731 2.173681
2002-01-05 0.147621 0.616468
2002-01-06 0.008611 2.136001
... ... ...
2002-09-22 0.781169 -0.791687
2002-09-23 -0.764810 -2.000933
2002-09-24 -0.345662 0.393915
2002-09-25 -0.116661 0.834638
2002-09-26 -1.341780 0.686366
[268 rows x 2 columns]
Selecting using a where mask#
Sometimes your query can involve creating a list of rows to select. Usually this mask would
be a resulting index from an indexing operation. This example selects the months of
a DatetimeIndex which are 5.
In [550]: df_mask = pd.DataFrame(
.....: np.random.randn(1000, 2), index=pd.date_range("20000101", periods=1000)
.....: )
.....:
In [551]: store.append("df_mask", df_mask)
In [552]: c = store.select_column("df_mask", "index")
In [553]: where = c[pd.DatetimeIndex(c).month == 5].index
In [554]: store.select("df_mask", where=where)
Out[554]:
0 1
2000-05-01 -0.386742 -0.977433
2000-05-02 -0.228819 0.471671
2000-05-03 0.337307 1.840494
2000-05-04 0.050249 0.307149
2000-05-05 -0.802947 -0.946730
... ... ...
2002-05-27 1.605281 1.741415
2002-05-28 -0.804450 -0.715040
2002-05-29 -0.874851 0.037178
2002-05-30 -0.161167 -1.294944
2002-05-31 -0.258463 -0.731969
[93 rows x 2 columns]
Storer object#
If you want to inspect the stored object, retrieve it via
get_storer. You could use this programmatically to, say, get the number
of rows in an object.
In [555]: store.get_storer("df_dc").nrows
Out[555]: 8
Multiple table queries#
The methods append_to_multiple and
select_as_multiple can perform appending/selecting from
multiple tables at once. The idea is to have one table (call it the
selector table) on which you index most/all of the columns and perform your
queries. The other table(s) are data tables with an index matching the
selector table’s index. You can then perform a very fast query
on the selector table, yet get lots of data back. This method is similar to
having a very wide table, but enables more efficient queries.
The append_to_multiple method splits a given single DataFrame
into multiple tables according to d, a dictionary that maps the
table names to a list of ‘columns’ you want in that table. If None
is used in place of a list, that table will have the remaining
unspecified columns of the given DataFrame. The argument selector
defines which table is the selector table (which you can make queries from).
The argument dropna will drop rows from the input DataFrame to ensure
tables are synchronized. This means that if a row for one of the tables
being written to is entirely np.NaN, that row will be dropped from all tables.
If dropna is False, THE USER IS RESPONSIBLE FOR SYNCHRONIZING THE TABLES.
Remember that entirely np.nan rows are not written to the HDFStore, so if
you choose to pass dropna=False, some tables may have more rows than others,
and therefore select_as_multiple may not work or it may return unexpected
results.
In [556]: df_mt = pd.DataFrame(
.....: np.random.randn(8, 6),
.....: index=pd.date_range("1/1/2000", periods=8),
.....: columns=["A", "B", "C", "D", "E", "F"],
.....: )
.....:
In [557]: df_mt["foo"] = "bar"
In [558]: df_mt.loc[df_mt.index[1], ("A", "B")] = np.nan
# you can also create the tables individually
In [559]: store.append_to_multiple(
.....: {"df1_mt": ["A", "B"], "df2_mt": None}, df_mt, selector="df1_mt"
.....: )
.....:
In [560]: store
Out[560]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
# individual tables were created
In [561]: store.select("df1_mt")
Out[561]:
A B
2000-01-01 0.079529 -1.459471
2000-01-02 NaN NaN
2000-01-03 -0.423113 2.314361
2000-01-04 0.756744 -0.792372
2000-01-05 -0.184971 0.170852
2000-01-06 0.678830 0.633974
2000-01-07 0.034973 0.974369
2000-01-08 -2.110103 0.243062
In [562]: store.select("df2_mt")
Out[562]:
C D E F foo
2000-01-01 -0.596306 -0.910022 -1.057072 -0.864360 bar
2000-01-02 0.477849 0.283128 -2.045700 -0.338206 bar
2000-01-03 -0.033100 -0.965461 -0.001079 -0.351689 bar
2000-01-04 -0.513555 -1.484776 -0.796280 -0.182321 bar
2000-01-05 -0.872407 -1.751515 0.934334 0.938818 bar
2000-01-06 -1.398256 1.347142 -0.029520 0.082738 bar
2000-01-07 -0.755544 0.380786 -1.634116 1.293610 bar
2000-01-08 1.453064 0.500558 -0.574475 0.694324 bar
# as a multiple
In [563]: store.select_as_multiple(
.....: ["df1_mt", "df2_mt"],
.....: where=["A>0", "B>0"],
.....: selector="df1_mt",
.....: )
.....:
Out[563]:
A B C D E F foo
2000-01-06 0.678830 0.633974 -1.398256 1.347142 -0.029520 0.082738 bar
2000-01-07 0.034973 0.974369 -0.755544 0.380786 -1.634116 1.293610 bar
Delete from a table#
You can delete from a table selectively by specifying a where. When
deleting rows, it is important to understand that PyTables deletes
rows by erasing the rows, then moving the following data. Thus
deleting can potentially be a very expensive operation depending on the
orientation of your data. To get optimal performance, it’s
worthwhile to have the dimension you are deleting be the first of the
indexables.
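For example (a minimal sketch against the df table stored above; the date cutoff is arbitrary):
# Delete only the rows after a given date from a table-format store
store.remove("df", where="index > '20000105'")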
Data is ordered (on the disk) in terms of the indexables. Here’s a
simple use case. You store panel-type data, with dates in the
major_axis and ids in the minor_axis. The data is then
interleaved like this:
date_1
id_1
id_2
.
id_n
date_2
id_1
.
id_n
It should be clear that a delete operation on the major_axis will be
fairly quick, as one chunk is removed, then the following data moved. On
the other hand a delete operation on the minor_axis will be very
expensive. In this case it would almost certainly be faster to rewrite
the table using a where that selects all but the missing data.
Warning
Please note that HDF5 DOES NOT RECLAIM SPACE in the h5 files
automatically. Thus, repeatedly deleting (or removing nodes) and adding
again, WILL TEND TO INCREASE THE FILE SIZE.
To repack and clean the file, use ptrepack.
Notes & caveats#
Compression#
PyTables allows the stored data to be compressed. This applies to
all kinds of stores, not just tables. Two parameters are used to
control compression: complevel and complib.
complevel specifies if and how hard data is to be compressed.
complevel=0 and complevel=None disable compression and
0<complevel<10 enables compression.
complib specifies which compression library to use.
If nothing is specified the default library zlib is used. A
compression library usually optimizes for either good compression rates
or speed and the results will depend on the type of data. Which type of
compression to choose depends on your specific needs and data. The list
of supported compression libraries:
zlib: The default compression library.
A classic in terms of compression, achieves good compression
rates but is somewhat slow.
lzo: Fast
compression and decompression.
bzip2: Good compression rates.
blosc: Fast compression and
decompression.
Support for alternative blosc compressors:
blosc:blosclz This is the
default compressor for blosc
blosc:lz4:
A compact, very popular and fast compressor.
blosc:lz4hc:
A tweaked version of LZ4, produces better
compression ratios at the expense of speed.
blosc:snappy:
A popular compressor used in many places.
blosc:zlib: A classic;
somewhat slower than the previous ones, but
achieving better compression ratios.
blosc:zstd: An
extremely well balanced codec; it provides the best
compression ratios among the others above, and at
reasonably fast speed.
If complib is defined as something other than the listed libraries a
ValueError exception is issued.
Note
If the library specified with the complib option is missing on your platform,
compression defaults to zlib without further ado.
Enable compression for all objects within the file:
store_compressed = pd.HDFStore(
"store_compressed.h5", complevel=9, complib="blosc:blosclz"
)
Or on-the-fly compression (this only applies to tables) in stores where compression is not enabled:
store.append("df", df, complib="zlib", complevel=5)
ptrepack#
PyTables offers better write performance when tables are compressed after
they are written, as opposed to turning on compression at the very
beginning. You can use the supplied PyTables utility
ptrepack. In addition, ptrepack can change compression levels
after the fact.
ptrepack --chunkshape=auto --propindexes --complevel=9 --complib=blosc in.h5 out.h5
Furthermore ptrepack in.h5 out.h5 will repack the file to allow
you to reuse previously deleted space. Alternatively, one can simply
remove the file and write again, or use the copy method.
Caveats#
Warning
HDFStore is not-threadsafe for writing. The underlying
PyTables only supports concurrent reads (via threading or
processes). If you need reading and writing at the same time, you
need to serialize these operations in a single thread in a single
process. You will corrupt your data otherwise. See the (GH2397) for more information.
If you use locks to manage write access between multiple processes, you
may want to use fsync() before releasing write locks. For
convenience you can use store.flush(fsync=True) to do this for you.
Once a table is created columns (DataFrame)
are fixed; only exactly the same columns can be appended
Be aware that timezones (e.g., pytz.timezone('US/Eastern'))
are not necessarily equal across timezone versions. So if data is
localized to a specific timezone in the HDFStore using one version
of a timezone library and that data is updated with another version, the data
will be converted to UTC since these timezones are not considered
equal. Either use the same version of timezone library or use tz_convert with
the updated timezone definition.
Warning
PyTables will show a NaturalNameWarning if a column name
cannot be used as an attribute selector.
Natural identifiers contain only letters, numbers, and underscores,
and may not begin with a number.
Other identifiers cannot be used in a where clause
and are generally a bad idea.
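A small sketch of avoiding the warning by renaming to a natural identifier before storing (the column names here are made up for illustration):
df_names = pd.DataFrame({"my col": [1, 2], "B": [3.0, 4.0]})
df_names = df_names.rename(columns={"my col": "my_col"})  # "my_col" is a natural identifier
store.append("df_names", df_names, data_columns=["my_col"])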
DataTypes#
HDFStore will map an object dtype to the PyTables underlying
dtype. This means the following types are known to work:
Type                                                  Represents missing values
floating : float64, float32, float16                  np.nan
integer : int64, int32, int8, uint64, uint32, uint8
boolean
datetime64[ns]                                        NaT
timedelta64[ns]                                       NaT
categorical : see the section below
object : strings                                      np.nan
unicode columns are not supported, and WILL FAIL.
Categorical data#
You can write data that contains category dtypes to a HDFStore.
Queries work the same as if it was an object array. However, the category dtyped data is
stored in a more efficient manner.
In [564]: dfcat = pd.DataFrame(
.....: {"A": pd.Series(list("aabbcdba")).astype("category"), "B": np.random.randn(8)}
.....: )
.....:
In [565]: dfcat
Out[565]:
A B
0 a -1.608059
1 a 0.851060
2 b -0.736931
3 b 0.003538
4 c -1.422611
5 d 2.060901
6 b 0.993899
7 a -1.371768
In [566]: dfcat.dtypes
Out[566]:
A category
B float64
dtype: object
In [567]: cstore = pd.HDFStore("cats.h5", mode="w")
In [568]: cstore.append("dfcat", dfcat, format="table", data_columns=["A"])
In [569]: result = cstore.select("dfcat", where="A in ['b', 'c']")
In [570]: result
Out[570]:
A B
2 b -0.736931
3 b 0.003538
4 c -1.422611
6 b 0.993899
In [571]: result.dtypes
Out[571]:
A category
B float64
dtype: object
String columns#
min_itemsize
The underlying implementation of HDFStore uses a fixed column width (itemsize) for string columns.
A string column itemsize is calculated as the maximum of the
length of data (for that column) that is passed to the HDFStore in the first append. Subsequent appends
may introduce a string for a column that is larger than the column can hold; in that case an Exception will be raised (otherwise you
could have a silent truncation of these columns, leading to loss of information). In the future we may relax this and
allow a user-specified truncation to occur.
Pass min_itemsize on the first table creation to a-priori specify the minimum length of a particular string column.
min_itemsize can be an integer, or a dict mapping a column name to an integer. You can pass values as a key to
allow all indexables or data_columns to have this min_itemsize.
Passing a min_itemsize dict will cause all passed columns to be created as data_columns automatically.
Note
If you are not passing any data_columns, then the min_itemsize will be the maximum of the length of any string passed
In [572]: dfs = pd.DataFrame({"A": "foo", "B": "bar"}, index=list(range(5)))
In [573]: dfs
Out[573]:
A B
0 foo bar
1 foo bar
2 foo bar
3 foo bar
4 foo bar
# A and B have a size of 30
In [574]: store.append("dfs", dfs, min_itemsize=30)
In [575]: store.get_storer("dfs").table
Out[575]:
/dfs/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=30, shape=(2,), dflt=b'', pos=1)}
byteorder := 'little'
chunkshape := (963,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False}
# A is created as a data_column with a size of 30
# B's size is calculated
In [576]: store.append("dfs2", dfs, min_itemsize={"A": 30})
In [577]: store.get_storer("dfs2").table
Out[577]:
/dfs2/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=3, shape=(1,), dflt=b'', pos=1),
"A": StringCol(itemsize=30, shape=(), dflt=b'', pos=2)}
byteorder := 'little'
chunkshape := (1598,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"A": Index(6, mediumshuffle, zlib(1)).is_csi=False}
nan_rep
String columns will serialize a np.nan (a missing value) with the nan_rep string representation. This defaults to the string value nan.
You could inadvertently turn an actual nan value into a missing value.
In [578]: dfss = pd.DataFrame({"A": ["foo", "bar", "nan"]})
In [579]: dfss
Out[579]:
A
0 foo
1 bar
2 nan
In [580]: store.append("dfss", dfss)
In [581]: store.select("dfss")
Out[581]:
A
0 foo
1 bar
2 NaN
# here you need to specify a different nan rep
In [582]: store.append("dfss2", dfss, nan_rep="_nan_")
In [583]: store.select("dfss2")
Out[583]:
A
0 foo
1 bar
2 nan
External compatibility#
HDFStore writes table format objects in specific formats suitable for
producing loss-less round trips to pandas objects. For external
compatibility, HDFStore can read native PyTables format
tables.
It is possible to write an HDFStore object that can easily be imported into R using the
rhdf5 library (Package website). Create a table format store like this:
In [584]: df_for_r = pd.DataFrame(
.....: {
.....: "first": np.random.rand(100),
.....: "second": np.random.rand(100),
.....: "class": np.random.randint(0, 2, (100,)),
.....: },
.....: index=range(100),
.....: )
.....:
In [585]: df_for_r.head()
Out[585]:
first second class
0 0.013480 0.504941 0
1 0.690984 0.898188 1
2 0.510113 0.618748 1
3 0.357698 0.004972 0
4 0.451658 0.012065 1
In [586]: store_export = pd.HDFStore("export.h5")
In [587]: store_export.append("df_for_r", df_for_r, data_columns=df_dc.columns)
In [588]: store_export
Out[588]:
<class 'pandas.io.pytables.HDFStore'>
File path: export.h5
In R this file can be read into a data.frame object using the rhdf5
library. The following example function reads the corresponding column names
and data values from the values and assembles them into a data.frame:
# Load values and column names for all datasets from corresponding nodes and
# insert them into one data.frame object.
library(rhdf5)
loadhdf5data <- function(h5File) {
listing <- h5ls(h5File)
# Find all data nodes, values are stored in *_values and corresponding column
# titles in *_items
data_nodes <- grep("_values", listing$name)
name_nodes <- grep("_items", listing$name)
data_paths = paste(listing$group[data_nodes], listing$name[data_nodes], sep = "/")
name_paths = paste(listing$group[name_nodes], listing$name[name_nodes], sep = "/")
columns = list()
for (idx in seq(data_paths)) {
# NOTE: matrices returned by h5read have to be transposed to obtain
# required Fortran order!
data <- data.frame(t(h5read(h5File, data_paths[idx])))
names <- t(h5read(h5File, name_paths[idx]))
entry <- data.frame(data)
colnames(entry) <- names
columns <- append(columns, entry)
}
data <- data.frame(columns)
return(data)
}
Now you can import the DataFrame into R:
> data = loadhdf5data("export.h5")
> head(data)
first second class
1 0.4170220047 0.3266449 0
2 0.7203244934 0.5270581 0
3 0.0001143748 0.8859421 1
4 0.3023325726 0.3572698 1
5 0.1467558908 0.9085352 1
6 0.0923385948 0.6233601 1
Note
The R function lists the entire HDF5 file’s contents and assembles the
data.frame object from all matching nodes, so use this only as a
starting point if you have stored multiple DataFrame objects to a
single HDF5 file.
Performance#
The tables format comes with a writing performance penalty as compared to
fixed stores. The benefit is the ability to append/delete and
query (potentially very large amounts of data). Write times are
generally longer as compared with regular stores. Query times can
be quite fast, especially on an indexed axis.
You can pass chunksize=<int> to append, specifying the
write chunksize (default is 50000). This will significantly lower
your memory usage on writing.
You can pass expectedrows=<int> to the first append,
to set the TOTAL number of rows that PyTables will expect.
This will optimize read/write performance (see the sketch after this list).
Duplicate rows can be written to tables, but are filtered out in
selection (with the last items being selected; thus a table is
unique on major, minor pairs)
A PerformanceWarning will be raised if you are attempting to
store types that will be pickled by PyTables (rather than stored as
endemic types). See
Here
for more information and some solutions.
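A minimal sketch of the chunksize and expectedrows options from the list above (the key "df_big" is a placeholder):
store.append("df_big", df_mt, chunksize=10000, expectedrows=len(df_mt))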
Feather#
Feather provides binary columnar serialization for data frames. It is designed to make reading and writing data
frames efficient, and to make sharing data across data analysis languages easy.
Feather is designed to faithfully serialize and de-serialize DataFrames, supporting all of the pandas
dtypes, including extension dtypes such as categorical and datetime with tz.
Several caveats:
The format will NOT write an Index, or MultiIndex for the
DataFrame and will raise an error if a non-default one is provided. You
can .reset_index() to store the index or .reset_index(drop=True) to
ignore it.
Duplicate column names and non-string columns names are not supported
Actual Python objects in object dtype columns are not supported. These will
raise a helpful error message on an attempt at serialization.
See the Full Documentation.
In [589]: df = pd.DataFrame(
.....: {
.....: "a": list("abc"),
.....: "b": list(range(1, 4)),
.....: "c": np.arange(3, 6).astype("u1"),
.....: "d": np.arange(4.0, 7.0, dtype="float64"),
.....: "e": [True, False, True],
.....: "f": pd.Categorical(list("abc")),
.....: "g": pd.date_range("20130101", periods=3),
.....: "h": pd.date_range("20130101", periods=3, tz="US/Eastern"),
.....: "i": pd.date_range("20130101", periods=3, freq="ns"),
.....: }
.....: )
.....:
In [590]: df
Out[590]:
a b c ... g h i
0 a 1 3 ... 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00.000000000
1 b 2 4 ... 2013-01-02 2013-01-02 00:00:00-05:00 2013-01-01 00:00:00.000000001
2 c 3 5 ... 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-01 00:00:00.000000002
[3 rows x 9 columns]
In [591]: df.dtypes
Out[591]:
a object
b int64
c uint8
d float64
e bool
f category
g datetime64[ns]
h datetime64[ns, US/Eastern]
i datetime64[ns]
dtype: object
Write to a feather file.
In [592]: df.to_feather("example.feather")
Read from a feather file.
In [593]: result = pd.read_feather("example.feather")
In [594]: result
Out[594]:
a b c ... g h i
0 a 1 3 ... 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00.000000000
1 b 2 4 ... 2013-01-02 2013-01-02 00:00:00-05:00 2013-01-01 00:00:00.000000001
2 c 3 5 ... 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-01 00:00:00.000000002
[3 rows x 9 columns]
# we preserve dtypes
In [595]: result.dtypes
Out[595]:
a object
b int64
c uint8
d float64
e bool
f category
g datetime64[ns]
h datetime64[ns, US/Eastern]
i datetime64[ns]
dtype: object
Parquet#
Apache Parquet provides a partitioned binary columnar serialization for data frames. It is designed to
make reading and writing data frames efficient, and to make sharing data across data analysis
languages easy. Parquet can use a variety of compression techniques to shrink the file size as much as possible
while still maintaining good read performance.
Parquet is designed to faithfully serialize and de-serialize DataFrame s, supporting all of the pandas
dtypes, including extension dtypes such as datetime with tz.
Several caveats.
Duplicate column names and non-string columns names are not supported.
The pyarrow engine always writes the index to the output, but fastparquet only writes non-default
indexes. This extra column can cause problems for non-pandas consumers that are not expecting it. You can
force including or omitting indexes with the index argument, regardless of the underlying engine.
Index level names, if specified, must be strings.
In the pyarrow engine, categorical dtypes for non-string types can be serialized to parquet, but will de-serialize as their primitive dtype.
The pyarrow engine preserves the ordered flag of categorical dtypes with string types. fastparquet does not preserve the ordered flag.
Non supported types include Interval and actual Python object types. These will raise a helpful error message
on an attempt at serialization. Period type is supported with pyarrow >= 0.16.0.
The pyarrow engine preserves extension data types such as the nullable integer and string data
type (requiring pyarrow >= 0.16.0, and requiring the extension type to implement the needed protocols,
see the extension types documentation).
You can specify an engine to direct the serialization. This can be one of pyarrow, or fastparquet, or auto.
If the engine is NOT specified, then the pd.options.io.parquet.engine option is checked; if this is also auto,
then pyarrow is tried, falling back to fastparquet.
See the documentation for pyarrow and fastparquet.
Note
These engines are very similar and should read/write nearly identical parquet format files.
pyarrow>=8.0.0 supports timedelta data, fastparquet>=0.1.4 supports timezone aware datetimes.
These libraries differ by having different underlying dependencies (fastparquet by using numba, while pyarrow uses a c-library).
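As a short sketch, the default engine can be pinned through the option that is checked when engine is not specified:
pd.set_option("io.parquet.engine", "pyarrow")
pd.get_option("io.parquet.engine")  # 'pyarrow'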
In [596]: df = pd.DataFrame(
.....: {
.....: "a": list("abc"),
.....: "b": list(range(1, 4)),
.....: "c": np.arange(3, 6).astype("u1"),
.....: "d": np.arange(4.0, 7.0, dtype="float64"),
.....: "e": [True, False, True],
.....: "f": pd.date_range("20130101", periods=3),
.....: "g": pd.date_range("20130101", periods=3, tz="US/Eastern"),
.....: "h": pd.Categorical(list("abc")),
.....: "i": pd.Categorical(list("abc"), ordered=True),
.....: }
.....: )
.....:
In [597]: df
Out[597]:
a b c d e f g h i
0 a 1 3 4.0 True 2013-01-01 2013-01-01 00:00:00-05:00 a a
1 b 2 4 5.0 False 2013-01-02 2013-01-02 00:00:00-05:00 b b
2 c 3 5 6.0 True 2013-01-03 2013-01-03 00:00:00-05:00 c c
In [598]: df.dtypes
Out[598]:
a object
b int64
c uint8
d float64
e bool
f datetime64[ns]
g datetime64[ns, US/Eastern]
h category
i category
dtype: object
Write to a parquet file.
In [599]: df.to_parquet("example_pa.parquet", engine="pyarrow")
In [600]: df.to_parquet("example_fp.parquet", engine="fastparquet")
Read from a parquet file.
In [601]: result = pd.read_parquet("example_fp.parquet", engine="fastparquet")
In [602]: result = pd.read_parquet("example_pa.parquet", engine="pyarrow")
In [603]: result.dtypes
Out[603]:
a object
b int64
c uint8
d float64
e bool
f datetime64[ns]
g datetime64[ns, US/Eastern]
h category
i category
dtype: object
Read only certain columns of a parquet file.
In [604]: result = pd.read_parquet(
.....: "example_fp.parquet",
.....: engine="fastparquet",
.....: columns=["a", "b"],
.....: )
.....:
In [605]: result = pd.read_parquet(
.....: "example_pa.parquet",
.....: engine="pyarrow",
.....: columns=["a", "b"],
.....: )
.....:
In [606]: result.dtypes
Out[606]:
a object
b int64
dtype: object
Handling indexes#
Serializing a DataFrame to parquet may include the implicit index as one or
more columns in the output file. Thus, this code:
In [607]: df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
In [608]: df.to_parquet("test.parquet", engine="pyarrow")
creates a parquet file with three columns if you use pyarrow for serialization:
a, b, and __index_level_0__. If you’re using fastparquet, the
index may or may not
be written to the file.
This unexpected extra column causes some databases like Amazon Redshift to reject
the file, because that column doesn’t exist in the target table.
If you want to omit a dataframe’s indexes when writing, pass index=False to
to_parquet():
In [609]: df.to_parquet("test.parquet", index=False)
This creates a parquet file with just the two expected columns, a and b.
If your DataFrame has a custom index, you won’t get it back when you load
this file into a DataFrame.
Passing index=True will always write the index, even if that’s not the
underlying engine’s default behavior.
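For example, a quick sketch of forcing the index on or off regardless of engine (the file names are placeholders):
df.to_parquet("test_with_index.parquet", engine="pyarrow", index=True)
df.to_parquet("test_no_index.parquet", engine="fastparquet", index=False)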
Partitioning Parquet files#
Parquet supports partitioning of data based on the values of one or more columns.
In [610]: df = pd.DataFrame({"a": [0, 0, 1, 1], "b": [0, 1, 0, 1]})
In [611]: df.to_parquet(path="test", engine="pyarrow", partition_cols=["a"], compression=None)
The path specifies the parent directory to which data will be saved.
The partition_cols are the column names by which the dataset will be partitioned.
Columns are partitioned in the order they are given. The partition splits are
determined by the unique values in the partition columns.
The above example creates a partitioned dataset that may look like:
test
├── a=0
│ ├── 0bac803e32dc42ae83fddfd029cbdebc.parquet
│ └── ...
└── a=1
├── e6ab24a4f45147b49b54a662f0c412a3.parquet
└── ...
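Reading the directory back reassembles the partition column from the directory names; a short sketch with pyarrow (the partition column typically comes back as a categorical):
result = pd.read_parquet("test", engine="pyarrow")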
ORC#
New in version 1.0.0.
Similar to the parquet format, the ORC Format is a binary columnar serialization
for data frames. It is designed to make reading data frames efficient. pandas provides both the reader and the writer for the
ORC format, read_orc() and to_orc(). This requires the pyarrow library.
Warning
It is highly recommended to install pyarrow using conda, due to some issues that can occur with pyarrow.
to_orc() requires pyarrow>=7.0.0.
read_orc() and to_orc() are not supported on Windows yet, you can find valid environments on install optional dependencies.
For supported dtypes please refer to supported ORC features in Arrow.
Currently timezones in datetime columns are not preserved when a dataframe is converted into ORC files.
In [612]: df = pd.DataFrame(
.....: {
.....: "a": list("abc"),
.....: "b": list(range(1, 4)),
.....: "c": np.arange(4.0, 7.0, dtype="float64"),
.....: "d": [True, False, True],
.....: "e": pd.date_range("20130101", periods=3),
.....: }
.....: )
.....:
In [613]: df
Out[613]:
a b c d e
0 a 1 4.0 True 2013-01-01
1 b 2 5.0 False 2013-01-02
2 c 3 6.0 True 2013-01-03
In [614]: df.dtypes
Out[614]:
a object
b int64
c float64
d bool
e datetime64[ns]
dtype: object
Write to an orc file.
In [615]: df.to_orc("example_pa.orc", engine="pyarrow")
Read from an orc file.
In [616]: result = pd.read_orc("example_pa.orc")
In [617]: result.dtypes
Out[617]:
a object
b int64
c float64
d bool
e datetime64[ns]
dtype: object
Read only certain columns of an orc file.
In [618]: result = pd.read_orc(
.....: "example_pa.orc",
.....: columns=["a", "b"],
.....: )
.....:
In [619]: result.dtypes
Out[619]:
a object
b int64
dtype: object
SQL queries#
The pandas.io.sql module provides a collection of query wrappers to both
facilitate data retrieval and to reduce dependency on DB-specific API. Database abstraction
is provided by SQLAlchemy if installed. In addition you will need a driver library for
your database. Examples of such drivers are psycopg2
for PostgreSQL or pymysql for MySQL.
For SQLite this is
included in Python’s standard library by default.
You can find an overview of supported drivers for each SQL dialect in the
SQLAlchemy docs.
If SQLAlchemy is not installed, a fallback is only provided for sqlite (and
for mysql for backwards compatibility, but this is deprecated and will be
removed in a future version).
This mode requires a Python database adapter which respects the Python
DB-API.
See also some cookbook examples for some advanced strategies.
The key functions are:
read_sql_table(table_name, con[, schema, ...])
Read SQL database table into a DataFrame.
read_sql_query(sql, con[, index_col, ...])
Read SQL query into a DataFrame.
read_sql(sql, con[, index_col, ...])
Read SQL query or database table into a DataFrame.
DataFrame.to_sql(name, con[, schema, ...])
Write records stored in a DataFrame to a SQL database.
Note
The function read_sql() is a convenience wrapper around
read_sql_table() and read_sql_query() (and for
backward compatibility) and will delegate to the specific function depending on
the provided input (database table name or sql query).
Table names do not need to be quoted if they have special characters.
In the following example, we use the SQlite SQL database
engine. You can use a temporary SQLite database where data are stored in
“memory”.
To connect with SQLAlchemy you use the create_engine() function to create an engine
object from database URI. You only need to create the engine once per database you are
connecting to.
For more information on create_engine() and the URI formatting, see the examples
below and the SQLAlchemy documentation
In [620]: from sqlalchemy import create_engine
# Create your engine.
In [621]: engine = create_engine("sqlite:///:memory:")
If you want to manage your own connections you can pass one of those instead. The example below opens a
connection to the database using a Python context manager that automatically closes the connection after
the block has completed.
See the SQLAlchemy docs
for an explanation of how the database connection is handled.
with engine.connect() as conn, conn.begin():
data = pd.read_sql_table("data", conn)
Warning
When you open a connection to a database you are also responsible for closing it.
Side effects of leaving a connection open may include locking the database or
other breaking behaviour.
Writing DataFrames#
Assuming the following data is in a DataFrame data, we can insert it into
the database using to_sql().
id  Date        Col_1  Col_2  Col_3
26  2012-10-18  X       25.7  True
42  2012-10-19  Y      -12.4  False
63  2012-10-20  Z       5.73  True
In [622]: import datetime
In [623]: c = ["id", "Date", "Col_1", "Col_2", "Col_3"]
In [624]: d = [
.....: (26, datetime.datetime(2010, 10, 18), "X", 27.5, True),
.....: (42, datetime.datetime(2010, 10, 19), "Y", -12.5, False),
.....: (63, datetime.datetime(2010, 10, 20), "Z", 5.73, True),
.....: ]
.....:
In [625]: data = pd.DataFrame(d, columns=c)
In [626]: data
Out[626]:
id Date Col_1 Col_2 Col_3
0 26 2010-10-18 X 27.50 True
1 42 2010-10-19 Y -12.50 False
2 63 2010-10-20 Z 5.73 True
In [627]: data.to_sql("data", engine)
Out[627]: 3
With some databases, writing large DataFrames can result in errors due to
packet size limitations being exceeded. This can be avoided by setting the
chunksize parameter when calling to_sql. For example, the following
writes data to the database in batches of 1000 rows at a time:
In [628]: data.to_sql("data_chunked", engine, chunksize=1000)
Out[628]: 3
SQL data types#
to_sql() will try to map your data to an appropriate
SQL data type based on the dtype of the data. When you have columns of dtype
object, pandas will try to infer the data type.
You can always override the default type by specifying the desired SQL type of
any of the columns by using the dtype argument. This argument needs a
dictionary mapping column names to SQLAlchemy types (or strings for the sqlite3
fallback mode).
For example, specifying to use the sqlalchemy String type instead of the
default Text type for string columns:
In [629]: from sqlalchemy.types import String
In [630]: data.to_sql("data_dtype", engine, dtype={"Col_1": String})
Out[630]: 3
Note
Due to the limited support for timedelta’s in the different database
flavors, columns with type timedelta64 will be written as integer
values as nanoseconds to the database and a warning will be raised.
Note
Columns of category dtype will be converted to the dense representation
as you would get with np.asarray(categorical) (e.g. for string categories
this gives an array of strings).
Because of this, reading the database table back in does not generate
a categorical.
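As a hedged sketch, you can re-apply the dtype yourself after the round trip (the table name "grades" and the column are made up for illustration):
df_cat = pd.DataFrame({"grade": pd.Categorical(["a", "b", "a"])})
df_cat.to_sql("grades", engine, index=False)
result = pd.read_sql_table("grades", engine)
result["grade"] = result["grade"].astype("category")  # restore the categorical dtype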
Datetime data types#
Using SQLAlchemy, to_sql() is capable of writing
datetime data that is timezone naive or timezone aware. However, the resulting
data stored in the database ultimately depends on the supported data type
for datetime data of the database system being used.
The following table lists supported data types for datetime data for some
common databases. Other database dialects may have different data types for
datetime data.
Database    SQL Datetime Types                     Timezone Support
SQLite      TEXT                                   No
MySQL       TIMESTAMP or DATETIME                  No
PostgreSQL  TIMESTAMP or TIMESTAMP WITH TIME ZONE  Yes
When writing timezone aware data to databases that do not support timezones,
the data will be written as timezone naive timestamps that are in local time
with respect to the timezone.
read_sql_table() is also capable of reading datetime data that is
timezone aware or naive. When reading TIMESTAMP WITH TIME ZONE types, pandas
will convert the data to UTC.
Insertion method#
The parameter method controls the SQL insertion clause used.
Possible values are:
None: Uses standard SQL INSERT clause (one per row).
'multi': Pass multiple values in a single INSERT clause.
It uses a special SQL syntax not supported by all backends.
This usually provides better performance for analytic databases
like Presto and Redshift, but has worse performance for
traditional SQL backends if the table contains many columns.
For more information check the SQLAlchemy documentation.
callable with signature (pd_table, conn, keys, data_iter):
This can be used to implement a more performant insertion method based on
specific backend dialect features.
Example of a callable using PostgreSQL COPY clause:
# Alternative to_sql() *method* for DBs that support COPY FROM
import csv
from io import StringIO
def psql_insert_copy(table, conn, keys, data_iter):
"""
Execute SQL statement inserting data
Parameters
----------
table : pandas.io.sql.SQLTable
conn : sqlalchemy.engine.Engine or sqlalchemy.engine.Connection
keys : list of str
Column names
data_iter : Iterable that iterates the values to be inserted
"""
# gets a DBAPI connection that can provide a cursor
dbapi_conn = conn.connection
with dbapi_conn.cursor() as cur:
s_buf = StringIO()
writer = csv.writer(s_buf)
writer.writerows(data_iter)
s_buf.seek(0)
columns = ', '.join(['"{}"'.format(k) for k in keys])
if table.schema:
table_name = '{}.{}'.format(table.schema, table.name)
else:
table_name = table.name
sql = 'COPY {} ({}) FROM STDIN WITH CSV'.format(
table_name, columns)
cur.copy_expert(sql=sql, file=s_buf)
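A usage sketch for the callable above, assuming a PostgreSQL engine is available (the table name "data_copy" is hypothetical; the sqlite engine from the earlier examples would not work here because COPY is PostgreSQL-specific):
data.to_sql("data_copy", engine, method=psql_insert_copy, index=False)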
Reading tables#
read_sql_table() will read a database table given the
table name and optionally a subset of columns to read.
Note
In order to use read_sql_table(), you must have the
SQLAlchemy optional dependency installed.
In [631]: pd.read_sql_table("data", engine)
Out[631]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 X 27.50 True
1 1 42 2010-10-19 Y -12.50 False
2 2 63 2010-10-20 Z 5.73 True
Note
Note that pandas infers column dtypes from query outputs, and not by looking
up data types in the physical database schema. For example, assume userid
is an integer column in a table. Then, intuitively, select userid ... will
return integer-valued series, while select cast(userid as text) ... will
return object-valued (str) series. Accordingly, if the query output is empty,
then all resulting columns will be returned as object-valued (since they are
most general). If you foresee that your query will sometimes generate an empty
result, you may want to explicitly typecast afterwards to ensure dtype
integrity.
You can also specify the name of the column as the DataFrame index,
and specify a subset of columns to be read.
In [632]: pd.read_sql_table("data", engine, index_col="id")
Out[632]:
index Date Col_1 Col_2 Col_3
id
26 0 2010-10-18 X 27.50 True
42 1 2010-10-19 Y -12.50 False
63 2 2010-10-20 Z 5.73 True
In [633]: pd.read_sql_table("data", engine, columns=["Col_1", "Col_2"])
Out[633]:
Col_1 Col_2
0 X 27.50
1 Y -12.50
2 Z 5.73
And you can explicitly force columns to be parsed as dates:
In [634]: pd.read_sql_table("data", engine, parse_dates=["Date"])
Out[634]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 X 27.50 True
1 1 42 2010-10-19 Y -12.50 False
2 2 63 2010-10-20 Z 5.73 True
If needed you can explicitly specify a format string, or a dict of arguments
to pass to pandas.to_datetime():
pd.read_sql_table("data", engine, parse_dates={"Date": "%Y-%m-%d"})
pd.read_sql_table(
"data",
engine,
parse_dates={"Date": {"format": "%Y-%m-%d %H:%M:%S"}},
)
You can check if a table exists using has_table().
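For example, one way to call it is through the pandas.io.sql namespace (a sketch, assuming the "data" table written above):
from pandas.io import sql
sql.has_table("data", engine)  # True if the table exists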
Schema support#
Reading from and writing to different schemas is supported through the schema
keyword in the read_sql_table() and to_sql()
functions. Note however that this depends on the database flavor (sqlite does not
have schemas). For example:
df.to_sql("table", engine, schema="other_schema")
pd.read_sql_table("table", engine, schema="other_schema")
Querying#
You can query using raw SQL in the read_sql_query() function.
In this case you must use the SQL variant appropriate for your database.
When using SQLAlchemy, you can also pass SQLAlchemy Expression language constructs,
which are database-agnostic.
In [635]: pd.read_sql_query("SELECT * FROM data", engine)
Out[635]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 00:00:00.000000 X 27.50 1
1 1 42 2010-10-19 00:00:00.000000 Y -12.50 0
2 2 63 2010-10-20 00:00:00.000000 Z 5.73 1
Of course, you can specify a more “complex” query.
In [636]: pd.read_sql_query("SELECT id, Col_1, Col_2 FROM data WHERE id = 42;", engine)
Out[636]:
id Col_1 Col_2
0 42 Y -12.5
The read_sql_query() function supports a chunksize argument.
Specifying this will return an iterator through chunks of the query result:
In [637]: df = pd.DataFrame(np.random.randn(20, 3), columns=list("abc"))
In [638]: df.to_sql("data_chunks", engine, index=False)
Out[638]: 20
In [639]: for chunk in pd.read_sql_query("SELECT * FROM data_chunks", engine, chunksize=5):
.....: print(chunk)
.....:
a b c
0 0.070470 0.901320 0.937577
1 0.295770 1.420548 -0.005283
2 -1.518598 -0.730065 0.226497
3 -2.061465 0.632115 0.853619
4 2.719155 0.139018 0.214557
a b c
0 -1.538924 -0.366973 -0.748801
1 -0.478137 -1.559153 -3.097759
2 -2.320335 -0.221090 0.119763
3 0.608228 1.064810 -0.780506
4 -2.736887 0.143539 1.170191
a b c
0 -1.573076 0.075792 -1.722223
1 -0.774650 0.803627 0.221665
2 0.584637 0.147264 1.057825
3 -0.284136 0.912395 1.552808
4 0.189376 -0.109830 0.539341
a b c
0 0.592591 -0.155407 -1.356475
1 0.833837 1.524249 1.606722
2 -0.029487 -0.051359 1.700152
3 0.921484 -0.926347 0.979818
4 0.182380 -0.186376 0.049820
You can also run a plain query without creating a DataFrame with
execute(). This is useful for queries that don’t return values,
such as INSERT. This is functionally equivalent to calling execute on the
SQLAlchemy engine or db connection object. Again, you must use the SQL syntax
variant appropriate for your database.
from pandas.io import sql
sql.execute("SELECT * FROM table_name", engine)
sql.execute(
"INSERT INTO table_name VALUES(?, ?, ?)", engine, params=[("id", 1, 12.2, True)]
)
Engine connection examples#
To connect with SQLAlchemy you use the create_engine() function to create an engine
object from database URI. You only need to create the engine once per database you are
connecting to.
from sqlalchemy import create_engine
engine = create_engine("postgresql://scott:[email protected]:5432/mydatabase")
engine = create_engine("mysql+mysqldb://scott:[email protected]/foo")
engine = create_engine("oracle://scott:[email protected]:1521/sidname")
engine = create_engine("mssql+pyodbc://mydsn")
# sqlite://<nohostname>/<path>
# where <path> is relative:
engine = create_engine("sqlite:///foo.db")
# or absolute, starting with a slash:
engine = create_engine("sqlite:////absolute/path/to/foo.db")
For more information see the examples the SQLAlchemy documentation
Advanced SQLAlchemy queries#
You can use SQLAlchemy constructs to describe your query.
Use sqlalchemy.text() to specify query parameters in a backend-neutral way
In [640]: import sqlalchemy as sa
In [641]: pd.read_sql(
.....: sa.text("SELECT * FROM data where Col_1=:col1"), engine, params={"col1": "X"}
.....: )
.....:
Out[641]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 00:00:00.000000 X 27.5 1
If you have an SQLAlchemy description of your database you can express where conditions using SQLAlchemy expressions
In [642]: metadata = sa.MetaData()
In [643]: data_table = sa.Table(
.....: "data",
.....: metadata,
.....: sa.Column("index", sa.Integer),
.....: sa.Column("Date", sa.DateTime),
.....: sa.Column("Col_1", sa.String),
.....: sa.Column("Col_2", sa.Float),
.....: sa.Column("Col_3", sa.Boolean),
.....: )
.....:
In [644]: pd.read_sql(sa.select([data_table]).where(data_table.c.Col_3 is True), engine)
Out[644]:
Empty DataFrame
Columns: [index, Date, Col_1, Col_2, Col_3]
Index: []
You can combine SQLAlchemy expressions with parameters passed to read_sql() using sqlalchemy.bindparam()
In [645]: import datetime as dt
In [646]: expr = sa.select([data_table]).where(data_table.c.Date > sa.bindparam("date"))
In [647]: pd.read_sql(expr, engine, params={"date": dt.datetime(2010, 10, 18)})
Out[647]:
index Date Col_1 Col_2 Col_3
0 1 2010-10-19 Y -12.50 False
1 2 2010-10-20 Z 5.73 True
Sqlite fallback#
The use of sqlite is supported without using SQLAlchemy.
This mode requires a Python database adapter which respect the Python
DB-API.
You can create connections like so:
import sqlite3
con = sqlite3.connect(":memory:")
And then issue the following queries:
data.to_sql("data", con)
pd.read_sql_query("SELECT * FROM data", con)
Google BigQuery#
Warning
Starting in 0.20.0, pandas has split off Google BigQuery support into the
separate package pandas-gbq. You can pip install pandas-gbq to get it.
The pandas-gbq package provides functionality to read/write from Google BigQuery.
pandas integrates with this external package. if pandas-gbq is installed, you can
use the pandas methods pd.read_gbq and DataFrame.to_gbq, which will call the
respective functions from pandas-gbq.
Full documentation can be found here.
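A hedged sketch of the two wrappers (the project and table names are placeholders, and pandas-gbq must be installed and authenticated):
df_gbq = pd.read_gbq("SELECT * FROM my_dataset.my_table", project_id="my-project")
df_gbq.to_gbq("my_dataset.my_table_copy", project_id="my-project", if_exists="replace")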
Stata format#
Writing to stata format#
The method to_stata() will write a DataFrame
into a .dta file. The format version of this file is always 115 (Stata 12).
In [648]: df = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
In [649]: df.to_stata("stata.dta")
Stata data files have limited data type support; only strings with
244 or fewer characters, int8, int16, int32, float32
and float64 can be stored in .dta files. Additionally,
Stata reserves certain values to represent missing data. Exporting a
non-missing value that is outside of the permitted range in Stata for
a particular data type will retype the variable to the next larger
size. For example, int8 values are restricted to lie between -127
and 100 in Stata, and so variables with values above 100 will trigger
a conversion to int16. nan values in floating points data
types are stored as the basic missing data type (. in Stata).
Note
It is not possible to export missing data values for integer data types.
The Stata writer gracefully handles other data types including int64,
bool, uint8, uint16, uint32 by casting to
the smallest supported type that can represent the data. For example, data
with a type of uint8 will be cast to int8 if all values are less than
100 (the upper bound for non-missing int8 data in Stata), or, if values are
outside of this range, the variable is cast to int16.
Warning
Conversion from int64 to float64 may result in a loss of precision
if int64 values are larger than 2**53.
Warning
StataWriter and
to_stata() only support fixed width
strings containing up to 244 characters, a limitation imposed by the version
115 dta file format. Attempting to write Stata dta files with strings
longer than 244 characters raises a ValueError.
Reading from Stata format#
The top-level function read_stata will read a dta file and return
either a DataFrame or a StataReader that can
be used to read the file incrementally.
In [650]: pd.read_stata("stata.dta")
Out[650]:
index A B
0 0 -1.690072 0.405144
1 1 -1.511309 -1.531396
2 2 0.572698 -1.106845
3 3 -1.185859 0.174564
4 4 0.603797 -1.796129
5 5 -0.791679 1.173795
6 6 -0.277710 1.859988
7 7 -0.258413 1.251808
8 8 1.443262 0.441553
9 9 1.168163 -2.054946
Specifying a chunksize yields a
StataReader instance that can be used to
read chunksize lines from the file at a time. The StataReader
object can be used as an iterator.
In [651]: with pd.read_stata("stata.dta", chunksize=3) as reader:
.....: for df in reader:
.....: print(df.shape)
.....:
(3, 3)
(3, 3)
(3, 3)
(1, 3)
For more fine-grained control, use iterator=True and specify
chunksize with each call to
read().
In [652]: with pd.read_stata("stata.dta", iterator=True) as reader:
.....: chunk1 = reader.read(5)
.....: chunk2 = reader.read(5)
.....:
Currently the index is retrieved as a column.
The parameter convert_categoricals indicates whether value labels should be
read and used to create a Categorical variable from them. Value labels can
also be retrieved by the function value_labels, which requires read()
to be called before use.
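For example, a small sketch of retrieving value labels after an initial read, using the stata.dta file written above (which has no value labels, so the dict is empty):
with pd.read_stata("stata.dta", iterator=True) as reader:
    values = reader.read()
    labels = reader.value_labels()  # {} here, since no value labels were written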
The parameter convert_missing indicates whether missing value
representations in Stata should be preserved. If False (the default),
missing values are represented as np.nan. If True, missing values are
represented using StataMissingValue objects, and columns containing missing
values will have object data type.
Note
read_stata() and
StataReader support .dta formats 113-115
(Stata 10-12), 117 (Stata 13), and 118 (Stata 14).
Note
Setting preserve_dtypes=False will upcast to the standard pandas data types:
int64 for all integer types and float64 for floating point data. By default,
the Stata data types are preserved when importing.
Categorical data#
Categorical data can be exported to Stata data files as value labeled data.
The exported data consists of the underlying category codes as integer data values
and the categories as value labels. Stata does not have an explicit equivalent
to a Categorical and information about whether the variable is ordered
is lost when exporting.
Warning
Stata only supports string value labels, and so str is called on the
categories when exporting data. Exporting Categorical variables with
non-string categories produces a warning, and can result in a loss of
information if the str representations of the categories are not unique.
Labeled data can similarly be imported from Stata data files as Categorical
variables using the keyword argument convert_categoricals (True by default).
The keyword argument order_categoricals (True by default) determines
whether imported Categorical variables are ordered.
Note
When importing categorical data, the values of the variables in the Stata
data file are not preserved since Categorical variables always
use integer data types between -1 and n-1 where n is the number
of categories. If the original values in the Stata data file are required,
these can be imported by setting convert_categoricals=False, which will
import original data (but not the variable labels). The original values can
be matched to the imported categorical data since there is a simple mapping
between the original Stata data values and the category codes of imported
Categorical variables: missing values are assigned code -1, and the
smallest original value is assigned 0, the second smallest is assigned
1 and so on until the largest original value is assigned the code n-1.
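As a hedged sketch of that mapping (the file "labeled.dta" and the column "grade" are hypothetical):
raw = pd.read_stata("labeled.dta", convert_categoricals=False)
labeled = pd.read_stata("labeled.dta")
# labeled["grade"].cat.codes follows the mapping above: missing -> -1,
# smallest original value -> 0, and so on
pd.DataFrame({"original": raw["grade"], "code": labeled["grade"].cat.codes})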
Note
Stata supports partially labeled series. These series have value labels for
some but not all data values. Importing a partially labeled series will produce
a Categorical with string categories for the values that are labeled and
numeric categories for values with no label.
SAS formats#
The top-level function read_sas() can read (but not write) SAS
XPORT (.xpt) and (since v0.18.0) SAS7BDAT (.sas7bdat) format files.
SAS files only contain two value types: ASCII text and floating point
values (usually 8 bytes but sometimes truncated). For xport files,
there is no automatic type conversion to integers, dates, or
categoricals. For SAS7BDAT files, the format codes may allow date
variables to be automatically converted to dates. By default the
whole file is read and returned as a DataFrame.
Specify a chunksize or use iterator=True to obtain reader
objects (XportReader or SAS7BDATReader) for incrementally
reading the file. The reader objects also have attributes that
contain additional information about the file and its variables.
Read a SAS7BDAT file:
df = pd.read_sas("sas_data.sas7bdat")
Obtain an iterator and read an XPORT file 100,000 lines at a time:
def do_something(chunk):
pass
with pd.read_sas("sas_xport.xpt", chunksize=100000) as rdr:
for chunk in rdr:
do_something(chunk)
The specification for the xport file format is available from the SAS
web site.
No official documentation is available for the SAS7BDAT format.
SPSS formats#
New in version 0.25.0.
The top-level function read_spss() can read (but not write) SPSS
SAV (.sav) and ZSAV (.zsav) format files.
SPSS files contain column names. By default the
whole file is read, categorical columns are converted into pd.Categorical,
and a DataFrame with all columns is returned.
Specify the usecols parameter to obtain a subset of columns. Specify convert_categoricals=False
to avoid converting categorical columns into pd.Categorical.
Read an SPSS file:
df = pd.read_spss("spss_data.sav")
Extract a subset of columns contained in usecols from an SPSS file and
avoid converting categorical columns into pd.Categorical:
df = pd.read_spss(
"spss_data.sav",
usecols=["foo", "bar"],
convert_categoricals=False,
)
More information about the SAV and ZSAV file formats is available here.
Other file formats#
pandas itself only supports IO with a limited set of file formats that map
cleanly to its tabular data model. For reading and writing other file formats
into and from pandas, we recommend these packages from the broader community.
netCDF#
xarray provides data structures inspired by the pandas DataFrame for working
with multi-dimensional datasets, with a focus on the netCDF file format and
easy conversion to and from pandas.
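A hedged sketch of a round trip through xarray (the netCDF file name is a placeholder):
import xarray as xr
ds = xr.open_dataset("example.nc")
df = ds.to_dataframe()                   # DataFrame indexed by the dataset dimensions
ds_back = xr.Dataset.from_dataframe(df)  # and back to an xarray Dataset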
Performance considerations#
This is an informal comparison of various IO methods, using pandas
0.24.2. Timings are machine dependent and small differences should be
ignored.
In [1]: sz = 1000000
In [2]: df = pd.DataFrame({'A': np.random.randn(sz), 'B': [1] * sz})
In [3]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 2 columns):
A 1000000 non-null float64
B 1000000 non-null int64
dtypes: float64(1), int64(1)
memory usage: 15.3 MB
The following test functions will be used below to compare the performance of several IO methods:
import os
import sqlite3
import numpy as np
import pandas as pd
sz = 1000000
np.random.seed(42)
df = pd.DataFrame({"A": np.random.randn(sz), "B": [1] * sz})
def test_sql_write(df):
if os.path.exists("test.sql"):
os.remove("test.sql")
sql_db = sqlite3.connect("test.sql")
df.to_sql(name="test_table", con=sql_db)
sql_db.close()
def test_sql_read():
sql_db = sqlite3.connect("test.sql")
pd.read_sql_query("select * from test_table", sql_db)
sql_db.close()
def test_hdf_fixed_write(df):
df.to_hdf("test_fixed.hdf", "test", mode="w")
def test_hdf_fixed_read():
pd.read_hdf("test_fixed.hdf", "test")
def test_hdf_fixed_write_compress(df):
df.to_hdf("test_fixed_compress.hdf", "test", mode="w", complib="blosc")
def test_hdf_fixed_read_compress():
pd.read_hdf("test_fixed_compress.hdf", "test")
def test_hdf_table_write(df):
df.to_hdf("test_table.hdf", "test", mode="w", format="table")
def test_hdf_table_read():
pd.read_hdf("test_table.hdf", "test")
def test_hdf_table_write_compress(df):
df.to_hdf(
"test_table_compress.hdf", "test", mode="w", complib="blosc", format="table"
)
def test_hdf_table_read_compress():
pd.read_hdf("test_table_compress.hdf", "test")
def test_csv_write(df):
df.to_csv("test.csv", mode="w")
def test_csv_read():
pd.read_csv("test.csv", index_col=0)
def test_feather_write(df):
df.to_feather("test.feather")
def test_feather_read():
pd.read_feather("test.feather")
def test_pickle_write(df):
df.to_pickle("test.pkl")
def test_pickle_read():
pd.read_pickle("test.pkl")
def test_pickle_write_compress(df):
df.to_pickle("test.pkl.compress", compression="xz")
def test_pickle_read_compress():
pd.read_pickle("test.pkl.compress", compression="xz")
def test_parquet_write(df):
df.to_parquet("test.parquet")
def test_parquet_read():
pd.read_parquet("test.parquet")
When writing, the top three functions in terms of speed are test_feather_write, test_hdf_fixed_write and test_hdf_fixed_write_compress.
In [4]: %timeit test_sql_write(df)
3.29 s ± 43.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [5]: %timeit test_hdf_fixed_write(df)
19.4 ms ± 560 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [6]: %timeit test_hdf_fixed_write_compress(df)
19.6 ms ± 308 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [7]: %timeit test_hdf_table_write(df)
449 ms ± 5.61 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [8]: %timeit test_hdf_table_write_compress(df)
448 ms ± 11.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [9]: %timeit test_csv_write(df)
3.66 s ± 26.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [10]: %timeit test_feather_write(df)
9.75 ms ± 117 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [11]: %timeit test_pickle_write(df)
30.1 ms ± 229 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [12]: %timeit test_pickle_write_compress(df)
4.29 s ± 15.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [13]: %timeit test_parquet_write(df)
67.6 ms ± 706 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
When reading, the top three functions in terms of speed are test_feather_read, test_pickle_read and
test_hdf_fixed_read.
In [14]: %timeit test_sql_read()
1.77 s ± 17.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [15]: %timeit test_hdf_fixed_read()
19.4 ms ± 436 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [16]: %timeit test_hdf_fixed_read_compress()
19.5 ms ± 222 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [17]: %timeit test_hdf_table_read()
38.6 ms ± 857 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [18]: %timeit test_hdf_table_read_compress()
38.8 ms ± 1.49 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [19]: %timeit test_csv_read()
452 ms ± 9.04 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [20]: %timeit test_feather_read()
12.4 ms ± 99.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [21]: %timeit test_pickle_read()
18.4 ms ± 191 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [22]: %timeit test_pickle_read_compress()
915 ms ± 7.48 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [23]: %timeit test_parquet_read()
24.4 ms ± 146 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
The files test.pkl.compress, test.parquet and test.feather took the least space on disk (in bytes).
29519500 Oct 10 06:45 test.csv
16000248 Oct 10 06:45 test.feather
8281983 Oct 10 06:49 test.parquet
16000857 Oct 10 06:47 test.pkl
7552144 Oct 10 06:48 test.pkl.compress
34816000 Oct 10 06:42 test.sql
24009288 Oct 10 06:43 test_fixed.hdf
24009288 Oct 10 06:43 test_fixed_compress.hdf
24458940 Oct 10 06:44 test_table.hdf
24458940 Oct 10 06:44 test_table_compress.hdf
| 933
| 1,110
|
Pandas can't read in excel file
Something is wrong with my pandas module. I tried to read in an excel file using the following code, which works on my classmate's computer, but it's giving me an error on my computer:
FFT1=pd.read_excel('FFT1.xlsx', sheet_name='sheet1')
The file named 'FFT1.xlsx' is in the same directory as my jupyter notebook. The error message says:
XLRDError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_7436/2793485739.py in <module>
----> 1 FFT1=pd.read_excel('FFT1.xlsx', sheet_name='sheet1')
D:\Softwares\Anaconda\lib\site-packages\pandas\io\excel\_base.py in read_excel(io, sheet_name, header, names, index_col, usecols, squeeze, dtype, engine, converters, true_values, false_values, skiprows, nrows, na_values, keep_default_na, verbose, parse_dates, date_parser, thousands, comment, skipfooter, convert_float, mangle_dupe_cols, **kwds)
302
303 if not isinstance(io, ExcelFile):
--> 304 io = ExcelFile(io, engine=engine)
305 elif engine and engine != io.engine:
306 raise ValueError(
D:\Softwares\Anaconda\lib\site-packages\pandas\io\excel\_base.py in __init__(self, io, engine)
819 self._io = stringify_path(io)
820
--> 821 self._reader = self._engines[engine](self._io)
822
823 def __fspath__(self):
D:\Softwares\Anaconda\lib\site-packages\pandas\io\excel\_xlrd.py in __init__(self, filepath_or_buffer)
19 err_msg = "Install xlrd >= 1.0.0 for Excel support"
20 import_optional_dependency("xlrd", extra=err_msg)
---> 21 super().__init__(filepath_or_buffer)
22
23 @property
D:\Softwares\Anaconda\lib\site-packages\pandas\io\excel\_base.py in __init__(self, filepath_or_buffer)
351 self.book = self.load_workbook(filepath_or_buffer)
352 elif isinstance(filepath_or_buffer, str):
--> 353 self.book = self.load_workbook(filepath_or_buffer)
354 elif isinstance(filepath_or_buffer, bytes):
355 self.book = self.load_workbook(BytesIO(filepath_or_buffer))
D:\Softwares\Anaconda\lib\site-packages\pandas\io\excel\_xlrd.py in load_workbook(self, filepath_or_buffer)
34 return open_workbook(file_contents=data)
35 else:
---> 36 return open_workbook(filepath_or_buffer)
37
38 @property
D:\Softwares\Anaconda\lib\site-packages\xlrd\__init__.py in open_workbook(filename, logfile, verbosity, use_mmap, file_contents, encoding_override, formatting_info, on_demand, ragged_rows, ignore_workbook_corruption)
168 # files that xlrd can parse don't start with the expected signature.
169 if file_format and file_format != 'xls':
--> 170 raise XLRDError(FILE_FORMAT_DESCRIPTIONS[file_format]+'; not supported')
171
172 bk = open_workbook_xls(
XLRDError: Excel xlsx file; not supported
How should I fix this?
|
68,678,603
|
How to drop the columns by using pandas.Series.str.contains
|
<p>DataFrame like this:</p>
<pre><code>import pandas
df = pandas.DataFrame({'id':[1,2,3,4,5,6],'name':['test1','test2','test','D','E','F'],'sex':['man','woman','woman','man','woman','man']},index=['a','b','c','d','e','f'])
print(df)
print('*'*100)
</code></pre>
<p>I can drop the rows by index label:</p>
<pre><code>df.drop(df[df.name.str.contains('test')|df.sex.str.contains('woman')].index,inplace=True)
print(df)
</code></pre>
<p>How can I find the column labels which contain 'test' or 'woman' and remove those columns?</p>
| 68,678,853
| 2021-08-06T08:35:31.977000
| 1
| null | 0
| 101
|
python|pandas
|
<p>use a <code>bitwise</code> ampersand <code>&</code> for an AND condition and just re-assign the dataframe.</p>
<p>you can invert conditions with <code>~</code></p>
<p>it's recommended not to use <code>inplace</code> anymore see <a href="https://stackoverflow.com/questions/43893457/understanding-inplace-true">this post</a></p>
<pre><code>df1 = df[~(df['name'].str.contains('test')
) & ~(df['sex'].str.contains('woman'))]
print(df1)
id name sex
d 4 D man
f 6 F man
</code></pre>
| 2021-08-06T08:55:18.807000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.str.contains.html
|
pandas.Series.str.contains#
pandas.Series.str.contains#
Series.str.contains(pat, case=True, flags=0, na=None, regex=True)[source]#
Test if pattern or regex is contained within a string of a Series or Index.
Return boolean Series or Index based on whether a given pattern or regex is
use a bitwise ampersand & for an AND condition and just re-assign the dataframe.
you can invert conditions with ~
it's recommended not to use inplace anymore see this post
df1 = df[~(df['name'].str.contains('test')
) & ~(df['sex'].str.contains('woman'))]
print(df1)
id name sex
d 4 D man
f 6 F man
contained within a string of a Series or Index.
Parameters
patstrCharacter sequence or regular expression.
casebool, default TrueIf True, case sensitive.
flagsint, default 0 (no flags)Flags to pass through to the re module, e.g. re.IGNORECASE.
nascalar, optionalFill value for missing values. The default depends on dtype of the
array. For object-dtype, numpy.nan is used. For StringDtype,
pandas.NA is used.
regexbool, default TrueIf True, assumes the pat is a regular expression.
If False, treats the pat as a literal string.
Returns
Series or Index of boolean valuesA Series or Index of boolean values indicating whether the
given pattern is contained within the string of each element
of the Series or Index.
See also
matchAnalogous, but stricter, relying on re.match instead of re.search.
Series.str.startswithTest if the start of each string element matches a pattern.
Series.str.endswithSame as startswith, but tests the end of string.
Examples
Returning a Series of booleans using only a literal pattern.
>>> s1 = pd.Series(['Mouse', 'dog', 'house and parrot', '23', np.NaN])
>>> s1.str.contains('og', regex=False)
0 False
1 True
2 False
3 False
4 NaN
dtype: object
Returning an Index of booleans using only a literal pattern.
>>> ind = pd.Index(['Mouse', 'dog', 'house and parrot', '23.0', np.NaN])
>>> ind.str.contains('23', regex=False)
Index([False, False, False, True, nan], dtype='object')
Specifying case sensitivity using case.
>>> s1.str.contains('oG', case=True, regex=True)
0 False
1 False
2 False
3 False
4 NaN
dtype: object
Specifying na to be False instead of NaN replaces NaN values
with False. If Series or Index does not contain NaN values
the resultant dtype will be bool, otherwise, an object dtype.
>>> s1.str.contains('og', na=False, regex=True)
0 False
1 True
2 False
3 False
4 False
dtype: bool
Returning ‘house’ or ‘dog’ when either expression occurs in a string.
>>> s1.str.contains('house|dog', regex=True)
0 False
1 True
2 True
3 False
4 NaN
dtype: object
Ignoring case sensitivity using flags with regex.
>>> import re
>>> s1.str.contains('PARROT', flags=re.IGNORECASE, regex=True)
0 False
1 False
2 True
3 False
4 NaN
dtype: object
Returning any digit using regular expression.
>>> s1.str.contains('\\d', regex=True)
0 False
1 False
2 False
3 True
4 NaN
dtype: object
Ensure pat is a not a literal pattern when regex is set to True.
Note in the following example one might expect only s2[1] and s2[3] to
return True. However, ‘.0’ as a regex matches any character
followed by a 0.
>>> s2 = pd.Series(['40', '40.0', '41', '41.0', '35'])
>>> s2.str.contains('.0', regex=True)
0 True
1 True
2 False
3 True
4 False
dtype: bool
| 287
| 629
|
How to drop the columns by using pandas.Series.str.contains
DataFrame like this:
import pandas
df = pandas.DataFrame({'id':[1,2,3,4,5,6],'name':['test1','test2','test','D','E','F'],'sex':['man','woman','woman','man','woman','man']},index=['a','b','c','d','e','f'])
print(df)
print('*'*100)
I can drop the rows by index label:
df.drop(df[df.name.str.contains('test')|df.sex.str.contains('woman')].index,inplace=True)
print(df)
How can i find out the columns label which contains 'test' or 'woman' and remove the columns
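A minimal sketch (not part of the original post) of how one might drop columns whose labels contain a pattern, applying str.contains to df.columns; the column names below are made up for illustration:
import pandas as pd
# Hypothetical frame whose column labels (not values) contain the patterns
df = pd.DataFrame({'id': [1, 2], 'test_score': [10, 20], 'woman_count': [3, 4]})
# Index.str.contains returns a boolean array over the column labels
mask = df.columns.str.contains('test|woman')
df_kept = df.loc[:, ~mask]   # keep only the non-matching columns
print(df_kept)               # only 'id' remains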
|
64,480,022
|
how to extract rows of the date of the last row in a dataframe and if date is not present then pick the previous date?
|
<p>I have a dataframe with dates and some other columns, and for every month present in that dataframe I want to pick the date matching the day-of-month of the dataframe's last row; if that date is not present, pick the previous date.</p>
<pre><code>eg.
Date Month Year
0 2018-03-21 3 2018
1 2018-03-22 3 2018
2 2018-03-25 3 2018
3 2018-03-26 3 2018
4 2018-03-27 3 2018
...
77 2020-05-12 5 2020
78 2020-05-13 5 2020
</code></pre>
<p>So I want to extract every 13th between these dates. If the 13th is missing (say Saturdays and Sundays are excluded, so there is no data point for those two days), then we check: if the 13th falls on a Sunday we pick the Friday, i.e. the 11th, and if it falls on a Saturday we pick the 12th. I want all such dates in a separate dataframe.</p>
<p>I have got this much by doing this</p>
<pre><code>df[df['Date'][i].day==df['Date'].iloc[-1].day] # i is the looping variable to get the indices
</code></pre>
<p>but it prints only the rows whose day-of-month matches the last one; some months can be left out, and for those I want to extract the date just prior to that day.</p>
<p>Thanks!</p>
| 64,481,932
| 2020-10-22T10:09:13.957000
| 1
| null | 0
| 101
|
python|pandas
|
<p>You can build filters that have the behavior you want by extracting the day and the weekday (encoded as Monday=0 to Sunday=6) from your date.</p>
<pre><code>business_day = (df["Date"].dt.day == 13) & (df["Date"].dt.weekday < 5)
if_saturday_use_friday = (df["Date"].dt.day == 12) & (df["Date"].dt.weekday == 4)
if_sunday_use_friday = (df["Date"].dt.day == 11) & (df["Date"].dt.weekday == 4)
</code></pre>
<p>Now you have to link the filters with a logical OR using the <code>|</code> operator and apply the filter.</p>
<pre><code>df[business_day | if_saturday_use_friday | if_sunday_use_friday]
</code></pre>
| 2020-10-22T12:10:12.513000
| 0
|
https://pandas.pydata.org/docs/dev/user_guide/merging.html
|
Merge, join, concatenate and compare#
Merge, join, concatenate and compare#
pandas provides various facilities for easily combining together Series or
DataFrame with various kinds of set logic for the indexes
and relational algebra functionality in the case of join / merge-type
operations.
In addition, pandas also provides utilities to compare two Series or DataFrame
and summarize their differences.
Concatenating objects#
The concat() function (in the main pandas namespace) does all of
You can build filters that have the behavior you want by extracting the day and the weekday (encoded as Monday=0 to Sunday=6) from your date.
business_day = (df["Date"].dt.day == 13) & (df["Date"].dt.weekday < 5)
if_saturday_use_friday = (df["Date"].dt.day == 12) & (df["Date"].dt.weekday == 4)
if_sunday_use_friday = (df["Date"].dt.day == 11) & (df["Date"].dt.weekday == 4)
Now you have to link the filters with a logical OR using the | operator and apply the filter.
df[business_day | if_saturday_use_friday | if_sunday_use_friday]
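A self-contained sketch of the same filters with made-up dates (not part of the original answer), just to show the selection end to end:
import pandas as pd
# Hypothetical dates: 2020-03-13 is a Friday, while 2020-06-13 falls on a Saturday
df = pd.DataFrame({"Date": pd.to_datetime(
    ["2020-03-12", "2020-03-13", "2020-06-11", "2020-06-12"])})
business_day = (df["Date"].dt.day == 13) & (df["Date"].dt.weekday < 5)
if_saturday_use_friday = (df["Date"].dt.day == 12) & (df["Date"].dt.weekday == 4)
if_sunday_use_friday = (df["Date"].dt.day == 11) & (df["Date"].dt.weekday == 4)
# keeps 2020-03-13 (a weekday 13th) and 2020-06-12 (Friday before a Saturday 13th)
print(df[business_day | if_saturday_use_friday | if_sunday_use_friday])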
the heavy lifting of performing concatenation operations along an axis while
performing optional set logic (union or intersection) of the indexes (if any) on
the other axes. Note that I say “if any” because there is only a single possible
axis of concatenation for Series.
Before diving into all of the details of concat and what it can do, here is
a simple example:
In [1]: df1 = pd.DataFrame(
...: {
...: "A": ["A0", "A1", "A2", "A3"],
...: "B": ["B0", "B1", "B2", "B3"],
...: "C": ["C0", "C1", "C2", "C3"],
...: "D": ["D0", "D1", "D2", "D3"],
...: },
...: index=[0, 1, 2, 3],
...: )
...:
In [2]: df2 = pd.DataFrame(
...: {
...: "A": ["A4", "A5", "A6", "A7"],
...: "B": ["B4", "B5", "B6", "B7"],
...: "C": ["C4", "C5", "C6", "C7"],
...: "D": ["D4", "D5", "D6", "D7"],
...: },
...: index=[4, 5, 6, 7],
...: )
...:
In [3]: df3 = pd.DataFrame(
...: {
...: "A": ["A8", "A9", "A10", "A11"],
...: "B": ["B8", "B9", "B10", "B11"],
...: "C": ["C8", "C9", "C10", "C11"],
...: "D": ["D8", "D9", "D10", "D11"],
...: },
...: index=[8, 9, 10, 11],
...: )
...:
In [4]: frames = [df1, df2, df3]
In [5]: result = pd.concat(frames)
Like its sibling function on ndarrays, numpy.concatenate, pandas.concat
takes a list or dict of homogeneously-typed objects and concatenates them with
some configurable handling of “what to do with the other axes”:
pd.concat(
objs,
axis=0,
join="outer",
ignore_index=False,
keys=None,
levels=None,
names=None,
verify_integrity=False,
copy=True,
)
objs : a sequence or mapping of Series or DataFrame objects. If a
dict is passed, the sorted keys will be used as the keys argument, unless
it is passed, in which case the values will be selected (see below). Any None
objects will be dropped silently unless they are all None in which case a
ValueError will be raised.
axis : {0, 1, …}, default 0. The axis to concatenate along.
join : {‘inner’, ‘outer’}, default ‘outer’. How to handle indexes on
other axis(es). Outer for union and inner for intersection.
ignore_index : boolean, default False. If True, do not use the index
values on the concatenation axis. The resulting axis will be labeled 0, …,
n - 1. This is useful if you are concatenating objects where the
concatenation axis does not have meaningful indexing information. Note
the index values on the other axes are still respected in the join.
keys : sequence, default None. Construct hierarchical index using the
passed keys as the outermost level. If multiple levels passed, should
contain tuples.
levels : list of sequences, default None. Specific levels (unique values)
to use for constructing a MultiIndex. Otherwise they will be inferred from the
keys.
names : list, default None. Names for the levels in the resulting
hierarchical index.
verify_integrity : boolean, default False. Check whether the new
concatenated axis contains duplicates. This can be very expensive relative
to the actual data concatenation.
copy : boolean, default True. If False, do not copy data unnecessarily.
Without a little bit of context many of these arguments don’t make much sense.
Let’s revisit the above example. Suppose we wanted to associate specific keys
with each of the pieces of the chopped up DataFrame. We can do this using the
keys argument:
In [6]: result = pd.concat(frames, keys=["x", "y", "z"])
As you can see (if you’ve read the rest of the documentation), the resulting
object’s index has a hierarchical index. This
means that we can now select out each chunk by key:
In [7]: result.loc["y"]
Out[7]:
A B C D
4 A4 B4 C4 D4
5 A5 B5 C5 D5
6 A6 B6 C6 D6
7 A7 B7 C7 D7
It’s not a stretch to see how this can be very useful. More detail on this
functionality below.
Note
It is worth noting that concat() makes a full copy of the data, and that constantly
reusing this function can create a significant performance hit. If you need
to use the operation over several datasets, use a list comprehension.
frames = [ process_your_file(f) for f in files ]
result = pd.concat(frames)
Note
When concatenating DataFrames with named axes, pandas will attempt to preserve
these index/column names whenever possible. In the case where all inputs share a
common name, this name will be assigned to the result. When the input names do
not all agree, the result will be unnamed. The same is true for MultiIndex,
but the logic is applied separately on a level-by-level basis.
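A tiny hedged sketch (not from the guide itself) of that name-preservation rule:
import pandas as pd
a = pd.DataFrame({"x": [1]}, index=pd.Index(["r0"], name="id"))
b = pd.DataFrame({"x": [2]}, index=pd.Index(["r1"], name="id"))
c = pd.DataFrame({"x": [3]}, index=pd.Index(["r2"], name="other"))
print(pd.concat([a, b]).index.name)  # 'id' -- all inputs share the name, so it is kept
print(pd.concat([a, c]).index.name)  # None -- the names disagree, so the result is unnamed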
Set logic on the other axes#
When gluing together multiple DataFrames, you have a choice of how to handle
the other axes (other than the one being concatenated). This can be done in
the following two ways:
Take the union of them all, join='outer'. This is the default
option as it results in zero information loss.
Take the intersection, join='inner'.
Here is an example of each of these methods. First, the default join='outer'
behavior:
In [8]: df4 = pd.DataFrame(
...: {
...: "B": ["B2", "B3", "B6", "B7"],
...: "D": ["D2", "D3", "D6", "D7"],
...: "F": ["F2", "F3", "F6", "F7"],
...: },
...: index=[2, 3, 6, 7],
...: )
...:
In [9]: result = pd.concat([df1, df4], axis=1)
Here is the same thing with join='inner':
In [10]: result = pd.concat([df1, df4], axis=1, join="inner")
Lastly, suppose we just wanted to reuse the exact index from the original
DataFrame:
In [11]: result = pd.concat([df1, df4], axis=1).reindex(df1.index)
Similarly, we could index before the concatenation:
In [12]: pd.concat([df1, df4.reindex(df1.index)], axis=1)
Out[12]:
A B C D B D F
0 A0 B0 C0 D0 NaN NaN NaN
1 A1 B1 C1 D1 NaN NaN NaN
2 A2 B2 C2 D2 B2 D2 F2
3 A3 B3 C3 D3 B3 D3 F3
Ignoring indexes on the concatenation axis#
For DataFrame objects which don’t have a meaningful index, you may wish
to append them and ignore the fact that they may have overlapping indexes. To
do this, use the ignore_index argument:
In [13]: result = pd.concat([df1, df4], ignore_index=True, sort=False)
Concatenating with mixed ndims#
You can concatenate a mix of Series and DataFrame objects. The
Series will be transformed to DataFrame with the column name as
the name of the Series.
In [14]: s1 = pd.Series(["X0", "X1", "X2", "X3"], name="X")
In [15]: result = pd.concat([df1, s1], axis=1)
Note
Since we’re concatenating a Series to a DataFrame, we could have
achieved the same result with DataFrame.assign(). To concatenate an
arbitrary number of pandas objects (DataFrame or Series), use
concat.
If unnamed Series are passed they will be numbered consecutively.
In [16]: s2 = pd.Series(["_0", "_1", "_2", "_3"])
In [17]: result = pd.concat([df1, s2, s2, s2], axis=1)
Passing ignore_index=True will drop all name references.
In [18]: result = pd.concat([df1, s1], axis=1, ignore_index=True)
More concatenating with group keys#
A fairly common use of the keys argument is to override the column names
when creating a new DataFrame based on existing Series.
Notice how the default behaviour consists of letting the resulting DataFrame
inherit the parent Series’ name, when it exists.
In [19]: s3 = pd.Series([0, 1, 2, 3], name="foo")
In [20]: s4 = pd.Series([0, 1, 2, 3])
In [21]: s5 = pd.Series([0, 1, 4, 5])
In [22]: pd.concat([s3, s4, s5], axis=1)
Out[22]:
foo 0 1
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Through the keys argument we can override the existing column names.
In [23]: pd.concat([s3, s4, s5], axis=1, keys=["red", "blue", "yellow"])
Out[23]:
red blue yellow
0 0 0 0
1 1 1 1
2 2 2 4
3 3 3 5
Let’s consider a variation of the very first example presented:
In [24]: result = pd.concat(frames, keys=["x", "y", "z"])
You can also pass a dict to concat in which case the dict keys will be used
for the keys argument (unless other keys are specified):
In [25]: pieces = {"x": df1, "y": df2, "z": df3}
In [26]: result = pd.concat(pieces)
In [27]: result = pd.concat(pieces, keys=["z", "y"])
The MultiIndex created has levels that are constructed from the passed keys and
the index of the DataFrame pieces:
In [28]: result.index.levels
Out[28]: FrozenList([['z', 'y'], [4, 5, 6, 7, 8, 9, 10, 11]])
If you wish to specify other levels (as will occasionally be the case), you can
do so using the levels argument:
In [29]: result = pd.concat(
....: pieces, keys=["x", "y", "z"], levels=[["z", "y", "x", "w"]], names=["group_key"]
....: )
....:
In [30]: result.index.levels
Out[30]: FrozenList([['z', 'y', 'x', 'w'], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]])
This is fairly esoteric, but it is actually necessary for implementing things
like GroupBy where the order of a categorical variable is meaningful.
Appending rows to a DataFrame#
If you have a series that you want to append as a single row to a DataFrame, you can convert the row into a
DataFrame and use concat
In [31]: s2 = pd.Series(["X0", "X1", "X2", "X3"], index=["A", "B", "C", "D"])
In [32]: result = pd.concat([df1, s2.to_frame().T], ignore_index=True)
You should use ignore_index with this method to instruct DataFrame to
discard its index. If you wish to preserve the index, you should construct an
appropriately-indexed DataFrame and append or concatenate those objects.
Database-style DataFrame or named Series joining/merging#
pandas has full-featured, high performance in-memory join operations
idiomatically very similar to relational databases like SQL. These methods
perform significantly better (in some cases well over an order of magnitude
better) than other open source implementations (like base::merge.data.frame
in R). The reason for this is careful algorithmic design and the internal layout
of the data in DataFrame.
See the cookbook for some advanced strategies.
Users who are familiar with SQL but new to pandas might be interested in a
comparison with SQL.
pandas provides a single function, merge(), as the entry point for
all standard database join operations between DataFrame or named Series objects:
pd.merge(
left,
right,
how="inner",
on=None,
left_on=None,
right_on=None,
left_index=False,
right_index=False,
sort=True,
suffixes=("_x", "_y"),
copy=True,
indicator=False,
validate=None,
)
left: A DataFrame or named Series object.
right: Another DataFrame or named Series object.
on: Column or index level names to join on. Must be found in both the left
and right DataFrame and/or Series objects. If not passed and left_index and
right_index are False, the intersection of the columns in the
DataFrames and/or Series will be inferred to be the join keys.
left_on: Columns or index levels from the left DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
right_on: Columns or index levels from the right DataFrame or Series to use as
keys. Can either be column names, index level names, or arrays with length
equal to the length of the DataFrame or Series.
left_index: If True, use the index (row labels) from the left
DataFrame or Series as its join key(s). In the case of a DataFrame or Series with a MultiIndex
(hierarchical), the number of levels must match the number of join keys
from the right DataFrame or Series.
right_index: Same usage as left_index for the right DataFrame or Series
how: One of 'left', 'right', 'outer', 'inner', 'cross'. Defaults
to inner. See below for more detailed description of each method.
sort: Sort the result DataFrame by the join keys in lexicographical
order. Defaults to True, setting to False will improve performance
substantially in many cases.
suffixes: A tuple of string suffixes to apply to overlapping
columns. Defaults to ('_x', '_y').
copy: Always copy data (default True) from the passed DataFrame or named Series
objects, even when reindexing is not necessary. Cannot be avoided in many
cases but may improve performance / memory usage. The cases where copying
can be avoided are somewhat pathological but this option is provided
nonetheless.
indicator: Add a column to the output DataFrame called _merge
with information on the source of each row. _merge is Categorical-type
and takes on a value of left_only for observations whose merge key
only appears in 'left' DataFrame or Series, right_only for observations whose
merge key only appears in 'right' DataFrame or Series, and both if the
observation’s merge key is found in both.
validate : string, default None.
If specified, checks if merge is of specified type.
“one_to_one” or “1:1”: checks if merge keys are unique in both
left and right datasets.
“one_to_many” or “1:m”: checks if merge keys are unique in left
dataset.
“many_to_one” or “m:1”: checks if merge keys are unique in right
dataset.
“many_to_many” or “m:m”: allowed, but does not result in checks.
Note
Support for specifying index levels as the on, left_on, and
right_on parameters was added in version 0.23.0.
Support for merging named Series objects was added in version 0.24.0.
The return type will be the same as left. If left is a DataFrame or named Series
and right is a subclass of DataFrame, the return type will still be DataFrame.
merge is a function in the pandas namespace, and it is also available as a
DataFrame instance method merge(), with the calling
DataFrame being implicitly considered the left object in the join.
The related join() method, uses merge internally for the
index-on-index (by default) and column(s)-on-index join. If you are joining on
index only, you may wish to use DataFrame.join to save yourself some typing.
Brief primer on merge methods (relational algebra)#
Experienced users of relational databases like SQL will be familiar with the
terminology used to describe join operations between two SQL-table like
structures (DataFrame objects). There are several cases to consider which
are very important to understand:
one-to-one joins: for example when joining two DataFrame objects on
their indexes (which must contain unique values).
many-to-one joins: for example when joining an index (unique) to one or
more columns in a different DataFrame.
many-to-many joins: joining columns on columns.
Note
When joining columns on columns (potentially a many-to-many join), any
indexes on the passed DataFrame objects will be discarded.
It is worth spending some time understanding the result of the many-to-many
join case. In SQL / standard relational algebra, if a key combination appears
more than once in both tables, the resulting table will have the Cartesian
product of the associated data. Here is a very basic example with one unique
key combination:
In [33]: left = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [34]: right = pd.DataFrame(
....: {
....: "key": ["K0", "K1", "K2", "K3"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [35]: result = pd.merge(left, right, on="key")
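Because the guide renders its result frames as images, here is a hedged toy sketch (not from the guide) of the many-to-many Cartesian behavior described above, with visible output:
import pandas as pd
left = pd.DataFrame({"key": ["K0", "K0"], "A": ["A0", "A1"]})
right = pd.DataFrame({"key": ["K0", "K0"], "B": ["B0", "B1"]})
# Two matching rows on each side -> 2 x 2 = 4 rows for the key in the result
print(pd.merge(left, right, on="key"))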
Here is a more complicated example with multiple join keys. Only the keys
appearing in left and right are present (the intersection), since
how='inner' by default.
In [36]: left = pd.DataFrame(
....: {
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: }
....: )
....:
In [37]: right = pd.DataFrame(
....: {
....: "key1": ["K0", "K1", "K1", "K2"],
....: "key2": ["K0", "K0", "K0", "K0"],
....: "C": ["C0", "C1", "C2", "C3"],
....: "D": ["D0", "D1", "D2", "D3"],
....: }
....: )
....:
In [38]: result = pd.merge(left, right, on=["key1", "key2"])
The how argument to merge specifies how to determine which keys are to
be included in the resulting table. If a key combination does not appear in
either the left or right tables, the values in the joined table will be
NA. Here is a summary of the how options and their SQL equivalent names:
Merge method   SQL Join Name      Description
left           LEFT OUTER JOIN    Use keys from left frame only
right          RIGHT OUTER JOIN   Use keys from right frame only
outer          FULL OUTER JOIN    Use union of keys from both frames
inner          INNER JOIN         Use intersection of keys from both frames
cross          CROSS JOIN         Create the cartesian product of rows of both frames
In [39]: result = pd.merge(left, right, how="left", on=["key1", "key2"])
In [40]: result = pd.merge(left, right, how="right", on=["key1", "key2"])
In [41]: result = pd.merge(left, right, how="outer", on=["key1", "key2"])
In [42]: result = pd.merge(left, right, how="inner", on=["key1", "key2"])
In [43]: result = pd.merge(left, right, how="cross")
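To make the how options concrete, a small hedged sketch (toy frames, not from the guide) comparing inner and outer joins:
import pandas as pd
left = pd.DataFrame({"key": ["K0", "K1"], "A": ["A0", "A1"]})
right = pd.DataFrame({"key": ["K1", "K2"], "B": ["B1", "B2"]})
print(pd.merge(left, right, on="key", how="inner"))  # only K1, the shared key
print(pd.merge(left, right, on="key", how="outer"))  # K0, K1, K2 with NaN where a side is missing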
You can merge a multi-indexed Series and a DataFrame, if the names of
the MultiIndex correspond to the columns from the DataFrame. Transform
the Series to a DataFrame using Series.reset_index() before merging,
as shown in the following example.
In [44]: df = pd.DataFrame({"Let": ["A", "B", "C"], "Num": [1, 2, 3]})
In [45]: df
Out[45]:
Let Num
0 A 1
1 B 2
2 C 3
In [46]: ser = pd.Series(
....: ["a", "b", "c", "d", "e", "f"],
....: index=pd.MultiIndex.from_arrays(
....: [["A", "B", "C"] * 2, [1, 2, 3, 4, 5, 6]], names=["Let", "Num"]
....: ),
....: )
....:
In [47]: ser
Out[47]:
Let Num
A 1 a
B 2 b
C 3 c
A 4 d
B 5 e
C 6 f
dtype: object
In [48]: pd.merge(df, ser.reset_index(), on=["Let", "Num"])
Out[48]:
Let Num 0
0 A 1 a
1 B 2 b
2 C 3 c
Here is another example with duplicate join keys in DataFrames:
In [49]: left = pd.DataFrame({"A": [1, 2], "B": [2, 2]})
In [50]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [51]: result = pd.merge(left, right, on="B", how="outer")
Warning
Joining / merging on duplicate keys can cause a returned frame that is the multiplication of the row dimensions, which may result in memory overflow. It is the user’s responsibility to manage duplicate values in keys before joining large DataFrames.
Checking for duplicate keys#
Users can use the validate argument to automatically check whether there
are unexpected duplicates in their merge keys. Key uniqueness is checked before
merge operations and so should protect against memory overflows. Checking key
uniqueness is also a good way to ensure user data structures are as expected.
In the following example, there are duplicate values of B in the right
DataFrame. As this is not a one-to-one merge – as specified in the
validate argument – an exception will be raised.
In [52]: left = pd.DataFrame({"A": [1, 2], "B": [1, 2]})
In [53]: right = pd.DataFrame({"A": [4, 5, 6], "B": [2, 2, 2]})
In [53]: result = pd.merge(left, right, on="B", how="outer", validate="one_to_one")
...
MergeError: Merge keys are not unique in right dataset; not a one-to-one merge
If the user is aware of the duplicates in the right DataFrame but wants to
ensure there are no duplicates in the left DataFrame, one can use the
validate='one_to_many' argument instead, which will not raise an exception.
In [54]: pd.merge(left, right, on="B", how="outer", validate="one_to_many")
Out[54]:
A_x B A_y
0 1 1 NaN
1 2 2 4.0
2 2 2 5.0
3 2 2 6.0
The merge indicator#
merge() accepts the argument indicator. If True, a
Categorical-type column called _merge will be added to the output object
that takes on values:
Observation Origin                 _merge value
Merge key only in 'left' frame     left_only
Merge key only in 'right' frame    right_only
Merge key in both frames           both
In [55]: df1 = pd.DataFrame({"col1": [0, 1], "col_left": ["a", "b"]})
In [56]: df2 = pd.DataFrame({"col1": [1, 2, 2], "col_right": [2, 2, 2]})
In [57]: pd.merge(df1, df2, on="col1", how="outer", indicator=True)
Out[57]:
col1 col_left col_right _merge
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
The indicator argument will also accept string arguments, in which case the indicator function will use the value of the passed string as the name for the indicator column.
In [58]: pd.merge(df1, df2, on="col1", how="outer", indicator="indicator_column")
Out[58]:
col1 col_left col_right indicator_column
0 0 a NaN left_only
1 1 b 2.0 both
2 2 NaN 2.0 right_only
3 2 NaN 2.0 right_only
Merge dtypes#
Merging will preserve the dtype of the join keys.
In [59]: left = pd.DataFrame({"key": [1], "v1": [10]})
In [60]: left
Out[60]:
key v1
0 1 10
In [61]: right = pd.DataFrame({"key": [1, 2], "v1": [20, 30]})
In [62]: right
Out[62]:
key v1
0 1 20
1 2 30
We are able to preserve the join keys:
In [63]: pd.merge(left, right, how="outer")
Out[63]:
key v1
0 1 10
1 1 20
2 2 30
In [64]: pd.merge(left, right, how="outer").dtypes
Out[64]:
key int64
v1 int64
dtype: object
Of course if you have missing values that are introduced, then the
resulting dtype will be upcast.
In [65]: pd.merge(left, right, how="outer", on="key")
Out[65]:
key v1_x v1_y
0 1 10.0 20
1 2 NaN 30
In [66]: pd.merge(left, right, how="outer", on="key").dtypes
Out[66]:
key int64
v1_x float64
v1_y int64
dtype: object
Merging will preserve category dtypes of the mergands. See also the section on categoricals.
The left frame.
In [67]: from pandas.api.types import CategoricalDtype
In [68]: X = pd.Series(np.random.choice(["foo", "bar"], size=(10,)))
In [69]: X = X.astype(CategoricalDtype(categories=["foo", "bar"]))
In [70]: left = pd.DataFrame(
....: {"X": X, "Y": np.random.choice(["one", "two", "three"], size=(10,))}
....: )
....:
In [71]: left
Out[71]:
X Y
0 bar one
1 foo one
2 foo three
3 bar three
4 foo one
5 bar one
6 bar three
7 bar three
8 bar three
9 foo three
In [72]: left.dtypes
Out[72]:
X category
Y object
dtype: object
The right frame.
In [73]: right = pd.DataFrame(
....: {
....: "X": pd.Series(["foo", "bar"], dtype=CategoricalDtype(["foo", "bar"])),
....: "Z": [1, 2],
....: }
....: )
....:
In [74]: right
Out[74]:
X Z
0 foo 1
1 bar 2
In [75]: right.dtypes
Out[75]:
X category
Z int64
dtype: object
The merged result:
In [76]: result = pd.merge(left, right, how="outer")
In [77]: result
Out[77]:
X Y Z
0 bar one 2
1 bar three 2
2 bar one 2
3 bar three 2
4 bar three 2
5 bar three 2
6 foo one 1
7 foo three 1
8 foo one 1
9 foo three 1
In [78]: result.dtypes
Out[78]:
X category
Y object
Z int64
dtype: object
Note
The category dtypes must be exactly the same, meaning the same categories and the ordered attribute.
Otherwise the result will coerce to the categories’ dtype.
Note
Merging on category dtypes that are the same can be quite performant compared to object dtype merging.
Joining on index#
DataFrame.join() is a convenient method for combining the columns of two
potentially differently-indexed DataFrames into a single result
DataFrame. Here is a very basic example:
In [79]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=["K0", "K1", "K2"]
....: )
....:
In [80]: right = pd.DataFrame(
....: {"C": ["C0", "C2", "C3"], "D": ["D0", "D2", "D3"]}, index=["K0", "K2", "K3"]
....: )
....:
In [81]: result = left.join(right)
In [82]: result = left.join(right, how="outer")
The same as above, but with how='inner'.
In [83]: result = left.join(right, how="inner")
The data alignment here is on the indexes (row labels). This same behavior can
be achieved using merge plus additional arguments instructing it to use the
indexes:
In [84]: result = pd.merge(left, right, left_index=True, right_index=True, how="outer")
In [85]: result = pd.merge(left, right, left_index=True, right_index=True, how="inner")
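Since the joined results are shown as images in the guide, a hedged miniature with printable output:
import pandas as pd
left = pd.DataFrame({"A": ["A0", "A1"]}, index=["K0", "K1"])
right = pd.DataFrame({"C": ["C0", "C2"]}, index=["K0", "K2"])
print(left.join(right))               # left join on the index: rows K0, K1 (C is NaN for K1)
print(left.join(right, how="outer"))  # union of the indexes: rows K0, K1, K2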
Joining key columns on an index#
join() takes an optional on argument which may be a column
or multiple column names, which specifies that the passed DataFrame is to be
aligned on that column in the DataFrame. These two function calls are
completely equivalent:
left.join(right, on=key_or_keys)
pd.merge(
left, right, left_on=key_or_keys, right_index=True, how="left", sort=False
)
Obviously you can choose whichever form you find more convenient. For
many-to-one joins (where one of the DataFrame’s is already indexed by the
join key), using join may be more convenient. Here is a simple example:
In [86]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [87]: right = pd.DataFrame({"C": ["C0", "C1"], "D": ["D0", "D1"]}, index=["K0", "K1"])
In [88]: result = left.join(right, on="key")
In [89]: result = pd.merge(
....: left, right, left_on="key", right_index=True, how="left", sort=False
....: )
....:
To join on multiple keys, the passed DataFrame must have a MultiIndex:
In [90]: left = pd.DataFrame(
....: {
....: "A": ["A0", "A1", "A2", "A3"],
....: "B": ["B0", "B1", "B2", "B3"],
....: "key1": ["K0", "K0", "K1", "K2"],
....: "key2": ["K0", "K1", "K0", "K1"],
....: }
....: )
....:
In [91]: index = pd.MultiIndex.from_tuples(
....: [("K0", "K0"), ("K1", "K0"), ("K2", "K0"), ("K2", "K1")]
....: )
....:
In [92]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=index
....: )
....:
Now this can be joined by passing the two key column names:
In [93]: result = left.join(right, on=["key1", "key2"])
The default for DataFrame.join is to perform a left join (essentially a
“VLOOKUP” operation, for Excel users), which uses only the keys found in the
calling DataFrame. Other join types, for example inner join, can be just as
easily performed:
In [94]: result = left.join(right, on=["key1", "key2"], how="inner")
As you can see, this drops any rows where there was no match.
Joining a single Index to a MultiIndex#
You can join a singly-indexed DataFrame with a level of a MultiIndexed DataFrame.
The level will match on the name of the index of the singly-indexed frame against
a level name of the MultiIndexed frame.
In [95]: left = pd.DataFrame(
....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]},
....: index=pd.Index(["K0", "K1", "K2"], name="key"),
....: )
....:
In [96]: index = pd.MultiIndex.from_tuples(
....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")],
....: names=["key", "Y"],
....: )
....:
In [97]: right = pd.DataFrame(
....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]},
....: index=index,
....: )
....:
In [98]: result = left.join(right, how="inner")
This is equivalent to, but less verbose and more memory efficient / faster than, the reset_index / merge approach shown below.
In [99]: result = pd.merge(
....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
....: ).set_index(["key","Y"])
....:
Joining with two MultiIndexes#
This is supported in a limited way, provided that the index for the right
argument is completely used in the join, and is a subset of the indices in
the left argument, as in this example:
In [100]: leftindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy"), [1, 2]], names=["abc", "xy", "num"]
.....: )
.....:
In [101]: left = pd.DataFrame({"v1": range(12)}, index=leftindex)
In [102]: left
Out[102]:
v1
abc xy num
a x 1 0
2 1
y 1 2
2 3
b x 1 4
2 5
y 1 6
2 7
c x 1 8
2 9
y 1 10
2 11
In [103]: rightindex = pd.MultiIndex.from_product(
.....: [list("abc"), list("xy")], names=["abc", "xy"]
.....: )
.....:
In [104]: right = pd.DataFrame({"v2": [100 * i for i in range(1, 7)]}, index=rightindex)
In [105]: right
Out[105]:
v2
abc xy
a x 100
y 200
b x 300
y 400
c x 500
y 600
In [106]: left.join(right, on=["abc", "xy"], how="inner")
Out[106]:
v1 v2
abc xy num
a x 1 0 100
2 1 100
y 1 2 200
2 3 200
b x 1 4 300
2 5 300
y 1 6 400
2 7 400
c x 1 8 500
2 9 500
y 1 10 600
2 11 600
If that condition is not satisfied, a join with two multi-indexes can be
done using the following code.
In [107]: leftindex = pd.MultiIndex.from_tuples(
.....: [("K0", "X0"), ("K0", "X1"), ("K1", "X2")], names=["key", "X"]
.....: )
.....:
In [108]: left = pd.DataFrame(
.....: {"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}, index=leftindex
.....: )
.....:
In [109]: rightindex = pd.MultiIndex.from_tuples(
.....: [("K0", "Y0"), ("K1", "Y1"), ("K2", "Y2"), ("K2", "Y3")], names=["key", "Y"]
.....: )
.....:
In [110]: right = pd.DataFrame(
.....: {"C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"]}, index=rightindex
.....: )
.....:
In [111]: result = pd.merge(
.....: left.reset_index(), right.reset_index(), on=["key"], how="inner"
.....: ).set_index(["key", "X", "Y"])
.....:
Merging on a combination of columns and index levels#
Strings passed as the on, left_on, and right_on parameters
may refer to either column names or index level names. This enables merging
DataFrame instances on a combination of index levels and columns without
resetting indexes.
In [112]: left_index = pd.Index(["K0", "K0", "K1", "K2"], name="key1")
In [113]: left = pd.DataFrame(
.....: {
.....: "A": ["A0", "A1", "A2", "A3"],
.....: "B": ["B0", "B1", "B2", "B3"],
.....: "key2": ["K0", "K1", "K0", "K1"],
.....: },
.....: index=left_index,
.....: )
.....:
In [114]: right_index = pd.Index(["K0", "K1", "K2", "K2"], name="key1")
In [115]: right = pd.DataFrame(
.....: {
.....: "C": ["C0", "C1", "C2", "C3"],
.....: "D": ["D0", "D1", "D2", "D3"],
.....: "key2": ["K0", "K0", "K0", "K1"],
.....: },
.....: index=right_index,
.....: )
.....:
In [116]: result = left.merge(right, on=["key1", "key2"])
Note
When DataFrames are merged on a string that matches an index level in both
frames, the index level is preserved as an index level in the resulting
DataFrame.
Note
When DataFrames are merged using only some of the levels of a MultiIndex,
the extra levels will be dropped from the resulting merge. In order to
preserve those levels, use reset_index on those level names to move
those levels to columns prior to doing the merge.
Note
If a string matches both a column name and an index level name, then a
warning is issued and the column takes precedence. This will result in an
ambiguity error in a future version.
Overlapping value columns#
The merge suffixes argument takes a tuple or list of strings to append to
overlapping column names in the input DataFrames to disambiguate the result
columns:
In [117]: left = pd.DataFrame({"k": ["K0", "K1", "K2"], "v": [1, 2, 3]})
In [118]: right = pd.DataFrame({"k": ["K0", "K0", "K3"], "v": [4, 5, 6]})
In [119]: result = pd.merge(left, right, on="k")
In [120]: result = pd.merge(left, right, on="k", suffixes=("_l", "_r"))
DataFrame.join() has lsuffix and rsuffix arguments which behave
similarly.
In [121]: left = left.set_index("k")
In [122]: right = right.set_index("k")
In [123]: result = left.join(right, lsuffix="_l", rsuffix="_r")
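A hedged toy example (not from the guide) showing how the suffixes disambiguate the overlapping column:
import pandas as pd
left = pd.DataFrame({"k": ["K0", "K1"], "v": [1, 2]})
right = pd.DataFrame({"k": ["K0", "K1"], "v": [4, 5]})
print(pd.merge(left, right, on="k", suffixes=("_l", "_r")))
#     k  v_l  v_r
# 0  K0    1    4
# 1  K1    2    5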
Joining multiple DataFrames#
A list or tuple of DataFrames can also be passed to join()
to join them together on their indexes.
In [124]: right2 = pd.DataFrame({"v": [7, 8, 9]}, index=["K1", "K1", "K2"])
In [125]: result = left.join([right, right2])
Merging together values within Series or DataFrame columns#
Another fairly common situation is to have two like-indexed (or similarly
indexed) Series or DataFrame objects and wanting to “patch” values in
one object from values for matching indices in the other. Here is an example:
In [126]: df1 = pd.DataFrame(
.....: [[np.nan, 3.0, 5.0], [-4.6, np.nan, np.nan], [np.nan, 7.0, np.nan]]
.....: )
.....:
In [127]: df2 = pd.DataFrame([[-42.6, np.nan, -8.2], [-5.0, 1.6, 4]], index=[1, 2])
For this, use the combine_first() method:
In [128]: result = df1.combine_first(df2)
Note that this method only takes values from the right DataFrame if they are
missing in the left DataFrame. A related method, update(),
alters non-NA values in place:
In [129]: df1.update(df2)
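A hedged sketch (not from the guide) contrasting the two: combine_first only fills holes, while update overwrites in place wherever the other frame has non-NA values:
import numpy as np
import pandas as pd
df1 = pd.DataFrame({"a": [np.nan, 2.0], "b": [3.0, np.nan]})
df2 = pd.DataFrame({"a": [10.0, 20.0], "b": [30.0, 40.0]})
print(df1.combine_first(df2))  # a=[10.0, 2.0], b=[3.0, 40.0] -- only the NaN slots are filled
df1.update(df2)                # in place: every position where df2 is non-NA gets overwritten
print(df1)                     # a=[10.0, 20.0], b=[30.0, 40.0]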
Timeseries friendly merging#
Merging ordered data#
A merge_ordered() function allows combining time series and other
ordered data. In particular it has an optional fill_method keyword to
fill/interpolate missing data:
In [130]: left = pd.DataFrame(
.....: {"k": ["K0", "K1", "K1", "K2"], "lv": [1, 2, 3, 4], "s": ["a", "b", "c", "d"]}
.....: )
.....:
In [131]: right = pd.DataFrame({"k": ["K1", "K2", "K4"], "rv": [1, 2, 3]})
In [132]: pd.merge_ordered(left, right, fill_method="ffill", left_by="s")
Out[132]:
k lv s rv
0 K0 1.0 a NaN
1 K1 1.0 a 1.0
2 K2 1.0 a 2.0
3 K4 1.0 a 3.0
4 K1 2.0 b 1.0
5 K2 2.0 b 2.0
6 K4 2.0 b 3.0
7 K1 3.0 c 1.0
8 K2 3.0 c 2.0
9 K4 3.0 c 3.0
10 K1 NaN d 1.0
11 K2 4.0 d 2.0
12 K4 4.0 d 3.0
Merging asof#
A merge_asof() is similar to an ordered left-join except that we match on
nearest key rather than equal keys. For each row in the left DataFrame,
we select the last row in the right DataFrame whose on key is less
than the left’s key. Both DataFrames must be sorted by the key.
Optionally an asof merge can perform a group-wise merge. This matches the
by key equally, in addition to the nearest match on the on key.
For example; we might have trades and quotes and we want to asof
merge them.
In [133]: trades = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.038",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.048",
.....: ]
.....: ),
.....: "ticker": ["MSFT", "MSFT", "GOOG", "GOOG", "AAPL"],
.....: "price": [51.95, 51.95, 720.77, 720.92, 98.00],
.....: "quantity": [75, 155, 100, 100, 100],
.....: },
.....: columns=["time", "ticker", "price", "quantity"],
.....: )
.....:
In [134]: quotes = pd.DataFrame(
.....: {
.....: "time": pd.to_datetime(
.....: [
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.023",
.....: "20160525 13:30:00.030",
.....: "20160525 13:30:00.041",
.....: "20160525 13:30:00.048",
.....: "20160525 13:30:00.049",
.....: "20160525 13:30:00.072",
.....: "20160525 13:30:00.075",
.....: ]
.....: ),
.....: "ticker": ["GOOG", "MSFT", "MSFT", "MSFT", "GOOG", "AAPL", "GOOG", "MSFT"],
.....: "bid": [720.50, 51.95, 51.97, 51.99, 720.50, 97.99, 720.50, 52.01],
.....: "ask": [720.93, 51.96, 51.98, 52.00, 720.93, 98.01, 720.88, 52.03],
.....: },
.....: columns=["time", "ticker", "bid", "ask"],
.....: )
.....:
In [135]: trades
Out[135]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
In [136]: quotes
Out[136]:
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
By default we are taking the asof of the quotes.
In [137]: pd.merge_asof(trades, quotes, on="time", by="ticker")
Out[137]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 2ms between the quote time and the trade time.
In [138]: pd.merge_asof(trades, quotes, on="time", by="ticker", tolerance=pd.Timedelta("2ms"))
Out[138]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
We only asof within 10ms between the quote time and the trade time and we
exclude exact matches on time. Note that though we exclude the exact matches
(of the quotes), prior quotes do propagate to that point in time.
In [139]: pd.merge_asof(
.....: trades,
.....: quotes,
.....: on="time",
.....: by="ticker",
.....: tolerance=pd.Timedelta("10ms"),
.....: allow_exact_matches=False,
.....: )
.....:
Out[139]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN
3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
Comparing objects#
The compare() and compare() methods allow you to
compare two DataFrame or Series, respectively, and summarize their differences.
This feature was added in V1.1.0.
For example, you might want to compare two DataFrame and stack their differences
side by side.
In [140]: df = pd.DataFrame(
.....: {
.....: "col1": ["a", "a", "b", "b", "a"],
.....: "col2": [1.0, 2.0, 3.0, np.nan, 5.0],
.....: "col3": [1.0, 2.0, 3.0, 4.0, 5.0],
.....: },
.....: columns=["col1", "col2", "col3"],
.....: )
.....:
In [141]: df
Out[141]:
col1 col2 col3
0 a 1.0 1.0
1 a 2.0 2.0
2 b 3.0 3.0
3 b NaN 4.0
4 a 5.0 5.0
In [142]: df2 = df.copy()
In [143]: df2.loc[0, "col1"] = "c"
In [144]: df2.loc[2, "col3"] = 4.0
In [145]: df2
Out[145]:
col1 col2 col3
0 c 1.0 1.0
1 a 2.0 2.0
2 b 3.0 4.0
3 b NaN 4.0
4 a 5.0 5.0
In [146]: df.compare(df2)
Out[146]:
col1 col3
self other self other
0 a c NaN NaN
2 NaN NaN 3.0 4.0
By default, if two corresponding values are equal, they will be shown as NaN.
Furthermore, if all values in an entire row / column are equal, the row / column will be
omitted from the result. The remaining differences will be aligned on columns.
If you wish, you may choose to stack the differences on rows.
In [147]: df.compare(df2, align_axis=0)
Out[147]:
col1 col3
0 self a NaN
other c NaN
2 self NaN 3.0
other NaN 4.0
If you wish to keep all original rows and columns, set keep_shape argument
to True.
In [148]: df.compare(df2, keep_shape=True)
Out[148]:
col1 col2 col3
self other self other self other
0 a c NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN 3.0 4.0
3 NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN
You may also keep all the original values even if they are equal.
In [149]: df.compare(df2, keep_shape=True, keep_equal=True)
Out[149]:
col1 col2 col3
self other self other self other
0 a c 1.0 1.0 1.0 1.0
1 a a 2.0 2.0 2.0 2.0
2 b b 3.0 3.0 3.0 4.0
3 b b NaN NaN 4.0 4.0
4 a a 5.0 5.0 5.0 5.0
| 494
| 1,030
|
how to extract rows of the date of the last row in a dataframe and if date is not present then pick the previous date?
I have a dataframe with dates and some other columns, and for every month present in that dataframe I want to pick the date matching the day-of-month of the dataframe's last row; if that date is not present, pick the previous date.
eg.
Date Month Year
0 2018-03-21 3 2018
1 2018-03-22 3 2018
2 2018-03-25 3 2018
3 2018-03-26 3 2018
4 2018-03-27 3 2018
...
77 2020-05-12 5 2020
78 2020-05-13 5 2020
So I want to extract every 13th between these dates. If the 13th is missing (say Saturdays and Sundays are excluded, so there is no data point for those two days), then we check: if the 13th falls on a Sunday we pick the Friday, i.e. the 11th, and if it falls on a Saturday we pick the 12th. I want all such dates in a separate dataframe.
I have got this much by doing this
df[df['Date'][i].day==df['Date'].iloc[-1].day] # i is the looping variable to get the indices
but it prints only the rows whose day-of-month matches the last one; some months can be left out, and for those I want to extract the date just prior to that day.
Thanks!
|
66,110,475
|
How to organize pandas so the first column is just dates which correspond with 4 countries with percentage data in their cells?
|
<p>The data here is web-scraped from a website, and the initial data in the variable 'r' has three columns: 'Country', 'Date', '% vs 2019 (Daily)'. From these three columns I was able to keep only the dates I wanted: "2021-01-01" to current/today. What I am trying to do (I have spent hours) is organize the data so that there is one column with just the dates corresponding to the percentage data, plus four other columns named after the countries Denmark, Finland, Norway and Sweden, with their cells populated by the percent data. I have tried [], loc, iloc and various other combinations to filter the pandas dataframes, but to no avail.</p>
<p>Here is the code I have so far:</p>
<pre><code>import requests
import pandas as pd
import json
import math
import datetime
from jinja2 import Template, Environment
from datetime import date
r = requests.get('https://docs.google.com/spreadsheets/d/1GJ6CvZ_mgtjdrUyo3h2dU3YvWOahbYvPHpGLgovyhtI/gviz/tq?usp=sharing&tqx=reqId%3A0output=jspn')
data = r.content
data = json.loads(data.decode('utf-8').split("(", 1)[1].rsplit(")", 1)[0])
d = [[i['c'][0]['v'], i['c'][2]['f'], (i['c'][5]['v'])*100 ] for i in data['table']['rows']]
df = pd.DataFrame(d, columns=['Country', 'Date', '% vs 2019 (Daily)'])
pd.set_option('display.max_columns', None)
pd.set_option('display.width', None)
# EXTRACTING BETWEEN TWO DATES
df['Date'] = pd.to_datetime(df['Date'])
startdate = datetime.datetime.strptime('2021-01-01', "%Y-%m-%d").date()
enddate = datetime.datetime.strptime('2021-02-02', "%Y-%m-%d").date()
pd.Timestamp('today').floor('D')
df = df[(df['Date'] > pd.Timestamp(startdate).floor('D')) & (df['Date'] <= pd.Timestamp(enddate).floor('D'))]
Den = df.loc[df['Country'] == 'Denmark']
Fin = df.loc[df['Country'] == 'Finland']
Swe = df.loc[df['Country'] == 'Sweden']
Nor = df.loc[df['Country'] == 'Norway']
Den_data = Den.loc[: , "% vs 2019 (Daily)"]
Den_date = Den.loc[: , "Date"]
Nor_data = Nor.loc[: , "% vs 2019 (Daily)"]
Swe_data = Swe.loc[: , "% vs 2019 (Daily)"]
Fin_data = Fin.loc[: , "% vs 2019 (Daily)"]
Fin_date = Fin.loc[: , "Date"]
Den_data = Den.loc[: , "% vs 2019 (Daily)"]
df2 = pd.DataFrame()
df2['DEN_DATE'] = Den_date
df2['DENMARK'] = Den_data
df3 = pd.DataFrame()
df3['FIN_DATE'] = Fin_date
df3['FINLAND'] = Fin_data
</code></pre>
<p>Want it to be organized like this so I can eventually export it to excel:</p>
<pre><code>Date | Denmark | Finland| Norway | Sweden
</code></pre>
<hr />
<pre><code>2020-01-01 | 1234 | 4321 | 5432 | 6574
</code></pre>
<p>...</p>
<p>Any help is greatly appreciated.
Thank you</p>
| 66,148,287
| 2021-02-08T22:46:40.350000
| 1
| null | 1
| 102
|
python|pandas
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.isin.html" rel="nofollow noreferrer">isin</a> to filter only the countries you are interested in getting the data. Then use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot.html" rel="nofollow noreferrer">pivot</a> to return a reshaped dataframe organized by a given index and column values, in this case the index is the <code>Date</code> column, and the column values are the countries from the previous selection.</p>
<pre class="lang-py prettyprint-override"><code>...
...
pd.Timestamp('today').floor('D')
df = df[(df['Date'] > pd.Timestamp(startdate).floor('D')) & (df['Date'] <= pd.Timestamp(enddate).floor('D'))]
countries_list=['Denmark', 'Finland', 'Norway', 'Sweden']
countries_selected = df[df.Country.isin(countries_list)]
result = countries_selected.pivot(index="Date", columns="Country")
print(result)
</code></pre>
<p>Output from <em>result</em></p>
<pre><code> % vs 2019 (Daily)
Country Denmark Finland Norway Sweden
Date
2021-01-02 -65.261383 -75.416667 -39.164087 -65.853659
2021-01-03 -60.405405 -77.408056 -31.763620 -66.385669
2021-01-04 -69.371429 -75.598086 -34.002770 -70.704467
2021-01-05 -73.690932 -79.251701 -33.815689 -73.450509
2021-01-06 -76.257310 -80.445151 -43.454791 -80.805484
...
...
2021-01-30 -83.931624 -75.545852 -63.751763 -76.260163
2021-01-31 -80.654339 -74.468085 -55.565777 -65.451895
2021-02-01 -81.494253 -72.419106 -49.610390 -75.473322
2021-02-02 -81.741233 -73.898305 -46.164021 -78.215223
</code></pre>
| 2021-02-11T03:18:04.053000
| 0
|
https://pandas.pydata.org/docs/user_guide/dsintro.html
|
Use isin to filter only the countries you are interested in getting the data. Then use pivot to return a reshaped dataframe organized by a given index and column values, in this case the index is the Date column, and the column values are the countries from the previous selection.
...
...
pd.Timestamp('today').floor('D')
df = df[(df['Date'] > pd.Timestamp(startdate).floor('D')) & (df['Date'] <= pd.Timestamp(enddate).floor('D'))]
countries_list=['Denmark', 'Finland', 'Norway', 'Sweden']
countries_selected = df[df.Country.isin(countries_list)]
result = countries_selected.pivot(index="Date", columns="Country")
print(result)
Output from result
% vs 2019 (Daily)
Country Denmark Finland Norway Sweden
Date
2021-01-02 -65.261383 -75.416667 -39.164087 -65.853659
2021-01-03 -60.405405 -77.408056 -31.763620 -66.385669
2021-01-04 -69.371429 -75.598086 -34.002770 -70.704467
2021-01-05 -73.690932 -79.251701 -33.815689 -73.450509
2021-01-06 -76.257310 -80.445151 -43.454791 -80.805484
...
...
2021-01-30 -83.931624 -75.545852 -63.751763 -76.260163
2021-01-31 -80.654339 -74.468085 -55.565777 -65.451895
2021-02-01 -81.494253 -72.419106 -49.610390 -75.473322
2021-02-02 -81.741233 -73.898305 -46.164021 -78.215223
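A self-contained hedged sketch of the same isin + pivot reshaping with made-up numbers (no web request), since the original snippet depends on the scraped data:
import pandas as pd
df = pd.DataFrame({
    "Country": ["Denmark", "Finland", "Denmark", "Finland"],
    "Date": pd.to_datetime(["2021-01-02", "2021-01-02", "2021-01-03", "2021-01-03"]),
    "% vs 2019 (Daily)": [-65.3, -75.4, -60.4, -77.4],
})
countries_list = ["Denmark", "Finland", "Norway", "Sweden"]
countries_selected = df[df.Country.isin(countries_list)]
# Passing values= keeps the column index flat (one column per country)
wide = countries_selected.pivot(index="Date", columns="Country", values="% vs 2019 (Daily)")
print(wide)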
| 0
| 1,313
|
How to organize pandas so the first column is just dates which correspond with 4 countries with percentage data in their cells?
The data here is web-scraped from a website, and the initial data in the variable 'r' has three columns: 'Country', 'Date', '% vs 2019 (Daily)'. From these three columns I was able to keep only the dates I wanted: "2021-01-01" to current/today. What I am trying to do (I have spent hours) is organize the data so that there is one column with just the dates corresponding to the percentage data, plus four other columns named after the countries Denmark, Finland, Norway and Sweden, with their cells populated by the percent data. I have tried [], loc, iloc and various other combinations to filter the pandas dataframes, but to no avail.
Here is the code I have so far:
import requests
import pandas as pd
import json
import math
import datetime
from jinja2 import Template, Environment
from datetime import date
r = requests.get('https://docs.google.com/spreadsheets/d/1GJ6CvZ_mgtjdrUyo3h2dU3YvWOahbYvPHpGLgovyhtI/gviz/tq?usp=sharing&tqx=reqId%3A0output=jspn')
data = r.content
data = json.loads(data.decode('utf-8').split("(", 1)[1].rsplit(")", 1)[0])
d = [[i['c'][0]['v'], i['c'][2]['f'], (i['c'][5]['v'])*100 ] for i in data['table']['rows']]
df = pd.DataFrame(d, columns=['Country', 'Date', '% vs 2019 (Daily)'])
pd.set_option('display.max_columns', None)
pd.set_option('display.width', None)
# EXTRACTING BETWEEN TWO DATES
df['Date'] = pd.to_datetime(df['Date'])
startdate = datetime.datetime.strptime('2021-01-01', "%Y-%m-%d").date()
enddate = datetime.datetime.strptime('2021-02-02', "%Y-%m-%d").date()
pd.Timestamp('today').floor('D')
df = df[(df['Date'] > pd.Timestamp(startdate).floor('D')) & (df['Date'] <= pd.Timestamp(enddate).floor('D'))]
Den = df.loc[df['Country'] == 'Denmark']
Fin = df.loc[df['Country'] == 'Finland']
Swe = df.loc[df['Country'] == 'Sweden']
Nor = df.loc[df['Country'] == 'Norway']
Den_data = Den.loc[: , "% vs 2019 (Daily)"]
Den_date = Den.loc[: , "Date"]
Nor_data = Nor.loc[: , "% vs 2019 (Daily)"]
Swe_data = Swe.loc[: , "% vs 2019 (Daily)"]
Fin_data = Fin.loc[: , "% vs 2019 (Daily)"]
Fin_date = Fin.loc[: , "Date"]
Den_data = Den.loc[: , "% vs 2019 (Daily)"]
df2 = pd.DataFrame()
df2['DEN_DATE'] = Den_date
df2['DENMARK'] = Den_data
df3 = pd.DataFrame()
df3['FIN_DATE'] = Fin_date
df3['FINLAND'] = Fin_data
Want it to be organized like this so I can eventually export it to excel:
Date | Denmark | Finland| Norway | Sweden
2020-01-01 | 1234 | 4321 | 5432 | 6574
...
Any help is greatly appreciated.
Thank you
|
66,288,032
|
Applying df.get() function to each row in pandas df
|
<p>I am using a Python pandas DataFrame to look at a dataset with information on different schools.</p>
<p>In one particular column <code>df['Grades_Offered']</code>, the data, which can be seen below, exists for each school in the dataframe. This is what the column in the csv looks like, with the gaps representing the different cells:</p>
<pre><code>Grades_Offered
PK,K,1,2,3,4,5
PK,K,1,2,3,4,5,6,7,8
PK,K,1,2,3,4,5,6,7,8
9,10,11,12
</code></pre>
<p>I am trying to extract only the lowest grade from each row in this column. For example, I want to make a Lowest_Grade column in the dataframe where it would list out PK, PK, PK, 9 ... for the column I showed above.</p>
<p>I tried this:</p>
<pre><code>for i in range(len(df)):
df['Grades_Offered'].values[i] = df.append(df['Grades_Offered'].get(0))
</code></pre>
<p>But it doesn't work. I am also trying to extract the highest grade as well, but hopefully with help on extracting the lowest grade I could manipulate that to get the highest grade.</p>
<p>Thanks for your help.</p>
| 66,308,016
| 2021-02-20T04:31:05.670000
| 1
| null | 0
| 103
|
python|pandas
|
<p>As I understand it, you want to extract from a comma-delimited column. If highest/lowest is defined as either end of this list, the solution is as follows.</p>
<pre><code>df = pd.read_csv(io.StringIO("""Grades_Offered
PK,K,1,2,3,4,5
PK,K,1,2,3,4,5,6,7,8
PK,K,1,2,3,4,5,6,7,8
9,10,11,12"""),sep="\s+")
df = df.assign(lowest_grade=df.Grades_Offered.apply(lambda s: s.split(",")[0]),
highest_grade=df.Grades_Offered.apply(lambda s: s.split(",")[-1]))
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">Grades_Offered</th>
<th style="text-align: left;">lowest_grade</th>
<th style="text-align: right;">highest_grade</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">PK,K,1,2,3,4,5</td>
<td style="text-align: left;">PK</td>
<td style="text-align: right;">5</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">PK,K,1,2,3,4,5,6,7,8</td>
<td style="text-align: left;">PK</td>
<td style="text-align: right;">8</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">PK,K,1,2,3,4,5,6,7,8</td>
<td style="text-align: left;">PK</td>
<td style="text-align: right;">8</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: left;">9,10,11,12</td>
<td style="text-align: left;">9</td>
<td style="text-align: right;">12</td>
</tr>
</tbody>
</table>
</div>
| 2021-02-21T22:53:25.057000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.apply.html
|
pandas.DataFrame.apply#
pandas.DataFrame.apply#
DataFrame.apply(func, axis=0, raw=False, result_type=None, args=(), **kwargs)[source]#
Apply a function along an axis of the DataFrame.
Objects passed to the function are Series objects whose index is
either the DataFrame’s index (axis=0) or the DataFrame’s columns
(axis=1). By default (result_type=None), the final return type
is inferred from the return type of the applied function. Otherwise,
it depends on the result_type argument.
Parameters
funcfunctionFunction to apply to each column or row.
As I understand it, you want to extract from a comma-delimited column. If highest/lowest is defined as either end of this list, the solution is as follows.
df = pd.read_csv(io.StringIO("""Grades_Offered
PK,K,1,2,3,4,5
PK,K,1,2,3,4,5,6,7,8
PK,K,1,2,3,4,5,6,7,8
9,10,11,12"""),sep="\s+")
df = df.assign(lowest_grade=df.Grades_Offered.apply(lambda s: s.split(",")[0]),
highest_grade=df.Grades_Offered.apply(lambda s: s.split(",")[-1]))
   Grades_Offered        lowest_grade  highest_grade
0  PK,K,1,2,3,4,5        PK            5
1  PK,K,1,2,3,4,5,6,7,8  PK            8
2  PK,K,1,2,3,4,5,6,7,8  PK            8
3  9,10,11,12            9             12
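As a hedged alternative to apply (not part of the original answer), the same split can be done with the vectorized .str accessor:
import pandas as pd
df = pd.DataFrame({"Grades_Offered": ["PK,K,1,2,3,4,5",
                                      "PK,K,1,2,3,4,5,6,7,8",
                                      "9,10,11,12"]})
parts = df["Grades_Offered"].str.split(",")   # a Series of lists
df["lowest_grade"] = parts.str[0]             # first element of each list
df["highest_grade"] = parts.str[-1]           # last element of each list
print(df)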
axis{0 or ‘index’, 1 or ‘columns’}, default 0Axis along which the function is applied:
0 or ‘index’: apply function to each column.
1 or ‘columns’: apply function to each row.
rawbool, default FalseDetermines if row or column is passed as a Series or ndarray object:
False : passes each row or column as a Series to the
function.
True : the passed function will receive ndarray objects
instead.
If you are just applying a NumPy reduction function this will
achieve much better performance.
result_type{‘expand’, ‘reduce’, ‘broadcast’, None}, default NoneThese only act when axis=1 (columns):
‘expand’ : list-like results will be turned into columns.
‘reduce’ : returns a Series if possible rather than expanding
list-like results. This is the opposite of ‘expand’.
‘broadcast’ : results will be broadcast to the original shape
of the DataFrame, the original index and columns will be
retained.
The default behaviour (None) depends on the return value of the
applied function: list-like results will be returned as a Series
of those. However if the apply function returns a Series these
are expanded to columns.
argstuplePositional arguments to pass to func in addition to the
array/series.
**kwargsAdditional keyword arguments to pass as keywords arguments to
func.
Returns
Series or DataFrameResult of applying func along the given axis of the
DataFrame.
See also
DataFrame.applymapFor elementwise operations.
DataFrame.aggregateOnly perform aggregating type operations.
DataFrame.transformOnly perform transforming type operations.
Notes
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
Examples
>>> df = pd.DataFrame([[4, 9]] * 3, columns=['A', 'B'])
>>> df
A B
0 4 9
1 4 9
2 4 9
Using a numpy universal function (in this case the same as
np.sqrt(df)):
>>> df.apply(np.sqrt)
A B
0 2.0 3.0
1 2.0 3.0
2 2.0 3.0
Using a reducing function on either axis
>>> df.apply(np.sum, axis=0)
A 12
B 27
dtype: int64
>>> df.apply(np.sum, axis=1)
0 13
1 13
2 13
dtype: int64
Returning a list-like will result in a Series
>>> df.apply(lambda x: [1, 2], axis=1)
0 [1, 2]
1 [1, 2]
2 [1, 2]
dtype: object
Passing result_type='expand' will expand list-like results
to columns of a Dataframe
>>> df.apply(lambda x: [1, 2], axis=1, result_type='expand')
0 1
0 1 2
1 1 2
2 1 2
Returning a Series inside the function is similar to passing
result_type='expand'. The resulting column names
will be the Series index.
>>> df.apply(lambda x: pd.Series([1, 2], index=['foo', 'bar']), axis=1)
foo bar
0 1 2
1 1 2
2 1 2
Passing result_type='broadcast' will ensure the same shape
result, whether list-like or scalar is returned by the function,
and broadcast it along the axis. The resulting column names will
be the originals.
>>> df.apply(lambda x: [1, 2], axis=1, result_type='broadcast')
A B
0 1 2
1 1 2
2 1 2
| 556
| 1,156
|
Applying df.get() function to each row in pandas df
I am using Python Pandas DataFrame to look at a dataset with information on different schools.
In one particular column df['Grades_Offered'], the data, which can be seen below, exists for each school in the dataframe. This is what the column in the csv looks like, with the gaps representing the different cells:
Grades_Offered
PK,K,1,2,3,4,5
PK,K,1,2,3,4,5,6,7,8
PK,K,1,2,3,4,5,6,7,8
9,10,11,12
I am trying to extract only the lowest grade from each row in this column. For example, I want to make a Lowest_Grade column in the dataframe where it would list out PK, PK, PK, 9 ... for the column I showed above.
I tried this:
for i in range(len(df)):
df['Grades_Offered'].values[i] = df.append(df['Grades_Offered'].get(0))
But it doesn't work. I am also trying to extract the highest grade as well, but hopefully with help on extracting the lowest grade I could manipulate that to get the highest grade.
Thanks for your help.
|
67,206,583
|
how to zip and also melt any number of columns in python
|
<p>My table looks like this:</p>
<pre><code>no type 2020-01-01 2020-01-02 2020-01-03 ...................
1 x 1 2 3
2 b 4 3 0
</code></pre>
<p>and what I want to do is to melt down the date columns and their values into separate new columns. I have done it, but I specified the columns that I want to melt like the script below:</p>
<pre><code>cols_dict = dict(zip(df.iloc[:, 3:100].columns, df.iloc[:, 3:100].values[0]))
id_vars = [col for col in df.columns if isinstance(col, str)]
df = df.melt(id_vars = [col for col in df.columns if isinstance(col, str)], var_name = "date", value_name = 'value')
</code></pre>
<p>The expected result I want is:</p>
<pre><code>no type date value
1 x 2020-01-01 1
1 x 2020-01-02 2
1 x 2020-01-03 3
2 b 2020-01-01 4
2 b 2020-01-02 3
2 b 2020-01-03 0
</code></pre>
<p>I assume that date columns will keep being added to the data frame as time goes by, so my script would no longer work once there are more than 100 date columns.</p>
<p>How should I write my script so it handles any number of date columns in the future, given that my current script can only access up to column number 100?</p>
<p>Thanks in advance.</p>
| 67,211,544
| 2021-04-22T04:00:03.187000
| 1
| null | 0
| 111
|
python|pandas
|
<pre><code>>>> df.set_index(["no", "type"]) \
.rename_axis(columns="date") \
.stack() \
.rename("value") \
.reset_index()
no type date value
0 1 x 2020-01-01 1
1 1 x 2020-01-02 2
2 1 x 2020-01-03 3
3 2 b 2020-01-01 4
4 2 b 2020-01-02 3
5 2 b 2020-01-03 0
</code></pre>
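<p>As a side note (my sketch, not part of the accepted answer), the <code>melt</code> call from the question already scales to any number of date columns once the id columns are selected by type, so the <code>iloc[:, 3:100]</code> slice is not needed:</p>
<pre><code>id_vars = [col for col in df.columns if isinstance(col, str)]
long_df = df.melt(id_vars=id_vars, var_name="date", value_name="value")
</code></pre>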
| 2021-04-22T10:33:49.317000
| 0
|
https://pandas.pydata.org/docs/user_guide/reshaping.html
|
Reshaping and pivot tables#
Reshaping and pivot tables#
Reshaping by pivoting DataFrame objects#
Data is often stored in so-called “stacked” or “record” format:
In [1]: import pandas._testing as tm
In [2]: def unpivot(frame):
...: N, K = frame.shape
...: data = {
...: "value": frame.to_numpy().ravel("F"),
...: "variable": np.asarray(frame.columns).repeat(N),
...: "date": np.tile(np.asarray(frame.index), K),
...: }
...: return pd.DataFrame(data, columns=["date", "variable", "value"])
...:
In [3]: df = unpivot(tm.makeTimeDataFrame(3))
In [4]: df
Out[4]:
date variable value
0 2000-01-03 A 0.469112
>>> df.set_index(["no", "type"]) \
.rename_axis(columns="date") \
.stack() \
.rename("value") \
.reset_index()
no type date value
0 1 x 2020-01-01 1
1 1 x 2020-01-02 2
2 1 x 2020-01-03 3
3 2 b 2020-01-01 4
4 2 b 2020-01-02 3
5 2 b 2020-01-03 0
1 2000-01-04 A -0.282863
2 2000-01-05 A -1.509059
3 2000-01-03 B -1.135632
4 2000-01-04 B 1.212112
5 2000-01-05 B -0.173215
6 2000-01-03 C 0.119209
7 2000-01-04 C -1.044236
8 2000-01-05 C -0.861849
9 2000-01-03 D -2.104569
10 2000-01-04 D -0.494929
11 2000-01-05 D 1.071804
To select out everything for variable A we could do:
In [5]: filtered = df[df["variable"] == "A"]
In [6]: filtered
Out[6]:
date variable value
0 2000-01-03 A 0.469112
1 2000-01-04 A -0.282863
2 2000-01-05 A -1.509059
But suppose we wish to do time series operations with the variables. A better
representation would be where the columns are the unique variables and an
index of dates identifies individual observations. To reshape the data into
this form, we use the DataFrame.pivot() method (also implemented as a
top level function pivot()):
In [7]: pivoted = df.pivot(index="date", columns="variable", values="value")
In [8]: pivoted
Out[8]:
variable A B C D
date
2000-01-03 0.469112 -1.135632 0.119209 -2.104569
2000-01-04 -0.282863 1.212112 -1.044236 -0.494929
2000-01-05 -1.509059 -0.173215 -0.861849 1.071804
If the values argument is omitted, and the input DataFrame has more than
one column of values which are not used as column or index inputs to pivot(),
then the resulting “pivoted” DataFrame will have hierarchical columns whose topmost level indicates the respective value
column:
In [9]: df["value2"] = df["value"] * 2
In [10]: pivoted = df.pivot(index="date", columns="variable")
In [11]: pivoted
Out[11]:
value ... value2
variable A B C ... B C D
date ...
2000-01-03 0.469112 -1.135632 0.119209 ... -2.271265 0.238417 -4.209138
2000-01-04 -0.282863 1.212112 -1.044236 ... 2.424224 -2.088472 -0.989859
2000-01-05 -1.509059 -0.173215 -0.861849 ... -0.346429 -1.723698 2.143608
[3 rows x 8 columns]
You can then select subsets from the pivoted DataFrame:
In [12]: pivoted["value2"]
Out[12]:
variable A B C D
date
2000-01-03 0.938225 -2.271265 0.238417 -4.209138
2000-01-04 -0.565727 2.424224 -2.088472 -0.989859
2000-01-05 -3.018117 -0.346429 -1.723698 2.143608
Note that this returns a view on the underlying data in the case where the data
are homogeneously-typed.
Note
pivot() will error with a ValueError: Index contains duplicate
entries, cannot reshape if the index/column pair is not unique. In this
case, consider using pivot_table() which is a generalization
of pivot that can handle duplicate values for one index/column pair.
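A minimal sketch of the difference (assumed data, not from this guide):
d = pd.DataFrame({"row": ["r0", "r0"], "col": ["c0", "c0"], "val": [1, 3]})
# d.pivot(index="row", columns="col", values="val")                       # raises ValueError: duplicate entries
d.pivot_table(index="row", columns="col", values="val", aggfunc="mean")   # aggregates the duplicates to 2.0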
Reshaping by stacking and unstacking#
Closely related to the pivot() method are the related
stack() and unstack() methods available on
Series and DataFrame. These methods are designed to work together with
MultiIndex objects (see the section on hierarchical indexing). Here are essentially what these methods do:
stack(): “pivot” a level of the (possibly hierarchical) column labels,
returning a DataFrame with an index with a new inner-most level of row
labels.
unstack(): (inverse operation of stack()) “pivot” a level of the
(possibly hierarchical) row index to the column axis, producing a reshaped
DataFrame with a new inner-most level of column labels.
The clearest way to explain is by example. Let’s take a prior example data set
from the hierarchical indexing section:
In [13]: tuples = list(
....: zip(
....: *[
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....: )
....: )
....:
In [14]: index = pd.MultiIndex.from_tuples(tuples, names=["first", "second"])
In [15]: df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=["A", "B"])
In [16]: df2 = df[:4]
In [17]: df2
Out[17]:
A B
first second
bar one 0.721555 -0.706771
two -1.039575 0.271860
baz one -0.424972 0.567020
two 0.276232 -1.087401
The stack() function “compresses” a level in the DataFrame columns to
produce either:
A Series, in the case of a simple column Index.
A DataFrame, in the case of a MultiIndex in the columns.
If the columns have a MultiIndex, you can choose which level to stack. The
stacked level becomes the new lowest level in a MultiIndex on the columns:
In [18]: stacked = df2.stack()
In [19]: stacked
Out[19]:
first second
bar one A 0.721555
B -0.706771
two A -1.039575
B 0.271860
baz one A -0.424972
B 0.567020
two A 0.276232
B -1.087401
dtype: float64
With a “stacked” DataFrame or Series (having a MultiIndex as the
index), the inverse operation of stack() is unstack(), which by default
unstacks the last level:
In [20]: stacked.unstack()
Out[20]:
A B
first second
bar one 0.721555 -0.706771
two -1.039575 0.271860
baz one -0.424972 0.567020
two 0.276232 -1.087401
In [21]: stacked.unstack(1)
Out[21]:
second one two
first
bar A 0.721555 -1.039575
B -0.706771 0.271860
baz A -0.424972 0.276232
B 0.567020 -1.087401
In [22]: stacked.unstack(0)
Out[22]:
first bar baz
second
one A 0.721555 -0.424972
B -0.706771 0.567020
two A -1.039575 0.276232
B 0.271860 -1.087401
If the indexes have names, you can use the level names instead of specifying
the level numbers:
In [23]: stacked.unstack("second")
Out[23]:
second one two
first
bar A 0.721555 -1.039575
B -0.706771 0.271860
baz A -0.424972 0.276232
B 0.567020 -1.087401
Notice that the stack() and unstack() methods implicitly sort the index
levels involved. Hence a call to stack() and then unstack(), or vice versa,
will result in a sorted copy of the original DataFrame or Series:
In [24]: index = pd.MultiIndex.from_product([[2, 1], ["a", "b"]])
In [25]: df = pd.DataFrame(np.random.randn(4), index=index, columns=["A"])
In [26]: df
Out[26]:
A
2 a -0.370647
b -1.157892
1 a -1.344312
b 0.844885
In [27]: all(df.unstack().stack() == df.sort_index())
Out[27]: True
The above code will raise a TypeError if the call to sort_index() is
removed.
Multiple levels#
You may also stack or unstack more than one level at a time by passing a list
of levels, in which case the end result is as if each level in the list were
processed individually.
In [28]: columns = pd.MultiIndex.from_tuples(
....: [
....: ("A", "cat", "long"),
....: ("B", "cat", "long"),
....: ("A", "dog", "short"),
....: ("B", "dog", "short"),
....: ],
....: names=["exp", "animal", "hair_length"],
....: )
....:
In [29]: df = pd.DataFrame(np.random.randn(4, 4), columns=columns)
In [30]: df
Out[30]:
exp A B A B
animal cat cat dog dog
hair_length long long short short
0 1.075770 -0.109050 1.643563 -1.469388
1 0.357021 -0.674600 -1.776904 -0.968914
2 -1.294524 0.413738 0.276662 -0.472035
3 -0.013960 -0.362543 -0.006154 -0.923061
In [31]: df.stack(level=["animal", "hair_length"])
Out[31]:
exp A B
animal hair_length
0 cat long 1.075770 -0.109050
dog short 1.643563 -1.469388
1 cat long 0.357021 -0.674600
dog short -1.776904 -0.968914
2 cat long -1.294524 0.413738
dog short 0.276662 -0.472035
3 cat long -0.013960 -0.362543
dog short -0.006154 -0.923061
The list of levels can contain either level names or level numbers (but
not a mixture of the two).
# df.stack(level=['animal', 'hair_length'])
# from above is equivalent to:
In [32]: df.stack(level=[1, 2])
Out[32]:
exp A B
animal hair_length
0 cat long 1.075770 -0.109050
dog short 1.643563 -1.469388
1 cat long 0.357021 -0.674600
dog short -1.776904 -0.968914
2 cat long -1.294524 0.413738
dog short 0.276662 -0.472035
3 cat long -0.013960 -0.362543
dog short -0.006154 -0.923061
Missing data#
These functions are intelligent about handling missing data and do not expect
each subgroup within the hierarchical index to have the same set of labels.
They also can handle the index being unsorted (but you can make it sorted by
calling sort_index(), of course). Here is a more complex example:
In [33]: columns = pd.MultiIndex.from_tuples(
....: [
....: ("A", "cat"),
....: ("B", "dog"),
....: ("B", "cat"),
....: ("A", "dog"),
....: ],
....: names=["exp", "animal"],
....: )
....:
In [34]: index = pd.MultiIndex.from_product(
....: [("bar", "baz", "foo", "qux"), ("one", "two")], names=["first", "second"]
....: )
....:
In [35]: df = pd.DataFrame(np.random.randn(8, 4), index=index, columns=columns)
In [36]: df2 = df.iloc[[0, 1, 2, 4, 5, 7]]
In [37]: df2
Out[37]:
exp A B A
animal cat dog cat dog
first second
bar one 0.895717 0.805244 -1.206412 2.565646
two 1.431256 1.340309 -1.170299 -0.226169
baz one 0.410835 0.813850 0.132003 -0.827317
foo one -1.413681 1.607920 1.024180 0.569605
two 0.875906 -2.211372 0.974466 -2.006747
qux two -1.226825 0.769804 -1.281247 -0.727707
As mentioned above, stack() can be called with a level argument to select
which level in the columns to stack:
In [38]: df2.stack("exp")
Out[38]:
animal cat dog
first second exp
bar one A 0.895717 2.565646
B -1.206412 0.805244
two A 1.431256 -0.226169
B -1.170299 1.340309
baz one A 0.410835 -0.827317
B 0.132003 0.813850
foo one A -1.413681 0.569605
B 1.024180 1.607920
two A 0.875906 -2.006747
B 0.974466 -2.211372
qux two A -1.226825 -0.727707
B -1.281247 0.769804
In [39]: df2.stack("animal")
Out[39]:
exp A B
first second animal
bar one cat 0.895717 -1.206412
dog 2.565646 0.805244
two cat 1.431256 -1.170299
dog -0.226169 1.340309
baz one cat 0.410835 0.132003
dog -0.827317 0.813850
foo one cat -1.413681 1.024180
dog 0.569605 1.607920
two cat 0.875906 0.974466
dog -2.006747 -2.211372
qux two cat -1.226825 -1.281247
dog -0.727707 0.769804
Unstacking can result in missing values if subgroups do not have the same
set of labels. By default, missing values will be replaced with the default
fill value for that data type, NaN for float, NaT for datetimelike,
etc. For integer types, by default data will be converted to float and missing
values will be set to NaN.
In [40]: df3 = df.iloc[[0, 1, 4, 7], [1, 2]]
In [41]: df3
Out[41]:
exp B
animal dog cat
first second
bar one 0.805244 -1.206412
two 1.340309 -1.170299
foo one 1.607920 1.024180
qux two 0.769804 -1.281247
In [42]: df3.unstack()
Out[42]:
exp B
animal dog cat
second one two one two
first
bar 0.805244 1.340309 -1.206412 -1.170299
foo 1.607920 NaN 1.024180 NaN
qux NaN 0.769804 NaN -1.281247
Alternatively, unstack takes an optional fill_value argument, for specifying
the value of missing data.
In [43]: df3.unstack(fill_value=-1e9)
Out[43]:
exp B
animal dog cat
second one two one two
first
bar 8.052440e-01 1.340309e+00 -1.206412e+00 -1.170299e+00
foo 1.607920e+00 -1.000000e+09 1.024180e+00 -1.000000e+09
qux -1.000000e+09 7.698036e-01 -1.000000e+09 -1.281247e+00
With a MultiIndex#
Unstacking when the columns are a MultiIndex is also careful about doing
the right thing:
In [44]: df[:3].unstack(0)
Out[44]:
exp A B ... A
animal cat dog ... cat dog
first bar baz bar ... baz bar baz
second ...
one 0.895717 0.410835 0.805244 ... 0.132003 2.565646 -0.827317
two 1.431256 NaN 1.340309 ... NaN -0.226169 NaN
[2 rows x 8 columns]
In [45]: df2.unstack(1)
Out[45]:
exp A B ... A
animal cat dog ... cat dog
second one two one ... two one two
first ...
bar 0.895717 1.431256 0.805244 ... -1.170299 2.565646 -0.226169
baz 0.410835 NaN 0.813850 ... NaN -0.827317 NaN
foo -1.413681 0.875906 1.607920 ... 0.974466 0.569605 -2.006747
qux NaN -1.226825 NaN ... -1.281247 NaN -0.727707
[4 rows x 8 columns]
Reshaping by melt#
The top-level melt() function and the corresponding DataFrame.melt()
are useful to massage a DataFrame into a format where one or more columns
are identifier variables, while all other columns, considered measured
variables, are “unpivoted” to the row axis, leaving just two non-identifier
columns, “variable” and “value”. The names of those columns can be customized
by supplying the var_name and value_name parameters.
For instance,
In [46]: cheese = pd.DataFrame(
....: {
....: "first": ["John", "Mary"],
....: "last": ["Doe", "Bo"],
....: "height": [5.5, 6.0],
....: "weight": [130, 150],
....: }
....: )
....:
In [47]: cheese
Out[47]:
first last height weight
0 John Doe 5.5 130
1 Mary Bo 6.0 150
In [48]: cheese.melt(id_vars=["first", "last"])
Out[48]:
first last variable value
0 John Doe height 5.5
1 Mary Bo height 6.0
2 John Doe weight 130.0
3 Mary Bo weight 150.0
In [49]: cheese.melt(id_vars=["first", "last"], var_name="quantity")
Out[49]:
first last quantity value
0 John Doe height 5.5
1 Mary Bo height 6.0
2 John Doe weight 130.0
3 Mary Bo weight 150.0
When transforming a DataFrame using melt(), the index will be ignored. The original index values can be kept around by setting the ignore_index parameter to False (default is True). This will however duplicate them.
New in version 1.1.0.
In [50]: index = pd.MultiIndex.from_tuples([("person", "A"), ("person", "B")])
In [51]: cheese = pd.DataFrame(
....: {
....: "first": ["John", "Mary"],
....: "last": ["Doe", "Bo"],
....: "height": [5.5, 6.0],
....: "weight": [130, 150],
....: },
....: index=index,
....: )
....:
In [52]: cheese
Out[52]:
first last height weight
person A John Doe 5.5 130
B Mary Bo 6.0 150
In [53]: cheese.melt(id_vars=["first", "last"])
Out[53]:
first last variable value
0 John Doe height 5.5
1 Mary Bo height 6.0
2 John Doe weight 130.0
3 Mary Bo weight 150.0
In [54]: cheese.melt(id_vars=["first", "last"], ignore_index=False)
Out[54]:
first last variable value
person A John Doe height 5.5
B Mary Bo height 6.0
A John Doe weight 130.0
B Mary Bo weight 150.0
Another way to transform is to use the wide_to_long() panel data
convenience function. It is less flexible than melt(), but more
user-friendly.
In [55]: dft = pd.DataFrame(
....: {
....: "A1970": {0: "a", 1: "b", 2: "c"},
....: "A1980": {0: "d", 1: "e", 2: "f"},
....: "B1970": {0: 2.5, 1: 1.2, 2: 0.7},
....: "B1980": {0: 3.2, 1: 1.3, 2: 0.1},
....: "X": dict(zip(range(3), np.random.randn(3))),
....: }
....: )
....:
In [56]: dft["id"] = dft.index
In [57]: dft
Out[57]:
A1970 A1980 B1970 B1980 X id
0 a d 2.5 3.2 -0.121306 0
1 b e 1.2 1.3 -0.097883 1
2 c f 0.7 0.1 0.695775 2
In [58]: pd.wide_to_long(dft, ["A", "B"], i="id", j="year")
Out[58]:
X A B
id year
0 1970 -0.121306 a 2.5
1 1970 -0.097883 b 1.2
2 1970 0.695775 c 0.7
0 1980 -0.121306 d 3.2
1 1980 -0.097883 e 1.3
2 1980 0.695775 f 0.1
Combining with stats and GroupBy#
It should be no shock that combining pivot() / stack() / unstack() with
GroupBy and the basic Series and DataFrame statistical functions can produce
some very expressive and fast data manipulations.
In [59]: df
Out[59]:
exp A B A
animal cat dog cat dog
first second
bar one 0.895717 0.805244 -1.206412 2.565646
two 1.431256 1.340309 -1.170299 -0.226169
baz one 0.410835 0.813850 0.132003 -0.827317
two -0.076467 -1.187678 1.130127 -1.436737
foo one -1.413681 1.607920 1.024180 0.569605
two 0.875906 -2.211372 0.974466 -2.006747
qux one -0.410001 -0.078638 0.545952 -1.219217
two -1.226825 0.769804 -1.281247 -0.727707
In [60]: df.stack().mean(1).unstack()
Out[60]:
animal cat dog
first second
bar one -0.155347 1.685445
two 0.130479 0.557070
baz one 0.271419 -0.006733
two 0.526830 -1.312207
foo one -0.194750 1.088763
two 0.925186 -2.109060
qux one 0.067976 -0.648927
two -1.254036 0.021048
# same result, another way
In [61]: df.groupby(level=1, axis=1).mean()
Out[61]:
animal cat dog
first second
bar one -0.155347 1.685445
two 0.130479 0.557070
baz one 0.271419 -0.006733
two 0.526830 -1.312207
foo one -0.194750 1.088763
two 0.925186 -2.109060
qux one 0.067976 -0.648927
two -1.254036 0.021048
In [62]: df.stack().groupby(level=1).mean()
Out[62]:
exp A B
second
one 0.071448 0.455513
two -0.424186 -0.204486
In [63]: df.mean().unstack(0)
Out[63]:
exp A B
animal
cat 0.060843 0.018596
dog -0.413580 0.232430
Pivot tables#
While pivot() provides general purpose pivoting with various
data types (strings, numerics, etc.), pandas also provides pivot_table()
for pivoting with aggregation of numeric data.
The function pivot_table() can be used to create spreadsheet-style
pivot tables. See the cookbook for some advanced
strategies.
It takes a number of arguments:
data: a DataFrame object.
values: a column or a list of columns to aggregate.
index: a column, Grouper, array which has the same length as data, or list of them.
Keys to group by on the pivot table index. If an array is passed, it is being used as the same manner as column values.
columns: a column, Grouper, array which has the same length as data, or list of them.
Keys to group by on the pivot table column. If an array is passed, it is being used as the same manner as column values.
aggfunc: function to use for aggregation, defaulting to numpy.mean.
Consider a data set like this:
In [64]: import datetime
In [65]: df = pd.DataFrame(
....: {
....: "A": ["one", "one", "two", "three"] * 6,
....: "B": ["A", "B", "C"] * 8,
....: "C": ["foo", "foo", "foo", "bar", "bar", "bar"] * 4,
....: "D": np.random.randn(24),
....: "E": np.random.randn(24),
....: "F": [datetime.datetime(2013, i, 1) for i in range(1, 13)]
....: + [datetime.datetime(2013, i, 15) for i in range(1, 13)],
....: }
....: )
....:
In [66]: df
Out[66]:
A B C D E F
0 one A foo 0.341734 -0.317441 2013-01-01
1 one B foo 0.959726 -1.236269 2013-02-01
2 two C foo -1.110336 0.896171 2013-03-01
3 three A bar -0.619976 -0.487602 2013-04-01
4 one B bar 0.149748 -0.082240 2013-05-01
.. ... .. ... ... ... ...
19 three B foo 0.690579 -2.213588 2013-08-15
20 one C foo 0.995761 1.063327 2013-09-15
21 one A bar 2.396780 1.266143 2013-10-15
22 two B bar 0.014871 0.299368 2013-11-15
23 three C bar 3.357427 -0.863838 2013-12-15
[24 rows x 6 columns]
We can produce pivot tables from this data very easily:
In [67]: pd.pivot_table(df, values="D", index=["A", "B"], columns=["C"])
Out[67]:
C bar foo
A B
one A 1.120915 -0.514058
B -0.338421 0.002759
C -0.538846 0.699535
three A -1.181568 NaN
B NaN 0.433512
C 0.588783 NaN
two A NaN 1.000985
B 0.158248 NaN
C NaN 0.176180
In [68]: pd.pivot_table(df, values="D", index=["B"], columns=["A", "C"], aggfunc=np.sum)
Out[68]:
A one three two
C bar foo bar foo bar foo
B
A 2.241830 -1.028115 -2.363137 NaN NaN 2.001971
B -0.676843 0.005518 NaN 0.867024 0.316495 NaN
C -1.077692 1.399070 1.177566 NaN NaN 0.352360
In [69]: pd.pivot_table(
....: df, values=["D", "E"],
....: index=["B"],
....: columns=["A", "C"],
....: aggfunc=np.sum,
....: )
....:
Out[69]:
D ... E
A one three ... three two
C bar foo bar ... foo bar foo
B ...
A 2.241830 -1.028115 -2.363137 ... NaN NaN 0.128491
B -0.676843 0.005518 NaN ... -2.128743 -0.194294 NaN
C -1.077692 1.399070 1.177566 ... NaN NaN 0.872482
[3 rows x 12 columns]
The result object is a DataFrame having potentially hierarchical indexes on the
rows and columns. If the values column name is not given, the pivot table
will include all of the data in an additional level of hierarchy in the columns:
In [70]: pd.pivot_table(df[["A", "B", "C", "D", "E"]], index=["A", "B"], columns=["C"])
Out[70]:
D E
C bar foo bar foo
A B
one A 1.120915 -0.514058 1.393057 -0.021605
B -0.338421 0.002759 0.684140 -0.551692
C -0.538846 0.699535 -0.988442 0.747859
three A -1.181568 NaN 0.961289 NaN
B NaN 0.433512 NaN -1.064372
C 0.588783 NaN -0.131830 NaN
two A NaN 1.000985 NaN 0.064245
B 0.158248 NaN -0.097147 NaN
C NaN 0.176180 NaN 0.436241
Also, you can use Grouper for index and columns keywords. For detail of Grouper, see Grouping with a Grouper specification.
In [71]: pd.pivot_table(df, values="D", index=pd.Grouper(freq="M", key="F"), columns="C")
Out[71]:
C bar foo
F
2013-01-31 NaN -0.514058
2013-02-28 NaN 0.002759
2013-03-31 NaN 0.176180
2013-04-30 -1.181568 NaN
2013-05-31 -0.338421 NaN
2013-06-30 -0.538846 NaN
2013-07-31 NaN 1.000985
2013-08-31 NaN 0.433512
2013-09-30 NaN 0.699535
2013-10-31 1.120915 NaN
2013-11-30 0.158248 NaN
2013-12-31 0.588783 NaN
You can render a nice output of the table omitting the missing values by
calling to_string() if you wish:
In [72]: table = pd.pivot_table(df, index=["A", "B"], columns=["C"], values=["D", "E"])
In [73]: print(table.to_string(na_rep=""))
D E
C bar foo bar foo
A B
one A 1.120915 -0.514058 1.393057 -0.021605
B -0.338421 0.002759 0.684140 -0.551692
C -0.538846 0.699535 -0.988442 0.747859
three A -1.181568 0.961289
B 0.433512 -1.064372
C 0.588783 -0.131830
two A 1.000985 0.064245
B 0.158248 -0.097147
C 0.176180 0.436241
Note that pivot_table() is also available as an instance method on DataFrame, i.e. DataFrame.pivot_table().
Adding margins#
If you pass margins=True to pivot_table(), special All columns and
rows will be added with partial group aggregates across the categories on the
rows and columns:
In [74]: table = df.pivot_table(
....: index=["A", "B"],
....: columns="C",
....: values=["D", "E"],
....: margins=True,
....: aggfunc=np.std
....: )
....:
In [75]: table
Out[75]:
D E
C bar foo All bar foo All
A B
one A 1.804346 1.210272 1.569879 0.179483 0.418374 0.858005
B 0.690376 1.353355 0.898998 1.083825 0.968138 1.101401
C 0.273641 0.418926 0.771139 1.689271 0.446140 1.422136
three A 0.794212 NaN 0.794212 2.049040 NaN 2.049040
B NaN 0.363548 0.363548 NaN 1.625237 1.625237
C 3.915454 NaN 3.915454 1.035215 NaN 1.035215
two A NaN 0.442998 0.442998 NaN 0.447104 0.447104
B 0.202765 NaN 0.202765 0.560757 NaN 0.560757
C NaN 1.819408 1.819408 NaN 0.650439 0.650439
All 1.556686 0.952552 1.246608 1.250924 0.899904 1.059389
Additionally, you can call DataFrame.stack() to display a pivoted DataFrame
as having a multi-level index:
In [76]: table.stack()
Out[76]:
D E
A B C
one A All 1.569879 0.858005
bar 1.804346 0.179483
foo 1.210272 0.418374
B All 0.898998 1.101401
bar 0.690376 1.083825
... ... ...
two C All 1.819408 0.650439
foo 1.819408 0.650439
All All 1.246608 1.059389
bar 1.556686 1.250924
foo 0.952552 0.899904
[24 rows x 2 columns]
Cross tabulations#
Use crosstab() to compute a cross-tabulation of two (or more)
factors. By default crosstab() computes a frequency table of the factors
unless an array of values and an aggregation function are passed.
It takes a number of arguments
index: array-like, values to group by in the rows.
columns: array-like, values to group by in the columns.
values: array-like, optional, array of values to aggregate according to
the factors.
aggfunc: function, optional, If no values array is passed, computes a
frequency table.
rownames: sequence, default None, must match number of row arrays passed.
colnames: sequence, default None, if passed, must match number of column
arrays passed.
margins: boolean, default False, Add row/column margins (subtotals)
normalize: boolean, {‘all’, ‘index’, ‘columns’}, or {0,1}, default False.
Normalize by dividing all values by the sum of values.
Any Series passed will have their name attributes used unless row or column
names for the cross-tabulation are specified
For example:
In [77]: foo, bar, dull, shiny, one, two = "foo", "bar", "dull", "shiny", "one", "two"
In [78]: a = np.array([foo, foo, bar, bar, foo, foo], dtype=object)
In [79]: b = np.array([one, one, two, one, two, one], dtype=object)
In [80]: c = np.array([dull, dull, shiny, dull, dull, shiny], dtype=object)
In [81]: pd.crosstab(a, [b, c], rownames=["a"], colnames=["b", "c"])
Out[81]:
b one two
c dull shiny dull shiny
a
bar 1 0 0 1
foo 2 1 1 0
If crosstab() receives only two Series, it will provide a frequency table.
In [82]: df = pd.DataFrame(
....: {"A": [1, 2, 2, 2, 2], "B": [3, 3, 4, 4, 4], "C": [1, 1, np.nan, 1, 1]}
....: )
....:
In [83]: df
Out[83]:
A B C
0 1 3 1.0
1 2 3 1.0
2 2 4 NaN
3 2 4 1.0
4 2 4 1.0
In [84]: pd.crosstab(df["A"], df["B"])
Out[84]:
B 3 4
A
1 1 0
2 1 3
crosstab() can also be applied
to Categorical data.
In [85]: foo = pd.Categorical(["a", "b"], categories=["a", "b", "c"])
In [86]: bar = pd.Categorical(["d", "e"], categories=["d", "e", "f"])
In [87]: pd.crosstab(foo, bar)
Out[87]:
col_0 d e
row_0
a 1 0
b 0 1
If you want to include all of the data categories even if the actual data does
not contain any instances of a particular category, you should set dropna=False.
For example:
In [88]: pd.crosstab(foo, bar, dropna=False)
Out[88]:
col_0 d e f
row_0
a 1 0 0
b 0 1 0
c 0 0 0
Normalization#
Frequency tables can also be normalized to show percentages rather than counts
using the normalize argument:
In [89]: pd.crosstab(df["A"], df["B"], normalize=True)
Out[89]:
B 3 4
A
1 0.2 0.0
2 0.2 0.6
normalize can also normalize values within each row or within each column:
In [90]: pd.crosstab(df["A"], df["B"], normalize="columns")
Out[90]:
B 3 4
A
1 0.5 0.0
2 0.5 1.0
crosstab() can also be passed a third Series and an aggregation function
(aggfunc) that will be applied to the values of the third Series within
each group defined by the first two Series:
In [91]: pd.crosstab(df["A"], df["B"], values=df["C"], aggfunc=np.sum)
Out[91]:
B 3 4
A
1 1.0 NaN
2 1.0 2.0
Adding margins#
Finally, one can also add margins or normalize this output.
In [92]: pd.crosstab(
....: df["A"], df["B"], values=df["C"], aggfunc=np.sum, normalize=True, margins=True
....: )
....:
Out[92]:
B 3 4 All
A
1 0.25 0.0 0.25
2 0.25 0.5 0.75
All 0.50 0.5 1.00
Tiling#
The cut() function computes groupings for the values of the input
array and is often used to transform continuous variables to discrete or
categorical variables:
In [93]: ages = np.array([10, 15, 13, 12, 23, 25, 28, 59, 60])
In [94]: pd.cut(ages, bins=3)
Out[94]:
[(9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (26.667, 43.333], (43.333, 60.0], (43.333, 60.0]]
Categories (3, interval[float64, right]): [(9.95, 26.667] < (26.667, 43.333] < (43.333, 60.0]]
If the bins keyword is an integer, then equal-width bins are formed.
Alternatively we can specify custom bin-edges:
In [95]: c = pd.cut(ages, bins=[0, 18, 35, 70])
In [96]: c
Out[96]:
[(0, 18], (0, 18], (0, 18], (0, 18], (18, 35], (18, 35], (18, 35], (35, 70], (35, 70]]
Categories (3, interval[int64, right]): [(0, 18] < (18, 35] < (35, 70]]
If the bins keyword is an IntervalIndex, then these will be
used to bin the passed data:
pd.cut([25, 20, 50], bins=c.categories)
Computing indicator / dummy variables#
To convert a categorical variable into a “dummy” or “indicator” DataFrame,
for example a column in a DataFrame (a Series) which has k distinct
values, you can derive a DataFrame containing k columns of 1s and 0s using
get_dummies():
In [97]: df = pd.DataFrame({"key": list("bbacab"), "data1": range(6)})
In [98]: pd.get_dummies(df["key"])
Out[98]:
a b c
0 0 1 0
1 0 1 0
2 1 0 0
3 0 0 1
4 1 0 0
5 0 1 0
Sometimes it’s useful to prefix the column names, for example when merging the result
with the original DataFrame:
In [99]: dummies = pd.get_dummies(df["key"], prefix="key")
In [100]: dummies
Out[100]:
key_a key_b key_c
0 0 1 0
1 0 1 0
2 1 0 0
3 0 0 1
4 1 0 0
5 0 1 0
In [101]: df[["data1"]].join(dummies)
Out[101]:
data1 key_a key_b key_c
0 0 0 1 0
1 1 0 1 0
2 2 1 0 0
3 3 0 0 1
4 4 1 0 0
5 5 0 1 0
This function is often used along with discretization functions like cut():
In [102]: values = np.random.randn(10)
In [103]: values
Out[103]:
array([ 0.4082, -1.0481, -0.0257, -0.9884, 0.0941, 1.2627, 1.29 ,
0.0824, -0.0558, 0.5366])
In [104]: bins = [0, 0.2, 0.4, 0.6, 0.8, 1]
In [105]: pd.get_dummies(pd.cut(values, bins))
Out[105]:
(0.0, 0.2] (0.2, 0.4] (0.4, 0.6] (0.6, 0.8] (0.8, 1.0]
0 0 0 1 0 0
1 0 0 0 0 0
2 0 0 0 0 0
3 0 0 0 0 0
4 1 0 0 0 0
5 0 0 0 0 0
6 0 0 0 0 0
7 1 0 0 0 0
8 0 0 0 0 0
9 0 0 1 0 0
See also Series.str.get_dummies.
get_dummies() also accepts a DataFrame. By default all categorical
variables (categorical in the statistical sense, those with object or
categorical dtype) are encoded as dummy variables.
In [106]: df = pd.DataFrame({"A": ["a", "b", "a"], "B": ["c", "c", "b"], "C": [1, 2, 3]})
In [107]: pd.get_dummies(df)
Out[107]:
C A_a A_b B_b B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
All non-object columns are included untouched in the output. You can control
the columns that are encoded with the columns keyword.
In [108]: pd.get_dummies(df, columns=["A"])
Out[108]:
B C A_a A_b
0 c 1 1 0
1 c 2 0 1
2 b 3 1 0
Notice that the B column is still included in the output, it just hasn’t
been encoded. You can drop B before calling get_dummies if you don’t
want to include it in the output.
As with the Series version, you can pass values for the prefix and
prefix_sep. By default the column name is used as the prefix, and _ as
the prefix separator. You can specify prefix and prefix_sep in 3 ways:
string: Use the same value for prefix or prefix_sep for each column
to be encoded.
list: Must be the same length as the number of columns being encoded.
dict: Mapping column name to prefix.
In [109]: simple = pd.get_dummies(df, prefix="new_prefix")
In [110]: simple
Out[110]:
C new_prefix_a new_prefix_b new_prefix_b new_prefix_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
In [111]: from_list = pd.get_dummies(df, prefix=["from_A", "from_B"])
In [112]: from_list
Out[112]:
C from_A_a from_A_b from_B_b from_B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
In [113]: from_dict = pd.get_dummies(df, prefix={"B": "from_B", "A": "from_A"})
In [114]: from_dict
Out[114]:
C from_A_a from_A_b from_B_b from_B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
Sometimes it will be useful to only keep k-1 levels of a categorical
variable to avoid collinearity when feeding the result to statistical models.
You can switch to this mode by turning on drop_first.
In [115]: s = pd.Series(list("abcaa"))
In [116]: pd.get_dummies(s)
Out[116]:
a b c
0 1 0 0
1 0 1 0
2 0 0 1
3 1 0 0
4 1 0 0
In [117]: pd.get_dummies(s, drop_first=True)
Out[117]:
b c
0 0 0
1 1 0
2 0 1
3 0 0
4 0 0
When a column contains only one level, it will be omitted in the result.
In [118]: df = pd.DataFrame({"A": list("aaaaa"), "B": list("ababc")})
In [119]: pd.get_dummies(df)
Out[119]:
A_a B_a B_b B_c
0 1 1 0 0
1 1 0 1 0
2 1 1 0 0
3 1 0 1 0
4 1 0 0 1
In [120]: pd.get_dummies(df, drop_first=True)
Out[120]:
B_b B_c
0 0 0
1 1 0
2 0 0
3 1 0
4 0 1
By default new columns will have np.uint8 dtype.
To choose another dtype, use the dtype argument:
In [121]: df = pd.DataFrame({"A": list("abc"), "B": [1.1, 2.2, 3.3]})
In [122]: pd.get_dummies(df, dtype=bool).dtypes
Out[122]:
B float64
A_a bool
A_b bool
A_c bool
dtype: object
New in version 1.5.0.
To convert a “dummy” or “indicator” DataFrame into a categorical DataFrame,
for example k columns of a DataFrame containing 1s and 0s, you can derive a
DataFrame which has k distinct values using
from_dummies():
In [123]: df = pd.DataFrame({"prefix_a": [0, 1, 0], "prefix_b": [1, 0, 1]})
In [124]: df
Out[124]:
prefix_a prefix_b
0 0 1
1 1 0
2 0 1
In [125]: pd.from_dummies(df, sep="_")
Out[125]:
prefix
0 b
1 a
2 b
Dummy-coded data only requires k - 1 categories to be included; in this case
the k-th category is the default category, implied by not being assigned any of
the other k - 1 categories, and it can be passed via default_category.
In [126]: df = pd.DataFrame({"prefix_a": [0, 1, 0]})
In [127]: df
Out[127]:
prefix_a
0 0
1 1
2 0
In [128]: pd.from_dummies(df, sep="_", default_category="b")
Out[128]:
prefix
0 b
1 a
2 b
Factorizing values#
To encode 1-d values as an enumerated type use factorize():
In [129]: x = pd.Series(["A", "A", np.nan, "B", 3.14, np.inf])
In [130]: x
Out[130]:
0 A
1 A
2 NaN
3 B
4 3.14
5 inf
dtype: object
In [131]: labels, uniques = pd.factorize(x)
In [132]: labels
Out[132]: array([ 0, 0, -1, 1, 2, 3])
In [133]: uniques
Out[133]: Index(['A', 'B', 3.14, inf], dtype='object')
Note that factorize() is similar to numpy.unique, but differs in its
handling of NaN:
Note
The following numpy.unique will fail under Python 3 with a TypeError
because of an ordering bug. See also
here.
In [134]: ser = pd.Series(['A', 'A', np.nan, 'B', 3.14, np.inf])
In [135]: pd.factorize(ser, sort=True)
Out[135]: (array([ 2, 2, -1, 3, 0, 1]), Index([3.14, inf, 'A', 'B'], dtype='object'))
In [136]: np.unique(ser, return_inverse=True)[::-1]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[136], line 1
----> 1 np.unique(ser, return_inverse=True)[::-1]
File <__array_function__ internals>:180, in unique(*args, **kwargs)
File ~/micromamba/envs/test/lib/python3.8/site-packages/numpy/lib/arraysetops.py:274, in unique(ar, return_index, return_inverse, return_counts, axis, equal_nan)
272 ar = np.asanyarray(ar)
273 if axis is None:
--> 274 ret = _unique1d(ar, return_index, return_inverse, return_counts,
275 equal_nan=equal_nan)
276 return _unpack_tuple(ret)
278 # axis was specified and not None
File ~/micromamba/envs/test/lib/python3.8/site-packages/numpy/lib/arraysetops.py:333, in _unique1d(ar, return_index, return_inverse, return_counts, equal_nan)
330 optional_indices = return_index or return_inverse
332 if optional_indices:
--> 333 perm = ar.argsort(kind='mergesort' if return_index else 'quicksort')
334 aux = ar[perm]
335 else:
TypeError: '<' not supported between instances of 'float' and 'str'
Note
If you just want to handle one column as a categorical variable (like R’s factor),
you can use df["cat_col"] = pd.Categorical(df["col"]) or
df["cat_col"] = df["col"].astype("category"). For full docs on Categorical,
see the Categorical introduction and the
API documentation.
Examples#
In this section, we will review frequently asked questions and examples. The
column names and relevant column values are named to correspond with how this
DataFrame will be pivoted in the answers below.
In [137]: np.random.seed([3, 1415])
In [138]: n = 20
In [139]: cols = np.array(["key", "row", "item", "col"])
In [140]: df = cols + pd.DataFrame(
.....: (np.random.randint(5, size=(n, 4)) // [2, 1, 2, 1]).astype(str)
.....: )
.....:
In [141]: df.columns = cols
In [142]: df = df.join(pd.DataFrame(np.random.rand(n, 2).round(2)).add_prefix("val"))
In [143]: df
Out[143]:
key row item col val0 val1
0 key0 row3 item1 col3 0.81 0.04
1 key1 row2 item1 col2 0.44 0.07
2 key1 row0 item1 col0 0.77 0.01
3 key0 row4 item0 col2 0.15 0.59
4 key1 row0 item2 col1 0.81 0.64
.. ... ... ... ... ... ...
15 key0 row3 item1 col1 0.31 0.23
16 key0 row0 item2 col3 0.86 0.01
17 key0 row4 item0 col3 0.64 0.21
18 key2 row2 item2 col0 0.13 0.45
19 key0 row2 item0 col4 0.37 0.70
[20 rows x 6 columns]
Pivoting with single aggregations#
Suppose we wanted to pivot df such that the col values are columns,
row values are the index, and the mean of val0 are the values? In
particular, the resulting DataFrame should look like:
col col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65
row2 0.13 NaN 0.395 0.500 0.25
row3 NaN 0.310 NaN 0.545 NaN
row4 NaN 0.100 0.395 0.760 0.24
This solution uses pivot_table(). Also note that
aggfunc='mean' is the default. It is included here to be explicit.
In [144]: df.pivot_table(values="val0", index="row", columns="col", aggfunc="mean")
Out[144]:
col col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65
row2 0.13 NaN 0.395 0.500 0.25
row3 NaN 0.310 NaN 0.545 NaN
row4 NaN 0.100 0.395 0.760 0.24
Note that we can also replace the missing values by using the fill_value
parameter.
In [145]: df.pivot_table(
.....: values="val0",
.....: index="row",
.....: columns="col",
.....: aggfunc="mean",
.....: fill_value=0,
.....: )
.....:
Out[145]:
col col0 col1 col2 col3 col4
row
row0 0.77 0.605 0.000 0.860 0.65
row2 0.13 0.000 0.395 0.500 0.25
row3 0.00 0.310 0.000 0.545 0.00
row4 0.00 0.100 0.395 0.760 0.24
Also note that we can pass in other aggregation functions as well. For example,
we can also pass in sum.
In [146]: df.pivot_table(
.....: values="val0",
.....: index="row",
.....: columns="col",
.....: aggfunc="sum",
.....: fill_value=0,
.....: )
.....:
Out[146]:
col col0 col1 col2 col3 col4
row
row0 0.77 1.21 0.00 0.86 0.65
row2 0.13 0.00 0.79 0.50 0.50
row3 0.00 0.31 0.00 1.09 0.00
row4 0.00 0.10 0.79 1.52 0.24
Another aggregation we can do is calculate the frequency in which the columns
and rows occur together a.k.a. “cross tabulation”. To do this, we can pass
size to the aggfunc parameter.
In [147]: df.pivot_table(index="row", columns="col", fill_value=0, aggfunc="size")
Out[147]:
col col0 col1 col2 col3 col4
row
row0 1 2 0 1 1
row2 1 0 2 1 2
row3 0 1 0 2 0
row4 0 1 2 2 1
Pivoting with multiple aggregations#
We can also perform multiple aggregations. For example, to perform both a
sum and mean, we can pass in a list to the aggfunc argument.
In [148]: df.pivot_table(
.....: values="val0",
.....: index="row",
.....: columns="col",
.....: aggfunc=["mean", "sum"],
.....: )
.....:
Out[148]:
mean sum
col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65 0.77 1.21 NaN 0.86 0.65
row2 0.13 NaN 0.395 0.500 0.25 0.13 NaN 0.79 0.50 0.50
row3 NaN 0.310 NaN 0.545 NaN NaN 0.31 NaN 1.09 NaN
row4 NaN 0.100 0.395 0.760 0.24 NaN 0.10 0.79 1.52 0.24
Note to aggregate over multiple value columns, we can pass in a list to the
values parameter.
In [149]: df.pivot_table(
.....: values=["val0", "val1"],
.....: index="row",
.....: columns="col",
.....: aggfunc=["mean"],
.....: )
.....:
Out[149]:
mean
val0 val1
col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65 0.01 0.745 NaN 0.010 0.02
row2 0.13 NaN 0.395 0.500 0.25 0.45 NaN 0.34 0.440 0.79
row3 NaN 0.310 NaN 0.545 NaN NaN 0.230 NaN 0.075 NaN
row4 NaN 0.100 0.395 0.760 0.24 NaN 0.070 0.42 0.300 0.46
Note to subdivide over multiple columns we can pass in a list to the
columns parameter.
In [150]: df.pivot_table(
.....: values=["val0"],
.....: index="row",
.....: columns=["item", "col"],
.....: aggfunc=["mean"],
.....: )
.....:
Out[150]:
mean
val0
item item0 item1 item2
col col2 col3 col4 col0 col1 col2 col3 col4 col0 col1 col3 col4
row
row0 NaN NaN NaN 0.77 NaN NaN NaN NaN NaN 0.605 0.86 0.65
row2 0.35 NaN 0.37 NaN NaN 0.44 NaN NaN 0.13 NaN 0.50 0.13
row3 NaN NaN NaN NaN 0.31 NaN 0.81 NaN NaN NaN 0.28 NaN
row4 0.15 0.64 NaN NaN 0.10 0.64 0.88 0.24 NaN NaN NaN NaN
Exploding a list-like column#
New in version 0.25.0.
Sometimes the values in a column are list-like.
In [151]: keys = ["panda1", "panda2", "panda3"]
In [152]: values = [["eats", "shoots"], ["shoots", "leaves"], ["eats", "leaves"]]
In [153]: df = pd.DataFrame({"keys": keys, "values": values})
In [154]: df
Out[154]:
keys values
0 panda1 [eats, shoots]
1 panda2 [shoots, leaves]
2 panda3 [eats, leaves]
We can ‘explode’ the values column, transforming each list-like to a separate row, by using explode(). This will replicate the index values from the original row:
In [155]: df["values"].explode()
Out[155]:
0 eats
0 shoots
1 shoots
1 leaves
2 eats
2 leaves
Name: values, dtype: object
You can also explode the column in the DataFrame.
In [156]: df.explode("values")
Out[156]:
keys values
0 panda1 eats
0 panda1 shoots
1 panda2 shoots
1 panda2 leaves
2 panda3 eats
2 panda3 leaves
Series.explode() will replace empty lists with np.nan and preserve scalar entries. The dtype of the resulting Series is always object.
In [157]: s = pd.Series([[1, 2, 3], "foo", [], ["a", "b"]])
In [158]: s
Out[158]:
0 [1, 2, 3]
1 foo
2 []
3 [a, b]
dtype: object
In [159]: s.explode()
Out[159]:
0 1
0 2
0 3
1 foo
2 NaN
3 a
3 b
dtype: object
Here is a typical usecase. You have comma separated strings in a column and want to expand this.
In [160]: df = pd.DataFrame([{"var1": "a,b,c", "var2": 1}, {"var1": "d,e,f", "var2": 2}])
In [161]: df
Out[161]:
var1 var2
0 a,b,c 1
1 d,e,f 2
Creating a long form DataFrame is now straightforward using explode and chained operations
In [162]: df.assign(var1=df.var1.str.split(",")).explode("var1")
Out[162]:
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
| 697
| 1,043
|
how to zip and also melt any number of columns in python
My table looks like this:
no type 2020-01-01 2020-01-02 2020-01-03 ...................
1 x 1 2 3
2 b 4 3 0
and what I want to do is to melt down the date columns and their values into separate new columns. I have done it, but I specified the columns that I want to melt like the script below:
cols_dict = dict(zip(df.iloc[:, 3:100].columns, df.iloc[:, 3:100].values[0]))
id_vars = [col for col in df.columns if isinstance(col, str)]
df = df.melt(id_vars = [col for col in df.columns if isinstance(col, str)], var_name = "date", value_name = 'value')
The expected result I want is:
no type date value
1 x 2020-01-01 1
1 x 2020-01-02 2
1 x 2020-01-03 3
2 b 2020-01-01 4
2 b 2020-01-02 3
2 b 2020-01-03 0
I assume that date columns will keep being added to the data frame as time goes by, so my script would no longer work once there are more than 100 date columns.
How should I write my script so it handles any number of date columns in the future, given that my current script can only access up to column number 100?
Thanks in advance.
|
61,011,328
|
df query on Timedelta column where duration <= 1 hour
|
<pre><code># query that fetches all items where duration <= 1 hour
df = df[(df['Td'].dt.total_seconds() <= 3600) & (df['Td'].dt.total_seconds() >= 0)]
</code></pre>
<p>For example, the above query excludes items that start on <code>01/01/20 23:30:00</code> and end on <code>01/02/20 00:18:00</code>, however they need to be included!</p>
<p>If I add additional condition <code>(df['Td'].dt.total_seconds() >= -3600)</code> to the above query it starts including items such as <code>pd.Timedelta(days=-1, hours=23)</code>.</p>
<p>How can I make sure that the only items I fetch are within the duration of 1 hour regardless of the day change that makes <code>pd.Timedelta(days=-1, hours=23)</code> evaluate to <code>hours=-1</code>?</p>
<p><strong>Example:</strong></p>
<p><code>-3600 <= pd.Timedelta(days=-1, hours=23).total_seconds() <= 3600
True</code></p>
<p>I don't want this included because 23 hours elapsed from the previous day not -3600 seconds/ -1 hours.</p>
| 61,778,679
| 2020-04-03T11:54:19.523000
| 1
| null | 0
| 116
|
python|pandas
|
<p>I ended up using unix timestamps. I subtracted a timedelta object representing the beginning of the epoch from the given timedelta and divided the result by 60 to get minutes.</p>
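<p>A rough sketch of that approach (the <code>start</code>/<code>end</code> column names are hypothetical, since the question only shows the derived <code>Td</code> column):</p>
<pre><code># convert the raw timestamps to unix seconds, then compute the duration in minutes
start_s = df['start'].astype('int64') // 10**9
end_s = df['end'].astype('int64') // 10**9
duration_min = (end_s - start_s) / 60
df = df[(duration_min >= 0) & (duration_min <= 60)]
</code></pre>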
| 2020-05-13T15:32:26.417000
| 0
|
https://pandas.pydata.org/docs/getting_started/intro_tutorials/09_timeseries.html
|
How to handle time series data with ease?#
In [1]: import pandas as pd
In [2]: import matplotlib.pyplot as plt
Data used for this tutorial:
Air quality data
For this tutorial, air quality data about \(NO_2\) and Particulate
matter less than 2.5 micrometers is used, made available by
OpenAQ and downloaded using the
py-openaq package.
I ended up using unix timestamps. I subtracted a timedelta object representing the beginning of the epoch from the given timedelta and divided the result by 60 to get minutes.
The air_quality_no2_long.csv" data set provides \(NO_2\) values
for the measurement stations FR04014, BETR801 and London
Westminster in respectively Paris, Antwerp and London.
To raw data
In [3]: air_quality = pd.read_csv("data/air_quality_no2_long.csv")
In [4]: air_quality = air_quality.rename(columns={"date.utc": "datetime"})
In [5]: air_quality.head()
Out[5]:
city country datetime location parameter value unit
0 Paris FR 2019-06-21 00:00:00+00:00 FR04014 no2 20.0 µg/m³
1 Paris FR 2019-06-20 23:00:00+00:00 FR04014 no2 21.8 µg/m³
2 Paris FR 2019-06-20 22:00:00+00:00 FR04014 no2 26.5 µg/m³
3 Paris FR 2019-06-20 21:00:00+00:00 FR04014 no2 24.9 µg/m³
4 Paris FR 2019-06-20 20:00:00+00:00 FR04014 no2 21.4 µg/m³
In [6]: air_quality.city.unique()
Out[6]: array(['Paris', 'Antwerpen', 'London'], dtype=object)
How to handle time series data with ease?#
Using pandas datetime properties#
I want to work with the dates in the column datetime as datetime objects instead of plain text
In [7]: air_quality["datetime"] = pd.to_datetime(air_quality["datetime"])
In [8]: air_quality["datetime"]
Out[8]:
0 2019-06-21 00:00:00+00:00
1 2019-06-20 23:00:00+00:00
2 2019-06-20 22:00:00+00:00
3 2019-06-20 21:00:00+00:00
4 2019-06-20 20:00:00+00:00
...
2063 2019-05-07 06:00:00+00:00
2064 2019-05-07 04:00:00+00:00
2065 2019-05-07 03:00:00+00:00
2066 2019-05-07 02:00:00+00:00
2067 2019-05-07 01:00:00+00:00
Name: datetime, Length: 2068, dtype: datetime64[ns, UTC]
Initially, the values in datetime are character strings and do not
provide any datetime operations (e.g. extract the year, day of the
week,…). By applying the to_datetime function, pandas interprets the
strings and converts these to datetime (i.e. datetime64[ns, UTC])
objects. In pandas we call these datetime objects, which are similar to
datetime.datetime from the standard library, pandas.Timestamp.
Note
As many data sets do contain datetime information in one of
the columns, pandas input functions like pandas.read_csv() and pandas.read_json()
can do the transformation to dates when reading the data using the
parse_dates parameter with a list of the columns to read as
Timestamp:
pd.read_csv("../data/air_quality_no2_long.csv", parse_dates=["datetime"])
Why are these pandas.Timestamp objects useful? Let’s illustrate the added
value with some example cases.
What is the start and end date of the time series data set we are working
with?
In [9]: air_quality["datetime"].min(), air_quality["datetime"].max()
Out[9]:
(Timestamp('2019-05-07 01:00:00+0000', tz='UTC'),
Timestamp('2019-06-21 00:00:00+0000', tz='UTC'))
Using pandas.Timestamp for datetimes enables us to calculate with date
information and make them comparable. Hence, we can use this to get the
length of our time series:
In [10]: air_quality["datetime"].max() - air_quality["datetime"].min()
Out[10]: Timedelta('44 days 23:00:00')
The result is a pandas.Timedelta object, similar to datetime.timedelta
from the standard Python library and defining a time duration.
To user guideThe various time concepts supported by pandas are explained in the user guide section on time related concepts.
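As a quick sketch (not part of the tutorial), a Timedelta supports arithmetic and unit conversions directly:
# length of the series expressed in hours
pd.Timedelta("44 days 23:00:00").total_seconds() / 3600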
I want to add a new column to the DataFrame containing only the month of the measurement
In [11]: air_quality["month"] = air_quality["datetime"].dt.month
In [12]: air_quality.head()
Out[12]:
city country datetime ... value unit month
0 Paris FR 2019-06-21 00:00:00+00:00 ... 20.0 µg/m³ 6
1 Paris FR 2019-06-20 23:00:00+00:00 ... 21.8 µg/m³ 6
2 Paris FR 2019-06-20 22:00:00+00:00 ... 26.5 µg/m³ 6
3 Paris FR 2019-06-20 21:00:00+00:00 ... 24.9 µg/m³ 6
4 Paris FR 2019-06-20 20:00:00+00:00 ... 21.4 µg/m³ 6
[5 rows x 8 columns]
By using Timestamp objects for dates, a lot of time-related
properties are provided by pandas. For example the month, but also
year, weekofyear, quarter,… All of these properties are
accessible by the dt accessor.
To user guideAn overview of the existing date properties is given in the
time and date components overview table. More details about the dt accessor
to return datetime like properties are explained in a dedicated section on the dt accessor.
What is the average \(NO_2\) concentration for each day of the week for each of the measurement locations?
In [13]: air_quality.groupby(
....: [air_quality["datetime"].dt.weekday, "location"])["value"].mean()
....:
Out[13]:
datetime location
0 BETR801 27.875000
FR04014 24.856250
London Westminster 23.969697
1 BETR801 22.214286
FR04014 30.999359
...
5 FR04014 25.266154
London Westminster 24.977612
6 BETR801 21.896552
FR04014 23.274306
London Westminster 24.859155
Name: value, Length: 21, dtype: float64
Remember the split-apply-combine pattern provided by groupby from the
tutorial on statistics calculation?
Here, we want to calculate a given statistic (e.g. mean \(NO_2\))
for each weekday and for each measurement location. To group on
weekdays, we use the datetime property weekday (with Monday=0 and
Sunday=6) of pandas Timestamp, which is also accessible by the
dt accessor. The grouping on both locations and weekdays can be done
to split the calculation of the mean on each of these combinations.
Danger
As we are working with a very short time series in these
examples, the analysis does not provide a long-term representative
result!
Plot the typical \(NO_2\) pattern during the day of our time series of all stations together. In other words, what is the average value for each hour of the day?
In [14]: fig, axs = plt.subplots(figsize=(12, 4))
In [15]: air_quality.groupby(air_quality["datetime"].dt.hour)["value"].mean().plot(
....: kind='bar', rot=0, ax=axs
....: )
....:
Out[15]: <AxesSubplot: xlabel='datetime'>
In [16]: plt.xlabel("Hour of the day"); # custom x label using Matplotlib
In [17]: plt.ylabel("$NO_2 (µg/m^3)$");
Similar to the previous case, we want to calculate a given statistic
(e.g. mean \(NO_2\)) for each hour of the day and we can use the
split-apply-combine approach again. For this case, we use the datetime property hour
of pandas Timestamp, which is also accessible by the dt accessor.
Datetime as index#
In the tutorial on reshaping,
pivot() was introduced to reshape the data table with each of the
measurements locations as a separate column:
In [18]: no_2 = air_quality.pivot(index="datetime", columns="location", values="value")
In [19]: no_2.head()
Out[19]:
location BETR801 FR04014 London Westminster
datetime
2019-05-07 01:00:00+00:00 50.5 25.0 23.0
2019-05-07 02:00:00+00:00 45.0 27.7 19.0
2019-05-07 03:00:00+00:00 NaN 50.4 19.0
2019-05-07 04:00:00+00:00 NaN 61.9 16.0
2019-05-07 05:00:00+00:00 NaN 72.4 NaN
Note
By pivoting the data, the datetime information became the
index of the table. In general, setting a column as an index can be
achieved by the set_index function.
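For instance (a sketch, not part of the tutorial), the datetime column could also be made the index directly:
air_quality_indexed = air_quality.set_index("datetime")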
Working with a datetime index (i.e. DatetimeIndex) provides powerful
functionalities. For example, we do not need the dt accessor to get
the time series properties, but have these properties available on the
index directly:
In [20]: no_2.index.year, no_2.index.weekday
Out[20]:
(Int64Index([2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019,
...
2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019],
dtype='int64', name='datetime', length=1033),
Int64Index([1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
...
3, 3, 3, 3, 3, 3, 3, 3, 3, 4],
dtype='int64', name='datetime', length=1033))
Some other advantages are the convenient subsetting of time period or
the adapted time scale on plots. Let’s apply this on our data.
Create a plot of the \(NO_2\) values in the different stations from the 20th of May till the end of 21st of May
In [21]: no_2["2019-05-20":"2019-05-21"].plot();
By providing a string that parses to a datetime, a specific subset of the data can be selected on a DatetimeIndex.
To user guideMore information on the DatetimeIndex and the slicing by using strings is provided in the section on time series indexing.
Resample a time series to another frequency#
Aggregate the current hourly time series values to the monthly maximum value in each of the stations.
In [22]: monthly_max = no_2.resample("M").max()
In [23]: monthly_max
Out[23]:
location BETR801 FR04014 London Westminster
datetime
2019-05-31 00:00:00+00:00 74.5 97.0 97.0
2019-06-30 00:00:00+00:00 52.5 84.7 52.0
A very powerful method on time series data with a datetime index is the
ability to resample() time series to another frequency (e.g.,
converting secondly data into 5-minutely data).
The resample() method is similar to a groupby operation:
it provides a time-based grouping, by using a string (e.g. M,
5H,…) that defines the target frequency
it requires an aggregation function such as mean, max,…
To user guideAn overview of the aliases used to define time series frequencies is given in the offset aliases overview table.
When defined, the frequency of the time series is provided by the
freq attribute:
In [24]: monthly_max.index.freq
Out[24]: <MonthEnd>
Make a plot of the daily mean \(NO_2\) value in each of the stations.
In [25]: no_2.resample("D").mean().plot(style="-o", figsize=(10, 5));
To user guideMore details on the power of time series resampling is provided in the user guide section on resampling.
REMEMBER
Valid date strings can be converted to datetime objects using
to_datetime function or as part of read functions.
Datetime objects in pandas support calculations, logical operations
and convenient date-related properties using the dt accessor.
A DatetimeIndex contains these date-related properties and
supports convenient slicing.
Resample is a powerful method to change the frequency of a time
series.
To user guideA full overview on time series is given on the pages on time series and date functionality.
| 350
| 501
|
df query on Timedelta column where duration <= 1 hour
# query that fetches all items where duration <= 1 hour
df = df[(df['Td'].dt.total_seconds() <= 3600) & (df['Td'].dt.total_seconds() >= 0)]
For example, the above query excludes items that start on 01/01/20 23:30:00 and end on 01/02/20 00:18:00, however they need to be included!
If I add additional condition (df['Td'].dt.total_seconds() >= -3600) to the above query it starts including items such as pd.Timedelta(days=-1, hours=23).
How can I make sure that the only items I fetch are within the duration of 1 hour regardless of the day change that makes pd.Timedelta(days=-1, hours=23) evaluate to hours=-1?
Example:
-3600 <= pd.Timedelta(days=-1, hours=23).total_seconds() <= 3600
True
I don't want this included because 23 hours elapsed from the previous day not -3600 seconds/ -1 hours.
|
69,814,139
|
Python Pandas read_excel without converting int to float
|
<p>I'm reading an Excel file:</p>
<pre><code>df = pd.read_excel(r'C:\test.xlsx', 'Sheet0', skiprows = 1)
</code></pre>
<p>The Excel file contains a column formatted as General with a value like "405788"; after reading it with pandas, the output looks like "405788.0", so it is converted to float. I need every value as a string without changing the values. Can someone help me out with this?</p>
<p>[Edit]</p>
<p>If i copy the values in a new Excel file and load this, the integers does not get converted to float. But i need to get the Values correct of the original file, so is there anything i can do?</p>
<p>Options dtype and converted changes the type as i need in str but as a floating number with .0</p>
| 69,814,212
| 2021-11-02T16:48:05.107000
| 1
| null | 0
| 1,141
|
python|pandas
|
<p>You can try to use the dtype argument of the read_excel method.</p>
<pre><code>df = pd.read_excel(r'C:\test.xlsx', 'Sheet0', skiprows = 1,
dtype={'Name': str, 'Value': str})
</code></pre>
<p>More information in the pandas docs:
<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html</a></p>
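<p>If <code>dtype=str</code> still yields strings such as "405788.0" (because Excel stores the cell as a float), a <code>converters</code>-based variant is one possible workaround (this is only a sketch, not part of the accepted answer):</p>
<pre><code>def as_clean_str(x):
    # hypothetical helper: drop the trailing ".0" from integral floats
    if isinstance(x, float) and x.is_integer():
        return str(int(x))
    return str(x)

df = pd.read_excel(r'C:\test.xlsx', 'Sheet0', skiprows=1,
                   converters={'Value': as_clean_str})
</code></pre>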
| 2021-11-02T16:53:45.227000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.read_excel.html
|
pandas.read_excel#
pandas.read_excel#
pandas.read_excel(io, sheet_name=0, *, header=0, names=None, index_col=None, usecols=None, squeeze=None, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skiprows=None, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, parse_dates=False, date_parser=None, thousands=None, decimal='.', comment=None, skipfooter=0, convert_float=None, mangle_dupe_cols=True, storage_options=None)[source]#
Read an Excel file into a pandas DataFrame.
Supports xls, xlsx, xlsm, xlsb, odf, ods and odt file extensions
You can try to use the dtype argument of the read_excel method.
df = pd.read_excel(r'C:\test.xlsx', 'Sheet0', skiprows = 1,
dtype={'Name': str, 'Value': str})
More information in the pandas docs:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html
read from a local filesystem or URL. Supports an option to read
a single sheet or a list of sheets.
Parameters
iostr, bytes, ExcelFile, xlrd.Book, path object, or file-like objectAny valid string path is acceptable. The string could be a URL. Valid
URL schemes include http, ftp, s3, and file. For file URLs, a host is
expected. A local file could be: file://localhost/path/to/table.xlsx.
If you want to pass in a path object, pandas accepts any os.PathLike.
By file-like object, we refer to objects with a read() method,
such as a file handle (e.g. via builtin open function)
or StringIO.
sheet_namestr, int, list, or None, default 0Strings are used for sheet names. Integers are used in zero-indexed
sheet positions (chart sheets do not count as a sheet position).
Lists of strings/integers are used to request multiple sheets.
Specify None to get all worksheets.
Available cases:
Defaults to 0: 1st sheet as a DataFrame
1: 2nd sheet as a DataFrame
"Sheet1": Load sheet with name “Sheet1”
[0, 1, "Sheet5"]: Load first, second and sheet named “Sheet5”
as a dict of DataFrame
None: All worksheets.
headerint, list of int, default 0Row (0-indexed) to use for the column labels of the parsed
DataFrame. If a list of integers is passed those row positions will
be combined into a MultiIndex. Use None if there is no header.
namesarray-like, default NoneList of column names to use. If file contains no header row,
then you should explicitly pass header=None.
index_colint, list of int, default NoneColumn (0-indexed) to use as the row labels of the DataFrame.
Pass None if there is no such column. If a list is passed,
those columns will be combined into a MultiIndex. If a
subset of data is selected with usecols, index_col
is based on the subset.
Missing values will be forward filled to allow roundtripping with
to_excel for merged_cells=True. To avoid forward filling the
missing values use set_index after reading the data instead of
index_col.
usecolsstr, list-like, or callable, default None
If None, then parse all columns.
If str, then indicates comma separated list of Excel column letters
and column ranges (e.g. “A:E” or “A,C,E:F”). Ranges are inclusive of
both sides.
If list of int, then indicates list of column numbers to be parsed
(0-indexed).
If list of string, then indicates list of column names to be parsed.
If callable, then evaluate each column name against it and parse the
column if the callable returns True.
Returns a subset of the columns according to behavior above.
squeezebool, default FalseIf the parsed data only contains one column then return a Series.
Deprecated since version 1.4.0: Append .squeeze("columns") to the call to read_excel to squeeze
the data.
dtypeType name or dict of column -> type, default NoneData type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32}
Use object to preserve data as stored in Excel and not interpret dtype.
If converters are specified, they will be applied INSTEAD
of dtype conversion.
enginestr, default NoneIf io is not a buffer or path, this must be set to identify io.
Supported engines: “xlrd”, “openpyxl”, “odf”, “pyxlsb”.
Engine compatibility :
“xlrd” supports old-style Excel files (.xls).
“openpyxl” supports newer Excel file formats.
“odf” supports OpenDocument file formats (.odf, .ods, .odt).
“pyxlsb” supports Binary Excel files.
Changed in version 1.2.0: The engine xlrd
now only supports old-style .xls files.
When engine=None, the following logic will be
used to determine the engine:
If path_or_buffer is an OpenDocument format (.odf, .ods, .odt),
then odf will be used.
Otherwise if path_or_buffer is an xls format,
xlrd will be used.
Otherwise if path_or_buffer is in xlsb format,
pyxlsb will be used.
New in version 1.3.0.
Otherwise openpyxl will be used.
Changed in version 1.3.0.
convertersdict, default NoneDict of functions for converting values in certain columns. Keys can
either be integers or column labels, values are functions that take one
input argument, the Excel cell content, and return the transformed
content.
true_valueslist, default NoneValues to consider as True.
false_valueslist, default NoneValues to consider as False.
skiprowslist-like, int, or callable, optionalLine numbers to skip (0-indexed) or number of lines to skip (int) at the
start of the file. If callable, the callable function will be evaluated
against the row indices, returning True if the row should be skipped and
False otherwise. An example of a valid callable argument would be lambda
x: x in [0, 2].
nrowsint, default NoneNumber of rows to parse.
na_valuesscalar, str, list-like, or dict, default NoneAdditional strings to recognize as NA/NaN. If dict passed, specific
per-column NA values. By default the following values are interpreted
as NaN: ‘’, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’,
‘1.#IND’, ‘1.#QNAN’, ‘<NA>’, ‘N/A’, ‘NA’, ‘NULL’, ‘NaN’, ‘n/a’,
‘nan’, ‘null’.
keep_default_nabool, default TrueWhether or not to include the default NaN values when parsing the data.
Depending on whether na_values is passed in, the behavior is as follows:
If keep_default_na is True, and na_values are specified, na_values
is appended to the default NaN values used for parsing.
If keep_default_na is True, and na_values are not specified, only
the default NaN values are used for parsing.
If keep_default_na is False, and na_values are specified, only
the NaN values specified na_values are used for parsing.
If keep_default_na is False, and na_values are not specified, no
strings will be parsed as NaN.
Note that if na_filter is passed in as False, the keep_default_na and
na_values parameters will be ignored.
na_filterbool, default TrueDetect missing value markers (empty strings and the value of na_values). In
data without any NAs, passing na_filter=False can improve the performance
of reading a large file.
verbosebool, default FalseIndicate number of NA values placed in non-numeric columns.
parse_datesbool, list-like, or dict, default FalseThe behavior is as follows:
bool. If True -> try parsing the index.
list of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3
each as a separate date column.
list of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and parse as
a single date column.
dict, e.g. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call
result ‘foo’
If a column or index contains an unparsable date, the entire column or
index will be returned unaltered as an object data type. If you don`t want to
parse some cells as date just change their type in Excel to “Text”.
For non-standard datetime parsing, use pd.to_datetime after pd.read_excel.
Note: A fast-path exists for iso8601-formatted dates.
date_parserfunction, optionalFunction to use for converting a sequence of string columns to an array of
datetime instances. The default uses dateutil.parser.parser to do the
conversion. Pandas will try to call date_parser in three different ways,
advancing to the next if an exception occurs: 1) Pass one or more arrays
(as defined by parse_dates) as arguments; 2) concatenate (row-wise) the
string values from the columns defined by parse_dates into a single array
and pass that; and 3) call date_parser once for each row using one or
more strings (corresponding to the columns defined by parse_dates) as
arguments.
thousandsstr, default NoneThousands separator for parsing string columns to numeric. Note that
this parameter is only necessary for columns stored as TEXT in Excel,
any numeric columns will automatically be parsed, regardless of display
format.
decimalstr, default ‘.’Character to recognize as decimal point for parsing string columns to numeric.
Note that this parameter is only necessary for columns stored as TEXT in Excel,
any numeric columns will automatically be parsed, regardless of display
format.(e.g. use ‘,’ for European data).
New in version 1.4.0.
commentstr, default NoneComments out remainder of line. Pass a character or characters to this
argument to indicate comments in the input file. Any data between the
comment string and the end of the current line is ignored.
skipfooterint, default 0Rows at the end to skip (0-indexed).
convert_floatbool, default TrueConvert integral floats to int (i.e., 1.0 –> 1). If False, all numeric
data will be read in as floats: Excel stores all numbers as floats
internally.
Deprecated since version 1.3.0: convert_float will be removed in a future version
mangle_dupe_colsbool, default TrueDuplicate columns will be specified as ‘X’, ‘X.1’, …’X.N’, rather than
‘X’…’X’. Passing in False will cause data to be overwritten if there
are duplicate names in the columns.
Deprecated since version 1.5.0: Not implemented, and a new argument to specify the pattern for the
names of duplicated columns will be added instead
storage_optionsdict, optionalExtra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to urllib.request.Request as header options. For other
URLs (e.g. starting with “s3://”, and “gcs://”) the key-value pairs are
forwarded to fsspec.open. Please see fsspec and urllib for more
details, and for more examples on storage options refer here.
New in version 1.2.0.
Returns
DataFrame or dict of DataFramesDataFrame from the passed in Excel file. See notes in sheet_name
argument for more information on when a dict of DataFrames is returned.
See also
DataFrame.to_excelWrite DataFrame to an Excel file.
DataFrame.to_csvWrite DataFrame to a comma-separated values (csv) file.
read_csvRead a comma-separated values (csv) file into DataFrame.
read_fwfRead a table of fixed-width formatted lines into DataFrame.
Examples
The file can be read using the file name as string or an open file object:
>>> pd.read_excel('tmp.xlsx', index_col=0)
Name Value
0 string1 1
1 string2 2
2 #Comment 3
>>> pd.read_excel(open('tmp.xlsx', 'rb'),
... sheet_name='Sheet3')
Unnamed: 0 Name Value
0 0 string1 1
1 1 string2 2
2 2 #Comment 3
Index and header can be specified via the index_col and header arguments
>>> pd.read_excel('tmp.xlsx', index_col=None, header=None)
0 1 2
0 NaN Name Value
1 0.0 string1 1
2 1.0 string2 2
3 2.0 #Comment 3
Column types are inferred but can be explicitly specified
>>> pd.read_excel('tmp.xlsx', index_col=0,
... dtype={'Name': str, 'Value': float})
Name Value
0 string1 1.0
1 string2 2.0
2 #Comment 3.0
True, False, and NA values, and thousands separators have defaults,
but can be explicitly specified, too. Supply the values you would like
as strings or lists of strings!
>>> pd.read_excel('tmp.xlsx', index_col=0,
... na_values=['string1', 'string2'])
Name Value
0 NaN 1
1 NaN 2
2 #Comment 3
Comment lines in the excel input file can be skipped using the comment kwarg
>>> pd.read_excel('tmp.xlsx', index_col=0, comment='#')
Name Value
0 string1 1.0
1 string2 2.0
2 None NaN
| 599
| 878
|
Python Pandas read_excel without converting int to float
I'm reading an Excel file:
df = pd.read_excel(r'C:\test.xlsx', 'Sheet0', skiprows = 1)
The Excel file contains a column formatted as General with a value like "405788". After reading it with pandas, the output looks like "405788.0", so it is converted to float. I need every value as a string without changing the values; can someone help me out with this?
[Edit]
If I copy the values into a new Excel file and load that, the integers do not get converted to float. But I need the correct values from the original file, so is there anything I can do?
The dtype and converters options give me the str type I need, but the value is still a floating number with .0
|
63,431,953
|
Filtering out words having only 1 letter
|
<p>Could you please help me to understand how to extract only words with length greater than 1?</p>
<pre><code>WORD
TPI is a new program
as E stands for Eimear
your are using an extra L
</code></pre>
<p>The code below selects upper-case words/letters:</p>
<pre><code>dt['WORD'].str.extractall(r'([A-Z]+)')
</code></pre>
<p>The problem is that I would like to keep only matches with more than one letter (TPI) and not (TPI, E, L).</p>
<p>How can I get these words (TPI)?</p>
| 63,431,963
| 2020-08-16T00:15:38.657000
| 2
| null | 1
| 126
|
python|pandas
|
<p>Check <code>findall</code></p>
<pre><code>df.WORD.str.findall(r'([A-Z]{2,})')
Out[120]:
0 [TPI]
1 []
2 []
Name: WORD, dtype: object
</code></pre>
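<p>If you prefer to keep <code>extractall</code> from the question, the same length requirement can be expressed there as well (a sketch, not part of the original answer):</p>
<pre><code>dt['WORD'].str.extractall(r'([A-Z]{2,})')
</code></pre>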
| 2020-08-16T00:18:22.103000
| 0
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
Check findall
df.WORD.str.findall(r'([A-Z]{2,})')
Out[120]:
0 [TPI]
1 []
2 []
Name: WORD, dtype: object
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures are generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
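A rough pandas counterpart of that query (a sketch only; SomeTable and the column names are placeholders, not objects defined in this guide) could be written as:
SomeTable.groupby(["Column1", "Column2"]).agg(
    {"Column3": "mean", "Column4": "sum"}
)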
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
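For instance, the dict or Series option maps axis labels to group names (a small sketch using a hypothetical DataFrame indexed by city names):
df_cities = pd.DataFrame({"population": [8.4, 3.9, 0.6]},
                         index=["London", "Tokyo", "Leeds"])
region = {"London": "Europe", "Tokyo": "Asia", "Leeds": "Europe"}
df_cities.groupby(region).sum()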
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of dropna argument is True which means NA are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function
Description
mean()
Compute mean of groups
sum()
Compute sum of group values
size()
Compute group sizes
count()
Compute count of group
std()
Standard deviation of groups
var()
Compute variance of groups
sem()
Standard error of the mean of groups
describe()
Generates descriptive statistics
first()
Compute first of group values
last()
Compute last of group values
nth()
Take nth value, or a subset if n is a list
min()
Compute min of group values
max()
Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work,
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
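For example (a sketch reusing the animals frame from above), a quantile with a fixed q can be partially applied before being handed to named aggregation:
import functools

q90 = functools.partial(pd.Series.quantile, q=0.9)
animals.groupby("kind").agg(height_q90=("height", q90))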
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operate on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, but returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed objects where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it and on exactly what you are grouping. Thus
the grouped column(s) may be included in the output and may also set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
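The function above is not invoked in this excerpt; applying it would look like the following sketch (the per-group DataFrames are glued back into one result):
grouped.apply(f)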
apply on a Series can operate on a returned value from the applied function,
that is itself a series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
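As a minimal sketch (assuming the optional Numba dependency is installed; the demeaning function below is only illustrative), a conforming UDF looks like this:
def group_demean(values, index):
    # values: NumPy array with one column's data for a single group
    # index: NumPy array with that group's index labels
    return values - values.mean()

df.groupby("A")[["C", "D"]].transform(group_demean, engine="numba")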
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only interesting over one column (here colname), it may be filtered
before applying the aggregation function.
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a “nuisance” column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The dtype of the returned group index will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
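A minimal sketch of this behavior (the column names are illustrative only):
import numpy as np
import pandas as pd

df = pd.DataFrame({"key": ["x", np.nan, "x"], "val": [1, 2, 3]})
df.groupby("key").sum()   # only the "x" group remains; the NaN-keyed row is dropped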
Grouping with ordered factors#
Categorical variables represented as instance of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group, in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resampling to work on indices that are non-datetimelike, the following procedure can be utilized.
In the following examples, df.index // 5 returns an array of integer group labels (here 0 and 1) which is used to determine what gets selected for the groupby operation.
Note
The below example shows how we can downsample by consolidation of samples into fewer samples. Here by using df.index // 5, we are aggregating the samples in bins. By applying std() function, we aggregate the information contained in many samples into a small subset of values which is their standard deviation thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 975
| 1,095
|
Filtering out words having only 1 letter
Could you please help me understand how to extract only words with length greater than 1?
WORD
TPI is a new program
as E stands for Eimear
your are using an extra L
The code below selects upper-case words/letters:
dt['WORD'].str.extractall(r'([A-Z]+)')
The problem is that I would like to keep only the words with more than one letter (TPI), not all of (TPI, E, L).
How can I get these words (TPI)?
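One possible approach (not from the original post) is to require at least two consecutive capital letters in the pattern:
dt['WORD'].str.extractall(r'([A-Z]{2,})')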
|
68,623,519
|
Error when adding a new column to pandas dataframe using a rolling mean function
|
<p>I have a script where I download some fx rates from the web and would like to calculate the rolling mean. When running the script, I obtain an error in relation to the rates column that I am trying to calculate the rolling mean for. I would like to produce an extra column with the rolling average displayed. Here is what I have so far. The last 3 lines above the comments are where the error seems to be.</p>
<p>Now I get the following error "KeyError: 'rates'"</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
url1 = 'http://www.bankofcanada.ca/'
url2 = 'valet/observations/group/FX_RATES_DAILY/csv?start_date='
start_date = '2017-01-03' # Earliest start date is 2017-01-03
url = url1 + url2 + start_date # Complete url to download csv file
# Read in rates for different currencies for a range of dates
rates = pd.read_csv(url, skiprows=39, index_col='date')
rates.index = pd.to_datetime(rates.index) # assures data type to be a datetime
print("The pandas dataframe with the rates ")
print(rates)
# Get number of days & number of currences from shape of rates - returns a tuple in the
#format (rows, columns)
days, currencies = rates.shape
# Read in the currency codes & strip off extraneous part. Uses url string, skips the first
#10 rows and returns to the data frame columns of index 0 and 2. It will read n rows according
# to the variable currencies. This was returned in line 19 from a tuple produced by .shape
codes = pd.read_csv(url, skiprows=10, usecols=[0,2],
nrows=currencies)
#Print out the dataframe read from the web
print("Dataframe with the codes")
print(codes)
#A for loop to goe through the codes dataframe. For each ith row and for the index 1 column,
# the for loop will split the string with a string 'to Canadian'
for i in range(currencies):
codes.iloc[i, 1] = codes.iloc[i, 1].split(' to Canadian')[0]
# Report exchange rates for the most most recent date available
date = rates.index[-1] # most recent date available
print('\nCurrency values on {0}'.format(date))
#Using a for loop and zip, the values in the code and rate objects are grouped together
# and then printed to the screen with a new format
for (code, rate) in zip(codes.iloc[:, 1], rates.loc[date]):
print("{0:20s} Can$ {1:8.6g}".format(code, rate))
#Assign values into a dataframe/slice rates dataframe
FXAUDCAD_daily = pd.DataFrame(index=['dates'], columns={'dates', 'rates'})
FXAUDCAD_daily = FXAUDCAD
FXAUDCAD_daily['rolling mean'] = FXAUDCAD_daily.loc['rates'].rolling_mean()
print(FXAUDCAD_daily)
#Print the values to the screen
#Calculate the rolling average using the rolling average pandas function
#Create a figure object using matplotlib/pandas
#Plot values on figure on the figure object.
</code></pre>
<p>Using the feedback, I made the following updated code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import datetime
url1 = 'http://www.bankofcanada.ca/'
url2 = 'valet/observations/group/FX_RATES_DAILY/csv?start_date='
start_date = '2017-01-03' # Earliest start date is 2017-01-03
url = url1 + url2 + start_date # Complete url to download csv file
# Read in rates for different currencies for a range of dates
rates = pd.read_csv(url, skiprows=39, index_col='date')
rates.index = pd.to_datetime(rates.index) # assures data type to be a datetime
#print("The pandas dataframe with the rates ")
#print(rates)
# Get number of days & number of currences from shape of rates - returns
#a tuple in the
#format (rows, columns)
days, currencies = rates.shape
# Read in the currency codes & strip off extraneous part. Uses url string, skips the first
#10 rows and returns to the data frame columns of index 0 and 2. It will
#read n rows according
# to the variable currencies. This was returned in line 19 from a tuple
#produced by .shape
codes = pd.read_csv(url, skiprows=10, usecols=[0,2],
nrows=currencies)
#Print out the dataframe read from the web
#print("Dataframe with the codes")
#print(codes)
#A for loop to goe through the codes dataframe. For each ith row and for
#the index 1 column,
# the for loop will split the string with a string 'to Canadian'
for i in range(currencies):
codes.iloc[i, 1] = codes.iloc[i, 1].split(' to Canadian')[0]
# Report exchange rates for the most most recent date available
date = rates.index[-1] # most recent date available
#print('\nCurrency values on {0}'.format(date))
#Using a for loop and zip, the values in the code and rate objects are grouped together
# and then printed to the screen with a new format
#for (code, rate) in zip(codes.iloc[:, 1], rates.loc[date]):
#print("{0:20s} Can$ {1:8.6g}".format(code, rate))
#Create dataframe with columns of date and raters
#Assign values into a dataframe/slice rates dataframe
FXAUDCAD_daily = pd.DataFrame(index=['date'], columns={'date', 'rates'})
FXAUDCAD_daily = rates['FXAUDCAD']
print(FXAUDCAD_daily)
FXAUDCAD_daily['rolling mean'] = FXAUDCAD_daily['rates'].rolling(1).mean()
</code></pre>
| 68,756,201
| 2021-08-02T14:40:18.813000
| 3
| null | 0
| 132
|
python|pandas
|
<p>I managed to solve it. When I sliced the original dataframe rates into FXAUDCAD_daily, it already came with the same date index. I was getting a key error because the currency abbreviation (FXAUDCAD), not the string 'rate', was used as the name of the column.</p>
<p>But now I have another trivial problem: how do I rename the FXAUDCAD column to just rate? I will post another question for this.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import datetime
url1 = 'http://www.bankofcanada.ca/'
url2 = 'valet/observations/group/FX_RATES_DAILY/csv?start_date='
start_date = '2017-01-03'
url = url1 + url2 + start_date
rates = pd.read_csv(url, skiprows=39, index_col='date')
rates.index = pd.to_datetime(rates.index) # assures data type to be a datetime
print("Print rates to the screen",rates)
#print index
print("Print index to the screen", rates.index)
days, currencies = rates.shape
codes = pd.read_csv(url, skiprows=10, usecols=[0,2],
nrows=currencies)
for i in range(currencies):
codes.iloc[i, 1] = codes.iloc[i, 1].split(' to Canadian')[0]
#date = rates.index[-1]
#Make a series of just the rates of FXAUDCAD
FXAUDCAD_daily = pd.DataFrame(rates['FXAUDCAD'])
#Print FXAUDRATES to the screen
print(FXAUDCAD_daily)
#Calculate the MA using the rolling function with a window size of 1
FXAUDCAD_daily['rolling mean'] = FXAUDCAD_daily['FXAUDCAD'].rolling(1).mean()
#print out the new dataframe with calculation
print(FXAUDCAD_daily)
#Rename one of the data frame from FXAUDCAD to Exchange Rate
FXAUDCAD_daily.rename(columns={'rate':'FXAUDCAD'})
#print out the new dataframe with calculation
print(FXAUDCAD_daily)
</code></pre>
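<p>As a side note on the renaming: the mapping goes old name to new name, and <code>rename</code> returns a new DataFrame rather than modifying it in place, so the result has to be assigned back. A one-line sketch:</p>
<pre><code>FXAUDCAD_daily = FXAUDCAD_daily.rename(columns={'FXAUDCAD': 'rate'})
</code></pre>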
| 2021-08-12T11:02:18.320000
| 0
|
https://pandas.pydata.org/docs/whatsnew/v1.3.0.html
|
I managed to solve it. When I sliced the original dataframe rates into FXAUDCAD_daily, it already came with the same date index. I was getting a key error because the currency abbreviation (FXAUDCAD), not the string 'rate', was used as the name of the column.
But now I have another trivial problem: how do I rename the FXAUDCAD column to just rate? I will post another question for this.
import pandas as pd
import matplotlib.pyplot as plt
import datetime
url1 = 'http://www.bankofcanada.ca/'
url2 = 'valet/observations/group/FX_RATES_DAILY/csv?start_date='
start_date = '2017-01-03'
url = url1 + url2 + start_date
rates = pd.read_csv(url, skiprows=39, index_col='date')
rates.index = pd.to_datetime(rates.index) # assures data type to be a datetime
print("Print rates to the screen",rates)
#print index
print("Print index to the screen", rates.index)
days, currencies = rates.shape
codes = pd.read_csv(url, skiprows=10, usecols=[0,2],
nrows=currencies)
for i in range(currencies):
codes.iloc[i, 1] = codes.iloc[i, 1].split(' to Canadian')[0]
#date = rates.index[-1]
#Make a series of just the rates of FXAUDCAD
FXAUDCAD_daily = pd.DataFrame(rates['FXAUDCAD'])
#Print FXAUDRATES to the screen
print(FXAUDCAD_daily)
#Calculate the MA using the rolling function with a window size of 1
FXAUDCAD_daily['rolling mean'] = FXAUDCAD_daily['FXAUDCAD'].rolling(1).mean()
#print out the new dataframe with calculation
print(FXAUDCAD_daily)
#Rename one of the data frame from FXAUDCAD to Exchange Rate
FXAUDCAD_daily.rename(columns={'rate':'FXAUDCAD'})
#print out the new dataframe with calculation
print(FXAUDCAD_daily)
| 0
| 1,676
|
Error when adding a new column to pandas dataframe using a rolling mean function
I have a script where I download some fx rates from the web and would like to calculate the rolling mean. When running the script, I obtain an error in relation to the rates column that I am trying to calculate the rolling mean for. I would like to produce an extra column with the rolling average displayed. Here is what I have so far. The last 3 lines above the comments are where the error seems to be.
Now I get the following error "KeyError: 'rates'"
import pandas as pd
import matplotlib.pyplot as plt
url1 = 'http://www.bankofcanada.ca/'
url2 = 'valet/observations/group/FX_RATES_DAILY/csv?start_date='
start_date = '2017-01-03' # Earliest start date is 2017-01-03
url = url1 + url2 + start_date # Complete url to download csv file
# Read in rates for different currencies for a range of dates
rates = pd.read_csv(url, skiprows=39, index_col='date')
rates.index = pd.to_datetime(rates.index) # assures data type to be a datetime
print("The pandas dataframe with the rates ")
print(rates)
# Get number of days & number of currences from shape of rates - returns a tuple in the
#format (rows, columns)
days, currencies = rates.shape
# Read in the currency codes & strip off extraneous part. Uses url string, skips the first
#10 rows and returns to the data frame columns of index 0 and 2. It will read n rows according
# to the variable currencies. This was returned in line 19 from a tuple produced by .shape
codes = pd.read_csv(url, skiprows=10, usecols=[0,2],
nrows=currencies)
#Print out the dataframe read from the web
print("Dataframe with the codes")
print(codes)
#A for loop to goe through the codes dataframe. For each ith row and for the index 1 column,
# the for loop will split the string with a string 'to Canadian'
for i in range(currencies):
codes.iloc[i, 1] = codes.iloc[i, 1].split(' to Canadian')[0]
# Report exchange rates for the most most recent date available
date = rates.index[-1] # most recent date available
print('\nCurrency values on {0}'.format(date))
#Using a for loop and zip, the values in the code and rate objects are grouped together
# and then printed to the screen with a new format
for (code, rate) in zip(codes.iloc[:, 1], rates.loc[date]):
print("{0:20s} Can$ {1:8.6g}".format(code, rate))
#Assign values into a dataframe/slice rates dataframe
FXAUDCAD_daily = pd.DataFrame(index=['dates'], columns={'dates', 'rates'})
FXAUDCAD_daily = FXAUDCAD
FXAUDCAD_daily['rolling mean'] = FXAUDCAD_daily.loc['rates'].rolling_mean()
print(FXAUDCAD_daily)
#Print the values to the screen
#Calculate the rolling average using the rolling average pandas function
#Create a figure object using matplotlib/pandas
#Plot values on figure on the figure object.
Using the feedback, I made the following updated code:
import pandas as pd
import matplotlib.pyplot as plt
import datetime
url1 = 'http://www.bankofcanada.ca/'
url2 = 'valet/observations/group/FX_RATES_DAILY/csv?start_date='
start_date = '2017-01-03' # Earliest start date is 2017-01-03
url = url1 + url2 + start_date # Complete url to download csv file
# Read in rates for different currencies for a range of dates
rates = pd.read_csv(url, skiprows=39, index_col='date')
rates.index = pd.to_datetime(rates.index) # assures data type to be a datetime
#print("The pandas dataframe with the rates ")
#print(rates)
# Get number of days & number of currences from shape of rates - returns
#a tuple in the
#format (rows, columns)
days, currencies = rates.shape
# Read in the currency codes & strip off extraneous part. Uses url string, skips the first
#10 rows and returns to the data frame columns of index 0 and 2. It will
#read n rows according
# to the variable currencies. This was returned in line 19 from a tuple
#produced by .shape
codes = pd.read_csv(url, skiprows=10, usecols=[0,2],
nrows=currencies)
#Print out the dataframe read from the web
#print("Dataframe with the codes")
#print(codes)
#A for loop to goe through the codes dataframe. For each ith row and for
#the index 1 column,
# the for loop will split the string with a string 'to Canadian'
for i in range(currencies):
codes.iloc[i, 1] = codes.iloc[i, 1].split(' to Canadian')[0]
# Report exchange rates for the most most recent date available
date = rates.index[-1] # most recent date available
#print('\nCurrency values on {0}'.format(date))
#Using a for loop and zip, the values in the code and rate objects are grouped together
# and then printed to the screen with a new format
#for (code, rate) in zip(codes.iloc[:, 1], rates.loc[date]):
#print("{0:20s} Can$ {1:8.6g}".format(code, rate))
#Create dataframe with columns of date and raters
#Assign values into a dataframe/slice rates dataframe
FXAUDCAD_daily = pd.DataFrame(index=['date'], columns={'date', 'rates'})
FXAUDCAD_daily = rates['FXAUDCAD']
print(FXAUDCAD_daily)
FXAUDCAD_daily['rolling mean'] = FXAUDCAD_daily['rates'].rolling(1).mean()
|
67,401,731
|
Pandas read_csv from web URL
|
<p>I am trying to read a csv-file from given URL using Python 3.</p>
<pre><code>import pandas as pd
url = 'https://www.hkex.com.hk/eng/dwrc/search/dwFullList.csv' # error
url_2 = 'https://www.cboe.com/us/options/symboldir/equity_index_options/?download=csv'
df = pd.read_csv(url) # error
df = pd.read_csv(url_2) # can download csv from url
</code></pre>
<p>When I run <code>df = pd.read_csv(url)</code> the system returns:</p>
<pre><code>File "pandas\_libs\parsers.pyx", line 537, in pandas._libs.parsers.TextReader.__cinit__
File "pandas\_libs\parsers.pyx", line 740, in pandas._libs.parsers.TextReader._get_header
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
</code></pre>
<p>However, when I run <code>df = pd.read_csv(url_2)</code> the system can return the dataframe.
How can I solve this problem? I am using Python 3.7.</p>
| 67,402,681
| 2021-05-05T12:51:58.693000
| 2
| null | 0
| 1,936
|
python|pandas
|
<p>First of all, let's understand the <code>error</code>. The <code>error</code> you are facing is stated below:</p>
<p><code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte</code></p>
<ul>
<li>Notice that the <code>error</code> type is a <code>UnicodeDecodeError</code> triggered by the byte <code>0xff</code>.</li>
</ul>
<p><strong>Why this <code>error</code> occurred and how to <code>resolve</code> it?</strong></p>
<p>By default <code>pd.read_csv()</code> uses <code>encoding = 'utf-8'</code> to decode the data, but the file starts with the byte <code>0xff</code>, which is not a valid UTF-8 start byte. A leading <code>0xff</code> is the first byte of a UTF-16 byte order mark (<code>0xff 0xfe</code> for little-endian), which strongly suggests the file is UTF-16 encoded rather than UTF-8.</p>
<ul>
<li><strong>Solution</strong>: use <code>encoding = 'utf-16'</code> when reading the data.</li>
</ul>
<hr/>
<p>After fixing the encoding, you may face <code>Error tokenizing data. C error: Expected 1 fields in line 3, saw 3</code>, which occurs because the file is tab-separated and carries extra header and footer lines. The solution for your query is given below:</p>
<pre><code># Import all the important Libraries
import pandas as pd
# Fetch 'CSV' Data Using 'URL' and store it in 'df'
url = 'https://www.hkex.com.hk/eng/dwrc/search/dwFullList.csv'
df = pd.read_csv(url, encoding = 'utf-16', sep = '\t', error_bad_lines = False, skiprows = 1, skipfooter = 3, engine = 'python')
# Print a few records of df
df.head()
</code></pre>
<p><strong>Output of Above Cell:-</strong>
<a href="https://i.stack.imgur.com/BaK3s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BaK3s.png" alt="Output of Above Code" /></a></p>
<blockquote>
<p>To Learn more about <code>pd.read_csv()</code>:- <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html" rel="nofollow noreferrer">Click Here !!!</a> <br/> To Learn more about <code>Encoding List</code>:- <a href="https://docs.python.org/3/library/codecs.html#standard-encodings" rel="nofollow noreferrer">Click Here !!!</a></p>
</blockquote>
<p>As you can see we have achieved our desired <code>Output</code>. Hope this Solution helps you.</p>
| 2021-05-05T13:51:54.533000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.read_html.html
|
First of all, let's understand the error. The error you are facing is stated below:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
Notice that the error type is a UnicodeDecodeError triggered by the byte 0xff.
Why this error occurred and how to resolve it?
By default pd.read_csv() uses encoding = 'utf-8' to decode the data, but the file starts with the byte 0xff, which is not a valid UTF-8 start byte. A leading 0xff is the first byte of a UTF-16 byte order mark (0xff 0xfe for little-endian), which strongly suggests the file is UTF-16 encoded rather than UTF-8.
Solution: use encoding = 'utf-16' when reading the data.
After fixing the encoding, you may face Error tokenizing data. C error: Expected 1 fields in line 3, saw 3, which occurs because the file is tab-separated and carries extra header and footer lines. The solution for your query is given below:
# Import all the important Libraries
import pandas as pd
# Fetch 'CSV' Data Using 'URL' and store it in 'df'
url = 'https://www.hkex.com.hk/eng/dwrc/search/dwFullList.csv'
df = pd.read_csv(url, encoding = 'utf-16', sep = '\t', error_bad_lines = False, skiprows = 1, skipfooter = 3, engine = 'python')
# Print a few records of df
df.head()
Output of Above Cell:-
To Learn more about pd.read_csv():- Click Here !!! To Learn more about Encoding List:- Click Here !!!
As you can see we have achieved our desired Output. Hope this Solution helps you.
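As a side note, a quick (hypothetical) check of the encoding is to peek at the first bytes of the response; a leading b'\xff\xfe' is the UTF-16 little-endian byte order mark:
import urllib.request

url = 'https://www.hkex.com.hk/eng/dwrc/search/dwFullList.csv'
raw = urllib.request.urlopen(url).read(4)   # requires network access
print(raw)   # b'\xff\xfe' at the start indicates UTF-16 LE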
| 0
| 1,453
|
Pandas read_csv from web URL
I am trying to read a csv-file from given URL using Python 3.
import pandas as pd
url = 'https://www.hkex.com.hk/eng/dwrc/search/dwFullList.csv' # error
url_2 = 'https://www.cboe.com/us/options/symboldir/equity_index_options/?download=csv'
df = pd.read_csv(url) # error
df = pd.read_csv(url_2) # can download csv from url
When I run df = pd.read_csv(url) the system returns:
File "pandas\_libs\parsers.pyx", line 537, in pandas._libs.parsers.TextReader.__cinit__
File "pandas\_libs\parsers.pyx", line 740, in pandas._libs.parsers.TextReader._get_header
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
However, when I run df = pd.read_csv(url_2) the system can return the dataframe.
How can I solve this problem? I am using Python 3.7.
|
68,880,129
|
How to display a dataframe multiple times?
|
<p>Is there a way to display a dataframe multiple times?
Basically, I would like to see the df X times in a row.
I've tried via a for loop but didn't manage to do so.</p>
<pre class="lang-py prettyprint-override"><code>data = {'Counter':list(range(1, 10)),
'Country':['USA','UK','UK','USA','UK','USA','UK','USA','UK'],
'A':[0,0,1,1,1,1,1,1,1],
'B':[0,0,0,0,1,1,1,1,1],
'C':[0,0,0,0,0,0,0,1,1],
'D':[0,0,0,0,0,0,0,0,1],
'AA':[0,0,0,0,0,0,0,0,0],
'BB':[0,0,0,0,0,0,0,0,0],
'CC':[0,0,0,0,0,0,0,0,0],
'DD':[0,0,0,0,0,0,0,0,0]
}
df=pd.DataFrame(data)
df
for x in range(3):
df
</code></pre>
<p>I've tried to use print but I don't see the results as a dataframe.</p>
| 68,880,648
| 2021-08-22T09:43:01.867000
| 1
| null | 0
| 160
|
python|pandas
|
<p>This is a peculiar request, details on what you really want to achieve would be appreciated.</p>
<p>Nevertheless, you can use the following loop (example for 3 times):</p>
<pre><code>for i in range(3):
print(df)
</code></pre>
<p>or concatenate your data n times:</p>
<pre><code>print(pd.concat([df]*3))
</code></pre>
<p>To print in jupyter:</p>
<pre><code>from IPython.display import display
for i in range(3):
display(df.style.background_gradient(axis=None))
</code></pre>
| 2021-08-22T10:57:32.123000
| 0
|
https://pandas.pydata.org/docs/getting_started/intro_tutorials/09_timeseries.html
|
How to handle time series data with ease?#
In [1]: import pandas as pd
In [2]: import matplotlib.pyplot as plt
Data used for this tutorial:
Air quality data
For this tutorial, air quality data about \(NO_2\) and Particulate
This is a peculiar request, details on what you really want to achieve would be appreciated.
Nevertheless, you can use the following loop (example for 3 times):
for i in range(3):
print(df)
or concatenate your data n times:
print(pd.concat([df]*3))
To print in jupyter:
from IPython.display import display
for i in range(3):
display(df.style.background_gradient(axis=None))
matter less than 2.5 micrometers is used, made available by
OpenAQ and downloaded using the
py-openaq package.
The air_quality_no2_long.csv data set provides \(NO_2\) values
for the measurement stations FR04014, BETR801 and London
Westminster in respectively Paris, Antwerp and London.
To raw data
In [3]: air_quality = pd.read_csv("data/air_quality_no2_long.csv")
In [4]: air_quality = air_quality.rename(columns={"date.utc": "datetime"})
In [5]: air_quality.head()
Out[5]:
city country datetime location parameter value unit
0 Paris FR 2019-06-21 00:00:00+00:00 FR04014 no2 20.0 µg/m³
1 Paris FR 2019-06-20 23:00:00+00:00 FR04014 no2 21.8 µg/m³
2 Paris FR 2019-06-20 22:00:00+00:00 FR04014 no2 26.5 µg/m³
3 Paris FR 2019-06-20 21:00:00+00:00 FR04014 no2 24.9 µg/m³
4 Paris FR 2019-06-20 20:00:00+00:00 FR04014 no2 21.4 µg/m³
In [6]: air_quality.city.unique()
Out[6]: array(['Paris', 'Antwerpen', 'London'], dtype=object)
How to handle time series data with ease?#
Using pandas datetime properties#
I want to work with the dates in the column datetime as datetime objects instead of plain text
In [7]: air_quality["datetime"] = pd.to_datetime(air_quality["datetime"])
In [8]: air_quality["datetime"]
Out[8]:
0 2019-06-21 00:00:00+00:00
1 2019-06-20 23:00:00+00:00
2 2019-06-20 22:00:00+00:00
3 2019-06-20 21:00:00+00:00
4 2019-06-20 20:00:00+00:00
...
2063 2019-05-07 06:00:00+00:00
2064 2019-05-07 04:00:00+00:00
2065 2019-05-07 03:00:00+00:00
2066 2019-05-07 02:00:00+00:00
2067 2019-05-07 01:00:00+00:00
Name: datetime, Length: 2068, dtype: datetime64[ns, UTC]
Initially, the values in datetime are character strings and do not
provide any datetime operations (e.g. extracting the year or day of the
week,…). By applying the to_datetime function, pandas interprets the
strings and converts these to datetime (i.e. datetime64[ns, UTC])
objects. In pandas we call these datetime objects, which are similar to
datetime.datetime from the standard library, pandas.Timestamp.
Note
As many data sets do contain datetime information in one of
the columns, pandas input functions like pandas.read_csv() and pandas.read_json()
can do the transformation to dates when reading the data using the
parse_dates parameter with a list of the columns to read as
Timestamp:
pd.read_csv("../data/air_quality_no2_long.csv", parse_dates=["datetime"])
Why are these pandas.Timestamp objects useful? Let’s illustrate the added
value with some example cases.
What is the start and end date of the time series data set we are working
with?
In [9]: air_quality["datetime"].min(), air_quality["datetime"].max()
Out[9]:
(Timestamp('2019-05-07 01:00:00+0000', tz='UTC'),
Timestamp('2019-06-21 00:00:00+0000', tz='UTC'))
Using pandas.Timestamp for datetimes enables us to calculate with date
information and make them comparable. Hence, we can use this to get the
length of our time series:
In [10]: air_quality["datetime"].max() - air_quality["datetime"].min()
Out[10]: Timedelta('44 days 23:00:00')
The result is a pandas.Timedelta object, similar to datetime.timedelta
from the standard Python library and defining a time duration.
To user guideThe various time concepts supported by pandas are explained in the user guide section on time related concepts.
I want to add a new column to the DataFrame containing only the month of the measurement
In [11]: air_quality["month"] = air_quality["datetime"].dt.month
In [12]: air_quality.head()
Out[12]:
city country datetime ... value unit month
0 Paris FR 2019-06-21 00:00:00+00:00 ... 20.0 µg/m³ 6
1 Paris FR 2019-06-20 23:00:00+00:00 ... 21.8 µg/m³ 6
2 Paris FR 2019-06-20 22:00:00+00:00 ... 26.5 µg/m³ 6
3 Paris FR 2019-06-20 21:00:00+00:00 ... 24.9 µg/m³ 6
4 Paris FR 2019-06-20 20:00:00+00:00 ... 21.4 µg/m³ 6
[5 rows x 8 columns]
By using Timestamp objects for dates, a lot of time-related
properties are provided by pandas. For example the month, but also
year, weekofyear, quarter,… All of these properties are
accessible by the dt accessor.
To user guideAn overview of the existing date properties is given in the
time and date components overview table. More details about the dt accessor
to return datetime like properties are explained in a dedicated section on the dt accessor.
What is the average \(NO_2\) concentration for each day of the week for each of the measurement locations?
In [13]: air_quality.groupby(
....: [air_quality["datetime"].dt.weekday, "location"])["value"].mean()
....:
Out[13]:
datetime location
0 BETR801 27.875000
FR04014 24.856250
London Westminster 23.969697
1 BETR801 22.214286
FR04014 30.999359
...
5 FR04014 25.266154
London Westminster 24.977612
6 BETR801 21.896552
FR04014 23.274306
London Westminster 24.859155
Name: value, Length: 21, dtype: float64
Remember the split-apply-combine pattern provided by groupby from the
tutorial on statistics calculation?
Here, we want to calculate a given statistic (e.g. mean \(NO_2\))
for each weekday and for each measurement location. To group on
weekdays, we use the datetime property weekday (with Monday=0 and
Sunday=6) of pandas Timestamp, which is also accessible by the
dt accessor. The grouping on both locations and weekdays can be done
to split the calculation of the mean on each of these combinations.
Danger
As we are working with a very short time series in these
examples, the analysis does not provide a long-term representative
result!
Plot the typical \(NO_2\) pattern during the day of our time series of all stations together. In other words, what is the average value for each hour of the day?
In [14]: fig, axs = plt.subplots(figsize=(12, 4))
In [15]: air_quality.groupby(air_quality["datetime"].dt.hour)["value"].mean().plot(
....: kind='bar', rot=0, ax=axs
....: )
....:
Out[15]: <AxesSubplot: xlabel='datetime'>
In [16]: plt.xlabel("Hour of the day"); # custom x label using Matplotlib
In [17]: plt.ylabel("$NO_2 (µg/m^3)$");
Similar to the previous case, we want to calculate a given statistic
(e.g. mean \(NO_2\)) for each hour of the day and we can use the
split-apply-combine approach again. For this case, we use the datetime property hour
of pandas Timestamp, which is also accessible by the dt accessor.
Datetime as index#
In the tutorial on reshaping,
pivot() was introduced to reshape the data table with each of the
measurements locations as a separate column:
In [18]: no_2 = air_quality.pivot(index="datetime", columns="location", values="value")
In [19]: no_2.head()
Out[19]:
location BETR801 FR04014 London Westminster
datetime
2019-05-07 01:00:00+00:00 50.5 25.0 23.0
2019-05-07 02:00:00+00:00 45.0 27.7 19.0
2019-05-07 03:00:00+00:00 NaN 50.4 19.0
2019-05-07 04:00:00+00:00 NaN 61.9 16.0
2019-05-07 05:00:00+00:00 NaN 72.4 NaN
Note
By pivoting the data, the datetime information became the
index of the table. In general, setting a column as an index can be
achieved by the set_index function.
Working with a datetime index (i.e. DatetimeIndex) provides powerful
functionalities. For example, we do not need the dt accessor to get
the time series properties, but have these properties available on the
index directly:
In [20]: no_2.index.year, no_2.index.weekday
Out[20]:
(Int64Index([2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019,
...
2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019],
dtype='int64', name='datetime', length=1033),
Int64Index([1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
...
3, 3, 3, 3, 3, 3, 3, 3, 3, 4],
dtype='int64', name='datetime', length=1033))
Some other advantages are the convenient subsetting of time period or
the adapted time scale on plots. Let’s apply this on our data.
Create a plot of the \(NO_2\) values in the different stations from the 20th of May till the end of 21st of May
In [21]: no_2["2019-05-20":"2019-05-21"].plot();
By providing a string that parses to a datetime, a specific subset of the data can be selected on a DatetimeIndex.
To user guideMore information on the DatetimeIndex and the slicing by using strings is provided in the section on time series indexing.
Resample a time series to another frequency#
Aggregate the current hourly time series values to the monthly maximum value in each of the stations.
In [22]: monthly_max = no_2.resample("M").max()
In [23]: monthly_max
Out[23]:
location BETR801 FR04014 London Westminster
datetime
2019-05-31 00:00:00+00:00 74.5 97.0 97.0
2019-06-30 00:00:00+00:00 52.5 84.7 52.0
A very powerful method on time series data with a datetime index, is the
ability to resample() time series to another frequency (e.g.,
converting secondly data into 5-minutely data).
The resample() method is similar to a groupby operation:
it provides a time-based grouping, by using a string (e.g. M,
5H,…) that defines the target frequency
it requires an aggregation function such as mean, max,…
To user guideAn overview of the aliases used to define time series frequencies is given in the offset aliases overview table.
When defined, the frequency of the time series is provided by the
freq attribute:
In [24]: monthly_max.index.freq
Out[24]: <MonthEnd>
Make a plot of the daily mean \(NO_2\) value in each of the stations.
In [25]: no_2.resample("D").mean().plot(style="-o", figsize=(10, 5));
To user guideMore details on the power of time series resampling is provided in the user guide section on resampling.
REMEMBER
Valid date strings can be converted to datetime objects using
to_datetime function or as part of read functions.
Datetime objects in pandas support calculations, logical operations
and convenient date-related properties using the dt accessor.
A DatetimeIndex contains these date-related properties and
supports convenient slicing.
Resample is a powerful method to change the frequency of a time
series.
To user guideA full overview on time series is given on the pages on time series and date functionality.
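A compact sketch tying these points together (the timestamps and values are made up):
import pandas as pd

ts = pd.DataFrame(
    {"value": [1.0, 2.0, 3.0, 4.0]},
    index=pd.to_datetime(
        ["2019-05-07 01:00", "2019-05-07 02:00", "2019-05-08 01:00", "2019-05-08 02:00"]
    ),
)
ts["2019-05-07":"2019-05-07"]   # string-based slicing on the DatetimeIndex
ts.resample("D").mean()         # daily mean via resampling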
| 239
| 623
|
How to display a dataframe multiple times?
Is there a way to display a dataframe multiple times?
Basically, I would like to see the df X times in a row.
I've tried via a for loop but didn't manage to do so.
data = {'Counter':list(range(1, 10)),
'Country':['USA','UK','UK','USA','UK','USA','UK','USA','UK'],
'A':[0,0,1,1,1,1,1,1,1],
'B':[0,0,0,0,1,1,1,1,1],
'C':[0,0,0,0,0,0,0,1,1],
'D':[0,0,0,0,0,0,0,0,1],
'AA':[0,0,0,0,0,0,0,0,0],
'BB':[0,0,0,0,0,0,0,0,0],
'CC':[0,0,0,0,0,0,0,0,0],
'DD':[0,0,0,0,0,0,0,0,0]
}
df=pd.DataFrame(data)
df
for x in range(3):
df
I've tried to use print but I don't see the results as a dataframe.
|
63,023,973
|
python/ pandas how to convert a list to a single cell and store in excel or in cvs format
|
<p>[I am expecting the output as shown on the left side; I am getting the output as shown on the right side]</p>
<p><a href="https://i.stack.imgur.com/hMUKl.png" rel="nofollow noreferrer">1</a>I have a list:</p>
<pre><code>listA = ['Vlan VN-Segment', '==== ==========', '800 30800', '801 30801', '3951 33951']
</code></pre>
<p>My output should be</p>
<pre><code>vlan vn-segment
==== ==========
800 30800
801 30801
3951 33951
</code></pre>
<p>But all the 4 rows should be in a single CELL in Excel, as above.</p>
<p>I tried the following, but the output will be in 4 different rows in the Excel/csv</p>
<pre><code>my_input_file = open('n9k-1.txt')
my_string = my_input_file.read().strip()
my_list = json.loads(my_string)
#print(type(my_list))
x = (my_list[2])
print(x)
t = StringIO('\n'.join(map(str, x)))
df = pd.read_csv(t)
df2 = df.to_csv('/Users/masam/Python-Scripts/new.csv', index=False)
</code></pre>
| 63,025,303
| 2020-07-21T22:22:25.247000
| 2
| null | -1
| 432
|
python|pandas
|
<pre><code>from xlsxwriter.workbook import Workbook

# Build two newline-separated strings: first and last token of every row
listA1 = ''
listA2 = ''
for i in listA:
    itm = i.split(' ')
    listA1 += f'\n{itm[0]}'
    listA2 += f'\n{itm[len(itm)-1]}'

workbook = Workbook('data.xlsx')
worksheet = workbook.add_worksheet()
worksheet.set_column('A:A', 20)
worksheet.set_column('B:B', 20)
# Add a cell format with text wrap on.
cell_format = workbook.add_format({'text_wrap': True})
# Write a wrapped string to a cell.
worksheet.write('A1', listA1, cell_format)
worksheet.write('B1', listA2, cell_format)
workbook.close()

# Reference: https://stackoverflow.com/questions/43537598/write-strings-text-and-pandas-dataframe-to-excel
</code></pre>
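<p>A pandas-only sketch of the same idea, assuming an Excel writer such as openpyxl or XlsxWriter is installed (Excel may still need "wrap text" enabled on the cell to show the line breaks):</p>
<pre><code>import pandas as pd

listA = ['Vlan VN-Segment', '==== ==========', '800 30800', '801 30801', '3951 33951']
# join every row into one newline-separated string so it lands in a single cell
df = pd.DataFrame({'output': ['\n'.join(listA)]})
df.to_excel('data.xlsx', index=False)
</code></pre>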
| 2020-07-22T01:14:45.020000
| 0
|
https://pandas.pydata.org/docs/user_guide/io.html
|
IO tools (text, CSV, HDF5, …)#
The pandas I/O API is a set of top level reader functions accessed like
pandas.read_csv() that generally return a pandas object. The corresponding
writer functions are object methods that are accessed like
DataFrame.to_csv(). Below is a table containing available readers and
from xlsxwriter.workbook import Workbook

# Build two newline-separated strings: first and last token of every row
listA1 = ''
listA2 = ''
for i in listA:
    itm = i.split(' ')
    listA1 += f'\n{itm[0]}'
    listA2 += f'\n{itm[len(itm)-1]}'

workbook = Workbook('data.xlsx')
worksheet = workbook.add_worksheet()
worksheet.set_column('A:A', 20)
worksheet.set_column('B:B', 20)
# Add a cell format with text wrap on.
cell_format = workbook.add_format({'text_wrap': True})
# Write a wrapped string to a cell.
worksheet.write('A1', listA1, cell_format)
worksheet.write('B1', listA2, cell_format)
workbook.close()

# Reference: https://stackoverflow.com/questions/43537598/write-strings-text-and-pandas-dataframe-to-excel
writers.
Format Type   Data Description        Reader          Writer
text          CSV                     read_csv        to_csv
text          Fixed-Width Text File   read_fwf
text          JSON                    read_json       to_json
text          HTML                    read_html       to_html
text          LaTeX                                   Styler.to_latex
text          XML                     read_xml        to_xml
text          Local clipboard         read_clipboard  to_clipboard
binary        MS Excel                read_excel      to_excel
binary        OpenDocument            read_excel
binary        HDF5 Format             read_hdf        to_hdf
binary        Feather Format          read_feather    to_feather
binary        Parquet Format          read_parquet    to_parquet
binary        ORC Format              read_orc        to_orc
binary        Stata                   read_stata      to_stata
binary        SAS                     read_sas
binary        SPSS                    read_spss
binary        Python Pickle Format    read_pickle     to_pickle
SQL           SQL                     read_sql        to_sql
SQL           Google BigQuery         read_gbq        to_gbq
Here is an informal performance comparison for some of these IO methods.
Note
For examples that use the StringIO class, make sure you import it
with from io import StringIO for Python 3.
CSV & text files#
The workhorse function for reading text files (a.k.a. flat files) is
read_csv(). See the cookbook for some advanced strategies.
Parsing options#
read_csv() accepts the following common arguments:
Basic#
filepath_or_buffer : various. Either a path to a file (a str, pathlib.Path,
or py._path.local.LocalPath), URL (including http, ftp, and S3
locations), or any object with a read() method (such as an open file or
StringIO).
sep : str, defaults to ',' for read_csv(), '\t' for read_table(). Delimiter to use. If sep is None, the C engine cannot automatically detect
the separator, but the Python parsing engine can, meaning the latter will be
used and automatically detect the separator by Python’s builtin sniffer tool,
csv.Sniffer. In addition, separators longer than 1 character and
different from '\s+' will be interpreted as regular expressions and
will also force the use of the Python parsing engine. Note that regex
delimiters are prone to ignoring quoted data. Regex example: '\\r\\t'.
delimiter : str, default None. Alternative argument name for sep.
delim_whitespace : boolean, default False. Specifies whether or not whitespace (e.g. ' ' or '\t')
will be used as the delimiter. Equivalent to setting sep='\s+'.
If this option is set to True, nothing should be passed in for the
delimiter parameter.
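A small illustration of whitespace-delimited input:
from io import StringIO
import pandas as pd

data = "a b  c\n1 2  3"
pd.read_csv(StringIO(data), delim_whitespace=True)   # equivalent to sep='\s+'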
Column and index locations and names#
header : int or list of ints, default 'infer'. Row number(s) to use as the column names, and the start of the
data. Default behavior is to infer the column names: if no names are
passed the behavior is identical to header=0 and column names
are inferred from the first line of the file, if column names are
passed explicitly then the behavior is identical to
header=None. Explicitly pass header=0 to be able to replace
existing names.
The header can be a list of ints that specify row locations
for a MultiIndex on the columns e.g. [0,1,3]. Intervening rows
that are not specified will be skipped (e.g. 2 in this example is
skipped). Note that this parameter ignores commented lines and empty
lines if skip_blank_lines=True, so header=0 denotes the first
line of data rather than the first line of the file.
names : array-like, default None. List of column names to use. If file contains no header row, then you should
explicitly pass header=None. Duplicates in this list are not allowed.
index_col : int, str, sequence of int / str, or False, optional, default None. Column(s) to use as the row labels of the DataFrame, either given as
string name or column index. If a sequence of int / str is given, a
MultiIndex is used.
Note
index_col=False can be used to force pandas to not use the first
column as the index, e.g. when you have a malformed file with delimiters at
the end of each line.
The default value of None instructs pandas to guess. If the number of
fields in the column header row is equal to the number of fields in the body
of the data file, then a default index is used. If it is larger, then
the first columns are used as index so that the remaining number of fields in
the body are equal to the number of fields in the header.
The first row after the header is used to determine the number of columns,
which will go into the index. If the subsequent rows contain less columns
than the first row, they are filled with NaN.
This can be avoided through usecols. This ensures that the columns are
taken as is and the trailing data are ignored.
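For instance, with a trailing delimiter at the end of each data line (a small, made-up malformed file):
from io import StringIO
import pandas as pd

data = "a,b,c\n4,apple,bat,\n8,orange,cow,"
pd.read_csv(StringIO(data))                   # the first column becomes the index
pd.read_csv(StringIO(data), index_col=False)  # keep a default integer index instead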
usecols : list-like or callable, default None. Return a subset of the columns. If list-like, all elements must either
be positional (i.e. integer indices into the document columns) or strings
that correspond to column names provided either by the user in names or
inferred from the document header row(s). If names are given, the document
header row(s) are not taken into account. For example, a valid list-like
usecols parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].
Element order is ignored, so usecols=[0, 1] is the same as [1, 0]. To
instantiate a DataFrame from data with element order preserved use
pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']] for columns
in ['foo', 'bar'] order or
pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']] for
['bar', 'foo'] order.
If callable, the callable function will be evaluated against the column names,
returning names where the callable function evaluates to True:
In [1]: import pandas as pd
In [2]: from io import StringIO
In [3]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
In [4]: pd.read_csv(StringIO(data))
Out[4]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [5]: pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ["COL1", "COL3"])
Out[5]:
col1 col3
0 a 1
1 a 2
2 c 3
Using this parameter results in much faster parsing time and lower memory usage
when using the c engine. The Python engine loads the data first before deciding
which columns to drop.
squeeze : boolean, default False. If the parsed data only contains one column then return a Series.
Deprecated since version 1.4.0: Append .squeeze("columns") to the call to read_csv to squeeze
the data.
prefix : str, default None. Prefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, …
Deprecated since version 1.4.0: Use a list comprehension on the DataFrame’s columns after calling read_csv.
In [6]: data = "col1,col2,col3\na,b,1"
In [7]: df = pd.read_csv(StringIO(data))
In [8]: df.columns = [f"pre_{col}" for col in df.columns]
In [9]: df
Out[9]:
pre_col1 pre_col2 pre_col3
0 a b 1
mangle_dupe_cols : boolean, default True. Duplicate columns will be specified as ‘X’, ‘X.1’…’X.N’, rather than ‘X’…’X’.
Passing in False will cause data to be overwritten if there are duplicate
names in the columns.
Deprecated since version 1.5.0: The argument was never implemented, and a new argument where the
renaming pattern can be specified will be added instead.
General parsing configuration#
dtype : type name or dict of column -> type, default None. Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32, 'c': 'Int64'}
Use str or object together with suitable na_values settings to preserve
and not interpret dtype. If converters are specified, they will be applied INSTEAD
of dtype conversion.
New in version 1.5.0: Support for defaultdict was added. Specify a defaultdict as input where
the default determines the dtype of the columns which are not explicitly
listed.
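For example, a minimal sketch of the defaultdict form (the data and column names here are made up for illustration):
from collections import defaultdict
from io import StringIO
import pandas as pd

# columns not listed explicitly ("b" and "c") fall back to the default, here "object"
dtype = defaultdict(lambda: "object", a="Int64")
pd.read_csv(StringIO("a,b,c\n1,x,2.5\n3,y,4.5"), dtype=dtype).dtypes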
engine : {'c', 'python', 'pyarrow'}
Parser engine to use. The C and pyarrow engines are faster, while the python engine
is currently more feature-complete. Multithreading is currently only supported by
the pyarrow engine.
New in version 1.4.0: The “pyarrow” engine was added as an experimental engine, and some features
are unsupported, or may not work correctly, with this engine.
converters : dict, default None
Dict of functions for converting values in certain columns. Keys can either be
integers or column labels.
true_values : list, default None
Values to consider as True.
false_values : list, default None
Values to consider as False.
skipinitialspace : boolean, default False
Skip spaces after delimiter.
skiprows : list-like or integer, default None
Line numbers to skip (0-indexed) or number of lines to skip (int) at the start
of the file.
If callable, the callable function will be evaluated against the row
indices, returning True if the row should be skipped and False otherwise:
In [10]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
In [11]: pd.read_csv(StringIO(data))
Out[11]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [12]: pd.read_csv(StringIO(data), skiprows=lambda x: x % 2 != 0)
Out[12]:
col1 col2 col3
0 a b 2
skipfooter : int, default 0
Number of lines at bottom of file to skip (unsupported with engine=’c’).
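For instance, a small sketch (data made up) showing why the python engine is required here:
import pandas as pd
from io import StringIO

data = "a,b\n1,2\n3,4\nTOTAL: ignore this trailer"
# skipfooter drops the trailing summary line; it needs engine="python"
pd.read_csv(StringIO(data), skipfooter=1, engine="python")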
nrows : int, default None
Number of rows of file to read. Useful for reading pieces of large files.
low_memory : boolean, default True
Internally process the file in chunks, resulting in lower memory use
while parsing, but possibly mixed type inference. To ensure no mixed
types either set False, or specify the type with the dtype parameter.
Note that the entire file is read into a single DataFrame regardless,
use the chunksize or iterator parameter to return the data in chunks.
(Only valid with C parser)
memory_map : boolean, default False
If a filepath is provided for filepath_or_buffer, map the file object
directly onto memory and access the data directly from there. Using this
option can improve performance because there is no longer any I/O overhead.
NA and missing data handling#
na_values : scalar, str, list-like, or dict, default None
Additional strings to recognize as NA/NaN. If dict passed, specific per-column
NA values. See na values const below
for a list of the values interpreted as NaN by default.
keep_default_na : boolean, default True
Whether or not to include the default NaN values when parsing the data.
Depending on whether na_values is passed in, the behavior is as follows:
If keep_default_na is True, and na_values are specified, na_values
is appended to the default NaN values used for parsing.
If keep_default_na is True, and na_values are not specified, only
the default NaN values are used for parsing.
If keep_default_na is False, and na_values are specified, only
the NaN values specified na_values are used for parsing.
If keep_default_na is False, and na_values are not specified, no
strings will be parsed as NaN.
Note that if na_filter is passed in as False, the keep_default_na and
na_values parameters will be ignored.
na_filter : boolean, default True
Detect missing value markers (empty strings and the value of na_values). In
data without any NAs, passing na_filter=False can improve the performance
of reading a large file.
verbose : boolean, default False
Indicate number of NA values placed in non-numeric columns.
skip_blank_lines : boolean, default True
If True, skip over blank lines rather than interpreting as NaN values.
Datetime handling#
parse_dates : boolean or list of ints or names or list of lists or dict, default False.
If True -> try parsing the index.
If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date
column.
If [[1, 3]] -> combine columns 1 and 3 and parse as a single date
column.
If {'foo': [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’.
Note
A fast-path exists for iso8601-formatted dates.
infer_datetime_format : boolean, default False
If True and parse_dates is enabled for a column, attempt to infer the
datetime format to speed up the processing.
keep_date_col : boolean, default False
If True and parse_dates specifies combining multiple columns then keep the
original columns.
date_parser : function, default None
Function to use for converting a sequence of string columns to an array of
datetime instances. The default uses dateutil.parser.parser to do the
conversion. pandas will try to call date_parser in three different ways,
advancing to the next if an exception occurs: 1) Pass one or more arrays (as
defined by parse_dates) as arguments; 2) concatenate (row-wise) the string
values from the columns defined by parse_dates into a single array and pass
that; and 3) call date_parser once for each row using one or more strings
(corresponding to the columns defined by parse_dates) as arguments.
dayfirst : boolean, default False
DD/MM format dates, international and European format.
cache_dates : boolean, default True
If True, use a cache of unique, converted dates to apply the datetime
conversion. May produce significant speed-up when parsing duplicate
date strings, especially ones with timezone offsets.
New in version 0.25.0.
Iteration#
iterator : boolean, default False
Return TextFileReader object for iteration or getting chunks with
get_chunk().
chunksize : int, default None
Return TextFileReader object for iteration. See iterating and chunking below.
Quoting, compression, and file format#
compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', 'zstd', None, dict}, default 'infer'
For on-the-fly decompression of on-disk data. If ‘infer’, then use gzip,
bz2, zip, xz, or zstandard if filepath_or_buffer is path-like ending in ‘.gz’, ‘.bz2’,
‘.zip’, ‘.xz’, ‘.zst’, respectively, and no decompression otherwise. If using ‘zip’,
the ZIP file must contain only one data file to be read in.
Set to None for no decompression. Can also be a dict with key 'method'
set to one of {'zip', 'gzip', 'bz2', 'zstd'} and other key-value pairs are
forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, or zstandard.ZstdDecompressor.
As an example, the following could be passed for faster compression and to
create a reproducible gzip archive:
compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}.
Changed in version 1.1.0: dict option extended to support gzip and bz2.
Changed in version 1.2.0: Previous versions forwarded dict entries for ‘gzip’ to gzip.open.
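As a short sketch of that round trip (the output file name is hypothetical):
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})
# write a reproducible gzip archive, then read it back; 'infer' picks gzip from the ".gz" suffix
df.to_csv("out.csv.gz", compression={"method": "gzip", "compresslevel": 1, "mtime": 1})
pd.read_csv("out.csv.gz", compression="infer")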
thousands : str, default None
Thousands separator.
decimal : str, default '.'
Character to recognize as decimal point. E.g. use ',' for European data.
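For example, a minimal sketch with European-style numbers (data made up):
import pandas as pd
from io import StringIO

data = "a;b\n1,5;2,7\n3,0;4,2"
# ';' separates fields, ',' marks the decimal point
pd.read_csv(StringIO(data), sep=";", decimal=",")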
float_precision : string, default None
Specifies which converter the C engine should use for floating-point values.
The options are None for the ordinary converter, high for the
high-precision converter, and round_trip for the round-trip converter.
lineterminator : str (length 1), default None
Character to break file into lines. Only valid with C parser.
quotechar : str (length 1)
The character used to denote the start and end of a quoted item. Quoted items
can include the delimiter and it will be ignored.
quoting : int or csv.QUOTE_* instance, default 0
Control field quoting behavior per csv.QUOTE_* constants. Use one of
QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or
QUOTE_NONE (3).
doublequote : boolean, default True
When quotechar is specified and quoting is not QUOTE_NONE,
indicate whether or not to interpret two consecutive quotechar elements
inside a field as a single quotechar element.
escapechar : str (length 1), default None
One-character string used to escape delimiter when quoting is QUOTE_NONE.
comment : str, default None
Indicates remainder of line should not be parsed. If found at the beginning of
a line, the line will be ignored altogether. This parameter must be a single
character. Like empty lines (as long as skip_blank_lines=True), fully
commented lines are ignored by the parameter header but not by skiprows.
For example, if comment='#', parsing ‘#empty\na,b,c\n1,2,3’ with
header=0 will result in ‘a,b,c’ being treated as the header.
encoding : str, default None
Encoding to use for UTF when reading/writing (e.g. 'utf-8'). List of
Python standard encodings.
dialect : str or csv.Dialect instance, default None
If provided, this parameter will override values (default or not) for the
following parameters: delimiter, doublequote, escapechar,
skipinitialspace, quotechar, and quoting. If it is necessary to
override values, a ParserWarning will be issued. See csv.Dialect
documentation for more details.
Error handling#
error_bad_lines : boolean, optional, default None
Lines with too many fields (e.g. a csv line with too many commas) will by
default cause an exception to be raised, and no DataFrame will be
returned. If False, then these “bad lines” will be dropped from the
DataFrame that is returned. See bad lines
below.
Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon
encountering a bad line.
warn_bad_lines : boolean, optional, default None
If error_bad_lines is False, and warn_bad_lines is True, a warning for
each “bad line” will be output.
Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon
encountering a bad line.
on_bad_lines : (‘error’, ‘warn’, ‘skip’), default ‘error’
Specifies what to do upon encountering a bad line (a line with too many fields).
Allowed values are:
‘error’, raise a ParserError when a bad line is encountered.
‘warn’, print a warning when a bad line is encountered and skip that line.
‘skip’, skip bad lines without raising or warning when they are encountered.
New in version 1.3.0.
Specifying column data types#
You can indicate the data type for the whole DataFrame or individual
columns:
In [13]: import numpy as np
In [14]: data = "a,b,c,d\n1,2,3,4\n5,6,7,8\n9,10,11"
In [15]: print(data)
a,b,c,d
1,2,3,4
5,6,7,8
9,10,11
In [16]: df = pd.read_csv(StringIO(data), dtype=object)
In [17]: df
Out[17]:
a b c d
0 1 2 3 4
1 5 6 7 8
2 9 10 11 NaN
In [18]: df["a"][0]
Out[18]: '1'
In [19]: df = pd.read_csv(StringIO(data), dtype={"b": object, "c": np.float64, "d": "Int64"})
In [20]: df.dtypes
Out[20]:
a int64
b object
c float64
d Int64
dtype: object
Fortunately, pandas offers more than one way to ensure that your column(s)
contain only one dtype. If you’re unfamiliar with these concepts, you can
see here to learn more about dtypes, and
here to learn more about object conversion in
pandas.
For instance, you can use the converters argument
of read_csv():
In [21]: data = "col_1\n1\n2\n'A'\n4.22"
In [22]: df = pd.read_csv(StringIO(data), converters={"col_1": str})
In [23]: df
Out[23]:
col_1
0 1
1 2
2 'A'
3 4.22
In [24]: df["col_1"].apply(type).value_counts()
Out[24]:
<class 'str'> 4
Name: col_1, dtype: int64
Or you can use the to_numeric() function to coerce the
dtypes after reading in the data,
In [25]: df2 = pd.read_csv(StringIO(data))
In [26]: df2["col_1"] = pd.to_numeric(df2["col_1"], errors="coerce")
In [27]: df2
Out[27]:
col_1
0 1.00
1 2.00
2 NaN
3 4.22
In [28]: df2["col_1"].apply(type).value_counts()
Out[28]:
<class 'float'> 4
Name: col_1, dtype: int64
which will convert all valid parsing to floats, leaving the invalid parsing
as NaN.
Ultimately, how you deal with reading in columns containing mixed dtypes
depends on your specific needs. In the case above, if you wanted to NaN out
the data anomalies, then to_numeric() is probably your best option.
However, if you want all of the data to be coerced, no matter the type, then
using the converters argument of read_csv() would certainly be
worth trying.
Note
In some cases, reading in abnormal data with columns containing mixed dtypes
will result in an inconsistent dataset. If you rely on pandas to infer the
dtypes of your columns, the parsing engine will go and infer the dtypes for
different chunks of the data, rather than the whole dataset at once. Consequently,
you can end up with column(s) with mixed dtypes. For example,
In [29]: col_1 = list(range(500000)) + ["a", "b"] + list(range(500000))
In [30]: df = pd.DataFrame({"col_1": col_1})
In [31]: df.to_csv("foo.csv")
In [32]: mixed_df = pd.read_csv("foo.csv")
In [33]: mixed_df["col_1"].apply(type).value_counts()
Out[33]:
<class 'int'> 737858
<class 'str'> 262144
Name: col_1, dtype: int64
In [34]: mixed_df["col_1"].dtype
Out[34]: dtype('O')
will result in mixed_df containing an int dtype for certain chunks
of the column, and str for others due to the mixed dtypes from the
data that was read in. It is important to note that the overall column will be
marked with a dtype of object, which is used for columns with mixed dtypes.
Specifying categorical dtype#
Categorical columns can be parsed directly by specifying dtype='category' or
dtype=CategoricalDtype(categories, ordered).
In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
In [36]: pd.read_csv(StringIO(data))
Out[36]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
In [37]: pd.read_csv(StringIO(data)).dtypes
Out[37]:
col1 object
col2 object
col3 int64
dtype: object
In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
Out[38]:
col1 category
col2 category
col3 category
dtype: object
Individual columns can be parsed as a Categorical using a dict
specification:
In [39]: pd.read_csv(StringIO(data), dtype={"col1": "category"}).dtypes
Out[39]:
col1 category
col2 object
col3 int64
dtype: object
Specifying dtype='category' will result in an unordered Categorical
whose categories are the unique values observed in the data. For more
control on the categories and order, create a
CategoricalDtype ahead of time, and pass that for
that column’s dtype.
In [40]: from pandas.api.types import CategoricalDtype
In [41]: dtype = CategoricalDtype(["d", "c", "b", "a"], ordered=True)
In [42]: pd.read_csv(StringIO(data), dtype={"col1": dtype}).dtypes
Out[42]:
col1 category
col2 object
col3 int64
dtype: object
When using dtype=CategoricalDtype, “unexpected” values outside of
dtype.categories are treated as missing values.
In [43]: dtype = CategoricalDtype(["a", "b", "d"]) # No 'c'
In [44]: pd.read_csv(StringIO(data), dtype={"col1": dtype}).col1
Out[44]:
0 a
1 a
2 NaN
Name: col1, dtype: category
Categories (3, object): ['a', 'b', 'd']
This matches the behavior of Categorical.set_categories().
Note
With dtype='category', the resulting categories will always be parsed
as strings (object dtype). If the categories are numeric they can be
converted using the to_numeric() function, or as appropriate, another
converter such as to_datetime().
When dtype is a CategoricalDtype with homogeneous categories (
all numeric, all datetimes, etc.), the conversion is done automatically.
In [45]: df = pd.read_csv(StringIO(data), dtype="category")
In [46]: df.dtypes
Out[46]:
col1 category
col2 category
col3 category
dtype: object
In [47]: df["col3"]
Out[47]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, object): ['1', '2', '3']
In [48]: new_categories = pd.to_numeric(df["col3"].cat.categories)
In [49]: df["col3"] = df["col3"].cat.rename_categories(new_categories)
In [50]: df["col3"]
Out[50]:
0 1
1 2
2 3
Name: col3, dtype: category
Categories (3, int64): [1, 2, 3]
Naming and using columns#
Handling column names#
A file may or may not have a header row. pandas assumes the first row should be
used as the column names:
In [51]: data = "a,b,c\n1,2,3\n4,5,6\n7,8,9"
In [52]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [53]: pd.read_csv(StringIO(data))
Out[53]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
By specifying the names argument in conjunction with header you can
indicate other names to use and whether or not to throw away the header row (if
any):
In [54]: print(data)
a,b,c
1,2,3
4,5,6
7,8,9
In [55]: pd.read_csv(StringIO(data), names=["foo", "bar", "baz"], header=0)
Out[55]:
foo bar baz
0 1 2 3
1 4 5 6
2 7 8 9
In [56]: pd.read_csv(StringIO(data), names=["foo", "bar", "baz"], header=None)
Out[56]:
foo bar baz
0 a b c
1 1 2 3
2 4 5 6
3 7 8 9
If the header is in a row other than the first, pass the row number to
header. This will skip the preceding rows:
In [57]: data = "skip this skip it\na,b,c\n1,2,3\n4,5,6\n7,8,9"
In [58]: pd.read_csv(StringIO(data), header=1)
Out[58]:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
Note
Default behavior is to infer the column names: if no names are
passed the behavior is identical to header=0 and column names
are inferred from the first non-blank line of the file, if column
names are passed explicitly then the behavior is identical to
header=None.
Duplicate names parsing#
Deprecated since version 1.5.0: mangle_dupe_cols was never implemented, and a new argument where the
renaming pattern can be specified will be added instead.
If the file or header contains duplicate names, pandas will by default
distinguish between them so as to prevent overwriting data:
In [59]: data = "a,b,a\n0,1,2\n3,4,5"
In [60]: pd.read_csv(StringIO(data))
Out[60]:
a b a.1
0 0 1 2
1 3 4 5
There is no more duplicate data because mangle_dupe_cols=True by default,
which modifies a series of duplicate columns ‘X’, …, ‘X’ to become
‘X’, ‘X.1’, …, ‘X.N’.
Filtering columns (usecols)#
The usecols argument allows you to select any subset of the columns in a
file, either using the column names, position numbers or a callable:
In [61]: data = "a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz"
In [62]: pd.read_csv(StringIO(data))
Out[62]:
a b c d
0 1 2 3 foo
1 4 5 6 bar
2 7 8 9 baz
In [63]: pd.read_csv(StringIO(data), usecols=["b", "d"])
Out[63]:
b d
0 2 foo
1 5 bar
2 8 baz
In [64]: pd.read_csv(StringIO(data), usecols=[0, 2, 3])
Out[64]:
a c d
0 1 3 foo
1 4 6 bar
2 7 9 baz
In [65]: pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ["A", "C"])
Out[65]:
a c
0 1 3
1 4 6
2 7 9
The usecols argument can also be used to specify which columns not to
use in the final result:
In [66]: pd.read_csv(StringIO(data), usecols=lambda x: x not in ["a", "c"])
Out[66]:
b d
0 2 foo
1 5 bar
2 8 baz
In this case, the callable is specifying that we exclude the “a” and “c”
columns from the output.
Comments and empty lines#
Ignoring line comments and empty lines#
If the comment parameter is specified, then completely commented lines will
be ignored. By default, completely blank lines will be ignored as well.
In [67]: data = "\na,b,c\n \n# commented line\n1,2,3\n\n4,5,6"
In [68]: print(data)
a,b,c
# commented line
1,2,3
4,5,6
In [69]: pd.read_csv(StringIO(data), comment="#")
Out[69]:
a b c
0 1 2 3
1 4 5 6
If skip_blank_lines=False, then read_csv will not ignore blank lines:
In [70]: data = "a,b,c\n\n1,2,3\n\n\n4,5,6"
In [71]: pd.read_csv(StringIO(data), skip_blank_lines=False)
Out[71]:
a b c
0 NaN NaN NaN
1 1.0 2.0 3.0
2 NaN NaN NaN
3 NaN NaN NaN
4 4.0 5.0 6.0
Warning
The presence of ignored lines might create ambiguities involving line numbers;
the parameter header uses row numbers (ignoring commented/empty
lines), while skiprows uses line numbers (including commented/empty lines):
In [72]: data = "#comment\na,b,c\nA,B,C\n1,2,3"
In [73]: pd.read_csv(StringIO(data), comment="#", header=1)
Out[73]:
A B C
0 1 2 3
In [74]: data = "A,B,C\n#comment\na,b,c\n1,2,3"
In [75]: pd.read_csv(StringIO(data), comment="#", skiprows=2)
Out[75]:
a b c
0 1 2 3
If both header and skiprows are specified, header will be
relative to the end of skiprows. For example:
In [76]: data = (
....: "# empty\n"
....: "# second empty line\n"
....: "# third emptyline\n"
....: "X,Y,Z\n"
....: "1,2,3\n"
....: "A,B,C\n"
....: "1,2.,4.\n"
....: "5.,NaN,10.0\n"
....: )
....:
In [77]: print(data)
# empty
# second empty line
# third emptyline
X,Y,Z
1,2,3
A,B,C
1,2.,4.
5.,NaN,10.0
In [78]: pd.read_csv(StringIO(data), comment="#", skiprows=4, header=1)
Out[78]:
A B C
0 1.0 2.0 4.0
1 5.0 NaN 10.0
Comments#
Sometimes comments or meta data may be included in a file:
In [79]: print(open("tmp.csv").read())
ID,level,category
Patient1,123000,x # really unpleasant
Patient2,23000,y # wouldn't take his medicine
Patient3,1234018,z # awesome
By default, the parser includes the comments in the output:
In [80]: df = pd.read_csv("tmp.csv")
In [81]: df
Out[81]:
ID level category
0 Patient1 123000 x # really unpleasant
1 Patient2 23000 y # wouldn't take his medicine
2 Patient3 1234018 z # awesome
We can suppress the comments using the comment keyword:
In [82]: df = pd.read_csv("tmp.csv", comment="#")
In [83]: df
Out[83]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
Dealing with Unicode data#
The encoding argument should be used for encoded unicode data, which will
result in byte strings being decoded to unicode in the result:
In [84]: from io import BytesIO
In [85]: data = b"word,length\n" b"Tr\xc3\xa4umen,7\n" b"Gr\xc3\xbc\xc3\x9fe,5"
In [86]: data = data.decode("utf8").encode("latin-1")
In [87]: df = pd.read_csv(BytesIO(data), encoding="latin-1")
In [88]: df
Out[88]:
word length
0 Träumen 7
1 Grüße 5
In [89]: df["word"][1]
Out[89]: 'Grüße'
Some formats which encode all characters as multiple bytes, like UTF-16, won’t
parse correctly at all without specifying the encoding. Full list of Python
standard encodings.
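For instance, a small sketch (synthetic data) reading UTF-16 encoded bytes:
import pandas as pd
from io import BytesIO

raw = "word,length\nTräumen,7\nGrüße,5".encode("utf-16")
# without encoding="utf-16" the multi-byte data cannot be tokenized correctly
pd.read_csv(BytesIO(raw), encoding="utf-16")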
Index columns and trailing delimiters#
If a file has one more column of data than the number of column names, the
first column will be used as the DataFrame’s row names:
In [90]: data = "a,b,c\n4,apple,bat,5.7\n8,orange,cow,10"
In [91]: pd.read_csv(StringIO(data))
Out[91]:
a b c
4 apple bat 5.7
8 orange cow 10.0
In [92]: data = "index,a,b,c\n4,apple,bat,5.7\n8,orange,cow,10"
In [93]: pd.read_csv(StringIO(data), index_col=0)
Out[93]:
a b c
index
4 apple bat 5.7
8 orange cow 10.0
Ordinarily, you can achieve this behavior using the index_col option.
There are some exception cases when a file has been prepared with delimiters at
the end of each data line, confusing the parser. To explicitly disable the
index column inference and discard the last column, pass index_col=False:
In [94]: data = "a,b,c\n4,apple,bat,\n8,orange,cow,"
In [95]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,
In [96]: pd.read_csv(StringIO(data))
Out[96]:
a b c
4 apple bat NaN
8 orange cow NaN
In [97]: pd.read_csv(StringIO(data), index_col=False)
Out[97]:
a b c
0 4 apple bat
1 8 orange cow
If a subset of data is being parsed using the usecols option, the
index_col specification is based on that subset, not the original data.
In [98]: data = "a,b,c\n4,apple,bat,\n8,orange,cow,"
In [99]: print(data)
a,b,c
4,apple,bat,
8,orange,cow,
In [100]: pd.read_csv(StringIO(data), usecols=["b", "c"])
Out[100]:
b c
4 bat NaN
8 cow NaN
In [101]: pd.read_csv(StringIO(data), usecols=["b", "c"], index_col=0)
Out[101]:
b c
4 bat NaN
8 cow NaN
Date Handling#
Specifying date columns#
To better facilitate working with datetime data, read_csv()
uses the keyword arguments parse_dates and date_parser
to allow users to specify a variety of columns and date/time formats to turn the
input text data into datetime objects.
The simplest case is to just pass in parse_dates=True:
In [102]: with open("foo.csv", mode="w") as f:
.....: f.write("date,A,B,C\n20090101,a,1,2\n20090102,b,3,4\n20090103,c,4,5")
.....:
# Use a column as an index, and parse it as dates.
In [103]: df = pd.read_csv("foo.csv", index_col=0, parse_dates=True)
In [104]: df
Out[104]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
# These are Python datetime objects
In [105]: df.index
Out[105]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype='datetime64[ns]', name='date', freq=None)
It is often the case that we may want to store date and time data separately,
or store various date fields separately. The parse_dates keyword can be
used to specify a combination of columns to parse the dates and/or times from.
You can specify a list of column lists to parse_dates, the resulting date
columns will be prepended to the output (so as to not affect the existing column
order) and the new column names will be the concatenation of the component
column names:
In [106]: data = (
.....: "KORD,19990127, 19:00:00, 18:56:00, 0.8100\n"
.....: "KORD,19990127, 20:00:00, 19:56:00, 0.0100\n"
.....: "KORD,19990127, 21:00:00, 20:56:00, -0.5900\n"
.....: "KORD,19990127, 21:00:00, 21:18:00, -0.9900\n"
.....: "KORD,19990127, 22:00:00, 21:56:00, -0.5900\n"
.....: "KORD,19990127, 23:00:00, 22:56:00, -0.5900"
.....: )
.....:
In [107]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [108]: df = pd.read_csv("tmp.csv", header=None, parse_dates=[[1, 2], [1, 3]])
In [109]: df
Out[109]:
1_2 1_3 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
By default the parser removes the component date columns, but you can choose
to retain them via the keep_date_col keyword:
In [110]: df = pd.read_csv(
.....: "tmp.csv", header=None, parse_dates=[[1, 2], [1, 3]], keep_date_col=True
.....: )
.....:
In [111]: df
Out[111]:
1_2 1_3 0 ... 2 3 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD ... 19:00:00 18:56:00 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD ... 20:00:00 19:56:00 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD ... 21:00:00 20:56:00 -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD ... 21:00:00 21:18:00 -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD ... 22:00:00 21:56:00 -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD ... 23:00:00 22:56:00 -0.59
[6 rows x 7 columns]
Note that if you wish to combine multiple columns into a single date column, a
nested list must be used. In other words, parse_dates=[1, 2] indicates that
the second and third columns should each be parsed as separate date columns
while parse_dates=[[1, 2]] means the two columns should be parsed into a
single column.
You can also use a dict to specify custom names for the resulting columns:
In [112]: date_spec = {"nominal": [1, 2], "actual": [1, 3]}
In [113]: df = pd.read_csv("tmp.csv", header=None, parse_dates=date_spec)
In [114]: df
Out[114]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
It is important to remember that if multiple text columns are to be parsed into
a single date column, then a new column is prepended to the data. The index_col
specification is based off of this new set of columns rather than the original
data columns:
In [115]: date_spec = {"nominal": [1, 2], "actual": [1, 3]}
In [116]: df = pd.read_csv(
.....: "tmp.csv", header=None, parse_dates=date_spec, index_col=0
.....: ) # index is the nominal column
.....:
In [117]: df
Out[117]:
actual 0 4
nominal
1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
Note
If a column or index contains an unparsable date, the entire column or
index will be returned unaltered as an object data type. For non-standard
datetime parsing, use to_datetime() after pd.read_csv.
Note
read_csv has a fast_path for parsing datetime strings in iso8601 format,
e.g “2000-01-01T00:01:02+00:00” and similar variations. If you can arrange
for your data to store datetimes in this format, load times will be
significantly faster, ~20x has been observed.
Date parsing functions#
Finally, the parser allows you to specify a custom date_parser function to
take full advantage of the flexibility of the date parsing API:
In [118]: df = pd.read_csv(
.....: "tmp.csv", header=None, parse_dates=date_spec, date_parser=pd.to_datetime
.....: )
.....:
In [119]: df
Out[119]:
nominal actual 0 4
0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81
1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01
2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59
3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99
4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59
5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59
pandas will try to call the date_parser function in three different ways. If
an exception is raised, the next one is tried:
date_parser is first called with one or more arrays as arguments,
as defined using parse_dates (e.g., date_parser(['2013', '2013'], ['1', '2'])).
If #1 fails, date_parser is called with all the columns
concatenated row-wise into a single array (e.g., date_parser(['2013 1', '2013 2'])).
If #2 fails, date_parser is called once for every row with one or more string
arguments from the columns indicated with parse_dates (e.g., date_parser('2013', '1')).
Note that performance-wise, you should try these methods of parsing dates in order:
Try to infer the format using infer_datetime_format=True (see section below).
If you know the format, use pd.to_datetime():
date_parser=lambda x: pd.to_datetime(x, format=...).
If you have a really non-standard format, use a custom date_parser function.
For optimal performance, this should be vectorized, i.e., it should accept arrays
as arguments.
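As a small sketch of the second approach (data made up; the format string matches it):
import pandas as pd
from io import StringIO

data = "date,value\n2000-01-01 00:01:02,1\n2000-01-02 03:04:05,2"
# an explicit format avoids per-element format inference
pd.read_csv(
    StringIO(data),
    parse_dates=["date"],
    date_parser=lambda x: pd.to_datetime(x, format="%Y-%m-%d %H:%M:%S"),
)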
Parsing a CSV with mixed timezones#
pandas cannot natively represent a column or index with mixed timezones. If your CSV
file contains columns with a mixture of timezones, the default result will be
an object-dtype column with strings, even with parse_dates.
In [120]: content = """\
.....: a
.....: 2000-01-01T00:00:00+05:00
.....: 2000-01-01T00:00:00+06:00"""
.....:
In [121]: df = pd.read_csv(StringIO(content), parse_dates=["a"])
In [122]: df["a"]
Out[122]:
0 2000-01-01 00:00:00+05:00
1 2000-01-01 00:00:00+06:00
Name: a, dtype: object
To parse the mixed-timezone values as a datetime column, pass a partially-applied
to_datetime() with utc=True as the date_parser.
In [123]: df = pd.read_csv(
.....: StringIO(content),
.....: parse_dates=["a"],
.....: date_parser=lambda col: pd.to_datetime(col, utc=True),
.....: )
.....:
In [124]: df["a"]
Out[124]:
0 1999-12-31 19:00:00+00:00
1 1999-12-31 18:00:00+00:00
Name: a, dtype: datetime64[ns, UTC]
Inferring datetime format#
If you have parse_dates enabled for some or all of your columns, and your
datetime strings are all formatted the same way, you may get a large speed
up by setting infer_datetime_format=True. If set, pandas will attempt
to guess the format of your datetime strings, and then use a faster means
of parsing the strings. 5-10x parsing speeds have been observed. pandas
will fallback to the usual parsing if either the format cannot be guessed
or the format that was guessed cannot properly parse the entire column
of strings. So in general, infer_datetime_format should not have any
negative consequences if enabled.
Here are some examples of datetime strings that can be guessed (All
representing December 30th, 2011 at 00:00:00):
“20111230”
“2011/12/30”
“20111230 00:00:00”
“12/30/2011 00:00:00”
“30/Dec/2011 00:00:00”
“30/December/2011 00:00:00”
Note that infer_datetime_format is sensitive to dayfirst. With
dayfirst=True, it will guess “01/12/2011” to be December 1st. With
dayfirst=False (default) it will guess “01/12/2011” to be January 12th.
# Try to infer the format for the index column
In [125]: df = pd.read_csv(
.....: "foo.csv",
.....: index_col=0,
.....: parse_dates=True,
.....: infer_datetime_format=True,
.....: )
.....:
In [126]: df
Out[126]:
A B C
date
2009-01-01 a 1 2
2009-01-02 b 3 4
2009-01-03 c 4 5
International date formats#
While US date formats tend to be MM/DD/YYYY, many international formats use
DD/MM/YYYY instead. For convenience, a dayfirst keyword is provided:
In [127]: data = "date,value,cat\n1/6/2000,5,a\n2/6/2000,10,b\n3/6/2000,15,c"
In [128]: print(data)
date,value,cat
1/6/2000,5,a
2/6/2000,10,b
3/6/2000,15,c
In [129]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [130]: pd.read_csv("tmp.csv", parse_dates=[0])
Out[130]:
date value cat
0 2000-01-06 5 a
1 2000-02-06 10 b
2 2000-03-06 15 c
In [131]: pd.read_csv("tmp.csv", dayfirst=True, parse_dates=[0])
Out[131]:
date value cat
0 2000-06-01 5 a
1 2000-06-02 10 b
2 2000-06-03 15 c
Writing CSVs to binary file objects#
New in version 1.2.0.
df.to_csv(..., mode="wb") allows writing a CSV to a file object
opened in binary mode. In most cases, it is not necessary to specify
mode as pandas will auto-detect whether the file object is
opened in text or binary mode.
In [132]: import io
In [133]: data = pd.DataFrame([0, 1, 2])
In [134]: buffer = io.BytesIO()
In [135]: data.to_csv(buffer, encoding="utf-8", compression="gzip")
Specifying method for floating-point conversion#
The parameter float_precision can be specified in order to use
a specific floating-point converter during parsing with the C engine.
The options are the ordinary converter, the high-precision converter, and
the round-trip converter (which is guaranteed to round-trip values after
writing to a file). For example:
In [136]: val = "0.3066101993807095471566981359501369297504425048828125"
In [137]: data = "a,b,c\n1,2,{0}".format(val)
In [138]: abs(
.....: pd.read_csv(
.....: StringIO(data),
.....: engine="c",
.....: float_precision=None,
.....: )["c"][0] - float(val)
.....: )
.....:
Out[138]: 5.551115123125783e-17
In [139]: abs(
.....: pd.read_csv(
.....: StringIO(data),
.....: engine="c",
.....: float_precision="high",
.....: )["c"][0] - float(val)
.....: )
.....:
Out[139]: 5.551115123125783e-17
In [140]: abs(
.....: pd.read_csv(StringIO(data), engine="c", float_precision="round_trip")["c"][0]
.....: - float(val)
.....: )
.....:
Out[140]: 0.0
Thousand separators#
For large numbers that have been written with a thousands separator, you can
set the thousands keyword to a string of length 1 so that integers will be parsed
correctly:
By default, numbers with a thousands separator will be parsed as strings:
In [141]: data = (
.....: "ID|level|category\n"
.....: "Patient1|123,000|x\n"
.....: "Patient2|23,000|y\n"
.....: "Patient3|1,234,018|z"
.....: )
.....:
In [142]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [143]: df = pd.read_csv("tmp.csv", sep="|")
In [144]: df
Out[144]:
ID level category
0 Patient1 123,000 x
1 Patient2 23,000 y
2 Patient3 1,234,018 z
In [145]: df.level.dtype
Out[145]: dtype('O')
The thousands keyword allows integers to be parsed correctly:
In [146]: df = pd.read_csv("tmp.csv", sep="|", thousands=",")
In [147]: df
Out[147]:
ID level category
0 Patient1 123000 x
1 Patient2 23000 y
2 Patient3 1234018 z
In [148]: df.level.dtype
Out[148]: dtype('int64')
NA values#
To control which values are parsed as missing values (which are signified by
NaN), specify a string in na_values. If you specify a list of strings,
then all values in it are considered to be missing values. If you specify a
number (a float, like 5.0 or an integer like 5), the
corresponding equivalent values will also imply a missing value (in this case
effectively [5.0, 5] are recognized as NaN).
To completely override the default values that are recognized as missing, specify keep_default_na=False.
The default NaN recognized values are ['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', 'N/A',
'n/a', 'NA', '<NA>', '#NA', 'NULL', 'null', 'NaN', '-NaN', 'nan', '-nan', ''].
Let us consider some examples:
pd.read_csv("path_to_file.csv", na_values=[5])
In the example above 5 and 5.0 will be recognized as NaN, in
addition to the defaults. A string will first be interpreted as a numerical
5, then as a NaN.
pd.read_csv("path_to_file.csv", keep_default_na=False, na_values=[""])
Above, only an empty field will be recognized as NaN.
pd.read_csv("path_to_file.csv", keep_default_na=False, na_values=["NA", "0"])
Above, both NA and 0 as strings are NaN.
pd.read_csv("path_to_file.csv", na_values=["Nope"])
The default values, in addition to the string "Nope" are recognized as
NaN.
Infinity#
inf like values will be parsed as np.inf (positive infinity), and -inf as -np.inf (negative infinity).
These will ignore the case of the value, meaning Inf will also be parsed as np.inf.
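A quick sketch (data made up):
import pandas as pd
from io import StringIO

# "inf", "-Inf", and "INF" all parse as +/- infinity, regardless of case
pd.read_csv(StringIO("a\ninf\n-Inf\nINF"))["a"]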
Returning Series#
Using the squeeze keyword, the parser will return output with a single column
as a Series:
Deprecated since version 1.4.0: Users should append .squeeze("columns") to the DataFrame returned by
read_csv instead.
In [149]: data = "level\nPatient1,123000\nPatient2,23000\nPatient3,1234018"
In [150]: with open("tmp.csv", "w") as fh:
.....: fh.write(data)
.....:
In [151]: print(open("tmp.csv").read())
level
Patient1,123000
Patient2,23000
Patient3,1234018
In [152]: output = pd.read_csv("tmp.csv", squeeze=True)
In [153]: output
Out[153]:
Patient1 123000
Patient2 23000
Patient3 1234018
Name: level, dtype: int64
In [154]: type(output)
Out[154]: pandas.core.series.Series
Boolean values#
The common values True, False, TRUE, and FALSE are all
recognized as boolean. Occasionally you might want to recognize other values
as being boolean. To do this, use the true_values and false_values
options as follows:
In [155]: data = "a,b,c\n1,Yes,2\n3,No,4"
In [156]: print(data)
a,b,c
1,Yes,2
3,No,4
In [157]: pd.read_csv(StringIO(data))
Out[157]:
a b c
0 1 Yes 2
1 3 No 4
In [158]: pd.read_csv(StringIO(data), true_values=["Yes"], false_values=["No"])
Out[158]:
a b c
0 1 True 2
1 3 False 4
Handling “bad” lines#
Some files may have malformed lines with too few fields or too many. Lines with
too few fields will have NA values filled in the trailing fields. Lines with
too many fields will raise an error by default:
In [159]: data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"
In [160]: pd.read_csv(StringIO(data))
---------------------------------------------------------------------------
ParserError Traceback (most recent call last)
Cell In[160], line 1
----> 1 pd.read_csv(StringIO(data))
File ~/work/pandas/pandas/pandas/util/_decorators.py:211, in deprecate_kwarg.<locals>._deprecate_kwarg.<locals>.wrapper(*args, **kwargs)
209 else:
210 kwargs[new_arg_name] = new_arg_value
--> 211 return func(*args, **kwargs)
File ~/work/pandas/pandas/pandas/util/_decorators.py:331, in deprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper(*args, **kwargs)
325 if len(args) > num_allow_args:
326 warnings.warn(
327 msg.format(arguments=_format_argument_list(allow_args)),
328 FutureWarning,
329 stacklevel=find_stack_level(),
330 )
--> 331 return func(*args, **kwargs)
File ~/work/pandas/pandas/pandas/io/parsers/readers.py:950, in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, encoding_errors, dialect, error_bad_lines, warn_bad_lines, on_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options)
935 kwds_defaults = _refine_defaults_read(
936 dialect,
937 delimiter,
(...)
946 defaults={"delimiter": ","},
947 )
948 kwds.update(kwds_defaults)
--> 950 return _read(filepath_or_buffer, kwds)
File ~/work/pandas/pandas/pandas/io/parsers/readers.py:611, in _read(filepath_or_buffer, kwds)
608 return parser
610 with parser:
--> 611 return parser.read(nrows)
File ~/work/pandas/pandas/pandas/io/parsers/readers.py:1778, in TextFileReader.read(self, nrows)
1771 nrows = validate_integer("nrows", nrows)
1772 try:
1773 # error: "ParserBase" has no attribute "read"
1774 (
1775 index,
1776 columns,
1777 col_dict,
-> 1778 ) = self._engine.read( # type: ignore[attr-defined]
1779 nrows
1780 )
1781 except Exception:
1782 self.close()
File ~/work/pandas/pandas/pandas/io/parsers/c_parser_wrapper.py:230, in CParserWrapper.read(self, nrows)
228 try:
229 if self.low_memory:
--> 230 chunks = self._reader.read_low_memory(nrows)
231 # destructive to chunks
232 data = _concatenate_chunks(chunks)
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:808, in pandas._libs.parsers.TextReader.read_low_memory()
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:866, in pandas._libs.parsers.TextReader._read_rows()
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:852, in pandas._libs.parsers.TextReader._tokenize_rows()
File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:1973, in pandas._libs.parsers.raise_parser_error()
ParserError: Error tokenizing data. C error: Expected 3 fields in line 3, saw 4
You can elect to skip bad lines:
In [29]: pd.read_csv(StringIO(data), on_bad_lines="warn")
Skipping line 3: expected 3 fields, saw 4
Out[29]:
a b c
0 1 2 3
1 8 9 10
Or pass a callable function to handle the bad line if engine="python".
The bad line will be a list of strings that was split by the sep:
In [29]: external_list = []
In [30]: def bad_lines_func(line):
...: external_list.append(line)
...: return line[-3:]
In [31]: pd.read_csv(StringIO(data), on_bad_lines=bad_lines_func, engine="python")
Out[31]:
a b c
0 1 2 3
1 5 6 7
2 8 9 10
In [32]: external_list
Out[32]: [4, 5, 6, 7]
New in version 1.4.0.
You can also use the usecols parameter to eliminate extraneous column
data that appear in some lines but not others:
In [33]: pd.read_csv(StringIO(data), usecols=[0, 1, 2])
Out[33]:
a b c
0 1 2 3
1 4 5 6
2 8 9 10
In case you want to keep all data including the lines with too many fields, you can
specify a sufficient number of names. This ensures that lines with not enough
fields are filled with NaN.
In [34]: pd.read_csv(StringIO(data), names=['a', 'b', 'c', 'd'])
Out[34]:
a b c d
0 1 2 3 NaN
1 4 5 6 7
2 8 9 10 NaN
Dialect#
The dialect keyword gives greater flexibility in specifying the file format.
By default it uses the Excel dialect but you can specify either the dialect name
or a csv.Dialect instance.
Suppose you had data with unenclosed quotes:
In [161]: data = "label1,label2,label3\n" 'index1,"a,c,e\n' "index2,b,d,f"
In [162]: print(data)
label1,label2,label3
index1,"a,c,e
index2,b,d,f
By default, read_csv uses the Excel dialect and treats the double quote as
the quote character, which causes it to fail when it finds a newline before it
finds the closing double quote.
We can get around this using dialect:
In [163]: import csv
In [164]: dia = csv.excel()
In [165]: dia.quoting = csv.QUOTE_NONE
In [166]: pd.read_csv(StringIO(data), dialect=dia)
Out[166]:
label1 label2 label3
index1 "a c e
index2 b d f
All of the dialect options can be specified separately by keyword arguments:
In [167]: data = "a,b,c~1,2,3~4,5,6"
In [168]: pd.read_csv(StringIO(data), lineterminator="~")
Out[168]:
a b c
0 1 2 3
1 4 5 6
Another common dialect option is skipinitialspace, to skip any whitespace
after a delimiter:
In [169]: data = "a, b, c\n1, 2, 3\n4, 5, 6"
In [170]: print(data)
a, b, c
1, 2, 3
4, 5, 6
In [171]: pd.read_csv(StringIO(data), skipinitialspace=True)
Out[171]:
a b c
0 1 2 3
1 4 5 6
The parsers make every attempt to “do the right thing” and not be fragile. Type
inference is a pretty big deal. If a column can be coerced to integer dtype
without altering the contents, the parser will do so. Any non-numeric
columns will come through as object dtype as with the rest of pandas objects.
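For example, a brief sketch of that inference (data made up):
import pandas as pd
from io import StringIO

# "a" can be coerced to integers, "b" cannot and stays object
pd.read_csv(StringIO("a,b\n1,x\n2,y")).dtypes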
Quoting and Escape Characters#
Quotes (and other escape characters) in embedded fields can be handled in any
number of ways. One way is to use backslashes; to properly parse this data, you
should pass the escapechar option:
In [172]: data = 'a,b\n"hello, \\"Bob\\", nice to see you",5'
In [173]: print(data)
a,b
"hello, \"Bob\", nice to see you",5
In [174]: pd.read_csv(StringIO(data), escapechar="\\")
Out[174]:
a b
0 hello, "Bob", nice to see you 5
Files with fixed width columns#
While read_csv() reads delimited data, the read_fwf() function works
with data files that have known and fixed column widths. The function parameters
to read_fwf are largely the same as read_csv with two extra parameters, and
a different usage of the delimiter parameter:
colspecs: A list of pairs (tuples) giving the extents of the
fixed-width fields of each line as half-open intervals (i.e., [from, to[ ).
String value ‘infer’ can be used to instruct the parser to try detecting
the column specifications from the first 100 rows of the data. Default
behavior, if not specified, is to infer.
widths: A list of field widths which can be used instead of ‘colspecs’
if the intervals are contiguous.
delimiter: Characters to consider as filler characters in the fixed-width file.
Can be used to specify the filler character of the fields
if it is not spaces (e.g., ‘~’).
Consider a typical fixed-width data file:
In [175]: data1 = (
.....: "id8141 360.242940 149.910199 11950.7\n"
.....: "id1594 444.953632 166.985655 11788.4\n"
.....: "id1849 364.136849 183.628767 11806.2\n"
.....: "id1230 413.836124 184.375703 11916.8\n"
.....: "id1948 502.953953 173.237159 12468.3"
.....: )
.....:
In [176]: with open("bar.csv", "w") as f:
.....: f.write(data1)
.....:
In order to parse this file into a DataFrame, we simply need to supply the
column specifications to the read_fwf function along with the file name:
# Column specifications are a list of half-intervals
In [177]: colspecs = [(0, 6), (8, 20), (21, 33), (34, 43)]
In [178]: df = pd.read_fwf("bar.csv", colspecs=colspecs, header=None, index_col=0)
In [179]: df
Out[179]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
Note how the parser automatically picks default integer column names when the
header=None argument is specified. Alternatively, you can supply just the
column widths for contiguous columns:
# Widths are a list of integers
In [180]: widths = [6, 14, 13, 10]
In [181]: df = pd.read_fwf("bar.csv", widths=widths, header=None)
In [182]: df
Out[182]:
0 1 2 3
0 id8141 360.242940 149.910199 11950.7
1 id1594 444.953632 166.985655 11788.4
2 id1849 364.136849 183.628767 11806.2
3 id1230 413.836124 184.375703 11916.8
4 id1948 502.953953 173.237159 12468.3
The parser will take care of extra white spaces around the columns
so it’s ok to have extra separation between the columns in the file.
By default, read_fwf will try to infer the file’s colspecs by using the
first 100 rows of the file. It can do it only in cases when the columns are
aligned and correctly separated by the provided delimiter (default delimiter
is whitespace).
In [183]: df = pd.read_fwf("bar.csv", header=None, index_col=0)
In [184]: df
Out[184]:
1 2 3
0
id8141 360.242940 149.910199 11950.7
id1594 444.953632 166.985655 11788.4
id1849 364.136849 183.628767 11806.2
id1230 413.836124 184.375703 11916.8
id1948 502.953953 173.237159 12468.3
read_fwf supports the dtype parameter for specifying the types of
parsed columns to be different from the inferred type.
In [185]: pd.read_fwf("bar.csv", header=None, index_col=0).dtypes
Out[185]:
1 float64
2 float64
3 float64
dtype: object
In [186]: pd.read_fwf("bar.csv", header=None, dtype={2: "object"}).dtypes
Out[186]:
0 object
1 float64
2 object
3 float64
dtype: object
Indexes#
Files with an “implicit” index column#
Consider a file with one fewer entry in the header than the number of data
columns:
In [187]: data = "A,B,C\n20090101,a,1,2\n20090102,b,3,4\n20090103,c,4,5"
In [188]: print(data)
A,B,C
20090101,a,1,2
20090102,b,3,4
20090103,c,4,5
In [189]: with open("foo.csv", "w") as f:
.....: f.write(data)
.....:
In this special case, read_csv assumes that the first column is to be used
as the index of the DataFrame:
In [190]: pd.read_csv("foo.csv")
Out[190]:
A B C
20090101 a 1 2
20090102 b 3 4
20090103 c 4 5
Note that the dates weren’t automatically parsed. In that case you would need
to do as before:
In [191]: df = pd.read_csv("foo.csv", parse_dates=True)
In [192]: df.index
Out[192]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype='datetime64[ns]', freq=None)
Reading an index with a MultiIndex#
Suppose you have data indexed by two columns:
In [193]: data = 'year,indiv,zit,xit\n1977,"A",1.2,.6\n1977,"B",1.5,.5'
In [194]: print(data)
year,indiv,zit,xit
1977,"A",1.2,.6
1977,"B",1.5,.5
In [195]: with open("mindex_ex.csv", mode="w") as f:
.....: f.write(data)
.....:
The index_col argument to read_csv can take a list of
column numbers to turn multiple columns into a MultiIndex for the index of the
returned object:
In [196]: df = pd.read_csv("mindex_ex.csv", index_col=[0, 1])
In [197]: df
Out[197]:
zit xit
year indiv
1977 A 1.2 0.6
B 1.5 0.5
In [198]: df.loc[1977]
Out[198]:
zit xit
indiv
A 1.2 0.6
B 1.5 0.5
Reading columns with a MultiIndex#
By specifying list of row locations for the header argument, you
can read in a MultiIndex for the columns. Specifying non-consecutive
rows will skip the intervening rows.
In [199]: from pandas._testing import makeCustomDataframe as mkdf
In [200]: df = mkdf(5, 3, r_idx_nlevels=2, c_idx_nlevels=4)
In [201]: df.to_csv("mi.csv")
In [202]: print(open("mi.csv").read())
C0,,C_l0_g0,C_l0_g1,C_l0_g2
C1,,C_l1_g0,C_l1_g1,C_l1_g2
C2,,C_l2_g0,C_l2_g1,C_l2_g2
C3,,C_l3_g0,C_l3_g1,C_l3_g2
R0,R1,,,
R_l0_g0,R_l1_g0,R0C0,R0C1,R0C2
R_l0_g1,R_l1_g1,R1C0,R1C1,R1C2
R_l0_g2,R_l1_g2,R2C0,R2C1,R2C2
R_l0_g3,R_l1_g3,R3C0,R3C1,R3C2
R_l0_g4,R_l1_g4,R4C0,R4C1,R4C2
In [203]: pd.read_csv("mi.csv", header=[0, 1, 2, 3], index_col=[0, 1])
Out[203]:
C0 C_l0_g0 C_l0_g1 C_l0_g2
C1 C_l1_g0 C_l1_g1 C_l1_g2
C2 C_l2_g0 C_l2_g1 C_l2_g2
C3 C_l3_g0 C_l3_g1 C_l3_g2
R0 R1
R_l0_g0 R_l1_g0 R0C0 R0C1 R0C2
R_l0_g1 R_l1_g1 R1C0 R1C1 R1C2
R_l0_g2 R_l1_g2 R2C0 R2C1 R2C2
R_l0_g3 R_l1_g3 R3C0 R3C1 R3C2
R_l0_g4 R_l1_g4 R4C0 R4C1 R4C2
read_csv is also able to interpret a more common format
of multi-column indices.
In [204]: data = ",a,a,a,b,c,c\n,q,r,s,t,u,v\none,1,2,3,4,5,6\ntwo,7,8,9,10,11,12"
In [205]: print(data)
,a,a,a,b,c,c
,q,r,s,t,u,v
one,1,2,3,4,5,6
two,7,8,9,10,11,12
In [206]: with open("mi2.csv", "w") as fh:
.....: fh.write(data)
.....:
In [207]: pd.read_csv("mi2.csv", header=[0, 1], index_col=0)
Out[207]:
a b c
q r s t u v
one 1 2 3 4 5 6
two 7 8 9 10 11 12
Note
If an index_col is not specified (e.g. you don’t have an index, or wrote it
with df.to_csv(..., index=False)), then any names on the columns index will
be lost.
Automatically “sniffing” the delimiter#
read_csv is capable of inferring delimited (not necessarily
comma-separated) files, as pandas uses the csv.Sniffer
class of the csv module. For this, you have to specify sep=None.
In [208]: df = pd.DataFrame(np.random.randn(10, 4))
In [209]: df.to_csv("tmp.csv", sep="|")
In [210]: df.to_csv("tmp2.csv", sep=":")
In [211]: pd.read_csv("tmp2.csv", sep=None, engine="python")
Out[211]:
Unnamed: 0 0 1 2 3
0 0 0.469112 -0.282863 -1.509059 -1.135632
1 1 1.212112 -0.173215 0.119209 -1.044236
2 2 -0.861849 -2.104569 -0.494929 1.071804
3 3 0.721555 -0.706771 -1.039575 0.271860
4 4 -0.424972 0.567020 0.276232 -1.087401
5 5 -0.673690 0.113648 -1.478427 0.524988
6 6 0.404705 0.577046 -1.715002 -1.039268
7 7 -0.370647 -1.157892 -1.344312 0.844885
8 8 1.075770 -0.109050 1.643563 -1.469388
9 9 0.357021 -0.674600 -1.776904 -0.968914
Reading multiple files to create a single DataFrame#
It’s best to use concat() to combine multiple files.
See the cookbook for an example.
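A minimal sketch (the glob pattern and file names here are hypothetical):
import glob
import pandas as pd

files = sorted(glob.glob("data_*.csv"))  # hypothetical set of files with the same columns
df = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)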
Iterating through files chunk by chunk#
Suppose you wish to iterate through a (potentially very large) file lazily
rather than reading the entire file into memory, such as the following:
In [212]: df = pd.DataFrame(np.random.randn(10, 4))
In [213]: df.to_csv("tmp.csv", sep="|")
In [214]: table = pd.read_csv("tmp.csv", sep="|")
In [215]: table
Out[215]:
Unnamed: 0 0 1 2 3
0 0 -1.294524 0.413738 0.276662 -0.472035
1 1 -0.013960 -0.362543 -0.006154 -0.923061
2 2 0.895717 0.805244 -1.206412 2.565646
3 3 1.431256 1.340309 -1.170299 -0.226169
4 4 0.410835 0.813850 0.132003 -0.827317
5 5 -0.076467 -1.187678 1.130127 -1.436737
6 6 -1.413681 1.607920 1.024180 0.569605
7 7 0.875906 -2.211372 0.974466 -2.006747
8 8 -0.410001 -0.078638 0.545952 -1.219217
9 9 -1.226825 0.769804 -1.281247 -0.727707
By specifying a chunksize to read_csv, the return
value will be an iterable object of type TextFileReader:
In [216]: with pd.read_csv("tmp.csv", sep="|", chunksize=4) as reader:
.....: reader
.....: for chunk in reader:
.....: print(chunk)
.....:
Unnamed: 0 0 1 2 3
0 0 -1.294524 0.413738 0.276662 -0.472035
1 1 -0.013960 -0.362543 -0.006154 -0.923061
2 2 0.895717 0.805244 -1.206412 2.565646
3 3 1.431256 1.340309 -1.170299 -0.226169
Unnamed: 0 0 1 2 3
4 4 0.410835 0.813850 0.132003 -0.827317
5 5 -0.076467 -1.187678 1.130127 -1.436737
6 6 -1.413681 1.607920 1.024180 0.569605
7 7 0.875906 -2.211372 0.974466 -2.006747
Unnamed: 0 0 1 2 3
8 8 -0.410001 -0.078638 0.545952 -1.219217
9 9 -1.226825 0.769804 -1.281247 -0.727707
Changed in version 1.2: read_csv/json/sas return a context-manager when iterating through a file.
Specifying iterator=True will also return the TextFileReader object:
In [217]: with pd.read_csv("tmp.csv", sep="|", iterator=True) as reader:
.....: reader.get_chunk(5)
.....:
Specifying the parser engine#
Pandas currently supports three engines, the C engine, the python engine, and an experimental
pyarrow engine (requires the pyarrow package). In general, the pyarrow engine is fastest
on larger workloads and is equivalent in speed to the C engine on most other workloads.
The python engine tends to be slower than the pyarrow and C engines on most workloads. However,
the pyarrow engine is much less robust than the C engine, which in turn lacks a few features
compared to the Python engine.
Where possible, pandas uses the C parser (specified as engine='c'), but it may fall
back to Python if C-unsupported options are specified.
Currently, options unsupported by the C and pyarrow engines include:
sep other than a single character (e.g. regex separators)
skipfooter
sep=None with delim_whitespace=False
Specifying any of the above options will produce a ParserWarning unless the
python engine is selected explicitly using engine='python'.
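For example, a small sketch (data made up) where a multi-character separator requires the python engine; selecting it explicitly avoids the ParserWarning:
import pandas as pd
from io import StringIO

data = "a::b::c\n1::2::3"
# a multi-character sep is treated as a regular expression and is not supported by the C engine
pd.read_csv(StringIO(data), sep="::", engine="python")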
Options that are unsupported by the pyarrow engine which are not covered by the list above include:
float_precision
chunksize
comment
nrows
thousands
memory_map
dialect
warn_bad_lines
error_bad_lines
on_bad_lines
delim_whitespace
quoting
lineterminator
converters
decimal
iterator
dayfirst
infer_datetime_format
verbose
skipinitialspace
low_memory
Specifying these options with engine='pyarrow' will raise a ValueError.
Reading/writing remote files#
You can pass in a URL to read or write remote files to many of pandas’ IO
functions - the following example shows reading a CSV file:
df = pd.read_csv("https://download.bls.gov/pub/time.series/cu/cu.item", sep="\t")
New in version 1.3.0.
A custom header can be sent alongside HTTP(s) requests by passing a dictionary
of header key value mappings to the storage_options keyword argument as shown below:
headers = {"User-Agent": "pandas"}
df = pd.read_csv(
"https://download.bls.gov/pub/time.series/cu/cu.item",
sep="\t",
storage_options=headers
)
All URLs which are not local files or HTTP(s) are handled by
fsspec, if installed, and its various filesystem implementations
(including Amazon S3, Google Cloud, SSH, FTP, webHDFS…).
Some of these implementations will require additional packages to be
installed, for example
S3 URLs require the s3fs library:
df = pd.read_json("s3://pandas-test/adatafile.json")
When dealing with remote storage systems, you might need
extra configuration with environment variables or config files in
special locations. For example, to access data in your S3 bucket,
you will need to define credentials in one of the several ways listed in
the S3Fs documentation. The same is true
for several of the storage backends, and you should follow the links
at fsimpl1 for implementations built into fsspec and fsimpl2
for those not included in the main fsspec
distribution.
You can also pass parameters directly to the backend driver. For example,
if you do not have S3 credentials, you can still access public data by
specifying an anonymous connection, such as
New in version 1.2.0.
pd.read_csv(
"s3://ncei-wcsd-archive/data/processed/SH1305/18kHz/SaKe2013"
"-D20130523-T080854_to_SaKe2013-D20130523-T085643.csv",
storage_options={"anon": True},
)
fsspec also allows complex URLs, for accessing data in compressed
archives, local caching of files, and more. To locally cache the above
example, you would modify the call to
pd.read_csv(
"simplecache::s3://ncei-wcsd-archive/data/processed/SH1305/18kHz/"
"SaKe2013-D20130523-T080854_to_SaKe2013-D20130523-T085643.csv",
storage_options={"s3": {"anon": True}},
)
where we specify that the “anon” parameter is meant for the “s3” part of
the implementation, not to the caching implementation. Note that this caches to a temporary
directory for the duration of the session only, but you can also specify
a permanent store.
Writing out data#
Writing to CSV format#
The Series and DataFrame objects have an instance method to_csv which
allows storing the contents of the object as a comma-separated-values file. The
function takes a number of arguments. Only the first is required.
path_or_buf: A string path to the file to write or a file object. If a file object it must be opened with newline=''
sep : Field delimiter for the output file (default “,”)
na_rep: A string representation of a missing value (default ‘’)
float_format: Format string for floating point numbers
columns: Columns to write (default None)
header: Whether to write out the column names (default True)
index: whether to write row (index) names (default True)
index_label: Column label(s) for index column(s) if desired. If None
(default), and header and index are True, then the index names are
used. (A sequence should be given if the DataFrame uses MultiIndex).
mode : Python write mode, default ‘w’
encoding: a string representing the encoding to use if the contents are
non-ASCII, for Python versions prior to 3
lineterminator: Character sequence denoting line end (default os.linesep)
quoting: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL). Note that if you have set a float_format then floats are converted to strings and csv.QUOTE_NONNUMERIC will treat them as non-numeric
quotechar: Character used to quote fields (default ‘”’)
doublequote: Control quoting of quotechar in fields (default True)
escapechar: Character used to escape sep and quotechar when
appropriate (default None)
chunksize: Number of rows to write at a time
date_format: Format string for datetime objects
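For example, a minimal sketch combining a few of these arguments (the output file name out.csv is hypothetical; not executed here):
df = pd.DataFrame({"a": [1, 2], "b": [3.0, None]})
# semicolon-separated output, missing values written as "NA", floats with two decimals
df.to_csv("out.csv", sep=";", na_rep="NA", float_format="%.2f", index=False)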
Writing a formatted string#
The DataFrame object has an instance method to_string which allows control
over the string representation of the object. All arguments are optional:
buf default None, for example a StringIO object
columns default None, which columns to write
col_space default None, minimum width of each column.
na_rep default NaN, representation of NA value
formatters default None, a dictionary (by column) of functions each of
which takes a single argument and returns a formatted string
float_format default None, a function which takes a single (float)
argument and returns a formatted string; to be applied to floats in the
DataFrame.
sparsify default True, set to False for a DataFrame with a hierarchical
index to print every MultiIndex key at each row.
index_names default True, will print the names of the indices
index default True, will print the index (ie, row labels)
header default True, will print the column labels
justify default left, will print column headers left- or
right-justified
The Series object also has a to_string method, but with only the buf,
na_rep, float_format arguments. There is also a length argument
which, if set to True, will additionally output the length of the Series.
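For example, a small sketch of to_string with a few of these options (not executed here):
df = pd.DataFrame({"a": [1.23456, 2.0], "b": [np.nan, 4.0]})
# na_rep replaces NaN, float_format takes a callable, index=False hides row labels
print(df.to_string(na_rep="-", float_format="{:.2f}".format, index=False))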
JSON#
Read and write JSON format files and strings.
Writing JSON#
A Series or DataFrame can be converted to a valid JSON string. Use to_json
with optional parameters:
path_or_buf : the pathname or buffer to write the output
This can be None in which case a JSON string is returned
orient :
Series:
default is index
allowed values are {split, records, index}
DataFrame:
default is columns
allowed values are {split, records, index, columns, values, table}
The format of the JSON string:
split : dict like {index -> [index], columns -> [columns], data -> [values]}
records : list like [{column -> value}, … , {column -> value}]
index : dict like {index -> {column -> value}}
columns : dict like {column -> {index -> value}}
values : just the values array
table : adhering to the JSON Table Schema
date_format : string, type of date conversion, ‘epoch’ for timestamp, ‘iso’ for ISO8601.
double_precision : The number of decimal places to use when encoding floating point values, default 10.
force_ascii : force encoded string to be ASCII, default True.
date_unit : The time unit to encode to, governs timestamp and ISO8601 precision. One of ‘s’, ‘ms’, ‘us’ or ‘ns’ for seconds, milliseconds, microseconds and nanoseconds respectively. Default ‘ms’.
default_handler : The handler to call if an object cannot otherwise be converted to a suitable format for JSON. Takes a single argument, which is the object to convert, and returns a serializable object.
lines : If the records orient is used, each record will be written as one json object per line.
Note NaN’s, NaT’s and None will be converted to null and datetime objects will be converted based on the date_format and date_unit parameters.
In [218]: dfj = pd.DataFrame(np.random.randn(5, 2), columns=list("AB"))
In [219]: json = dfj.to_json()
In [220]: json
Out[220]: '{"A":{"0":-0.1213062281,"1":0.6957746499,"2":0.9597255933,"3":-0.6199759194,"4":-0.7323393705},"B":{"0":-0.0978826728,"1":0.3417343559,"2":-1.1103361029,"3":0.1497483186,"4":0.6877383895}}'
Orient options#
There are a number of different options for the format of the resulting JSON
file / string. Consider the following DataFrame and Series:
In [221]: dfjo = pd.DataFrame(
.....: dict(A=range(1, 4), B=range(4, 7), C=range(7, 10)),
.....: columns=list("ABC"),
.....: index=list("xyz"),
.....: )
.....:
In [222]: dfjo
Out[222]:
A B C
x 1 4 7
y 2 5 8
z 3 6 9
In [223]: sjo = pd.Series(dict(x=15, y=16, z=17), name="D")
In [224]: sjo
Out[224]:
x 15
y 16
z 17
Name: D, dtype: int64
Column oriented (the default for DataFrame) serializes the data as
nested JSON objects with column labels acting as the primary index:
In [225]: dfjo.to_json(orient="columns")
Out[225]: '{"A":{"x":1,"y":2,"z":3},"B":{"x":4,"y":5,"z":6},"C":{"x":7,"y":8,"z":9}}'
# Not available for Series
Index oriented (the default for Series) similar to column oriented
but the index labels are now primary:
In [226]: dfjo.to_json(orient="index")
Out[226]: '{"x":{"A":1,"B":4,"C":7},"y":{"A":2,"B":5,"C":8},"z":{"A":3,"B":6,"C":9}}'
In [227]: sjo.to_json(orient="index")
Out[227]: '{"x":15,"y":16,"z":17}'
Record oriented serializes the data to a JSON array of column -> value records;
index labels are not included. This is useful for passing DataFrame data to plotting
libraries, for example the JavaScript library d3.js:
In [228]: dfjo.to_json(orient="records")
Out[228]: '[{"A":1,"B":4,"C":7},{"A":2,"B":5,"C":8},{"A":3,"B":6,"C":9}]'
In [229]: sjo.to_json(orient="records")
Out[229]: '[15,16,17]'
Value oriented is a bare-bones option which serializes to nested JSON arrays of
values only; column and index labels are not included:
In [230]: dfjo.to_json(orient="values")
Out[230]: '[[1,4,7],[2,5,8],[3,6,9]]'
# Not available for Series
Split oriented serializes to a JSON object containing separate entries for
values, index and columns. Name is also included for Series:
In [231]: dfjo.to_json(orient="split")
Out[231]: '{"columns":["A","B","C"],"index":["x","y","z"],"data":[[1,4,7],[2,5,8],[3,6,9]]}'
In [232]: sjo.to_json(orient="split")
Out[232]: '{"name":"D","index":["x","y","z"],"data":[15,16,17]}'
Table oriented serializes to the JSON Table Schema, allowing for the
preservation of metadata including but not limited to dtypes and index names.
Note
Any orient option that encodes to a JSON object will not preserve the ordering of
index and column labels during round-trip serialization. If you wish to preserve
label ordering use the split option as it uses ordered containers.
Date handling#
Writing in ISO date format:
In [233]: dfd = pd.DataFrame(np.random.randn(5, 2), columns=list("AB"))
In [234]: dfd["date"] = pd.Timestamp("20130101")
In [235]: dfd = dfd.sort_index(axis=1, ascending=False)
In [236]: json = dfd.to_json(date_format="iso")
In [237]: json
Out[237]: '{"date":{"0":"2013-01-01T00:00:00.000","1":"2013-01-01T00:00:00.000","2":"2013-01-01T00:00:00.000","3":"2013-01-01T00:00:00.000","4":"2013-01-01T00:00:00.000"},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'
Writing in ISO date format, with microseconds:
In [238]: json = dfd.to_json(date_format="iso", date_unit="us")
In [239]: json
Out[239]: '{"date":{"0":"2013-01-01T00:00:00.000000","1":"2013-01-01T00:00:00.000000","2":"2013-01-01T00:00:00.000000","3":"2013-01-01T00:00:00.000000","4":"2013-01-01T00:00:00.000000"},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'
Epoch timestamps, in seconds:
In [240]: json = dfd.to_json(date_format="epoch", date_unit="s")
In [241]: json
Out[241]: '{"date":{"0":1356998400,"1":1356998400,"2":1356998400,"3":1356998400,"4":1356998400},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'
Writing to a file, with a date index and a date column:
In [242]: dfj2 = dfj.copy()
In [243]: dfj2["date"] = pd.Timestamp("20130101")
In [244]: dfj2["ints"] = list(range(5))
In [245]: dfj2["bools"] = True
In [246]: dfj2.index = pd.date_range("20130101", periods=5)
In [247]: dfj2.to_json("test.json")
In [248]: with open("test.json") as fh:
.....: print(fh.read())
.....:
{"A":{"1356998400000":-0.1213062281,"1357084800000":0.6957746499,"1357171200000":0.9597255933,"1357257600000":-0.6199759194,"1357344000000":-0.7323393705},"B":{"1356998400000":-0.0978826728,"1357084800000":0.3417343559,"1357171200000":-1.1103361029,"1357257600000":0.1497483186,"1357344000000":0.6877383895},"date":{"1356998400000":1356998400000,"1357084800000":1356998400000,"1357171200000":1356998400000,"1357257600000":1356998400000,"1357344000000":1356998400000},"ints":{"1356998400000":0,"1357084800000":1,"1357171200000":2,"1357257600000":3,"1357344000000":4},"bools":{"1356998400000":true,"1357084800000":true,"1357171200000":true,"1357257600000":true,"1357344000000":true}}
Fallback behavior#
If the JSON serializer cannot handle the container contents directly it will
fall back in the following manner:
if the dtype is unsupported (e.g. np.complex_) then the default_handler, if provided, will be called
for each value, otherwise an exception is raised.
if an object is unsupported it will attempt the following:
check if the object has defined a toDict method and call it.
A toDict method should return a dict which will then be JSON serialized.
invoke the default_handler if one was provided.
convert the object to a dict by traversing its contents. However this will often fail
with an OverflowError or give unexpected results.
In general the best approach for unsupported objects or dtypes is to provide a default_handler.
For example:
>>> pd.DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json()  # raises
RuntimeError: Unhandled numpy dtype 15
can be dealt with by specifying a simple default_handler:
In [249]: pd.DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json(default_handler=str)
Out[249]: '{"0":{"0":"(1+0j)","1":"(2+0j)","2":"(1+2j)"}}'
Reading JSON#
Reading a JSON string to pandas object can take a number of parameters.
The parser will try to parse a DataFrame if typ is not supplied or
is None. To explicitly force Series parsing, pass typ=series
filepath_or_buffer : a VALID JSON string or file handle / StringIO. The string could be
a URL. Valid URL schemes include http, ftp, S3, and file. For file URLs, a host
is expected. For instance, a local file could be
file://localhost/path/to/table.json
typ : type of object to recover (series or frame), default ‘frame’
orient :
Series :
default is index
allowed values are {split, records, index}
DataFrame
default is columns
allowed values are {split, records, index, columns, values, table}
The format of the JSON string:
split : dict like {index -> [index], columns -> [columns], data -> [values]}
records : list like [{column -> value}, … , {column -> value}]
index : dict like {index -> {column -> value}}
columns : dict like {column -> {index -> value}}
values : just the values array
table : adhering to the JSON Table Schema
dtype : if True, infer dtypes, if a dict of column to dtype, then use those, if False, then don’t infer dtypes at all, default is True, apply only to the data.
convert_axes : boolean, try to convert the axes to the proper dtypes, default is True
convert_dates : a list of columns to parse for dates; If True, then try to parse date-like columns, default is True.
keep_default_dates : boolean, default True. If parsing dates, then parse the default date-like columns.
numpy : direct decoding to NumPy arrays. default is False;
Supports numeric data only, although labels may be non-numeric. Also note that the JSON ordering MUST be the same for each term if numpy=True.
precise_float : boolean, default False. Set to enable usage of higher precision (strtod) function when decoding string to double values. Default (False) is to use fast but less precise builtin functionality.
date_unit : string, the timestamp unit to detect if converting dates. Default
None. By default the timestamp precision will be detected, if this is not desired
then pass one of ‘s’, ‘ms’, ‘us’ or ‘ns’ to force timestamp precision to
seconds, milliseconds, microseconds or nanoseconds respectively.
lines : reads file as one json object per line.
encoding : The encoding to use to decode py3 bytes.
chunksize : when used in combination with lines=True, return a JsonReader which reads in chunksize lines per iteration.
The parser will raise one of ValueError/TypeError/AssertionError if the JSON is not parseable.
If a non-default orient was used when encoding to JSON be sure to pass the same
option here so that decoding produces sensible results, see Orient Options for an
overview.
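For instance, a brief sketch of such a round trip with the split orient, reusing dfjo from above (not executed here):
js = dfjo.to_json(orient="split")
pd.read_json(js, orient="split")  # decoding with the same orient reconstructs dfjo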
Data conversion#
The default of convert_axes=True, dtype=True, and convert_dates=True
will try to parse the axes, and all of the data into appropriate types,
including dates. If you need to override specific dtypes, pass a dict to
dtype. convert_axes should only be set to False if you need to
preserve string-like numbers (e.g. ‘1’, ‘2’) in an axis.
Note
Large integer values may be converted to dates if convert_dates=True and the data and / or column labels appear ‘date-like’. The exact threshold depends on the date_unit specified. ‘date-like’ means that the column label meets one of the following criteria:
it ends with '_at'
it ends with '_time'
it begins with 'timestamp'
it is 'modified'
it is 'date'
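For example, a small sketch of this detection (the values are epoch milliseconds; not executed here, and the outcome assumes the default convert_dates and keep_default_dates):
js = '{"modified": {"0": 1356998400000}, "total": {"0": 1356998400000}}'
# "modified" matches the date-like criteria above and is parsed to datetimes;
# "total" does not, so its values stay integers
pd.read_json(js)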
Warning
When reading JSON data, automatic coercing into dtypes has some quirks:
an index can be reconstructed in a different order from serialization, that is, the returned order is not guaranteed to be the same as before serialization
a column that was float data will be converted to integer if it can be done safely, e.g. a column of 1.
bool columns will be converted to integer on reconstruction
Thus there are times where you may want to specify specific dtypes via the dtype keyword argument.
Reading from a JSON string:
In [250]: pd.read_json(json)
Out[250]:
date B A
0 2013-01-01 0.403310 0.176444
1 2013-01-01 0.301624 -0.154951
2 2013-01-01 -1.369849 -2.179861
3 2013-01-01 1.462696 -0.954208
4 2013-01-01 -0.826591 -1.743161
Reading from a file:
In [251]: pd.read_json("test.json")
Out[251]:
A B date ints bools
2013-01-01 -0.121306 -0.097883 2013-01-01 0 True
2013-01-02 0.695775 0.341734 2013-01-01 1 True
2013-01-03 0.959726 -1.110336 2013-01-01 2 True
2013-01-04 -0.619976 0.149748 2013-01-01 3 True
2013-01-05 -0.732339 0.687738 2013-01-01 4 True
Don’t convert any data (but still convert axes and dates):
In [252]: pd.read_json("test.json", dtype=object).dtypes
Out[252]:
A object
B object
date object
ints object
bools object
dtype: object
Specify dtypes for conversion:
In [253]: pd.read_json("test.json", dtype={"A": "float32", "bools": "int8"}).dtypes
Out[253]:
A float32
B float64
date datetime64[ns]
ints int64
bools int8
dtype: object
Preserve string indices:
In [254]: si = pd.DataFrame(
.....: np.zeros((4, 4)), columns=list(range(4)), index=[str(i) for i in range(4)]
.....: )
.....:
In [255]: si
Out[255]:
0 1 2 3
0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0
In [256]: si.index
Out[256]: Index(['0', '1', '2', '3'], dtype='object')
In [257]: si.columns
Out[257]: Int64Index([0, 1, 2, 3], dtype='int64')
In [258]: json = si.to_json()
In [259]: sij = pd.read_json(json, convert_axes=False)
In [260]: sij
Out[260]:
0 1 2 3
0 0 0 0 0
1 0 0 0 0
2 0 0 0 0
3 0 0 0 0
In [261]: sij.index
Out[261]: Index(['0', '1', '2', '3'], dtype='object')
In [262]: sij.columns
Out[262]: Index(['0', '1', '2', '3'], dtype='object')
Dates written in nanoseconds need to be read back in nanoseconds:
In [263]: json = dfj2.to_json(date_unit="ns")
# Try to parse timestamps as milliseconds -> Won't Work
In [264]: dfju = pd.read_json(json, date_unit="ms")
In [265]: dfju
Out[265]:
A B date ints bools
1356998400000000000 -0.121306 -0.097883 1356998400000000000 0 True
1357084800000000000 0.695775 0.341734 1356998400000000000 1 True
1357171200000000000 0.959726 -1.110336 1356998400000000000 2 True
1357257600000000000 -0.619976 0.149748 1356998400000000000 3 True
1357344000000000000 -0.732339 0.687738 1356998400000000000 4 True
# Let pandas detect the correct precision
In [266]: dfju = pd.read_json(json)
In [267]: dfju
Out[267]:
A B date ints bools
2013-01-01 -0.121306 -0.097883 2013-01-01 0 True
2013-01-02 0.695775 0.341734 2013-01-01 1 True
2013-01-03 0.959726 -1.110336 2013-01-01 2 True
2013-01-04 -0.619976 0.149748 2013-01-01 3 True
2013-01-05 -0.732339 0.687738 2013-01-01 4 True
# Or specify that all timestamps are in nanoseconds
In [268]: dfju = pd.read_json(json, date_unit="ns")
In [269]: dfju
Out[269]:
A B date ints bools
2013-01-01 -0.121306 -0.097883 2013-01-01 0 True
2013-01-02 0.695775 0.341734 2013-01-01 1 True
2013-01-03 0.959726 -1.110336 2013-01-01 2 True
2013-01-04 -0.619976 0.149748 2013-01-01 3 True
2013-01-05 -0.732339 0.687738 2013-01-01 4 True
The Numpy parameter#
Note
This param has been deprecated as of version 1.0.0 and will raise a FutureWarning.
This supports numeric data only. Index and column labels may be non-numeric, e.g. strings, dates etc.
If numpy=True is passed to read_json an attempt will be made to sniff
an appropriate dtype during deserialization and to subsequently decode directly
to NumPy arrays, bypassing the need for intermediate Python objects.
This can provide speedups if you are deserialising a large amount of numeric
data:
In [270]: randfloats = np.random.uniform(-100, 1000, 10000)
In [271]: randfloats.shape = (1000, 10)
In [272]: dffloats = pd.DataFrame(randfloats, columns=list("ABCDEFGHIJ"))
In [273]: jsonfloats = dffloats.to_json()
In [274]: %timeit pd.read_json(jsonfloats)
7.91 ms +- 77.3 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [275]: %timeit pd.read_json(jsonfloats, numpy=True)
5.71 ms +- 333 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
The speedup is less noticeable for smaller datasets:
In [276]: jsonfloats = dffloats.head(100).to_json()
In [277]: %timeit pd.read_json(jsonfloats)
4.46 ms +- 25.9 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
In [278]: %timeit pd.read_json(jsonfloats, numpy=True)
4.09 ms +- 32.3 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
Warning
Direct NumPy decoding makes a number of assumptions and may fail or produce
unexpected output if these assumptions are not satisfied:
data is numeric.
data is uniform. The dtype is sniffed from the first value decoded.
A ValueError may be raised, or incorrect output may be produced
if this condition is not satisfied.
labels are ordered. Labels are only read from the first container; it is assumed
that each subsequent row / column has been encoded in the same order. This should be satisfied if the
data was encoded using to_json but may not be the case if the JSON
is from another source.
Normalization#
pandas provides a utility function to take a dict or list of dicts and normalize this semi-structured data
into a flat table.
In [279]: data = [
.....: {"id": 1, "name": {"first": "Coleen", "last": "Volk"}},
.....: {"name": {"given": "Mark", "family": "Regner"}},
.....: {"id": 2, "name": "Faye Raker"},
.....: ]
.....:
In [280]: pd.json_normalize(data)
Out[280]:
id name.first name.last name.given name.family name
0 1.0 Coleen Volk NaN NaN NaN
1 NaN NaN NaN Mark Regner NaN
2 2.0 NaN NaN NaN NaN Faye Raker
In [281]: data = [
.....: {
.....: "state": "Florida",
.....: "shortname": "FL",
.....: "info": {"governor": "Rick Scott"},
.....: "county": [
.....: {"name": "Dade", "population": 12345},
.....: {"name": "Broward", "population": 40000},
.....: {"name": "Palm Beach", "population": 60000},
.....: ],
.....: },
.....: {
.....: "state": "Ohio",
.....: "shortname": "OH",
.....: "info": {"governor": "John Kasich"},
.....: "county": [
.....: {"name": "Summit", "population": 1234},
.....: {"name": "Cuyahoga", "population": 1337},
.....: ],
.....: },
.....: ]
.....:
In [282]: pd.json_normalize(data, "county", ["state", "shortname", ["info", "governor"]])
Out[282]:
name population state shortname info.governor
0 Dade 12345 Florida FL Rick Scott
1 Broward 40000 Florida FL Rick Scott
2 Palm Beach 60000 Florida FL Rick Scott
3 Summit 1234 Ohio OH John Kasich
4 Cuyahoga 1337 Ohio OH John Kasich
The max_level parameter provides more control over which level to end normalization.
With max_level=1 the following snippet normalizes up to the first nesting level of the provided dict.
In [283]: data = [
.....: {
.....: "CreatedBy": {"Name": "User001"},
.....: "Lookup": {
.....: "TextField": "Some text",
.....: "UserField": {"Id": "ID001", "Name": "Name001"},
.....: },
.....: "Image": {"a": "b"},
.....: }
.....: ]
.....:
In [284]: pd.json_normalize(data, max_level=1)
Out[284]:
CreatedBy.Name Lookup.TextField Lookup.UserField Image.a
0 User001 Some text {'Id': 'ID001', 'Name': 'Name001'} b
Line delimited json#
pandas is able to read and write line-delimited json files that are common in data processing pipelines
using Hadoop or Spark.
For line-delimited json files, pandas can also return an iterator which reads in chunksize lines at a time. This can be useful for large files or to read from a stream.
In [285]: jsonl = """
.....: {"a": 1, "b": 2}
.....: {"a": 3, "b": 4}
.....: """
.....:
In [286]: df = pd.read_json(jsonl, lines=True)
In [287]: df
Out[287]:
a b
0 1 2
1 3 4
In [288]: df.to_json(orient="records", lines=True)
Out[288]: '{"a":1,"b":2}\n{"a":3,"b":4}\n'
# reader is an iterator that returns ``chunksize`` lines each iteration
In [289]: with pd.read_json(StringIO(jsonl), lines=True, chunksize=1) as reader:
.....: reader
.....: for chunk in reader:
.....: print(chunk)
.....:
Empty DataFrame
Columns: []
Index: []
a b
0 1 2
a b
1 3 4
Table schema#
Table Schema is a spec for describing tabular datasets as a JSON
object. The JSON includes information on the field names, types, and
other attributes. You can use the orient table to build
a JSON string with two fields, schema and data.
In [290]: df = pd.DataFrame(
.....: {
.....: "A": [1, 2, 3],
.....: "B": ["a", "b", "c"],
.....: "C": pd.date_range("2016-01-01", freq="d", periods=3),
.....: },
.....: index=pd.Index(range(3), name="idx"),
.....: )
.....:
In [291]: df
Out[291]:
A B C
idx
0 1 a 2016-01-01
1 2 b 2016-01-02
2 3 c 2016-01-03
In [292]: df.to_json(orient="table", date_format="iso")
Out[292]: '{"schema":{"fields":[{"name":"idx","type":"integer"},{"name":"A","type":"integer"},{"name":"B","type":"string"},{"name":"C","type":"datetime"}],"primaryKey":["idx"],"pandas_version":"1.4.0"},"data":[{"idx":0,"A":1,"B":"a","C":"2016-01-01T00:00:00.000"},{"idx":1,"A":2,"B":"b","C":"2016-01-02T00:00:00.000"},{"idx":2,"A":3,"B":"c","C":"2016-01-03T00:00:00.000"}]}'
The schema field contains the fields key, which itself contains
a list of column name to type pairs, including the Index or MultiIndex
(see below for a list of types).
The schema field also contains a primaryKey field if the (Multi)index
is unique.
The second field, data, contains the serialized data with the records
orient.
The index is included, and any datetimes are ISO 8601 formatted, as required
by the Table Schema spec.
The full list of types supported are described in the Table Schema
spec. This table shows the mapping from pandas types:
pandas type          Table Schema type
int64                integer
float64              number
bool                 boolean
datetime64[ns]       datetime
timedelta64[ns]      duration
categorical          any
object               str
A few notes on the generated table schema:
The schema object contains a pandas_version field. This contains
the version of pandas’ dialect of the schema, and will be incremented
with each revision.
All dates are converted to UTC when serializing; even timezone-naive values
are treated as UTC with an offset of 0.
In [293]: from pandas.io.json import build_table_schema
In [294]: s = pd.Series(pd.date_range("2016", periods=4))
In [295]: build_table_schema(s)
Out[295]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime'}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
Datetimes with a timezone (before serializing) include an additional field
tz with the time zone name (e.g. 'US/Central').
In [296]: s_tz = pd.Series(pd.date_range("2016", periods=12, tz="US/Central"))
In [297]: build_table_schema(s_tz)
Out[297]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'datetime', 'tz': 'US/Central'}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
Periods are converted to timestamps before serialization, and so have the
same behavior of being converted to UTC. In addition, periods will contain
an additional field freq with the period’s frequency, e.g. 'A-DEC'.
In [298]: s_per = pd.Series(1, index=pd.period_range("2016", freq="A-DEC", periods=4))
In [299]: build_table_schema(s_per)
Out[299]:
{'fields': [{'name': 'index', 'type': 'datetime', 'freq': 'A-DEC'},
{'name': 'values', 'type': 'integer'}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
Categoricals use the any type and an enum constraint listing
the set of possible values. Additionally, an ordered field is included:
In [300]: s_cat = pd.Series(pd.Categorical(["a", "b", "a"]))
In [301]: build_table_schema(s_cat)
Out[301]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values',
'type': 'any',
'constraints': {'enum': ['a', 'b']},
'ordered': False}],
'primaryKey': ['index'],
'pandas_version': '1.4.0'}
A primaryKey field, containing an array of labels, is included
if the index is unique:
In [302]: s_dupe = pd.Series([1, 2], index=[1, 1])
In [303]: build_table_schema(s_dupe)
Out[303]:
{'fields': [{'name': 'index', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'pandas_version': '1.4.0'}
The primaryKey behavior is the same with MultiIndexes, but in this
case the primaryKey is an array:
In [304]: s_multi = pd.Series(1, index=pd.MultiIndex.from_product([("a", "b"), (0, 1)]))
In [305]: build_table_schema(s_multi)
Out[305]:
{'fields': [{'name': 'level_0', 'type': 'string'},
{'name': 'level_1', 'type': 'integer'},
{'name': 'values', 'type': 'integer'}],
'primaryKey': FrozenList(['level_0', 'level_1']),
'pandas_version': '1.4.0'}
The default naming roughly follows these rules:
For Series, the object.name is used. If that is None, then the
name is values
For DataFrames, the stringified version of the column name is used
For Index (not MultiIndex), index.name is used, with a
fallback to index if that is None.
For MultiIndex, mi.names is used. If any level has no name,
then level_<i> is used.
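For example, a short sketch of the Series naming rule (not executed here):
# the values field is named "sales"; with name=None it would fall back to "values"
build_table_schema(pd.Series([1, 2], name="sales"))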
read_json also accepts orient='table' as an argument. This allows for
the preservation of metadata such as dtypes and index names in a
round-trippable manner.
In [306]: df = pd.DataFrame(
.....: {
.....: "foo": [1, 2, 3, 4],
.....: "bar": ["a", "b", "c", "d"],
.....: "baz": pd.date_range("2018-01-01", freq="d", periods=4),
.....: "qux": pd.Categorical(["a", "b", "c", "c"]),
.....: },
.....: index=pd.Index(range(4), name="idx"),
.....: )
.....:
In [307]: df
Out[307]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
In [308]: df.dtypes
Out[308]:
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object
In [309]: df.to_json("test.json", orient="table")
In [310]: new_df = pd.read_json("test.json", orient="table")
In [311]: new_df
Out[311]:
foo bar baz qux
idx
0 1 a 2018-01-01 a
1 2 b 2018-01-02 b
2 3 c 2018-01-03 c
3 4 d 2018-01-04 c
In [312]: new_df.dtypes
Out[312]:
foo int64
bar object
baz datetime64[ns]
qux category
dtype: object
Please note that the literal string ‘index’ as the name of an Index
is not round-trippable, nor are any names beginning with 'level_' within a
MultiIndex. These are used by default in DataFrame.to_json() to
indicate missing values and the subsequent read cannot distinguish the intent.
In [313]: df.index.name = "index"
In [314]: df.to_json("test.json", orient="table")
In [315]: new_df = pd.read_json("test.json", orient="table")
In [316]: print(new_df.index.name)
None
When using orient='table' along with user-defined ExtensionArray,
the generated schema will contain an additional extDtype key in the respective
fields element. This extra key is not standard but does enable JSON roundtrips
for extension types (e.g. read_json(df.to_json(orient="table"), orient="table")).
The extDtype key carries the name of the extension; if you have properly registered
the ExtensionDtype, pandas will use that name to perform a lookup into the registry
and re-convert the serialized data into your custom dtype.
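As a hedged sketch of the idea, assuming the same mechanism applies to pandas’ built-in (and already registered) nullable Int64 extension dtype (not executed here):
df_ext = pd.DataFrame({"x": pd.array([1, 2, None], dtype="Int64")})
js = df_ext.to_json(orient="table")
# the schema's "x" field is expected to carry an extDtype entry naming the dtype,
# so reading back should restore the extension dtype (assumption, not verified here)
pd.read_json(js, orient="table").dtypes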
HTML#
Reading HTML content#
Warning
We highly encourage you to read the HTML Table Parsing gotchas
below regarding the issues surrounding the BeautifulSoup4/html5lib/lxml parsers.
The top-level read_html() function can accept an HTML
string/file/URL and will parse HTML tables into a list of pandas DataFrames.
Let’s look at a few examples.
Note
read_html returns a list of DataFrame objects, even if there is
only a single table contained in the HTML content.
Read a URL with no options:
In [320]: url = "https://www.fdic.gov/resources/resolutions/bank-failures/failed-bank-list"
In [321]: pd.read_html(url)
Out[321]:
[ Bank NameBank CityCity StateSt ... Acquiring InstitutionAI Closing DateClosing FundFund
0 Almena State Bank Almena KS ... Equity Bank October 23, 2020 10538
1 First City Bank of Florida Fort Walton Beach FL ... United Fidelity Bank, fsb October 16, 2020 10537
2 The First State Bank Barboursville WV ... MVB Bank, Inc. April 3, 2020 10536
3 Ericson State Bank Ericson NE ... Farmers and Merchants Bank February 14, 2020 10535
4 City National Bank of New Jersey Newark NJ ... Industrial Bank November 1, 2019 10534
.. ... ... ... ... ... ... ...
558 Superior Bank, FSB Hinsdale IL ... Superior Federal, FSB July 27, 2001 6004
559 Malta National Bank Malta OH ... North Valley Bank May 3, 2001 4648
560 First Alliance Bank & Trust Co. Manchester NH ... Southern New Hampshire Bank & Trust February 2, 2001 4647
561 National State Bank of Metropolis Metropolis IL ... Banterra Bank of Marion December 14, 2000 4646
562 Bank of Honolulu Honolulu HI ... Bank of the Orient October 13, 2000 4645
[563 rows x 7 columns]]
Note
The data from the above URL changes every Monday so the resulting data above may be slightly different.
Read in the content of the file from the above URL and pass it to read_html
as a string:
In [317]: html_str = """
.....: <table>
.....: <tr>
.....: <th>A</th>
.....: <th colspan="1">B</th>
.....: <th rowspan="1">C</th>
.....: </tr>
.....: <tr>
.....: <td>a</td>
.....: <td>b</td>
.....: <td>c</td>
.....: </tr>
.....: </table>
.....: """
.....:
In [318]: with open("tmp.html", "w") as f:
.....: f.write(html_str)
.....:
In [319]: df = pd.read_html("tmp.html")
In [320]: df[0]
Out[320]:
A B C
0 a b c
You can even pass in an instance of StringIO if you so desire:
In [321]: dfs = pd.read_html(StringIO(html_str))
In [322]: dfs[0]
Out[322]:
A B C
0 a b c
Note
The following examples are not run by the IPython evaluator due to the fact
that having so many network-accessing functions slows down the documentation
build. If you spot an error or an example that doesn’t run, please do not
hesitate to report it over on pandas GitHub issues page.
Read a URL and match a table that contains specific text:
match = "Metcalf Bank"
df_list = pd.read_html(url, match=match)
Specify a header row (by default <th> or <td> elements located within a
<thead> are used to form the column index, if multiple rows are contained within
<thead> then a MultiIndex is created); if specified, the header row is taken
from the data minus the parsed header elements (<th> elements).
dfs = pd.read_html(url, header=0)
Specify an index column:
dfs = pd.read_html(url, index_col=0)
Specify a number of rows to skip:
dfs = pd.read_html(url, skiprows=0)
Specify a number of rows to skip using a list (range works
as well):
dfs = pd.read_html(url, skiprows=range(2))
Specify an HTML attribute:
dfs1 = pd.read_html(url, attrs={"id": "table"})
dfs2 = pd.read_html(url, attrs={"class": "sortable"})
print(np.array_equal(dfs1[0], dfs2[0])) # Should be True
Specify values that should be converted to NaN:
dfs = pd.read_html(url, na_values=["No Acquirer"])
Specify whether to keep the default set of NaN values:
dfs = pd.read_html(url, keep_default_na=False)
Specify converters for columns. This is useful for numerical text data that has
leading zeros. By default columns that are numerical are cast to numeric
types and the leading zeros are lost. To avoid this, we can convert these
columns to strings.
url_mcc = "https://en.wikipedia.org/wiki/Mobile_country_code"
dfs = pd.read_html(
url_mcc,
match="Telekom Albania",
header=0,
converters={"MNC": str},
)
Use some combination of the above:
dfs = pd.read_html(url, match="Metcalf Bank", index_col=0)
Read in pandas to_html output (with some loss of floating point precision):
df = pd.DataFrame(np.random.randn(2, 2))
s = df.to_html(float_format="{0:.40g}".format)
dfin = pd.read_html(s, index_col=0)
The lxml backend will raise an error on a failed parse if that is the only
parser you provide. If you only have a single parser you can provide just a
string, but it is considered good practice to pass a list with one string if,
for example, the function expects a sequence of strings. You may use:
dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor=["lxml"])
Or you could pass flavor='lxml' without a list:
dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor="lxml")
However, if you have bs4 and html5lib installed and pass None or ['lxml',
'bs4'] then the parse will most likely succeed. Note that as soon as a parse
succeeds, the function will return.
dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor=["lxml", "bs4"])
Links can be extracted from cells along with the text using extract_links="all".
In [323]: html_table = """
.....: <table>
.....: <tr>
.....: <th>GitHub</th>
.....: </tr>
.....: <tr>
.....: <td><a href="https://github.com/pandas-dev/pandas">pandas</a></td>
.....: </tr>
.....: </table>
.....: """
.....:
In [324]: df = pd.read_html(
.....: html_table,
.....: extract_links="all"
.....: )[0]
.....:
In [325]: df
Out[325]:
(GitHub, None)
0 (pandas, https://github.com/pandas-dev/pandas)
In [326]: df[("GitHub", None)]
Out[326]:
0 (pandas, https://github.com/pandas-dev/pandas)
Name: (GitHub, None), dtype: object
In [327]: df[("GitHub", None)].str[1]
Out[327]:
0 https://github.com/pandas-dev/pandas
Name: (GitHub, None), dtype: object
New in version 1.5.0.
Writing to HTML files#
DataFrame objects have an instance method to_html which renders the
contents of the DataFrame as an HTML table. The function arguments are as
in the method to_string described above.
Note
Not all of the possible options for DataFrame.to_html are shown here for
brevity’s sake. See to_html() for the
full set of options.
Note
In an environment that supports HTML rendering, such as a Jupyter Notebook, display(HTML(...))
will render the raw HTML into the environment.
In [328]: from IPython.display import display, HTML
In [329]: df = pd.DataFrame(np.random.randn(2, 2))
In [330]: df
Out[330]:
0 1
0 0.070319 1.773907
1 0.253908 0.414581
In [331]: html = df.to_html()
In [332]: print(html) # raw html
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.070319</td>
<td>1.773907</td>
</tr>
<tr>
<th>1</th>
<td>0.253908</td>
<td>0.414581</td>
</tr>
</tbody>
</table>
In [333]: display(HTML(html))
<IPython.core.display.HTML object>
The columns argument will limit the columns shown:
In [334]: html = df.to_html(columns=[0])
In [335]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.070319</td>
</tr>
<tr>
<th>1</th>
<td>0.253908</td>
</tr>
</tbody>
</table>
In [336]: display(HTML(html))
<IPython.core.display.HTML object>
float_format takes a Python callable to control the precision of floating
point values:
In [337]: html = df.to_html(float_format="{0:.10f}".format)
In [338]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.0703192665</td>
<td>1.7739074228</td>
</tr>
<tr>
<th>1</th>
<td>0.2539083433</td>
<td>0.4145805920</td>
</tr>
</tbody>
</table>
In [339]: display(HTML(html))
<IPython.core.display.HTML object>
bold_rows will make the row labels bold by default, but you can turn that
off:
In [340]: html = df.to_html(bold_rows=False)
In [341]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0.070319</td>
<td>1.773907</td>
</tr>
<tr>
<td>1</td>
<td>0.253908</td>
<td>0.414581</td>
</tr>
</tbody>
</table>
In [342]: display(HTML(html))
<IPython.core.display.HTML object>
The classes argument provides the ability to give the resulting HTML
table CSS classes. Note that these classes are appended to the existing
'dataframe' class.
In [343]: print(df.to_html(classes=["awesome_table_class", "even_more_awesome_class"]))
<table border="1" class="dataframe awesome_table_class even_more_awesome_class">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.070319</td>
<td>1.773907</td>
</tr>
<tr>
<th>1</th>
<td>0.253908</td>
<td>0.414581</td>
</tr>
</tbody>
</table>
The render_links argument provides the ability to add hyperlinks to cells
that contain URLs.
In [344]: url_df = pd.DataFrame(
.....: {
.....: "name": ["Python", "pandas"],
.....: "url": ["https://www.python.org/", "https://pandas.pydata.org"],
.....: }
.....: )
.....:
In [345]: html = url_df.to_html(render_links=True)
In [346]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>name</th>
<th>url</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Python</td>
<td><a href="https://www.python.org/" target="_blank">https://www.python.org/</a></td>
</tr>
<tr>
<th>1</th>
<td>pandas</td>
<td><a href="https://pandas.pydata.org" target="_blank">https://pandas.pydata.org</a></td>
</tr>
</tbody>
</table>
In [347]: display(HTML(html))
<IPython.core.display.HTML object>
Finally, the escape argument allows you to control whether the
“<”, “>” and “&” characters are escaped in the resulting HTML (by default it is
True). So to get the HTML without escaped characters pass escape=False
In [348]: df = pd.DataFrame({"a": list("&<>"), "b": np.random.randn(3)})
Escaped:
In [349]: html = df.to_html()
In [350]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&</td>
<td>0.842321</td>
</tr>
<tr>
<th>1</th>
<td><</td>
<td>0.211337</td>
</tr>
<tr>
<th>2</th>
<td>></td>
<td>-1.055427</td>
</tr>
</tbody>
</table>
In [351]: display(HTML(html))
<IPython.core.display.HTML object>
Not escaped:
In [352]: html = df.to_html(escape=False)
In [353]: print(html)
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>&</td>
<td>0.842321</td>
</tr>
<tr>
<th>1</th>
<td><</td>
<td>0.211337</td>
</tr>
<tr>
<th>2</th>
<td>></td>
<td>-1.055427</td>
</tr>
</tbody>
</table>
In [354]: display(HTML(html))
<IPython.core.display.HTML object>
Note
Some browsers may not show a difference in the rendering of the previous two
HTML tables.
HTML Table Parsing Gotchas#
There are some versioning issues surrounding the libraries that are used to
parse HTML tables in the top-level pandas io function read_html.
Issues with lxml
Benefits
lxml is very fast.
lxml requires Cython to install correctly.
Drawbacks
lxml does not make any guarantees about the results of its parse
unless it is given strictly valid markup.
In light of the above, we have chosen to allow you, the user, to use the
lxml backend, but this backend will use html5lib if lxml
fails to parse.
It is therefore highly recommended that you install both
BeautifulSoup4 and html5lib, so that you will still get a valid
result (provided everything else is valid) even if lxml fails.
Issues with BeautifulSoup4 using lxml as a backend
The above issues hold here as well since BeautifulSoup4 is essentially
just a wrapper around a parser backend.
Issues with BeautifulSoup4 using html5lib as a backend
Benefits
html5lib is far more lenient than lxml and consequently deals
with real-life markup in a much saner way rather than just, e.g.,
dropping an element without notifying you.
html5lib generates valid HTML5 markup from invalid markup
automatically. This is extremely important for parsing HTML tables,
since it guarantees a valid document. However, that does NOT mean that
it is “correct”, since the process of fixing markup does not have a
single definition.
html5lib is pure Python and requires no additional build steps beyond
its own installation.
Drawbacks
The biggest drawback to using html5lib is that it is slow as
molasses. However consider the fact that many tables on the web are not
big enough for the parsing algorithm runtime to matter. It is more
likely that the bottleneck will be in the process of reading the raw
text from the URL over the web, i.e., IO (input-output). For very large
tables, this might not be true.
LaTeX#
New in version 1.3.0.
Currently there are no methods to read from LaTeX, only output methods.
Writing to LaTeX files#
Note
DataFrame and Styler objects currently have a to_latex method. We recommend
using the Styler.to_latex() method
over DataFrame.to_latex() due to the former’s greater flexibility with
conditional styling, and the latter’s possible future deprecation.
Review the documentation for Styler.to_latex,
which gives examples of conditional styling and explains the operation of its keyword
arguments.
For simple applications the following pattern is sufficient.
In [355]: df = pd.DataFrame([[1, 2], [3, 4]], index=["a", "b"], columns=["c", "d"])
In [356]: print(df.style.to_latex())
\begin{tabular}{lrr}
& c & d \\
a & 1 & 2 \\
b & 3 & 4 \\
\end{tabular}
To format values before output, chain the Styler.format
method.
In [357]: print(df.style.format("€ {}").to_latex())
\begin{tabular}{lrr}
& c & d \\
a & € 1 & € 2 \\
b & € 3 & € 4 \\
\end{tabular}
XML#
Reading XML#
New in version 1.3.0.
The top-level read_xml() function can accept an XML
string/file/URL and will parse nodes and attributes into a pandas DataFrame.
Note
Since there is no standard XML structure and design types can vary in
many ways, read_xml works best with flatter, shallower versions. If
an XML document is deeply nested, use the stylesheet feature to
transform the XML into a flatter version.
Let’s look at a few examples.
Read an XML string:
In [358]: xml = """<?xml version="1.0" encoding="UTF-8"?>
.....: <bookstore>
.....: <book category="cooking">
.....: <title lang="en">Everyday Italian</title>
.....: <author>Giada De Laurentiis</author>
.....: <year>2005</year>
.....: <price>30.00</price>
.....: </book>
.....: <book category="children">
.....: <title lang="en">Harry Potter</title>
.....: <author>J K. Rowling</author>
.....: <year>2005</year>
.....: <price>29.99</price>
.....: </book>
.....: <book category="web">
.....: <title lang="en">Learning XML</title>
.....: <author>Erik T. Ray</author>
.....: <year>2003</year>
.....: <price>39.95</price>
.....: </book>
.....: </bookstore>"""
.....:
In [359]: df = pd.read_xml(xml)
In [360]: df
Out[360]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
Read a URL with no options:
In [361]: df = pd.read_xml("https://www.w3schools.com/xml/books.xml")
In [362]: df
Out[362]:
category title author year price cover
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00 None
1 children Harry Potter J K. Rowling 2005 29.99 None
2 web XQuery Kick Start Vaidyanathan Nagarajan 2003 49.99 None
3 web Learning XML Erik T. Ray 2003 39.95 paperback
Read in the content of the “books.xml” file and pass it to read_xml
as a string:
In [363]: file_path = "books.xml"
In [364]: with open(file_path, "w") as f:
.....: f.write(xml)
.....:
In [365]: with open(file_path, "r") as f:
.....: df = pd.read_xml(f.read())
.....:
In [366]: df
Out[366]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
Read in the content of the “books.xml” as instance of StringIO or
BytesIO and pass it to read_xml:
In [367]: with open(file_path, "r") as f:
.....: sio = StringIO(f.read())
.....:
In [368]: df = pd.read_xml(sio)
In [369]: df
Out[369]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
In [370]: with open(file_path, "rb") as f:
.....: bio = BytesIO(f.read())
.....:
In [371]: df = pd.read_xml(bio)
In [372]: df
Out[372]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
2 web Learning XML Erik T. Ray 2003 39.95
Even read XML from AWS S3 buckets such as NIH NCBI PMC Article Datasets providing
Biomedical and Life Science Journals:
In [373]: df = pd.read_xml(
.....: "s3://pmc-oa-opendata/oa_comm/xml/all/PMC1236943.xml",
.....: xpath=".//journal-meta",
.....: )
.....:
In [374]: df
Out[374]:
journal-id journal-title issn publisher
0 Cardiovasc Ultrasound Cardiovascular Ultrasound 1476-7120 NaN
With lxml as the default parser, you access the full-featured XML library
that extends Python’s ElementTree API. One powerful tool is the ability to query
nodes selectively or conditionally with more expressive XPath:
In [375]: df = pd.read_xml(file_path, xpath="//book[year=2005]")
In [376]: df
Out[376]:
category title author year price
0 cooking Everyday Italian Giada De Laurentiis 2005 30.00
1 children Harry Potter J K. Rowling 2005 29.99
Specify only elements or only attributes to parse:
In [377]: df = pd.read_xml(file_path, elems_only=True)
In [378]: df
Out[378]:
title author year price
0 Everyday Italian Giada De Laurentiis 2005 30.00
1 Harry Potter J K. Rowling 2005 29.99
2 Learning XML Erik T. Ray 2003 39.95
In [379]: df = pd.read_xml(file_path, attrs_only=True)
In [380]: df
Out[380]:
category
0 cooking
1 children
2 web
XML documents can have namespaces with prefixes and default namespaces without
prefixes, both of which are denoted with the special attribute xmlns. In order
to parse by node under a namespace context, xpath must reference a prefix.
For example, the XML below contains a namespace with the prefix doc and the URI
https://example.com. In order to parse the doc:row nodes,
namespaces must be used.
In [381]: xml = """<?xml version='1.0' encoding='utf-8'?>
.....: <doc:data xmlns:doc="https://example.com">
.....: <doc:row>
.....: <doc:shape>square</doc:shape>
.....: <doc:degrees>360</doc:degrees>
.....: <doc:sides>4.0</doc:sides>
.....: </doc:row>
.....: <doc:row>
.....: <doc:shape>circle</doc:shape>
.....: <doc:degrees>360</doc:degrees>
.....: <doc:sides/>
.....: </doc:row>
.....: <doc:row>
.....: <doc:shape>triangle</doc:shape>
.....: <doc:degrees>180</doc:degrees>
.....: <doc:sides>3.0</doc:sides>
.....: </doc:row>
.....: </doc:data>"""
.....:
In [382]: df = pd.read_xml(xml,
.....: xpath="//doc:row",
.....: namespaces={"doc": "https://example.com"})
.....:
In [383]: df
Out[383]:
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0
Similarly, an XML document can have a default namespace without a prefix. Failing
to assign a temporary prefix will return no nodes and raise a ValueError,
but assigning any temporary name to the correct URI allows parsing by nodes.
In [384]: xml = """<?xml version='1.0' encoding='utf-8'?>
.....: <data xmlns="https://example.com">
.....: <row>
.....: <shape>square</shape>
.....: <degrees>360</degrees>
.....: <sides>4.0</sides>
.....: </row>
.....: <row>
.....: <shape>circle</shape>
.....: <degrees>360</degrees>
.....: <sides/>
.....: </row>
.....: <row>
.....: <shape>triangle</shape>
.....: <degrees>180</degrees>
.....: <sides>3.0</sides>
.....: </row>
.....: </data>"""
.....:
In [385]: df = pd.read_xml(xml,
.....: xpath="//pandas:row",
.....: namespaces={"pandas": "https://example.com"})
.....:
In [386]: df
Out[386]:
shape degrees sides
0 square 360 4.0
1 circle 360 NaN
2 triangle 180 3.0
However, if the XPath does not reference node names, such as the default /*, then
namespaces is not required.
With lxml as the parser, you can flatten nested XML documents with an XSLT
script, which can also be a string/file/URL. As background, XSLT is
a special-purpose language, written in a special XML file, that can transform
original XML documents into other XML, HTML, or even text (CSV, JSON, etc.)
using an XSLT processor.
For example, consider this somewhat nested structure of Chicago “L” Rides
where station and rides elements encapsulate data in their own sections.
With the XSLT below, lxml can transform the original nested document into a flatter
output (shown below for demonstration) that is easier to parse into a DataFrame:
In [387]: xml = """<?xml version='1.0' encoding='utf-8'?>
.....: <response>
.....: <row>
.....: <station id="40850" name="Library"/>
.....: <month>2020-09-01T00:00:00</month>
.....: <rides>
.....: <avg_weekday_rides>864.2</avg_weekday_rides>
.....: <avg_saturday_rides>534</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>417.2</avg_sunday_holiday_rides>
.....: </rides>
.....: </row>
.....: <row>
.....: <station id="41700" name="Washington/Wabash"/>
.....: <month>2020-09-01T00:00:00</month>
.....: <rides>
.....: <avg_weekday_rides>2707.4</avg_weekday_rides>
.....: <avg_saturday_rides>1909.8</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1438.6</avg_sunday_holiday_rides>
.....: </rides>
.....: </row>
.....: <row>
.....: <station id="40380" name="Clark/Lake"/>
.....: <month>2020-09-01T00:00:00</month>
.....: <rides>
.....: <avg_weekday_rides>2949.6</avg_weekday_rides>
.....: <avg_saturday_rides>1657</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1453.8</avg_sunday_holiday_rides>
.....: </rides>
.....: </row>
.....: </response>"""
.....:
In [388]: xsl = """<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
.....: <xsl:output method="xml" omit-xml-declaration="no" indent="yes"/>
.....: <xsl:strip-space elements="*"/>
.....: <xsl:template match="/response">
.....: <xsl:copy>
.....: <xsl:apply-templates select="row"/>
.....: </xsl:copy>
.....: </xsl:template>
.....: <xsl:template match="row">
.....: <xsl:copy>
.....: <station_id><xsl:value-of select="station/@id"/></station_id>
.....: <station_name><xsl:value-of select="station/@name"/></station_name>
.....: <xsl:copy-of select="month|rides/*"/>
.....: </xsl:copy>
.....: </xsl:template>
.....: </xsl:stylesheet>"""
.....:
In [389]: output = """<?xml version='1.0' encoding='utf-8'?>
.....: <response>
.....: <row>
.....: <station_id>40850</station_id>
.....: <station_name>Library</station_name>
.....: <month>2020-09-01T00:00:00</month>
.....: <avg_weekday_rides>864.2</avg_weekday_rides>
.....: <avg_saturday_rides>534</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>417.2</avg_sunday_holiday_rides>
.....: </row>
.....: <row>
.....: <station_id>41700</station_id>
.....: <station_name>Washington/Wabash</station_name>
.....: <month>2020-09-01T00:00:00</month>
.....: <avg_weekday_rides>2707.4</avg_weekday_rides>
.....: <avg_saturday_rides>1909.8</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1438.6</avg_sunday_holiday_rides>
.....: </row>
.....: <row>
.....: <station_id>40380</station_id>
.....: <station_name>Clark/Lake</station_name>
.....: <month>2020-09-01T00:00:00</month>
.....: <avg_weekday_rides>2949.6</avg_weekday_rides>
.....: <avg_saturday_rides>1657</avg_saturday_rides>
.....: <avg_sunday_holiday_rides>1453.8</avg_sunday_holiday_rides>
.....: </row>
.....: </response>"""
.....:
In [390]: df = pd.read_xml(xml, stylesheet=xsl)
In [391]: df
Out[391]:
station_id station_name ... avg_saturday_rides avg_sunday_holiday_rides
0 40850 Library ... 534.0 417.2
1 41700 Washington/Wabash ... 1909.8 1438.6
2 40380 Clark/Lake ... 1657.0 1453.8
[3 rows x 6 columns]
For very large XML files that can range from hundreds of megabytes to gigabytes, pandas.read_xml()
supports parsing such sizeable files using lxml’s iterparse and etree’s iterparse,
which are memory-efficient methods to iterate through an XML tree and extract specific elements
and attributes without holding the entire tree in memory.
New in version 1.5.0.
To use this feature, you must pass a physical XML file path into read_xml and use the iterparse argument.
Files should not be compressed or point to online sources but should be stored on local disk. Also, iterparse should be
a dictionary where the key is the repeating node in the document (which becomes the rows) and the value is a list of
any elements or attributes that are descendants (i.e., child, grandchild) of the repeating node. Since XPath is not
used in this method, descendants do not need to share the same relationship with one another. Below is an example
of reading in Wikipedia’s very large (12 GB+) latest article data dump.
In [1]: df = pd.read_xml(
... "/path/to/downloaded/enwikisource-latest-pages-articles.xml",
... iterparse = {"page": ["title", "ns", "id"]}
... )
... df
Out[2]:
title ns id
0 Gettysburg Address 0 21450
1 Main Page 0 42950
2 Declaration by United Nations 0 8435
3 Constitution of the United States of America 0 8435
4 Declaration of Independence (Israel) 0 17858
... ... ... ...
3578760 Page:Black cat 1897 07 v2 n10.pdf/17 104 219649
3578761 Page:Black cat 1897 07 v2 n10.pdf/43 104 219649
3578762 Page:Black cat 1897 07 v2 n10.pdf/44 104 219649
3578763 The History of Tom Jones, a Foundling/Book IX 0 12084291
3578764 Page:Shakespeare of Stratford (1926) Yale.djvu/91 104 21450
[3578765 rows x 3 columns]
Writing XML#
New in version 1.3.0.
DataFrame objects have an instance method to_xml which renders the
contents of the DataFrame as an XML document.
Note
This method does not support special properties of XML including DTD,
CData, XSD schemas, processing instructions, comments, and others.
Only namespaces at the root level are supported. However, stylesheet
allows design changes after initial output.
Let’s look at a few examples.
Write an XML without options:
In [392]: geom_df = pd.DataFrame(
.....: {
.....: "shape": ["square", "circle", "triangle"],
.....: "degrees": [360, 360, 180],
.....: "sides": [4, np.nan, 3],
.....: }
.....: )
.....:
In [393]: print(geom_df.to_xml())
<?xml version='1.0' encoding='utf-8'?>
<data>
<row>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</row>
<row>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
Write an XML with new root and row name:
In [394]: print(geom_df.to_xml(root_name="geometry", row_name="objects"))
<?xml version='1.0' encoding='utf-8'?>
<geometry>
<objects>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</objects>
<objects>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</objects>
<objects>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</objects>
</geometry>
Write an attribute-centric XML:
In [395]: print(geom_df.to_xml(attr_cols=geom_df.columns.tolist()))
<?xml version='1.0' encoding='utf-8'?>
<data>
<row index="0" shape="square" degrees="360" sides="4.0"/>
<row index="1" shape="circle" degrees="360"/>
<row index="2" shape="triangle" degrees="180" sides="3.0"/>
</data>
Write a mix of elements and attributes:
In [396]: print(
.....: geom_df.to_xml(
.....: index=False,
.....: attr_cols=['shape'],
.....: elem_cols=['degrees', 'sides'])
.....: )
.....:
<?xml version='1.0' encoding='utf-8'?>
<data>
<row shape="square">
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row shape="circle">
<degrees>360</degrees>
<sides/>
</row>
<row shape="triangle">
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
Any DataFrames with hierarchical columns will be flattened for XML element names
with levels delimited by underscores:
In [397]: ext_geom_df = pd.DataFrame(
.....: {
.....: "type": ["polygon", "other", "polygon"],
.....: "shape": ["square", "circle", "triangle"],
.....: "degrees": [360, 360, 180],
.....: "sides": [4, np.nan, 3],
.....: }
.....: )
.....:
In [398]: pvt_df = ext_geom_df.pivot_table(index='shape',
.....: columns='type',
.....: values=['degrees', 'sides'],
.....: aggfunc='sum')
.....:
In [399]: pvt_df
Out[399]:
degrees sides
type other polygon other polygon
shape
circle 360.0 NaN 0.0 NaN
square NaN 360.0 NaN 4.0
triangle NaN 180.0 NaN 3.0
In [400]: print(pvt_df.to_xml())
<?xml version='1.0' encoding='utf-8'?>
<data>
<row>
<shape>circle</shape>
<degrees_other>360.0</degrees_other>
<degrees_polygon/>
<sides_other>0.0</sides_other>
<sides_polygon/>
</row>
<row>
<shape>square</shape>
<degrees_other/>
<degrees_polygon>360.0</degrees_polygon>
<sides_other/>
<sides_polygon>4.0</sides_polygon>
</row>
<row>
<shape>triangle</shape>
<degrees_other/>
<degrees_polygon>180.0</degrees_polygon>
<sides_other/>
<sides_polygon>3.0</sides_polygon>
</row>
</data>
Write an XML with default namespace:
In [401]: print(geom_df.to_xml(namespaces={"": "https://example.com"}))
<?xml version='1.0' encoding='utf-8'?>
<data xmlns="https://example.com">
<row>
<index>0</index>
<shape>square</shape>
<degrees>360</degrees>
<sides>4.0</sides>
</row>
<row>
<index>1</index>
<shape>circle</shape>
<degrees>360</degrees>
<sides/>
</row>
<row>
<index>2</index>
<shape>triangle</shape>
<degrees>180</degrees>
<sides>3.0</sides>
</row>
</data>
Write an XML with namespace prefix:
In [402]: print(
.....: geom_df.to_xml(namespaces={"doc": "https://example.com"},
.....: prefix="doc")
.....: )
.....:
<?xml version='1.0' encoding='utf-8'?>
<doc:data xmlns:doc="https://example.com">
<doc:row>
<doc:index>0</doc:index>
<doc:shape>square</doc:shape>
<doc:degrees>360</doc:degrees>
<doc:sides>4.0</doc:sides>
</doc:row>
<doc:row>
<doc:index>1</doc:index>
<doc:shape>circle</doc:shape>
<doc:degrees>360</doc:degrees>
<doc:sides/>
</doc:row>
<doc:row>
<doc:index>2</doc:index>
<doc:shape>triangle</doc:shape>
<doc:degrees>180</doc:degrees>
<doc:sides>3.0</doc:sides>
</doc:row>
</doc:data>
Write an XML without declaration or pretty print:
In [403]: print(
.....: geom_df.to_xml(xml_declaration=False,
.....: pretty_print=False)
.....: )
.....:
<data><row><index>0</index><shape>square</shape><degrees>360</degrees><sides>4.0</sides></row><row><index>1</index><shape>circle</shape><degrees>360</degrees><sides/></row><row><index>2</index><shape>triangle</shape><degrees>180</degrees><sides>3.0</sides></row></data>
Write an XML and transform with stylesheet:
In [404]: xsl = """<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
.....: <xsl:output method="xml" omit-xml-declaration="no" indent="yes"/>
.....: <xsl:strip-space elements="*"/>
.....: <xsl:template match="/data">
.....: <geometry>
.....: <xsl:apply-templates select="row"/>
.....: </geometry>
.....: </xsl:template>
.....: <xsl:template match="row">
.....: <object index="{index}">
.....: <xsl:if test="shape!='circle'">
.....: <xsl:attribute name="type">polygon</xsl:attribute>
.....: </xsl:if>
.....: <xsl:copy-of select="shape"/>
.....: <property>
.....: <xsl:copy-of select="degrees|sides"/>
.....: </property>
.....: </object>
.....: </xsl:template>
.....: </xsl:stylesheet>"""
.....:
In [405]: print(geom_df.to_xml(stylesheet=xsl))
<?xml version="1.0"?>
<geometry>
<object index="0" type="polygon">
<shape>square</shape>
<property>
<degrees>360</degrees>
<sides>4.0</sides>
</property>
</object>
<object index="1">
<shape>circle</shape>
<property>
<degrees>360</degrees>
<sides/>
</property>
</object>
<object index="2" type="polygon">
<shape>triangle</shape>
<property>
<degrees>180</degrees>
<sides>3.0</sides>
</property>
</object>
</geometry>
XML Final Notes#
All XML documents adhere to W3C specifications. Both etree and lxml
parsers will fail to parse any markup document that is not well-formed or
does not follow XML syntax rules. Do be aware that HTML is not an XML document unless it
follows the XHTML spec. However, other popular markup types including KML, XAML,
RSS, MusicML, and MathML are compliant XML schemas.
For the above reasons, if your application builds XML prior to pandas operations,
use appropriate DOM libraries like etree and lxml to build the necessary
document rather than string concatenation or regex adjustments. Always remember
that XML is a special text file with markup rules.
With very large XML files (several hundred MBs to GBs), XPath and XSLT
can become memory-intensive operations. Be sure to have enough available
RAM for reading and writing to large XML files (roughly about 5 times the
size of text).
Because XSLT is a programming language, use it with caution since such scripts
can pose a security risk in your environment and can run large or infinite
recursive operations. Always test scripts on small fragments before full run.
The etree parser supports all functionality of both read_xml and
to_xml except for complex XPath and any XSLT. Though limited in features,
etree is still a reliable and capable parser and tree builder. Its
performance may trail lxml to a certain degree for larger files but is
relatively unnoticeable on small to medium sized files.
Excel files#
The read_excel() method can read Excel 2007+ (.xlsx) files
using the openpyxl Python module. Excel 2003 (.xls) files
can be read using xlrd. Binary Excel (.xlsb)
files can be read using pyxlsb.
The to_excel() instance method is used for
saving a DataFrame to Excel. Generally the semantics are
similar to working with csv data.
See the cookbook for some advanced strategies.
Warning
The xlwt package for writing old-style .xls
excel files is no longer maintained.
The xlrd package is now only for reading
old-style .xls files.
Before pandas 1.3.0, the default argument engine=None to read_excel()
would result in using the xlrd engine in many cases, including new
Excel 2007+ (.xlsx) files. pandas will now default to using the
openpyxl engine.
It is strongly encouraged to install openpyxl to read Excel 2007+
(.xlsx) files.
Please do not report issues when using xlrd to read .xlsx files.
This is no longer supported; switch to using openpyxl instead.
Attempting to use the xlwt engine will raise a FutureWarning
unless the option io.excel.xls.writer is set to "xlwt".
While this option is now deprecated and will also raise a FutureWarning,
it can be globally set and the warning suppressed. Users are recommended to
write .xlsx files using the openpyxl engine instead.
Reading Excel files#
In the most basic use-case, read_excel takes a path to an Excel
file, and the sheet_name indicating which sheet to parse.
# Returns a DataFrame
pd.read_excel("path_to_file.xls", sheet_name="Sheet1")
ExcelFile class#
To facilitate working with multiple sheets from the same file, the ExcelFile
class can be used to wrap the file and can be passed into read_excel.
There will be a performance benefit for reading multiple sheets as the file is
read into memory only once.
xlsx = pd.ExcelFile("path_to_file.xls")
df = pd.read_excel(xlsx, "Sheet1")
The ExcelFile class can also be used as a context manager.
with pd.ExcelFile("path_to_file.xls") as xls:
df1 = pd.read_excel(xls, "Sheet1")
df2 = pd.read_excel(xls, "Sheet2")
The sheet_names property will generate
a list of the sheet names in the file.
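For example, to list the available sheets without parsing any of them (a minimal sketch; the file path is the same placeholder used above):
with pd.ExcelFile("path_to_file.xls") as xls:
    # sheet_names returns the worksheet names in workbook order
    print(xls.sheet_names)  # e.g. ['Sheet1', 'Sheet2']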
The primary use-case for an ExcelFile is parsing multiple sheets with
different parameters:
data = {}
# For when Sheet1's format differs from Sheet2
with pd.ExcelFile("path_to_file.xls") as xls:
data["Sheet1"] = pd.read_excel(xls, "Sheet1", index_col=None, na_values=["NA"])
data["Sheet2"] = pd.read_excel(xls, "Sheet2", index_col=1)
Note that if the same parsing parameters are used for all sheets, a list
of sheet names can simply be passed to read_excel with no loss in performance.
# using the ExcelFile class
data = {}
with pd.ExcelFile("path_to_file.xls") as xls:
data["Sheet1"] = pd.read_excel(xls, "Sheet1", index_col=None, na_values=["NA"])
data["Sheet2"] = pd.read_excel(xls, "Sheet2", index_col=None, na_values=["NA"])
# equivalent using the read_excel function
data = pd.read_excel(
"path_to_file.xls", ["Sheet1", "Sheet2"], index_col=None, na_values=["NA"]
)
ExcelFile can also be called with a xlrd.book.Book object
as a parameter. This allows the user to control how the excel file is read.
For example, sheets can be loaded on demand by calling xlrd.open_workbook()
with on_demand=True.
import xlrd
xlrd_book = xlrd.open_workbook("path_to_file.xls", on_demand=True)
with pd.ExcelFile(xlrd_book) as xls:
df1 = pd.read_excel(xls, "Sheet1")
df2 = pd.read_excel(xls, "Sheet2")
Specifying sheets#
Note
The second argument is sheet_name, not to be confused with ExcelFile.sheet_names.
Note
An ExcelFile’s attribute sheet_names provides access to a list of sheets.
The argument sheet_name allows specifying the sheet or sheets to read.
The default value for sheet_name is 0, indicating to read the first sheet.
Pass a string to refer to the name of a particular sheet in the workbook.
Pass an integer to refer to the index of a sheet. Indices follow Python
convention, beginning at 0.
Pass a list of either strings or integers, to return a dictionary of specified sheets.
Pass a None to return a dictionary of all available sheets.
# Returns a DataFrame
pd.read_excel("path_to_file.xls", "Sheet1", index_col=None, na_values=["NA"])
Using the sheet index:
# Returns a DataFrame
pd.read_excel("path_to_file.xls", 0, index_col=None, na_values=["NA"])
Using all default values:
# Returns a DataFrame
pd.read_excel("path_to_file.xls")
Using None to get all sheets:
# Returns a dictionary of DataFrames
pd.read_excel("path_to_file.xls", sheet_name=None)
Using a list to get multiple sheets:
# Returns the 1st and 4th sheet, as a dictionary of DataFrames.
pd.read_excel("path_to_file.xls", sheet_name=["Sheet1", 3])
read_excel can read more than one sheet, by setting sheet_name to either
a list of sheet names, a list of sheet positions, or None to read all sheets.
Sheets can be specified by sheet index or sheet name, using an integer or string,
respectively.
Reading a MultiIndex#
read_excel can read a MultiIndex index, by passing a list of columns to index_col
and a MultiIndex column by passing a list of rows to header. If either the index
or columns have serialized level names those will be read in as well by specifying
the rows/columns that make up the levels.
For example, to read in a MultiIndex index without names:
In [406]: df = pd.DataFrame(
.....: {"a": [1, 2, 3, 4], "b": [5, 6, 7, 8]},
.....: index=pd.MultiIndex.from_product([["a", "b"], ["c", "d"]]),
.....: )
.....:
In [407]: df.to_excel("path_to_file.xlsx")
In [408]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1])
In [409]: df
Out[409]:
a b
a c 1 5
d 2 6
b c 3 7
d 4 8
If the index has level names, they will be parsed as well, using the same
parameters.
In [410]: df.index = df.index.set_names(["lvl1", "lvl2"])
In [411]: df.to_excel("path_to_file.xlsx")
In [412]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1])
In [413]: df
Out[413]:
a b
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
If the source file has both MultiIndex index and columns, lists specifying each
should be passed to index_col and header:
In [414]: df.columns = pd.MultiIndex.from_product([["a"], ["b", "d"]], names=["c1", "c2"])
In [415]: df.to_excel("path_to_file.xlsx")
In [416]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1], header=[0, 1])
In [417]: df
Out[417]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
Missing values in columns specified in index_col will be forward filled to
allow roundtripping with to_excel for merged_cells=True. To avoid forward
filling the missing values use set_index after reading the data instead of
index_col.
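A minimal sketch of that alternative, assuming a hypothetical workbook whose first two columns, lvl1 and lvl2, hold the index labels and should not be forward filled:
# read without index_col so no forward filling occurs ...
df = pd.read_excel("path_to_file.xlsx")
# ... then build the MultiIndex explicitly
df = df.set_index(["lvl1", "lvl2"])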
Parsing specific columns#
It is often the case that users will insert columns to do temporary computations
in Excel and you may not want to read in those columns. read_excel takes
a usecols keyword to allow you to specify a subset of columns to parse.
Changed in version 1.0.0.
Passing in an integer for usecols will no longer work. Please pass in a list
of ints from 0 to usecols inclusive instead.
You can specify a comma-delimited set of Excel columns and ranges as a string:
pd.read_excel("path_to_file.xls", "Sheet1", usecols="A,C:E")
If usecols is a list of integers, then it is assumed to be the file column
indices to be parsed.
pd.read_excel("path_to_file.xls", "Sheet1", usecols=[0, 2, 3])
Element order is ignored, so usecols=[0, 1] is the same as [1, 0].
If usecols is a list of strings, it is assumed that each string corresponds
to a column name provided either by the user in names or inferred from the
document header row(s). Those strings define which columns will be parsed:
pd.read_excel("path_to_file.xls", "Sheet1", usecols=["foo", "bar"])
Element order is ignored, so usecols=['baz', 'joe'] is the same as ['joe', 'baz'].
If usecols is callable, the callable function will be evaluated against
the column names, returning names where the callable function evaluates to True.
pd.read_excel("path_to_file.xls", "Sheet1", usecols=lambda x: x.isalpha())
Parsing dates#
Datetime-like values are normally automatically converted to the appropriate
dtype when reading the excel file. But if you have a column of strings that
look like dates (but are not actually formatted as dates in excel), you can
use the parse_dates keyword to parse those strings to datetimes:
pd.read_excel("path_to_file.xls", "Sheet1", parse_dates=["date_strings"])
Cell converters#
It is possible to transform the contents of Excel cells via the converters
option. For instance, to convert a column to boolean:
pd.read_excel("path_to_file.xls", "Sheet1", converters={"MyBools": bool})
This option handles missing values and treats exceptions in the converters
as missing data. Transformations are applied cell by cell rather than to the
column as a whole, so the array dtype is not guaranteed. For instance, a
column of integers with missing values cannot be transformed to an array
with integer dtype, because NaN is strictly a float. You can manually mask
missing data to recover integer dtype:
def cfun(x):
return int(x) if x else -1
pd.read_excel("path_to_file.xls", "Sheet1", converters={"MyInts": cfun})
Dtype specifications#
As an alternative to converters, the type for an entire column can
be specified using the dtype keyword, which takes a dictionary
mapping column names to types. To interpret data with
no type inference, use the type str or object.
pd.read_excel("path_to_file.xls", dtype={"MyInts": "int64", "MyText": str})
Writing Excel files#
Writing Excel files to disk#
To write a DataFrame object to a sheet of an Excel file, you can use the
to_excel instance method. The arguments are largely the same as to_csv
described above, the first argument being the name of the excel file, and the
optional second argument the name of the sheet to which the DataFrame should be
written. For example:
df.to_excel("path_to_file.xlsx", sheet_name="Sheet1")
Files with a .xls extension will be written using xlwt and those with a
.xlsx extension will be written using xlsxwriter (if available) or
openpyxl.
The DataFrame will be written in a way that tries to mimic the REPL output.
The index_label will be placed in the second
row instead of the first. You can place it in the first row by setting the
merge_cells option in to_excel() to False:
df.to_excel("path_to_file.xlsx", index_label="label", merge_cells=False)
In order to write separate DataFrames to separate sheets in a single Excel file,
one can pass an ExcelWriter.
with pd.ExcelWriter("path_to_file.xlsx") as writer:
df1.to_excel(writer, sheet_name="Sheet1")
df2.to_excel(writer, sheet_name="Sheet2")
Writing Excel files to memory#
pandas supports writing Excel files to buffer-like objects such as StringIO or
BytesIO using ExcelWriter.
from io import BytesIO
bio = BytesIO()
# By setting the 'engine' in the ExcelWriter constructor.
writer = pd.ExcelWriter(bio, engine="xlsxwriter")
df.to_excel(writer, sheet_name="Sheet1")
# Save the workbook
writer.save()
# Seek to the beginning and read to copy the workbook to a variable in memory
bio.seek(0)
workbook = bio.read()
Note
engine is optional but recommended. Setting the engine determines
the version of workbook produced. Setting engine='xlrd' will produce an
Excel 2003-format workbook (xls). Using either 'openpyxl' or
'xlsxwriter' will produce an Excel 2007-format workbook (xlsx). If
omitted, an Excel 2007-formatted workbook is produced.
Excel writer engines#
Deprecated since version 1.2.0: As the xlwt package is no longer
maintained, the xlwt engine will be removed from a future version
of pandas. This is the only engine in pandas that supports writing to
.xls files.
pandas chooses an Excel writer via two methods:
the engine keyword argument
the filename extension (via the default specified in config options)
By default, pandas uses the XlsxWriter for .xlsx, openpyxl
for .xlsm, and xlwt for .xls files. If you have multiple
engines installed, you can set the default engine through setting the
config options io.excel.xlsx.writer and
io.excel.xls.writer. pandas will fall back on openpyxl for .xlsx
files if Xlsxwriter is not available.
To specify which writer you want to use, you can pass an engine keyword
argument to to_excel and to ExcelWriter. The built-in engines are:
openpyxl: version 2.4 or higher is required
xlsxwriter
xlwt
# By setting the 'engine' in the DataFrame 'to_excel()' methods.
df.to_excel("path_to_file.xlsx", sheet_name="Sheet1", engine="xlsxwriter")
# By setting the 'engine' in the ExcelWriter constructor.
writer = pd.ExcelWriter("path_to_file.xlsx", engine="xlsxwriter")
# Or via pandas configuration.
from pandas import options # noqa: E402
options.io.excel.xlsx.writer = "xlsxwriter"
df.to_excel("path_to_file.xlsx", sheet_name="Sheet1")
Style and formatting#
The look and feel of Excel worksheets created from pandas can be modified using the following parameters on the DataFrame’s to_excel method.
float_format : Format string for floating point numbers (default None).
freeze_panes : A tuple of two integers representing the bottommost row and rightmost column to freeze. Each of these parameters is one-based, so (1, 1) will freeze the first row and first column (default None).
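For example, a short sketch using the df from the earlier writing examples (the file name is a placeholder):
df.to_excel(
    "path_to_file.xlsx",
    sheet_name="Sheet1",
    float_format="%.2f",  # render floats with two decimal places
    freeze_panes=(1, 1),  # keep the header row and first column visible when scrolling
)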
Using the Xlsxwriter engine provides many options for controlling the
format of an Excel worksheet created with the to_excel method. Excellent examples can be found in the
Xlsxwriter documentation here: https://xlsxwriter.readthedocs.io/working_with_pandas.html
OpenDocument Spreadsheets#
New in version 0.25.
The read_excel() method can also read OpenDocument spreadsheets
using the odfpy module. The semantics and features for reading
OpenDocument spreadsheets match what can be done for Excel files using
engine='odf'.
# Returns a DataFrame
pd.read_excel("path_to_file.ods", engine="odf")
Note
Currently pandas only supports reading OpenDocument spreadsheets. Writing
is not implemented.
Binary Excel (.xlsb) files#
New in version 1.0.0.
The read_excel() method can also read binary Excel files
using the pyxlsb module. The semantics and features for reading
binary Excel files mostly match what can be done for Excel files using
engine='pyxlsb'. pyxlsb does not recognize datetime types
in files and will return floats instead.
# Returns a DataFrame
pd.read_excel("path_to_file.xlsb", engine="pyxlsb")
Note
Currently pandas only supports reading binary Excel files. Writing
is not implemented.
Clipboard#
A handy way to grab data is to use the read_clipboard() method,
which takes the contents of the clipboard buffer and passes them to the
read_csv method. For instance, you can copy the following text to the
clipboard (CTRL-C on many operating systems):
A B C
x 1 4 p
y 2 5 q
z 3 6 r
And then import the data directly to a DataFrame by calling:
>>> clipdf = pd.read_clipboard()
>>> clipdf
A B C
x 1 4 p
y 2 5 q
z 3 6 r
The to_clipboard method can be used to write the contents of a DataFrame to
the clipboard, after which you can paste the clipboard contents into other
applications (CTRL-V on many operating systems). Here we illustrate writing a
DataFrame into the clipboard and reading it back.
>>> df = pd.DataFrame(
... {"A": [1, 2, 3], "B": [4, 5, 6], "C": ["p", "q", "r"]}, index=["x", "y", "z"]
... )
>>> df
A B C
x 1 4 p
y 2 5 q
z 3 6 r
>>> df.to_clipboard()
>>> pd.read_clipboard()
A B C
x 1 4 p
y 2 5 q
z 3 6 r
We can see that we got the same content back, which we had earlier written to the clipboard.
Note
You may need to install xclip or xsel (with PyQt5, PyQt4 or qtpy) on Linux to use these methods.
Pickling#
All pandas objects are equipped with to_pickle methods which use Python’s
pickle module to save data structures to disk using the pickle format.
In [418]: df
Out[418]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
In [419]: df.to_pickle("foo.pkl")
The read_pickle function in the pandas namespace can be used to load
any pickled pandas object (or any other pickled object) from file:
In [420]: pd.read_pickle("foo.pkl")
Out[420]:
c1 a
c2 b d
lvl1 lvl2
a c 1 5
d 2 6
b c 3 7
d 4 8
Warning
Loading pickled data received from untrusted sources can be unsafe.
See: https://docs.python.org/3/library/pickle.html
Warning
read_pickle() is only guaranteed backwards compatible back to pandas version 0.20.3
Compressed pickle files#
read_pickle(), DataFrame.to_pickle() and Series.to_pickle() can read
and write compressed pickle files. The compression types of gzip, bz2, xz, zstd are supported for reading and writing.
The zip file format only supports reading and must contain only one data file
to be read.
The compression type can be an explicit parameter or be inferred from the file extension.
If ‘infer’, then use gzip, bz2, zip, xz, zstd if filename ends in '.gz', '.bz2', '.zip',
'.xz', or '.zst', respectively.
The compression parameter can also be a dict in order to pass options to the
compression protocol. It must have a 'method' key set to the name
of the compression protocol, which must be one of
{'zip', 'gzip', 'bz2', 'xz', 'zstd'}. All other key-value pairs are passed to
the underlying compression library.
In [421]: df = pd.DataFrame(
.....: {
.....: "A": np.random.randn(1000),
.....: "B": "foo",
.....: "C": pd.date_range("20130101", periods=1000, freq="s"),
.....: }
.....: )
.....:
In [422]: df
Out[422]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
Using an explicit compression type:
In [423]: df.to_pickle("data.pkl.compress", compression="gzip")
In [424]: rt = pd.read_pickle("data.pkl.compress", compression="gzip")
In [425]: rt
Out[425]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
Inferring compression type from the extension:
In [426]: df.to_pickle("data.pkl.xz", compression="infer")
In [427]: rt = pd.read_pickle("data.pkl.xz", compression="infer")
In [428]: rt
Out[428]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
The default is to ‘infer’:
In [429]: df.to_pickle("data.pkl.gz")
In [430]: rt = pd.read_pickle("data.pkl.gz")
In [431]: rt
Out[431]:
A B C
0 -0.828876 foo 2013-01-01 00:00:00
1 -0.110383 foo 2013-01-01 00:00:01
2 2.357598 foo 2013-01-01 00:00:02
3 -1.620073 foo 2013-01-01 00:00:03
4 0.440903 foo 2013-01-01 00:00:04
.. ... ... ...
995 -1.177365 foo 2013-01-01 00:16:35
996 1.236988 foo 2013-01-01 00:16:36
997 0.743946 foo 2013-01-01 00:16:37
998 -0.533097 foo 2013-01-01 00:16:38
999 -0.140850 foo 2013-01-01 00:16:39
[1000 rows x 3 columns]
In [432]: df["A"].to_pickle("s1.pkl.bz2")
In [433]: rt = pd.read_pickle("s1.pkl.bz2")
In [434]: rt
Out[434]:
0 -0.828876
1 -0.110383
2 2.357598
3 -1.620073
4 0.440903
...
995 -1.177365
996 1.236988
997 0.743946
998 -0.533097
999 -0.140850
Name: A, Length: 1000, dtype: float64
Passing options to the compression protocol in order to speed up compression:
In [435]: df.to_pickle("data.pkl.gz", compression={"method": "gzip", "compresslevel": 1})
msgpack#
pandas support for msgpack has been removed in version 1.0.0. It is
recommended to use pickle instead.
Alternatively, you can also use the Arrow IPC serialization format for on-the-wire
transmission of pandas objects. For documentation on pyarrow, see
here.
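As a rough sketch of what an Arrow IPC round trip can look like (this assumes pyarrow is installed; the file name is arbitrary):
import pyarrow as pa
import pyarrow.ipc

# convert the DataFrame to an Arrow table and write it in the IPC file format
table = pa.Table.from_pandas(df)
with pa.OSFile("data.arrow", "wb") as sink:
    with pa.ipc.new_file(sink, table.schema) as writer:
        writer.write_table(table)

# read it back into a DataFrame
with pa.memory_map("data.arrow", "r") as source:
    roundtrip = pa.ipc.open_file(source).read_all().to_pandas()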
HDF5 (PyTables)#
HDFStore is a dict-like object which reads and writes pandas using
the high performance HDF5 format using the excellent PyTables library. See the cookbook
for some advanced strategies.
Warning
pandas uses PyTables for reading and writing HDF5 files, which allows
serializing object-dtype data with pickle. Loading pickled data received from
untrusted sources can be unsafe.
See: https://docs.python.org/3/library/pickle.html for more.
In [436]: store = pd.HDFStore("store.h5")
In [437]: print(store)
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
Objects can be written to the file just like adding key-value pairs to a
dict:
In [438]: index = pd.date_range("1/1/2000", periods=8)
In [439]: s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])
In [440]: df = pd.DataFrame(np.random.randn(8, 3), index=index, columns=["A", "B", "C"])
# store.put('s', s) is an equivalent method
In [441]: store["s"] = s
In [442]: store["df"] = df
In [443]: store
Out[443]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In a current or later Python session, you can retrieve stored objects:
# store.get('df') is an equivalent method
In [444]: store["df"]
Out[444]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
# dotted (attribute) access provides get as well
In [445]: store.df
Out[445]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Deletion of the object specified by the key:
# store.remove('df') is an equivalent method
In [446]: del store["df"]
In [447]: store
Out[447]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
Closing a Store and using a context manager:
In [448]: store.close()
In [449]: store
Out[449]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
In [450]: store.is_open
Out[450]: False
# Working with, and automatically closing the store using a context manager
In [451]: with pd.HDFStore("store.h5") as store:
.....: store.keys()
.....:
Read/write API#
HDFStore supports a top-level API using read_hdf for reading and to_hdf for writing,
similar to how read_csv and to_csv work.
In [452]: df_tl = pd.DataFrame({"A": list(range(5)), "B": list(range(5))})
In [453]: df_tl.to_hdf("store_tl.h5", "table", append=True)
In [454]: pd.read_hdf("store_tl.h5", "table", where=["index>2"])
Out[454]:
A B
3 3 3
4 4 4
HDFStore will by default not drop rows that are all missing. This behavior can be changed by setting dropna=True.
In [455]: df_with_missing = pd.DataFrame(
.....: {
.....: "col1": [0, np.nan, 2],
.....: "col2": [1, np.nan, np.nan],
.....: }
.....: )
.....:
In [456]: df_with_missing
Out[456]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN
In [457]: df_with_missing.to_hdf("file.h5", "df_with_missing", format="table", mode="w")
In [458]: pd.read_hdf("file.h5", "df_with_missing")
Out[458]:
col1 col2
0 0.0 1.0
1 NaN NaN
2 2.0 NaN
In [459]: df_with_missing.to_hdf(
.....: "file.h5", "df_with_missing", format="table", mode="w", dropna=True
.....: )
.....:
In [460]: pd.read_hdf("file.h5", "df_with_missing")
Out[460]:
col1 col2
0 0.0 1.0
2 2.0 NaN
Fixed format#
The examples above show storing using put, which writes the HDF5 to PyTables in a fixed array format, called
the fixed format. These types of stores are not appendable once written (though you can simply
remove them and rewrite). Nor are they queryable; they must be
retrieved in their entirety. They also do not support dataframes with non-unique column names.
The fixed format stores offer very fast writing and slightly faster reading than table stores.
This format is specified by default when using put or to_hdf or by format='fixed' or format='f'.
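A minimal sketch of a fixed format round trip (the file name is arbitrary):
# format="fixed" is the default for put/to_hdf
df.to_hdf("store_fixed.h5", "df")
# the object must be retrieved in its entirety; no where clause is possible
pd.read_hdf("store_fixed.h5", "df")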
Warning
A fixed format will raise a TypeError if you try to retrieve using a where:
>>> pd.DataFrame(np.random.randn(10, 2)).to_hdf("test_fixed.h5", "df")
>>> pd.read_hdf("test_fixed.h5", "df", where="index>5")
TypeError: cannot pass a where specification when reading a fixed format.
this store must be selected in its entirety
Table format#
HDFStore supports another PyTables format on disk, the table
format. Conceptually a table is shaped very much like a DataFrame,
with rows and columns. A table may be appended to in the same or
other sessions. In addition, delete and query type operations are
supported. This format is specified by format='table' or format='t'
to append or put or to_hdf.
This format can also be set as an option, pd.set_option('io.hdf.default_format', 'table'), to
enable put/append/to_hdf to store in the table format by default.
In [461]: store = pd.HDFStore("store.h5")
In [462]: df1 = df[0:4]
In [463]: df2 = df[4:]
# append data (creates a table automatically)
In [464]: store.append("df", df1)
In [465]: store.append("df", df2)
In [466]: store
Out[466]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
# select the entire object
In [467]: store.select("df")
Out[467]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
# the type of stored data
In [468]: store.root.df._v_attrs.pandas_type
Out[468]: 'frame_table'
Note
You can also create a table by passing format='table' or format='t' to a put operation.
Hierarchical keys#
Keys to a store can be specified as a string. These can be in a
hierarchical path-name like format (e.g. foo/bar/bah), which will
generate a hierarchy of sub-stores (or Groups in PyTables
parlance). Keys can be specified without the leading ‘/’ and are always
absolute (e.g. ‘foo’ refers to ‘/foo’). Removal operations can remove
everything in the sub-store and below, so be careful.
In [469]: store.put("foo/bar/bah", df)
In [470]: store.append("food/orange", df)
In [471]: store.append("food/apple", df)
In [472]: store
Out[472]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
# a list of keys are returned
In [473]: store.keys()
Out[473]: ['/df', '/food/apple', '/food/orange', '/foo/bar/bah']
# remove all nodes under this level
In [474]: store.remove("food")
In [475]: store
Out[475]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
You can walk through the group hierarchy using the walk method which
will yield a tuple for each group key along with the relative keys of its contents.
In [476]: for (path, subgroups, subkeys) in store.walk():
.....: for subgroup in subgroups:
.....: print("GROUP: {}/{}".format(path, subgroup))
.....: for subkey in subkeys:
.....: key = "/".join([path, subkey])
.....: print("KEY: {}".format(key))
.....: print(store.get(key))
.....:
GROUP: /foo
KEY: /df
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
GROUP: /foo/bar
KEY: /foo/bar/bah
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Warning
Hierarchical keys cannot be retrieved as dotted (attribute) access as described above for items stored under the root node.
In [8]: store.foo.bar.bah
AttributeError: 'HDFStore' object has no attribute 'foo'
# you can directly access the actual PyTables node by using the root node
In [9]: store.root.foo.bar.bah
Out[9]:
/foo/bar/bah (Group) ''
children := ['block0_items' (Array), 'block0_values' (Array), 'axis0' (Array), 'axis1' (Array)]
Instead, use explicit string based keys:
In [477]: store["foo/bar/bah"]
Out[477]:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Storing types#
Storing mixed types in a table#
Storing mixed-dtype data is supported. Strings are stored with a
fixed width using the maximum size of the appended column. Subsequent attempts
at appending longer strings will raise a ValueError.
Passing min_itemsize={'values': size} as a parameter to append
will set a larger minimum for the string columns. Storing floats,
strings, ints, bools, and datetime64 is currently supported. For string
columns, passing nan_rep = 'nan' to append will change the default
nan representation on disk (which converts to/from np.nan); this
defaults to nan.
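Before the fuller example below, a small sketch of nan_rep with a hypothetical key and DataFrame:
# a string column with a missing value; store 'missing' instead of the default 'nan'
df_strings = pd.DataFrame({"A": ["foo", np.nan, "bar"]})
store.append("df_strings", df_strings, nan_rep="missing")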
In [478]: df_mixed = pd.DataFrame(
.....: {
.....: "A": np.random.randn(8),
.....: "B": np.random.randn(8),
.....: "C": np.array(np.random.randn(8), dtype="float32"),
.....: "string": "string",
.....: "int": 1,
.....: "bool": True,
.....: "datetime64": pd.Timestamp("20010102"),
.....: },
.....: index=list(range(8)),
.....: )
.....:
In [479]: df_mixed.loc[df_mixed.index[3:5], ["A", "B", "string", "datetime64"]] = np.nan
In [480]: store.append("df_mixed", df_mixed, min_itemsize={"values": 50})
In [481]: df_mixed1 = store.select("df_mixed")
In [482]: df_mixed1
Out[482]:
A B C string int bool datetime64
0 1.778161 -0.898283 -0.263043 string 1 True 2001-01-02
1 -0.913867 -0.218499 -0.639244 string 1 True 2001-01-02
2 -0.030004 1.408028 -0.866305 string 1 True 2001-01-02
3 NaN NaN -0.225250 NaN 1 True NaT
4 NaN NaN -0.890978 NaN 1 True NaT
5 0.081323 0.520995 -0.553839 string 1 True 2001-01-02
6 -0.268494 0.620028 -2.762875 string 1 True 2001-01-02
7 0.168016 0.159416 -1.244763 string 1 True 2001-01-02
In [483]: df_mixed1.dtypes.value_counts()
Out[483]:
float64 2
float32 1
object 1
int64 1
bool 1
datetime64[ns] 1
dtype: int64
# we have provided a minimum string column size
In [484]: store.root.df_mixed.table
Out[484]:
/df_mixed/table (Table(8,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(2,), dflt=0.0, pos=1),
"values_block_1": Float32Col(shape=(1,), dflt=0.0, pos=2),
"values_block_2": StringCol(itemsize=50, shape=(1,), dflt=b'', pos=3),
"values_block_3": Int64Col(shape=(1,), dflt=0, pos=4),
"values_block_4": BoolCol(shape=(1,), dflt=False, pos=5),
"values_block_5": Int64Col(shape=(1,), dflt=0, pos=6)}
byteorder := 'little'
chunkshape := (689,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False}
Storing MultiIndex DataFrames#
Storing MultiIndex DataFrames as tables is very similar to
storing/selecting from homogeneous index DataFrames.
In [485]: index = pd.MultiIndex(
.....: levels=[["foo", "bar", "baz", "qux"], ["one", "two", "three"]],
.....: codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
.....: names=["foo", "bar"],
.....: )
.....:
In [486]: df_mi = pd.DataFrame(np.random.randn(10, 3), index=index, columns=["A", "B", "C"])
In [487]: df_mi
Out[487]:
A B C
foo bar
foo one -1.280289 0.692545 -0.536722
two 1.005707 0.296917 0.139796
three -1.083889 0.811865 1.648435
bar one -0.164377 -0.402227 1.618922
two -1.424723 -0.023232 0.948196
baz two 0.183573 0.145277 0.308146
three -1.043530 -0.708145 1.430905
qux one -0.850136 0.813949 1.508891
two -1.556154 0.187597 1.176488
three -1.246093 -0.002726 -0.444249
In [488]: store.append("df_mi", df_mi)
In [489]: store.select("df_mi")
Out[489]:
A B C
foo bar
foo one -1.280289 0.692545 -0.536722
two 1.005707 0.296917 0.139796
three -1.083889 0.811865 1.648435
bar one -0.164377 -0.402227 1.618922
two -1.424723 -0.023232 0.948196
baz two 0.183573 0.145277 0.308146
three -1.043530 -0.708145 1.430905
qux one -0.850136 0.813949 1.508891
two -1.556154 0.187597 1.176488
three -1.246093 -0.002726 -0.444249
# the levels are automatically included as data columns
In [490]: store.select("df_mi", "foo=bar")
Out[490]:
A B C
foo bar
bar one -0.164377 -0.402227 1.618922
two -1.424723 -0.023232 0.948196
Note
The index keyword is reserved and cannot be used as a level name.
Querying#
Querying a table#
select and delete operations have an optional criterion that can
be specified to select/delete only a subset of the data. This allows one
to have a very large on-disk table and retrieve only a portion of the
data.
A query is specified using the Term class under the hood, as a boolean expression.
index and columns are supported indexers of DataFrames.
if data_columns are specified, these can be used as additional indexers.
level name in a MultiIndex, with default name level_0, level_1, … if not provided.
Valid comparison operators are:
=, ==, !=, >, >=, <, <=
Valid boolean expressions are combined with:
| : or
& : and
( and ) : for grouping
These rules are similar to how boolean expressions are used in pandas for indexing.
Note
= will be automatically expanded to the comparison operator ==
~ is the not operator, but can only be used in very limited
circumstances
If a list/tuple of expressions is passed, they will be combined via &
The following are valid expressions:
'index >= date'
"columns = ['A', 'D']"
"columns in ['A', 'D']"
'columns = A'
'columns == A'
"~(columns = ['A', 'B'])"
'index > df.index[3] & string = "bar"'
'(index > df.index[3] & index <= df.index[6]) | string = "bar"'
"ts >= Timestamp('2012-02-01')"
"major_axis>=20130101"
The indexers are on the left-hand side of the sub-expression:
columns, major_axis, ts
The right-hand side of the sub-expression (after a comparison operator) can be:
functions that will be evaluated, e.g. Timestamp('2012-02-01')
strings, e.g. "bar"
date-like, e.g. 20130101, or "20130101"
lists, e.g. "['A', 'B']"
variables that are defined in the local names space, e.g. date
Note
Passing a string to a query by interpolating it into the query
expression is not recommended. Simply assign the string of interest to a
variable and use that variable in an expression. For example, do this
string = "HolyMoly'"
store.select("df", "index == string")
instead of this
string = "HolyMoly'"
store.select('df', f'index == {string}')
The latter will not work and will raise a SyntaxError. Note that
there’s a single quote followed by a double quote in the string
variable.
If you must interpolate, use the '%r' format specifier
store.select("df", "index == %r" % string)
which will quote string.
Here are some examples:
In [491]: dfq = pd.DataFrame(
.....: np.random.randn(10, 4),
.....: columns=list("ABCD"),
.....: index=pd.date_range("20130101", periods=10),
.....: )
.....:
In [492]: store.append("dfq", dfq, format="table", data_columns=True)
Use boolean expressions, with in-line function evaluation.
In [493]: store.select("dfq", "index>pd.Timestamp('20130104') & columns=['A', 'B']")
Out[493]:
A B
2013-01-05 1.366810 1.073372
2013-01-06 2.119746 -2.628174
2013-01-07 0.337920 -0.634027
2013-01-08 1.053434 1.109090
2013-01-09 -0.772942 -0.269415
2013-01-10 0.048562 -0.285920
Use inline column reference.
In [494]: store.select("dfq", where="A>0 or C>0")
Out[494]:
A B C D
2013-01-01 0.856838 1.491776 0.001283 0.701816
2013-01-02 -1.097917 0.102588 0.661740 0.443531
2013-01-03 0.559313 -0.459055 -1.222598 -0.455304
2013-01-05 1.366810 1.073372 -0.994957 0.755314
2013-01-06 2.119746 -2.628174 -0.089460 -0.133636
2013-01-07 0.337920 -0.634027 0.421107 0.604303
2013-01-08 1.053434 1.109090 -0.367891 -0.846206
2013-01-10 0.048562 -0.285920 1.334100 0.194462
The columns keyword can be supplied to select a list of columns to be
returned; this is equivalent to passing
'columns=list_of_columns_to_filter':
In [495]: store.select("df", "columns=['A', 'B']")
Out[495]:
A B
2000-01-01 -0.398501 -0.677311
2000-01-02 -1.167564 -0.593353
2000-01-03 -0.131959 0.089012
2000-01-04 0.169405 -1.358046
2000-01-05 0.492195 0.076693
2000-01-06 -0.285283 -1.210529
2000-01-07 0.941577 -0.342447
2000-01-08 0.052607 2.093214
start and stop parameters can be specified to limit the total search
space. These are in terms of the total number of rows in a table.
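For example, a minimal sketch that limits the search space to rows 2 through 4 of the stored table:
# start/stop are row positions within the stored table (stop is exclusive)
store.select("df", start=2, stop=5)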
Note
select will raise a ValueError if the query expression has an unknown
variable reference. Usually this means that you are trying to select on a column
that is not a data_column.
select will raise a SyntaxError if the query expression is not valid.
Query timedelta64[ns]#
You can store and query using the timedelta64[ns] type. Terms can be
specified in the format: <float>(<unit>), where float may be signed (and fractional), and unit can be
D,s,ms,us,ns for the timedelta. Here’s an example:
In [496]: from datetime import timedelta
In [497]: dftd = pd.DataFrame(
.....: {
.....: "A": pd.Timestamp("20130101"),
.....: "B": [
.....: pd.Timestamp("20130101") + timedelta(days=i, seconds=10)
.....: for i in range(10)
.....: ],
.....: }
.....: )
.....:
In [498]: dftd["C"] = dftd["A"] - dftd["B"]
In [499]: dftd
Out[499]:
A B C
0 2013-01-01 2013-01-01 00:00:10 -1 days +23:59:50
1 2013-01-01 2013-01-02 00:00:10 -2 days +23:59:50
2 2013-01-01 2013-01-03 00:00:10 -3 days +23:59:50
3 2013-01-01 2013-01-04 00:00:10 -4 days +23:59:50
4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50
5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50
6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50
7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50
8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50
9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50
In [500]: store.append("dftd", dftd, data_columns=True)
In [501]: store.select("dftd", "C<'-3.5D'")
Out[501]:
A B C
4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50
5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50
6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50
7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50
8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50
9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50
Query MultiIndex#
Selecting from a MultiIndex can be achieved by using the name of the level.
In [502]: df_mi.index.names
Out[502]: FrozenList(['foo', 'bar'])
In [503]: store.select("df_mi", "foo=baz and bar=two")
Out[503]:
A B C
foo bar
baz two 0.183573 0.145277 0.308146
If the MultiIndex levels names are None, the levels are automatically made available via
the level_n keyword with n the level of the MultiIndex you want to select from.
In [504]: index = pd.MultiIndex(
.....: levels=[["foo", "bar", "baz", "qux"], ["one", "two", "three"]],
.....: codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
.....: )
.....:
In [505]: df_mi_2 = pd.DataFrame(np.random.randn(10, 3), index=index, columns=["A", "B", "C"])
In [506]: df_mi_2
Out[506]:
A B C
foo one -0.646538 1.210676 -0.315409
two 1.528366 0.376542 0.174490
three 1.247943 -0.742283 0.710400
bar one 0.434128 -1.246384 1.139595
two 1.388668 -0.413554 -0.666287
baz two 0.010150 -0.163820 -0.115305
three 0.216467 0.633720 0.473945
qux one -0.155446 1.287082 0.320201
two -1.256989 0.874920 0.765944
three 0.025557 -0.729782 -0.127439
In [507]: store.append("df_mi_2", df_mi_2)
# the levels are automatically included as data columns with keyword level_n
In [508]: store.select("df_mi_2", "level_0=foo and level_1=two")
Out[508]:
A B C
foo two 1.528366 0.376542 0.17449
Indexing#
You can create/modify an index for a table with create_table_index
after data is already in the table (after an append/put
operation). Creating a table index is highly encouraged. This will
speed up your queries a great deal when you use a select with the
indexed dimension as the where.
Note
Indexes are automagically created on the indexables
and any data columns you specify. This behavior can be turned off by passing
index=False to append.
# we have automagically already created an index (in the first section)
In [509]: i = store.root.df.table.cols.index.index
In [510]: i.optlevel, i.kind
Out[510]: (6, 'medium')
# change an index by passing new parameters
In [511]: store.create_table_index("df", optlevel=9, kind="full")
In [512]: i = store.root.df.table.cols.index.index
In [513]: i.optlevel, i.kind
Out[513]: (9, 'full')
Oftentimes when appending large amounts of data to a store, it is useful to turn off index creation for each append, then recreate at the end.
In [514]: df_1 = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
In [515]: df_2 = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
In [516]: st = pd.HDFStore("appends.h5", mode="w")
In [517]: st.append("df", df_1, data_columns=["B"], index=False)
In [518]: st.append("df", df_2, data_columns=["B"], index=False)
In [519]: st.get_storer("df").table
Out[519]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)
Then create the index when finished appending.
In [520]: st.create_table_index("df", columns=["B"], optlevel=9, kind="full")
In [521]: st.get_storer("df").table
Out[521]:
/df/table (Table(20,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2)}
byteorder := 'little'
chunkshape := (2730,)
autoindex := True
colindexes := {
"B": Index(9, fullshuffle, zlib(1)).is_csi=True}
In [522]: st.close()
See here for how to create a completely-sorted-index (CSI) on an existing store.
Query via data columns#
You can designate (and index) certain columns that you want to be able
to perform queries on (other than the indexable columns, which you can
always query). For instance, say you want to perform this common
operation on-disk and return just the frame that matches this
query. You can specify data_columns=True to force all columns to
be data_columns.
In [523]: df_dc = df.copy()
In [524]: df_dc["string"] = "foo"
In [525]: df_dc.loc[df_dc.index[4:6], "string"] = np.nan
In [526]: df_dc.loc[df_dc.index[7:9], "string"] = "bar"
In [527]: df_dc["string2"] = "cool"
In [528]: df_dc.loc[df_dc.index[1:3], ["B", "C"]] = 1.0
In [529]: df_dc
Out[529]:
A B C string string2
2000-01-01 -0.398501 -0.677311 -0.874991 foo cool
2000-01-02 -1.167564 1.000000 1.000000 foo cool
2000-01-03 -0.131959 1.000000 1.000000 foo cool
2000-01-04 0.169405 -1.358046 -0.105563 foo cool
2000-01-05 0.492195 0.076693 0.213685 NaN cool
2000-01-06 -0.285283 -1.210529 -1.408386 NaN cool
2000-01-07 0.941577 -0.342447 0.222031 foo cool
2000-01-08 0.052607 2.093214 1.064908 bar cool
# on-disk operations
In [530]: store.append("df_dc", df_dc, data_columns=["B", "C", "string", "string2"])
In [531]: store.select("df_dc", where="B > 0")
Out[531]:
A B C string string2
2000-01-02 -1.167564 1.000000 1.000000 foo cool
2000-01-03 -0.131959 1.000000 1.000000 foo cool
2000-01-05 0.492195 0.076693 0.213685 NaN cool
2000-01-08 0.052607 2.093214 1.064908 bar cool
# getting creative
In [532]: store.select("df_dc", "B > 0 & C > 0 & string == foo")
Out[532]:
A B C string string2
2000-01-02 -1.167564 1.0 1.0 foo cool
2000-01-03 -0.131959 1.0 1.0 foo cool
# this is in-memory version of this type of selection
In [533]: df_dc[(df_dc.B > 0) & (df_dc.C > 0) & (df_dc.string == "foo")]
Out[533]:
A B C string string2
2000-01-02 -1.167564 1.0 1.0 foo cool
2000-01-03 -0.131959 1.0 1.0 foo cool
# we have automagically created this index and the B/C/string/string2
# columns are stored separately as ``PyTables`` columns
In [534]: store.root.df_dc.table
Out[534]:
/df_dc/table (Table(8,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1),
"B": Float64Col(shape=(), dflt=0.0, pos=2),
"C": Float64Col(shape=(), dflt=0.0, pos=3),
"string": StringCol(itemsize=3, shape=(), dflt=b'', pos=4),
"string2": StringCol(itemsize=4, shape=(), dflt=b'', pos=5)}
byteorder := 'little'
chunkshape := (1680,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"B": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"C": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"string": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"string2": Index(6, mediumshuffle, zlib(1)).is_csi=False}
There is some performance degradation by making lots of columns into
data columns, so it is up to the user to designate these. In addition,
you cannot change data columns (nor indexables) after the first
append/put operation (Of course you can simply read in the data and
create a new table!).
Iterator#
You can pass iterator=True or chunksize=number_in_a_chunk
to select and select_as_multiple to return an iterator on the results.
The default is 50,000 rows returned in a chunk.
In [535]: for df in store.select("df", chunksize=3):
.....: print(df)
.....:
A B C
2000-01-01 -0.398501 -0.677311 -0.874991
2000-01-02 -1.167564 -0.593353 0.146262
2000-01-03 -0.131959 0.089012 0.667450
A B C
2000-01-04 0.169405 -1.358046 -0.105563
2000-01-05 0.492195 0.076693 0.213685
2000-01-06 -0.285283 -1.210529 -1.408386
A B C
2000-01-07 0.941577 -0.342447 0.222031
2000-01-08 0.052607 2.093214 1.064908
Note
You can also use the iterator with read_hdf which will open, then
automatically close the store when finished iterating.
for df in pd.read_hdf("store.h5", "df", chunksize=3):
print(df)
Note that the chunksize keyword applies to the source rows. So if you
are doing a query, the chunksize will subdivide the total rows in the table,
with the query applied to each chunk, returning an iterator on potentially unequally sized chunks.
Here is a recipe for generating a query and using it to create equal sized return
chunks.
In [536]: dfeq = pd.DataFrame({"number": np.arange(1, 11)})
In [537]: dfeq
Out[537]:
number
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 10
In [538]: store.append("dfeq", dfeq, data_columns=["number"])
In [539]: def chunks(l, n):
.....: return [l[i: i + n] for i in range(0, len(l), n)]
.....:
In [540]: evens = [2, 4, 6, 8, 10]
In [541]: coordinates = store.select_as_coordinates("dfeq", "number=evens")
In [542]: for c in chunks(coordinates, 2):
.....: print(store.select("dfeq", where=c))
.....:
number
1 2
3 4
number
5 6
7 8
number
9 10
Advanced queries#
Select a single column#
To retrieve a single indexable or data column, use the
method select_column. This will, for example, enable you to get the index
very quickly. These return a Series of the result, indexed by the row number.
These do not currently accept the where selector.
In [543]: store.select_column("df_dc", "index")
Out[543]:
0 2000-01-01
1 2000-01-02
2 2000-01-03
3 2000-01-04
4 2000-01-05
5 2000-01-06
6 2000-01-07
7 2000-01-08
Name: index, dtype: datetime64[ns]
In [544]: store.select_column("df_dc", "string")
Out[544]:
0 foo
1 foo
2 foo
3 foo
4 NaN
5 NaN
6 foo
7 bar
Name: string, dtype: object
Selecting coordinates#
Sometimes you want to get the coordinates (a.k.a. the index locations) of your query. This returns an
Int64Index of the resulting locations. These coordinates can also be passed to subsequent
where operations.
In [545]: df_coord = pd.DataFrame(
.....: np.random.randn(1000, 2), index=pd.date_range("20000101", periods=1000)
.....: )
.....:
In [546]: store.append("df_coord", df_coord)
In [547]: c = store.select_as_coordinates("df_coord", "index > 20020101")
In [548]: c
Out[548]:
Int64Index([732, 733, 734, 735, 736, 737, 738, 739, 740, 741,
...
990, 991, 992, 993, 994, 995, 996, 997, 998, 999],
dtype='int64', length=268)
In [549]: store.select("df_coord", where=c)
Out[549]:
0 1
2002-01-02 0.009035 0.921784
2002-01-03 -1.476563 -1.376375
2002-01-04 1.266731 2.173681
2002-01-05 0.147621 0.616468
2002-01-06 0.008611 2.136001
... ... ...
2002-09-22 0.781169 -0.791687
2002-09-23 -0.764810 -2.000933
2002-09-24 -0.345662 0.393915
2002-09-25 -0.116661 0.834638
2002-09-26 -1.341780 0.686366
[268 rows x 2 columns]
Selecting using a where mask#
Sometimes your query can involve creating a list of rows to select. Usually this mask would
be a resulting index from an indexing operation. This example selects the rows of
a DatetimeIndex whose month is 5.
In [550]: df_mask = pd.DataFrame(
.....: np.random.randn(1000, 2), index=pd.date_range("20000101", periods=1000)
.....: )
.....:
In [551]: store.append("df_mask", df_mask)
In [552]: c = store.select_column("df_mask", "index")
In [553]: where = c[pd.DatetimeIndex(c).month == 5].index
In [554]: store.select("df_mask", where=where)
Out[554]:
0 1
2000-05-01 -0.386742 -0.977433
2000-05-02 -0.228819 0.471671
2000-05-03 0.337307 1.840494
2000-05-04 0.050249 0.307149
2000-05-05 -0.802947 -0.946730
... ... ...
2002-05-27 1.605281 1.741415
2002-05-28 -0.804450 -0.715040
2002-05-29 -0.874851 0.037178
2002-05-30 -0.161167 -1.294944
2002-05-31 -0.258463 -0.731969
[93 rows x 2 columns]
Storer object#
If you want to inspect the stored object, retrieve it via
get_storer. You could use this programmatically to, say, get the number
of rows in an object.
In [555]: store.get_storer("df_dc").nrows
Out[555]: 8
Multiple table queries#
The methods append_to_multiple and
select_as_multiple can perform appending/selecting from
multiple tables at once. The idea is to have one table (call it the
selector table) in which you index most/all of the columns, and perform your
queries. The other table(s) are data tables with an index matching the
selector table’s index. You can then perform a very fast query
on the selector table, yet get lots of data back. This method is similar to
having a very wide table, but enables more efficient queries.
The append_to_multiple method splits a given single DataFrame
into multiple tables according to d, a dictionary that maps the
table names to a list of ‘columns’ you want in that table. If None
is used in place of a list, that table will have the remaining
unspecified columns of the given DataFrame. The argument selector
defines which table is the selector table (which you can make queries from).
The argument dropna will drop rows from the input DataFrame to ensure
tables are synchronized. This means that if a row for one of the tables
being written to is entirely np.NaN, that row will be dropped from all tables.
If dropna is False, THE USER IS RESPONSIBLE FOR SYNCHRONIZING THE TABLES.
Remember that entirely np.NaN rows are not written to the HDFStore, so if
you choose to call dropna=False, some tables may have more rows than others,
and therefore select_as_multiple may not work or it may return unexpected
results.
In [556]: df_mt = pd.DataFrame(
.....: np.random.randn(8, 6),
.....: index=pd.date_range("1/1/2000", periods=8),
.....: columns=["A", "B", "C", "D", "E", "F"],
.....: )
.....:
In [557]: df_mt["foo"] = "bar"
In [558]: df_mt.loc[df_mt.index[1], ("A", "B")] = np.nan
# you can also create the tables individually
In [559]: store.append_to_multiple(
.....: {"df1_mt": ["A", "B"], "df2_mt": None}, df_mt, selector="df1_mt"
.....: )
.....:
In [560]: store
Out[560]:
<class 'pandas.io.pytables.HDFStore'>
File path: store.h5
# individual tables were created
In [561]: store.select("df1_mt")
Out[561]:
A B
2000-01-01 0.079529 -1.459471
2000-01-02 NaN NaN
2000-01-03 -0.423113 2.314361
2000-01-04 0.756744 -0.792372
2000-01-05 -0.184971 0.170852
2000-01-06 0.678830 0.633974
2000-01-07 0.034973 0.974369
2000-01-08 -2.110103 0.243062
In [562]: store.select("df2_mt")
Out[562]:
C D E F foo
2000-01-01 -0.596306 -0.910022 -1.057072 -0.864360 bar
2000-01-02 0.477849 0.283128 -2.045700 -0.338206 bar
2000-01-03 -0.033100 -0.965461 -0.001079 -0.351689 bar
2000-01-04 -0.513555 -1.484776 -0.796280 -0.182321 bar
2000-01-05 -0.872407 -1.751515 0.934334 0.938818 bar
2000-01-06 -1.398256 1.347142 -0.029520 0.082738 bar
2000-01-07 -0.755544 0.380786 -1.634116 1.293610 bar
2000-01-08 1.453064 0.500558 -0.574475 0.694324 bar
# as a multiple
In [563]: store.select_as_multiple(
.....: ["df1_mt", "df2_mt"],
.....: where=["A>0", "B>0"],
.....: selector="df1_mt",
.....: )
.....:
Out[563]:
A B C D E F foo
2000-01-06 0.678830 0.633974 -1.398256 1.347142 -0.029520 0.082738 bar
2000-01-07 0.034973 0.974369 -0.755544 0.380786 -1.634116 1.293610 bar
Delete from a table#
You can delete from a table selectively by specifying a where. In
deleting rows, it is important to understand that PyTables deletes
rows by erasing them and then moving the following data. Thus
deleting can potentially be a very expensive operation depending on the
orientation of your data. To get optimal performance, it’s
worthwhile to have the dimension you are deleting be the first of the
indexables.
Data is ordered (on the disk) in terms of the indexables. Here’s a
simple use case. You store panel-type data, with dates in the
major_axis and ids in the minor_axis. The data is then
interleaved like this:
date_1
id_1
id_2
.
id_n
date_2
id_1
.
id_n
It should be clear that a delete operation on the major_axis will be
fairly quick, as one chunk is removed, then the following data moved. On
the other hand a delete operation on the minor_axis will be very
expensive. In this case it would almost certainly be faster to rewrite
the table using a where that selects all but the missing data.
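Here is a minimal sketch of a selective delete, assuming a table under the hypothetical key "df_dates" with a datetime index has already been appended to the store; the where syntax is the same as for select:
# drop only the rows matching the condition (the key "df_dates" is assumed)
store.remove("df_dates", where="index > '2002-01-05'")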
Warning
Please note that HDF5 DOES NOT RECLAIM SPACE in the h5 files
automatically. Thus, repeatedly deleting (or removing nodes) and adding
again, WILL TEND TO INCREASE THE FILE SIZE.
To repack and clean the file, use ptrepack.
Notes & caveats#
Compression#
PyTables allows the stored data to be compressed. This applies to
all kinds of stores, not just tables. Two parameters are used to
control compression: complevel and complib.
complevel specifies if and how hard data is to be compressed.
complevel=0 and complevel=None disable compression, and
0<complevel<10 enables compression.
complib specifies which compression library to use.
If nothing is specified the default library zlib is used. A
compression library usually optimizes for either good compression rates
or speed and the results will depend on the type of data. Which type of
compression to choose depends on your specific needs and data. The list
of supported compression libraries:
zlib: The default compression library.
A classic in terms of compression, achieves good compression
rates but is somewhat slow.
lzo: Fast
compression and decompression.
bzip2: Good compression rates.
blosc: Fast compression and
decompression.
Support for alternative blosc compressors:
blosc:blosclz This is the
default compressor for blosc
blosc:lz4:
A compact, very popular and fast compressor.
blosc:lz4hc:
A tweaked version of LZ4, produces better
compression ratios at the expense of speed.
blosc:snappy:
A popular compressor used in many places.
blosc:zlib: A classic;
somewhat slower than the previous ones, but
achieving better compression ratios.
blosc:zstd: An
extremely well balanced codec; it provides the best
compression ratios among the others above, and at
reasonably fast speed.
If complib is defined as something other than the listed libraries a
ValueError exception is issued.
Note
If the library specified with the complib option is missing on your platform,
compression defaults to zlib without further ado.
Enable compression for all objects within the file:
store_compressed = pd.HDFStore(
"store_compressed.h5", complevel=9, complib="blosc:blosclz"
)
Or on-the-fly compression (this only applies to tables) in stores where compression is not enabled:
store.append("df", df, complib="zlib", complevel=5)
ptrepack#
PyTables offers better write performance when tables are compressed after
they are written, as opposed to turning on compression at the very
beginning. You can use the supplied PyTables utility
ptrepack. In addition, ptrepack can change compression levels
after the fact.
ptrepack --chunkshape=auto --propindexes --complevel=9 --complib=blosc in.h5 out.h5
Furthermore ptrepack in.h5 out.h5 will repack the file to allow
you to reuse previously deleted space. Alternatively, one can simply
remove the file and write again, or use the copy method.
Caveats#
Warning
HDFStore is not-threadsafe for writing. The underlying
PyTables only supports concurrent reads (via threading or
processes). If you need reading and writing at the same time, you
need to serialize these operations in a single thread in a single
process. You will corrupt your data otherwise. See the (GH2397) for more information.
If you use locks to manage write access between multiple processes, you
may want to use fsync() before releasing write locks. For
convenience you can use store.flush(fsync=True) to do this for you.
Once a table is created, its columns (DataFrame)
are fixed; only exactly the same columns can be appended.
Be aware that timezones (e.g., pytz.timezone('US/Eastern'))
are not necessarily equal across timezone versions. So if data is
localized to a specific timezone in the HDFStore using one version
of a timezone library and that data is updated with another version, the data
will be converted to UTC since these timezones are not considered
equal. Either use the same version of timezone library or use tz_convert with
the updated timezone definition.
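A minimal sketch of that tz_convert step, assuming a timezone-aware frame was stored under the hypothetical key "df_tz":
# data read back as UTC can be re-localized with the current timezone definition
df_tz = store["df_tz"]
df_tz.index = df_tz.index.tz_convert("US/Eastern")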
Warning
PyTables will show a NaturalNameWarning if a column name
cannot be used as an attribute selector.
Natural identifiers contain only letters, numbers, and underscores,
and may not begin with a number.
Other identifiers cannot be used in a where clause
and are generally a bad idea.
DataTypes#
HDFStore will map an object dtype to the PyTables underlying
dtype. This means the following types are known to work:
Type
Represents missing values
floating : float64, float32, float16
np.nan
integer : int64, int32, int8, uint64,uint32, uint8
boolean
datetime64[ns]
NaT
timedelta64[ns]
NaT
categorical : see the section below
object : strings
np.nan
unicode columns are not supported, and WILL FAIL.
Categorical data#
You can write data that contains category dtypes to an HDFStore.
Queries work the same as if it were an object array. However, the category dtyped data is
stored in a more efficient manner.
In [564]: dfcat = pd.DataFrame(
.....: {"A": pd.Series(list("aabbcdba")).astype("category"), "B": np.random.randn(8)}
.....: )
.....:
In [565]: dfcat
Out[565]:
A B
0 a -1.608059
1 a 0.851060
2 b -0.736931
3 b 0.003538
4 c -1.422611
5 d 2.060901
6 b 0.993899
7 a -1.371768
In [566]: dfcat.dtypes
Out[566]:
A category
B float64
dtype: object
In [567]: cstore = pd.HDFStore("cats.h5", mode="w")
In [568]: cstore.append("dfcat", dfcat, format="table", data_columns=["A"])
In [569]: result = cstore.select("dfcat", where="A in ['b', 'c']")
In [570]: result
Out[570]:
A B
2 b -0.736931
3 b 0.003538
4 c -1.422611
6 b 0.993899
In [571]: result.dtypes
Out[571]:
A category
B float64
dtype: object
String columns#
min_itemsize
The underlying implementation of HDFStore uses a fixed column width (itemsize) for string columns.
A string column's itemsize is calculated as the maximum length of the data
(for that column) passed to the HDFStore in the first append. If a subsequent append
introduces a string larger than the column can hold, an Exception will be raised (otherwise you
could have a silent truncation of these columns, leading to loss of information). In the future we may relax this and
allow a user-specified truncation to occur.
Pass min_itemsize on the first table creation to a-priori specify the minimum length of a particular string column.
min_itemsize can be an integer, or a dict mapping a column name to an integer. You can pass values as a key to
allow all indexables or data_columns to have this min_itemsize.
Passing a min_itemsize dict will cause all passed columns to be created as data_columns automatically.
Note
If you are not passing any data_columns, then the min_itemsize will be the maximum of the length of any string passed.
In [572]: dfs = pd.DataFrame({"A": "foo", "B": "bar"}, index=list(range(5)))
In [573]: dfs
Out[573]:
A B
0 foo bar
1 foo bar
2 foo bar
3 foo bar
4 foo bar
# A and B have a size of 30
In [574]: store.append("dfs", dfs, min_itemsize=30)
In [575]: store.get_storer("dfs").table
Out[575]:
/dfs/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=30, shape=(2,), dflt=b'', pos=1)}
byteorder := 'little'
chunkshape := (963,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False}
# A is created as a data_column with a size of 30
# B's size is calculated
In [576]: store.append("dfs2", dfs, min_itemsize={"A": 30})
In [577]: store.get_storer("dfs2").table
Out[577]:
/dfs2/table (Table(5,)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"values_block_0": StringCol(itemsize=3, shape=(1,), dflt=b'', pos=1),
"A": StringCol(itemsize=30, shape=(), dflt=b'', pos=2)}
byteorder := 'little'
chunkshape := (1598,)
autoindex := True
colindexes := {
"index": Index(6, mediumshuffle, zlib(1)).is_csi=False,
"A": Index(6, mediumshuffle, zlib(1)).is_csi=False}
nan_rep
String columns will serialize a np.nan (a missing value) with the nan_rep string representation. This defaults to the string value nan.
Because of this, you could inadvertently turn an actual "nan" string into a missing value.
In [578]: dfss = pd.DataFrame({"A": ["foo", "bar", "nan"]})
In [579]: dfss
Out[579]:
A
0 foo
1 bar
2 nan
In [580]: store.append("dfss", dfss)
In [581]: store.select("dfss")
Out[581]:
A
0 foo
1 bar
2 NaN
# here you need to specify a different nan rep
In [582]: store.append("dfss2", dfss, nan_rep="_nan_")
In [583]: store.select("dfss2")
Out[583]:
A
0 foo
1 bar
2 nan
External compatibility#
HDFStore writes table format objects in specific formats suitable for
producing loss-less round trips to pandas objects. For external
compatibility, HDFStore can read native PyTables format
tables.
It is possible to write an HDFStore object that can easily be imported into R using the
rhdf5 library (Package website). Create a table format store like this:
In [584]: df_for_r = pd.DataFrame(
.....: {
.....: "first": np.random.rand(100),
.....: "second": np.random.rand(100),
.....: "class": np.random.randint(0, 2, (100,)),
.....: },
.....: index=range(100),
.....: )
.....:
In [585]: df_for_r.head()
Out[585]:
first second class
0 0.013480 0.504941 0
1 0.690984 0.898188 1
2 0.510113 0.618748 1
3 0.357698 0.004972 0
4 0.451658 0.012065 1
In [586]: store_export = pd.HDFStore("export.h5")
In [587]: store_export.append("df_for_r", df_for_r, data_columns=df_dc.columns)
In [588]: store_export
Out[588]:
<class 'pandas.io.pytables.HDFStore'>
File path: export.h5
In R this file can be read into a data.frame object using the rhdf5
library. The following example function reads the corresponding column names
and data values from the value nodes and assembles them into a data.frame:
# Load values and column names for all datasets from corresponding nodes and
# insert them into one data.frame object.
library(rhdf5)
loadhdf5data <- function(h5File) {
listing <- h5ls(h5File)
# Find all data nodes, values are stored in *_values and corresponding column
# titles in *_items
data_nodes <- grep("_values", listing$name)
name_nodes <- grep("_items", listing$name)
data_paths = paste(listing$group[data_nodes], listing$name[data_nodes], sep = "/")
name_paths = paste(listing$group[name_nodes], listing$name[name_nodes], sep = "/")
columns = list()
for (idx in seq(data_paths)) {
# NOTE: matrices returned by h5read have to be transposed to obtain
# required Fortran order!
data <- data.frame(t(h5read(h5File, data_paths[idx])))
names <- t(h5read(h5File, name_paths[idx]))
entry <- data.frame(data)
colnames(entry) <- names
columns <- append(columns, entry)
}
data <- data.frame(columns)
return(data)
}
Now you can import the DataFrame into R:
> data = loadhdf5data("export.h5")
> head(data)
first second class
1 0.4170220047 0.3266449 0
2 0.7203244934 0.5270581 0
3 0.0001143748 0.8859421 1
4 0.3023325726 0.3572698 1
5 0.1467558908 0.9085352 1
6 0.0923385948 0.6233601 1
Note
The R function lists the entire HDF5 file’s contents and assembles the
data.frame object from all matching nodes, so use this only as a
starting point if you have stored multiple DataFrame objects to a
single HDF5 file.
Performance#
The tables format comes with a writing performance penalty as compared to
fixed stores. The benefit is the ability to append/delete and
query (potentially very large amounts of data). Write times are
generally longer as compared with regular stores. Query times can
be quite fast, especially on an indexed axis.
You can pass chunksize=<int> to append, specifying the
write chunksize (default is 50000). This will significantly lower
your memory usage on writing.
You can pass expectedrows=<int> to the first append,
to set the TOTAL number of rows that PyTables will expect.
This will optimize read/write performance (see the sketch at the end of this section).
Duplicate rows can be written to tables, but are filtered out in
selection (with the last items being selected; thus a table is
unique on major, minor pairs)
A PerformanceWarning will be raised if you are attempting to
store types that will be pickled by PyTables (rather than stored as
endemic types). See
Here
for more information and some solutions.
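Below is a minimal sketch of the chunksize and expectedrows options described above, reusing the df_mt frame from the earlier example; the keys are hypothetical:
store.append("df_chunked_write", df_mt, chunksize=10000)  # write 10,000 rows per chunk
store.append("df_presized", df_mt, expectedrows=1000000)  # hint the expected total row count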
Feather#
Feather provides binary columnar serialization for data frames. It is designed to make reading and writing data
frames efficient, and to make sharing data across data analysis languages easy.
Feather is designed to faithfully serialize and de-serialize DataFrames, supporting all of the pandas
dtypes, including extension dtypes such as categorical and datetime with tz.
Several caveats:
The format will NOT write an Index, or MultiIndex for the
DataFrame and will raise an error if a non-default one is provided. You
can .reset_index() to store the index or .reset_index(drop=True) to
ignore it.
Duplicate column names and non-string column names are not supported.
Actual Python objects in object dtype columns are not supported. These will
raise a helpful error message on an attempt at serialization.
See the Full Documentation.
In [589]: df = pd.DataFrame(
.....: {
.....: "a": list("abc"),
.....: "b": list(range(1, 4)),
.....: "c": np.arange(3, 6).astype("u1"),
.....: "d": np.arange(4.0, 7.0, dtype="float64"),
.....: "e": [True, False, True],
.....: "f": pd.Categorical(list("abc")),
.....: "g": pd.date_range("20130101", periods=3),
.....: "h": pd.date_range("20130101", periods=3, tz="US/Eastern"),
.....: "i": pd.date_range("20130101", periods=3, freq="ns"),
.....: }
.....: )
.....:
In [590]: df
Out[590]:
a b c ... g h i
0 a 1 3 ... 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00.000000000
1 b 2 4 ... 2013-01-02 2013-01-02 00:00:00-05:00 2013-01-01 00:00:00.000000001
2 c 3 5 ... 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-01 00:00:00.000000002
[3 rows x 9 columns]
In [591]: df.dtypes
Out[591]:
a object
b int64
c uint8
d float64
e bool
f category
g datetime64[ns]
h datetime64[ns, US/Eastern]
i datetime64[ns]
dtype: object
Write to a feather file.
In [592]: df.to_feather("example.feather")
Read from a feather file.
In [593]: result = pd.read_feather("example.feather")
In [594]: result
Out[594]:
a b c ... g h i
0 a 1 3 ... 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00.000000000
1 b 2 4 ... 2013-01-02 2013-01-02 00:00:00-05:00 2013-01-01 00:00:00.000000001
2 c 3 5 ... 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-01 00:00:00.000000002
[3 rows x 9 columns]
# we preserve dtypes
In [595]: result.dtypes
Out[595]:
a object
b int64
c uint8
d float64
e bool
f category
g datetime64[ns]
h datetime64[ns, US/Eastern]
i datetime64[ns]
dtype: object
Parquet#
Apache Parquet provides a partitioned binary columnar serialization for data frames. It is designed to
make reading and writing data frames efficient, and to make sharing data across data analysis
languages easy. Parquet can use a variety of compression techniques to shrink the file size as much as possible
while still maintaining good read performance.
Parquet is designed to faithfully serialize and de-serialize DataFrame s, supporting all of the pandas
dtypes, including extension dtypes such as datetime with tz.
Several caveats.
Duplicate column names and non-string column names are not supported.
The pyarrow engine always writes the index to the output, but fastparquet only writes non-default
indexes. This extra column can cause problems for non-pandas consumers that are not expecting it. You can
force including or omitting indexes with the index argument, regardless of the underlying engine.
Index level names, if specified, must be strings.
In the pyarrow engine, categorical dtypes for non-string types can be serialized to parquet, but will de-serialize as their primitive dtype.
The pyarrow engine preserves the ordered flag of categorical dtypes with string types. fastparquet does not preserve the ordered flag.
Non supported types include Interval and actual Python object types. These will raise a helpful error message
on an attempt at serialization. Period type is supported with pyarrow >= 0.16.0.
The pyarrow engine preserves extension data types such as the nullable integer and string data
type (requiring pyarrow >= 0.16.0, and requiring the extension type to implement the needed protocols,
see the extension types documentation).
You can specify an engine to direct the serialization. This can be one of pyarrow, or fastparquet, or auto.
If the engine is NOT specified, then the pd.options.io.parquet.engine option is checked; if this is also auto,
then pyarrow is tried, falling back to fastparquet.
See the documentation for pyarrow and fastparquet.
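As a sketch, you can also pin the default engine explicitly through that option:
pd.set_option("io.parquet.engine", "pyarrow")
pd.DataFrame({"a": [1, 2]}).to_parquet("example_default_engine.parquet")  # engine= no longer needed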
Note
These engines are very similar and should read/write nearly identical parquet format files.
pyarrow>=8.0.0 supports timedelta data, and fastparquet>=0.1.4 supports timezone aware datetimes.
These libraries differ in their underlying dependencies (fastparquet uses numba, while pyarrow uses a C library).
In [596]: df = pd.DataFrame(
.....: {
.....: "a": list("abc"),
.....: "b": list(range(1, 4)),
.....: "c": np.arange(3, 6).astype("u1"),
.....: "d": np.arange(4.0, 7.0, dtype="float64"),
.....: "e": [True, False, True],
.....: "f": pd.date_range("20130101", periods=3),
.....: "g": pd.date_range("20130101", periods=3, tz="US/Eastern"),
.....: "h": pd.Categorical(list("abc")),
.....: "i": pd.Categorical(list("abc"), ordered=True),
.....: }
.....: )
.....:
In [597]: df
Out[597]:
a b c d e f g h i
0 a 1 3 4.0 True 2013-01-01 2013-01-01 00:00:00-05:00 a a
1 b 2 4 5.0 False 2013-01-02 2013-01-02 00:00:00-05:00 b b
2 c 3 5 6.0 True 2013-01-03 2013-01-03 00:00:00-05:00 c c
In [598]: df.dtypes
Out[598]:
a object
b int64
c uint8
d float64
e bool
f datetime64[ns]
g datetime64[ns, US/Eastern]
h category
i category
dtype: object
Write to a parquet file.
In [599]: df.to_parquet("example_pa.parquet", engine="pyarrow")
In [600]: df.to_parquet("example_fp.parquet", engine="fastparquet")
Read from a parquet file.
In [601]: result = pd.read_parquet("example_fp.parquet", engine="fastparquet")
In [602]: result = pd.read_parquet("example_pa.parquet", engine="pyarrow")
In [603]: result.dtypes
Out[603]:
a object
b int64
c uint8
d float64
e bool
f datetime64[ns]
g datetime64[ns, US/Eastern]
h category
i category
dtype: object
Read only certain columns of a parquet file.
In [604]: result = pd.read_parquet(
.....: "example_fp.parquet",
.....: engine="fastparquet",
.....: columns=["a", "b"],
.....: )
.....:
In [605]: result = pd.read_parquet(
.....: "example_pa.parquet",
.....: engine="pyarrow",
.....: columns=["a", "b"],
.....: )
.....:
In [606]: result.dtypes
Out[606]:
a object
b int64
dtype: object
Handling indexes#
Serializing a DataFrame to parquet may include the implicit index as one or
more columns in the output file. Thus, this code:
In [607]: df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
In [608]: df.to_parquet("test.parquet", engine="pyarrow")
creates a parquet file with three columns if you use pyarrow for serialization:
a, b, and __index_level_0__. If you’re using fastparquet, the
index may or may not
be written to the file.
This unexpected extra column causes some databases like Amazon Redshift to reject
the file, because that column doesn’t exist in the target table.
If you want to omit a dataframe’s indexes when writing, pass index=False to
to_parquet():
In [609]: df.to_parquet("test.parquet", index=False)
This creates a parquet file with just the two expected columns, a and b.
If your DataFrame has a custom index, you won’t get it back when you load
this file into a DataFrame.
Passing index=True will always write the index, even if that’s not the
underlying engine’s default behavior.
Partitioning Parquet files#
Parquet supports partitioning of data based on the values of one or more columns.
In [610]: df = pd.DataFrame({"a": [0, 0, 1, 1], "b": [0, 1, 0, 1]})
In [611]: df.to_parquet(path="test", engine="pyarrow", partition_cols=["a"], compression=None)
The path specifies the parent directory to which data will be saved.
The partition_cols are the column names by which the dataset will be partitioned.
Columns are partitioned in the order they are given. The partition splits are
determined by the unique values in the partition columns.
The above example creates a partitioned dataset that may look like:
test
├── a=0
│ ├── 0bac803e32dc42ae83fddfd029cbdebc.parquet
│ └── ...
└── a=1
├── e6ab24a4f45147b49b54a662f0c412a3.parquet
└── ...
ORC#
New in version 1.0.0.
Similar to the parquet format, the ORC Format is a binary columnar serialization
for data frames. It is designed to make reading data frames efficient. pandas provides both the reader and the writer for the
ORC format, read_orc() and to_orc(). This requires the pyarrow library.
Warning
It is highly recommended to install pyarrow using conda due to some issues with pyarrow.
to_orc() requires pyarrow>=7.0.0.
read_orc() and to_orc() are not yet supported on Windows; you can find valid environments on install optional dependencies.
For supported dtypes please refer to supported ORC features in Arrow.
Currently timezones in datetime columns are not preserved when a dataframe is converted into ORC files.
In [612]: df = pd.DataFrame(
.....: {
.....: "a": list("abc"),
.....: "b": list(range(1, 4)),
.....: "c": np.arange(4.0, 7.0, dtype="float64"),
.....: "d": [True, False, True],
.....: "e": pd.date_range("20130101", periods=3),
.....: }
.....: )
.....:
In [613]: df
Out[613]:
a b c d e
0 a 1 4.0 True 2013-01-01
1 b 2 5.0 False 2013-01-02
2 c 3 6.0 True 2013-01-03
In [614]: df.dtypes
Out[614]:
a object
b int64
c float64
d bool
e datetime64[ns]
dtype: object
Write to an orc file.
In [615]: df.to_orc("example_pa.orc", engine="pyarrow")
Read from an orc file.
In [616]: result = pd.read_orc("example_pa.orc")
In [617]: result.dtypes
Out[617]:
a object
b int64
c float64
d bool
e datetime64[ns]
dtype: object
Read only certain columns of an orc file.
In [618]: result = pd.read_orc(
.....: "example_pa.orc",
.....: columns=["a", "b"],
.....: )
.....:
In [619]: result.dtypes
Out[619]:
a object
b int64
dtype: object
SQL queries#
The pandas.io.sql module provides a collection of query wrappers to both
facilitate data retrieval and reduce dependency on DB-specific APIs. Database abstraction
is provided by SQLAlchemy if installed. In addition you will need a driver library for
your database. Examples of such drivers are psycopg2
for PostgreSQL or pymysql for MySQL.
For SQLite this is
included in Python’s standard library by default.
You can find an overview of supported drivers for each SQL dialect in the
SQLAlchemy docs.
If SQLAlchemy is not installed, a fallback is only provided for sqlite (and
for mysql for backwards compatibility, but this is deprecated and will be
removed in a future version).
This mode requires a Python database adapter which respects the Python
DB-API.
See also some cookbook examples for some advanced strategies.
The key functions are:
read_sql_table(table_name, con[, schema, ...])
Read SQL database table into a DataFrame.
read_sql_query(sql, con[, index_col, ...])
Read SQL query into a DataFrame.
read_sql(sql, con[, index_col, ...])
Read SQL query or database table into a DataFrame.
DataFrame.to_sql(name, con[, schema, ...])
Write records stored in a DataFrame to a SQL database.
Note
The function read_sql() is a convenience wrapper around
read_sql_table() and read_sql_query() (and for
backward compatibility) and will delegate to the specific function depending on
the provided input (database table name or SQL query).
Table names do not need to be quoted if they have special characters.
In the following example, we use the SQLite SQL database
engine. You can use a temporary SQLite database where data are stored in
“memory”.
To connect with SQLAlchemy you use the create_engine() function to create an engine
object from database URI. You only need to create the engine once per database you are
connecting to.
For more information on create_engine() and the URI formatting, see the examples
below and the SQLAlchemy documentation
In [620]: from sqlalchemy import create_engine
# Create your engine.
In [621]: engine = create_engine("sqlite:///:memory:")
If you want to manage your own connections you can pass one of those instead. The example below opens a
connection to the database using a Python context manager that automatically closes the connection after
the block has completed.
See the SQLAlchemy docs
for an explanation of how the database connection is handled.
with engine.connect() as conn, conn.begin():
data = pd.read_sql_table("data", conn)
Warning
When you open a connection to a database you are also responsible for closing it.
Side effects of leaving a connection open may include locking the database or
other breaking behaviour.
Writing DataFrames#
Assuming the following data is in a DataFrame data, we can insert it into
the database using to_sql().
id  Date        Col_1  Col_2  Col_3
26  2012-10-18  X       25.7  True
42  2012-10-19  Y      -12.4  False
63  2012-10-20  Z       5.73  True
In [622]: import datetime
In [623]: c = ["id", "Date", "Col_1", "Col_2", "Col_3"]
In [624]: d = [
.....: (26, datetime.datetime(2010, 10, 18), "X", 27.5, True),
.....: (42, datetime.datetime(2010, 10, 19), "Y", -12.5, False),
.....: (63, datetime.datetime(2010, 10, 20), "Z", 5.73, True),
.....: ]
.....:
In [625]: data = pd.DataFrame(d, columns=c)
In [626]: data
Out[626]:
id Date Col_1 Col_2 Col_3
0 26 2010-10-18 X 27.50 True
1 42 2010-10-19 Y -12.50 False
2 63 2010-10-20 Z 5.73 True
In [627]: data.to_sql("data", engine)
Out[627]: 3
With some databases, writing large DataFrames can result in errors due to
packet size limitations being exceeded. This can be avoided by setting the
chunksize parameter when calling to_sql. For example, the following
writes data to the database in batches of 1000 rows at a time:
In [628]: data.to_sql("data_chunked", engine, chunksize=1000)
Out[628]: 3
SQL data types#
to_sql() will try to map your data to an appropriate
SQL data type based on the dtype of the data. When you have columns of dtype
object, pandas will try to infer the data type.
You can always override the default type by specifying the desired SQL type of
any of the columns by using the dtype argument. This argument needs a
dictionary mapping column names to SQLAlchemy types (or strings for the sqlite3
fallback mode).
For example, specifying to use the sqlalchemy String type instead of the
default Text type for string columns:
In [629]: from sqlalchemy.types import String
In [630]: data.to_sql("data_dtype", engine, dtype={"Col_1": String})
Out[630]: 3
Note
Due to the limited support for timedelta’s in the different database
flavors, columns with type timedelta64 will be written as integer
values as nanoseconds to the database and a warning will be raised.
Note
Columns of category dtype will be converted to the dense representation
as you would get with np.asarray(categorical) (e.g. for string categories
this gives an array of strings).
Because of this, reading the database table back in does not generate
a categorical.
Datetime data types#
Using SQLAlchemy, to_sql() is capable of writing
datetime data that is timezone naive or timezone aware. However, the resulting
data stored in the database ultimately depends on the supported data type
for datetime data of the database system being used.
The following table lists supported data types for datetime data for some
common databases. Other database dialects may have different data types for
datetime data.
Database      SQL Datetime Types                       Timezone Support
SQLite        TEXT                                     No
MySQL         TIMESTAMP or DATETIME                    No
PostgreSQL    TIMESTAMP or TIMESTAMP WITH TIME ZONE    Yes
When writing timezone aware data to databases that do not support timezones,
the data will be written as timezone naive timestamps that are in local time
with respect to the timezone.
read_sql_table() is also capable of reading datetime data that is
timezone aware or naive. When reading TIMESTAMP WITH TIME ZONE types, pandas
will convert the data to UTC.
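A minimal sketch with the SQLite engine used in these examples ("data_tz" is a hypothetical table name); since SQLite has no timezone support, the aware timestamps are written as timezone naive local times:
df_tz_sql = pd.DataFrame({"ts": pd.date_range("2020-01-01", periods=3, tz="US/Eastern")})
df_tz_sql.to_sql("data_tz", engine, index=False)
pd.read_sql_table("data_tz", engine, parse_dates=["ts"])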
Insertion method#
The parameter method controls the SQL insertion clause used.
Possible values are:
None: Uses standard SQL INSERT clause (one per row).
'multi': Pass multiple values in a single INSERT clause.
It uses a special SQL syntax not supported by all backends.
This usually provides better performance for analytic databases
like Presto and Redshift, but has worse performance for
traditional SQL backends if the table contains many columns.
For more information check the SQLAlchemy documentation.
callable with signature (pd_table, conn, keys, data_iter):
This can be used to implement a more performant insertion method based on
specific backend dialect features.
Example of a callable using PostgreSQL COPY clause:
# Alternative to_sql() *method* for DBs that support COPY FROM
import csv
from io import StringIO
def psql_insert_copy(table, conn, keys, data_iter):
"""
Execute SQL statement inserting data
Parameters
----------
table : pandas.io.sql.SQLTable
conn : sqlalchemy.engine.Engine or sqlalchemy.engine.Connection
keys : list of str
Column names
data_iter : Iterable that iterates the values to be inserted
"""
# gets a DBAPI connection that can provide a cursor
dbapi_conn = conn.connection
with dbapi_conn.cursor() as cur:
s_buf = StringIO()
writer = csv.writer(s_buf)
writer.writerows(data_iter)
s_buf.seek(0)
columns = ', '.join(['"{}"'.format(k) for k in keys])
if table.schema:
table_name = '{}.{}'.format(table.schema, table.name)
else:
table_name = table.name
sql = 'COPY {} ({}) FROM STDIN WITH CSV'.format(
table_name, columns)
cur.copy_expert(sql=sql, file=s_buf)
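The callable is then passed through the method argument of to_sql. This is only a sketch: it needs a PostgreSQL connection, so postgresql_engine below is an assumed engine and "data_copy" an assumed table name, not objects created elsewhere on this page.
data.to_sql("data_copy", postgresql_engine, method=psql_insert_copy)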
Reading tables#
read_sql_table() will read a database table given the
table name and optionally a subset of columns to read.
Note
In order to use read_sql_table(), you must have the
SQLAlchemy optional dependency installed.
In [631]: pd.read_sql_table("data", engine)
Out[631]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 X 27.50 True
1 1 42 2010-10-19 Y -12.50 False
2 2 63 2010-10-20 Z 5.73 True
Note
Note that pandas infers column dtypes from query outputs, and not by looking
up data types in the physical database schema. For example, assume userid
is an integer column in a table. Then, intuitively, select userid ... will
return integer-valued series, while select cast(userid as text) ... will
return object-valued (str) series. Accordingly, if the query output is empty,
then all resulting columns will be returned as object-valued (since they are
most general). If you foresee that your query will sometimes generate an empty
result, you may want to explicitly typecast afterwards to ensure dtype
integrity.
You can also specify the name of the column as the DataFrame index,
and specify a subset of columns to be read.
In [632]: pd.read_sql_table("data", engine, index_col="id")
Out[632]:
index Date Col_1 Col_2 Col_3
id
26 0 2010-10-18 X 27.50 True
42 1 2010-10-19 Y -12.50 False
63 2 2010-10-20 Z 5.73 True
In [633]: pd.read_sql_table("data", engine, columns=["Col_1", "Col_2"])
Out[633]:
Col_1 Col_2
0 X 27.50
1 Y -12.50
2 Z 5.73
And you can explicitly force columns to be parsed as dates:
In [634]: pd.read_sql_table("data", engine, parse_dates=["Date"])
Out[634]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 X 27.50 True
1 1 42 2010-10-19 Y -12.50 False
2 2 63 2010-10-20 Z 5.73 True
If needed you can explicitly specify a format string, or a dict of arguments
to pass to pandas.to_datetime():
pd.read_sql_table("data", engine, parse_dates={"Date": "%Y-%m-%d"})
pd.read_sql_table(
"data",
engine,
parse_dates={"Date": {"format": "%Y-%m-%d %H:%M:%S"}},
)
You can check if a table exists using has_table()
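As a minimal sketch, has_table() lives in pandas.io.sql and takes the table name and a connectable:
from pandas.io import sql
sql.has_table("data", engine)  # returns True if the table exists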
Schema support#
Reading from and writing to different schemas is supported through the schema
keyword in the read_sql_table() and to_sql()
functions. Note however that this depends on the database flavor (sqlite does not
have schemas). For example:
df.to_sql("table", engine, schema="other_schema")
pd.read_sql_table("table", engine, schema="other_schema")
Querying#
You can query using raw SQL in the read_sql_query() function.
In this case you must use the SQL variant appropriate for your database.
When using SQLAlchemy, you can also pass SQLAlchemy Expression language constructs,
which are database-agnostic.
In [635]: pd.read_sql_query("SELECT * FROM data", engine)
Out[635]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 00:00:00.000000 X 27.50 1
1 1 42 2010-10-19 00:00:00.000000 Y -12.50 0
2 2 63 2010-10-20 00:00:00.000000 Z 5.73 1
Of course, you can specify a more “complex” query.
In [636]: pd.read_sql_query("SELECT id, Col_1, Col_2 FROM data WHERE id = 42;", engine)
Out[636]:
id Col_1 Col_2
0 42 Y -12.5
The read_sql_query() function supports a chunksize argument.
Specifying this will return an iterator through chunks of the query result:
In [637]: df = pd.DataFrame(np.random.randn(20, 3), columns=list("abc"))
In [638]: df.to_sql("data_chunks", engine, index=False)
Out[638]: 20
In [639]: for chunk in pd.read_sql_query("SELECT * FROM data_chunks", engine, chunksize=5):
.....: print(chunk)
.....:
a b c
0 0.070470 0.901320 0.937577
1 0.295770 1.420548 -0.005283
2 -1.518598 -0.730065 0.226497
3 -2.061465 0.632115 0.853619
4 2.719155 0.139018 0.214557
a b c
0 -1.538924 -0.366973 -0.748801
1 -0.478137 -1.559153 -3.097759
2 -2.320335 -0.221090 0.119763
3 0.608228 1.064810 -0.780506
4 -2.736887 0.143539 1.170191
a b c
0 -1.573076 0.075792 -1.722223
1 -0.774650 0.803627 0.221665
2 0.584637 0.147264 1.057825
3 -0.284136 0.912395 1.552808
4 0.189376 -0.109830 0.539341
a b c
0 0.592591 -0.155407 -1.356475
1 0.833837 1.524249 1.606722
2 -0.029487 -0.051359 1.700152
3 0.921484 -0.926347 0.979818
4 0.182380 -0.186376 0.049820
You can also run a plain query without creating a DataFrame with
execute(). This is useful for queries that don’t return values,
such as INSERT. This is functionally equivalent to calling execute on the
SQLAlchemy engine or db connection object. Again, you must use the SQL syntax
variant appropriate for your database.
from pandas.io import sql
sql.execute("SELECT * FROM table_name", engine)
sql.execute(
"INSERT INTO table_name VALUES(?, ?, ?)", engine, params=[("id", 1, 12.2, True)]
)
Engine connection examples#
To connect with SQLAlchemy you use the create_engine() function to create an engine
object from database URI. You only need to create the engine once per database you are
connecting to.
from sqlalchemy import create_engine
engine = create_engine("postgresql://scott:[email protected]:5432/mydatabase")
engine = create_engine("mysql+mysqldb://scott:[email protected]/foo")
engine = create_engine("oracle://scott:[email protected]7.0.0.1:1521/sidname")
engine = create_engine("mssql+pyodbc://mydsn")
# sqlite://<nohostname>/<path>
# where <path> is relative:
engine = create_engine("sqlite:///foo.db")
# or absolute, starting with a slash:
engine = create_engine("sqlite:////absolute/path/to/foo.db")
For more information see the examples the SQLAlchemy documentation
Advanced SQLAlchemy queries#
You can use SQLAlchemy constructs to describe your query.
Use sqlalchemy.text() to specify query parameters in a backend-neutral way
In [640]: import sqlalchemy as sa
In [641]: pd.read_sql(
.....: sa.text("SELECT * FROM data where Col_1=:col1"), engine, params={"col1": "X"}
.....: )
.....:
Out[641]:
index id Date Col_1 Col_2 Col_3
0 0 26 2010-10-18 00:00:00.000000 X 27.5 1
If you have an SQLAlchemy description of your database you can express where conditions using SQLAlchemy expressions
In [642]: metadata = sa.MetaData()
In [643]: data_table = sa.Table(
.....: "data",
.....: metadata,
.....: sa.Column("index", sa.Integer),
.....: sa.Column("Date", sa.DateTime),
.....: sa.Column("Col_1", sa.String),
.....: sa.Column("Col_2", sa.Float),
.....: sa.Column("Col_3", sa.Boolean),
.....: )
.....:
In [644]: pd.read_sql(sa.select([data_table]).where(data_table.c.Col_3 is True), engine)
Out[644]:
Empty DataFrame
Columns: [index, Date, Col_1, Col_2, Col_3]
Index: []
You can combine SQLAlchemy expressions with parameters passed to read_sql() using sqlalchemy.bindparam()
In [645]: import datetime as dt
In [646]: expr = sa.select([data_table]).where(data_table.c.Date > sa.bindparam("date"))
In [647]: pd.read_sql(expr, engine, params={"date": dt.datetime(2010, 10, 18)})
Out[647]:
index Date Col_1 Col_2 Col_3
0 1 2010-10-19 Y -12.50 False
1 2 2010-10-20 Z 5.73 True
Sqlite fallback#
The use of sqlite is supported without using SQLAlchemy.
This mode requires a Python database adapter which respects the Python
DB-API.
You can create connections like so:
import sqlite3
con = sqlite3.connect(":memory:")
And then issue the following queries:
data.to_sql("data", con)
pd.read_sql_query("SELECT * FROM data", con)
Google BigQuery#
Warning
Starting in 0.20.0, pandas has split off Google BigQuery support into the
separate package pandas-gbq. You can pip install pandas-gbq to get it.
The pandas-gbq package provides functionality to read/write from Google BigQuery.
pandas integrates with this external package. If pandas-gbq is installed, you can
use the pandas methods pd.read_gbq and DataFrame.to_gbq, which will call the
respective functions from pandas-gbq.
Full documentation can be found here.
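A minimal sketch, assuming pandas-gbq is installed; the project and table names below are placeholders:
df_gbq = pd.read_gbq("SELECT 1 AS x", project_id="my-project")
df_gbq.to_gbq("my_dataset.my_table", project_id="my-project", if_exists="replace")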
Stata format#
Writing to stata format#
The method to_stata() will write a DataFrame
into a .dta file. The format version of this file is always 115 (Stata 12).
In [648]: df = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
In [649]: df.to_stata("stata.dta")
Stata data files have limited data type support; only strings with
244 or fewer characters, int8, int16, int32, float32
and float64 can be stored in .dta files. Additionally,
Stata reserves certain values to represent missing data. Exporting a
non-missing value that is outside of the permitted range in Stata for
a particular data type will retype the variable to the next larger
size. For example, int8 values are restricted to lie between -127
and 100 in Stata, and so variables with values above 100 will trigger
a conversion to int16. nan values in floating points data
types are stored as the basic missing data type (. in Stata).
Note
It is not possible to export missing data values for integer data types.
The Stata writer gracefully handles other data types including int64,
bool, uint8, uint16, uint32 by casting to
the smallest supported type that can represent the data. For example, data
with a type of uint8 will be cast to int8 if all values are less than
100 (the upper bound for non-missing int8 data in Stata), or, if values are
outside of this range, the variable is cast to int16.
Warning
Conversion from int64 to float64 may result in a loss of precision
if int64 values are larger than 2**53.
Warning
StataWriter and
to_stata() only support fixed width
strings containing up to 244 characters, a limitation imposed by the version
115 dta file format. Attempting to write Stata dta files with strings
longer than 244 characters raises a ValueError.
Reading from Stata format#
The top-level function read_stata will read a dta file and return
either a DataFrame or a StataReader that can
be used to read the file incrementally.
In [650]: pd.read_stata("stata.dta")
Out[650]:
index A B
0 0 -1.690072 0.405144
1 1 -1.511309 -1.531396
2 2 0.572698 -1.106845
3 3 -1.185859 0.174564
4 4 0.603797 -1.796129
5 5 -0.791679 1.173795
6 6 -0.277710 1.859988
7 7 -0.258413 1.251808
8 8 1.443262 0.441553
9 9 1.168163 -2.054946
Specifying a chunksize yields a
StataReader instance that can be used to
read chunksize lines from the file at a time. The StataReader
object can be used as an iterator.
In [651]: with pd.read_stata("stata.dta", chunksize=3) as reader:
.....: for df in reader:
.....: print(df.shape)
.....:
(3, 3)
(3, 3)
(3, 3)
(1, 3)
For more fine-grained control, use iterator=True and specify
chunksize with each call to
read().
In [652]: with pd.read_stata("stata.dta", iterator=True) as reader:
.....: chunk1 = reader.read(5)
.....: chunk2 = reader.read(5)
.....:
Currently the index is retrieved as a column.
The parameter convert_categoricals indicates whether value labels should be
read and used to create a Categorical variable from them. Value labels can
also be retrieved by the function value_labels, which requires read()
to be called before use.
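As a sketch, reusing the stata.dta file written above (which carries no value labels, so the result is an empty dict):
with pd.read_stata("stata.dta", iterator=True) as reader:
    data = reader.read()
    labels = reader.value_labels()  # mapping of variable name -> {value: label}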
The parameter convert_missing indicates whether missing value
representations in Stata should be preserved. If False (the default),
missing values are represented as np.nan. If True, missing values are
represented using StataMissingValue objects, and columns containing missing
values will have object data type.
Note
read_stata() and
StataReader support .dta formats 113-115
(Stata 10-12), 117 (Stata 13), and 118 (Stata 14).
Note
Setting preserve_dtypes=False will upcast to the standard pandas data types:
int64 for all integer types and float64 for floating point data. By default,
the Stata data types are preserved when importing.
Categorical data#
Categorical data can be exported to Stata data files as value labeled data.
The exported data consists of the underlying category codes as integer data values
and the categories as value labels. Stata does not have an explicit equivalent
to a Categorical and information about whether the variable is ordered
is lost when exporting.
Warning
Stata only supports string value labels, and so str is called on the
categories when exporting data. Exporting Categorical variables with
non-string categories produces a warning, and can result in a loss of
information if the str representations of the categories are not unique.
Labeled data can similarly be imported from Stata data files as Categorical
variables using the keyword argument convert_categoricals (True by default).
The keyword argument order_categoricals (True by default) determines
whether imported Categorical variables are ordered.
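A minimal sketch, with "labeled.dta" standing in for a hypothetical value-labeled Stata file:
df_cat = pd.read_stata("labeled.dta", convert_categoricals=True, order_categoricals=True)
df_raw = pd.read_stata("labeled.dta", convert_categoricals=False)  # keep the raw integer codes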
Note
When importing categorical data, the values of the variables in the Stata
data file are not preserved since Categorical variables always
use integer data types between -1 and n-1 where n is the number
of categories. If the original values in the Stata data file are required,
these can be imported by setting convert_categoricals=False, which will
import original data (but not the variable labels). The original values can
be matched to the imported categorical data since there is a simple mapping
between the original Stata data values and the category codes of imported
Categorical variables: missing values are assigned code -1, and the
smallest original value is assigned 0, the second smallest is assigned
1 and so on until the largest original value is assigned the code n-1.
Note
Stata supports partially labeled series. These series have value labels for
some but not all data values. Importing a partially labeled series will produce
a Categorical with string categories for the values that are labeled and
numeric categories for values with no label.
SAS formats#
The top-level function read_sas() can read (but not write) SAS
XPORT (.xpt) and (since v0.18.0) SAS7BDAT (.sas7bdat) format files.
SAS files only contain two value types: ASCII text and floating point
values (usually 8 bytes but sometimes truncated). For xport files,
there is no automatic type conversion to integers, dates, or
categoricals. For SAS7BDAT files, the format codes may allow date
variables to be automatically converted to dates. By default the
whole file is read and returned as a DataFrame.
Specify a chunksize or use iterator=True to obtain reader
objects (XportReader or SAS7BDATReader) for incrementally
reading the file. The reader objects also have attributes that
contain additional information about the file and its variables.
Read a SAS7BDAT file:
df = pd.read_sas("sas_data.sas7bdat")
Obtain an iterator and read an XPORT file 100,000 lines at a time:
def do_something(chunk):
pass
with pd.read_sas("sas_xport.xpt", chunksize=100000) as rdr:
for chunk in rdr:
do_something(chunk)
The specification for the xport file format is available from the SAS
web site.
No official documentation is available for the SAS7BDAT format.
SPSS formats#
New in version 0.25.0.
The top-level function read_spss() can read (but not write) SPSS
SAV (.sav) and ZSAV (.zsav) format files.
SPSS files contain column names. By default the
whole file is read, categorical columns are converted into pd.Categorical,
and a DataFrame with all columns is returned.
Specify the usecols parameter to obtain a subset of columns. Specify convert_categoricals=False
to avoid converting categorical columns into pd.Categorical.
Read an SPSS file:
df = pd.read_spss("spss_data.sav")
Extract a subset of columns contained in usecols from an SPSS file and
avoid converting categorical columns into pd.Categorical:
df = pd.read_spss(
"spss_data.sav",
usecols=["foo", "bar"],
convert_categoricals=False,
)
More information about the SAV and ZSAV file formats is available here.
Other file formats#
pandas itself only supports IO with a limited set of file formats that map
cleanly to its tabular data model. For reading and writing other file formats
into and from pandas, we recommend these packages from the broader community.
netCDF#
xarray provides data structures inspired by the pandas DataFrame for working
with multi-dimensional datasets, with a focus on the netCDF file format and
easy conversion to and from pandas.
Performance considerations#
This is an informal comparison of various IO methods, using pandas
0.24.2. Timings are machine dependent and small differences should be
ignored.
In [1]: sz = 1000000
In [2]: df = pd.DataFrame({'A': np.random.randn(sz), 'B': [1] * sz})
In [3]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 2 columns):
A 1000000 non-null float64
B 1000000 non-null int64
dtypes: float64(1), int64(1)
memory usage: 15.3 MB
The following test functions will be used below to compare the performance of several IO methods:
import numpy as np
import os
import sqlite3
sz = 1000000
np.random.seed(42)
df = pd.DataFrame({"A": np.random.randn(sz), "B": [1] * sz})
def test_sql_write(df):
if os.path.exists("test.sql"):
os.remove("test.sql")
sql_db = sqlite3.connect("test.sql")
df.to_sql(name="test_table", con=sql_db)
sql_db.close()
def test_sql_read():
sql_db = sqlite3.connect("test.sql")
pd.read_sql_query("select * from test_table", sql_db)
sql_db.close()
def test_hdf_fixed_write(df):
df.to_hdf("test_fixed.hdf", "test", mode="w")
def test_hdf_fixed_read():
pd.read_hdf("test_fixed.hdf", "test")
def test_hdf_fixed_write_compress(df):
df.to_hdf("test_fixed_compress.hdf", "test", mode="w", complib="blosc")
def test_hdf_fixed_read_compress():
pd.read_hdf("test_fixed_compress.hdf", "test")
def test_hdf_table_write(df):
df.to_hdf("test_table.hdf", "test", mode="w", format="table")
def test_hdf_table_read():
pd.read_hdf("test_table.hdf", "test")
def test_hdf_table_write_compress(df):
df.to_hdf(
"test_table_compress.hdf", "test", mode="w", complib="blosc", format="table"
)
def test_hdf_table_read_compress():
pd.read_hdf("test_table_compress.hdf", "test")
def test_csv_write(df):
df.to_csv("test.csv", mode="w")
def test_csv_read():
pd.read_csv("test.csv", index_col=0)
def test_feather_write(df):
df.to_feather("test.feather")
def test_feather_read():
pd.read_feather("test.feather")
def test_pickle_write(df):
df.to_pickle("test.pkl")
def test_pickle_read():
pd.read_pickle("test.pkl")
def test_pickle_write_compress(df):
df.to_pickle("test.pkl.compress", compression="xz")
def test_pickle_read_compress():
pd.read_pickle("test.pkl.compress", compression="xz")
def test_parquet_write(df):
df.to_parquet("test.parquet")
def test_parquet_read():
pd.read_parquet("test.parquet")
When writing, the top three functions in terms of speed are test_feather_write, test_hdf_fixed_write and test_hdf_fixed_write_compress.
In [4]: %timeit test_sql_write(df)
3.29 s ± 43.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [5]: %timeit test_hdf_fixed_write(df)
19.4 ms ± 560 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [6]: %timeit test_hdf_fixed_write_compress(df)
19.6 ms ± 308 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [7]: %timeit test_hdf_table_write(df)
449 ms ± 5.61 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [8]: %timeit test_hdf_table_write_compress(df)
448 ms ± 11.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [9]: %timeit test_csv_write(df)
3.66 s ± 26.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [10]: %timeit test_feather_write(df)
9.75 ms ± 117 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [11]: %timeit test_pickle_write(df)
30.1 ms ± 229 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [12]: %timeit test_pickle_write_compress(df)
4.29 s ± 15.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [13]: %timeit test_parquet_write(df)
67.6 ms ± 706 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
When reading, the top three functions in terms of speed are test_feather_read, test_pickle_read and
test_hdf_fixed_read.
In [14]: %timeit test_sql_read()
1.77 s ± 17.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [15]: %timeit test_hdf_fixed_read()
19.4 ms ± 436 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [16]: %timeit test_hdf_fixed_read_compress()
19.5 ms ± 222 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [17]: %timeit test_hdf_table_read()
38.6 ms ± 857 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [18]: %timeit test_hdf_table_read_compress()
38.8 ms ± 1.49 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [19]: %timeit test_csv_read()
452 ms ± 9.04 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [20]: %timeit test_feather_read()
12.4 ms ± 99.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [21]: %timeit test_pickle_read()
18.4 ms ± 191 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [22]: %timeit test_pickle_read_compress()
915 ms ± 7.48 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [23]: %timeit test_parquet_read()
24.4 ms ± 146 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
The files test.pkl.compress, test.parquet and test.feather took the least space on disk (in bytes).
29519500 Oct 10 06:45 test.csv
16000248 Oct 10 06:45 test.feather
8281983 Oct 10 06:49 test.parquet
16000857 Oct 10 06:47 test.pkl
7552144 Oct 10 06:48 test.pkl.compress
34816000 Oct 10 06:42 test.sql
24009288 Oct 10 06:43 test_fixed.hdf
24009288 Oct 10 06:43 test_fixed_compress.hdf
24458940 Oct 10 06:44 test_table.hdf
24458940 Oct 10 06:44 test_table_compress.hdf
| 340
| 956
|
python/pandas: how to convert a list to a single cell and store it in Excel or in CSV format
[I am expecting the output shown on the left side, but I am getting the output shown on the right side]
I have a list:
listA = ['Vlan VN-Segment', '==== ==========', '800 30800', '801 30801', '3951 33951']
My output should be
vlan vn-segment
==== ==========
800 30800
801 30801
3951 33951
But all 4 rows should be in a single CELL in Excel, as above.
I tried the following, but the output ends up in 4 different rows in the Excel/CSV file:
my_input_file = open('n9k-1.txt')
my_string = my_input_file.read().strip()
my_list = json.loads(my_string)
#print(type(my_list))
x = (my_list[2])
print(x)
t = StringIO('\n'.join(map(str, x)))
df = pd.read_csv(t)
df2 = df.to_csv('/Users/masam/Python-Scripts/new.csv', index=False)
|
64,486,178
|
Transform a Pandas Series into a Dataframe with a for loop
|
<p>Thanks in advance for anyone's help.</p>
<p>I'm trying to transform this Pandas series into a Dataframe with the following logic.</p>
<p>Any time a row from the series starts with "MB" it should create another column in the dataframe, and all the rows below it until the next "MB" should go under that column.</p>
<pre><code>MB104
TR15
TR16
SP16
MB301
TR16
SP11
SP16
SP26
SP67
MB302
TR15
MB504
TR15
SP16
SP67
SP109
MB652
SP109
SP110
</code></pre>
<p>Into this:</p>
<pre><code>MB104 MB031 MB302 MB504 MB652
TR15 TR16 TR15 TR15 SP109
TR16 SP11 SP16 SP110
SP16 SP16 SP67
SP26 SP109
SP67
</code></pre>
<p>And this is what I've tried so far</p>
<pre><code>mbdf = pd.DataFrame()
assetlist = []
for row in mbs.itertuples():
left2 = row.data[:2]
if left2 == 'MB':
if headername:
mbdf[headername] = pd.Series(assetlist)
headername = row.data
assetlist = []
else:
assetname = row.data
assetlist.append(assetname)
</code></pre>
| 64,486,402
| 2020-10-22T16:05:21.053000
| 3
| null | 1
| 181
|
python|pandas
|
<p>It's unclear from your question whether you want them as separate Series or together in the same DataFrame. I assume you want a DataFrame:</p>
<pre><code># Read the data
from collections import defaultdict
data = defaultdict(list)
col = None
with open('data.txt') as fp:
for line in fp:
line = line.strip('\n')
if line.startswith('MB'):
col = line
else:
data[col].append(line)
</code></pre>
<p>If you want a collection of series:</p>
<pre><code>series = [pd.Series(value, name=key) for key, value in data.items()]
</code></pre>
<p>If you want a DataFrame:</p>
<pre><code># Pad every column to the same length
max_len = max(len(v) for v in data.values())
for key, value in data.items():
value += [None for _ in range(max_len - len(value))]
df = pd.DataFrame(data)
</code></pre>
| 2020-10-22T16:20:29.660000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.iteritems.html
|
It's unclear from your question whether you want them as separate Series or together in the same DataFrame. I assume you want a DataFrame:
# Read the data
from collections import defaultdict
data = defaultdict(list)
col = None
with open('data.txt') as fp:
for line in fp:
line = line.strip('\n')
if line.startswith('MB'):
col = line
else:
data[col].append(line)
If you want a collection of series:
series = [pd.Series(value, name=key) for key, value in data.items()]
If you want a DataFrame:
# Pad every column to the same length
max_len = max(len(v) for v in data.values())
for key, value in data.items():
value += [None for _ in range(max_len - len(value))]
df = pd.DataFrame(data)
| 0
| 744
|
Transform a Pandas Series into a Dataframe with a for loop
Thanks in advance for anyone's help.
I'm trying to transform this Pandas series into a Dataframe with the following logic.
Any time a row from the series starts with "MB" it should create another column in the dataframe, and all the rows below it until the next "MB" should go under that column.
MB104
TR15
TR16
SP16
MB301
TR16
SP11
SP16
SP26
SP67
MB302
TR15
MB504
TR15
SP16
SP67
SP109
MB652
SP109
SP110
Into this:
MB104 MB031 MB302 MB504 MB652
TR15 TR16 TR15 TR15 SP109
TR16 SP11 SP16 SP110
SP16 SP16 SP67
SP26 SP109
SP67
And this is what I've tried so far
mbdf = pd.DataFrame()
assetlist = []
for row in mbs.itertuples():
left2 = row.data[:2]
if left2 == 'MB':
if headername:
mbdf[headername] = pd.Series(assetlist)
headername = row.data
assetlist = []
else:
assetname = row.data
assetlist.append(assetname)
|
64,568,472
|
How to map to multiple values in a dictionary in pandas
|
<p>I have the following <code>pandas df</code>:</p>
<pre><code>Name
Jack
Alex
Jackie
Susan
</code></pre>
<p>i also have the following dict:</p>
<pre><code>d = {'Jack':['Male','22'],'Alex':['Male','26'],'Jackie':['Female','28'],'Susan':['Female','30']}
</code></pre>
<p>I would like to add in two columns for <code>Gender</code> and <code>Age</code> so that my <code>df</code> returns:</p>
<pre><code>Name Gender Age
Jack Male 22
Alex Male 26
Jackie Female 28
Susan Female 30
</code></pre>
<p>I have tried:</p>
<pre><code>df['Gender'] = df.Name.map(d[0])
df['Age'] = df.Name.map(d[1])
</code></pre>
<p>but no such luck. Any ideas or help would be much appreciated! Thanks!</p>
| 64,568,548
| 2020-10-28T07:48:13.813000
| 4
| null | 1
| 735
|
python|pandas
|
<p>These solutions also work well if there is no match in the dictionary, for example:</p>
<pre><code>d = {'Alex':['Male','26'],'Jackie':['Female','28'],'Susan':['Female','30']}
print (df)
Name Gender Age
0 Alex Male 26
1 Jack NaN NaN
2 Jackie Female 28
3 Susan Female 30
</code></pre>
<hr />
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.from_dict.html" rel="nofollow noreferrer"><code>DataFrame.from_dict</code></a> on your dictionary and attach it to the <code>Name</code> column with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a>; the advantage is that if the input data has more columns, they are all handled the same way:</p>
<pre><code>df = df.join(pd.DataFrame.from_dict(d, orient='index', columns=['Gender','Age']), on='Name')
print (df)
Name Gender Age
0 Jack Male 22
1 Alex Male 26
2 Jackie Female 28
3 Susan Female 30
</code></pre>
<p>Your solution should work if you create 2 dictionaries:</p>
<pre><code>d1 = {k:v[0] for k,v in d.items()}
d2 = {k:v[1] for k,v in d.items()}
df['Gender'] = df.Name.map(d1)
df['Age'] = df.Name.map(d2)
print (df)
Name Gender Age
0 Jack Male 22
1 Alex Male 26
2 Jackie Female 28
3 Susan Female 30
</code></pre>
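<p>If every name is guaranteed to be present in the dictionary (no missing keys), a single <code>map</code> followed by an expansion of the lists also works. This is just a sketch under that assumption, reusing the names from the question:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Name': ['Jack', 'Alex', 'Jackie', 'Susan']})
d = {'Jack': ['Male', '22'], 'Alex': ['Male', '26'],
     'Jackie': ['Female', '28'], 'Susan': ['Female', '30']}

# map() returns a Series of lists; turning it into a frame assumes every
# name has an entry in d, otherwise a NaN row would break the expansion.
expanded = pd.DataFrame(df['Name'].map(d).tolist(),
                        columns=['Gender', 'Age'], index=df.index)
df = df.join(expanded)
</code></pre>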
| 2020-10-28T07:52:06.370000
| 0
|
https://pandas.pydata.org/docs/reference/api/pandas.Series.map.html
|
pandas.Series.map#
pandas.Series.map#
Series.map(arg, na_action=None)[source]#
Map values of Series according to an input mapping or function.
Used for substituting each value in a Series with another value,
that may be derived from a function, a dict or
a Series.
Parameters
arg : function, collections.abc.Mapping subclass or Series
Mapping correspondence.
These solutions also work well if there is no match in the dictionary, for example:
d = {'Alex':['Male','26'],'Jackie':['Female','28'],'Susan':['Female','30']}
print (df)
Name Gender Age
0 Alex Male 26
1 Jack NaN NaN
2 Jackie Female 28
3 Susan Female 30
Use DataFrame.from_dict on your dictionary and attach it to the Name column with DataFrame.join; the advantage is that if the input data has more columns, they are all handled the same way:
df = df.join(pd.DataFrame.from_dict(d, orient='index', columns=['Gender','Age']), on='Name')
print (df)
Name Gender Age
0 Jack Male 22
1 Alex Male 26
2 Jackie Female 28
3 Susan Female 30
Your solution should work if you create 2 dictionaries:
d1 = {k:v[0] for k,v in d.items()}
d2 = {k:v[1] for k,v in d.items()}
df['Gender'] = df.Name.map(d1)
df['Age'] = df.Name.map(d2)
print (df)
Name Gender Age
0 Jack Male 22
1 Alex Male 26
2 Jackie Female 28
3 Susan Female 30
na_action : {None, ‘ignore’}, default None
If ‘ignore’, propagate NaN values, without passing them to the
mapping correspondence.
Returns
Series
Same index as caller.
See also
Series.apply : For applying more complex functions on a Series.
DataFrame.apply : Apply a function row-/column-wise.
DataFrame.applymap : Apply a function elementwise on a whole DataFrame.
Notes
When arg is a dictionary, values in Series that are not in the
dictionary (as keys) are converted to NaN. However, if the
dictionary is a dict subclass that defines __missing__ (i.e.
provides a method for default values), then this default is used
rather than NaN.
Examples
>>> s = pd.Series(['cat', 'dog', np.nan, 'rabbit'])
>>> s
0 cat
1 dog
2 NaN
3 rabbit
dtype: object
map accepts a dict or a Series. Values that are not found
in the dict are converted to NaN, unless the dict has a default
value (e.g. defaultdict):
>>> s.map({'cat': 'kitten', 'dog': 'puppy'})
0 kitten
1 puppy
2 NaN
3 NaN
dtype: object
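A minimal sketch of the dict-subclass case mentioned in the Notes, using
collections.defaultdict so values missing from the mapping get a default
instead of NaN (the mapping and the 'unknown' default are illustrative):
>>> from collections import defaultdict
>>> mapping = defaultdict(lambda: 'unknown', {'cat': 'kitten', 'dog': 'puppy'})
>>> s.map(mapping)
0     kitten
1      puppy
2    unknown
3    unknown
dtype: object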
It also accepts a function:
>>> s.map('I am a {}'.format)
0 I am a cat
1 I am a dog
2 I am a nan
3 I am a rabbit
dtype: object
To avoid applying the function to missing values (and keep them as
NaN) na_action='ignore' can be used:
>>> s.map('I am a {}'.format, na_action='ignore')
0 I am a cat
1 I am a dog
2 NaN
3 I am a rabbit
dtype: object
| 361
| 1,300
|
How to map to multiple values in a dictionary in pandas
I have the following pandas df:
Name
Jack
Alex
Jackie
Susan
I also have the following dict:
d = {'Jack':['Male','22'],'Alex':['Male','26'],'Jackie':['Female','28'],'Susan':['Female','30']}
I would like to add in two columns for Gender and Age so that my df returns:
Name Gender Age
Jack Male 22
Alex Male 26
Jackie Female 28
Susan Female 30
I have tried:
df['Gender'] = df.Name.map(d[0])
df['Age'] = df.Name.map(d[1])
but no such luck. Any ideas or help would be much appreciated! Thanks!
|
66,067,573
|
How do I assign a value to a specific row and column in a pandas database?
|
<p>I have an integer:</p>
<p><code>num = 1</code></p>
<p>and a database table <code>points</code>:</p>
<pre><code> X Y
0
1
2
3
</code></pre>
<p>How would I go about placing <code>num</code> into column <code>X</code> and field <code>3</code> using pandas?</p>
<p>I have searched around and found <code>points.ix[]</code>, which selects a specific row but using this I get an error message:</p>
<p><code>AttributeError: 'DataFrame' object has no attribute 'ix'</code></p>
<p>Apart from this I can't find anything else.</p>
| 66,068,195
| 2021-02-05T17:08:39.013000
| 2
| null | -1
| 492
|
python|pandas
|
<p><code>pandas.DataFrame.ix</code> has been deprecated since version 0.20.0 and was removed in pandas 1.0, which is why you see that <code>AttributeError</code>.</p>
<p>You can use <code>df.loc[3, 'X']</code> for the same result.</p>
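<p>A tiny runnable sketch of that assignment, with the frame shape and names assumed from the question:</p>
<pre><code>import pandas as pd

points = pd.DataFrame({'X': [None] * 4, 'Y': [None] * 4})
num = 1

points.loc[3, 'X'] = num   # row label 3, column 'X'
print(points)
</code></pre>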
| 2021-02-05T17:53:01.307000
| 0
|
https://pandas.pydata.org/docs/dev/getting_started/intro_tutorials/03_subset_data.html
|
How do I select a subset of a DataFrame?#
In [1]: import pandas as pd
Data used for this tutorial:
Titanic data
This tutorial uses the Titanic data set, stored as CSV. The data
consists of the following data columns:
PassengerId: Id of every passenger.
Survived: Indication whether passenger survived. 0 for yes and 1 for no.
Pclass: One out of the 3 ticket classes: Class 1, Class 2 and Class 3.
Name: Name of passenger.
Sex: Gender of passenger.
Age: Age of passenger in years.
SibSp: Number of siblings or spouses aboard.
Parch: Number of parents or children aboard.
Ticket: Ticket number of passenger.
Fare: Indicating the fare.
Cabin: Cabin number of passenger.
Embarked: Port of embarkation.
To raw data
In [2]: titanic = pd.read_csv("data/titanic.csv")
In [3]: titanic.head()
Out[3]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
1 2 1 1 ... 71.2833 C85 C
2 3 1 3 ... 7.9250 NaN S
pandas.DataFrame.ix has been deprecated since version 0.20.0 and was removed in pandas 1.0, which is why you see that AttributeError.
You can use df.loc[3, 'X'] for the same result.
3 4 1 1 ... 53.1000 C123 S
4 5 0 3 ... 8.0500 NaN S
[5 rows x 12 columns]
How do I select a subset of a DataFrame?#
How do I select specific columns from a DataFrame?#
I’m interested in the age of the Titanic passengers.
In [4]: ages = titanic["Age"]
In [5]: ages.head()
Out[5]:
0 22.0
1 38.0
2 26.0
3 35.0
4 35.0
Name: Age, dtype: float64
To select a single column, use square brackets [] with the column
name of the column of interest.
Each column in a DataFrame is a Series. As a single column is
selected, the returned object is a pandas Series. We can verify this
by checking the type of the output:
In [6]: type(titanic["Age"])
Out[6]: pandas.core.series.Series
And have a look at the shape of the output:
In [7]: titanic["Age"].shape
Out[7]: (891,)
DataFrame.shape is an attribute (remember tutorial on reading and writing, do not use parentheses for attributes) of a
pandas Series and DataFrame containing the number of rows and
columns: (nrows, ncolumns). A pandas Series is 1-dimensional and only
the number of rows is returned.
I’m interested in the age and sex of the Titanic passengers.
In [8]: age_sex = titanic[["Age", "Sex"]]
In [9]: age_sex.head()
Out[9]:
Age Sex
0 22.0 male
1 38.0 female
2 26.0 female
3 35.0 female
4 35.0 male
To select multiple columns, use a list of column names within the
selection brackets [].
Note
The inner square brackets define a
Python list with column names, whereas
the outer brackets are used to select the data from a pandas
DataFrame as seen in the previous example.
The returned data type is a pandas DataFrame:
In [10]: type(titanic[["Age", "Sex"]])
Out[10]: pandas.core.frame.DataFrame
In [11]: titanic[["Age", "Sex"]].shape
Out[11]: (891, 2)
The selection returned a DataFrame with 891 rows and 2 columns. Remember, a
DataFrame is 2-dimensional with both a row and column dimension.
To user guide: For basic information on indexing, see the user guide section on indexing and selecting data.
How do I filter specific rows from a DataFrame?#
I’m interested in the passengers older than 35 years.
In [12]: above_35 = titanic[titanic["Age"] > 35]
In [13]: above_35.head()
Out[13]:
PassengerId Survived Pclass ... Fare Cabin Embarked
1 2 1 1 ... 71.2833 C85 C
6 7 0 1 ... 51.8625 E46 S
11 12 1 1 ... 26.5500 C103 S
13 14 0 3 ... 31.2750 NaN S
15 16 1 2 ... 16.0000 NaN S
[5 rows x 12 columns]
To select rows based on a conditional expression, use a condition inside
the selection brackets [].
The condition inside the selection
brackets titanic["Age"] > 35 checks for which rows the Age
column has a value larger than 35:
In [14]: titanic["Age"] > 35
Out[14]:
0 False
1 True
2 False
3 False
4 False
...
886 False
887 False
888 False
889 False
890 False
Name: Age, Length: 891, dtype: bool
The output of the conditional expression (>, but also ==,
!=, <, <=,… would work) is actually a pandas Series of
boolean values (either True or False) with the same number of
rows as the original DataFrame. Such a Series of boolean values
can be used to filter the DataFrame by putting it in between the
selection brackets []. Only rows for which the value is True
will be selected.
We know from before that the original Titanic DataFrame consists of
891 rows. Let’s have a look at the number of rows which satisfy the
condition by checking the shape attribute of the resulting
DataFrame above_35:
In [15]: above_35.shape
Out[15]: (217, 12)
I’m interested in the Titanic passengers from cabin class 2 and 3.
In [16]: class_23 = titanic[titanic["Pclass"].isin([2, 3])]
In [17]: class_23.head()
Out[17]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
2 3 1 3 ... 7.9250 NaN S
4 5 0 3 ... 8.0500 NaN S
5 6 0 3 ... 8.4583 NaN Q
7 8 0 3 ... 21.0750 NaN S
[5 rows x 12 columns]
Similar to the conditional expression, the isin() conditional function
returns a True for each row the values are in the provided list. To
filter the rows based on such a function, use the conditional function
inside the selection brackets []. In this case, the condition inside
the selection brackets titanic["Pclass"].isin([2, 3]) checks for
which rows the Pclass column is either 2 or 3.
The above is equivalent to filtering by rows for which the class is
either 2 or 3 and combining the two statements with an | (or)
operator:
In [18]: class_23 = titanic[(titanic["Pclass"] == 2) | (titanic["Pclass"] == 3)]
In [19]: class_23.head()
Out[19]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
2 3 1 3 ... 7.9250 NaN S
4 5 0 3 ... 8.0500 NaN S
5 6 0 3 ... 8.4583 NaN Q
7 8 0 3 ... 21.0750 NaN S
[5 rows x 12 columns]
Note
When combining multiple conditional statements, each condition
must be surrounded by parentheses (). Moreover, you can not use
or/and but need to use the or operator | and the and
operator &.
To user guide: See the dedicated section in the user guide about boolean indexing or about the isin function.
I want to work with passenger data for which the age is known.
In [20]: age_no_na = titanic[titanic["Age"].notna()]
In [21]: age_no_na.head()
Out[21]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
1 2 1 1 ... 71.2833 C85 C
2 3 1 3 ... 7.9250 NaN S
3 4 1 1 ... 53.1000 C123 S
4 5 0 3 ... 8.0500 NaN S
[5 rows x 12 columns]
The notna() conditional function returns a True for each row the
values are not a Null value. As such, this can be combined with the
selection brackets [] to filter the data table.
You might wonder what actually changed, as the first 5 lines are still
the same values. One way to verify is to check if the shape has changed:
In [22]: age_no_na.shape
Out[22]: (714, 12)
To user guide: For more dedicated functions on missing values, see the user guide section about handling missing data.
How do I select specific rows and columns from a DataFrame?#
I’m interested in the names of the passengers older than 35 years.
In [23]: adult_names = titanic.loc[titanic["Age"] > 35, "Name"]
In [24]: adult_names.head()
Out[24]:
1 Cumings, Mrs. John Bradley (Florence Briggs Th...
6 McCarthy, Mr. Timothy J
11 Bonnell, Miss. Elizabeth
13 Andersson, Mr. Anders Johan
15 Hewlett, Mrs. (Mary D Kingcome)
Name: Name, dtype: object
In this case, a subset of both rows and columns is made in one go and
just using selection brackets [] is not sufficient anymore. The
loc/iloc operators are required in front of the selection
brackets []. When using loc/iloc, the part before the comma
is the rows you want, and the part after the comma is the columns you
want to select.
When using the column names, row labels or a condition expression, use
the loc operator in front of the selection brackets []. For both
the part before and after the comma, you can use a single label, a list
of labels, a slice of labels, a conditional expression or a colon. Using
a colon specifies you want to select all rows or columns.
I’m interested in rows 10 till 25 and columns 3 to 5.
In [25]: titanic.iloc[9:25, 2:5]
Out[25]:
Pclass Name Sex
9 2 Nasser, Mrs. Nicholas (Adele Achem) female
10 3 Sandstrom, Miss. Marguerite Rut female
11 1 Bonnell, Miss. Elizabeth female
12 3 Saundercock, Mr. William Henry male
13 3 Andersson, Mr. Anders Johan male
.. ... ... ...
20 2 Fynney, Mr. Joseph J male
21 2 Beesley, Mr. Lawrence male
22 3 McGowan, Miss. Anna "Annie" female
23 1 Sloper, Mr. William Thompson male
24 3 Palsson, Miss. Torborg Danira female
[16 rows x 3 columns]
Again, a subset of both rows and columns is made in one go and just
using selection brackets [] is not sufficient anymore. When
specifically interested in certain rows and/or columns based on their
position in the table, use the iloc operator in front of the
selection brackets [].
When selecting specific rows and/or columns with loc or iloc,
new values can be assigned to the selected data. For example, to assign
the name anonymous to the first 3 elements of the third column:
In [26]: titanic.iloc[0:3, 3] = "anonymous"
In [27]: titanic.head()
Out[27]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
1 2 1 1 ... 71.2833 C85 C
2 3 1 3 ... 7.9250 NaN S
3 4 1 1 ... 53.1000 C123 S
4 5 0 3 ... 8.0500 NaN S
[5 rows x 12 columns]
To user guide: See the user guide section on different choices for indexing to get more insight in the usage of loc and iloc.
REMEMBER
When selecting subsets of data, square brackets [] are used.
Inside these brackets, you can use a single column/row label, a list
of column/row labels, a slice of labels, a conditional expression or
a colon.
Select specific rows and/or columns using loc when using the row
and column names.
Select specific rows and/or columns using iloc when using the
positions in the table.
You can assign new values to a selection based on loc/iloc.
To user guide: A full overview of indexing is provided in the user guide pages on indexing and selecting data.
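As a small combined sketch of the points above, reusing the Titanic frame from this page (the CSV path is the tutorial's assumption): select rows with a condition, pick the Name column, and assign in a single loc call.
import pandas as pd

titanic = pd.read_csv("data/titanic.csv")
# Anonymize the names of all passengers older than 35 in one loc assignment.
titanic.loc[titanic["Age"] > 35, "Name"] = "anonymous"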
| 1,062
| 1,161
|
How do I assign a value to a specific row and column in a pandas database?
I have an integer:
num = 1
and a database table points:
X Y
0
1
2
3
How would I go about placing num into column X and field 3 using pandas?
I have searched around and found points.ix[], which selects a specific row but using this I get an error message:
AttributeError: 'DataFrame' object has no attribute 'ix'
Apart from this I can't find anything else.
|
68,398,818
|
Create a dataframe from a series with a TimeSeriesIndex multiplied by another series
|
<p>Let's say I have a series, ser1, with a TimeSeriesIndex of length x. I also have another series, ser2, of length y. How do I multiply these so that I get a dataframe of shape (x, y), where the index comes from ser1 and the columns are the indices of ser2? I want every element of ser2 to be multiplied by the values of each element in ser1.</p>
<pre><code>import pandas as pd
ser1 = pd.Series([100, 105, 110, 114, 89],index=pd.date_range(start='2021-01-01', end='2021-01-05', freq='D'), name='test')
test_ser2 = pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'])
</code></pre>
<p>Perhaps this is more elegantly done with numpy.</p>
| 68,398,934
| 2021-07-15T18:05:54.047000
| 1
| null | 0
| 19
|
pandas
|
<p>Try this using <code>np.outer</code> with the pandas DataFrame constructor:</p>
<pre><code>import numpy as np
pd.DataFrame(np.outer(ser1, test_ser2), index=ser1.index, columns=test_ser2.index)
</code></pre>
<p>Output:</p>
<pre><code> a b c d e
2021-01-01 100 200 300 400 500
2021-01-02 105 210 315 420 525
2021-01-03 110 220 330 440 550
2021-01-04 114 228 342 456 570
2021-01-05 89 178 267 356 445
</code></pre>
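<p>If you'd rather stay purely in pandas, a dict comprehension over <code>test_ser2</code> builds the same frame; this is only a sketch using the variable names defined in the question:</p>
<pre><code>import pandas as pd

ser1 = pd.Series([100, 105, 110, 114, 89],
                 index=pd.date_range(start='2021-01-01', end='2021-01-05', freq='D'),
                 name='test')
test_ser2 = pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'])

# One column per label of test_ser2: the whole ser1 column times that scalar.
out = pd.DataFrame({col: ser1 * val for col, val in test_ser2.items()})
</code></pre>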
| 2021-07-15T18:15:28.347000
| 1
|
https://pandas.pydata.org/docs/user_guide/timeseries.html
|
Time series / date functionality#
Time series / date functionality#
pandas contains extensive capabilities and features for working with time series data for all domains.
Using the NumPy datetime64 and timedelta64 dtypes, pandas has consolidated a large number of
features from other Python libraries like scikits.timeseries as well as created
a tremendous amount of new functionality for manipulating time series data.
For example, pandas supports:
Parsing time series information from various sources and formats
In [1]: import datetime
In [2]: dti = pd.to_datetime(
...: ["1/1/2018", np.datetime64("2018-01-01"), datetime.datetime(2018, 1, 1)]
Try this using np.outer with the pandas DataFrame constructor:
import numpy as np
pd.DataFrame(np.outer(ser1, test_ser2), index=ser1.index, columns=test_ser2.index)
Output:
a b c d e
2021-01-01 100 200 300 400 500
2021-01-02 105 210 315 420 525
2021-01-03 110 220 330 440 550
2021-01-04 114 228 342 456 570
2021-01-05 89 178 267 356 445
...: )
...:
In [3]: dti
Out[3]: DatetimeIndex(['2018-01-01', '2018-01-01', '2018-01-01'], dtype='datetime64[ns]', freq=None)
Generate sequences of fixed-frequency dates and time spans
In [4]: dti = pd.date_range("2018-01-01", periods=3, freq="H")
In [5]: dti
Out[5]:
DatetimeIndex(['2018-01-01 00:00:00', '2018-01-01 01:00:00',
'2018-01-01 02:00:00'],
dtype='datetime64[ns]', freq='H')
Manipulating and converting date times with timezone information
In [6]: dti = dti.tz_localize("UTC")
In [7]: dti
Out[7]:
DatetimeIndex(['2018-01-01 00:00:00+00:00', '2018-01-01 01:00:00+00:00',
'2018-01-01 02:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq='H')
In [8]: dti.tz_convert("US/Pacific")
Out[8]:
DatetimeIndex(['2017-12-31 16:00:00-08:00', '2017-12-31 17:00:00-08:00',
'2017-12-31 18:00:00-08:00'],
dtype='datetime64[ns, US/Pacific]', freq='H')
Resampling or converting a time series to a particular frequency
In [9]: idx = pd.date_range("2018-01-01", periods=5, freq="H")
In [10]: ts = pd.Series(range(len(idx)), index=idx)
In [11]: ts
Out[11]:
2018-01-01 00:00:00 0
2018-01-01 01:00:00 1
2018-01-01 02:00:00 2
2018-01-01 03:00:00 3
2018-01-01 04:00:00 4
Freq: H, dtype: int64
In [12]: ts.resample("2H").mean()
Out[12]:
2018-01-01 00:00:00 0.5
2018-01-01 02:00:00 2.5
2018-01-01 04:00:00 4.0
Freq: 2H, dtype: float64
Performing date and time arithmetic with absolute or relative time increments
In [13]: friday = pd.Timestamp("2018-01-05")
In [14]: friday.day_name()
Out[14]: 'Friday'
# Add 1 day
In [15]: saturday = friday + pd.Timedelta("1 day")
In [16]: saturday.day_name()
Out[16]: 'Saturday'
# Add 1 business day (Friday --> Monday)
In [17]: monday = friday + pd.offsets.BDay()
In [18]: monday.day_name()
Out[18]: 'Monday'
pandas provides a relatively compact and self-contained set of tools for
performing the above tasks and more.
Overview#
pandas captures 4 general time related concepts:
Date times: A specific date and time with timezone support. Similar to datetime.datetime from the standard library.
Time deltas: An absolute time duration. Similar to datetime.timedelta from the standard library.
Time spans: A span of time defined by a point in time and its associated frequency.
Date offsets: A relative time duration that respects calendar arithmetic. Similar to dateutil.relativedelta.relativedelta from the dateutil package.
Concept
Scalar Class
Array Class
pandas Data Type
Primary Creation Method
Date times
Timestamp
DatetimeIndex
datetime64[ns] or datetime64[ns, tz]
to_datetime or date_range
Time deltas
Timedelta
TimedeltaIndex
timedelta64[ns]
to_timedelta or timedelta_range
Time spans
Period
PeriodIndex
period[freq]
Period or period_range
Date offsets
DateOffset
None
None
DateOffset
For time series data, it’s conventional to represent the time component in the index of a Series or DataFrame
so manipulations can be performed with respect to the time element.
In [19]: pd.Series(range(3), index=pd.date_range("2000", freq="D", periods=3))
Out[19]:
2000-01-01 0
2000-01-02 1
2000-01-03 2
Freq: D, dtype: int64
However, Series and DataFrame can directly also support the time component as data itself.
In [20]: pd.Series(pd.date_range("2000", freq="D", periods=3))
Out[20]:
0 2000-01-01
1 2000-01-02
2 2000-01-03
dtype: datetime64[ns]
Series and DataFrame have extended data type support and functionality for datetime, timedelta
and Period data when passed into those constructors. DateOffset
data however will be stored as object data.
In [21]: pd.Series(pd.period_range("1/1/2011", freq="M", periods=3))
Out[21]:
0 2011-01
1 2011-02
2 2011-03
dtype: period[M]
In [22]: pd.Series([pd.DateOffset(1), pd.DateOffset(2)])
Out[22]:
0 <DateOffset>
1 <2 * DateOffsets>
dtype: object
In [23]: pd.Series(pd.date_range("1/1/2011", freq="M", periods=3))
Out[23]:
0 2011-01-31
1 2011-02-28
2 2011-03-31
dtype: datetime64[ns]
Lastly, pandas represents null date times, time deltas, and time spans as NaT which
is useful for representing missing or null date like values and behaves similar
as np.nan does for float data.
In [24]: pd.Timestamp(pd.NaT)
Out[24]: NaT
In [25]: pd.Timedelta(pd.NaT)
Out[25]: NaT
In [26]: pd.Period(pd.NaT)
Out[26]: NaT
# Equality acts as np.nan would
In [27]: pd.NaT == pd.NaT
Out[27]: False
Timestamps vs. time spans#
Timestamped data is the most basic type of time series data that associates
values with points in time. For pandas objects it means using the points in
time.
In [28]: pd.Timestamp(datetime.datetime(2012, 5, 1))
Out[28]: Timestamp('2012-05-01 00:00:00')
In [29]: pd.Timestamp("2012-05-01")
Out[29]: Timestamp('2012-05-01 00:00:00')
In [30]: pd.Timestamp(2012, 5, 1)
Out[30]: Timestamp('2012-05-01 00:00:00')
However, in many cases it is more natural to associate things like change
variables with a time span instead. The span represented by Period can be
specified explicitly, or inferred from datetime string format.
For example:
In [31]: pd.Period("2011-01")
Out[31]: Period('2011-01', 'M')
In [32]: pd.Period("2012-05", freq="D")
Out[32]: Period('2012-05-01', 'D')
Timestamp and Period can serve as an index. Lists of
Timestamp and Period are automatically coerced to DatetimeIndex
and PeriodIndex respectively.
In [33]: dates = [
....: pd.Timestamp("2012-05-01"),
....: pd.Timestamp("2012-05-02"),
....: pd.Timestamp("2012-05-03"),
....: ]
....:
In [34]: ts = pd.Series(np.random.randn(3), dates)
In [35]: type(ts.index)
Out[35]: pandas.core.indexes.datetimes.DatetimeIndex
In [36]: ts.index
Out[36]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
In [37]: ts
Out[37]:
2012-05-01 0.469112
2012-05-02 -0.282863
2012-05-03 -1.509059
dtype: float64
In [38]: periods = [pd.Period("2012-01"), pd.Period("2012-02"), pd.Period("2012-03")]
In [39]: ts = pd.Series(np.random.randn(3), periods)
In [40]: type(ts.index)
Out[40]: pandas.core.indexes.period.PeriodIndex
In [41]: ts.index
Out[41]: PeriodIndex(['2012-01', '2012-02', '2012-03'], dtype='period[M]')
In [42]: ts
Out[42]:
2012-01 -1.135632
2012-02 1.212112
2012-03 -0.173215
Freq: M, dtype: float64
pandas allows you to capture both representations and
convert between them. Under the hood, pandas represents timestamps using
instances of Timestamp and sequences of timestamps using instances of
DatetimeIndex. For regular time spans, pandas uses Period objects for
scalar values and PeriodIndex for sequences of spans. Better support for
irregular intervals with arbitrary start and end points are forth-coming in
future releases.
Converting to timestamps#
To convert a Series or list-like object of date-like objects e.g. strings,
epochs, or a mixture, you can use the to_datetime function. When passed
a Series, this returns a Series (with the same index), while a list-like
is converted to a DatetimeIndex:
In [43]: pd.to_datetime(pd.Series(["Jul 31, 2009", "2010-01-10", None]))
Out[43]:
0 2009-07-31
1 2010-01-10
2 NaT
dtype: datetime64[ns]
In [44]: pd.to_datetime(["2005/11/23", "2010.12.31"])
Out[44]: DatetimeIndex(['2005-11-23', '2010-12-31'], dtype='datetime64[ns]', freq=None)
If you use dates which start with the day first (i.e. European style),
you can pass the dayfirst flag:
In [45]: pd.to_datetime(["04-01-2012 10:00"], dayfirst=True)
Out[45]: DatetimeIndex(['2012-01-04 10:00:00'], dtype='datetime64[ns]', freq=None)
In [46]: pd.to_datetime(["14-01-2012", "01-14-2012"], dayfirst=True)
Out[46]: DatetimeIndex(['2012-01-14', '2012-01-14'], dtype='datetime64[ns]', freq=None)
Warning
You see in the above example that dayfirst isn’t strict. If a date
can’t be parsed with the day being first it will be parsed as if
dayfirst were False, and in the case of parsing delimited date strings
(e.g. 31-12-2012) then a warning will also be raised.
If you pass a single string to to_datetime, it returns a single Timestamp.
Timestamp can also accept string input, but it doesn’t accept string parsing
options like dayfirst or format, so use to_datetime if these are required.
In [47]: pd.to_datetime("2010/11/12")
Out[47]: Timestamp('2010-11-12 00:00:00')
In [48]: pd.Timestamp("2010/11/12")
Out[48]: Timestamp('2010-11-12 00:00:00')
You can also use the DatetimeIndex constructor directly:
In [49]: pd.DatetimeIndex(["2018-01-01", "2018-01-03", "2018-01-05"])
Out[49]: DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'], dtype='datetime64[ns]', freq=None)
The string ‘infer’ can be passed in order to set the frequency of the index as the
inferred frequency upon creation:
In [50]: pd.DatetimeIndex(["2018-01-01", "2018-01-03", "2018-01-05"], freq="infer")
Out[50]: DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'], dtype='datetime64[ns]', freq='2D')
Providing a format argument#
In addition to the required datetime string, a format argument can be passed to ensure specific parsing.
This could also potentially speed up the conversion considerably.
In [51]: pd.to_datetime("2010/11/12", format="%Y/%m/%d")
Out[51]: Timestamp('2010-11-12 00:00:00')
In [52]: pd.to_datetime("12-11-2010 00:00", format="%d-%m-%Y %H:%M")
Out[52]: Timestamp('2010-11-12 00:00:00')
For more information on the choices available when specifying the format
option, see the Python datetime documentation.
Assembling datetime from multiple DataFrame columns#
You can also pass a DataFrame of integer or string columns to assemble into a Series of Timestamps.
In [53]: df = pd.DataFrame(
....: {"year": [2015, 2016], "month": [2, 3], "day": [4, 5], "hour": [2, 3]}
....: )
....:
In [54]: pd.to_datetime(df)
Out[54]:
0 2015-02-04 02:00:00
1 2016-03-05 03:00:00
dtype: datetime64[ns]
You can pass only the columns that you need to assemble.
In [55]: pd.to_datetime(df[["year", "month", "day"]])
Out[55]:
0 2015-02-04
1 2016-03-05
dtype: datetime64[ns]
pd.to_datetime looks for standard designations of the datetime component in the column names, including:
required: year, month, day
optional: hour, minute, second, millisecond, microsecond, nanosecond
Invalid data#
The default behavior, errors='raise', is to raise when unparsable:
In [2]: pd.to_datetime(['2009/07/31', 'asd'], errors='raise')
ValueError: Unknown string format
Pass errors='ignore' to return the original input when unparsable:
In [56]: pd.to_datetime(["2009/07/31", "asd"], errors="ignore")
Out[56]: Index(['2009/07/31', 'asd'], dtype='object')
Pass errors='coerce' to convert unparsable data to NaT (not a time):
In [57]: pd.to_datetime(["2009/07/31", "asd"], errors="coerce")
Out[57]: DatetimeIndex(['2009-07-31', 'NaT'], dtype='datetime64[ns]', freq=None)
Epoch timestamps#
pandas supports converting integer or float epoch times to Timestamp and
DatetimeIndex. The default unit is nanoseconds, since that is how Timestamp
objects are stored internally. However, epochs are often stored in another unit
which can be specified. These are computed from the starting point specified by the
origin parameter.
In [58]: pd.to_datetime(
....: [1349720105, 1349806505, 1349892905, 1349979305, 1350065705], unit="s"
....: )
....:
Out[58]:
DatetimeIndex(['2012-10-08 18:15:05', '2012-10-09 18:15:05',
'2012-10-10 18:15:05', '2012-10-11 18:15:05',
'2012-10-12 18:15:05'],
dtype='datetime64[ns]', freq=None)
In [59]: pd.to_datetime(
....: [1349720105100, 1349720105200, 1349720105300, 1349720105400, 1349720105500],
....: unit="ms",
....: )
....:
Out[59]:
DatetimeIndex(['2012-10-08 18:15:05.100000', '2012-10-08 18:15:05.200000',
'2012-10-08 18:15:05.300000', '2012-10-08 18:15:05.400000',
'2012-10-08 18:15:05.500000'],
dtype='datetime64[ns]', freq=None)
Note
The unit parameter does not use the same strings as the format parameter
(that was discussed above). The
available units are listed on the documentation for pandas.to_datetime().
Changed in version 1.0.0.
Constructing a Timestamp or DatetimeIndex with an epoch timestamp
with the tz argument specified will raise a ValueError. If you have
epochs in wall time in another timezone, you can read the epochs
as timezone-naive timestamps and then localize to the appropriate timezone:
In [60]: pd.Timestamp(1262347200000000000).tz_localize("US/Pacific")
Out[60]: Timestamp('2010-01-01 12:00:00-0800', tz='US/Pacific')
In [61]: pd.DatetimeIndex([1262347200000000000]).tz_localize("US/Pacific")
Out[61]: DatetimeIndex(['2010-01-01 12:00:00-08:00'], dtype='datetime64[ns, US/Pacific]', freq=None)
Note
Epoch times will be rounded to the nearest nanosecond.
Warning
Conversion of float epoch times can lead to inaccurate and unexpected results.
Python floats have about 15 digits precision in
decimal. Rounding during conversion from float to high precision Timestamp is
unavoidable. The only way to achieve exact precision is to use a fixed-width
types (e.g. an int64).
In [62]: pd.to_datetime([1490195805.433, 1490195805.433502912], unit="s")
Out[62]: DatetimeIndex(['2017-03-22 15:16:45.433000088', '2017-03-22 15:16:45.433502913'], dtype='datetime64[ns]', freq=None)
In [63]: pd.to_datetime(1490195805433502912, unit="ns")
Out[63]: Timestamp('2017-03-22 15:16:45.433502912')
See also
Using the origin parameter
From timestamps to epoch#
To invert the operation from above, namely, to convert from a Timestamp to a ‘unix’ epoch:
In [64]: stamps = pd.date_range("2012-10-08 18:15:05", periods=4, freq="D")
In [65]: stamps
Out[65]:
DatetimeIndex(['2012-10-08 18:15:05', '2012-10-09 18:15:05',
'2012-10-10 18:15:05', '2012-10-11 18:15:05'],
dtype='datetime64[ns]', freq='D')
We subtract the epoch (midnight at January 1, 1970 UTC) and then floor divide by the
“unit” (1 second).
In [66]: (stamps - pd.Timestamp("1970-01-01")) // pd.Timedelta("1s")
Out[66]: Int64Index([1349720105, 1349806505, 1349892905, 1349979305], dtype='int64')
Using the origin parameter#
Using the origin parameter, one can specify an alternative starting point for creation
of a DatetimeIndex. For example, to use 1960-01-01 as the starting date:
In [67]: pd.to_datetime([1, 2, 3], unit="D", origin=pd.Timestamp("1960-01-01"))
Out[67]: DatetimeIndex(['1960-01-02', '1960-01-03', '1960-01-04'], dtype='datetime64[ns]', freq=None)
The default is set at origin='unix', which defaults to 1970-01-01 00:00:00.
Commonly called ‘unix epoch’ or POSIX time.
In [68]: pd.to_datetime([1, 2, 3], unit="D")
Out[68]: DatetimeIndex(['1970-01-02', '1970-01-03', '1970-01-04'], dtype='datetime64[ns]', freq=None)
Generating ranges of timestamps#
To generate an index with timestamps, you can use either the DatetimeIndex or
Index constructor and pass in a list of datetime objects:
In [69]: dates = [
....: datetime.datetime(2012, 5, 1),
....: datetime.datetime(2012, 5, 2),
....: datetime.datetime(2012, 5, 3),
....: ]
....:
# Note the frequency information
In [70]: index = pd.DatetimeIndex(dates)
In [71]: index
Out[71]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
# Automatically converted to DatetimeIndex
In [72]: index = pd.Index(dates)
In [73]: index
Out[73]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
In practice this becomes very cumbersome because we often need a very long
index with a large number of timestamps. If we need timestamps on a regular
frequency, we can use the date_range() and bdate_range() functions
to create a DatetimeIndex. The default frequency for date_range is a
calendar day while the default for bdate_range is a business day:
In [74]: start = datetime.datetime(2011, 1, 1)
In [75]: end = datetime.datetime(2012, 1, 1)
In [76]: index = pd.date_range(start, end)
In [77]: index
Out[77]:
DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03', '2011-01-04',
'2011-01-05', '2011-01-06', '2011-01-07', '2011-01-08',
'2011-01-09', '2011-01-10',
...
'2011-12-23', '2011-12-24', '2011-12-25', '2011-12-26',
'2011-12-27', '2011-12-28', '2011-12-29', '2011-12-30',
'2011-12-31', '2012-01-01'],
dtype='datetime64[ns]', length=366, freq='D')
In [78]: index = pd.bdate_range(start, end)
In [79]: index
Out[79]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07', '2011-01-10', '2011-01-11', '2011-01-12',
'2011-01-13', '2011-01-14',
...
'2011-12-19', '2011-12-20', '2011-12-21', '2011-12-22',
'2011-12-23', '2011-12-26', '2011-12-27', '2011-12-28',
'2011-12-29', '2011-12-30'],
dtype='datetime64[ns]', length=260, freq='B')
Convenience functions like date_range and bdate_range can utilize a
variety of frequency aliases:
In [80]: pd.date_range(start, periods=1000, freq="M")
Out[80]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-30',
'2011-05-31', '2011-06-30', '2011-07-31', '2011-08-31',
'2011-09-30', '2011-10-31',
...
'2093-07-31', '2093-08-31', '2093-09-30', '2093-10-31',
'2093-11-30', '2093-12-31', '2094-01-31', '2094-02-28',
'2094-03-31', '2094-04-30'],
dtype='datetime64[ns]', length=1000, freq='M')
In [81]: pd.bdate_range(start, periods=250, freq="BQS")
Out[81]:
DatetimeIndex(['2011-01-03', '2011-04-01', '2011-07-01', '2011-10-03',
'2012-01-02', '2012-04-02', '2012-07-02', '2012-10-01',
'2013-01-01', '2013-04-01',
...
'2071-01-01', '2071-04-01', '2071-07-01', '2071-10-01',
'2072-01-01', '2072-04-01', '2072-07-01', '2072-10-03',
'2073-01-02', '2073-04-03'],
dtype='datetime64[ns]', length=250, freq='BQS-JAN')
date_range and bdate_range make it easy to generate a range of dates
using various combinations of parameters like start, end, periods,
and freq. The start and end dates are strictly inclusive, so dates outside
of those specified will not be generated:
In [82]: pd.date_range(start, end, freq="BM")
Out[82]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')
In [83]: pd.date_range(start, end, freq="W")
Out[83]:
DatetimeIndex(['2011-01-02', '2011-01-09', '2011-01-16', '2011-01-23',
'2011-01-30', '2011-02-06', '2011-02-13', '2011-02-20',
'2011-02-27', '2011-03-06', '2011-03-13', '2011-03-20',
'2011-03-27', '2011-04-03', '2011-04-10', '2011-04-17',
'2011-04-24', '2011-05-01', '2011-05-08', '2011-05-15',
'2011-05-22', '2011-05-29', '2011-06-05', '2011-06-12',
'2011-06-19', '2011-06-26', '2011-07-03', '2011-07-10',
'2011-07-17', '2011-07-24', '2011-07-31', '2011-08-07',
'2011-08-14', '2011-08-21', '2011-08-28', '2011-09-04',
'2011-09-11', '2011-09-18', '2011-09-25', '2011-10-02',
'2011-10-09', '2011-10-16', '2011-10-23', '2011-10-30',
'2011-11-06', '2011-11-13', '2011-11-20', '2011-11-27',
'2011-12-04', '2011-12-11', '2011-12-18', '2011-12-25',
'2012-01-01'],
dtype='datetime64[ns]', freq='W-SUN')
In [84]: pd.bdate_range(end=end, periods=20)
Out[84]:
DatetimeIndex(['2011-12-05', '2011-12-06', '2011-12-07', '2011-12-08',
'2011-12-09', '2011-12-12', '2011-12-13', '2011-12-14',
'2011-12-15', '2011-12-16', '2011-12-19', '2011-12-20',
'2011-12-21', '2011-12-22', '2011-12-23', '2011-12-26',
'2011-12-27', '2011-12-28', '2011-12-29', '2011-12-30'],
dtype='datetime64[ns]', freq='B')
In [85]: pd.bdate_range(start=start, periods=20)
Out[85]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07', '2011-01-10', '2011-01-11', '2011-01-12',
'2011-01-13', '2011-01-14', '2011-01-17', '2011-01-18',
'2011-01-19', '2011-01-20', '2011-01-21', '2011-01-24',
'2011-01-25', '2011-01-26', '2011-01-27', '2011-01-28'],
dtype='datetime64[ns]', freq='B')
Specifying start, end, and periods will generate a range of evenly spaced
dates from start to end inclusively, with periods number of elements in the
resulting DatetimeIndex:
In [86]: pd.date_range("2018-01-01", "2018-01-05", periods=5)
Out[86]:
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05'],
dtype='datetime64[ns]', freq=None)
In [87]: pd.date_range("2018-01-01", "2018-01-05", periods=10)
Out[87]:
DatetimeIndex(['2018-01-01 00:00:00', '2018-01-01 10:40:00',
'2018-01-01 21:20:00', '2018-01-02 08:00:00',
'2018-01-02 18:40:00', '2018-01-03 05:20:00',
'2018-01-03 16:00:00', '2018-01-04 02:40:00',
'2018-01-04 13:20:00', '2018-01-05 00:00:00'],
dtype='datetime64[ns]', freq=None)
Custom frequency ranges#
bdate_range can also generate a range of custom frequency dates by using
the weekmask and holidays parameters. These parameters will only be
used if a custom frequency string is passed.
In [88]: weekmask = "Mon Wed Fri"
In [89]: holidays = [datetime.datetime(2011, 1, 5), datetime.datetime(2011, 3, 14)]
In [90]: pd.bdate_range(start, end, freq="C", weekmask=weekmask, holidays=holidays)
Out[90]:
DatetimeIndex(['2011-01-03', '2011-01-07', '2011-01-10', '2011-01-12',
'2011-01-14', '2011-01-17', '2011-01-19', '2011-01-21',
'2011-01-24', '2011-01-26',
...
'2011-12-09', '2011-12-12', '2011-12-14', '2011-12-16',
'2011-12-19', '2011-12-21', '2011-12-23', '2011-12-26',
'2011-12-28', '2011-12-30'],
dtype='datetime64[ns]', length=154, freq='C')
In [91]: pd.bdate_range(start, end, freq="CBMS", weekmask=weekmask)
Out[91]:
DatetimeIndex(['2011-01-03', '2011-02-02', '2011-03-02', '2011-04-01',
'2011-05-02', '2011-06-01', '2011-07-01', '2011-08-01',
'2011-09-02', '2011-10-03', '2011-11-02', '2011-12-02'],
dtype='datetime64[ns]', freq='CBMS')
See also
Custom business days
Timestamp limitations#
Since pandas represents timestamps in nanosecond resolution, the time span that
can be represented using a 64-bit integer is limited to approximately 584 years:
In [92]: pd.Timestamp.min
Out[92]: Timestamp('1677-09-21 00:12:43.145224193')
In [93]: pd.Timestamp.max
Out[93]: Timestamp('2262-04-11 23:47:16.854775807')
See also
Representing out-of-bounds spans
Indexing#
One of the main uses for DatetimeIndex is as an index for pandas objects.
The DatetimeIndex class contains many time series related optimizations:
A large range of dates for various offsets are pre-computed and cached
under the hood in order to make generating subsequent date ranges very fast
(just have to grab a slice).
Fast shifting using the shift method on pandas objects.
Unioning of overlapping DatetimeIndex objects with the same frequency is
very fast (important for fast data alignment).
Quick access to date fields via properties such as year, month, etc.
Regularization functions like snap and very fast asof logic.
DatetimeIndex objects have all the basic functionality of regular Index
objects, and a smorgasbord of advanced time series specific methods for easy
frequency processing.
See also
Reindexing methods
Note
While pandas does not force you to have a sorted date index, some of these
methods may have unexpected or incorrect behavior if the dates are unsorted.
DatetimeIndex can be used like a regular index and offers all of its
intelligent functionality like selection, slicing, etc.
In [94]: rng = pd.date_range(start, end, freq="BM")
In [95]: ts = pd.Series(np.random.randn(len(rng)), index=rng)
In [96]: ts.index
Out[96]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')
In [97]: ts[:5].index
Out[97]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31'],
dtype='datetime64[ns]', freq='BM')
In [98]: ts[::2].index
Out[98]:
DatetimeIndex(['2011-01-31', '2011-03-31', '2011-05-31', '2011-07-29',
'2011-09-30', '2011-11-30'],
dtype='datetime64[ns]', freq='2BM')
Partial string indexing#
Dates and strings that parse to timestamps can be passed as indexing parameters:
In [99]: ts["1/31/2011"]
Out[99]: 0.11920871129693428
In [100]: ts[datetime.datetime(2011, 12, 25):]
Out[100]:
2011-12-30 0.56702
Freq: BM, dtype: float64
In [101]: ts["10/31/2011":"12/31/2011"]
Out[101]:
2011-10-31 0.271860
2011-11-30 -0.424972
2011-12-30 0.567020
Freq: BM, dtype: float64
To provide convenience for accessing longer time series, you can also pass in
the year or year and month as strings:
In [102]: ts["2011"]
Out[102]:
2011-01-31 0.119209
2011-02-28 -1.044236
2011-03-31 -0.861849
2011-04-29 -2.104569
2011-05-31 -0.494929
2011-06-30 1.071804
2011-07-29 0.721555
2011-08-31 -0.706771
2011-09-30 -1.039575
2011-10-31 0.271860
2011-11-30 -0.424972
2011-12-30 0.567020
Freq: BM, dtype: float64
In [103]: ts["2011-6"]
Out[103]:
2011-06-30 1.071804
Freq: BM, dtype: float64
This type of slicing will work on a DataFrame with a DatetimeIndex as well. Since the
partial string selection is a form of label slicing, the endpoints will be included. This
would include matching times on an included date:
Warning
Indexing DataFrame rows with a single string with getitem (e.g. frame[dtstring])
is deprecated starting with pandas 1.2.0 (given the ambiguity whether it is indexing
the rows or selecting a column) and will be removed in a future version. The equivalent
with .loc (e.g. frame.loc[dtstring]) is still supported.
In [104]: dft = pd.DataFrame(
.....: np.random.randn(100000, 1),
.....: columns=["A"],
.....: index=pd.date_range("20130101", periods=100000, freq="T"),
.....: )
.....:
In [105]: dft
Out[105]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-03-11 10:35:00 -0.747967
2013-03-11 10:36:00 -0.034523
2013-03-11 10:37:00 -0.201754
2013-03-11 10:38:00 -1.509067
2013-03-11 10:39:00 -1.693043
[100000 rows x 1 columns]
In [106]: dft.loc["2013"]
Out[106]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-03-11 10:35:00 -0.747967
2013-03-11 10:36:00 -0.034523
2013-03-11 10:37:00 -0.201754
2013-03-11 10:38:00 -1.509067
2013-03-11 10:39:00 -1.693043
[100000 rows x 1 columns]
This starts on the very first time in the month, and includes the last date and
time for the month:
In [107]: dft["2013-1":"2013-2"]
Out[107]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-28 23:55:00 0.850929
2013-02-28 23:56:00 0.976712
2013-02-28 23:57:00 -2.693884
2013-02-28 23:58:00 -1.575535
2013-02-28 23:59:00 -1.573517
[84960 rows x 1 columns]
This specifies a stop time that includes all of the times on the last day:
In [108]: dft["2013-1":"2013-2-28"]
Out[108]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-28 23:55:00 0.850929
2013-02-28 23:56:00 0.976712
2013-02-28 23:57:00 -2.693884
2013-02-28 23:58:00 -1.575535
2013-02-28 23:59:00 -1.573517
[84960 rows x 1 columns]
This specifies an exact stop time (and is not the same as the above):
In [109]: dft["2013-1":"2013-2-28 00:00:00"]
Out[109]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-27 23:56:00 1.197749
2013-02-27 23:57:00 0.720521
2013-02-27 23:58:00 -0.072718
2013-02-27 23:59:00 -0.681192
2013-02-28 00:00:00 -0.557501
[83521 rows x 1 columns]
We are stopping on the included end-point as it is part of the index:
In [110]: dft["2013-1-15":"2013-1-15 12:30:00"]
Out[110]:
A
2013-01-15 00:00:00 -0.984810
2013-01-15 00:01:00 0.941451
2013-01-15 00:02:00 1.559365
2013-01-15 00:03:00 1.034374
2013-01-15 00:04:00 -1.480656
... ...
2013-01-15 12:26:00 0.371454
2013-01-15 12:27:00 -0.930806
2013-01-15 12:28:00 -0.069177
2013-01-15 12:29:00 0.066510
2013-01-15 12:30:00 -0.003945
[751 rows x 1 columns]
DatetimeIndex partial string indexing also works on a DataFrame with a MultiIndex:
In [111]: dft2 = pd.DataFrame(
.....: np.random.randn(20, 1),
.....: columns=["A"],
.....: index=pd.MultiIndex.from_product(
.....: [pd.date_range("20130101", periods=10, freq="12H"), ["a", "b"]]
.....: ),
.....: )
.....:
In [112]: dft2
Out[112]:
A
2013-01-01 00:00:00 a -0.298694
b 0.823553
2013-01-01 12:00:00 a 0.943285
b -1.479399
2013-01-02 00:00:00 a -1.643342
... ...
2013-01-04 12:00:00 b 0.069036
2013-01-05 00:00:00 a 0.122297
b 1.422060
2013-01-05 12:00:00 a 0.370079
b 1.016331
[20 rows x 1 columns]
In [113]: dft2.loc["2013-01-05"]
Out[113]:
A
2013-01-05 00:00:00 a 0.122297
b 1.422060
2013-01-05 12:00:00 a 0.370079
b 1.016331
In [114]: idx = pd.IndexSlice
In [115]: dft2 = dft2.swaplevel(0, 1).sort_index()
In [116]: dft2.loc[idx[:, "2013-01-05"], :]
Out[116]:
A
a 2013-01-05 00:00:00 0.122297
2013-01-05 12:00:00 0.370079
b 2013-01-05 00:00:00 1.422060
2013-01-05 12:00:00 1.016331
New in version 0.25.0.
Slicing with string indexing also honors UTC offset.
In [117]: df = pd.DataFrame([0], index=pd.DatetimeIndex(["2019-01-01"], tz="US/Pacific"))
In [118]: df
Out[118]:
0
2019-01-01 00:00:00-08:00 0
In [119]: df["2019-01-01 12:00:00+04:00":"2019-01-01 13:00:00+04:00"]
Out[119]:
0
2019-01-01 00:00:00-08:00 0
Slice vs. exact match#
The same string used as an indexing parameter can be treated either as a slice or as an exact match depending on the resolution of the index. If the string is less accurate than the index, it will be treated as a slice, otherwise as an exact match.
Consider a Series object with a minute resolution index:
In [120]: series_minute = pd.Series(
.....: [1, 2, 3],
.....: pd.DatetimeIndex(
.....: ["2011-12-31 23:59:00", "2012-01-01 00:00:00", "2012-01-01 00:02:00"]
.....: ),
.....: )
.....:
In [121]: series_minute.index.resolution
Out[121]: 'minute'
A timestamp string less accurate than a minute gives a Series object.
In [122]: series_minute["2011-12-31 23"]
Out[122]:
2011-12-31 23:59:00 1
dtype: int64
A timestamp string with minute resolution (or more accurate), gives a scalar instead, i.e. it is not casted to a slice.
In [123]: series_minute["2011-12-31 23:59"]
Out[123]: 1
In [124]: series_minute["2011-12-31 23:59:00"]
Out[124]: 1
If index resolution is second, then the minute-accurate timestamp gives a
Series.
In [125]: series_second = pd.Series(
.....: [1, 2, 3],
.....: pd.DatetimeIndex(
.....: ["2011-12-31 23:59:59", "2012-01-01 00:00:00", "2012-01-01 00:00:01"]
.....: ),
.....: )
.....:
In [126]: series_second.index.resolution
Out[126]: 'second'
In [127]: series_second["2011-12-31 23:59"]
Out[127]:
2011-12-31 23:59:59 1
dtype: int64
If the timestamp string is treated as a slice, it can be used to index DataFrame with .loc[] as well.
In [128]: dft_minute = pd.DataFrame(
.....: {"a": [1, 2, 3], "b": [4, 5, 6]}, index=series_minute.index
.....: )
.....:
In [129]: dft_minute.loc["2011-12-31 23"]
Out[129]:
a b
2011-12-31 23:59:00 1 4
Warning
However, if the string is treated as an exact match, the selection in DataFrame’s [] will be column-wise and not row-wise, see Indexing Basics. For example dft_minute['2011-12-31 23:59'] will raise KeyError as '2011-12-31 23:59' has the same resolution as the index and there is no column with such name:
To always have unambiguous selection, whether the row is treated as a slice or a single selection, use .loc.
In [130]: dft_minute.loc["2011-12-31 23:59"]
Out[130]:
a 1
b 4
Name: 2011-12-31 23:59:00, dtype: int64
Note also that DatetimeIndex resolution cannot be less precise than day.
In [131]: series_monthly = pd.Series(
.....: [1, 2, 3], pd.DatetimeIndex(["2011-12", "2012-01", "2012-02"])
.....: )
.....:
In [132]: series_monthly.index.resolution
Out[132]: 'day'
In [133]: series_monthly["2011-12"] # returns Series
Out[133]:
2011-12-01 1
dtype: int64
Exact indexing#
As discussed in previous section, indexing a DatetimeIndex with a partial string depends on the “accuracy” of the period, in other words how specific the interval is in relation to the resolution of the index. In contrast, indexing with Timestamp or datetime objects is exact, because the objects have exact meaning. These also follow the semantics of including both endpoints.
These Timestamp and datetime objects have exact hours, minutes, and seconds, even though they were not explicitly specified (they are 0).
In [134]: dft[datetime.datetime(2013, 1, 1): datetime.datetime(2013, 2, 28)]
Out[134]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-27 23:56:00 1.197749
2013-02-27 23:57:00 0.720521
2013-02-27 23:58:00 -0.072718
2013-02-27 23:59:00 -0.681192
2013-02-28 00:00:00 -0.557501
[83521 rows x 1 columns]
With no defaults.
In [135]: dft[
.....: datetime.datetime(2013, 1, 1, 10, 12, 0): datetime.datetime(
.....: 2013, 2, 28, 10, 12, 0
.....: )
.....: ]
.....:
Out[135]:
A
2013-01-01 10:12:00 0.565375
2013-01-01 10:13:00 0.068184
2013-01-01 10:14:00 0.788871
2013-01-01 10:15:00 -0.280343
2013-01-01 10:16:00 0.931536
... ...
2013-02-28 10:08:00 0.148098
2013-02-28 10:09:00 -0.388138
2013-02-28 10:10:00 0.139348
2013-02-28 10:11:00 0.085288
2013-02-28 10:12:00 0.950146
[83521 rows x 1 columns]
Truncating & fancy indexing#
A truncate() convenience function is provided that is similar
to slicing. Note that truncate assumes a 0 value for any unspecified date
component in a DatetimeIndex in contrast to slicing which returns any
partially matching dates:
In [136]: rng2 = pd.date_range("2011-01-01", "2012-01-01", freq="W")
In [137]: ts2 = pd.Series(np.random.randn(len(rng2)), index=rng2)
In [138]: ts2.truncate(before="2011-11", after="2011-12")
Out[138]:
2011-11-06 0.437823
2011-11-13 -0.293083
2011-11-20 -0.059881
2011-11-27 1.252450
Freq: W-SUN, dtype: float64
In [139]: ts2["2011-11":"2011-12"]
Out[139]:
2011-11-06 0.437823
2011-11-13 -0.293083
2011-11-20 -0.059881
2011-11-27 1.252450
2011-12-04 0.046611
2011-12-11 0.059478
2011-12-18 -0.286539
2011-12-25 0.841669
Freq: W-SUN, dtype: float64
Even complicated fancy indexing that breaks the DatetimeIndex frequency
regularity will result in a DatetimeIndex, although frequency is lost:
In [140]: ts2[[0, 2, 6]].index
Out[140]: DatetimeIndex(['2011-01-02', '2011-01-16', '2011-02-13'], dtype='datetime64[ns]', freq=None)
Time/date components#
There are several time/date properties that one can access from Timestamp or a collection of timestamps like a DatetimeIndex.
Property
Description
year
The year of the datetime
month
The month of the datetime
day
The days of the datetime
hour
The hour of the datetime
minute
The minutes of the datetime
second
The seconds of the datetime
microsecond
The microseconds of the datetime
nanosecond
The nanoseconds of the datetime
date
Returns datetime.date (does not contain timezone information)
time
Returns datetime.time (does not contain timezone information)
timetz
Returns datetime.time as local time with timezone information
dayofyear
The ordinal day of year
day_of_year
The ordinal day of year
weekofyear
The week ordinal of the year
week
The week ordinal of the year
dayofweek
The number of the day of the week with Monday=0, Sunday=6
day_of_week
The number of the day of the week with Monday=0, Sunday=6
weekday
The number of the day of the week with Monday=0, Sunday=6
quarter
Quarter of the date: Jan-Mar = 1, Apr-Jun = 2, etc.
days_in_month
The number of days in the month of the datetime
is_month_start
Logical indicating if first day of month (defined by frequency)
is_month_end
Logical indicating if last day of month (defined by frequency)
is_quarter_start
Logical indicating if first day of quarter (defined by frequency)
is_quarter_end
Logical indicating if last day of quarter (defined by frequency)
is_year_start
Logical indicating if first day of year (defined by frequency)
is_year_end
Logical indicating if last day of year (defined by frequency)
is_leap_year
Logical indicating if the date belongs to a leap year
Furthermore, if you have a Series with datetimelike values, then you can
access these properties via the .dt accessor, as detailed in the section
on .dt accessors.
New in version 1.1.0.
You may obtain the year, week and day components of the ISO year from the ISO 8601 standard:
In [141]: idx = pd.date_range(start="2019-12-29", freq="D", periods=4)
In [142]: idx.isocalendar()
Out[142]:
year week day
2019-12-29 2019 52 7
2019-12-30 2020 1 1
2019-12-31 2020 1 2
2020-01-01 2020 1 3
In [143]: idx.to_series().dt.isocalendar()
Out[143]:
year week day
2019-12-29 2019 52 7
2019-12-30 2020 1 1
2019-12-31 2020 1 2
2020-01-01 2020 1 3
DateOffset objects#
In the preceding examples, frequency strings (e.g. 'D') were used to specify
a frequency that defined:
how the date times in DatetimeIndex were spaced when using date_range()
the frequency of a Period or PeriodIndex
These frequency strings map to a DateOffset object and its subclasses. A DateOffset
is similar to a Timedelta that represents a duration of time but follows specific calendar duration rules.
For example, a Timedelta day will always increment datetimes by 24 hours, while a DateOffset day
will increment datetimes to the same time the next day whether a day represents 23, 24 or 25 hours due to daylight
savings time. However, all DateOffset subclasses that are an hour or smaller
(Hour, Minute, Second, Milli, Micro, Nano) behave like
Timedelta and respect absolute time.
The basic DateOffset acts similar to dateutil.relativedelta (relativedelta documentation)
that shifts a date time by the corresponding calendar duration specified. The
arithmetic operator (+) can be used to perform the shift.
# This particular day contains a day light savings time transition
In [144]: ts = pd.Timestamp("2016-10-30 00:00:00", tz="Europe/Helsinki")
# Respects absolute time
In [145]: ts + pd.Timedelta(days=1)
Out[145]: Timestamp('2016-10-30 23:00:00+0200', tz='Europe/Helsinki')
# Respects calendar time
In [146]: ts + pd.DateOffset(days=1)
Out[146]: Timestamp('2016-10-31 00:00:00+0200', tz='Europe/Helsinki')
In [147]: friday = pd.Timestamp("2018-01-05")
In [148]: friday.day_name()
Out[148]: 'Friday'
# Add 2 business days (Friday --> Tuesday)
In [149]: two_business_days = 2 * pd.offsets.BDay()
In [150]: friday + two_business_days
Out[150]: Timestamp('2018-01-09 00:00:00')
In [151]: (friday + two_business_days).day_name()
Out[151]: 'Tuesday'
Most DateOffsets have associated frequencies strings, or offset aliases, that can be passed
into freq keyword arguments. The available date offsets and associated frequency strings can be found below:
Date Offset
Frequency String
Description
DateOffset
None
Generic offset class, defaults to absolute 24 hours
BDay or BusinessDay
'B'
business day (weekday)
CDay or CustomBusinessDay
'C'
custom business day
Week
'W'
one week, optionally anchored on a day of the week
WeekOfMonth
'WOM'
the x-th day of the y-th week of each month
LastWeekOfMonth
'LWOM'
the x-th day of the last week of each month
MonthEnd
'M'
calendar month end
MonthBegin
'MS'
calendar month begin
BMonthEnd or BusinessMonthEnd
'BM'
business month end
BMonthBegin or BusinessMonthBegin
'BMS'
business month begin
CBMonthEnd or CustomBusinessMonthEnd
'CBM'
custom business month end
CBMonthBegin or CustomBusinessMonthBegin
'CBMS'
custom business month begin
SemiMonthEnd
'SM'
15th (or other day_of_month) and calendar month end
SemiMonthBegin
'SMS'
15th (or other day_of_month) and calendar month begin
QuarterEnd
'Q'
calendar quarter end
QuarterBegin
'QS'
calendar quarter begin
BQuarterEnd
'BQ'
business quarter end
BQuarterBegin
'BQS'
business quarter begin
FY5253Quarter
'REQ'
retail (aka 52-53 week) quarter
YearEnd
'A'
calendar year end
YearBegin
'AS' or 'BYS'
calendar year begin
BYearEnd
'BA'
business year end
BYearBegin
'BAS'
business year begin
FY5253
'RE'
retail (aka 52-53 week) year
Easter
None
Easter holiday
BusinessHour
'BH'
business hour
CustomBusinessHour
'CBH'
custom business hour
Day
'D'
one absolute day
Hour
'H'
one hour
Minute
'T' or 'min'
one minute
Second
'S'
one second
Milli
'L' or 'ms'
one millisecond
Micro
'U' or 'us'
one microsecond
Nano
'N'
one nanosecond
DateOffsets additionally have rollforward() and rollback()
methods for moving a date forward or backward respectively to a valid offset
date relative to the offset. For example, business offsets will roll dates
that land on the weekends (Saturday and Sunday) forward to Monday since
business offsets operate on the weekdays.
In [152]: ts = pd.Timestamp("2018-01-06 00:00:00")
In [153]: ts.day_name()
Out[153]: 'Saturday'
# BusinessHour's valid offset dates are Monday through Friday
In [154]: offset = pd.offsets.BusinessHour(start="09:00")
# Bring the date to the closest offset date (Monday)
In [155]: offset.rollforward(ts)
Out[155]: Timestamp('2018-01-08 09:00:00')
# Date is brought to the closest offset date first and then the hour is added
In [156]: ts + offset
Out[156]: Timestamp('2018-01-08 10:00:00')
These operations preserve time (hour, minute, etc) information by default.
To reset time to midnight, use normalize() before or after applying
the operation (depending on whether you want the time information included
in the operation).
In [157]: ts = pd.Timestamp("2014-01-01 09:00")
In [158]: day = pd.offsets.Day()
In [159]: day + ts
Out[159]: Timestamp('2014-01-02 09:00:00')
In [160]: (day + ts).normalize()
Out[160]: Timestamp('2014-01-02 00:00:00')
In [161]: ts = pd.Timestamp("2014-01-01 22:00")
In [162]: hour = pd.offsets.Hour()
In [163]: hour + ts
Out[163]: Timestamp('2014-01-01 23:00:00')
In [164]: (hour + ts).normalize()
Out[164]: Timestamp('2014-01-01 00:00:00')
In [165]: (hour + pd.Timestamp("2014-01-01 23:30")).normalize()
Out[165]: Timestamp('2014-01-02 00:00:00')
Parametric offsets#
Some of the offsets can be “parameterized” when created to result in different
behaviors. For example, the Week offset for generating weekly data accepts a
weekday parameter which results in the generated dates always lying on a
particular day of the week:
In [166]: d = datetime.datetime(2008, 8, 18, 9, 0)
In [167]: d
Out[167]: datetime.datetime(2008, 8, 18, 9, 0)
In [168]: d + pd.offsets.Week()
Out[168]: Timestamp('2008-08-25 09:00:00')
In [169]: d + pd.offsets.Week(weekday=4)
Out[169]: Timestamp('2008-08-22 09:00:00')
In [170]: (d + pd.offsets.Week(weekday=4)).weekday()
Out[170]: 4
In [171]: d - pd.offsets.Week()
Out[171]: Timestamp('2008-08-11 09:00:00')
The normalize option will be effective for addition and subtraction.
In [172]: d + pd.offsets.Week(normalize=True)
Out[172]: Timestamp('2008-08-25 00:00:00')
In [173]: d - pd.offsets.Week(normalize=True)
Out[173]: Timestamp('2008-08-11 00:00:00')
Another example is parameterizing YearEnd with the specific ending month:
In [174]: d + pd.offsets.YearEnd()
Out[174]: Timestamp('2008-12-31 09:00:00')
In [175]: d + pd.offsets.YearEnd(month=6)
Out[175]: Timestamp('2009-06-30 09:00:00')
Using offsets with Series / DatetimeIndex#
Offsets can be used with either a Series or DatetimeIndex to
apply the offset to each element.
In [176]: rng = pd.date_range("2012-01-01", "2012-01-03")
In [177]: s = pd.Series(rng)
In [178]: rng
Out[178]: DatetimeIndex(['2012-01-01', '2012-01-02', '2012-01-03'], dtype='datetime64[ns]', freq='D')
In [179]: rng + pd.DateOffset(months=2)
Out[179]: DatetimeIndex(['2012-03-01', '2012-03-02', '2012-03-03'], dtype='datetime64[ns]', freq=None)
In [180]: s + pd.DateOffset(months=2)
Out[180]:
0 2012-03-01
1 2012-03-02
2 2012-03-03
dtype: datetime64[ns]
In [181]: s - pd.DateOffset(months=2)
Out[181]:
0 2011-11-01
1 2011-11-02
2 2011-11-03
dtype: datetime64[ns]
If the offset class maps directly to a Timedelta (Day, Hour,
Minute, Second, Micro, Milli, Nano) it can be
used exactly like a Timedelta - see the
Timedelta section for more examples.
In [182]: s - pd.offsets.Day(2)
Out[182]:
0 2011-12-30
1 2011-12-31
2 2012-01-01
dtype: datetime64[ns]
In [183]: td = s - pd.Series(pd.date_range("2011-12-29", "2011-12-31"))
In [184]: td
Out[184]:
0 3 days
1 3 days
2 3 days
dtype: timedelta64[ns]
In [185]: td + pd.offsets.Minute(15)
Out[185]:
0 3 days 00:15:00
1 3 days 00:15:00
2 3 days 00:15:00
dtype: timedelta64[ns]
Note that some offsets (such as BQuarterEnd) do not have a
vectorized implementation. They can still be used, but may
compute significantly slower and will show a PerformanceWarning:
In [186]: rng + pd.offsets.BQuarterEnd()
Out[186]: DatetimeIndex(['2012-03-30', '2012-03-30', '2012-03-30'], dtype='datetime64[ns]', freq=None)
Custom business days#
The CDay or CustomBusinessDay class provides a parametric
BusinessDay class which can be used to create customized business day
calendars which account for local holidays and local weekend conventions.
As an interesting example, let’s look at Egypt where a Friday-Saturday weekend is observed.
In [187]: weekmask_egypt = "Sun Mon Tue Wed Thu"
# They also observe International Workers' Day so let's
# add that for a couple of years
In [188]: holidays = [
.....: "2012-05-01",
.....: datetime.datetime(2013, 5, 1),
.....: np.datetime64("2014-05-01"),
.....: ]
.....:
In [189]: bday_egypt = pd.offsets.CustomBusinessDay(
.....: holidays=holidays,
.....: weekmask=weekmask_egypt,
.....: )
.....:
In [190]: dt = datetime.datetime(2013, 4, 30)
In [191]: dt + 2 * bday_egypt
Out[191]: Timestamp('2013-05-05 00:00:00')
Let’s map to the weekday names:
In [192]: dts = pd.date_range(dt, periods=5, freq=bday_egypt)
In [193]: pd.Series(dts.weekday, dts).map(pd.Series("Mon Tue Wed Thu Fri Sat Sun".split()))
Out[193]:
2013-04-30 Tue
2013-05-02 Thu
2013-05-05 Sun
2013-05-06 Mon
2013-05-07 Tue
Freq: C, dtype: object
Holiday calendars can be used to provide the list of holidays. See the
holiday calendar section for more information.
In [194]: from pandas.tseries.holiday import USFederalHolidayCalendar
In [195]: bday_us = pd.offsets.CustomBusinessDay(calendar=USFederalHolidayCalendar())
# Friday before MLK Day
In [196]: dt = datetime.datetime(2014, 1, 17)
# Tuesday after MLK Day (Monday is skipped because it's a holiday)
In [197]: dt + bday_us
Out[197]: Timestamp('2014-01-21 00:00:00')
Monthly offsets that respect a certain holiday calendar can be defined
in the usual way.
In [198]: bmth_us = pd.offsets.CustomBusinessMonthBegin(calendar=USFederalHolidayCalendar())
# Skip new years
In [199]: dt = datetime.datetime(2013, 12, 17)
In [200]: dt + bmth_us
Out[200]: Timestamp('2014-01-02 00:00:00')
# Define date index with custom offset
In [201]: pd.date_range(start="20100101", end="20120101", freq=bmth_us)
Out[201]:
DatetimeIndex(['2010-01-04', '2010-02-01', '2010-03-01', '2010-04-01',
'2010-05-03', '2010-06-01', '2010-07-01', '2010-08-02',
'2010-09-01', '2010-10-01', '2010-11-01', '2010-12-01',
'2011-01-03', '2011-02-01', '2011-03-01', '2011-04-01',
'2011-05-02', '2011-06-01', '2011-07-01', '2011-08-01',
'2011-09-01', '2011-10-03', '2011-11-01', '2011-12-01'],
dtype='datetime64[ns]', freq='CBMS')
Note
The frequency string ‘C’ is used to indicate that a CustomBusinessDay
DateOffset is used. It is important to note that, since CustomBusinessDay is
a parameterised type, instances of CustomBusinessDay may differ, and this is
not detectable from the ‘C’ frequency string alone. The user therefore needs to
ensure that the ‘C’ frequency string is used consistently within the user’s
application.
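To illustrate this caveat (a minimal sketch, not part of the original example set), two differently parameterised CustomBusinessDay offsets report the same frequency string:
bday_default = pd.offsets.CustomBusinessDay()
bday_sun_thu = pd.offsets.CustomBusinessDay(weekmask="Sun Mon Tue Wed Thu")
# Both report 'C' even though they describe different calendars
bday_default.freqstr
bday_sun_thu.freqstr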
Business hour#
The BusinessHour class provides a business hour representation on top of BusinessDay,
allowing you to use specific start and end times.
By default, BusinessHour uses 9:00 - 17:00 as business hours.
Adding BusinessHour increments a Timestamp by hourly frequency.
If the target Timestamp is outside business hours, it is first moved to the next business hour
and then incremented. If the result exceeds the business hour end, the remaining
hours are added to the next business day.
In [202]: bh = pd.offsets.BusinessHour()
In [203]: bh
Out[203]: <BusinessHour: BH=09:00-17:00>
# 2014-08-01 is Friday
In [204]: pd.Timestamp("2014-08-01 10:00").weekday()
Out[204]: 4
In [205]: pd.Timestamp("2014-08-01 10:00") + bh
Out[205]: Timestamp('2014-08-01 11:00:00')
# Below example is the same as: pd.Timestamp('2014-08-01 09:00') + bh
In [206]: pd.Timestamp("2014-08-01 08:00") + bh
Out[206]: Timestamp('2014-08-01 10:00:00')
# If the result lands on the end time, move to the next business day
In [207]: pd.Timestamp("2014-08-01 16:00") + bh
Out[207]: Timestamp('2014-08-04 09:00:00')
# Remaining hours are added to the next day
In [208]: pd.Timestamp("2014-08-01 16:30") + bh
Out[208]: Timestamp('2014-08-04 09:30:00')
# Adding 2 business hours
In [209]: pd.Timestamp("2014-08-01 10:00") + pd.offsets.BusinessHour(2)
Out[209]: Timestamp('2014-08-01 12:00:00')
# Subtracting 3 business hours
In [210]: pd.Timestamp("2014-08-01 10:00") + pd.offsets.BusinessHour(-3)
Out[210]: Timestamp('2014-07-31 15:00:00')
You can also specify the start and end times by keyword. The argument must
be a str with an hour:minute representation or a datetime.time
instance. Specifying seconds, microseconds, or nanoseconds in the business hour
start or end raises a ValueError.
In [211]: bh = pd.offsets.BusinessHour(start="11:00", end=datetime.time(20, 0))
In [212]: bh
Out[212]: <BusinessHour: BH=11:00-20:00>
In [213]: pd.Timestamp("2014-08-01 13:00") + bh
Out[213]: Timestamp('2014-08-01 14:00:00')
In [214]: pd.Timestamp("2014-08-01 09:00") + bh
Out[214]: Timestamp('2014-08-01 12:00:00')
In [215]: pd.Timestamp("2014-08-01 18:00") + bh
Out[215]: Timestamp('2014-08-01 19:00:00')
Passing a start time later than the end time represents business hours that span midnight.
In this case, the business hours extend past midnight and overlap into the next day.
Whether a given hour counts as a valid business hour is determined by whether it started from a valid BusinessDay.
In [216]: bh = pd.offsets.BusinessHour(start="17:00", end="09:00")
In [217]: bh
Out[217]: <BusinessHour: BH=17:00-09:00>
In [218]: pd.Timestamp("2014-08-01 17:00") + bh
Out[218]: Timestamp('2014-08-01 18:00:00')
In [219]: pd.Timestamp("2014-08-01 23:00") + bh
Out[219]: Timestamp('2014-08-02 00:00:00')
# Although 2014-08-02 is Saturday,
# it is valid because it starts from 08-01 (Friday).
In [220]: pd.Timestamp("2014-08-02 04:00") + bh
Out[220]: Timestamp('2014-08-02 05:00:00')
# Although 2014-08-04 is Monday,
# it is out of business hours because it starts from 08-03 (Sunday).
In [221]: pd.Timestamp("2014-08-04 04:00") + bh
Out[221]: Timestamp('2014-08-04 18:00:00')
Applying BusinessHour.rollforward and rollback to a timestamp outside business hours results in
the next business hour start or the previous day’s end, respectively. Unlike other offsets, BusinessHour.rollforward
may, by definition, produce a different result from adding the offset.
This is because one day’s business hour end is equal to the next day’s business hour start. For example,
under the default business hours (9:00 - 17:00), there is no gap (0 minutes) between 2014-08-01 17:00 and
2014-08-04 09:00.
# This adjusts a Timestamp to business hour edge
In [222]: pd.offsets.BusinessHour().rollback(pd.Timestamp("2014-08-02 15:00"))
Out[222]: Timestamp('2014-08-01 17:00:00')
In [223]: pd.offsets.BusinessHour().rollforward(pd.Timestamp("2014-08-02 15:00"))
Out[223]: Timestamp('2014-08-04 09:00:00')
# It is the same as BusinessHour() + pd.Timestamp('2014-08-01 17:00').
# And it is the same as BusinessHour() + pd.Timestamp('2014-08-04 09:00')
In [224]: pd.offsets.BusinessHour() + pd.Timestamp("2014-08-02 15:00")
Out[224]: Timestamp('2014-08-04 10:00:00')
# BusinessDay results (for reference)
In [225]: pd.offsets.BusinessHour().rollforward(pd.Timestamp("2014-08-02"))
Out[225]: Timestamp('2014-08-04 09:00:00')
# It is the same as BusinessDay() + pd.Timestamp('2014-08-01')
# The result is the same as rollforward because BusinessDay never overlaps.
In [226]: pd.offsets.BusinessHour() + pd.Timestamp("2014-08-02")
Out[226]: Timestamp('2014-08-04 10:00:00')
BusinessHour regards Saturday and Sunday as holidays. To use arbitrary
holidays, you can use CustomBusinessHour offset, as explained in the
following subsection.
Custom business hour#
The CustomBusinessHour is a mixture of BusinessHour and CustomBusinessDay which
allows you to specify arbitrary holidays. CustomBusinessHour works the same
as BusinessHour except that it skips the specified custom holidays.
In [227]: from pandas.tseries.holiday import USFederalHolidayCalendar
In [228]: bhour_us = pd.offsets.CustomBusinessHour(calendar=USFederalHolidayCalendar())
# Friday before MLK Day
In [229]: dt = datetime.datetime(2014, 1, 17, 15)
In [230]: dt + bhour_us
Out[230]: Timestamp('2014-01-17 16:00:00')
# Tuesday after MLK Day (Monday is skipped because it's a holiday)
In [231]: dt + bhour_us * 2
Out[231]: Timestamp('2014-01-21 09:00:00')
You can use keyword arguments supported by both BusinessHour and CustomBusinessDay.
In [232]: bhour_mon = pd.offsets.CustomBusinessHour(start="10:00", weekmask="Tue Wed Thu Fri")
# Monday is skipped because it's a holiday, business hour starts from 10:00
In [233]: dt + bhour_mon * 2
Out[233]: Timestamp('2014-01-21 10:00:00')
Offset aliases#
A number of string aliases are given to useful common time series
frequencies. We will refer to these aliases as offset aliases.
Alias
Description
B
business day frequency
C
custom business day frequency
D
calendar day frequency
W
weekly frequency
M
month end frequency
SM
semi-month end frequency (15th and end of month)
BM
business month end frequency
CBM
custom business month end frequency
MS
month start frequency
SMS
semi-month start frequency (1st and 15th)
BMS
business month start frequency
CBMS
custom business month start frequency
Q
quarter end frequency
BQ
business quarter end frequency
QS
quarter start frequency
BQS
business quarter start frequency
A, Y
year end frequency
BA, BY
business year end frequency
AS, YS
year start frequency
BAS, BYS
business year start frequency
BH
business hour frequency
H
hourly frequency
T, min
minutely frequency
S
secondly frequency
L, ms
milliseconds
U, us
microseconds
N
nanoseconds
Note
When using the offset aliases above, it should be noted that functions
such as date_range() and bdate_range() will only return
timestamps that are in the interval defined by start_date and
end_date. If the start_date does not correspond to the frequency,
the returned timestamps will start at the next valid timestamp; similarly,
if the end_date does not correspond to the frequency, the returned
timestamps will stop at the previous valid timestamp.
For example, for the offset MS, if the start_date is not the first
of the month, the returned timestamps will start with the first day of the
next month. If end_date is not the first day of a month, the last
returned timestamp will be the first day of the corresponding month.
In [234]: dates_lst_1 = pd.date_range("2020-01-06", "2020-04-03", freq="MS")
In [235]: dates_lst_1
Out[235]: DatetimeIndex(['2020-02-01', '2020-03-01', '2020-04-01'], dtype='datetime64[ns]', freq='MS')
In [236]: dates_lst_2 = pd.date_range("2020-01-01", "2020-04-01", freq="MS")
In [237]: dates_lst_2
Out[237]: DatetimeIndex(['2020-01-01', '2020-02-01', '2020-03-01', '2020-04-01'], dtype='datetime64[ns]', freq='MS')
We can see in the above example that date_range() and
bdate_range() will only return the valid timestamps between the
start_date and end_date. If these are not valid timestamps for the
given frequency, the range will roll forward to the next valid value for start_date
(and, respectively, backward to the previous one for end_date).
Combining aliases#
As we have seen previously, the alias and the offset instance are fungible in
most functions:
In [238]: pd.date_range(start, periods=5, freq="B")
Out[238]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07'],
dtype='datetime64[ns]', freq='B')
In [239]: pd.date_range(start, periods=5, freq=pd.offsets.BDay())
Out[239]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07'],
dtype='datetime64[ns]', freq='B')
You can combine together day and intraday offsets:
In [240]: pd.date_range(start, periods=10, freq="2h20min")
Out[240]:
DatetimeIndex(['2011-01-01 00:00:00', '2011-01-01 02:20:00',
'2011-01-01 04:40:00', '2011-01-01 07:00:00',
'2011-01-01 09:20:00', '2011-01-01 11:40:00',
'2011-01-01 14:00:00', '2011-01-01 16:20:00',
'2011-01-01 18:40:00', '2011-01-01 21:00:00'],
dtype='datetime64[ns]', freq='140T')
In [241]: pd.date_range(start, periods=10, freq="1D10U")
Out[241]:
DatetimeIndex([ '2011-01-01 00:00:00', '2011-01-02 00:00:00.000010',
'2011-01-03 00:00:00.000020', '2011-01-04 00:00:00.000030',
'2011-01-05 00:00:00.000040', '2011-01-06 00:00:00.000050',
'2011-01-07 00:00:00.000060', '2011-01-08 00:00:00.000070',
'2011-01-09 00:00:00.000080', '2011-01-10 00:00:00.000090'],
dtype='datetime64[ns]', freq='86400000010U')
Anchored offsets#
For some frequencies you can specify an anchoring suffix:
Alias
Description
W-SUN
weekly frequency (Sundays). Same as ‘W’
W-MON
weekly frequency (Mondays)
W-TUE
weekly frequency (Tuesdays)
W-WED
weekly frequency (Wednesdays)
W-THU
weekly frequency (Thursdays)
W-FRI
weekly frequency (Fridays)
W-SAT
weekly frequency (Saturdays)
(B)Q(S)-DEC
quarterly frequency, year ends in December. Same as ‘Q’
(B)Q(S)-JAN
quarterly frequency, year ends in January
(B)Q(S)-FEB
quarterly frequency, year ends in February
(B)Q(S)-MAR
quarterly frequency, year ends in March
(B)Q(S)-APR
quarterly frequency, year ends in April
(B)Q(S)-MAY
quarterly frequency, year ends in May
(B)Q(S)-JUN
quarterly frequency, year ends in June
(B)Q(S)-JUL
quarterly frequency, year ends in July
(B)Q(S)-AUG
quarterly frequency, year ends in August
(B)Q(S)-SEP
quarterly frequency, year ends in September
(B)Q(S)-OCT
quarterly frequency, year ends in October
(B)Q(S)-NOV
quarterly frequency, year ends in November
(B)A(S)-DEC
annual frequency, anchored end of December. Same as ‘A’
(B)A(S)-JAN
annual frequency, anchored end of January
(B)A(S)-FEB
annual frequency, anchored end of February
(B)A(S)-MAR
annual frequency, anchored end of March
(B)A(S)-APR
annual frequency, anchored end of April
(B)A(S)-MAY
annual frequency, anchored end of May
(B)A(S)-JUN
annual frequency, anchored end of June
(B)A(S)-JUL
annual frequency, anchored end of July
(B)A(S)-AUG
annual frequency, anchored end of August
(B)A(S)-SEP
annual frequency, anchored end of September
(B)A(S)-OCT
annual frequency, anchored end of October
(B)A(S)-NOV
annual frequency, anchored end of November
These can be used as arguments to date_range, bdate_range, constructors
for DatetimeIndex, as well as various other timeseries-related functions
in pandas.
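For instance, a short illustrative sketch (dates chosen arbitrarily):
# Weekly dates anchored on Wednesdays
pd.date_range("2018-01-01", periods=4, freq="W-WED")
# Quarterly periods for a fiscal year ending in November
pd.period_range("2018Q1", periods=4, freq="Q-NOV")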
Anchored offset semantics#
For those offsets that are anchored to the start or end of a specific
frequency (MonthEnd, MonthBegin, Week, etc.), the following
rules apply to rolling forward and backward.
When n is not 0, if the given date is not on an anchor point, it is snapped to the next (previous)
anchor point and moved |n| - 1 additional steps forward (backward).
In [242]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=1)
Out[242]: Timestamp('2014-02-01 00:00:00')
In [243]: pd.Timestamp("2014-01-02") + pd.offsets.MonthEnd(n=1)
Out[243]: Timestamp('2014-01-31 00:00:00')
In [244]: pd.Timestamp("2014-01-02") - pd.offsets.MonthBegin(n=1)
Out[244]: Timestamp('2014-01-01 00:00:00')
In [245]: pd.Timestamp("2014-01-02") - pd.offsets.MonthEnd(n=1)
Out[245]: Timestamp('2013-12-31 00:00:00')
In [246]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=4)
Out[246]: Timestamp('2014-05-01 00:00:00')
In [247]: pd.Timestamp("2014-01-02") - pd.offsets.MonthBegin(n=4)
Out[247]: Timestamp('2013-10-01 00:00:00')
If the given date is on an anchor point, it is moved |n| points forwards
or backwards.
In [248]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=1)
Out[248]: Timestamp('2014-02-01 00:00:00')
In [249]: pd.Timestamp("2014-01-31") + pd.offsets.MonthEnd(n=1)
Out[249]: Timestamp('2014-02-28 00:00:00')
In [250]: pd.Timestamp("2014-01-01") - pd.offsets.MonthBegin(n=1)
Out[250]: Timestamp('2013-12-01 00:00:00')
In [251]: pd.Timestamp("2014-01-31") - pd.offsets.MonthEnd(n=1)
Out[251]: Timestamp('2013-12-31 00:00:00')
In [252]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=4)
Out[252]: Timestamp('2014-05-01 00:00:00')
In [253]: pd.Timestamp("2014-01-31") - pd.offsets.MonthBegin(n=4)
Out[253]: Timestamp('2013-10-01 00:00:00')
For the case when n=0, the date is not moved if on an anchor point, otherwise
it is rolled forward to the next anchor point.
In [254]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=0)
Out[254]: Timestamp('2014-02-01 00:00:00')
In [255]: pd.Timestamp("2014-01-02") + pd.offsets.MonthEnd(n=0)
Out[255]: Timestamp('2014-01-31 00:00:00')
In [256]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=0)
Out[256]: Timestamp('2014-01-01 00:00:00')
In [257]: pd.Timestamp("2014-01-31") + pd.offsets.MonthEnd(n=0)
Out[257]: Timestamp('2014-01-31 00:00:00')
Holidays / holiday calendars#
Holidays and calendars provide a simple way to define holiday rules to be used
with CustomBusinessDay or in other analysis that requires a predefined
set of holidays. The AbstractHolidayCalendar class provides all the necessary
methods to return a list of holidays and only rules need to be defined
in a specific holiday calendar class. Furthermore, the start_date and end_date
class attributes determine over what date range holidays are generated. These
should be overwritten on the AbstractHolidayCalendar class to have the range
apply to all calendar subclasses. USFederalHolidayCalendar is the
only calendar that exists and primarily serves as an example for developing
other calendars.
For holidays that occur on fixed dates (e.g., US Memorial Day or July 4th) an
observance rule determines when that holiday is observed if it falls on a weekend
or some other non-observed day. Defined observance rules are:
Rule
Description
nearest_workday
move Saturday to Friday and Sunday to Monday
sunday_to_monday
move Sunday to following Monday
next_monday_or_tuesday
move Saturday to Monday and Sunday/Monday to Tuesday
previous_friday
move Saturday and Sunday to previous Friday
next_monday
move Saturday and Sunday to following Monday
An example of how holidays and holiday calendars are defined:
In [258]: from pandas.tseries.holiday import (
.....: Holiday,
.....: USMemorialDay,
.....: AbstractHolidayCalendar,
.....: nearest_workday,
.....: MO,
.....: )
.....:
In [259]: class ExampleCalendar(AbstractHolidayCalendar):
.....: rules = [
.....: USMemorialDay,
.....: Holiday("July 4th", month=7, day=4, observance=nearest_workday),
.....: Holiday(
.....: "Columbus Day",
.....: month=10,
.....: day=1,
.....: offset=pd.DateOffset(weekday=MO(2)),
.....: ),
.....: ]
.....:
In [260]: cal = ExampleCalendar()
In [261]: cal.holidays(datetime.datetime(2012, 1, 1), datetime.datetime(2012, 12, 31))
Out[261]: DatetimeIndex(['2012-05-28', '2012-07-04', '2012-10-08'], dtype='datetime64[ns]', freq=None)
Hint
weekday=MO(2) is the same as 2 * Week(weekday=2)
Using this calendar, creating an index or doing offset arithmetic skips weekends
and holidays (i.e., Memorial Day/July 4th). For example, the below defines
a custom business day offset using the ExampleCalendar. Like any other offset,
it can be used to create a DatetimeIndex or added to datetime
or Timestamp objects.
In [262]: pd.date_range(
.....: start="7/1/2012", end="7/10/2012", freq=pd.offsets.CDay(calendar=cal)
.....: ).to_pydatetime()
.....:
Out[262]:
array([datetime.datetime(2012, 7, 2, 0, 0),
datetime.datetime(2012, 7, 3, 0, 0),
datetime.datetime(2012, 7, 5, 0, 0),
datetime.datetime(2012, 7, 6, 0, 0),
datetime.datetime(2012, 7, 9, 0, 0),
datetime.datetime(2012, 7, 10, 0, 0)], dtype=object)
In [263]: offset = pd.offsets.CustomBusinessDay(calendar=cal)
In [264]: datetime.datetime(2012, 5, 25) + offset
Out[264]: Timestamp('2012-05-29 00:00:00')
In [265]: datetime.datetime(2012, 7, 3) + offset
Out[265]: Timestamp('2012-07-05 00:00:00')
In [266]: datetime.datetime(2012, 7, 3) + 2 * offset
Out[266]: Timestamp('2012-07-06 00:00:00')
In [267]: datetime.datetime(2012, 7, 6) + offset
Out[267]: Timestamp('2012-07-09 00:00:00')
Ranges are defined by the start_date and end_date class attributes
of AbstractHolidayCalendar. The defaults are shown below.
In [268]: AbstractHolidayCalendar.start_date
Out[268]: Timestamp('1970-01-01 00:00:00')
In [269]: AbstractHolidayCalendar.end_date
Out[269]: Timestamp('2200-12-31 00:00:00')
These dates can be overwritten by setting the attributes as
datetime/Timestamp/string.
In [270]: AbstractHolidayCalendar.start_date = datetime.datetime(2012, 1, 1)
In [271]: AbstractHolidayCalendar.end_date = datetime.datetime(2012, 12, 31)
In [272]: cal.holidays()
Out[272]: DatetimeIndex(['2012-05-28', '2012-07-04', '2012-10-08'], dtype='datetime64[ns]', freq=None)
Every calendar class is accessible by name using the get_calendar function
which returns a holiday class instance. Any imported calendar class will
automatically be available by this function. Also, HolidayCalendarFactory
provides an easy interface to create calendars that are combinations of calendars
or calendars with additional rules.
In [273]: from pandas.tseries.holiday import get_calendar, HolidayCalendarFactory, USLaborDay
In [274]: cal = get_calendar("ExampleCalendar")
In [275]: cal.rules
Out[275]:
[Holiday: Memorial Day (month=5, day=31, offset=<DateOffset: weekday=MO(-1)>),
Holiday: July 4th (month=7, day=4, observance=<function nearest_workday at 0x7f1e67138ee0>),
Holiday: Columbus Day (month=10, day=1, offset=<DateOffset: weekday=MO(+2)>)]
In [276]: new_cal = HolidayCalendarFactory("NewExampleCalendar", cal, USLaborDay)
In [277]: new_cal.rules
Out[277]:
[Holiday: Labor Day (month=9, day=1, offset=<DateOffset: weekday=MO(+1)>),
Holiday: Memorial Day (month=5, day=31, offset=<DateOffset: weekday=MO(-1)>),
Holiday: July 4th (month=7, day=4, observance=<function nearest_workday at 0x7f1e67138ee0>),
Holiday: Columbus Day (month=10, day=1, offset=<DateOffset: weekday=MO(+2)>)]
Time Series-related instance methods#
Shifting / lagging#
One may want to shift or lag the values in a time series back and forward in
time. The method for this is shift(), which is available on all of
the pandas objects.
In [278]: ts = pd.Series(range(len(rng)), index=rng)
In [279]: ts = ts[:5]
In [280]: ts.shift(1)
Out[280]:
2012-01-01 NaN
2012-01-02 0.0
2012-01-03 1.0
Freq: D, dtype: float64
The shift method accepts a freq argument, which can be a
DateOffset class, another timedelta-like object, or an
offset alias.
When freq is specified, shift method changes all the dates in the index
rather than changing the alignment of the data and the index:
In [281]: ts.shift(5, freq="D")
Out[281]:
2012-01-06 0
2012-01-07 1
2012-01-08 2
Freq: D, dtype: int64
In [282]: ts.shift(5, freq=pd.offsets.BDay())
Out[282]:
2012-01-06 0
2012-01-09 1
2012-01-10 2
dtype: int64
In [283]: ts.shift(5, freq="BM")
Out[283]:
2012-05-31 0
2012-05-31 1
2012-05-31 2
dtype: int64
Note that when freq is specified, the leading entry is no longer NaN
because the data is not being realigned.
Frequency conversion#
The primary function for changing frequencies is the asfreq()
method. For a DatetimeIndex, this is basically just a thin but convenient
wrapper around reindex() which generates a date_range and
calls reindex.
In [284]: dr = pd.date_range("1/1/2010", periods=3, freq=3 * pd.offsets.BDay())
In [285]: ts = pd.Series(np.random.randn(3), index=dr)
In [286]: ts
Out[286]:
2010-01-01 1.494522
2010-01-06 -0.778425
2010-01-11 -0.253355
Freq: 3B, dtype: float64
In [287]: ts.asfreq(pd.offsets.BDay())
Out[287]:
2010-01-01 1.494522
2010-01-04 NaN
2010-01-05 NaN
2010-01-06 -0.778425
2010-01-07 NaN
2010-01-08 NaN
2010-01-11 -0.253355
Freq: B, dtype: float64
asfreq provides a further convenience so you can specify an interpolation
method for any gaps that may appear after the frequency conversion.
In [288]: ts.asfreq(pd.offsets.BDay(), method="pad")
Out[288]:
2010-01-01 1.494522
2010-01-04 1.494522
2010-01-05 1.494522
2010-01-06 -0.778425
2010-01-07 -0.778425
2010-01-08 -0.778425
2010-01-11 -0.253355
Freq: B, dtype: float64
Filling forward / backward#
Related to asfreq and reindex is fillna(), which is
documented in the missing data section.
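As a minimal sketch (reusing the ts series from the frequency conversion example above), forward filling after a frequency conversion can also be spelled explicitly:
# Similar in spirit to asfreq(..., method="pad")
ts.asfreq(pd.offsets.BDay()).ffill()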
Converting to Python datetimes#
DatetimeIndex can be converted to an array of Python native
datetime.datetime objects using the to_pydatetime method.
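For example, a brief sketch:
idx = pd.date_range("2018-01-01", periods=3, freq="D")
idx.to_pydatetime()  # object array of datetime.datetime instances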
Resampling#
pandas has a simple, powerful, and efficient functionality for performing
resampling operations during frequency conversion (e.g., converting secondly
data into 5-minutely data). This is extremely common in, but not limited to,
financial applications.
resample() is a time-based groupby, followed by a reduction method
on each of its groups. See some cookbook examples for
some advanced strategies.
The resample() method can be used directly from DataFrameGroupBy objects,
see the groupby docs.
Basics#
In [289]: rng = pd.date_range("1/1/2012", periods=100, freq="S")
In [290]: ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
In [291]: ts.resample("5Min").sum()
Out[291]:
2012-01-01 25103
Freq: 5T, dtype: int64
The resample function is very flexible and allows you to specify many
different parameters to control the frequency conversion and resampling
operation.
Any function available via dispatching is available as
a method of the returned object, including sum, mean, std, sem,
max, min, median, first, last, ohlc:
In [292]: ts.resample("5Min").mean()
Out[292]:
2012-01-01 251.03
Freq: 5T, dtype: float64
In [293]: ts.resample("5Min").ohlc()
Out[293]:
open high low close
2012-01-01 308 460 9 205
In [294]: ts.resample("5Min").max()
Out[294]:
2012-01-01 460
Freq: 5T, dtype: int64
For downsampling, closed can be set to ‘left’ or ‘right’ to specify which
end of the interval is closed:
In [295]: ts.resample("5Min", closed="right").mean()
Out[295]:
2011-12-31 23:55:00 308.000000
2012-01-01 00:00:00 250.454545
Freq: 5T, dtype: float64
In [296]: ts.resample("5Min", closed="left").mean()
Out[296]:
2012-01-01 251.03
Freq: 5T, dtype: float64
Parameters like label are used to manipulate the resulting labels.
label specifies whether the result is labeled with the beginning or
the end of the interval.
In [297]: ts.resample("5Min").mean() # by default label='left'
Out[297]:
2012-01-01 251.03
Freq: 5T, dtype: float64
In [298]: ts.resample("5Min", label="left").mean()
Out[298]:
2012-01-01 251.03
Freq: 5T, dtype: float64
Warning
The default value for label and closed is ‘left’ for all
frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’, ‘BA’, ‘BQ’, and ‘W’,
which all have a default of ‘right’.
This might unintentionally lead to looking ahead, where the value for a later
time is pulled back to a previous time, as in the following example with
the BusinessDay frequency:
In [299]: s = pd.date_range("2000-01-01", "2000-01-05").to_series()
In [300]: s.iloc[2] = pd.NaT
In [301]: s.dt.day_name()
Out[301]:
2000-01-01 Saturday
2000-01-02 Sunday
2000-01-03 NaN
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: D, dtype: object
# default: label='left', closed='left'
In [302]: s.resample("B").last().dt.day_name()
Out[302]:
1999-12-31 Sunday
2000-01-03 NaN
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: B, dtype: object
Notice how the value for Sunday got pulled back to the previous Friday.
To get the behavior where the value for Sunday is pushed to Monday, use
instead
In [303]: s.resample("B", label="right", closed="right").last().dt.day_name()
Out[303]:
2000-01-03 Sunday
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: B, dtype: object
The axis parameter can be set to 0 or 1 and allows you to resample the
specified axis for a DataFrame.
kind can be set to ‘timestamp’ or ‘period’ to convert the resulting index
to/from timestamp and time span representations. By default resample
retains the input representation.
convention can be set to ‘start’ or ‘end’ when resampling period data
(detail below). It specifies how low frequency periods are converted to higher
frequency periods.
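As an illustrative sketch of these parameters (assuming the usual pd/np imports; values are arbitrary):
rng = pd.date_range("2012-01-01", periods=100, freq="S")
ts = pd.Series(np.arange(100), index=rng)
# kind='period' labels the bins with Periods instead of Timestamps
ts.resample("5Min", kind="period").sum()
# convention controls where low-frequency periods land when upsampling period data
ps = pd.Series([1, 2], index=pd.period_range("2012", periods=2, freq="A"))
ps.resample("M", convention="start").asfreq()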
Upsampling#
For upsampling, you can specify a way to upsample and the limit parameter to interpolate over the gaps that are created:
# from secondly to every 250 milliseconds
In [304]: ts[:2].resample("250L").asfreq()
Out[304]:
2012-01-01 00:00:00.000 308.0
2012-01-01 00:00:00.250 NaN
2012-01-01 00:00:00.500 NaN
2012-01-01 00:00:00.750 NaN
2012-01-01 00:00:01.000 204.0
Freq: 250L, dtype: float64
In [305]: ts[:2].resample("250L").ffill()
Out[305]:
2012-01-01 00:00:00.000 308
2012-01-01 00:00:00.250 308
2012-01-01 00:00:00.500 308
2012-01-01 00:00:00.750 308
2012-01-01 00:00:01.000 204
Freq: 250L, dtype: int64
In [306]: ts[:2].resample("250L").ffill(limit=2)
Out[306]:
2012-01-01 00:00:00.000 308.0
2012-01-01 00:00:00.250 308.0
2012-01-01 00:00:00.500 308.0
2012-01-01 00:00:00.750 NaN
2012-01-01 00:00:01.000 204.0
Freq: 250L, dtype: float64
Sparse resampling#
Sparse time series are those where you have far fewer points relative
to the span of time you are looking to resample. Naively upsampling a sparse
series can potentially generate lots of intermediate values. When you don’t want
to use a method to fill these values, e.g. fill_method is None, then
intermediate values will be filled with NaN.
Since resample is a time-based groupby, the following is a method to efficiently
resample only the groups that are not all NaN.
In [307]: rng = pd.date_range("2014-1-1", periods=100, freq="D") + pd.Timedelta("1s")
In [308]: ts = pd.Series(range(100), index=rng)
If we want to resample to the full range of the series:
In [309]: ts.resample("3T").sum()
Out[309]:
2014-01-01 00:00:00 0
2014-01-01 00:03:00 0
2014-01-01 00:06:00 0
2014-01-01 00:09:00 0
2014-01-01 00:12:00 0
..
2014-04-09 23:48:00 0
2014-04-09 23:51:00 0
2014-04-09 23:54:00 0
2014-04-09 23:57:00 0
2014-04-10 00:00:00 99
Freq: 3T, Length: 47521, dtype: int64
We can instead only resample those groups where we have points as follows:
In [310]: from functools import partial
In [311]: from pandas.tseries.frequencies import to_offset
In [312]: def round(t, freq):
.....: freq = to_offset(freq)
.....: return pd.Timestamp((t.value // freq.delta.value) * freq.delta.value)
.....:
In [313]: ts.groupby(partial(round, freq="3T")).sum()
Out[313]:
2014-01-01 0
2014-01-02 1
2014-01-03 2
2014-01-04 3
2014-01-05 4
..
2014-04-06 95
2014-04-07 96
2014-04-08 97
2014-04-09 98
2014-04-10 99
Length: 100, dtype: int64
Aggregation#
Similar to the aggregating API, groupby API, and the window API,
a Resampler can be selectively resampled.
When resampling a DataFrame, the default is to act on all columns with the same function.
In [314]: df = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2012", freq="S", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [315]: r = df.resample("3T")
In [316]: r.mean()
Out[316]:
A B C
2012-01-01 00:00:00 -0.033823 -0.121514 -0.081447
2012-01-01 00:03:00 0.056909 0.146731 -0.024320
2012-01-01 00:06:00 -0.058837 0.047046 -0.052021
2012-01-01 00:09:00 0.063123 -0.026158 -0.066533
2012-01-01 00:12:00 0.186340 -0.003144 0.074752
2012-01-01 00:15:00 -0.085954 -0.016287 -0.050046
We can select a specific column or columns using standard getitem.
In [317]: r["A"].mean()
Out[317]:
2012-01-01 00:00:00 -0.033823
2012-01-01 00:03:00 0.056909
2012-01-01 00:06:00 -0.058837
2012-01-01 00:09:00 0.063123
2012-01-01 00:12:00 0.186340
2012-01-01 00:15:00 -0.085954
Freq: 3T, Name: A, dtype: float64
In [318]: r[["A", "B"]].mean()
Out[318]:
A B
2012-01-01 00:00:00 -0.033823 -0.121514
2012-01-01 00:03:00 0.056909 0.146731
2012-01-01 00:06:00 -0.058837 0.047046
2012-01-01 00:09:00 0.063123 -0.026158
2012-01-01 00:12:00 0.186340 -0.003144
2012-01-01 00:15:00 -0.085954 -0.016287
You can pass a list or dict of functions to do aggregation with, outputting a DataFrame:
In [319]: r["A"].agg([np.sum, np.mean, np.std])
Out[319]:
sum mean std
2012-01-01 00:00:00 -6.088060 -0.033823 1.043263
2012-01-01 00:03:00 10.243678 0.056909 1.058534
2012-01-01 00:06:00 -10.590584 -0.058837 0.949264
2012-01-01 00:09:00 11.362228 0.063123 1.028096
2012-01-01 00:12:00 33.541257 0.186340 0.884586
2012-01-01 00:15:00 -8.595393 -0.085954 1.035476
On a resampled DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [320]: r.agg([np.sum, np.mean])
Out[320]:
A ... C
sum mean ... sum mean
2012-01-01 00:00:00 -6.088060 -0.033823 ... -14.660515 -0.081447
2012-01-01 00:03:00 10.243678 0.056909 ... -4.377642 -0.024320
2012-01-01 00:06:00 -10.590584 -0.058837 ... -9.363825 -0.052021
2012-01-01 00:09:00 11.362228 0.063123 ... -11.975895 -0.066533
2012-01-01 00:12:00 33.541257 0.186340 ... 13.455299 0.074752
2012-01-01 00:15:00 -8.595393 -0.085954 ... -5.004580 -0.050046
[6 rows x 6 columns]
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [321]: r.agg({"A": np.sum, "B": lambda x: np.std(x, ddof=1)})
Out[321]:
A B
2012-01-01 00:00:00 -6.088060 1.001294
2012-01-01 00:03:00 10.243678 1.074597
2012-01-01 00:06:00 -10.590584 0.987309
2012-01-01 00:09:00 11.362228 0.944953
2012-01-01 00:12:00 33.541257 1.095025
2012-01-01 00:15:00 -8.595393 1.035312
The function names can also be strings. In order for a string to be valid it
must be implemented on the resampled object:
In [322]: r.agg({"A": "sum", "B": "std"})
Out[322]:
A B
2012-01-01 00:00:00 -6.088060 1.001294
2012-01-01 00:03:00 10.243678 1.074597
2012-01-01 00:06:00 -10.590584 0.987309
2012-01-01 00:09:00 11.362228 0.944953
2012-01-01 00:12:00 33.541257 1.095025
2012-01-01 00:15:00 -8.595393 1.035312
Furthermore, you can also specify multiple aggregation functions for each column separately.
In [323]: r.agg({"A": ["sum", "std"], "B": ["mean", "std"]})
Out[323]:
A B
sum std mean std
2012-01-01 00:00:00 -6.088060 1.043263 -0.121514 1.001294
2012-01-01 00:03:00 10.243678 1.058534 0.146731 1.074597
2012-01-01 00:06:00 -10.590584 0.949264 0.047046 0.987309
2012-01-01 00:09:00 11.362228 1.028096 -0.026158 0.944953
2012-01-01 00:12:00 33.541257 0.884586 -0.003144 1.095025
2012-01-01 00:15:00 -8.595393 1.035476 -0.016287 1.035312
If a DataFrame does not have a datetimelike index, but you instead want
to resample based on a datetimelike column in the frame, that column can be passed to the
on keyword.
In [324]: df = pd.DataFrame(
.....: {"date": pd.date_range("2015-01-01", freq="W", periods=5), "a": np.arange(5)},
.....: index=pd.MultiIndex.from_arrays(
.....: [[1, 2, 3, 4, 5], pd.date_range("2015-01-01", freq="W", periods=5)],
.....: names=["v", "d"],
.....: ),
.....: )
.....:
In [325]: df
Out[325]:
date a
v d
1 2015-01-04 2015-01-04 0
2 2015-01-11 2015-01-11 1
3 2015-01-18 2015-01-18 2
4 2015-01-25 2015-01-25 3
5 2015-02-01 2015-02-01 4
In [326]: df.resample("M", on="date")[["a"]].sum()
Out[326]:
a
date
2015-01-31 6
2015-02-28 4
Similarly, if you instead want to resample by a datetimelike
level of MultiIndex, its name or location can be passed to the
level keyword.
In [327]: df.resample("M", level="d")[["a"]].sum()
Out[327]:
a
d
2015-01-31 6
2015-02-28 4
Iterating through groups#
With the Resampler object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [328]: small = pd.Series(
.....: range(6),
.....: index=pd.to_datetime(
.....: [
.....: "2017-01-01T00:00:00",
.....: "2017-01-01T00:30:00",
.....: "2017-01-01T00:31:00",
.....: "2017-01-01T01:00:00",
.....: "2017-01-01T03:00:00",
.....: "2017-01-01T03:05:00",
.....: ]
.....: ),
.....: )
.....:
In [329]: resampled = small.resample("H")
In [330]: for name, group in resampled:
.....: print("Group: ", name)
.....: print("-" * 27)
.....: print(group, end="\n\n")
.....:
Group: 2017-01-01 00:00:00
---------------------------
2017-01-01 00:00:00 0
2017-01-01 00:30:00 1
2017-01-01 00:31:00 2
dtype: int64
Group: 2017-01-01 01:00:00
---------------------------
2017-01-01 01:00:00 3
dtype: int64
Group: 2017-01-01 02:00:00
---------------------------
Series([], dtype: int64)
Group: 2017-01-01 03:00:00
---------------------------
2017-01-01 03:00:00 4
2017-01-01 03:05:00 5
dtype: int64
See Iterating through groups or Resampler.__iter__ for more.
Use origin or offset to adjust the start of the bins#
New in version 1.1.0.
The bins of the grouping are adjusted based on the beginning of the day of the time series’ starting point. This works well with frequencies that are multiples of a day (like 30D) or that divide a day evenly (like 90s or 1min). This can create inconsistencies with some frequencies that do not meet this criterion. To change this behavior you can specify a fixed Timestamp with the argument origin.
For example:
In [331]: start, end = "2000-10-01 23:30:00", "2000-10-02 00:30:00"
In [332]: middle = "2000-10-02 00:00:00"
In [333]: rng = pd.date_range(start, end, freq="7min")
In [334]: ts = pd.Series(np.arange(len(rng)) * 3, index=rng)
In [335]: ts
Out[335]:
2000-10-01 23:30:00 0
2000-10-01 23:37:00 3
2000-10-01 23:44:00 6
2000-10-01 23:51:00 9
2000-10-01 23:58:00 12
2000-10-02 00:05:00 15
2000-10-02 00:12:00 18
2000-10-02 00:19:00 21
2000-10-02 00:26:00 24
Freq: 7T, dtype: int64
Here we can see that, when using origin with its default value ('start_day'), the results after '2000-10-02 00:00:00' are not identical, depending on the start of the time series:
In [336]: ts.resample("17min", origin="start_day").sum()
Out[336]:
2000-10-01 23:14:00 0
2000-10-01 23:31:00 9
2000-10-01 23:48:00 21
2000-10-02 00:05:00 54
2000-10-02 00:22:00 24
Freq: 17T, dtype: int64
In [337]: ts[middle:end].resample("17min", origin="start_day").sum()
Out[337]:
2000-10-02 00:00:00 33
2000-10-02 00:17:00 45
Freq: 17T, dtype: int64
Here we can see that, when setting origin to 'epoch', the results after '2000-10-02 00:00:00' are identical regardless of the start of the time series:
In [338]: ts.resample("17min", origin="epoch").sum()
Out[338]:
2000-10-01 23:18:00 0
2000-10-01 23:35:00 18
2000-10-01 23:52:00 27
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
Freq: 17T, dtype: int64
In [339]: ts[middle:end].resample("17min", origin="epoch").sum()
Out[339]:
2000-10-01 23:52:00 15
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
Freq: 17T, dtype: int64
If needed you can use a custom timestamp for origin:
In [340]: ts.resample("17min", origin="2001-01-01").sum()
Out[340]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
In [341]: ts[middle:end].resample("17min", origin=pd.Timestamp("2001-01-01")).sum()
Out[341]:
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
If needed you can just adjust the bins with an offset Timedelta that would be added to the default origin.
Those two examples are equivalent for this time series:
In [342]: ts.resample("17min", origin="start").sum()
Out[342]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
In [343]: ts.resample("17min", offset="23h30min").sum()
Out[343]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
Note the use of 'start' for origin in the last example. In that case, origin will be set to the first value of the time series.
Backward resample#
New in version 1.3.0.
Instead of adjusting the beginning of bins, sometimes we need to fix the end of the bins to make a backward resample with a given freq. The backward resample sets closed to 'right' by default since the last value should be considered as the edge point for the last bin.
We can set origin to 'end'. The value at a specific Timestamp index then stands for the resample result from that Timestamp minus freq up to that Timestamp, closed on the right.
In [344]: ts.resample('17min', origin='end').sum()
Out[344]:
2000-10-01 23:35:00 0
2000-10-01 23:52:00 18
2000-10-02 00:09:00 27
2000-10-02 00:26:00 63
Freq: 17T, dtype: int64
In addition, as the counterpart of the 'start_day' option, 'end_day' is supported. This will set the origin to the ceiling midnight of the largest Timestamp.
In [345]: ts.resample('17min', origin='end_day').sum()
Out[345]:
2000-10-01 23:38:00 3
2000-10-01 23:55:00 15
2000-10-02 00:12:00 45
2000-10-02 00:29:00 45
Freq: 17T, dtype: int64
The above result uses 2000-10-02 00:29:00 as the last bin’s right edge, as shown by the following computation.
In [346]: ceil_mid = rng.max().ceil('D')
In [347]: freq = pd.offsets.Minute(17)
In [348]: bin_res = ceil_mid - freq * ((ceil_mid - rng.max()) // freq)
In [349]: bin_res
Out[349]: Timestamp('2000-10-02 00:29:00')
Time span representation#
Regular intervals of time are represented by Period objects in pandas while
sequences of Period objects are collected in a PeriodIndex, which can
be created with the convenience function period_range.
Period#
A Period represents a span of time (e.g., a day, a month, a quarter, etc).
You can specify the span via the freq keyword using a frequency alias like below.
Because freq represents the span of the Period, it cannot be negative, like “-3D”.
In [350]: pd.Period("2012", freq="A-DEC")
Out[350]: Period('2012', 'A-DEC')
In [351]: pd.Period("2012-1-1", freq="D")
Out[351]: Period('2012-01-01', 'D')
In [352]: pd.Period("2012-1-1 19:00", freq="H")
Out[352]: Period('2012-01-01 19:00', 'H')
In [353]: pd.Period("2012-1-1 19:00", freq="5H")
Out[353]: Period('2012-01-01 19:00', '5H')
Adding and subtracting integers from periods shifts the period by its own
frequency. Arithmetic is not allowed between Period objects with different freq (span).
In [354]: p = pd.Period("2012", freq="A-DEC")
In [355]: p + 1
Out[355]: Period('2013', 'A-DEC')
In [356]: p - 3
Out[356]: Period('2009', 'A-DEC')
In [357]: p = pd.Period("2012-01", freq="2M")
In [358]: p + 2
Out[358]: Period('2012-05', '2M')
In [359]: p - 1
Out[359]: Period('2011-11', '2M')
In [360]: p == pd.Period("2012-01", freq="3M")
Out[360]: False
If a Period’s freq is daily or higher (D, H, T, S, L, U, N), offsets and timedelta-like values can be added if the result can have the same freq. Otherwise, ValueError will be raised.
In [361]: p = pd.Period("2014-07-01 09:00", freq="H")
In [362]: p + pd.offsets.Hour(2)
Out[362]: Period('2014-07-01 11:00', 'H')
In [363]: p + datetime.timedelta(minutes=120)
Out[363]: Period('2014-07-01 11:00', 'H')
In [364]: p + np.timedelta64(7200, "s")
Out[364]: Period('2014-07-01 11:00', 'H')
In [1]: p + pd.offsets.Minute(5)
Traceback
...
ValueError: Input has different freq from Period(freq=H)
If a Period has other frequencies, only offsets of the same kind can be added. Otherwise, ValueError will be raised.
In [365]: p = pd.Period("2014-07", freq="M")
In [366]: p + pd.offsets.MonthEnd(3)
Out[366]: Period('2014-10', 'M')
In [1]: p + pd.offsets.MonthBegin(3)
Traceback
...
ValueError: Input has different freq from Period(freq=M)
Taking the difference of Period instances with the same frequency will
return the number of frequency units between them:
In [367]: pd.Period("2012", freq="A-DEC") - pd.Period("2002", freq="A-DEC")
Out[367]: <10 * YearEnds: month=12>
PeriodIndex and period_range#
Regular sequences of Period objects can be collected in a PeriodIndex,
which can be constructed using the period_range convenience function:
In [368]: prng = pd.period_range("1/1/2011", "1/1/2012", freq="M")
In [369]: prng
Out[369]:
PeriodIndex(['2011-01', '2011-02', '2011-03', '2011-04', '2011-05', '2011-06',
'2011-07', '2011-08', '2011-09', '2011-10', '2011-11', '2011-12',
'2012-01'],
dtype='period[M]')
The PeriodIndex constructor can also be used directly:
In [370]: pd.PeriodIndex(["2011-1", "2011-2", "2011-3"], freq="M")
Out[370]: PeriodIndex(['2011-01', '2011-02', '2011-03'], dtype='period[M]')
Passing a multiplied frequency outputs a sequence of Period objects which
have a multiplied span.
In [371]: pd.period_range(start="2014-01", freq="3M", periods=4)
Out[371]: PeriodIndex(['2014-01', '2014-04', '2014-07', '2014-10'], dtype='period[3M]')
If start or end are Period objects, they will be used as anchor
endpoints for a PeriodIndex with frequency matching that of the
PeriodIndex constructor.
In [372]: pd.period_range(
.....: start=pd.Period("2017Q1", freq="Q"), end=pd.Period("2017Q2", freq="Q"), freq="M"
.....: )
.....:
Out[372]: PeriodIndex(['2017-03', '2017-04', '2017-05', '2017-06'], dtype='period[M]')
Just like DatetimeIndex, a PeriodIndex can also be used to index pandas
objects:
In [373]: ps = pd.Series(np.random.randn(len(prng)), prng)
In [374]: ps
Out[374]:
2011-01 -2.916901
2011-02 0.514474
2011-03 1.346470
2011-04 0.816397
2011-05 2.258648
2011-06 0.494789
2011-07 0.301239
2011-08 0.464776
2011-09 -1.393581
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
2012-01 -0.329583
Freq: M, dtype: float64
PeriodIndex supports addition and subtraction with the same rule as Period.
In [375]: idx = pd.period_range("2014-07-01 09:00", periods=5, freq="H")
In [376]: idx
Out[376]:
PeriodIndex(['2014-07-01 09:00', '2014-07-01 10:00', '2014-07-01 11:00',
'2014-07-01 12:00', '2014-07-01 13:00'],
dtype='period[H]')
In [377]: idx + pd.offsets.Hour(2)
Out[377]:
PeriodIndex(['2014-07-01 11:00', '2014-07-01 12:00', '2014-07-01 13:00',
'2014-07-01 14:00', '2014-07-01 15:00'],
dtype='period[H]')
In [378]: idx = pd.period_range("2014-07", periods=5, freq="M")
In [379]: idx
Out[379]: PeriodIndex(['2014-07', '2014-08', '2014-09', '2014-10', '2014-11'], dtype='period[M]')
In [380]: idx + pd.offsets.MonthEnd(3)
Out[380]: PeriodIndex(['2014-10', '2014-11', '2014-12', '2015-01', '2015-02'], dtype='period[M]')
PeriodIndex has its own dtype named period, refer to Period Dtypes.
Period dtypes#
PeriodIndex has a custom period dtype. This is a pandas extension
dtype similar to the timezone aware dtype (datetime64[ns, tz]).
The period dtype holds the freq attribute and is represented with
period[freq] like period[D] or period[M], using frequency strings.
In [381]: pi = pd.period_range("2016-01-01", periods=3, freq="M")
In [382]: pi
Out[382]: PeriodIndex(['2016-01', '2016-02', '2016-03'], dtype='period[M]')
In [383]: pi.dtype
Out[383]: period[M]
The period dtype can be used in .astype(...). It allows one to change the
freq of a PeriodIndex like .asfreq() and convert a
DatetimeIndex to PeriodIndex like to_period():
# change monthly freq to daily freq
In [384]: pi.astype("period[D]")
Out[384]: PeriodIndex(['2016-01-31', '2016-02-29', '2016-03-31'], dtype='period[D]')
# convert to DatetimeIndex
In [385]: pi.astype("datetime64[ns]")
Out[385]: DatetimeIndex(['2016-01-01', '2016-02-01', '2016-03-01'], dtype='datetime64[ns]', freq='MS')
# convert to PeriodIndex
In [386]: dti = pd.date_range("2011-01-01", freq="M", periods=3)
In [387]: dti
Out[387]: DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31'], dtype='datetime64[ns]', freq='M')
In [388]: dti.astype("period[M]")
Out[388]: PeriodIndex(['2011-01', '2011-02', '2011-03'], dtype='period[M]')
PeriodIndex partial string indexing#
PeriodIndex now supports partial string slicing with non-monotonic indexes.
New in version 1.1.0.
You can pass in dates and strings to Series and DataFrame with PeriodIndex, in the same manner as DatetimeIndex. For details, refer to DatetimeIndex Partial String Indexing.
In [389]: ps["2011-01"]
Out[389]: -2.9169013294054507
In [390]: ps[datetime.datetime(2011, 12, 25):]
Out[390]:
2011-12 2.261385
2012-01 -0.329583
Freq: M, dtype: float64
In [391]: ps["10/31/2011":"12/31/2011"]
Out[391]:
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
Freq: M, dtype: float64
Passing a string representing a lower frequency than the PeriodIndex returns partially sliced data.
In [392]: ps["2011"]
Out[392]:
2011-01 -2.916901
2011-02 0.514474
2011-03 1.346470
2011-04 0.816397
2011-05 2.258648
2011-06 0.494789
2011-07 0.301239
2011-08 0.464776
2011-09 -1.393581
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
Freq: M, dtype: float64
In [393]: dfp = pd.DataFrame(
.....: np.random.randn(600, 1),
.....: columns=["A"],
.....: index=pd.period_range("2013-01-01 9:00", periods=600, freq="T"),
.....: )
.....:
In [394]: dfp
Out[394]:
A
2013-01-01 09:00 -0.538468
2013-01-01 09:01 -1.365819
2013-01-01 09:02 -0.969051
2013-01-01 09:03 -0.331152
2013-01-01 09:04 -0.245334
... ...
2013-01-01 18:55 0.522460
2013-01-01 18:56 0.118710
2013-01-01 18:57 0.167517
2013-01-01 18:58 0.922883
2013-01-01 18:59 1.721104
[600 rows x 1 columns]
In [395]: dfp.loc["2013-01-01 10H"]
Out[395]:
A
2013-01-01 10:00 -0.308975
2013-01-01 10:01 0.542520
2013-01-01 10:02 1.061068
2013-01-01 10:03 0.754005
2013-01-01 10:04 0.352933
... ...
2013-01-01 10:55 -0.865621
2013-01-01 10:56 -1.167818
2013-01-01 10:57 -2.081748
2013-01-01 10:58 -0.527146
2013-01-01 10:59 0.802298
[60 rows x 1 columns]
As with DatetimeIndex, the endpoints will be included in the result. The example below slices data starting from 10:00 to 11:59.
In [396]: dfp["2013-01-01 10H":"2013-01-01 11H"]
Out[396]:
A
2013-01-01 10:00 -0.308975
2013-01-01 10:01 0.542520
2013-01-01 10:02 1.061068
2013-01-01 10:03 0.754005
2013-01-01 10:04 0.352933
... ...
2013-01-01 11:55 -0.590204
2013-01-01 11:56 1.539990
2013-01-01 11:57 -1.224826
2013-01-01 11:58 0.578798
2013-01-01 11:59 -0.685496
[120 rows x 1 columns]
Frequency conversion and resampling with PeriodIndex#
The frequency of Period and PeriodIndex can be converted via the asfreq
method. Let’s start with the fiscal year 2011, ending in December:
In [397]: p = pd.Period("2011", freq="A-DEC")
In [398]: p
Out[398]: Period('2011', 'A-DEC')
We can convert it to a monthly frequency. Using the how parameter, we can
specify whether to return the starting or ending month:
In [399]: p.asfreq("M", how="start")
Out[399]: Period('2011-01', 'M')
In [400]: p.asfreq("M", how="end")
Out[400]: Period('2011-12', 'M')
The shorthands ‘s’ and ‘e’ are provided for convenience:
In [401]: p.asfreq("M", "s")
Out[401]: Period('2011-01', 'M')
In [402]: p.asfreq("M", "e")
Out[402]: Period('2011-12', 'M')
Converting to a “super-period” (e.g., annual frequency is a super-period of
quarterly frequency) automatically returns the super-period that includes the
input period:
In [403]: p = pd.Period("2011-12", freq="M")
In [404]: p.asfreq("A-NOV")
Out[404]: Period('2012', 'A-NOV')
Note that since we converted to an annual frequency that ends the year in
November, the monthly period of December 2011 is actually in the 2012 A-NOV
period.
Period conversions with anchored frequencies are particularly useful for
working with various quarterly data common to economics, business, and other
fields. Many organizations define quarters relative to the month in which their
fiscal year starts and ends. Thus, the first quarter of 2011 could start in 2010 or
a few months into 2011. Via anchored frequencies, pandas works with all quarterly
frequencies Q-JAN through Q-DEC.
Q-DEC defines regular calendar quarters:
In [405]: p = pd.Period("2012Q1", freq="Q-DEC")
In [406]: p.asfreq("D", "s")
Out[406]: Period('2012-01-01', 'D')
In [407]: p.asfreq("D", "e")
Out[407]: Period('2012-03-31', 'D')
Q-MAR defines fiscal year end in March:
In [408]: p = pd.Period("2011Q4", freq="Q-MAR")
In [409]: p.asfreq("D", "s")
Out[409]: Period('2011-01-01', 'D')
In [410]: p.asfreq("D", "e")
Out[410]: Period('2011-03-31', 'D')
Converting between representations#
Timestamped data can be converted to PeriodIndex-ed data using to_period
and vice-versa using to_timestamp:
In [411]: rng = pd.date_range("1/1/2012", periods=5, freq="M")
In [412]: ts = pd.Series(np.random.randn(len(rng)), index=rng)
In [413]: ts
Out[413]:
2012-01-31 1.931253
2012-02-29 -0.184594
2012-03-31 0.249656
2012-04-30 -0.978151
2012-05-31 -0.873389
Freq: M, dtype: float64
In [414]: ps = ts.to_period()
In [415]: ps
Out[415]:
2012-01 1.931253
2012-02 -0.184594
2012-03 0.249656
2012-04 -0.978151
2012-05 -0.873389
Freq: M, dtype: float64
In [416]: ps.to_timestamp()
Out[416]:
2012-01-01 1.931253
2012-02-01 -0.184594
2012-03-01 0.249656
2012-04-01 -0.978151
2012-05-01 -0.873389
Freq: MS, dtype: float64
Remember that ‘s’ and ‘e’ can be used to return the timestamps at the start or
end of the period:
In [417]: ps.to_timestamp("D", how="s")
Out[417]:
2012-01-01 1.931253
2012-02-01 -0.184594
2012-03-01 0.249656
2012-04-01 -0.978151
2012-05-01 -0.873389
Freq: MS, dtype: float64
Converting between period and timestamp enables some convenient arithmetic
functions to be used. In the following example, we convert a quarterly
frequency with year ending in November to 9am of the end of the month following
the quarter end:
In [418]: prng = pd.period_range("1990Q1", "2000Q4", freq="Q-NOV")
In [419]: ts = pd.Series(np.random.randn(len(prng)), prng)
In [420]: ts.index = (prng.asfreq("M", "e") + 1).asfreq("H", "s") + 9
In [421]: ts.head()
Out[421]:
1990-03-01 09:00 -0.109291
1990-06-01 09:00 -0.637235
1990-09-01 09:00 -1.735925
1990-12-01 09:00 2.096946
1991-03-01 09:00 -1.039926
Freq: H, dtype: float64
Representing out-of-bounds spans#
If you have data that is outside of the Timestamp bounds, see Timestamp limitations,
then you can use a PeriodIndex and/or Series of Periods to do computations.
In [422]: span = pd.period_range("1215-01-01", "1381-01-01", freq="D")
In [423]: span
Out[423]:
PeriodIndex(['1215-01-01', '1215-01-02', '1215-01-03', '1215-01-04',
'1215-01-05', '1215-01-06', '1215-01-07', '1215-01-08',
'1215-01-09', '1215-01-10',
...
'1380-12-23', '1380-12-24', '1380-12-25', '1380-12-26',
'1380-12-27', '1380-12-28', '1380-12-29', '1380-12-30',
'1380-12-31', '1381-01-01'],
dtype='period[D]', length=60632)
To convert from an int64-based YYYYMMDD representation:
In [424]: s = pd.Series([20121231, 20141130, 99991231])
In [425]: s
Out[425]:
0 20121231
1 20141130
2 99991231
dtype: int64
In [426]: def conv(x):
.....: return pd.Period(year=x // 10000, month=x // 100 % 100, day=x % 100, freq="D")
.....:
In [427]: s.apply(conv)
Out[427]:
0 2012-12-31
1 2014-11-30
2 9999-12-31
dtype: period[D]
In [428]: s.apply(conv)[2]
Out[428]: Period('9999-12-31', 'D')
These can easily be converted to a PeriodIndex:
In [429]: span = pd.PeriodIndex(s.apply(conv))
In [430]: span
Out[430]: PeriodIndex(['2012-12-31', '2014-11-30', '9999-12-31'], dtype='period[D]')
Time zone handling#
pandas provides rich support for working with timestamps in different time
zones using the pytz and dateutil libraries or datetime.timezone
objects from the standard library.
Working with time zones#
By default, pandas objects are time zone unaware:
In [431]: rng = pd.date_range("3/6/2012 00:00", periods=15, freq="D")
In [432]: rng.tz is None
Out[432]: True
To localize these dates to a time zone (assign a particular time zone to a naive date),
you can use the tz_localize method or the tz keyword argument in
date_range(), Timestamp, or DatetimeIndex.
You can either pass pytz or dateutil time zone objects or Olson time zone database strings.
Olson time zone strings will return pytz time zone objects by default.
To return dateutil time zone objects, append dateutil/ before the string.
In pytz you can find a list of common (and less common) time zones using
from pytz import common_timezones, all_timezones.
dateutil uses the OS time zones so there isn’t a fixed list available. For
common zones, the names are the same as pytz.
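For example, you can peek at the available pytz zone names like this (a quick sketch; the exact contents depend on the installed pytz version):
from pytz import common_timezones, all_timezones

print(len(common_timezones), len(all_timezones))  # counts vary by pytz release
print(common_timezones[:3])                       # a few region/city names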
In [433]: import dateutil
# pytz
In [434]: rng_pytz = pd.date_range("3/6/2012 00:00", periods=3, freq="D", tz="Europe/London")
In [435]: rng_pytz.tz
Out[435]: <DstTzInfo 'Europe/London' LMT-1 day, 23:59:00 STD>
# dateutil
In [436]: rng_dateutil = pd.date_range("3/6/2012 00:00", periods=3, freq="D")
In [437]: rng_dateutil = rng_dateutil.tz_localize("dateutil/Europe/London")
In [438]: rng_dateutil.tz
Out[438]: tzfile('/usr/share/zoneinfo/Europe/London')
# dateutil - utc special case
In [439]: rng_utc = pd.date_range(
.....: "3/6/2012 00:00",
.....: periods=3,
.....: freq="D",
.....: tz=dateutil.tz.tzutc(),
.....: )
.....:
In [440]: rng_utc.tz
Out[440]: tzutc()
New in version 0.25.0.
# datetime.timezone
In [441]: rng_utc = pd.date_range(
.....: "3/6/2012 00:00",
.....: periods=3,
.....: freq="D",
.....: tz=datetime.timezone.utc,
.....: )
.....:
In [442]: rng_utc.tz
Out[442]: datetime.timezone.utc
Note that the UTC time zone is a special case in dateutil and should be constructed explicitly
as an instance of dateutil.tz.tzutc. You can also construct other time
zone objects explicitly first.
In [443]: import pytz
# pytz
In [444]: tz_pytz = pytz.timezone("Europe/London")
In [445]: rng_pytz = pd.date_range("3/6/2012 00:00", periods=3, freq="D")
In [446]: rng_pytz = rng_pytz.tz_localize(tz_pytz)
In [447]: rng_pytz.tz == tz_pytz
Out[447]: True
# dateutil
In [448]: tz_dateutil = dateutil.tz.gettz("Europe/London")
In [449]: rng_dateutil = pd.date_range("3/6/2012 00:00", periods=3, freq="D", tz=tz_dateutil)
In [450]: rng_dateutil.tz == tz_dateutil
Out[450]: True
To convert a time zone aware pandas object from one time zone to another,
you can use the tz_convert method.
In [451]: rng_pytz.tz_convert("US/Eastern")
Out[451]:
DatetimeIndex(['2012-03-05 19:00:00-05:00', '2012-03-06 19:00:00-05:00',
'2012-03-07 19:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
Note
When using pytz time zones, DatetimeIndex will construct a different
time zone object than a Timestamp for the same time zone input. A DatetimeIndex
can hold a collection of Timestamp objects that may have different UTC offsets and cannot be
succinctly represented by one pytz time zone instance while one Timestamp
represents one point in time with a specific UTC offset.
In [452]: dti = pd.date_range("2019-01-01", periods=3, freq="D", tz="US/Pacific")
In [453]: dti.tz
Out[453]: <DstTzInfo 'US/Pacific' LMT-1 day, 16:07:00 STD>
In [454]: ts = pd.Timestamp("2019-01-01", tz="US/Pacific")
In [455]: ts.tz
Out[455]: <DstTzInfo 'US/Pacific' PST-1 day, 16:00:00 STD>
Warning
Be wary of conversions between libraries. For some time zones, pytz and dateutil have different
definitions of the zone. This is more of a problem for unusual time zones than for
‘standard’ zones like US/Eastern.
Warning
Be aware that a time zone definition across versions of time zone libraries may not
be considered equal. This may cause problems when working with stored data that
is localized using one version and operated on with a different version.
See here for how to handle such a situation.
Warning
For pytz time zones, it is incorrect to pass a time zone object directly into
the datetime.datetime constructor
(e.g., datetime.datetime(2011, 1, 1, tzinfo=pytz.timezone('US/Eastern')).
Instead, the datetime needs to be localized using the localize method
on the pytz time zone object.
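A minimal sketch of the correct pattern (illustrative only, not part of the original examples):
import datetime
import pytz

eastern = pytz.timezone("US/Eastern")
# Passing tzinfo=eastern directly would attach a raw LMT offset; localize()
# attaches the proper EST/EDT offset for the given wall time instead.
dt = eastern.localize(datetime.datetime(2011, 1, 1))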
Warning
Be aware that for times in the future, correct conversion between time zones
(and UTC) cannot be guaranteed by any time zone library because a timezone’s
offset from UTC may be changed by the respective government.
Warning
If you are using dates beyond 2038-01-18, due to current deficiencies
in the underlying libraries caused by the year 2038 problem, daylight saving time (DST) adjustments
to timezone aware dates will not be applied. If and when the underlying libraries are fixed,
the DST transitions will be applied.
For example, for two dates that are in British Summer Time (and so would normally be GMT+1), both the following asserts evaluate as true:
In [456]: d_2037 = "2037-03-31T010101"
In [457]: d_2038 = "2038-03-31T010101"
In [458]: DST = "Europe/London"
In [459]: assert pd.Timestamp(d_2037, tz=DST) != pd.Timestamp(d_2037, tz="GMT")
In [460]: assert pd.Timestamp(d_2038, tz=DST) == pd.Timestamp(d_2038, tz="GMT")
Under the hood, all timestamps are stored in UTC. Values from a time zone aware
DatetimeIndex or Timestamp will have their fields (day, hour, minute, etc.)
localized to the time zone. However, timestamps with the same UTC value are
still considered to be equal even if they are in different time zones:
In [461]: rng_eastern = rng_utc.tz_convert("US/Eastern")
In [462]: rng_berlin = rng_utc.tz_convert("Europe/Berlin")
In [463]: rng_eastern[2]
Out[463]: Timestamp('2012-03-07 19:00:00-0500', tz='US/Eastern', freq='D')
In [464]: rng_berlin[2]
Out[464]: Timestamp('2012-03-08 01:00:00+0100', tz='Europe/Berlin', freq='D')
In [465]: rng_eastern[2] == rng_berlin[2]
Out[465]: True
Operations between Series in different time zones will yield UTC
Series, aligning the data on the UTC timestamps:
In [466]: ts_utc = pd.Series(range(3), pd.date_range("20130101", periods=3, tz="UTC"))
In [467]: eastern = ts_utc.tz_convert("US/Eastern")
In [468]: berlin = ts_utc.tz_convert("Europe/Berlin")
In [469]: result = eastern + berlin
In [470]: result
Out[470]:
2013-01-01 00:00:00+00:00 0
2013-01-02 00:00:00+00:00 2
2013-01-03 00:00:00+00:00 4
Freq: D, dtype: int64
In [471]: result.index
Out[471]:
DatetimeIndex(['2013-01-01 00:00:00+00:00', '2013-01-02 00:00:00+00:00',
'2013-01-03 00:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq='D')
To remove time zone information, use tz_localize(None) or tz_convert(None).
tz_localize(None) will remove the time zone yielding the local time representation.
tz_convert(None) will remove the time zone after converting to UTC time.
In [472]: didx = pd.date_range(start="2014-08-01 09:00", freq="H", periods=3, tz="US/Eastern")
In [473]: didx
Out[473]:
DatetimeIndex(['2014-08-01 09:00:00-04:00', '2014-08-01 10:00:00-04:00',
'2014-08-01 11:00:00-04:00'],
dtype='datetime64[ns, US/Eastern]', freq='H')
In [474]: didx.tz_localize(None)
Out[474]:
DatetimeIndex(['2014-08-01 09:00:00', '2014-08-01 10:00:00',
'2014-08-01 11:00:00'],
dtype='datetime64[ns]', freq=None)
In [475]: didx.tz_convert(None)
Out[475]:
DatetimeIndex(['2014-08-01 13:00:00', '2014-08-01 14:00:00',
'2014-08-01 15:00:00'],
dtype='datetime64[ns]', freq='H')
# tz_convert(None) is identical to tz_convert('UTC').tz_localize(None)
In [476]: didx.tz_convert("UTC").tz_localize(None)
Out[476]:
DatetimeIndex(['2014-08-01 13:00:00', '2014-08-01 14:00:00',
'2014-08-01 15:00:00'],
dtype='datetime64[ns]', freq=None)
Fold#
New in version 1.1.0.
For ambiguous times, pandas supports explicitly specifying the keyword-only fold argument.
Due to daylight saving time, one wall clock time can occur twice when shifting
from summer to winter time; fold describes whether the datetime-like corresponds
to the first (0) or the second time (1) the wall clock hits the ambiguous time.
Fold is supported only for constructing from naive datetime.datetime
(see datetime documentation for details) or from Timestamp
or for constructing from components (see below). Only dateutil timezones are supported
(see dateutil documentation
for dateutil methods that deal with ambiguous datetimes) as pytz
timezones do not support fold (see pytz documentation
for details on how pytz deals with ambiguous datetimes). To localize an ambiguous datetime
with pytz, please use Timestamp.tz_localize(). In general, we recommend relying
on Timestamp.tz_localize() when localizing ambiguous datetimes if you need direct
control over how they are handled.
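For example, with pytz an ambiguous wall time can be pinned down via the ambiguous argument (a sketch; True selects the first, DST, occurrence):
pd.Timestamp("2019-10-27 01:30:00").tz_localize("Europe/London", ambiguous=True)
# -> Timestamp('2019-10-27 01:30:00+0100', tz='Europe/London')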
In [477]: pd.Timestamp(
.....: datetime.datetime(2019, 10, 27, 1, 30, 0, 0),
.....: tz="dateutil/Europe/London",
.....: fold=0,
.....: )
.....:
Out[477]: Timestamp('2019-10-27 01:30:00+0100', tz='dateutil//usr/share/zoneinfo/Europe/London')
In [478]: pd.Timestamp(
.....: year=2019,
.....: month=10,
.....: day=27,
.....: hour=1,
.....: minute=30,
.....: tz="dateutil/Europe/London",
.....: fold=1,
.....: )
.....:
Out[478]: Timestamp('2019-10-27 01:30:00+0000', tz='dateutil//usr/share/zoneinfo/Europe/London')
Ambiguous times when localizing#
tz_localize may not be able to determine the UTC offset of a timestamp
because daylight saving time (DST) in a local time zone causes some times to occur
twice within one day (“clocks fall back”). The following options are available:
'raise': Raises a pytz.AmbiguousTimeError (the default behavior)
'infer': Attempt to determine the correct offset based on the monotonicity of the timestamps
'NaT': Replaces ambiguous times with NaT
bool: True represents a DST time, False represents non-DST time. An array-like of bool values is supported for a sequence of times.
In [479]: rng_hourly = pd.DatetimeIndex(
.....: ["11/06/2011 00:00", "11/06/2011 01:00", "11/06/2011 01:00", "11/06/2011 02:00"]
.....: )
.....:
This will fail as there are ambiguous times ('11/06/2011 01:00')
In [2]: rng_hourly.tz_localize('US/Eastern')
AmbiguousTimeError: Cannot infer dst time from Timestamp('2011-11-06 01:00:00'), try using the 'ambiguous' argument
Handle these ambiguous times by specifying the following.
In [480]: rng_hourly.tz_localize("US/Eastern", ambiguous="infer")
Out[480]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', '2011-11-06 01:00:00-04:00',
'2011-11-06 01:00:00-05:00', '2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
In [481]: rng_hourly.tz_localize("US/Eastern", ambiguous="NaT")
Out[481]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', 'NaT', 'NaT',
'2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
In [482]: rng_hourly.tz_localize("US/Eastern", ambiguous=[True, True, False, False])
Out[482]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', '2011-11-06 01:00:00-04:00',
'2011-11-06 01:00:00-05:00', '2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
Nonexistent times when localizing#
A DST transition may also shift the local time ahead by 1 hour creating nonexistent
local times (“clocks spring forward”). The behavior of localizing a timeseries with nonexistent times
can be controlled by the nonexistent argument. The following options are available:
'raise': Raises a pytz.NonExistentTimeError (the default behavior)
'NaT': Replaces nonexistent times with NaT
'shift_forward': Shifts nonexistent times forward to the closest real time
'shift_backward': Shifts nonexistent times backward to the closest real time
timedelta object: Shifts nonexistent times by the timedelta duration
In [483]: dti = pd.date_range(start="2015-03-29 02:30:00", periods=3, freq="H")
# 2:30 is a nonexistent time
Localization of nonexistent times will raise an error by default.
In [2]: dti.tz_localize('Europe/Warsaw')
NonExistentTimeError: 2015-03-29 02:30:00
Transform nonexistent times to NaT or shift the times.
In [484]: dti
Out[484]:
DatetimeIndex(['2015-03-29 02:30:00', '2015-03-29 03:30:00',
'2015-03-29 04:30:00'],
dtype='datetime64[ns]', freq='H')
In [485]: dti.tz_localize("Europe/Warsaw", nonexistent="shift_forward")
Out[485]:
DatetimeIndex(['2015-03-29 03:00:00+02:00', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [486]: dti.tz_localize("Europe/Warsaw", nonexistent="shift_backward")
Out[486]:
DatetimeIndex(['2015-03-29 01:59:59.999999999+01:00',
'2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [487]: dti.tz_localize("Europe/Warsaw", nonexistent=pd.Timedelta(1, unit="H"))
Out[487]:
DatetimeIndex(['2015-03-29 03:30:00+02:00', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [488]: dti.tz_localize("Europe/Warsaw", nonexistent="NaT")
Out[488]:
DatetimeIndex(['NaT', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
Time zone Series operations#
A Series with time zone naive values is
represented with a dtype of datetime64[ns].
In [489]: s_naive = pd.Series(pd.date_range("20130101", periods=3))
In [490]: s_naive
Out[490]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
dtype: datetime64[ns]
A Series with time zone aware values is
represented with a dtype of datetime64[ns, tz], where tz is the time zone.
In [491]: s_aware = pd.Series(pd.date_range("20130101", periods=3, tz="US/Eastern"))
In [492]: s_aware
Out[492]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
The time zone information of both of these Series can be manipulated via the
.dt accessor; see the dt accessor section.
For example, to localize a naive stamp and then convert it to a time zone aware one:
In [493]: s_naive.dt.tz_localize("UTC").dt.tz_convert("US/Eastern")
Out[493]:
0 2012-12-31 19:00:00-05:00
1 2013-01-01 19:00:00-05:00
2 2013-01-02 19:00:00-05:00
dtype: datetime64[ns, US/Eastern]
Time zone information can also be manipulated using the astype method.
This method can convert between different timezone-aware dtypes.
# convert to a new time zone
In [494]: s_aware.astype("datetime64[ns, CET]")
Out[494]:
0 2013-01-01 06:00:00+01:00
1 2013-01-02 06:00:00+01:00
2 2013-01-03 06:00:00+01:00
dtype: datetime64[ns, CET]
Note
Using Series.to_numpy() on a Series returns a NumPy array of the data.
NumPy does not currently support time zones (even though it is printing in the local time zone!),
therefore an object array of Timestamps is returned for time zone aware data:
In [495]: s_naive.to_numpy()
Out[495]:
array(['2013-01-01T00:00:00.000000000', '2013-01-02T00:00:00.000000000',
'2013-01-03T00:00:00.000000000'], dtype='datetime64[ns]')
In [496]: s_aware.to_numpy()
Out[496]:
array([Timestamp('2013-01-01 00:00:00-0500', tz='US/Eastern'),
Timestamp('2013-01-02 00:00:00-0500', tz='US/Eastern'),
Timestamp('2013-01-03 00:00:00-0500', tz='US/Eastern')],
dtype=object)
Converting to an object array of Timestamps preserves the time zone
information. For example, when converting back to a Series:
In [497]: pd.Series(s_aware.to_numpy())
Out[497]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
However, if you want an actual NumPy datetime64[ns] array (with the values
converted to UTC) instead of an array of objects, you can specify the
dtype argument:
In [498]: s_aware.to_numpy(dtype="datetime64[ns]")
Out[498]:
array(['2013-01-01T05:00:00.000000000', '2013-01-02T05:00:00.000000000',
'2013-01-03T05:00:00.000000000'], dtype='datetime64[ns]')
| 657
| 1,024
|
Create a dataframe from a series with a TimeSeriesIndex multiplied by another series
Let's say I have a series, ser1, with a TimeSeriesIndex of length x. I also have another series, ser2, of length y. How do I multiply these so that I get a dataframe of shape (x, y) where the index comes from ser1 and the columns are the index labels of ser2? I want every element of ser2 to be multiplied by the values of each element in ser1.
import pandas as pd
ser1 = pd.Series([100, 105, 110, 114, 89],index=pd.date_range(start='2021-01-01', end='2021-01-05', freq='D'), name='test')
test_ser2 = pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'])
Perhaps this is more elegantly done with numpy.
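One possible approach (a sketch, not an answer taken from the source) is to take the outer product with NumPy and wrap it back into a DataFrame, reattaching both indexes:
import numpy as np
import pandas as pd

ser1 = pd.Series([100, 105, 110, 114, 89],
                 index=pd.date_range(start='2021-01-01', end='2021-01-05', freq='D'),
                 name='test')
test_ser2 = pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'])

# np.outer multiplies every element of ser1 by every element of test_ser2,
# producing an array of shape (len(ser1), len(test_ser2)).
result = pd.DataFrame(np.outer(ser1, test_ser2), index=ser1.index, columns=test_ser2.index)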
|
60,221,287
|
Updating pandas dataframe with new column
|
<p>I want to create a new column with all the distinct values across the rows. Each value in a row is a string (not a list).</p>
<p>This is how the dataframe looks:</p>
<pre><code>+-----------------------------+-------------------------+---------------------------------------------+
| first | second | third |
+-----------------------------+-------------------------+---------------------------------------------+
|['able', 'shovel', 'door'] |['shovel raised'] |['shovel raised', 'raised', 'door', 'shovel']|
|['grade control'] |['grade'] |['grade'] |
|['light telling', 'love'] |['would love', 'closed'] |['closed', 'light'] |
+-----------------------------+-------------------------+---------------------------------------------+
</code></pre>
<p>This is how the dataframe should look after creating a new column with distinct values.</p>
<pre><code>df = pd.DataFrame({'first': "['able', 'shovel', 'door']" , 'second': "['shovel raised']", 'third': "['shovel raised', 'raised', 'door', 'shovel']", "Distinct_set": "['able', 'shovel', 'door', 'shovel raised', 'raised']" }, index = [0])</code></pre>
<p>How can I do it? </p>
| 60,221,424
| 2020-02-14T06:42:08.747000
| 3
| null | 0
| 533
|
python|pandas
|
<p>try this:</p>
<pre><code>df['new_col'] = df.apply(lambda x: list(set(x['first'] + x['second']+x['third'])), axis =1)
</code></pre>
<p>This creates a set of single characters, because the data in each cell is a string:</p>
<p>"['able', 'shovel', 'door']"</p>
<p>To correct this, use the following:</p>
<pre><code>df['new_col'] = df.apply(lambda x: list(set(eval(x['first']) + eval(x['second'])+eval(x['third']))), axis =1)
</code></pre>
| 2020-02-14T06:55:10.790000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.update.html
|
pandas.DataFrame.update#
pandas.DataFrame.update#
DataFrame.update(other, join='left', overwrite=True, filter_func=None, errors='ignore')[source]#
Modify in place using non-NA values from another DataFrame.
Aligns on indices. There is no return value.
Parameters
otherDataFrame, or object coercible into a DataFrameShould have at least one matching index/column label
with the original DataFrame. If a Series is passed,
its name attribute must be set, and that will be
used as the column name to align with the original DataFrame.
join{‘left’}, default ‘left’Only left join is implemented, keeping the index and columns of the
try this:
df['new_col'] = df.apply(lambda x: list(set(x['first'] + x['second']+x['third'])), axis =1)
This creates a set of single characters, because the data in each cell is a string:
"['able', 'shovel', 'door']"
To correct this, use the following:
df['new_col'] = df.apply(lambda x: list(set(eval(x['first']) + eval(x['second'])+eval(x['third']))), axis =1)
original object.
overwritebool, default TrueHow to handle non-NA values for overlapping keys:
True: overwrite original DataFrame’s values
with values from other.
False: only update values that are NA in
the original DataFrame.
filter_funccallable(1d-array) -> bool 1d-array, optionalCan choose to replace values other than NA. Return True for values
that should be updated.
errors{‘raise’, ‘ignore’}, default ‘ignore’If ‘raise’, will raise a ValueError if the DataFrame and other
both contain non-NA data in the same place.
Returns
Nonemethod directly changes calling object
Raises
ValueError
When errors=’raise’ and there’s overlapping non-NA data.
When errors is not either ‘ignore’ or ‘raise’
NotImplementedError
If join != ‘left’
See also
dict.updateSimilar method for dictionaries.
DataFrame.mergeFor column(s)-on-column(s) operations.
Examples
>>> df = pd.DataFrame({'A': [1, 2, 3],
... 'B': [400, 500, 600]})
>>> new_df = pd.DataFrame({'B': [4, 5, 6],
... 'C': [7, 8, 9]})
>>> df.update(new_df)
>>> df
A B
0 1 4
1 2 5
2 3 6
The DataFrame’s length does not increase as a result of the update,
only values at matching index/column labels are updated.
>>> df = pd.DataFrame({'A': ['a', 'b', 'c'],
... 'B': ['x', 'y', 'z']})
>>> new_df = pd.DataFrame({'B': ['d', 'e', 'f', 'g', 'h', 'i']})
>>> df.update(new_df)
>>> df
A B
0 a d
1 b e
2 c f
For Series, its name attribute must be set.
>>> df = pd.DataFrame({'A': ['a', 'b', 'c'],
... 'B': ['x', 'y', 'z']})
>>> new_column = pd.Series(['d', 'e'], name='B', index=[0, 2])
>>> df.update(new_column)
>>> df
A B
0 a d
1 b y
2 c e
>>> df = pd.DataFrame({'A': ['a', 'b', 'c'],
... 'B': ['x', 'y', 'z']})
>>> new_df = pd.DataFrame({'B': ['d', 'e']}, index=[1, 2])
>>> df.update(new_df)
>>> df
A B
0 a x
1 b d
2 c e
If other contains NaNs the corresponding values are not updated
in the original dataframe.
>>> df = pd.DataFrame({'A': [1, 2, 3],
... 'B': [400, 500, 600]})
>>> new_df = pd.DataFrame({'B': [4, np.nan, 6]})
>>> df.update(new_df)
>>> df
A B
0 1 4.0
1 2 500.0
2 3 6.0
| 634
| 968
|
Updating pandas dataframe with new column
I want to create a new column with all the distinct values across the rows. Each value in a row is a string (not a list).
This is how the dataframe looks:
+-----------------------------+-------------------------+---------------------------------------------+
| first | second | third |
+-----------------------------+-------------------------+---------------------------------------------+
|['able', 'shovel', 'door'] |['shovel raised'] |['shovel raised', 'raised', 'door', 'shovel']|
|['grade control'] |['grade'] |['grade'] |
|['light telling', 'love'] |['would love', 'closed'] |['closed', 'light'] |
+-----------------------------+-------------------------+---------------------------------------------+
This is how the dataframe should look after creating a new column with distinct values.
df = pd.DataFrame({'first': "['able', 'shovel', 'door']" , 'second': "['shovel raised']", 'third': "['shovel raised', 'raised', 'door', 'shovel']", "Distinct_set": "['able', 'shovel', 'door', 'shovel raised', 'raised']" }, index = [0])
How can I do it?
|
63,603,881
|
How to get total of groupby cumsum row by row
|
<p>I have a df that looks like this:</p>
<pre><code>519 962.966667 91.525424 out_of_range 0 55.932203
520 970.666667 91.525424 out_of_range 1 91.525424
521 971.766667 81.355932 out_of_range 2 91.525424
522 972.900000 76.271186 out_of_range 3 81.355932
523 974.000000 76.271186 out_of_range 4 76.271186
524 975.100000 76.271186 out_of_range 5 76.271186
525 975.833333 76.271186 out_of_range 6 76.271186
526 977.066667 76.271186 out_of_range 7 76.271186
527 977.933333 76.271186 out_of_range 8 76.271186
528 978.833333 76.271186 out_of_range 9 76.271186
529 980.066667 55.932203 in_range 0 76.271186
530 981.200000 55.932203 in_range 1 55.932203
531 985.933333 66.101695 in_range 2 55.932203
532 987.566667 66.101695 in_range 3 66.101695
533 989.033333 55.932203 in_range 4 66.101695
534 991.000000 111.864407 out_of_range 0 55.932203
535 1004.900000 111.864407 out_of_range 1 111.864407
536 1006.033333 111.864407 out_of_range 2 111.864407
537 1007.166667 66.101695 in_range 0 111.864407
538 1008.300000 66.101695 in_range 1 66.101695
</code></pre>
<p>df[3] indicates whether a certain value is in or out of a set range. df[4] indicates the cumulative count for each in_range or out_of_range group.</p>
<p>How do I create a column that applies the size of each in_range / out_of_range group to the entire group, row by row, like this (last column):</p>
<pre><code>519 962.966667 91.525424 out_of_range 0 55.932203 9
520 970.666667 91.525424 out_of_range 1 91.525424 9
521 971.766667 81.355932 out_of_range 2 91.525424 9
522 972.900000 76.271186 out_of_range 3 81.355932 9
523 974.000000 76.271186 out_of_range 4 76.271186 9
524 975.100000 76.271186 out_of_range 5 76.271186 9
525 975.833333 76.271186 out_of_range 6 76.271186 9
526 977.066667 76.271186 out_of_range 7 76.271186 9
527 977.933333 76.271186 out_of_range 8 76.271186 9
528 978.833333 76.271186 out_of_range 9 76.271186 9
529 980.066667 55.932203 in_range 0 76.271186 4
530 981.200000 55.932203 in_range 1 55.932203 4
531 985.933333 66.101695 in_range 2 55.932203 4
532 987.566667 66.101695 in_range 3 66.101695 4
533 989.033333 55.932203 in_range 4 66.101695 4
534 991.000000 111.864407 out_of_range 0 55.932203 2
535 1004.900000 111.864407 out_of_range 1 111.864407 2
536 1006.033333 111.864407 out_of_range 2 111.864407 2
537 1007.166667 66.101695 in_range 0 111.864407 1
538 1008.300000 66.101695 in_range 1 66.101695 1
</code></pre>
| 63,604,073
| 2020-08-26T18:46:37.753000
| 1
| null | 1
| 23
|
python|pandas
|
<p>I'm not sure how you get the <code>cumcount</code> originally. You could change <code>groupby().cumcount()</code> to <code>groupby().size()</code> to get the desired numbers.</p>
<p>That said, with the current dataframe, you can use <code>cumsum()</code> to identify the blocks and <code>groupby().transform()</code>:</p>
<pre><code>df['cumcount'] = df[4].groupby(df[4].eq(0).cumsum()).transform('max')
</code></pre>
<p>Output:</p>
<pre><code> 0 1 2 3 4 5 cumcount
0 519 962.966667 91.525424 out_of_range 0 55.932203 9
1 520 970.666667 91.525424 out_of_range 1 91.525424 9
2 521 971.766667 81.355932 out_of_range 2 91.525424 9
3 522 972.900000 76.271186 out_of_range 3 81.355932 9
4 523 974.000000 76.271186 out_of_range 4 76.271186 9
5 524 975.100000 76.271186 out_of_range 5 76.271186 9
6 525 975.833333 76.271186 out_of_range 6 76.271186 9
7 526 977.066667 76.271186 out_of_range 7 76.271186 9
8 527 977.933333 76.271186 out_of_range 8 76.271186 9
9 528 978.833333 76.271186 out_of_range 9 76.271186 9
10 529 980.066667 55.932203 in_range 0 76.271186 4
11 530 981.200000 55.932203 in_range 1 55.932203 4
12 531 985.933333 66.101695 in_range 2 55.932203 4
13 532 987.566667 66.101695 in_range 3 66.101695 4
14 533 989.033333 55.932203 in_range 4 66.101695 4
15 534 991.000000 111.864407 out_of_range 0 55.932203 2
16 535 1004.900000 111.864407 out_of_range 1 111.864407 2
17 536 1006.033333 111.864407 out_of_range 2 111.864407 2
18 537 1007.166667 66.101695 in_range 0 111.864407 1
19 538 1008.300000 66.101695 in_range 1 66.101695 1
</code></pre>
| 2020-08-26T18:58:52.393000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.cumsum.html
|
I'm not sure how you get the cumcount originally. You could change groupby().cumcount() to groupby().size() to get the desired numbers.
That said, with the current dataframe, you can use cumsum() to identify the blocks and groupby().transform():
df['cumcount'] = df[4].groupby(df[4].eq(0).cumsum()).transform('max')
Output:
0 1 2 3 4 5 cumcount
0 519 962.966667 91.525424 out_of_range 0 55.932203 9
1 520 970.666667 91.525424 out_of_range 1 91.525424 9
2 521 971.766667 81.355932 out_of_range 2 91.525424 9
3 522 972.900000 76.271186 out_of_range 3 81.355932 9
4 523 974.000000 76.271186 out_of_range 4 76.271186 9
5 524 975.100000 76.271186 out_of_range 5 76.271186 9
6 525 975.833333 76.271186 out_of_range 6 76.271186 9
7 526 977.066667 76.271186 out_of_range 7 76.271186 9
8 527 977.933333 76.271186 out_of_range 8 76.271186 9
9 528 978.833333 76.271186 out_of_range 9 76.271186 9
10 529 980.066667 55.932203 in_range 0 76.271186 4
11 530 981.200000 55.932203 in_range 1 55.932203 4
12 531 985.933333 66.101695 in_range 2 55.932203 4
13 532 987.566667 66.101695 in_range 3 66.101695 4
14 533 989.033333 55.932203 in_range 4 66.101695 4
15 534 991.000000 111.864407 out_of_range 0 55.932203 2
16 535 1004.900000 111.864407 out_of_range 1 111.864407 2
17 536 1006.033333 111.864407 out_of_range 2 111.864407 2
18 537 1007.166667 66.101695 in_range 0 111.864407 1
19 538 1008.300000 66.101695 in_range 1 66.101695 1
| 0
| 1,841
|
How to get total of groupby cumsum row by row
I have a df that looks like this:
519 962.966667 91.525424 out_of_range 0 55.932203
520 970.666667 91.525424 out_of_range 1 91.525424
521 971.766667 81.355932 out_of_range 2 91.525424
522 972.900000 76.271186 out_of_range 3 81.355932
523 974.000000 76.271186 out_of_range 4 76.271186
524 975.100000 76.271186 out_of_range 5 76.271186
525 975.833333 76.271186 out_of_range 6 76.271186
526 977.066667 76.271186 out_of_range 7 76.271186
527 977.933333 76.271186 out_of_range 8 76.271186
528 978.833333 76.271186 out_of_range 9 76.271186
529 980.066667 55.932203 in_range 0 76.271186
530 981.200000 55.932203 in_range 1 55.932203
531 985.933333 66.101695 in_range 2 55.932203
532 987.566667 66.101695 in_range 3 66.101695
533 989.033333 55.932203 in_range 4 66.101695
534 991.000000 111.864407 out_of_range 0 55.932203
535 1004.900000 111.864407 out_of_range 1 111.864407
536 1006.033333 111.864407 out_of_range 2 111.864407
537 1007.166667 66.101695 in_range 0 111.864407
538 1008.300000 66.101695 in_range 1 66.101695
df[3] indicates whether a certain value is in or out of a set range. df[4] indicates the cumulative count for each in_range or out_of_range group.
How do I create a column that applies the size of each in_range / out_of_range group to the entire group, row by row, like this (last column):
519 962.966667 91.525424 out_of_range 0 55.932203 9
520 970.666667 91.525424 out_of_range 1 91.525424 9
521 971.766667 81.355932 out_of_range 2 91.525424 9
522 972.900000 76.271186 out_of_range 3 81.355932 9
523 974.000000 76.271186 out_of_range 4 76.271186 9
524 975.100000 76.271186 out_of_range 5 76.271186 9
525 975.833333 76.271186 out_of_range 6 76.271186 9
526 977.066667 76.271186 out_of_range 7 76.271186 9
527 977.933333 76.271186 out_of_range 8 76.271186 9
528 978.833333 76.271186 out_of_range 9 76.271186 9
529 980.066667 55.932203 in_range 0 76.271186 4
530 981.200000 55.932203 in_range 1 55.932203 4
531 985.933333 66.101695 in_range 2 55.932203 4
532 987.566667 66.101695 in_range 3 66.101695 4
533 989.033333 55.932203 in_range 4 66.101695 4
534 991.000000 111.864407 out_of_range 0 55.932203 2
535 1004.900000 111.864407 out_of_range 1 111.864407 2
536 1006.033333 111.864407 out_of_range 2 111.864407 2
537 1007.166667 66.101695 in_range 0 111.864407 1
538 1008.300000 66.101695 in_range 1 66.101695 1
|
64,812,644
|
Pandas - create new column with the sum of last N values of another column
|
<p>I have this df:</p>
<pre><code> round_id team opponent home_dummy GC GP P
0 1.0 Flamengo Atlético-MG 1.0 1.0 0.0 0
1 4.0 Flamengo Grêmio 1.0 1.0 1.0 1
2 5.0 Flamengo Botafogo 1.0 1.0 1.0 1
3 6.0 Flamengo Santos 0.0 0.0 1.0 3
4 7.0 Flamengo Bahia 0.0 3.0 5.0 3
5 8.0 Flamengo Fortaleza 1.0 1.0 2.0 3
6 9.0 Flamengo Fluminense 0.0 1.0 2.0 3
7 10.0 Flamengo Ceará 0.0 2.0 0.0 0
8 3.0 Flamengo Coritiba 0.0 0.0 1.0 3
9 11.0 Flamengo Goiás 1.0 1.0 2.0 3
10 13.0 Flamengo Athlético-PR 1.0 1.0 3.0 3
11 14.0 Flamengo Sport 1.0 0.0 3.0 3
12 15.0 Flamengo Vasco 0.0 1.0 2.0 3
13 16.0 Flamengo Bragantino 1.0 1.0 1.0 1
14 17.0 Flamengo Corinthians 0.0 1.0 5.0 3
15 18.0 Flamengo Internacional 0.0 2.0 2.0 1
16 19.0 Flamengo São Paulo 1.0 4.0 1.0 0
17 12.0 Flamengo Palmeiras 0.0 1.0 1.0 1
18 2.0 Flamengo Atlético-GO 0.0 3.0 0.0 0
19 20.0 Flamengo Atlético-MG 0.0 4.0 0.0 0
</code></pre>
<hr />
<p>Now I'd like to add a column 'last_5', which consists of the sum of the last 5 'P' values, ending up with:</p>
<pre><code> rodada_id clube opponent home_dummy GC GP P last_5
0 1.0 Flamengo Atlético-MG 1.0 1.0 0.0 0 0
1 4.0 Flamengo Grêmio 1.0 1.0 1.0 1 0
2 5.0 Flamengo Botafogo 1.0 1.0 1.0 1 1
3 6.0 Flamengo Santos 0.0 0.0 1.0 3 2
4 7.0 Flamengo Bahia 0.0 3.0 5.0 3 5
5 8.0 Flamengo Fortaleza 1.0 1.0 2.0 3 8
6 9.0 Flamengo Fluminense 0.0 1.0 2.0 3 11
7 10.0 Flamengo Ceará 0.0 2.0 0.0 0 13
8 3.0 Flamengo Coritiba 0.0 0.0 1.0 3 12
9 11.0 Flamengo Goiás 1.0 1.0 2.0 3 12
10 13.0 Flamengo Athlético-PR 1.0 1.0 3.0 3 12
11 14.0 Flamengo Sport 1.0 0.0 3.0 3 12
12 15.0 Flamengo Vasco 0.0 1.0 2.0 3 12
13 16.0 Flamengo Bragantino 1.0 1.0 1.0 1 15
14 17.0 Flamengo Corinthians 0.0 1.0 5.0 3 13
15 18.0 Flamengo Internacional 0.0 2.0 2.0 1 11
16 19.0 Flamengo São Paulo 1.0 4.0 1.0 0 8
17 12.0 Flamengo Palmeiras 0.0 1.0 1.0 1 8
18 2.0 Flamengo Atlético-GO 0.0 3.0 0.0 0 6
19 20.0 Flamengo Atlético-MG 0.0 4.0 0.0 0 5
</code></pre>
<p>Please note that up to index 4 (n=5), the sum will have to be of the last 1, 2, 3, 4 rows.</p>
<p>I have tried:</p>
<pre><code>N = 5
df = df.groupby(df.P // N).sum()
</code></pre>
<p>But this does not work.</p>
| 64,812,675
| 2020-11-12T22:18:08.570000
| 1
| null | 1
| 24
|
pandas
|
<p>Let us try</p>
<pre><code>df['Last_5'] = df.P.rolling(5,min_periods=1).sum().shift().fillna(0)
Out[9]:
0 0.0
1 0.0
2 1.0
3 2.0
4 5.0
5 8.0
6 11.0
7 13.0
8 12.0
9 12.0
10 12.0
11 12.0
12 12.0
13 15.0
14 13.0
15 13.0
16 11.0
17 8.0
18 6.0
19 5.0
</code></pre>
| 2020-11-12T22:20:33.073000
| 1
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Let us try
df['Last_5'] = df.P.rolling(5,min_periods=1).sum().shift().fillna(0)
Out[9]:
0 0.0
1 0.0
2 1.0
3 2.0
4 5.0
5 8.0
6 11.0
7 13.0
8 12.0
9 12.0
10 12.0
11 12.0
12 12.0
13 15.0
14 13.0
15 13.0
16 11.0
17 8.0
18 6.0
19 5.0
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures are generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
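For orientation, a rough pandas equivalent of the SQL above might look like the following (a sketch only; the DataFrame df and its column names here are hypothetical):
df.groupby(["Column1", "Column2"]).agg(
    Column3_mean=("Column3", "mean"),
    Column4_sum=("Column4", "sum"),
)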
See the cookbook for some advanced strategies.
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping (a short sketch of this form follows the list).
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
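As a quick illustration of the dict form (a sketch using the animal-indexed DataFrame from In [1] above; the "fast"/"slow" labels are made up):
speed = {"falcon": "fast", "parrot": "fast", "lion": "slow", "monkey": "slow", "leopard": "slow"}
df.groupby(speed).size()  # index labels are mapped to "fast" / "slow" and then grouped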
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
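For instance (a hedged sketch, not from the original text), passing a column name that does not exist fails immediately when the GroupBy object is created, while the actual splitting is deferred:
df.groupby("Z")        # "Z" is a hypothetical missing column: raises KeyError right away
df.groupby("A").sum()  # the split and the aggregation are only computed here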
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of dropna argument is True which means NA are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and whose values are the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
When as_index=True (the default), aggregation functions will not return the groups
that you are aggregating over if they are named columns. The grouped columns will
instead be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function - Description
mean() - Compute mean of groups
sum() - Compute sum of group values
size() - Compute group sizes
count() - Compute count of group
std() - Standard deviation of groups
var() - Compute variance of groups
sem() - Standard error of the mean of groups
describe() - Generates descriptive statistics
first() - Compute first of group values
last() - Compute last of group values
nth() - Take nth value, or a subset if n is a list
min() - Compute min of group values
max() - Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work;
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
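As a brief illustration (a sketch reusing the df from the examples above), nth() can either reduce to one row per group or filter to several rows per group:
df.groupby("A").nth(0)       # reducer-like: the first row of each group
df.groupby("A").nth([0, 1])  # filter-like: the first two rows of each group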
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
the keywords are the output column names and the values are tuples whose first
element is the column to select and the second element is the aggregation to
apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation functions
require additional arguments, partially apply them with functools.partial().
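As a sketch of that pattern (reusing the animals frame from above; the 90th-percentile choice is just illustrative), the extra argument is bound up front so that agg only ever sees a plain (column, aggfunc) pair:
import functools
import numpy as np
# Bind q=0.9 before handing the callable to agg
q90 = functools.partial(np.quantile, q=0.9)
animals.groupby("kind").agg(weight_q90=("weight", q90))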
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5 and earlier.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed objects where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it and exactly what you are grouping. Thus the
grouped column(s) may be included in the output and may also set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
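The function above is defined but not invoked in this excerpt; applying it would look like the following sketch (output omitted), where each group’s Series is expanded into a two-column DataFrame that apply then glues back together:
# Dimension changes: each one-dimensional group becomes a two-column DataFrame
grouped.apply(f)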
apply on a Series can operate on a returned value from the applied function
that is itself a Series, and can possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
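A minimal sketch of such a UDF, assuming the optional numba dependency is installed and reusing the animals frame from earlier (the hand-rolled mean below exists only to show the required (values, index) signature; the engine_kwargs shown are the defaults):
def numba_mean(values, index):
    # values arrives as a NumPy array of the group's data; index as the group's index values
    total = 0.0
    for v in values:
        total += v
    return total / len(values)
# pandas JIT-compiles numba_mean when engine="numba" is requested
animals.groupby("kind")["height"].agg(
    numba_mean, engine="numba", engine_kwargs={"nopython": True, "nogil": False, "parallel": False}
)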
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only of interest for one column (here colname), that column may be selected
before applying the aggregation function.
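In terms of the example df above, that means preferring the first form in the sketch below; both compute the same per-group standard deviation of C, but the first only ever touches that column:
# Select the column first, then aggregate just that column
df.groupby("A")["C"].std()
# Less efficient: aggregate every numeric column, then discard all but C
df.groupby("A").std(numeric_only=True)["C"]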
Note
Any object column, even if it contains numerical values such as Decimal
objects, is considered a “nuisance” column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The dtype of the grouped result’s index will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
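A small sketch of this behavior (the frame below is hypothetical, not one of the surrounding examples): with the default dropna=True the NaN key simply disappears from the result, while dropna=False (pandas 1.1+) keeps it as its own group:
import numpy as np
import pandas as pd
df_na = pd.DataFrame({"key": ["a", np.nan, "a", np.nan], "val": [1, 2, 3, 4]})
df_na.groupby("key").sum()                 # only the "a" group remains
df_na.groupby("key", dropna=False).sum()   # the NaN key forms its own group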
Grouping with ordered factors#
Categorical variables represented as instance of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group; in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resampling to work on indices that are non-datetimelike, the following procedure can be used.
In the following example, df.index // 5 returns an array of integer bin labels which is used to determine which rows get selected for the groupby operation.
Note
The example below shows how we can downsample by consolidating samples into fewer samples. By using df.index // 5, we aggregate the samples into bins. By applying the std() function, we condense the information contained in many samples into a small subset of values, namely their standard deviation, thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 304
| 614
|
Pandas - create new column with the sum of last N values of another column
I have this df:
round_id team opponent home_dummy GC GP P
0 1.0 Flamengo Atlético-MG 1.0 1.0 0.0 0
1 4.0 Flamengo Grêmio 1.0 1.0 1.0 1
2 5.0 Flamengo Botafogo 1.0 1.0 1.0 1
3 6.0 Flamengo Santos 0.0 0.0 1.0 3
4 7.0 Flamengo Bahia 0.0 3.0 5.0 3
5 8.0 Flamengo Fortaleza 1.0 1.0 2.0 3
6 9.0 Flamengo Fluminense 0.0 1.0 2.0 3
7 10.0 Flamengo Ceará 0.0 2.0 0.0 0
8 3.0 Flamengo Coritiba 0.0 0.0 1.0 3
9 11.0 Flamengo Goiás 1.0 1.0 2.0 3
10 13.0 Flamengo Athlético-PR 1.0 1.0 3.0 3
11 14.0 Flamengo Sport 1.0 0.0 3.0 3
12 15.0 Flamengo Vasco 0.0 1.0 2.0 3
13 16.0 Flamengo Bragantino 1.0 1.0 1.0 1
14 17.0 Flamengo Corinthians 0.0 1.0 5.0 3
15 18.0 Flamengo Internacional 0.0 2.0 2.0 1
16 19.0 Flamengo São Paulo 1.0 4.0 1.0 0
17 12.0 Flamengo Palmeiras 0.0 1.0 1.0 1
18 2.0 Flamengo Atlético-GO 0.0 3.0 0.0 0
19 20.0 Flamengo Atlético-MG 0.0 4.0 0.0 0
Now I'd like to add a column 'last_5', which consists of the sum of the last 5 'P' values, ending up with:
rodada_id clube opponent home_dummy GC GP P last_5
0 1.0 Flamengo Atlético-MG 1.0 1.0 0.0 0 0
1 4.0 Flamengo Grêmio 1.0 1.0 1.0 1 0
2 5.0 Flamengo Botafogo 1.0 1.0 1.0 1 1
3 6.0 Flamengo Santos 0.0 0.0 1.0 3 2
4 7.0 Flamengo Bahia 0.0 3.0 5.0 3 5
5 8.0 Flamengo Fortaleza 1.0 1.0 2.0 3 8
6 9.0 Flamengo Fluminense 0.0 1.0 2.0 3 11
7 10.0 Flamengo Ceará 0.0 2.0 0.0 0 13
8 3.0 Flamengo Coritiba 0.0 0.0 1.0 3 12
9 11.0 Flamengo Goiás 1.0 1.0 2.0 3 12
10 13.0 Flamengo Athlético-PR 1.0 1.0 3.0 3 12
11 14.0 Flamengo Sport 1.0 0.0 3.0 3 12
12 15.0 Flamengo Vasco 0.0 1.0 2.0 3 12
13 16.0 Flamengo Bragantino 1.0 1.0 1.0 1 15
14 17.0 Flamengo Corinthians 0.0 1.0 5.0 3 13
15 18.0 Flamengo Internacional 0.0 2.0 2.0 1 11
16 19.0 Flamengo São Paulo 1.0 4.0 1.0 0 8
17 12.0 Flamengo Palmeiras 0.0 1.0 1.0 1 8
18 2.0 Flamengo Atlético-GO 0.0 3.0 0.0 0 6
19 20.0 Flamengo Atlético-MG 0.0 4.0 0.0 0 5
Please note that up to index 4 (n=5), the sum will have to be of the last 1, 2, 3, 4 rows.
I have tried:
N = 5
df = df.groupby(df.P // N).sum()
But this does not work.
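One way this could be expressed (an illustrative sketch added here, not part of the original post) is a shifted rolling sum over P rather than a groupby:
# Sum of the previous five P values, excluding the current row
df["last_5"] = df["P"].shift(fill_value=0).rolling(5, min_periods=1).sum().astype(int)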
|
59,620,657
|
group 2 columns and based on the group value take the group based on specific value
|
<p>My code:</p>
<pre><code>data = pd.DataFrame({'a': [1,2,3,4,5,6,7,8],
'group': [1,1,1,1,2,2,2,2],
'check':[0.5, 0.5,0.5,0.3,0.3,0.3,0.2,0.2]})
</code></pre>
<p>output:</p>
<pre><code>data.groupby(['group','check']).size()
group check
1 0.3 1
0.5 3
2 0.2 2
0.3 2
dtype: int64
</code></pre>
<p>I wish to get</p>
<p>Since we have group '1' and '2'.</p>
<p>based on the above output, I wish to take only the second group or any group above 1(given if we have more than 2 groups).</p>
<p>example output:</p>
<pre><code>group check
2 0.2 2
0.3 2
dtype: int64
</code></pre>
| 59,620,722
| 2020-01-07T00:25:12.007000
| 1
| null | 0
| 26
|
python|pandas
|
<p>You can do the following. So here, we are getting the individual <code>groups</code> and getting all the items where group key does not have 1 in the 0th element. Each key would be a tuple <code>(group_id, check_val)</code> and then concat them back and do a <code>groupby</code>.</p>
<pre><code>grps = [grp for k, grp in data.groupby(['group','check']).groups.items() if k[0]!=1]
new_df = pd.concat([data.loc[g] for g in grps]).groupby(['group', 'check']).size()
</code></pre>
<p>Which gives,</p>
<pre><code>group check
2 0.2 2
0.3 2
dtype: int64
</code></pre>
<h2>Option 2:</h2>
<pre><code>new_df = data.loc[(data['group']!=1)].groupby(['group', 'check']).size()
</code></pre>
| 2020-01-07T00:35:37.903000
| 1
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
You can do the following. So here, we are getting the individual groups and getting all the items where group key does not have 1 in the 0th element. Each key would be a tuple (group_id, check_val) and then concat them back and do a groupby.
grps = [grp for k, grp in data.groupby(['group','check']).groups.items() if k[0]!=1]
new_df = pd.concat([data.loc[g] for g in grps]).groupby(['group', 'check']).size()
Which gives,
group check
2 0.2 2
0.3 2
dtype: int64
Option 2:
new_df = data.loc[(data['group']!=1)].groupby(['group', 'check']).size()
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures is generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
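As a small illustration of how creative the label mapping can be (a hypothetical sketch, not taken from the guide), any function of the index labels works as a grouping key, here the first letter of each label:
s2 = pd.Series([1, 2, 3, 4], index=["apple", "avocado", "banana", "blueberry"])
# The function is called once per index label; its return value becomes the group name
s2.groupby(lambda label: label[0]).sum()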
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
if you want to include NA values in group keys, you can pass dropna=False to achieve this.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of the dropna argument is True, which means NA values are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and whose corresponding values are the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. With as_index=True it returns a Series whose
index are the group names and whose values are the sizes of each group; with as_index=False
(as for the grouped object above) the group keys are returned as ordinary columns next to a size column.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values in each group. This is similar to the value_counts function, except that it counts only the number of distinct values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
when they are named columns and as_index=True (the default); the grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over as ordinary
columns, if they are named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function
Description
mean()
Compute mean of groups
sum()
Compute sum of group values
size()
Compute group sizes
count()
Compute count of group
std()
Standard deviation of groups
var()
Compute variance of groups
sem()
Standard error of the mean of groups
describe()
Generates descriptive statistics
first()
Compute first of group values
last()
Compute last of group values
nth()
Take nth value, or a subset if n is a list
min()
Compute min of group values
max()
Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work;
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
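As a hedged sketch of the point above (the name peak_to_peak is illustrative, not from the original page), any function that reduces a Series to a scalar can be passed to agg:
# peak_to_peak reduces each group's Series to a single number, so it is a
# valid aggregation function.
def peak_to_peak(ser):
    return ser.max() - ser.min()

df.groupby("A")[["C", "D"]].agg(peak_to_peak)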
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python identifiers (for example, they contain spaces), construct a dictionary
and unpack the keyword arguments:
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, apply them partially with functools.partial().
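A minimal sketch (an illustration, not from the original page) of binding an extra argument with functools.partial() before using it in a named aggregation:
import functools

# quantile needs a q argument, so bind it first and pass the partial as aggfunc.
q90 = functools.partial(pd.Series.quantile, q=0.9)

animals.groupby("kind").agg(
    height_q90=pd.NamedAgg(column="height", aggfunc=q90),
)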
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5 and earlier.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
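For example (a hedged sketch, not from the original page), prefer a UDF that returns a new result over one that modifies the passed Series in place:
# Safe: builds a sorted copy and returns a scalar without touching the input.
def smallest(ser):
    return ser.sort_values().iloc[0]

# Avoid: calling ser.sort_values(inplace=True) inside the UDF would mutate the group data.
animals.groupby("kind")["height"].agg(smallest)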
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions, when applied to a GroupBy object, will automatically transform the
input while returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed object where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
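The criterion need not involve group size; any function that returns a single boolean per group works. A hedged sketch (not from the original page) filtering on the group mean of column A:
# Keep only the groups whose mean of column A exceeds 3 (here groups 'b' and 'c').
dff.groupby("B").filter(lambda g: g["A"].mean() > 3)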
Note
Some functions, when applied to a groupby object, will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these filtration methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on exactly what is passed to it and on what you are grouping. Thus the
grouped column(s) may be included in the output and may also set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
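The snippet above only defines f; a short sketch of actually applying it (each group's Series becomes a two-column DataFrame, so the result has more columns than the input) would be:
# Each per-group Series is expanded into 'original' and 'demeaned' columns.
grouped.apply(f)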
apply on a Series can operate on a returned value from the applied function,
that is itself a series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
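A minimal sketch of the required signature, assuming Numba is installed (the names below are illustrative, not from the original page):
import numpy as np

# The UDF receives the group's values and index as NumPy arrays and must be
# compilable by Numba, so stick to plain NumPy operations.
def numba_group_mean(values, index):
    return np.mean(values)

df.groupby("A")["C"].agg(numba_group_mean, engine="numba")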
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only of interest for one column (here colname), that column may be selected
before applying the aggregation function.
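Concretely, a minimal sketch of selecting the column first with the df above:
# Computes the standard deviation for column C only, rather than aggregating
# every column and then selecting C from the result.
df.groupby("A")["C"].std()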
Note
Any object-dtype column, even if it contains numerical values such as Decimal
objects, is considered a “nuisance” column. Such columns are excluded from
aggregation functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The dtype of the resulting grouped index will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
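A minimal sketch (not from the original page) of the exclusion:
import numpy as np
import pandas as pd

# The row whose key is NaN is silently dropped from the grouping.
vals = pd.Series([1, 2, 3])
keys = pd.Series(["x", np.nan, "x"])
vals.groupby(keys).sum()   # only an "x" group remains, containing 1 + 3 == 4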
Grouping with ordered factors#
Categorical variables represented as instances of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
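The example above uses keys that already appear in sorted order, so a hedged sketch (not from the original page) with unsorted keys makes the difference from first-observed order visible:
# 'b' is observed first, but group numbers follow the (sorted) iteration order,
# so 'a' is still numbered 0 and 'b' is numbered 1.
dfg2 = pd.DataFrame({"A": list("baab")})
dfg2.groupby("A").ngroup()   # -> 1, 0, 0, 1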
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group, in this case,
the values in column 1 where the group is “B” are higher by 3 on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resampling to work on indices that are non-datetimelike, the following procedure can be utilized.
In the following examples, df.index // 5 returns a binary array which is used to determine what gets selected for the groupby operation.
Note
The example below shows how we can downsample by consolidating samples into fewer samples. Here, by using df.index // 5, we are aggregating the samples in bins. By applying the std() function, we aggregate the information contained in many samples into a small subset of values, namely their standard deviation, thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 459
| 1,030
|
group 2 columns and based on the group value take the group based on specific value
My code:
data = pd.DataFrame({'a': [1,2,3,4,5,6,7,8],
'group': [1,1,1,1,2,2,2,2],
'check':[0.5, 0.5,0.5,0.3,0.3,0.3,0.2,0.2]})
output:
data.groupby(['group','check']).size()
group check
1 0.3 1
0.5 3
2 0.2 2
0.3 2
dtype: int64
I wish to get:
Since we have groups '1' and '2',
based on the above output, I wish to take only the second group, or any group above 1 (if we have more than 2 groups).
example output:
group check
2 0.2 2
0.3 2
dtype: int64
|
62,433,925
|
Python Pandas updating same named columns and also done other calculation while in loop
|
<p>In dataframe, I want to iterate over same named columns and while iterating, when their sum exceeds "val_n" value. I want 4 things:
1) exceed_when (at what iteration it exceed from "val_n" value)
2) sum_col (sum of same named columns)
3) At the point of exceed when, I want to replace corresponding col value as (col - (sum_col - val_n)
4) And after exceed_when point, I want to replace rest of cols value to 0.</p>
<p>Dataframe look like:</p>
<pre><code>id col1 col2 col3 col4 col5 col6 col7 col8 col9 col10 col11 col12 col13 col14 val_n
1 350 350 350 350 350 350 350 350 350 350 0 0 0 0 3105.61
2 50 50 55 105 50 0 50 100 50 50 50 50 1025 1066.86 3185.6
3 0 0 0 0 0 3495.1 0 0 0 0 0 0 0 3495.1 3477.76
</code></pre>
<p>Required Dataframe:</p>
<pre><code>id col1 col2 col3 col4 col5 col6 col7 col8 col9 col10 col11 col12 col13 col14 val_n exceed_when sum_col
1 350 350 350 350 350 350 350 350 305.61 0 0 0 0 0 3105.61 9 3500
2 50 50 55 105 50 0 50 100 50 50 50 50 1025 1066.86 3185.6 2751.86
3 0 0 0 0 0 3477.76 0 0 0 0 0 0 0 0 3477.76 6 6990.2
</code></pre>
<p>This is what I have tried:</p>
<pre><code>def trans(row):
row['sum_col'] = 0
row['exceed_ind'] = 0
for i in range(1, 15):
row['sum_col'] += row['col' + str(i)]
if ((row['exceed_ind'] == 0) &
(row['sum_col'] >= row['val_n'])):
row['exceed_ind'] = 1
row['exceed_when'] = i
else:
continue
if row['exceed_when'] == i:
row['col' + str(i)] = (
row['col' + str(i)] - (
row['sum_col'] - row['val_n']))
elif row['exceed_when'] < i:
row['col' + str(i)] = 0
else:
row['col' + str(i)] = row['col' + str(i)]
return row
df1 = df.apply(trans, axis=1)
</code></pre>
<p>I am getting the right results for sum_col and exceed_when, but the condition elif row['exceed_when'] < i doesn't seem to be working, and it does not apply the expected 4th point, i.e. replacing the rest of the cols' values with 0. I am NOT sure what I miss.</p>
<p>DDL to generate DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'id': [1, 2, 3],
'col1': [350, 50, 0],
'col2': [350, 50, 0],
'col3': [350, 55, 0],
'col4': [350, 105, 0],
'col5' : [350, 50, 0],
'col6': [350, 0, 3495.1],
'col7': [350, 50, 0],
'col8': [350, 100, 0],
'col9': [350, 50, 0],
'col10': [350, 50, 0],
'col11': [0, 50, 0],
'col12': [0, 50, 0],
'col13': [0, 1025, 0],
'col14': [0, 1066.86, 3495.1],
'val_n': [3105.61, 3185.6, 3477.76]
})
</code></pre>
<p>Thanks!</p>
| 62,457,485
| 2020-06-17T16:34:07.040000
| 1
| null | 0
| 26
|
python|pandas
|
<p>To my knowledge, the <code>.apply</code> function will only pass a copy of the <code>row</code> and all updates happen on the copy only, not the original <code>DataFrame</code> itself. In this case, you have to loop through the rows and update them using the index. </p>
<pre><code>df['sum_col'] = 0
df['exceed_ind'] = 0
df['exceed_when'] = 0
for idx, row in df.iterrows():
sum_col = 0
exceed_ind = 0
exceed_when = 0
for i in range(1, 15):
sum_col += row['col' + str(i)]
if ((exceed_ind == 0) &
(sum_col >= row['val_n'])):
exceed_ind = 1
exceed_when = i
df.loc[idx, 'exceed_ind'] = exceed_ind
df.loc[idx, 'exceed_when'] = exceed_when
df.loc[idx, 'col' + str(i)] = (row['col' + str(i)] - (sum_col - row['val_n']))
elif (exceed_ind==1) & (exceed_when < i):
df.loc[idx, 'col' + str(i)] = 0
df.loc[idx, 'sum_col'] = sum_col
print(df)
</code></pre>
<p>Result:</p>
<pre><code> col1 col2 col3 col4 col5 col6 col7 col8 col9 col10 col11 \
id
1 350 350 350 350 350 350.00 350 350 305.61 0 0
2 50 50 55 105 50 0.00 50 100 50.00 50 50
3 0 0 0 0 0 3477.76 0 0 0.00 0 0
col12 col13 col14 val_n sum_col exceed_ind exceed_when
id
1 0 0 0.00 3105.61 3500.00 1 9
2 50 1025 1066.86 3185.60 2751.86 0 0
3 0 0 0.00 3477.76 6990.20 1 6
</code></pre>
| 2020-06-18T18:59:47.133000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.update.html
|
To my knowledge, the .apply function will only pass a copy of the row and all updates happen on the copy only, not the original DataFrame itself. In this case, you have to loop through the rows and update them using the index.
df['sum_col'] = 0
df['exceed_ind'] = 0
df['exceed_when'] = 0
for idx, row in df.iterrows():
sum_col = 0
exceed_ind = 0
exceed_when = 0
for i in range(1, 15):
sum_col += row['col' + str(i)]
if ((exceed_ind == 0) &
(sum_col >= row['val_n'])):
exceed_ind = 1
exceed_when = i
df.loc[idx, 'exceed_ind'] = exceed_ind
df.loc[idx, 'exceed_when'] = exceed_when
df.loc[idx, 'col' + str(i)] = (row['col' + str(i)] - (sum_col - row['val_n']))
elif (exceed_ind==1) & (exceed_when < i):
df.loc[idx, 'col' + str(i)] = 0
df.loc[idx, 'sum_col'] = sum_col
print(df)
Result:
col1 col2 col3 col4 col5 col6 col7 col8 col9 col10 col11 \
id
1 350 350 350 350 350 350.00 350 350 305.61 0 0
2 50 50 55 105 50 0.00 50 100 50.00 50 50
3 0 0 0 0 0 3477.76 0 0 0.00 0 0
col12 col13 col14 val_n sum_col exceed_ind exceed_when
id
1 0 0 0.00 3105.61 3500.00 1 9
2 50 1025 1066.86 3185.60 2751.86 0 0
3 0 0 0.00 3477.76 6990.20 1 6
| 0
| 1,674
|
Python Pandas updating same named columns and also done other calculation while in loop
In dataframe, I want to iterate over same named columns and while iterating, when their sum exceeds "val_n" value. I want 4 things:
1) exceed_when (at what iteration it exceed from "val_n" value)
2) sum_col (sum of same named columns)
3) At the point of exceed when, I want to replace corresponding col value as (col - (sum_col - val_n)
4) And after exceed_when point, I want to replace rest of cols value to 0.
Dataframe look like:
id col1 col2 col3 col4 col5 col6 col7 col8 col9 col10 col11 col12 col13 col14 val_n
1 350 350 350 350 350 350 350 350 350 350 0 0 0 0 3105.61
2 50 50 55 105 50 0 50 100 50 50 50 50 1025 1066.86 3185.6
3 0 0 0 0 0 3495.1 0 0 0 0 0 0 0 3495.1 3477.76
Required Dataframe:
id col1 col2 col3 col4 col5 col6 col7 col8 col9 col10 col11 col12 col13 col14 val_n exceed_when sum_col
1 350 350 350 350 350 350 350 350 305.61 0 0 0 0 0 3105.61 9 3500
2 50 50 55 105 50 0 50 100 50 50 50 50 1025 1066.86 3185.6 2751.86
3 0 0 0 0 0 3477.76 0 0 0 0 0 0 0 0 3477.76 6 6990.2
This is what I have tried:
def trans(row):
row['sum_col'] = 0
row['exceed_ind'] = 0
for i in range(1, 15):
row['sum_col'] += row['col' + str(i)]
if ((row['exceed_ind'] == 0) &
(row['sum_col'] >= row['val_n'])):
row['exceed_ind'] = 1
row['exceed_when'] = i
else:
continue
if row['exceed_when'] == i:
row['col' + str(i)] = (
row['col' + str(i)] - (
row['sum_col'] - row['val_n']))
elif row['exceed_when'] < i:
row['col' + str(i)] = 0
else:
row['col' + str(i)] = row['col' + str(i)]
return row
df1 = df.apply(trans, axis=1)
I am getting the right results for sum_col and exceed_when, but the condition elif row['exceed_when'] < i doesn't seem to be working, and it does not apply the expected 4th point, i.e. replacing the rest of the cols' values with 0. I am NOT sure what I miss.
DDL to generate DataFrame:
import pandas as pd
df = pd.DataFrame({'id': [1, 2, 3],
'col1': [350, 50, 0],
'col2': [350, 50, 0],
'col3': [350, 55, 0],
'col4': [350, 105, 0],
'col5' : [350, 50, 0],
'col6': [350, 0, 3495.1],
'col7': [350, 50, 0],
'col8': [350, 100, 0],
'col9': [350, 50, 0],
'col10': [350, 50, 0],
'col11': [0, 50, 0],
'col12': [0, 50, 0],
'col13': [0, 1025, 0],
'col14': [0, 1066.86, 3495.1],
'val_n': [3105.61, 3185.6, 3477.76]
})
Thanks!
|
67,486,299
|
How to use Multiple conditional statement in python
|
<p>Having 2 columns where I have to update the third column based on a conditional statement between the 2 columns. How can I do this? I have tried, but my approach is not working.</p>
<p>We need to check for the condition if Col1 is having value but col2 is blank.</p>
<p><strong>Input Data:</strong></p>
<pre><code>col1 col2 col3
azb225 AS277
Dzb555
NZb777 NZb777
ZQS285
NBC605 NZ3385
</code></pre>
<p><strong>Output Expected:</strong></p>
<pre><code>col1 col2 col3
azb225 AS277 Available
Dzb555 Not Available
NZb777 NZb777 Available
ZQS285 Not Available
Available
NBC605 NZ3385 Available
</code></pre>
<p><strong>code i have been using :</strong></p>
<pre><code>df['col3']=df.apply(lambda x:'Not Available' if (x['col1'].notna().all(axis=1)) and (x['col2'].isna().all(axis=1)) else 'Available',1)
</code></pre>
<p>But the above code is not working in this case.</p>
<p>Please Suggest.</p>
| 67,486,360
| 2021-05-11T11:59:24.850000
| 1
| null | 1
| 26
|
python|pandas
|
<p>Use <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a>:</p>
<pre><code>#if empty strings instead missing values
df = df.replace('', np.nan)
print (df)
col1 col2
0 azb225 AS277
1 Dzb555 NaN
2 NZb777 NZb777
3 ZQS285 NaN
4 NaN NaN
5 NBC605 NZ3385
df['col3']= np.where(df['col1'].notna() & df['col2'].isna(), 'Not Available','Available')
print (df)
col1 col2 col3
0 azb225 AS277 Available
1 Dzb555 NaN Not Available
2 NZb777 NZb777 Available
3 ZQS285 NaN Not Available
4 NaN NaN Available
5 NBC605 NZ3385 Available
</code></pre>
| 2021-05-11T12:02:15.373000
| 1
|
https://pandas.pydata.org/docs/dev/getting_started/intro_tutorials/03_subset_data.html
|
How do I select a subset of a DataFrame?#
In [1]: import pandas as pd
Data used for this tutorial:
Titanic data
This tutorial uses the Titanic data set, stored as CSV. The data
consists of the following data columns:
PassengerId: Id of every passenger.
Survived: Indication whether passenger survived. 0 for yes and 1 for no.
Use numpy.where:
#if empty strings instead missing values
df = df.replace('', np.nan)
print (df)
col1 col2
0 azb225 AS277
1 Dzb555 NaN
2 NZb777 NZb777
3 ZQS285 NaN
4 NaN NaN
5 NBC605 NZ3385
df['col3']= np.where(df['col1'].notna() & df['col2'].isna(), 'Not Available','Available')
print (df)
col1 col2 col3
0 azb225 AS277 Available
1 Dzb555 NaN Not Available
2 NZb777 NZb777 Available
3 ZQS285 NaN Not Available
4 NaN NaN Available
5 NBC605 NZ3385 Available
Pclass: One out of the 3 ticket classes: Class 1, Class 2 and Class 3.
Name: Name of passenger.
Sex: Gender of passenger.
Age: Age of passenger in years.
SibSp: Number of siblings or spouses aboard.
Parch: Number of parents or children aboard.
Ticket: Ticket number of passenger.
Fare: Indicating the fare.
Cabin: Cabin number of passenger.
Embarked: Port of embarkation.
To raw data
In [2]: titanic = pd.read_csv("data/titanic.csv")
In [3]: titanic.head()
Out[3]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
1 2 1 1 ... 71.2833 C85 C
2 3 1 3 ... 7.9250 NaN S
3 4 1 1 ... 53.1000 C123 S
4 5 0 3 ... 8.0500 NaN S
[5 rows x 12 columns]
How do I select a subset of a DataFrame?#
How do I select specific columns from a DataFrame?#
I’m interested in the age of the Titanic passengers.
In [4]: ages = titanic["Age"]
In [5]: ages.head()
Out[5]:
0 22.0
1 38.0
2 26.0
3 35.0
4 35.0
Name: Age, dtype: float64
To select a single column, use square brackets [] with the column
name of the column of interest.
Each column in a DataFrame is a Series. As a single column is
selected, the returned object is a pandas Series. We can verify this
by checking the type of the output:
In [6]: type(titanic["Age"])
Out[6]: pandas.core.series.Series
And have a look at the shape of the output:
In [7]: titanic["Age"].shape
Out[7]: (891,)
DataFrame.shape is an attribute (remember tutorial on reading and writing, do not use parentheses for attributes) of a
pandas Series and DataFrame containing the number of rows and
columns: (nrows, ncolumns). A pandas Series is 1-dimensional and only
the number of rows is returned.
I’m interested in the age and sex of the Titanic passengers.
In [8]: age_sex = titanic[["Age", "Sex"]]
In [9]: age_sex.head()
Out[9]:
Age Sex
0 22.0 male
1 38.0 female
2 26.0 female
3 35.0 female
4 35.0 male
To select multiple columns, use a list of column names within the
selection brackets [].
Note
The inner square brackets define a
Python list with column names, whereas
the outer brackets are used to select the data from a pandas
DataFrame as seen in the previous example.
The returned data type is a pandas DataFrame:
In [10]: type(titanic[["Age", "Sex"]])
Out[10]: pandas.core.frame.DataFrame
In [11]: titanic[["Age", "Sex"]].shape
Out[11]: (891, 2)
The selection returned a DataFrame with 891 rows and 2 columns. Remember, a
DataFrame is 2-dimensional with both a row and column dimension.
To user guideFor basic information on indexing, see the user guide section on indexing and selecting data.
How do I filter specific rows from a DataFrame?#
I’m interested in the passengers older than 35 years.
In [12]: above_35 = titanic[titanic["Age"] > 35]
In [13]: above_35.head()
Out[13]:
PassengerId Survived Pclass ... Fare Cabin Embarked
1 2 1 1 ... 71.2833 C85 C
6 7 0 1 ... 51.8625 E46 S
11 12 1 1 ... 26.5500 C103 S
13 14 0 3 ... 31.2750 NaN S
15 16 1 2 ... 16.0000 NaN S
[5 rows x 12 columns]
To select rows based on a conditional expression, use a condition inside
the selection brackets [].
The condition inside the selection
brackets titanic["Age"] > 35 checks for which rows the Age
column has a value larger than 35:
In [14]: titanic["Age"] > 35
Out[14]:
0 False
1 True
2 False
3 False
4 False
...
886 False
887 False
888 False
889 False
890 False
Name: Age, Length: 891, dtype: bool
The output of the conditional expression (>, but also ==,
!=, <, <=,… would work) is actually a pandas Series of
boolean values (either True or False) with the same number of
rows as the original DataFrame. Such a Series of boolean values
can be used to filter the DataFrame by putting it in between the
selection brackets []. Only rows for which the value is True
will be selected.
We know from before that the original Titanic DataFrame consists of
891 rows. Let’s have a look at the number of rows which satisfy the
condition by checking the shape attribute of the resulting
DataFrame above_35:
In [15]: above_35.shape
Out[15]: (217, 12)
I’m interested in the Titanic passengers from cabin class 2 and 3.
In [16]: class_23 = titanic[titanic["Pclass"].isin([2, 3])]
In [17]: class_23.head()
Out[17]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
2 3 1 3 ... 7.9250 NaN S
4 5 0 3 ... 8.0500 NaN S
5 6 0 3 ... 8.4583 NaN Q
7 8 0 3 ... 21.0750 NaN S
[5 rows x 12 columns]
Similar to the conditional expression, the isin() conditional function
returns a True for each row the values are in the provided list. To
filter the rows based on such a function, use the conditional function
inside the selection brackets []. In this case, the condition inside
the selection brackets titanic["Pclass"].isin([2, 3]) checks for
which rows the Pclass column is either 2 or 3.
The above is equivalent to filtering by rows for which the class is
either 2 or 3 and combining the two statements with an | (or)
operator:
In [18]: class_23 = titanic[(titanic["Pclass"] == 2) | (titanic["Pclass"] == 3)]
In [19]: class_23.head()
Out[19]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
2 3 1 3 ... 7.9250 NaN S
4 5 0 3 ... 8.0500 NaN S
5 6 0 3 ... 8.4583 NaN Q
7 8 0 3 ... 21.0750 NaN S
[5 rows x 12 columns]
Note
When combining multiple conditional statements, each condition
must be surrounded by parentheses (). Moreover, you can not use
or/and but need to use the or operator | and the and
operator &.
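For instance (an illustrative sketch added here, not part of the original tutorial), combining an age condition with a class condition using the and operator:
# passengers older than 35 who travelled in first class
above_35_first_class = titanic[(titanic["Age"] > 35) & (titanic["Pclass"] == 1)]
above_35_first_class.head()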
To user guide: See the dedicated section in the user guide about boolean indexing or about the isin function.
I want to work with passenger data for which the age is known.
In [20]: age_no_na = titanic[titanic["Age"].notna()]
In [21]: age_no_na.head()
Out[21]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
1 2 1 1 ... 71.2833 C85 C
2 3 1 3 ... 7.9250 NaN S
3 4 1 1 ... 53.1000 C123 S
4 5 0 3 ... 8.0500 NaN S
[5 rows x 12 columns]
The notna() conditional function returns True for each row whose
values are not null. As such, this can be combined with the
selection brackets [] to filter the data table.
You might wonder what actually changed, as the first 5 lines are still
the same values. One way to verify is to check if the shape has changed:
In [22]: age_no_na.shape
Out[22]: (714, 12)
To user guide: For more dedicated functions on missing values, see the user guide section about handling missing data.
How do I select specific rows and columns from a DataFrame?#
I’m interested in the names of the passengers older than 35 years.
In [23]: adult_names = titanic.loc[titanic["Age"] > 35, "Name"]
In [24]: adult_names.head()
Out[24]:
1 Cumings, Mrs. John Bradley (Florence Briggs Th...
6 McCarthy, Mr. Timothy J
11 Bonnell, Miss. Elizabeth
13 Andersson, Mr. Anders Johan
15 Hewlett, Mrs. (Mary D Kingcome)
Name: Name, dtype: object
In this case, a subset of both rows and columns is made in one go and
just using selection brackets [] is not sufficient anymore. The
loc/iloc operators are required in front of the selection
brackets []. When using loc/iloc, the part before the comma
is the rows you want, and the part after the comma is the columns you
want to select.
When using the column names, row labels or a condition expression, use
the loc operator in front of the selection brackets []. For both
the part before and after the comma, you can use a single label, a list
of labels, a slice of labels, a conditional expression or a colon. Using
a colon specifies you want to select all rows or columns.
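As an illustrative sketch (not part of the original tutorial), the colon works on either side of the comma:
# all columns for the rows matching the condition
titanic.loc[titanic["Age"] > 35, :]
# all rows, but only the Name and Age columns
titanic.loc[:, ["Name", "Age"]]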
I’m interested in rows 10 till 25 and columns 3 to 5.
In [25]: titanic.iloc[9:25, 2:5]
Out[25]:
Pclass Name Sex
9 2 Nasser, Mrs. Nicholas (Adele Achem) female
10 3 Sandstrom, Miss. Marguerite Rut female
11 1 Bonnell, Miss. Elizabeth female
12 3 Saundercock, Mr. William Henry male
13 3 Andersson, Mr. Anders Johan male
.. ... ... ...
20 2 Fynney, Mr. Joseph J male
21 2 Beesley, Mr. Lawrence male
22 3 McGowan, Miss. Anna "Annie" female
23 1 Sloper, Mr. William Thompson male
24 3 Palsson, Miss. Torborg Danira female
[16 rows x 3 columns]
Again, a subset of both rows and columns is made in one go and just
using selection brackets [] is not sufficient anymore. When
specifically interested in certain rows and/or columns based on their
position in the table, use the iloc operator in front of the
selection brackets [].
When selecting specific rows and/or columns with loc or iloc,
new values can be assigned to the selected data. For example, to assign
the name anonymous to the first 3 elements of the third column:
In [26]: titanic.iloc[0:3, 3] = "anonymous"
In [27]: titanic.head()
Out[27]:
PassengerId Survived Pclass ... Fare Cabin Embarked
0 1 0 3 ... 7.2500 NaN S
1 2 1 1 ... 71.2833 C85 C
2 3 1 3 ... 7.9250 NaN S
3 4 1 1 ... 53.1000 C123 S
4 5 0 3 ... 8.0500 NaN S
[5 rows x 12 columns]
To user guide: See the user guide section on different choices for indexing to get more insight into the usage of loc and iloc.
REMEMBER
When selecting subsets of data, square brackets [] are used.
Inside these brackets, you can use a single column/row label, a list
of column/row labels, a slice of labels, a conditional expression or
a colon.
Select specific rows and/or columns using loc when using the row
and column names.
Select specific rows and/or columns using iloc when using the
positions in the table.
You can assign new values to a selection based on loc/iloc.
To user guide: A full overview of indexing is provided in the user guide pages on indexing and selecting data.
| 340
| 897
|
How to use multiple conditional statements in Python
I have two columns and need to update a third column based on a condition between them. I tried the code below, but it is not working.
The condition to check is whether col1 has a value while col2 is blank.
Input Data:
col1 col2 col3
azb225 AS277
Dzb555
NZb777 NZb777
ZQS285
NBC605 NZ3385
Output Expected:
col1 col2 col3
azb225 AS277 Available
Dzb555 Not Available
NZb777 NZb777 Available
ZQS285 Not Available
Available
NBC605 NZ3385 Available
Code I have been using:
df['col3']=df.apply(lambda x:'Not Available' if (x['col1'].notna().all(axis=1)) and (x['col2'].isna().all(axis=1)) else 'Available',1)
But the above code does not work in this case.
Please suggest.
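A possible approach (a sketch, not taken from the original post; it assumes the blanks are NaN — if they are empty strings, convert them first) uses numpy.where on the two conditions:
import numpy as np
df = df.replace("", np.nan)  # only needed if the blanks are empty strings rather than NaN
df["col3"] = np.where(df["col1"].notna() & df["col2"].isna(), "Not Available", "Available")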
|
66,165,833
|
Setting DataFrame value using a datetime as index
|
<p>I have two data frames, one with 3 rows and 4 columns + date as index dataframeA</p>
<pre><code> TYPE UNIT PRICE PERCENT
2010-01-05 REDUCE CAR 2300.00 3.0
2010-06-03 INCREASE BOAT 1000.00 2.0
2010-07-01 INCREASE CAR 3500.00 3.0
</code></pre>
<p>and another empty one with 100's of dates as index and two columns dataframeB</p>
<pre><code> CAR BOAT
2010-01-01 Nan 0.0
2010-01-02 Nan 0.0
2010-01-03 Nan 0.0
2010-01-04 Nan 0.0
2010-01-05 -69.00 0.0
.....
2010-06-03 Nan 20.00
...
2010-07-01 105.00 0.0
</code></pre>
<p>I need to read each row from the first data frame, find the corresponding date, and, based on the unit type, assign the corresponding percentage increase or reduction in the second data frame.</p>
<p>I have read that iterating over dataframes should be avoided, but I am not sure how else to do this. How can I evaluate each row and then set the value in dataframeB?</p>
<p>I tried doing the following :</p>
<pre><code>for index, row in dataframeA.iterrows():
type = row['TYPE']
unit = row['UNIT']
price = row['PRICE']
percent = row['PERCENT']
then here with basic math come up with the reduction or
increase and assign to dataframeB do the same for the others
</code></pre>
<p>My question is: is this the right approach, and how do I assign the value I compute to the other dataframeB?</p>
| 66,166,336
| 2021-02-12T03:01:00.567000
| 1
| null | 0
| 27
|
python|pandas
|
<p>If your first dataframe is limited to just the variables stated, you can do this. Not terribly elegant, but works. If you have many more combinations in the dataframe, it'd have to be rethought. See comments inline.</p>
<pre><code>df = pd.read_csv(io.StringIO(''' date TYPE UNIT PRICE PERCENT
2010-01-05 REDUCE CAR 2300.00 3.0
2010-06-03 INCREASE BOAT 1000.00 2.0
2010-07-01 INCREASE CAR 3500.00 3.0'''), sep='\s+', engine='python').set_index('date')
df1 = pd.read_csv(io.StringIO('''date
2010-01-01
2010-01-02
2010-01-03
2010-01-04
2010-01-05
2010-06-03
2010-07-01'''), engine='python').set_index('date')
# calculate your changes in first dataframe
df.loc[df.TYPE == 'REDUCE', 'Change'] = - df['PRICE'] * df['PERCENT'] / 100
df.loc[df.TYPE == 'INCREASE', 'Change'] = df['PRICE'] * df['PERCENT'] / 100
#merge the Changes into car and boat dataframes; rename columns
df_car = df[['Change']].loc[df.UNIT == 'CAR'].merge(df1, right_index=True, left_index=True, how='right')
df_car.rename(columns={'Change':'Car'}, inplace=True)
df_boat = df[['Change']].loc[df.UNIT == 'BOAT'].merge(df1, right_index=True, left_index=True, how='right')
df_boat.rename(columns={'Change':'Boat'}, inplace=True)
# merge car and boat
dfnew = df_car.merge(df_boat, right_index=True, left_index=True, how='right')
dfnew
Car Boat
date
2010-01-01 NaN NaN
2010-01-02 NaN NaN
2010-01-03 NaN NaN
2010-01-04 NaN NaN
2010-01-05 -69.000 NaN
2010-06-03 NaN 20.000
2010-07-01 105.000 NaN
</code></pre>
| 2021-02-12T04:25:21.803000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.DatetimeIndex.html
|
If your first dataframe is limited to just the variables stated, you can do this. Not terribly elegant, but works. If you have many more combinations in the dataframe, it'd have to be rethought. See comments inline.
df = pd.read_csv(io.StringIO(''' date TYPE UNIT PRICE PERCENT
2010-01-05 REDUCE CAR 2300.00 3.0
2010-06-03 INCREASE BOAT 1000.00 2.0
2010-07-01 INCREASE CAR 3500.00 3.0'''), sep='\s+', engine='python').set_index('date')
df1 = pd.read_csv(io.StringIO('''date
2010-01-01
2010-01-02
2010-01-03
2010-01-04
2010-01-05
2010-06-03
2010-07-01'''), engine='python').set_index('date')
# calculate your changes in first dataframe
df.loc[df.TYPE == 'REDUCE', 'Change'] = - df['PRICE'] * df['PERCENT'] / 100
df.loc[df.TYPE == 'INCREASE', 'Change'] = df['PRICE'] * df['PERCENT'] / 100
#merge the Changes into car and boat dataframes; rename columns
df_car = df[['Change']].loc[df.UNIT == 'CAR'].merge(df1, right_index=True, left_index=True, how='right')
df_car.rename(columns={'Change':'Car'}, inplace=True)
df_boat = df[['Change']].loc[df.UNIT == 'BOAT'].merge(df1, right_index=True, left_index=True, how='right')
df_boat.rename(columns={'Change':'Boat'}, inplace=True)
# merge car and boat
dfnew = df_car.merge(df_boat, right_index=True, left_index=True, how='right')
dfnew
Car Boat
date
2010-01-01 NaN NaN
2010-01-02 NaN NaN
2010-01-03 NaN NaN
2010-01-04 NaN NaN
2010-01-05 -69.000 NaN
2010-06-03 NaN 20.000
2010-07-01 105.000 NaN
| 0
| 1,519
|
Setting DataFrame value using a datetime as index
I have two data frames, one with 3 rows and 4 columns + date as index dataframeA
TYPE UNIT PRICE PERCENT
2010-01-05 REDUCE CAR 2300.00 3.0
2010-06-03 INCREASE BOAT 1000.00 2.0
2010-07-01 INCREASE CAR 3500.00 3.0
and another empty one with 100's of dates as index and two columns dataframeB
CAR BOAT
2010-01-01 Nan 0.0
2010-01-02 Nan 0.0
2010-01-03 Nan 0.0
2010-01-04 Nan 0.0
2010-01-05 -69.00 0.0
.....
2010-06-03 Nan 20.00
...
2010-07-01 105.00 0.0
I need to read each row from the first data frame, find the corresponding date, and, based on the unit type, assign the corresponding percentage increase or reduction in the second data frame.
I have read that iterating over dataframes should be avoided, but I am not sure how else to do this. How can I evaluate each row and then set the value in dataframeB?
I tried doing the following :
for index, row in dataframeA.iterrows():
type = row['TYPE']
unit = row['UNIT']
price = row['PRICE']
percent = row['PERCENT']
then here with basic math come up with the reduction or
increase and assign to dataframeB do the same for the others
My question is: is this the right approach, and how do I assign the value I compute to the other dataframeB?
|
69,801,959
|
Pandas dataframe - sort and shift within a group
|
<p>I have a pandas dataframe that looks like below</p>
<p><a href="https://i.stack.imgur.com/NEylR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NEylR.png" alt="enter image description here" /></a></p>
<p>This dataframe is already grouped by the three columns <code>O</code>, <code>A</code>, <code>N</code>, but as you can see it is NOT sorted by the <code>time</code> column.</p>
<p>My goal is to sort it by the <code>time</code> column while maintaining the groupby of <code>O</code>, <code>A</code>, <code>N</code>, and then do a <code>shift(-1)</code> operation on the <code>value</code> column to create a <code>value_next</code> column.</p>
<p>The output should look like below (<code>NaN</code> is imputed with <code>-1</code> for demonstration).</p>
<p><a href="https://i.stack.imgur.com/W5Ple.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W5Ple.png" alt="enter image description here" /></a></p>
<p>I did below:</p>
<pre><code>import pandas as pd
# Initialize data to lists.
data = [{'time': 10, 'O': 1, 'A': 2, 'N':3, 'value': 10},
{'time': 7, 'O': 1, 'A': 2, 'N':3, 'value': 11},
{'time': 15, 'O': 1, 'A': 2, 'N':3, 'value': 12},
{'time': 11, 'O': 2, 'A': 2, 'N':3, 'value': 20},
{'time': 12, 'O': 2, 'A': 2, 'N':3, 'value': 21},
{'time': 1, 'O': 2, 'A': 2, 'N':3, 'value': 25}]
# Creates DataFrame.
df = pd.DataFrame(data)
#sorting
df.sort_values(by=['O', 'A', 'N', 'time'], ascending=[True, True, True, True])
#shift
df['value_next'] = df.groupby(['O', 'A', 'N'])['value'].shift(-1)
</code></pre>
<p>This generates output below which is different than the expected. What am I missing?</p>
<p><a href="https://i.stack.imgur.com/KV9V3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KV9V3.png" alt="enter image description here" /></a></p>
<p>Please suggest.</p>
| 69,802,053
| 2021-11-01T19:36:45.497000
| 1
| null | 0
| 283
|
python|pandas
|
<p><code>sort_values</code> is not an inplace operation by default. Either pass <code>inplace=True</code></p>
<pre><code>df.sort_values(['O','A', 'N', 'time'], inplace=True)
# other operations
</code></pre>
<p>or reassign:</p>
<pre><code>df = df.sort_values(...)
# other operations
</code></pre>
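<p>A complete hedged sketch (assuming the column names from the question) that puts the sort and the shift together might look like this:</p>
<pre><code># reassign the result; sort_values does not modify df in place by default
df = df.sort_values(['O', 'A', 'N', 'time'])
df['value_next'] = df.groupby(['O', 'A', 'N'])['value'].shift(-1)
# optional: show the missing last value of each group as -1, as in the expected output
df['value_next'] = df['value_next'].fillna(-1)
</code></pre>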
| 2021-11-01T19:45:48.460000
| 1
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
sort_values is not an inplace operation by default. Either pass inplace=True
df.sort_values(['O','A', 'N', 'time'], inplace=True)
# other operations
or reassign:
df = df.sort_values(...)
# other operations
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures are generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping (a short sketch follows this list).
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
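As an illustrative sketch (the frame and labels here are made up, not from the guide, and the usual numpy/pandas imports are assumed), an aligned array and a dict keyed by index labels both work as grouping keys:
df_letters = pd.DataFrame({"value": [1, 2, 3, 4]}, index=["a", "b", "c", "d"])
# a NumPy array (or list) of the same length as the selected axis
key = np.array(["x", "y", "x", "y"])
df_letters.groupby(key).sum()
# a dict mapping index labels to group names
mapping = {"a": "vowel", "b": "consonant", "c": "consonant", "d": "consonant"}
df_letters.groupby(mapping).sum()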
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
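For instance (an illustrative sketch, not part of the guide), the mapping function receives each index label and returns the group it belongs to:
s_fruit = pd.Series([1, 2, 3, 4], index=["apple", "avocado", "banana", "blueberry"])
# groups the labels starting with "a" together and those starting with "b" together
s_fruit.groupby(lambda label: label[0]).sum()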
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of dropna argument is True which means NA are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function
Description
mean()
Compute mean of groups
sum()
Compute sum of group values
size()
Compute group sizes
count()
Compute count of group
std()
Standard deviation of groups
var()
Compute variance of groups
sem()
Standard error of the mean of groups
describe()
Generates descriptive statistics
first()
Compute first of group values
last()
Compute last of group values
nth()
Take nth value, or a subset if n is a list
min()
Compute min of group values
max()
Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work,
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
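For instance (an illustrative sketch, not from the guide; trimmed_mean is a made-up helper), the extra argument can be bound with functools.partial before handing the function to agg:
import functools
def trimmed_mean(ser, frac):
    # drop values outside the [frac, 1 - frac] quantile band before averaging
    lo, hi = ser.quantile([frac, 1 - frac])
    return ser[(ser >= lo) & (ser <= hi)].mean()
animals.groupby("kind").agg(
    trimmed_weight=("weight", functools.partial(trimmed_mean, frac=0.1)),
)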
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, but returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed objects where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on the passed function and exactly what you are grouping. Thus the
grouped column(s) may be included in the output as well as set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
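The invocation itself is not shown in the excerpt above; presumably it would look like the line below, returning a frame indexed like the original with an original and a demeaned column:
grouped.apply(f)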
apply on a Series can operate on a returned value from the applied function,
that is itself a series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
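A minimal sketch (assuming the optional Numba dependency is installed; group_mean is a made-up name) of a UDF that follows this signature:
def group_mean(values, index):  # ``values`` is the group's data as a NumPy array
    total = 0.0
    for v in values:
        total += v
    return total / len(values)
df.groupby("A")["C"].agg(group_mean, engine="numba")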
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only interesting over one column (here colname), it may be filtered
before applying the aggregation function.
Note
Any object column, also if it contains numerical values such as Decimal
objects, is considered as a “nuisance” columns. They are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible grouper values (observed=False) or only those
that are observed (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The dtype of the returned group index will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
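A minimal sketch with made-up data: the row whose key is NaN simply drops out of the result (newer pandas versions also offer dropna=False on groupby to keep an NA group, as described in the GroupBy dropna section of this guide).
import numpy as np
import pandas as pd

s_na = pd.Series([1, 2, 3])
keys = pd.Series(["x", np.nan, "x"])
s_na.groupby(keys).sum()
# only the "x" group (1 + 3 = 4) appears; the NaN-keyed row is excluded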
Grouping with ordered factors#
Categorical variables represented as instances of pandas' Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
You have an ambiguous specification in that you have a named index and a column
that could be potential groupers.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped row.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group; in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resampling to work on indices that are not datetime-like, the following procedure can be utilized.
In the following examples, df.index // 5 returns an integer array which is used to determine which rows are selected for the groupby operation.
Note
The example below shows how we can downsample by consolidating samples into fewer samples. Here, by using df.index // 5, we are aggregating the samples in bins. By applying the std() function, we condense the information contained in many samples into a small subset of values (their standard deviation), thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 793
| 1,000
|
Pandas dataframe - sort and shift within a group
I have a pandas dataframe that looks like below
This dataframe is already grouped by the three columns O, A, N but, as you see, it is NOT sorted by the time column.
My goal is to sort it by the time column while maintaining the groupby on O, A, N, and then do a shift(-1) operation on the value column to create a value_next observation.
The output should look like below (NaN is imputed with -1 for demonstration).
I did below:
import pandas as pd
# Initialize data to lists.
data = [{'time': 10, 'O': 1, 'A': 2, 'N':3, 'value': 10},
{'time': 7, 'O': 1, 'A': 2, 'N':3, 'value': 11},
{'time': 15, 'O': 1, 'A': 2, 'N':3, 'value': 12},
{'time': 11, 'O': 2, 'A': 2, 'N':3, 'value': 20},
{'time': 12, 'O': 2, 'A': 2, 'N':3, 'value': 21},
{'time': 1, 'O': 2, 'A': 2, 'N':3, 'value': 25}]
# Creates DataFrame.
df = pd.DataFrame(data)
#sorting
df.sort_values(by=['O', 'A', 'N', 'time'], ascending=[True, True, True, True])
#shift
df['value_next'] = df.groupby(['O', 'A', 'N'])['value'].shift(-1)
This generates the output below, which is different from what I expected. What am I missing?
Please suggest.
|
64,206,194
|
Append value to list inside a column manipulates all rows instead of one
|
I have the following Dataframe:
                  text values
0               a text     []
1         another text     []
2       some more text     []
3  and again some text     []
I want to append items to a specific list by index. For example I want to add "value" to the first row.
However when I do df.iloc[0]['values'].append("value"), "value" is added to every list in the column values:
                  text     values
0               a text  ["value"]
1         another text  ["value"]
2       some more text  ["value"]
3  and again some text  ["value"]
I also tried df['values'].iloc[0].append("value"), same result. Any idea what am I doing wrong?
| 64,206,436
| 2020-10-05T09:45:40.340000
| 1
| null | 1
| 28
|
python|pandas
|
This is probably due to the fact that values within the 'values' column always refer to the same object. Look at the following example:
import pandas as pd
lst = []
df = pd.DataFrame({'values': [[] for i in range(5)]})
df2 = pd.DataFrame({'values': [lst for i in range(5)]})
df.iloc[0]['values'].append(3)
df2.iloc[0]['values'].append(3)
Let's now print the content of these two dataframes:
>>> df
  values
0    [3]
1     []
2     []
3     []
4     []
>>> df2
  values
0    [3]
1    [3]
2    [3]
3    [3]
4    [3]
If I was you I would dig into your code and check if those values always refer to the same object.
| 2020-10-05T10:01:39.830000
| 1
|
https://pandas.pydata.org/docs/user_guide/groupby.html
|
Group by: split-apply-combine#
Group by: split-apply-combine#
By “group by” we are referring to a process involving one or more of the following
steps:
Splitting the data into groups based on some criteria.
Applying a function to each group independently.
Combining the results into a data structure.
Out of these, the split step is the most straightforward. In fact, in many
situations we may wish to split the data set into groups and do something with
those groups. In the apply step, we might wish to do one of the
following:
Aggregation: compute a summary statistic (or statistics) for each
This is probably due to the fact that values within the 'values' column always refer to the same object. Look at the following example:
import pandas as pd
lst = []
df = pd.DataFrame({'values': [[] for i in range(5)]})
df2 = pd.DataFrame({'values': [lst for i in range(5)]})
df.iloc[0]['values'].append(3)
df2.iloc[0]['values'].append(3)
Let's now print the content of these two dataframes:
>>> df
values
0 [3]
1 []
2 []
3 []
4 []
>>> df2
values
0 [3]
1 [3]
2 [3]
3 [3]
4 [3]
If I was you I would dig into your code and check if those values always refer to the same object.
group. Some examples:
Compute group sums or means.
Compute group sizes / counts.
Transformation: perform some group-specific computations and return a
like-indexed object. Some examples:
Standardize data (zscore) within a group.
Filling NAs within groups with a value derived from each group.
Filtration: discard some groups, according to a group-wise computation
that evaluates True or False. Some examples:
Discard data that belongs to groups with only a few members.
Filter out data based on the group sum or mean.
Some combination of the above: GroupBy will examine the results of the apply
step and try to return a sensibly combined result if it doesn’t fit into
either of the above two categories.
Since the set of object instance methods on pandas data structures is generally
rich and expressive, we often simply want to invoke, say, a DataFrame function
on each group. The name GroupBy should be quite familiar to those who have used
a SQL-based tool (or itertools), in which you can write code like:
SELECT Column1, Column2, mean(Column3), sum(Column4)
FROM SomeTable
GROUP BY Column1, Column2
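As a rough sketch of the pandas counterpart to that SQL (the small frame below stands in for the hypothetical SomeTable):
import pandas as pd

some_table = pd.DataFrame(
    {
        "Column1": ["x", "x", "y"],
        "Column2": ["u", "u", "v"],
        "Column3": [1.0, 3.0, 5.0],
        "Column4": [10, 20, 30],
    }
)
some_table.groupby(["Column1", "Column2"]).agg(
    mean_col3=("Column3", "mean"),
    sum_col4=("Column4", "sum"),
)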
We aim to make operations like this natural and easy to express using
pandas. We’ll address each area of GroupBy functionality then provide some
non-trivial examples / use cases.
See the cookbook for some advanced strategies.
Splitting an object into groups#
pandas objects can be split on any of their axes. The abstract definition of
grouping is to provide a mapping of labels to group names. To create a GroupBy
object (more on what the GroupBy object is later), you may do the following:
In [1]: df = pd.DataFrame(
...: [
...: ("bird", "Falconiformes", 389.0),
...: ("bird", "Psittaciformes", 24.0),
...: ("mammal", "Carnivora", 80.2),
...: ("mammal", "Primates", np.nan),
...: ("mammal", "Carnivora", 58),
...: ],
...: index=["falcon", "parrot", "lion", "monkey", "leopard"],
...: columns=("class", "order", "max_speed"),
...: )
...:
In [2]: df
Out[2]:
class order max_speed
falcon bird Falconiformes 389.0
parrot bird Psittaciformes 24.0
lion mammal Carnivora 80.2
monkey mammal Primates NaN
leopard mammal Carnivora 58.0
# default is axis=0
In [3]: grouped = df.groupby("class")
In [4]: grouped = df.groupby("order", axis="columns")
In [5]: grouped = df.groupby(["class", "order"])
The mapping can be specified many different ways:
A Python function, to be called on each of the axis labels.
A list or NumPy array of the same length as the selected axis.
A dict or Series, providing a label -> group name mapping.
For DataFrame objects, a string indicating either a column name or
an index level name to be used to group.
df.groupby('A') is just syntactic sugar for df.groupby(df['A']).
A list of any of the above things.
Collectively we refer to the grouping objects as the keys. For example,
consider the following DataFrame:
Note
A string passed to groupby may refer to either a column or an index level.
If a string matches both a column name and an index level name, a
ValueError will be raised.
In [6]: df = pd.DataFrame(
...: {
...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
...: "C": np.random.randn(8),
...: "D": np.random.randn(8),
...: }
...: )
...:
In [7]: df
Out[7]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
On a DataFrame, we obtain a GroupBy object by calling groupby().
We could naturally group by either the A or B columns, or both:
In [8]: grouped = df.groupby("A")
In [9]: grouped = df.groupby(["A", "B"])
If we also have a MultiIndex on columns A and B, we can group by all
but the specified columns
In [10]: df2 = df.set_index(["A", "B"])
In [11]: grouped = df2.groupby(level=df2.index.names.difference(["B"]))
In [12]: grouped.sum()
Out[12]:
C D
A
bar -1.591710 -1.739537
foo -0.752861 -1.402938
These will split the DataFrame on its index (rows). We could also split by the
columns:
In [13]: def get_letter_type(letter):
....: if letter.lower() in 'aeiou':
....: return 'vowel'
....: else:
....: return 'consonant'
....:
In [14]: grouped = df.groupby(get_letter_type, axis=1)
pandas Index objects support duplicate values. If a
non-unique index is used as the group key in a groupby operation, all values
for the same index value will be considered to be in one group and thus the
output of aggregation functions will only contain unique index values:
In [15]: lst = [1, 2, 3, 1, 2, 3]
In [16]: s = pd.Series([1, 2, 3, 10, 20, 30], lst)
In [17]: grouped = s.groupby(level=0)
In [18]: grouped.first()
Out[18]:
1 1
2 2
3 3
dtype: int64
In [19]: grouped.last()
Out[19]:
1 10
2 20
3 30
dtype: int64
In [20]: grouped.sum()
Out[20]:
1 11
2 22
3 33
dtype: int64
Note that no splitting occurs until it’s needed. Creating the GroupBy object
only verifies that you’ve passed a valid mapping.
Note
Many kinds of complicated data manipulations can be expressed in terms of
GroupBy operations (though can’t be guaranteed to be the most
efficient). You can get quite creative with the label mapping functions.
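For instance, a label-mapping function can derive the group from each axis label itself; a small made-up sketch (s_lbl is hypothetical):
import pandas as pd

s_lbl = pd.Series([1, 2, 3, 4], index=["apple", "avocado", "banana", "blueberry"])
# group index labels by their first letter
s_lbl.groupby(lambda label: label[0]).sum()
# a    3
# b    7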
GroupBy sorting#
By default the group keys are sorted during the groupby operation. You may however pass sort=False for potential speedups:
In [21]: df2 = pd.DataFrame({"X": ["B", "B", "A", "A"], "Y": [1, 2, 3, 4]})
In [22]: df2.groupby(["X"]).sum()
Out[22]:
Y
X
A 7
B 3
In [23]: df2.groupby(["X"], sort=False).sum()
Out[23]:
Y
X
B 3
A 7
Note that groupby will preserve the order in which observations are sorted within each group.
For example, the groups created by groupby() below are in the order they appeared in the original DataFrame:
In [24]: df3 = pd.DataFrame({"X": ["A", "B", "A", "B"], "Y": [1, 4, 3, 2]})
In [25]: df3.groupby(["X"]).get_group("A")
Out[25]:
X Y
0 A 1
2 A 3
In [26]: df3.groupby(["X"]).get_group("B")
Out[26]:
X Y
1 B 4
3 B 2
New in version 1.1.0.
GroupBy dropna#
By default NA values are excluded from group keys during the groupby operation. However,
in case you want to include NA values in group keys, you could pass dropna=False to achieve it.
In [27]: df_list = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
In [28]: df_dropna = pd.DataFrame(df_list, columns=["a", "b", "c"])
In [29]: df_dropna
Out[29]:
a b c
0 1 2.0 3
1 1 NaN 4
2 2 1.0 3
3 1 2.0 2
# Default ``dropna`` is set to True, which will exclude NaNs in keys
In [30]: df_dropna.groupby(by=["b"], dropna=True).sum()
Out[30]:
a c
b
1.0 2 3
2.0 2 5
# In order to allow NaN in keys, set ``dropna`` to False
In [31]: df_dropna.groupby(by=["b"], dropna=False).sum()
Out[31]:
a c
b
1.0 2 3
2.0 2 5
NaN 1 4
The default setting of the dropna argument is True, which means NA values are not included in group keys.
GroupBy object attributes#
The groups attribute is a dict whose keys are the computed unique groups
and corresponding values being the axis labels belonging to each group. In the
above example we have:
In [32]: df.groupby("A").groups
Out[32]: {'bar': [1, 3, 5], 'foo': [0, 2, 4, 6, 7]}
In [33]: df.groupby(get_letter_type, axis=1).groups
Out[33]: {'consonant': ['B', 'C', 'D'], 'vowel': ['A']}
Calling the standard Python len function on the GroupBy object just returns
the length of the groups dict, so it is largely just a convenience:
In [34]: grouped = df.groupby(["A", "B"])
In [35]: grouped.groups
Out[35]: {('bar', 'one'): [1], ('bar', 'three'): [3], ('bar', 'two'): [5], ('foo', 'one'): [0, 6], ('foo', 'three'): [7], ('foo', 'two'): [2, 4]}
In [36]: len(grouped)
Out[36]: 6
GroupBy will tab complete column names (and other attributes):
In [37]: df
Out[37]:
height weight gender
2000-01-01 42.849980 157.500553 male
2000-01-02 49.607315 177.340407 male
2000-01-03 56.293531 171.524640 male
2000-01-04 48.421077 144.251986 female
2000-01-05 46.556882 152.526206 male
2000-01-06 68.448851 168.272968 female
2000-01-07 70.757698 136.431469 male
2000-01-08 58.909500 176.499753 female
2000-01-09 76.435631 174.094104 female
2000-01-10 45.306120 177.540920 male
In [38]: gb = df.groupby("gender")
In [39]: gb.<TAB> # noqa: E225, E999
gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
GroupBy with MultiIndex#
With hierarchically-indexed data, it’s quite
natural to group by one of the levels of the hierarchy.
Let’s create a Series with a two-level MultiIndex.
In [40]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [41]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [42]: s = pd.Series(np.random.randn(8), index=index)
In [43]: s
Out[43]:
first second
bar one -0.919854
two -0.042379
baz one 1.247642
two -0.009920
foo one 0.290213
two 0.495767
qux one 0.362949
two 1.548106
dtype: float64
We can then group by one of the levels in s.
In [44]: grouped = s.groupby(level=0)
In [45]: grouped.sum()
Out[45]:
first
bar -0.962232
baz 1.237723
foo 0.785980
qux 1.911055
dtype: float64
If the MultiIndex has names specified, these can be passed instead of the level
number:
In [46]: s.groupby(level="second").sum()
Out[46]:
second
one 0.980950
two 1.991575
dtype: float64
Grouping with multiple levels is supported.
In [47]: s
Out[47]:
first second third
bar doo one -1.131345
two -0.089329
baz bee one 0.337863
two -0.945867
foo bop one -0.932132
two 1.956030
qux bop one 0.017587
two -0.016692
dtype: float64
In [48]: s.groupby(level=["first", "second"]).sum()
Out[48]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
Index level names may be supplied as keys.
In [49]: s.groupby(["first", "second"]).sum()
Out[49]:
first second
bar doo -1.220674
baz bee -0.608004
foo bop 1.023898
qux bop 0.000895
dtype: float64
More on the sum function and aggregation later.
Grouping DataFrame with Index levels and columns#
A DataFrame may be grouped by a combination of columns and index levels by
specifying the column names as strings and the index levels as pd.Grouper
objects.
In [50]: arrays = [
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....:
In [51]: index = pd.MultiIndex.from_arrays(arrays, names=["first", "second"])
In [52]: df = pd.DataFrame({"A": [1, 1, 1, 1, 2, 2, 3, 3], "B": np.arange(8)}, index=index)
In [53]: df
Out[53]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
The following example groups df by the second index level and
the A column.
In [54]: df.groupby([pd.Grouper(level=1), "A"]).sum()
Out[54]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index levels may also be specified by name.
In [55]: df.groupby([pd.Grouper(level="second"), "A"]).sum()
Out[55]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
Index level names may be specified as keys directly to groupby.
In [56]: df.groupby(["second", "A"]).sum()
Out[56]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
DataFrame column selection in GroupBy#
Once you have created the GroupBy object from a DataFrame, you might want to do
something different for each of the columns. Thus, using [] similar to
getting a column from a DataFrame, you can do:
In [57]: df = pd.DataFrame(
....: {
....: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"],
....: "B": ["one", "one", "two", "three", "two", "two", "one", "three"],
....: "C": np.random.randn(8),
....: "D": np.random.randn(8),
....: }
....: )
....:
In [58]: df
Out[58]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [59]: grouped = df.groupby(["A"])
In [60]: grouped_C = grouped["C"]
In [61]: grouped_D = grouped["D"]
This is mainly syntactic sugar for the alternative and much more verbose:
In [62]: df["C"].groupby(df["A"])
Out[62]: <pandas.core.groupby.generic.SeriesGroupBy object at 0x7f1ea100a490>
Additionally this method avoids recomputing the internal grouping information
derived from the passed key.
Iterating through groups#
With the GroupBy object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby():
In [63]: grouped = df.groupby('A')
In [64]: for name, group in grouped:
....: print(name)
....: print(group)
....:
bar
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo
A B C D
0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In the case of grouping by multiple keys, the group name will be a tuple:
In [65]: for name, group in df.groupby(['A', 'B']):
....: print(name)
....: print(group)
....:
('bar', 'one')
A B C D
1 bar one 0.254161 1.511763
('bar', 'three')
A B C D
3 bar three 0.215897 -0.990582
('bar', 'two')
A B C D
5 bar two -0.077118 1.211526
('foo', 'one')
A B C D
0 foo one -0.575247 1.346061
6 foo one -0.408530 0.268520
('foo', 'three')
A B C D
7 foo three -0.862495 0.02458
('foo', 'two')
A B C D
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
See Iterating through groups.
Selecting a group#
A single group can be selected using
get_group():
In [66]: grouped.get_group("bar")
Out[66]:
A B C D
1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
Or for an object grouped on multiple columns:
In [67]: df.groupby(["A", "B"]).get_group(("bar", "one"))
Out[67]:
A B C D
1 bar one 0.254161 1.511763
Aggregation#
Once the GroupBy object has been created, several methods are available to
perform a computation on the grouped data. These operations are similar to the
aggregating API, window API,
and resample API.
An obvious one is aggregation via the
aggregate() or equivalently
agg() method:
In [68]: grouped = df.groupby("A")
In [69]: grouped[["C", "D"]].aggregate(np.sum)
Out[69]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [70]: grouped = df.groupby(["A", "B"])
In [71]: grouped.aggregate(np.sum)
Out[71]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.983776 1.614581
three -0.862495 0.024580
two 0.049851 1.185429
As you can see, the result of the aggregation will have the group names as the
new index along the grouped axis. In the case of multiple keys, the result is a
MultiIndex by default, though this can be
changed by using the as_index option:
In [72]: grouped = df.groupby(["A", "B"], as_index=False)
In [73]: grouped.aggregate(np.sum)
Out[73]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
In [74]: df.groupby("A", as_index=False)[["C", "D"]].sum()
Out[74]:
A C D
0 bar 0.392940 1.732707
1 foo -1.796421 2.824590
Note that you could use the reset_index DataFrame function to achieve the
same result as the column names are stored in the resulting MultiIndex:
In [75]: df.groupby(["A", "B"]).sum().reset_index()
Out[75]:
A B C D
0 bar one 0.254161 1.511763
1 bar three 0.215897 -0.990582
2 bar two -0.077118 1.211526
3 foo one -0.983776 1.614581
4 foo three -0.862495 0.024580
5 foo two 0.049851 1.185429
Another simple aggregation example is to compute the size of each group.
This is included in GroupBy as the size method. It returns a Series whose
index are the group names and whose values are the sizes of each group.
In [76]: grouped.size()
Out[76]:
A B size
0 bar one 1
1 bar three 1
2 bar two 1
3 foo one 2
4 foo three 1
5 foo two 2
In [77]: grouped.describe()
Out[77]:
C ... D
count mean std min ... 25% 50% 75% max
0 1.0 0.254161 NaN 0.254161 ... 1.511763 1.511763 1.511763 1.511763
1 1.0 0.215897 NaN 0.215897 ... -0.990582 -0.990582 -0.990582 -0.990582
2 1.0 -0.077118 NaN -0.077118 ... 1.211526 1.211526 1.211526 1.211526
3 2.0 -0.491888 0.117887 -0.575247 ... 0.537905 0.807291 1.076676 1.346061
4 1.0 -0.862495 NaN -0.862495 ... 0.024580 0.024580 0.024580 0.024580
5 2.0 0.024925 1.652692 -1.143704 ... 0.075531 0.592714 1.109898 1.627081
[6 rows x 16 columns]
Another aggregation example is to compute the number of unique values of each group. This is similar to the value_counts function, except that it only counts unique values.
In [78]: ll = [['foo', 1], ['foo', 2], ['foo', 2], ['bar', 1], ['bar', 1]]
In [79]: df4 = pd.DataFrame(ll, columns=["A", "B"])
In [80]: df4
Out[80]:
A B
0 foo 1
1 foo 2
2 foo 2
3 bar 1
4 bar 1
In [81]: df4.groupby("A")["B"].nunique()
Out[81]:
A
bar 1
foo 2
Name: B, dtype: int64
Note
Aggregation functions will not return the groups that you are aggregating over
if they are named columns, when as_index=True, the default. The grouped columns will
be the indices of the returned object.
Passing as_index=False will return the groups that you are aggregating over, if they are
named columns.
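A compact contrast of the two settings, using the df from this section (the as_index=False form matches the example shown earlier):
df.groupby("A")[["C", "D"]].sum()                    # the grouped column A becomes the index
df.groupby("A", as_index=False)[["C", "D"]].sum()    # A comes back as a regular column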
Aggregating functions are the ones that reduce the dimension of the returned objects.
Some common aggregating functions are tabulated below:
Function
Description
mean()
Compute mean of groups
sum()
Compute sum of group values
size()
Compute group sizes
count()
Compute count of group
std()
Standard deviation of groups
var()
Compute variance of groups
sem()
Standard error of the mean of groups
describe()
Generates descriptive statistics
first()
Compute first of group values
last()
Compute last of group values
nth()
Take nth value, or a subset if n is a list
min()
Compute min of group values
max()
Compute max of group values
The aggregating functions above will exclude NA values. Any function which
reduces a Series to a scalar value is an aggregation function and will work;
a trivial example is df.groupby('A').agg(lambda ser: 1). Note that
nth() can act as a reducer or a
filter, see here.
Applying multiple functions at once#
With grouped Series you can also pass a list or dict of functions to do
aggregation with, outputting a DataFrame:
In [82]: grouped = df.groupby("A")
In [83]: grouped["C"].agg([np.sum, np.mean, np.std])
Out[83]:
sum mean std
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
On a grouped DataFrame, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [84]: grouped[["C", "D"]].agg([np.sum, np.mean, np.std])
Out[84]:
C D
sum mean std sum mean std
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
The resulting aggregations are named for the functions themselves. If you
need to rename, then you can add in a chained operation for a Series like this:
In [85]: (
....: grouped["C"]
....: .agg([np.sum, np.mean, np.std])
....: .rename(columns={"sum": "foo", "mean": "bar", "std": "baz"})
....: )
....:
Out[85]:
foo bar baz
A
bar 0.392940 0.130980 0.181231
foo -1.796421 -0.359284 0.912265
For a grouped DataFrame, you can rename in a similar manner:
In [86]: (
....: grouped[["C", "D"]].agg([np.sum, np.mean, np.std]).rename(
....: columns={"sum": "foo", "mean": "bar", "std": "baz"}
....: )
....: )
....:
Out[86]:
C D
foo bar baz foo bar baz
A
bar 0.392940 0.130980 0.181231 1.732707 0.577569 1.366330
foo -1.796421 -0.359284 0.912265 2.824590 0.564918 0.884785
Note
In general, the output column names should be unique. You can’t apply
the same function (or two functions with the same name) to the same
column.
In [87]: grouped["C"].agg(["sum", "sum"])
Out[87]:
sum sum
A
bar 0.392940 0.392940
foo -1.796421 -1.796421
pandas does allow you to provide multiple lambdas. In this case, pandas
will mangle the name of the (nameless) lambda functions, appending _<i>
to each subsequent lambda.
In [88]: grouped["C"].agg([lambda x: x.max() - x.min(), lambda x: x.median() - x.mean()])
Out[88]:
<lambda_0> <lambda_1>
A
bar 0.331279 0.084917
foo 2.337259 -0.215962
Named aggregation#
New in version 0.25.0.
To support column-specific aggregation with control over the output column names, pandas
accepts the special syntax in GroupBy.agg(), known as “named aggregation”, where
The keywords are the output column names
The values are tuples whose first element is the column to select
and the second element is the aggregation to apply to that column. pandas
provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc']
to make it clearer what the arguments are. As usual, the aggregation can
be a callable or a string alias.
In [89]: animals = pd.DataFrame(
....: {
....: "kind": ["cat", "dog", "cat", "dog"],
....: "height": [9.1, 6.0, 9.5, 34.0],
....: "weight": [7.9, 7.5, 9.9, 198.0],
....: }
....: )
....:
In [90]: animals
Out[90]:
kind height weight
0 cat 9.1 7.9
1 dog 6.0 7.5
2 cat 9.5 9.9
3 dog 34.0 198.0
In [91]: animals.groupby("kind").agg(
....: min_height=pd.NamedAgg(column="height", aggfunc="min"),
....: max_height=pd.NamedAgg(column="height", aggfunc="max"),
....: average_weight=pd.NamedAgg(column="weight", aggfunc=np.mean),
....: )
....:
Out[91]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
pandas.NamedAgg is just a namedtuple. Plain tuples are allowed as well.
In [92]: animals.groupby("kind").agg(
....: min_height=("height", "min"),
....: max_height=("height", "max"),
....: average_weight=("weight", np.mean),
....: )
....:
Out[92]:
min_height max_height average_weight
kind
cat 9.1 9.5 8.90
dog 6.0 34.0 102.75
If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments
In [93]: animals.groupby("kind").agg(
....: **{
....: "total weight": pd.NamedAgg(column="weight", aggfunc=sum)
....: }
....: )
....:
Out[93]:
total weight
kind
cat 17.8
dog 205.5
Additional keyword arguments are not passed through to the aggregation functions. Only pairs
of (column, aggfunc) should be passed as **kwargs. If your aggregation function
requires additional arguments, partially apply them with functools.partial().
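For example, a sketch of binding an extra argument ahead of time, reusing the animals frame from above (the q90 helper is made up for illustration):
import functools
import numpy as np

# np.quantile needs a q argument, so bind it before handing it to agg
q90 = functools.partial(np.quantile, q=0.9)

animals.groupby("kind").agg(height_q90=("height", q90))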
Note
For Python 3.5 and earlier, the order of **kwargs in a function was not
preserved. This means that the output column ordering would not be
consistent. To ensure consistent ordering, the keys (and so output columns)
will always be sorted for Python 3.5.
Named aggregation is also valid for Series groupby aggregations. In this case there’s
no column selection, so the values are just the functions.
In [94]: animals.groupby("kind").height.agg(
....: min_height="min",
....: max_height="max",
....: )
....:
Out[94]:
min_height max_height
kind
cat 9.1 9.5
dog 6.0 34.0
Applying different functions to DataFrame columns#
By passing a dict to aggregate you can apply a different aggregation to the
columns of a DataFrame:
In [95]: grouped.agg({"C": np.sum, "D": lambda x: np.std(x, ddof=1)})
Out[95]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
The function names can also be strings. In order for a string to be valid it
must be either implemented on GroupBy or available via dispatching:
In [96]: grouped.agg({"C": "sum", "D": "std"})
Out[96]:
C D
A
bar 0.392940 1.366330
foo -1.796421 0.884785
Cython-optimized aggregation functions#
Some common aggregations, currently only sum, mean, std, and sem, have
optimized Cython implementations:
In [97]: df.groupby("A")[["C", "D"]].sum()
Out[97]:
C D
A
bar 0.392940 1.732707
foo -1.796421 2.824590
In [98]: df.groupby(["A", "B"]).mean()
Out[98]:
C D
A B
bar one 0.254161 1.511763
three 0.215897 -0.990582
two -0.077118 1.211526
foo one -0.491888 0.807291
three -0.862495 0.024580
two 0.024925 0.592714
Of course sum and mean are implemented on pandas objects, so the above
code would work even without the special versions via dispatching (see below).
Aggregations with User-Defined Functions#
Users can also provide their own functions for custom aggregations. When aggregating
with a User-Defined Function (UDF), the UDF should not mutate the provided Series, see
Mutating with User Defined Function (UDF) methods for more information.
In [99]: animals.groupby("kind")[["height"]].agg(lambda x: set(x))
Out[99]:
height
kind
cat {9.1, 9.5}
dog {34.0, 6.0}
The resulting dtype will reflect that of the aggregating function. If the results from different groups have
different dtypes, then a common dtype will be determined in the same way as DataFrame construction.
In [100]: animals.groupby("kind")[["height"]].agg(lambda x: x.astype(int).sum())
Out[100]:
height
kind
cat 18
dog 40
Transformation#
The transform method returns an object that is indexed the same
as the one being grouped. The transform function must:
Return a result that is either the same size as the group chunk or
broadcastable to the size of the group chunk (e.g., a scalar,
grouped.transform(lambda x: x.iloc[-1])).
Operate column-by-column on the group chunk. The transform is applied to
the first group chunk using chunk.apply.
Not perform in-place operations on the group chunk. Group chunks should
be treated as immutable, and changes to a group chunk may produce unexpected
results. For example, when using fillna, inplace must be False
(grouped.transform(lambda x: x.fillna(inplace=False))).
(Optionally) operate on the entire group chunk. If this is supported, a
fast path is used starting from the second chunk.
Deprecated since version 1.5.0: When using .transform on a grouped DataFrame and the transformation function
returns a DataFrame, currently pandas does not align the result’s index
with the input’s index. This behavior is deprecated and alignment will
be performed in a future version of pandas. You can apply .to_numpy() to the
result of the transformation function to avoid alignment.
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
transformation function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
Suppose we wished to standardize the data within each group:
In [101]: index = pd.date_range("10/1/1999", periods=1100)
In [102]: ts = pd.Series(np.random.normal(0.5, 2, 1100), index)
In [103]: ts = ts.rolling(window=100, min_periods=100).mean().dropna()
In [104]: ts.head()
Out[104]:
2000-01-08 0.779333
2000-01-09 0.778852
2000-01-10 0.786476
2000-01-11 0.782797
2000-01-12 0.798110
Freq: D, dtype: float64
In [105]: ts.tail()
Out[105]:
2002-09-30 0.660294
2002-10-01 0.631095
2002-10-02 0.673601
2002-10-03 0.709213
2002-10-04 0.719369
Freq: D, dtype: float64
In [106]: transformed = ts.groupby(lambda x: x.year).transform(
.....: lambda x: (x - x.mean()) / x.std()
.....: )
.....:
We would expect the result to now have mean 0 and standard deviation 1 within
each group, which we can easily check:
# Original Data
In [107]: grouped = ts.groupby(lambda x: x.year)
In [108]: grouped.mean()
Out[108]:
2000 0.442441
2001 0.526246
2002 0.459365
dtype: float64
In [109]: grouped.std()
Out[109]:
2000 0.131752
2001 0.210945
2002 0.128753
dtype: float64
# Transformed Data
In [110]: grouped_trans = transformed.groupby(lambda x: x.year)
In [111]: grouped_trans.mean()
Out[111]:
2000 -4.870756e-16
2001 -1.545187e-16
2002 4.136282e-16
dtype: float64
In [112]: grouped_trans.std()
Out[112]:
2000 1.0
2001 1.0
2002 1.0
dtype: float64
We can also visually compare the original and transformed data sets.
In [113]: compare = pd.DataFrame({"Original": ts, "Transformed": transformed})
In [114]: compare.plot()
Out[114]: <AxesSubplot: >
Transformation functions that have lower dimension outputs are broadcast to
match the shape of the input array.
In [115]: ts.groupby(lambda x: x.year).transform(lambda x: x.max() - x.min())
Out[115]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Alternatively, the built-in methods could be used to produce the same outputs.
In [116]: max_ts = ts.groupby(lambda x: x.year).transform("max")
In [117]: min_ts = ts.groupby(lambda x: x.year).transform("min")
In [118]: max_ts - min_ts
Out[118]:
2000-01-08 0.623893
2000-01-09 0.623893
2000-01-10 0.623893
2000-01-11 0.623893
2000-01-12 0.623893
...
2002-09-30 0.558275
2002-10-01 0.558275
2002-10-02 0.558275
2002-10-03 0.558275
2002-10-04 0.558275
Freq: D, Length: 1001, dtype: float64
Another common data transform is to replace missing data with the group mean.
In [119]: data_df
Out[119]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 NaN
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 NaN
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 NaN
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
In [120]: countries = np.array(["US", "UK", "GR", "JP"])
In [121]: key = countries[np.random.randint(0, 4, 1000)]
In [122]: grouped = data_df.groupby(key)
# Non-NA count in each group
In [123]: grouped.count()
Out[123]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [124]: transformed = grouped.transform(lambda x: x.fillna(x.mean()))
We can verify that the group means have not changed in the transformed data
and that the transformed data contains no NAs.
In [125]: grouped_trans = transformed.groupby(key)
In [126]: grouped.mean() # original group means
Out[126]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [127]: grouped_trans.mean() # transformation did not change group means
Out[127]:
A B C
GR -0.098371 -0.015420 0.068053
JP 0.069025 0.023100 -0.077324
UK 0.034069 -0.052580 -0.116525
US 0.058664 -0.020399 0.028603
In [128]: grouped.count() # original has some missing data points
Out[128]:
A B C
GR 209 217 189
JP 240 255 217
UK 216 231 193
US 239 250 217
In [129]: grouped_trans.count() # counts after transformation
Out[129]:
A B C
GR 228 228 228
JP 267 267 267
UK 247 247 247
US 258 258 258
In [130]: grouped_trans.size() # Verify non-NA count equals group size
Out[130]:
GR 228
JP 267
UK 247
US 258
dtype: int64
Note
Some functions will automatically transform the input when applied to a
GroupBy object, returning an object of the same shape as the original.
Passing as_index=False will not affect these transformation methods.
For example: fillna, ffill, bfill, shift.
In [131]: grouped.ffill()
Out[131]:
A B C
0 1.539708 -1.166480 0.533026
1 1.302092 -0.505754 0.533026
2 -0.371983 1.104803 -0.651520
3 -1.309622 1.118697 -1.161657
4 -1.924296 0.396437 0.812436
.. ... ... ...
995 -0.093110 0.683847 -0.774753
996 -0.185043 1.438572 -0.774753
997 -0.394469 -0.642343 0.011374
998 -1.174126 1.857148 -0.774753
999 0.234564 0.517098 0.393534
[1000 rows x 3 columns]
Window and resample operations#
It is possible to use resample(), expanding() and
rolling() as methods on groupbys.
The example below will apply the rolling() method on the samples of
the column B based on the groups of column A.
In [132]: df_re = pd.DataFrame({"A": [1] * 10 + [5] * 10, "B": np.arange(20)})
In [133]: df_re
Out[133]:
A B
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
.. .. ..
15 5 15
16 5 16
17 5 17
18 5 18
19 5 19
[20 rows x 2 columns]
In [134]: df_re.groupby("A").rolling(4).B.mean()
Out[134]:
A
1 0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
...
5 15 13.5
16 14.5
17 15.5
18 16.5
19 17.5
Name: B, Length: 20, dtype: float64
The expanding() method will accumulate a given operation
(sum() in the example) for all the members of each particular
group.
In [135]: df_re.groupby("A").expanding().sum()
Out[135]:
B
A
1 0 0.0
1 1.0
2 3.0
3 6.0
4 10.0
... ...
5 15 75.0
16 91.0
17 108.0
18 126.0
19 145.0
[20 rows x 1 columns]
Suppose you want to use the resample() method to get a daily
frequency in each group of your dataframe and wish to complete the
missing values with the ffill() method.
In [136]: df_re = pd.DataFrame(
.....: {
.....: "date": pd.date_range(start="2016-01-01", periods=4, freq="W"),
.....: "group": [1, 1, 2, 2],
.....: "val": [5, 6, 7, 8],
.....: }
.....: ).set_index("date")
.....:
In [137]: df_re
Out[137]:
group val
date
2016-01-03 1 5
2016-01-10 1 6
2016-01-17 2 7
2016-01-24 2 8
In [138]: df_re.groupby("group").resample("1D").ffill()
Out[138]:
group val
group date
1 2016-01-03 1 5
2016-01-04 1 5
2016-01-05 1 5
2016-01-06 1 5
2016-01-07 1 5
... ... ...
2 2016-01-20 2 7
2016-01-21 2 7
2016-01-22 2 7
2016-01-23 2 7
2016-01-24 2 8
[16 rows x 2 columns]
Filtration#
The filter method returns a subset of the original object. Suppose we
want to take only elements that belong to groups with a group sum greater
than 2.
In [139]: sf = pd.Series([1, 1, 2, 3, 3, 3])
In [140]: sf.groupby(sf).filter(lambda x: x.sum() > 2)
Out[140]:
3 3
4 3
5 3
dtype: int64
The argument of filter must be a function that, applied to the group as a
whole, returns True or False.
Another useful operation is filtering out elements that belong to groups
with only a couple members.
In [141]: dff = pd.DataFrame({"A": np.arange(8), "B": list("aabbbbcc")})
In [142]: dff.groupby("B").filter(lambda x: len(x) > 2)
Out[142]:
A B
2 2 b
3 3 b
4 4 b
5 5 b
Alternatively, instead of dropping the offending groups, we can return a
like-indexed object where the groups that do not pass the filter are filled
with NaNs.
In [143]: dff.groupby("B").filter(lambda x: len(x) > 2, dropna=False)
Out[143]:
A B
0 NaN NaN
1 NaN NaN
2 2.0 b
3 3.0 b
4 4.0 b
5 5.0 b
6 NaN NaN
7 NaN NaN
For DataFrames with multiple columns, filters should explicitly specify a column as the filter criterion.
In [144]: dff["C"] = np.arange(8)
In [145]: dff.groupby("B").filter(lambda x: len(x["C"]) > 2)
Out[145]:
A B C
2 2 b 2
3 3 b 3
4 4 b 4
5 5 b 5
Note
Some functions when applied to a groupby object will act as a filter on the input, returning
a reduced shape of the original (and potentially eliminating groups), but with the index unchanged.
Passing as_index=False will not affect these transformation methods.
For example: head, tail.
In [146]: dff.groupby("B").head(2)
Out[146]:
A B C
0 0 a 0
1 1 a 1
2 2 b 2
3 3 b 3
6 6 c 6
7 7 c 7
Dispatching to instance methods#
When doing an aggregation or transformation, you might just want to call an
instance method on each data group. This is pretty easy to do by passing lambda
functions:
In [147]: grouped = df.groupby("A")
In [148]: grouped.agg(lambda x: x.std())
Out[148]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
But, it’s rather verbose and can be untidy if you need to pass additional
arguments. Using a bit of metaprogramming cleverness, GroupBy now has the
ability to “dispatch” method calls to the groups:
In [149]: grouped.std()
Out[149]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
What is actually happening here is that a function wrapper is being
generated. When invoked, it takes any passed arguments and invokes the function
with any arguments on each group (in the above example, the std
function). The results are then combined together much in the style of agg
and transform (it actually uses apply to infer the gluing, documented
next). This enables some operations to be carried out rather succinctly:
In [150]: tsdf = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2000", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [151]: tsdf.iloc[::2] = np.nan
In [152]: grouped = tsdf.groupby(lambda x: x.year)
In [153]: grouped.fillna(method="pad")
Out[153]:
A B C
2000-01-01 NaN NaN NaN
2000-01-02 -0.353501 -0.080957 -0.876864
2000-01-03 -0.353501 -0.080957 -0.876864
2000-01-04 0.050976 0.044273 -0.559849
2000-01-05 0.050976 0.044273 -0.559849
... ... ... ...
2002-09-22 0.005011 0.053897 -1.026922
2002-09-23 0.005011 0.053897 -1.026922
2002-09-24 -0.456542 -1.849051 1.559856
2002-09-25 -0.456542 -1.849051 1.559856
2002-09-26 1.123162 0.354660 1.128135
[1000 rows x 3 columns]
In this example, we chopped the collection of time series into yearly chunks
then independently called fillna on the
groups.
The nlargest and nsmallest methods work on Series style groupbys:
In [154]: s = pd.Series([9, 8, 7, 5, 19, 1, 4.2, 3.3])
In [155]: g = pd.Series(list("abababab"))
In [156]: gb = s.groupby(g)
In [157]: gb.nlargest(3)
Out[157]:
a 4 19.0
0 9.0
2 7.0
b 1 8.0
3 5.0
7 3.3
dtype: float64
In [158]: gb.nsmallest(3)
Out[158]:
a 6 4.2
2 7.0
0 9.0
b 5 1.0
7 3.3
3 5.0
dtype: float64
Flexible apply#
Some operations on the grouped data might not fit into either the aggregate or
transform categories. Or, you may simply want GroupBy to infer how to combine
the results. For these, use the apply function, which can be substituted
for both aggregate and transform in many standard use cases. However,
apply can handle some exceptional use cases.
Note
apply can act as a reducer, transformer, or filter function, depending
on the passed function and on exactly what you are grouping. Thus the
grouped column(s) may be included in the output and may also be used to
set the indices.
In [159]: df
Out[159]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
In [160]: grouped = df.groupby("A")
# could also just call .describe()
In [161]: grouped["C"].apply(lambda x: x.describe())
Out[161]:
A
bar count 3.000000
mean 0.130980
std 0.181231
min -0.077118
25% 0.069390
...
foo min -1.143704
25% -0.862495
50% -0.575247
75% -0.408530
max 1.193555
Name: C, Length: 16, dtype: float64
The dimension of the returned result can also change:
In [162]: grouped = df.groupby('A')['C']
In [163]: def f(group):
.....: return pd.DataFrame({'original': group,
.....: 'demeaned': group - group.mean()})
.....:
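Calling apply with this function, a step the excerpt stops short of showing, combines the per-group frames into a single DataFrame with the original and demeaned columns:
grouped.apply(f)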
apply on a Series can operate on a returned value from the applied function
that is itself a Series, and possibly upcast the result to a DataFrame:
In [164]: def f(x):
.....: return pd.Series([x, x ** 2], index=["x", "x^2"])
.....:
In [165]: s = pd.Series(np.random.rand(5))
In [166]: s
Out[166]:
0 0.321438
1 0.493496
2 0.139505
3 0.910103
4 0.194158
dtype: float64
In [167]: s.apply(f)
Out[167]:
x x^2
0 0.321438 0.103323
1 0.493496 0.243538
2 0.139505 0.019462
3 0.910103 0.828287
4 0.194158 0.037697
Control grouped column(s) placement with group_keys#
Note
If group_keys=True is specified when calling groupby(),
functions passed to apply that return like-indexed outputs will have the
group keys added to the result index. Previous versions of pandas would add
the group keys only when the result from the applied function had a different
index than the input. If group_keys is not specified, the group keys will
not be added for like-indexed outputs. In the future this behavior
will change to always respect group_keys, which defaults to True.
Changed in version 1.5.0.
To control whether the grouped column(s) are included in the indices, you can use
the argument group_keys. Compare
In [168]: df.groupby("A", group_keys=True).apply(lambda x: x)
Out[168]:
A B C D
A
bar 1 bar one 0.254161 1.511763
3 bar three 0.215897 -0.990582
5 bar two -0.077118 1.211526
foo 0 foo one -0.575247 1.346061
2 foo two -1.143704 1.627081
4 foo two 1.193555 -0.441652
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
with
In [169]: df.groupby("A", group_keys=False).apply(lambda x: x)
Out[169]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Similar to Aggregations with User-Defined Functions, the resulting dtype will reflect that of the
apply function. If the results from different groups have different dtypes, then
a common dtype will be determined in the same way as DataFrame construction.
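As an illustrative, hedged sketch of this upcasting (df_mix and summarize are hypothetical names, not part of the examples above): one group returning an integer and another returning a float yields a common float64 result, mirroring DataFrame construction.

import pandas as pd

df_mix = pd.DataFrame({"key": ["a", "a", "b", "b"], "val": [1, 2, 3, 4]})

def summarize(g):
    # an int for group 'a', a float for group 'b'
    return g["val"].sum() if g.name == "a" else g["val"].mean()

df_mix.groupby("key").apply(summarize).dtype   # float64, the common dtype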
Numba Accelerated Routines#
New in version 1.1.
If Numba is installed as an optional dependency, the transform and
aggregate methods support engine='numba' and engine_kwargs arguments.
See enhancing performance with Numba for general usage of the arguments
and performance considerations.
The function signature must start with values, index exactly as the data belonging to each group
will be passed into values, and the group index will be passed into index.
Warning
When using engine='numba', there will be no “fall back” behavior internally. The group
data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.
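A minimal, hedged sketch of the required signature (group_mean and df_nb are illustrative names; this assumes the optional numba dependency is installed):

import numpy as np
import pandas as pd

df_nb = pd.DataFrame({"key": ["a", "a", "b", "b"], "val": [1.0, 2.0, 3.0, 4.0]})

def group_mean(values, index):  # the signature must be (values, index)
    return np.mean(values)

# the group's values and index are passed as NumPy arrays to the JITed function
df_nb.groupby("key")["val"].agg(group_mean, engine="numba")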
Other useful features#
Automatic exclusion of “nuisance” columns#
Again consider the example DataFrame we’ve been looking at:
In [170]: df
Out[170]:
A B C D
0 foo one -0.575247 1.346061
1 bar one 0.254161 1.511763
2 foo two -1.143704 1.627081
3 bar three 0.215897 -0.990582
4 foo two 1.193555 -0.441652
5 bar two -0.077118 1.211526
6 foo one -0.408530 0.268520
7 foo three -0.862495 0.024580
Suppose we wish to compute the standard deviation grouped by the A
column. There is a slight problem, namely that we don’t care about the data in
column B. We refer to this as a “nuisance” column. You can avoid nuisance
columns by specifying numeric_only=True:
In [171]: df.groupby("A").std(numeric_only=True)
Out[171]:
C D
A
bar 0.181231 1.366330
foo 0.912265 0.884785
Note that df.groupby('A').colname.std() is more efficient than
df.groupby('A').std().colname, so if the result of an aggregation function
is only needed for one column (here colname), that column may be selected
before applying the aggregation function.
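For example, a minimal sketch of selecting the column of interest first (reusing the df shown above):

# aggregate only the column you care about instead of all columns
df.groupby("A")["C"].std()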
Note
Any column of object dtype, even if it contains numerical values such as Decimal
objects, is considered a “nuisance” column. Such columns are excluded from
aggregate functions automatically in groupby.
If you do wish to include decimal or object columns in an aggregation with
other non-nuisance data types, you must do so explicitly.
Warning
The automatic dropping of nuisance columns has been deprecated and will be removed
in a future version of pandas. If columns are included that cannot be operated
on, pandas will instead raise an error. In order to avoid this, either select
the columns you wish to operate on or specify numeric_only=True.
In [172]: from decimal import Decimal
In [173]: df_dec = pd.DataFrame(
.....: {
.....: "id": [1, 2, 1, 2],
.....: "int_column": [1, 2, 3, 4],
.....: "dec_column": [
.....: Decimal("0.50"),
.....: Decimal("0.15"),
.....: Decimal("0.25"),
.....: Decimal("0.40"),
.....: ],
.....: }
.....: )
.....:
# Decimal columns can be sum'd explicitly by themselves...
In [174]: df_dec.groupby(["id"])[["dec_column"]].sum()
Out[174]:
dec_column
id
1 0.75
2 0.55
# ...but cannot be combined with standard data types or they will be excluded
In [175]: df_dec.groupby(["id"])[["int_column", "dec_column"]].sum()
Out[175]:
int_column
id
1 4
2 6
# Use .agg function to aggregate over standard and "nuisance" data types
# at the same time
In [176]: df_dec.groupby(["id"]).agg({"int_column": "sum", "dec_column": "sum"})
Out[176]:
int_column dec_column
id
1 4 0.75
2 6 0.55
Handling of (un)observed Categorical values#
When using a Categorical grouper (as a single grouper, or as part of multiple groupers), the observed keyword
controls whether to return a cartesian product of all possible groupers values (observed=False) or only those
that are observed groupers (observed=True).
Show all values:
In [177]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False
.....: ).count()
.....:
Out[177]:
a 3
b 0
dtype: int64
Show only the observed values:
In [178]: pd.Series([1, 1, 1]).groupby(
.....: pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=True
.....: ).count()
.....:
Out[178]:
a 3
dtype: int64
The dtype of the returned result's index will always include all of the categories that were grouped.
In [179]: s = (
.....: pd.Series([1, 1, 1])
.....: .groupby(pd.Categorical(["a", "a", "a"], categories=["a", "b"]), observed=False)
.....: .count()
.....: )
.....:
In [180]: s.index.dtype
Out[180]: CategoricalDtype(categories=['a', 'b'], ordered=False)
NA and NaT group handling#
If there are any NaN or NaT values in the grouping key, these will be
automatically excluded. In other words, there will never be an “NA group” or
“NaT group”. This was not the case in older versions of pandas, but users were
generally discarding the NA group anyway (and supporting it was an
implementation headache).
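A minimal, hedged sketch of this exclusion (df_na is an illustrative frame, not one used elsewhere in this document):

import numpy as np
import pandas as pd

df_na = pd.DataFrame({"key": ["a", "a", np.nan, "b"], "val": [1, 2, 3, 4]})

# the row whose key is NaN does not appear in any group
df_na.groupby("key")["val"].sum()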
Grouping with ordered factors#
Categorical variables represented as instance of pandas’s Categorical class
can be used as group keys. If so, the order of the levels will be preserved:
In [181]: data = pd.Series(np.random.randn(100))
In [182]: factor = pd.qcut(data, [0, 0.25, 0.5, 0.75, 1.0])
In [183]: data.groupby(factor).mean()
Out[183]:
(-2.645, -0.523] -1.362896
(-0.523, 0.0296] -0.260266
(0.0296, 0.654] 0.361802
(0.654, 2.21] 1.073801
dtype: float64
Grouping with a grouper specification#
You may need to specify a bit more data to properly group. You can
use the pd.Grouper to provide this local control.
In [184]: import datetime
In [185]: df = pd.DataFrame(
.....: {
.....: "Branch": "A A A A A A A B".split(),
.....: "Buyer": "Carl Mark Carl Carl Joe Joe Joe Carl".split(),
.....: "Quantity": [1, 3, 5, 1, 8, 1, 9, 3],
.....: "Date": [
.....: datetime.datetime(2013, 1, 1, 13, 0),
.....: datetime.datetime(2013, 1, 1, 13, 5),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 10, 1, 20, 0),
.....: datetime.datetime(2013, 10, 2, 10, 0),
.....: datetime.datetime(2013, 12, 2, 12, 0),
.....: datetime.datetime(2013, 12, 2, 14, 0),
.....: ],
.....: }
.....: )
.....:
In [186]: df
Out[186]:
Branch Buyer Quantity Date
0 A Carl 1 2013-01-01 13:00:00
1 A Mark 3 2013-01-01 13:05:00
2 A Carl 5 2013-10-01 20:00:00
3 A Carl 1 2013-10-02 10:00:00
4 A Joe 8 2013-10-01 20:00:00
5 A Joe 1 2013-10-02 10:00:00
6 A Joe 9 2013-12-02 12:00:00
7 B Carl 3 2013-12-02 14:00:00
Groupby a specific column with the desired frequency. This is like resampling.
In [187]: df.groupby([pd.Grouper(freq="1M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[187]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2013-10-31 Carl 6
Joe 9
2013-12-31 Carl 3
Joe 9
When you have an ambiguous specification, for example a named index and a column
with the same name that could both be potential groupers, pd.Grouper lets you choose
between them with the key and level arguments.
In [188]: df = df.set_index("Date")
In [189]: df["Date"] = df.index + pd.offsets.MonthEnd(2)
In [190]: df.groupby([pd.Grouper(freq="6M", key="Date"), "Buyer"])[["Quantity"]].sum()
Out[190]:
Quantity
Date Buyer
2013-02-28 Carl 1
Mark 3
2014-02-28 Carl 9
Joe 18
In [191]: df.groupby([pd.Grouper(freq="6M", level="Date"), "Buyer"])[["Quantity"]].sum()
Out[191]:
Quantity
Date Buyer
2013-01-31 Carl 1
Mark 3
2014-01-31 Carl 9
Joe 18
Taking the first rows of each group#
Just like for a DataFrame or Series you can call head and tail on a groupby:
In [192]: df = pd.DataFrame([[1, 2], [1, 4], [5, 6]], columns=["A", "B"])
In [193]: df
Out[193]:
A B
0 1 2
1 1 4
2 5 6
In [194]: g = df.groupby("A")
In [195]: g.head(1)
Out[195]:
A B
0 1 2
2 5 6
In [196]: g.tail(1)
Out[196]:
A B
1 1 4
2 5 6
This shows the first or last n rows from each group.
Taking the nth row of each group#
To select from a DataFrame or Series the nth item, use
nth(). This is a reduction method, and
will return a single row (or no row) per group if you pass an int for n:
In [197]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [198]: g = df.groupby("A")
In [199]: g.nth(0)
Out[199]:
B
A
1 NaN
5 6.0
In [200]: g.nth(-1)
Out[200]:
B
A
1 4.0
5 6.0
In [201]: g.nth(1)
Out[201]:
B
A
1 4.0
If you want to select the nth not-null item, use the dropna kwarg. For a DataFrame this should be either 'any' or 'all' just like you would pass to dropna:
# nth(0) is the same as g.first()
In [202]: g.nth(0, dropna="any")
Out[202]:
B
A
1 4.0
5 6.0
In [203]: g.first()
Out[203]:
B
A
1 4.0
5 6.0
# nth(-1) is the same as g.last()
In [204]: g.nth(-1, dropna="any") # NaNs denote group exhausted when using dropna
Out[204]:
B
A
1 4.0
5 6.0
In [205]: g.last()
Out[205]:
B
A
1 4.0
5 6.0
In [206]: g.B.nth(0, dropna="all")
Out[206]:
A
1 4.0
5 6.0
Name: B, dtype: float64
As with other methods, passing as_index=False will achieve a filtration, which returns the grouped rows.
In [207]: df = pd.DataFrame([[1, np.nan], [1, 4], [5, 6]], columns=["A", "B"])
In [208]: g = df.groupby("A", as_index=False)
In [209]: g.nth(0)
Out[209]:
A B
0 1 NaN
2 5 6.0
In [210]: g.nth(-1)
Out[210]:
A B
1 1 4.0
2 5 6.0
You can also select multiple rows from each group by specifying multiple nth values as a list of ints.
In [211]: business_dates = pd.date_range(start="4/1/2014", end="6/30/2014", freq="B")
In [212]: df = pd.DataFrame(1, index=business_dates, columns=["a", "b"])
# get the first, 4th, and last date index for each month
In [213]: df.groupby([df.index.year, df.index.month]).nth([0, 3, -1])
Out[213]:
a b
2014 4 1 1
4 1 1
4 1 1
5 1 1
5 1 1
5 1 1
6 1 1
6 1 1
6 1 1
Enumerate group items#
To see the order in which each row appears within its group, use the
cumcount method:
In [214]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [215]: dfg
Out[215]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [216]: dfg.groupby("A").cumcount()
Out[216]:
0 0
1 1
2 2
3 0
4 1
5 3
dtype: int64
In [217]: dfg.groupby("A").cumcount(ascending=False)
Out[217]:
0 3
1 2
2 1
3 1
4 0
5 0
dtype: int64
Enumerate groups#
To see the ordering of the groups (as opposed to the order of rows
within a group given by cumcount) you can use
ngroup().
Note that the numbers given to the groups match the order in which the
groups would be seen when iterating over the groupby object, not the
order they are first observed.
In [218]: dfg = pd.DataFrame(list("aaabba"), columns=["A"])
In [219]: dfg
Out[219]:
A
0 a
1 a
2 a
3 b
4 b
5 a
In [220]: dfg.groupby("A").ngroup()
Out[220]:
0 0
1 0
2 0
3 1
4 1
5 0
dtype: int64
In [221]: dfg.groupby("A").ngroup(ascending=False)
Out[221]:
0 1
1 1
2 1
3 0
4 0
5 1
dtype: int64
Plotting#
Groupby also works with some plotting methods. For example, suppose we
suspect that some features in a DataFrame may differ by group, in this case,
the values in column 1 where the group is “B” are 3 higher on average.
In [222]: np.random.seed(1234)
In [223]: df = pd.DataFrame(np.random.randn(50, 2))
In [224]: df["g"] = np.random.choice(["A", "B"], size=50)
In [225]: df.loc[df["g"] == "B", 1] += 3
We can easily visualize this with a boxplot:
In [226]: df.groupby("g").boxplot()
Out[226]:
A AxesSubplot(0.1,0.15;0.363636x0.75)
B AxesSubplot(0.536364,0.15;0.363636x0.75)
dtype: object
The result of calling boxplot is a dictionary whose keys are the values
of our grouping column g (“A” and “B”). The values of the resulting dictionary
can be controlled by the return_type keyword of boxplot.
See the visualization documentation for more.
Warning
For historical reasons, df.groupby("g").boxplot() is not equivalent
to df.boxplot(by="g"). See here for
an explanation.
Piping function calls#
Similar to the functionality provided by DataFrame and Series, functions
that take GroupBy objects can be chained together using a pipe method to
allow for a cleaner, more readable syntax. To read about .pipe in general terms,
see here.
Combining .groupby and .pipe is often useful when you need to reuse
GroupBy objects.
As an example, imagine having a DataFrame with columns for stores, products,
revenue and quantity sold. We’d like to do a groupwise calculation of prices
(i.e. revenue/quantity) per store and per product. We could do this in a
multi-step operation, but expressing it in terms of piping can make the
code more readable. First we set the data:
In [227]: n = 1000
In [228]: df = pd.DataFrame(
.....: {
.....: "Store": np.random.choice(["Store_1", "Store_2"], n),
.....: "Product": np.random.choice(["Product_1", "Product_2"], n),
.....: "Revenue": (np.random.random(n) * 50 + 10).round(2),
.....: "Quantity": np.random.randint(1, 10, size=n),
.....: }
.....: )
.....:
In [229]: df.head(2)
Out[229]:
Store Product Revenue Quantity
0 Store_2 Product_1 26.12 1
1 Store_2 Product_1 28.86 1
Now, to find prices per store/product, we can simply do:
In [230]: (
.....: df.groupby(["Store", "Product"])
.....: .pipe(lambda grp: grp.Revenue.sum() / grp.Quantity.sum())
.....: .unstack()
.....: .round(2)
.....: )
.....:
Out[230]:
Product Product_1 Product_2
Store
Store_1 6.82 7.05
Store_2 6.30 6.64
Piping can also be expressive when you want to deliver a grouped object to some
arbitrary function, for example:
In [231]: def mean(groupby):
.....: return groupby.mean()
.....:
In [232]: df.groupby(["Store", "Product"]).pipe(mean)
Out[232]:
Revenue Quantity
Store Product
Store_1 Product_1 34.622727 5.075758
Product_2 35.482815 5.029630
Store_2 Product_1 32.972837 5.237589
Product_2 34.684360 5.224000
where mean takes a GroupBy object and finds the mean of the Revenue and Quantity
columns respectively for each Store-Product combination. The mean function can
be any function that takes in a GroupBy object; the .pipe will pass the GroupBy
object as a parameter into the function you specify.
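Extra positional and keyword arguments given to .pipe are forwarded to the function as well; a hedged sketch (top_mean is a hypothetical helper, reusing the Store/Product frame from above):

def top_mean(gb, column, n=2):
    # average the given column per group and return the n largest results
    return gb[column].mean().nlargest(n)

df.groupby(["Store", "Product"]).pipe(top_mean, "Revenue", n=2)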
Examples#
Regrouping by factor#
Regroup columns of a DataFrame according to their sum, and sum the aggregated ones.
In [233]: df = pd.DataFrame({"a": [1, 0, 0], "b": [0, 1, 0], "c": [1, 0, 0], "d": [2, 3, 4]})
In [234]: df
Out[234]:
a b c d
0 1 0 1 2
1 0 1 0 3
2 0 0 0 4
In [235]: df.groupby(df.sum(), axis=1).sum()
Out[235]:
1 9
0 2 2
1 1 3
2 0 4
Multi-column factorization#
By using ngroup(), we can extract
information about the groups in a way similar to factorize() (as described
further in the reshaping API) but which applies
naturally to multiple columns of mixed type and different
sources. This can be useful as an intermediate categorical-like step
in processing, when the relationships between the group rows are more
important than their content, or as input to an algorithm which only
accepts the integer encoding. (For more information about support in
pandas for full categorical data, see the Categorical
introduction and the
API documentation.)
In [236]: dfg = pd.DataFrame({"A": [1, 1, 2, 3, 2], "B": list("aaaba")})
In [237]: dfg
Out[237]:
A B
0 1 a
1 1 a
2 2 a
3 3 b
4 2 a
In [238]: dfg.groupby(["A", "B"]).ngroup()
Out[238]:
0 0
1 0
2 1
3 2
4 1
dtype: int64
In [239]: dfg.groupby(["A", [0, 0, 0, 1, 1]]).ngroup()
Out[239]:
0 0
1 0
2 1
3 3
4 2
dtype: int64
Groupby by indexer to ‘resample’ data#
Resampling produces new hypothetical samples (resamples) from already existing observed data or from a model that generates data. These new samples are similar to the pre-existing samples.
In order for resampling to work on indices that are non-datetimelike, the following procedure can be utilized.
In the following examples, df.index // 5 returns an integer array (containing only 0 and 1 here) which is used to determine what gets selected for the groupby operation.
Note
The example below shows how we can downsample by consolidating samples into fewer samples. Here, df.index // 5 assigns each sample to a bin, and applying the std() function condenses the information contained in many samples into a single value per bin (their standard deviation), thereby reducing the number of samples.
In [240]: df = pd.DataFrame(np.random.randn(10, 2))
In [241]: df
Out[241]:
0 1
0 -0.793893 0.321153
1 0.342250 1.618906
2 -0.975807 1.918201
3 -0.810847 -1.405919
4 -1.977759 0.461659
5 0.730057 -1.316938
6 -0.751328 0.528290
7 -0.257759 -1.081009
8 0.505895 -1.701948
9 -1.006349 0.020208
In [242]: df.index // 5
Out[242]: Int64Index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype='int64')
In [243]: df.groupby(df.index // 5).std()
Out[243]:
0 1
0 0.823647 1.312912
1 0.760109 0.942941
Returning a Series to propagate names#
Group DataFrame columns, compute a set of metrics and return a named Series.
The Series name is used as the name for the column index. This is especially
useful in conjunction with reshaping operations such as stacking in which the
column index name will be used as the name of the inserted column:
In [244]: df = pd.DataFrame(
.....: {
.....: "a": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
.....: "b": [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1],
.....: "c": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
.....: "d": [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
.....: }
.....: )
.....:
In [245]: def compute_metrics(x):
.....: result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()}
.....: return pd.Series(result, name="metrics")
.....:
In [246]: result = df.groupby("a").apply(compute_metrics)
In [247]: result
Out[247]:
metrics b_sum c_mean
a
0 2.0 0.5
1 2.0 0.5
2 2.0 0.5
In [248]: result.stack()
Out[248]:
a metrics
0 b_sum 2.0
c_mean 0.5
1 b_sum 2.0
c_mean 0.5
2 b_sum 2.0
c_mean 0.5
dtype: float64
| 601
| 1,218
|
Append value to list inside a column manipulates all rows instead of one
I have the following Dataframe:
text values
0 a text []
1 another text []
2 some more text []
3 and again some text []
I want to append items to a specific list by index. For example I want to add "value" to the first row.
However when I do df.iloc[0]['values'].append("value"), "value" is added to every list in the column values:
text values
0 a text ["value"]
1 another text ["value"]
2 some more text ["value"]
3 and again some text ["value"]
I also tried df['values'].iloc[0].append("value"), with the same result. Any idea what I am doing wrong?
|
66,388,376
|
Pandas long reshape for several variables
|
<p>I want to reshape my long dataframe to wide by sorting it by <code>Session</code>. In this example <code>Session</code> is from 1-10.</p>
<pre><code> Session Tube Window Counts Length
0 1 1 1 0.0 0.0
1 1 1 2 0.0 0.0
2 1 1 3 0.0 0.0
3 1 1 4 0.0 0.0
4 1 1 5 0.0 0.0
... ... ... ... ... ...
17995 10 53 36 0.0 0.0
17996 10 53 37 0.0 0.0
17997 10 53 38 0.0 0.0
17998 10 53 39 0.0 0.0
17999 10 53 40 0.0 0.0
</code></pre>
<p>What I am expecting is something like:</p>
<pre><code> Session Tube Window Counts_1 Length_1 Session Counts_2 Length_2
0 1 1 1 0.0 0.0 0 2 0.0 0.0
1 1 1 2 0.0 0.0 1 2 0.0 0.0
2 1 1 3 0.0 0.0 2 2 0.0 0.0
3 1 1 4 0.0 0.0 3 2 0.0 0.0
4 1 1 5 0.0 0.0 4 2 0.0 0.0
... ... ... ... ... ... ... ... ... ... ... ...
17995 10 53 36 0.0 0.0
</code></pre>
<p>I could not find the solution. What I tried leads to a complete wide dataset.</p>
<pre><code>df['idx'] = df.groupby('Session').cumcount()+1
df = df.pivot_table(index=['Session'], columns='idx',
values=['Counts', 'Length'], aggfunc='first')
df = df.sort_index(axis=1, level=1)
df.columns = [f'{x}_{y}' for x,y in df.columns]
df = df.reset_index()
Session Counts_1 Length_1 Counts_2 Length_2 Counts_3 Length_3 Counts_4 Length_4 Counts_5 Length_5 ... Length_1795 Counts_1796 Length_1796 Counts_1797 Length_1797 Counts_1798 Length_1798 Counts_1799 Length_1799 Counts_1800 Length_1800
0 1 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 ... 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000
1 2 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 ... 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000
2 3 0.0 6.892889 0.0 2.503830 0.0 3.108580 0.0 5.188438 0.0 9.779242 ... 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000
3 4 1.0 12.787159 0.0 13.847412 7.0 44.928269 0.0 48.511435 2.0 33.264356 ... 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000
4 5 0.0 13.345436 2.0 27.415005 20.0 83.130315 19.0 85.475996 2.0 10.147958 ... 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000
5 6 2.0 13.141503 8.0 22.965002 5.0 48.737279 15.0 85.403915 1.0 17.414609 ... 0.000000 6.0 12.399834 0.0 0.710808 0.0 0.000000 0.0 1.661978 0.0 0.000000
6 7 1.0 7.852842 0.0 13.613426 14.0 46.148978 23.0 87.446535 0.0 13.759176 ... 2.231295 8.0 39.022340 1.0 7.304392 3.0 9.228959 0.0 6.885822 0.0 1.606200
7 8 0.0 0.884018 3.0 35.323813 8.0 32.846301 10.0 71.691744 0.0 4.310296 ... 2.753615 6.0 25.003670 6.0 22.113324 0.0 0.615790 0.0 11.812815 2.0 9.991712
8 9 4.0 24.700817 13.0 31.637755 3.0 30.312104 5.0 50.490115 0.0 3.830024 ... 5.977912 11.0 44.305738 1.0 13.523643 0.0 1.374856 1.0 9.066218 1.0 8.376995
9 10 0.0 17.651236 10.0 44.311858 29.0 55.415964 12.0 43.457016 1.0 41.503212 ... 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000
</code></pre>
| 66,389,019
| 2021-02-26T15:19:13.997000
| 1
| null | 0
| 28
|
python|pandas
|
<p>You could try to pivot your dataframe, after building a custom index per session:</p>
<pre><code>df2 = df.assign(index=df.groupby(['Session']).cumcount()).pivot(
'index', 'Session', ['Tube', 'Window', 'Counts', 'Length']).rename_axis(index=None)
</code></pre>
<p>With your sample data it would give:</p>
<pre><code> Tube Window Counts Length
Session 1 10 1 10 1 10 1 10
0 1.0 53.0 1.0 36.0 0.0 0.0 0.0 0.0
1 1.0 53.0 2.0 37.0 0.0 0.0 0.0 0.0
2 1.0 53.0 3.0 38.0 0.0 0.0 0.0 0.0
3 1.0 53.0 4.0 39.0 0.0 0.0 0.0 0.0
4 1.0 53.0 5.0 40.0 0.0 0.0 0.0 0.0
</code></pre>
<p>Not bad, but we now have a MultiIndex for the columns, and they are in the wrong order. Let us go further:</p>
<pre><code>df2.columns = df2.columns.to_flat_index()
df2 = df2.reindex(columns=sorted(df2.columns, key=lambda x: x[1]))
</code></pre>
<p>We now have:</p>
<pre><code> (Tube, 1) (Window, 1) ... (Counts, 10) (Length, 10)
0 1.0 1.0 ... 0.0 0.0
1 1.0 2.0 ... 0.0 0.0
2 1.0 3.0 ... 0.0 0.0
3 1.0 4.0 ... 0.0 0.0
4 1.0 5.0 ... 0.0 0.0
</code></pre>
<p>Last step:</p>
<pre><code>df2 = df2.rename(columns=lambda x: '_'.join(str(i) for i in x))
</code></pre>
<p>to finally get:</p>
<pre><code> Tube_1 Window_1 Counts_1 ... Window_10 Counts_10 Length_10
0 1.0 1.0 0.0 ... 36.0 0.0 0.0
1 1.0 2.0 0.0 ... 37.0 0.0 0.0
2 1.0 3.0 0.0 ... 38.0 0.0 0.0
3 1.0 4.0 0.0 ... 39.0 0.0 0.0
4 1.0 5.0 0.0 ... 40.0 0.0 0.0
</code></pre>
| 2021-02-26T16:00:10.023000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.wide_to_long.html
|
You could try to pivot your dataframe, after building a custom index per session:
df2 = df.assign(index=df.groupby(['Session']).cumcount()).pivot(
'index', 'Session', ['Tube', 'Window', 'Counts', 'Length']).rename_axis(index=None)
With your sample data it would give:
Tube Window Counts Length
Session 1 10 1 10 1 10 1 10
0 1.0 53.0 1.0 36.0 0.0 0.0 0.0 0.0
1 1.0 53.0 2.0 37.0 0.0 0.0 0.0 0.0
2 1.0 53.0 3.0 38.0 0.0 0.0 0.0 0.0
3 1.0 53.0 4.0 39.0 0.0 0.0 0.0 0.0
4 1.0 53.0 5.0 40.0 0.0 0.0 0.0 0.0
Not bad, but we now have a MultiIndex for the columns, and they are in the wrong order. Let us go further:
df2.columns = df2.columns.to_flat_index()
df2 = df2.reindex(columns=sorted(df2.columns, key=lambda x: x[1]))
We now have:
(Tube, 1) (Window, 1) ... (Counts, 10) (Length, 10)
0 1.0 1.0 ... 0.0 0.0
1 1.0 2.0 ... 0.0 0.0
2 1.0 3.0 ... 0.0 0.0
3 1.0 4.0 ... 0.0 0.0
4 1.0 5.0 ... 0.0 0.0
Last step:
df2 = df2.rename(columns=lambda x: '_'.join(str(i) for i in x))
to finally get:
Tube_1 Window_1 Counts_1 ... Window_10 Counts_10 Length_10
0 1.0 1.0 0.0 ... 36.0 0.0 0.0
1 1.0 2.0 0.0 ... 37.0 0.0 0.0
2 1.0 3.0 0.0 ... 38.0 0.0 0.0
3 1.0 4.0 0.0 ... 39.0 0.0 0.0
4 1.0 5.0 0.0 ... 40.0 0.0 0.0
| 0
| 1,737
|
Pandas long reshape for several variables
I want to reshape my long dataframe to wide by sorting it by Session. In this example Session is from 1-10.
Session Tube Window Counts Length
0 1 1 1 0.0 0.0
1 1 1 2 0.0 0.0
2 1 1 3 0.0 0.0
3 1 1 4 0.0 0.0
4 1 1 5 0.0 0.0
... ... ... ... ... ...
17995 10 53 36 0.0 0.0
17996 10 53 37 0.0 0.0
17997 10 53 38 0.0 0.0
17998 10 53 39 0.0 0.0
17999 10 53 40 0.0 0.0
What I am expecting is something like:
Session Tube Window Counts_1 Length_1 Session Counts_2 Length_2
0 1 1 1 0.0 0.0 0 2 0.0 0.0
1 1 1 2 0.0 0.0 1 2 0.0 0.0
2 1 1 3 0.0 0.0 2 2 0.0 0.0
3 1 1 4 0.0 0.0 3 2 0.0 0.0
4 1 1 5 0.0 0.0 4 2 0.0 0.0
... ... ... ... ... ... ... ... ... ... ... ...
17995 10 53 36 0.0 0.0
I could not find the solution. What I tried leads to a complete wide dataset.
df['idx'] = df.groupby('Session').cumcount()+1
df = df.pivot_table(index=['Session'], columns='idx',
values=['Counts', 'Length'], aggfunc='first')
df = df.sort_index(axis=1, level=1)
df.columns = [f'{x}_{y}' for x,y in df.columns]
df = df.reset_index()
Session Counts_1 Length_1 Counts_2 Length_2 Counts_3 Length_3 Counts_4 Length_4 Counts_5 Length_5 ... Length_1795 Counts_1796 Length_1796 Counts_1797 Length_1797 Counts_1798 Length_1798 Counts_1799 Length_1799 Counts_1800 Length_1800
0 1 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 ... 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000
1 2 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 ... 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000
2 3 0.0 6.892889 0.0 2.503830 0.0 3.108580 0.0 5.188438 0.0 9.779242 ... 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000
3 4 1.0 12.787159 0.0 13.847412 7.0 44.928269 0.0 48.511435 2.0 33.264356 ... 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000
4 5 0.0 13.345436 2.0 27.415005 20.0 83.130315 19.0 85.475996 2.0 10.147958 ... 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000
5 6 2.0 13.141503 8.0 22.965002 5.0 48.737279 15.0 85.403915 1.0 17.414609 ... 0.000000 6.0 12.399834 0.0 0.710808 0.0 0.000000 0.0 1.661978 0.0 0.000000
6 7 1.0 7.852842 0.0 13.613426 14.0 46.148978 23.0 87.446535 0.0 13.759176 ... 2.231295 8.0 39.022340 1.0 7.304392 3.0 9.228959 0.0 6.885822 0.0 1.606200
7 8 0.0 0.884018 3.0 35.323813 8.0 32.846301 10.0 71.691744 0.0 4.310296 ... 2.753615 6.0 25.003670 6.0 22.113324 0.0 0.615790 0.0 11.812815 2.0 9.991712
8 9 4.0 24.700817 13.0 31.637755 3.0 30.312104 5.0 50.490115 0.0 3.830024 ... 5.977912 11.0 44.305738 1.0 13.523643 0.0 1.374856 1.0 9.066218 1.0 8.376995
9 10 0.0 17.651236 10.0 44.311858 29.0 55.415964 12.0 43.457016 1.0 41.503212 ... 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000 0.0 0.000000
|
68,495,547
|
Accomplishing `A.merge(B).merge(C).merge(D) ....` using `pandas.concat()`
|
<p>I have several dozen data frames like the following:</p>
<pre><code>import pandas as pd
import numpy as np
A = pd.DataFrame({'col1': np.random.rand(5) ,'col2': np.random.rand(5)})
A.index = [11111, 22222, 33333, 44444, 55555]
B = pd.DataFrame({'col3': np.random.rand(5) ,'col4': np.random.rand(5)})
B.index = [77777, 22222, 33333, 55555, 88888]
</code></pre>
<p>I would like to do an outer join on the indices. I can obtain the desired result using <code>A.merge(B)</code> with the following:</p>
<pre><code>A.merge(B, how='outer', left_index=True, right_index=True)
</code></pre>
<p>yielding</p>
<pre><code> col1 col2 col3 col4
11111 0.195266 0.765243 NaN NaN
22222 0.524872 0.978260 0.769246 0.318719
33333 0.581588 0.391997 0.962788 0.864938
44444 0.490709 0.082014 NaN NaN
55555 0.339119 0.807546 0.545300 0.378834
77777 NaN NaN 0.345498 0.634918
88888 NaN NaN 0.976489 0.871800
</code></pre>
<p>This is what I want. Unfortunately, <code>.merge()</code> is very slow for large dataframes, and elsewhere on this site, I have read that I should use <code>pd.concat()</code> instead. But in this case, <code>pd.concat([A, B])</code>
does not work, because it does not accept the <code>left_index</code> and <code>right_index</code> keywords. Instead it just stacks the two on top of one another:</p>
<pre><code> col1 col2 col3 col4
11111 0.195266 0.765243 NaN NaN
22222 0.524872 0.978260 NaN NaN
33333 0.581588 0.391997 NaN NaN
44444 0.490709 0.082014 NaN NaN
55555 0.339119 0.807546 NaN NaN
77777 NaN NaN 0.345498 0.634918
22222 NaN NaN 0.769246 0.318719
33333 NaN NaN 0.962788 0.864938
55555 NaN NaN 0.545300 0.378834
88888 NaN NaN 0.976489 0.871800
</code></pre>
<p>Is there a way to accomplish this join using <code>pd.concat()</code>? Or am I stuck with <code>merge</code>?</p>
| 68,495,763
| 2021-07-23T07:31:23.280000
| 1
| null | -1
| 28
|
python|pandas
|
<p>Just use axis=1 to change the axis to concatenate along, which is default 0:</p>
<pre><code>C = pd.concat([A, B], axis=1)
print(C)
</code></pre>
<p>The output will look like this:</p>
<pre><code> col1 col2 col3 col4
11111 0.707499 0.644641 NaN NaN
22222 0.971488 0.320773 0.528505 0.257957
33333 0.173358 0.244919 0.899253 0.305035
44444 0.544763 0.101368 NaN NaN
55555 0.160257 0.456790 0.834480 0.889750
77777 NaN NaN 0.339059 0.968170
88888 NaN NaN 0.315871 0.984425
</code></pre>
<p>For more detail about how to merge, you can see the official documentation:</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html</a></p>
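<p>Note that <code>pd.concat</code> aligns on the index and performs an outer join by default when <code>axis=1</code>, which is why this reproduces the outer merge. As a minimal sketch (using the same <code>A</code> and <code>B</code>), keeping only the shared index labels instead would be:</p>
<pre><code>C_inner = pd.concat([A, B], axis=1, join="inner")
</code></pre>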
| 2021-07-23T07:46:27.717000
| 1
|
https://pandas.pydata.org/docs/dev/whatsnew/v0.20.0.html?highlight=namedtuple
|
Version 0.20.1 (May 5, 2017)#
This is a major release from 0.19.2 and includes a number of API changes, deprecations, new features,
enhancements, and performance improvements along with a large number of bug fixes. We recommend that all
users upgrade to this version.
Highlights include:
New .agg() API for Series/DataFrame similar to the groupby-rolling-resample API’s, see here
Integration with the feather-format, including a new top-level pd.read_feather() and DataFrame.to_feather() method, see here.
Just use axis=1 to change the axis to concatenate along, which is default 0:
C = pd.concat([A, B], axis=1)
print(C)
The output will look like this:
col1 col2 col3 col4
11111 0.707499 0.644641 NaN NaN
22222 0.971488 0.320773 0.528505 0.257957
33333 0.173358 0.244919 0.899253 0.305035
44444 0.544763 0.101368 NaN NaN
55555 0.160257 0.456790 0.834480 0.889750
77777 NaN NaN 0.339059 0.968170
88888 NaN NaN 0.315871 0.984425
For more detail about how to merge, you can see the official documentation:
https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html
The .ix indexer has been deprecated, see here
Panel has been deprecated, see here
Addition of an IntervalIndex and Interval scalar type, see here
Improved user API when grouping by index levels in .groupby(), see here
Improved support for UInt64 dtypes, see here
A new orient for JSON serialization, orient='table', that uses the Table Schema spec and that gives the possibility for a more interactive repr in the Jupyter Notebook, see here
Experimental support for exporting styled DataFrames (DataFrame.style) to Excel, see here
Window binary corr/cov operations now return a MultiIndexed DataFrame rather than a Panel, as Panel is now deprecated, see here
Support for S3 handling now uses s3fs, see here
Google BigQuery support now uses the pandas-gbq library, see here
Warning
pandas has changed the internal structure and layout of the code base.
This can affect imports that are not from the top-level pandas.* namespace, please see the changes here.
Check the API Changes and deprecations before updating.
Note
This is a combined release for 0.20.0 and 0.20.1.
Version 0.20.1 contains one additional change for backwards-compatibility with downstream projects using pandas’ utils routines. (GH16250)
What’s new in v0.20.0
New features
Method agg API for DataFrame/Series
Keyword argument dtype for data IO
Method .to_datetime() has gained an origin parameter
GroupBy enhancements
Better support for compressed URLs in read_csv
Pickle file IO now supports compression
UInt64 support improved
GroupBy on categoricals
Table schema output
SciPy sparse matrix from/to SparseDataFrame
Excel output for styled DataFrames
IntervalIndex
Other enhancements
Backwards incompatible API changes
Possible incompatibility for HDF5 formats created with pandas < 0.13.0
Map on Index types now return other Index types
Accessing datetime fields of Index now return Index
pd.unique will now be consistent with extension types
S3 file handling
Partial string indexing changes
Concat of different float dtypes will not automatically upcast
pandas Google BigQuery support has moved
Memory usage for Index is more accurate
DataFrame.sort_index changes
GroupBy describe formatting
Window binary corr/cov operations return a MultiIndex DataFrame
HDFStore where string comparison
Index.intersection and inner join now preserve the order of the left Index
Pivot table always returns a DataFrame
Other API changes
Reorganization of the library: privacy changes
Modules privacy has changed
pandas.errors
pandas.testing
pandas.plotting
Other development changes
Deprecations
Deprecate .ix
Deprecate Panel
Deprecate groupby.agg() with a dictionary when renaming
Deprecate .plotting
Other deprecations
Removal of prior version deprecations/changes
Performance improvements
Bug fixes
Conversion
Indexing
IO
Plotting
GroupBy/resample/rolling
Sparse
Reshaping
Numeric
Other
Contributors
New features#
Method agg API for DataFrame/Series#
Series & DataFrame have been enhanced to support the aggregation API. This is a familiar API
from groupby, window operations, and resampling. This allows aggregation operations in a concise way
by using agg() and transform(). The full documentation
is here (GH1623).
Here is a sample
In [1]: df = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'],
...: index=pd.date_range('1/1/2000', periods=10))
...:
In [2]: df.iloc[3:7] = np.nan
In [3]: df
Out[3]:
A B C
2000-01-01 0.469112 -0.282863 -1.509059
2000-01-02 -1.135632 1.212112 -0.173215
2000-01-03 0.119209 -1.044236 -0.861849
2000-01-04 NaN NaN NaN
2000-01-05 NaN NaN NaN
2000-01-06 NaN NaN NaN
2000-01-07 NaN NaN NaN
2000-01-08 0.113648 -1.478427 0.524988
2000-01-09 0.404705 0.577046 -1.715002
2000-01-10 -1.039268 -0.370647 -1.157892
[10 rows x 3 columns]
One can operate using string function names, callables, lists, or dictionaries of these.
Using a single function is equivalent to .apply.
In [4]: df.agg('sum')
Out[4]:
A -1.068226
B -1.387015
C -4.892029
Length: 3, dtype: float64
Multiple aggregations with a list of functions.
In [5]: df.agg(['sum', 'min'])
Out[5]:
A B C
sum -1.068226 -1.387015 -4.892029
min -1.135632 -1.478427 -1.715002
[2 rows x 3 columns]
Using a dict provides the ability to apply specific aggregations per column.
You will get a matrix-like output of all of the aggregators. The output has one column
per unique function. Those functions applied to a particular column will be NaN:
In [6]: df.agg({'A': ['sum', 'min'], 'B': ['min', 'max']})
Out[6]:
A B
sum -1.068226 NaN
min -1.135632 -1.478427
max NaN 1.212112
[3 rows x 2 columns]
The API also supports a .transform() function for broadcasting results.
In [7]: df.transform(['abs', lambda x: x - x.min()])
Out[7]:
A B C
abs <lambda> abs <lambda> abs <lambda>
2000-01-01 0.469112 1.604745 0.282863 1.195563 1.509059 0.205944
2000-01-02 1.135632 0.000000 1.212112 2.690539 0.173215 1.541787
2000-01-03 0.119209 1.254841 1.044236 0.434191 0.861849 0.853153
2000-01-04 NaN NaN NaN NaN NaN NaN
2000-01-05 NaN NaN NaN NaN NaN NaN
2000-01-06 NaN NaN NaN NaN NaN NaN
2000-01-07 NaN NaN NaN NaN NaN NaN
2000-01-08 0.113648 1.249281 1.478427 0.000000 0.524988 2.239990
2000-01-09 0.404705 1.540338 0.577046 2.055473 1.715002 0.000000
2000-01-10 1.039268 0.096364 0.370647 1.107780 1.157892 0.557110
[10 rows x 6 columns]
When presented with mixed dtypes that cannot be aggregated, .agg() will only take the valid
aggregations. This is similar to how groupby .agg() works. (GH15015)
In [8]: df = pd.DataFrame({'A': [1, 2, 3],
...: 'B': [1., 2., 3.],
...: 'C': ['foo', 'bar', 'baz'],
...: 'D': pd.date_range('20130101', periods=3)})
...:
In [9]: df.dtypes
Out[9]:
A int64
B float64
C object
D datetime64[ns]
Length: 4, dtype: object
In [10]: df.agg(['min', 'sum'])
Out[10]:
A B C D
min 1 1.0 bar 2013-01-01
sum 6 6.0 foobarbaz NaT
Keyword argument dtype for data IO#
The 'python' engine for read_csv(), as well as the read_fwf() function for parsing
fixed-width text files and read_excel() for parsing Excel files, now accept the dtype keyword argument for specifying the types of specific columns (GH14295). See the io docs for more information.
In [10]: data = "a b\n1 2\n3 4"
In [11]: pd.read_fwf(StringIO(data)).dtypes
Out[11]:
a int64
b int64
Length: 2, dtype: object
In [12]: pd.read_fwf(StringIO(data), dtype={'a': 'float64', 'b': 'object'}).dtypes
Out[12]:
a float64
b object
Length: 2, dtype: object
Method .to_datetime() has gained an origin parameter#
to_datetime() has gained a new parameter, origin, to define a reference date
from where to compute the resulting timestamps when parsing numerical values with a specific unit specified. (GH11276, GH11745)
For example, with 1960-01-01 as the starting date:
In [13]: pd.to_datetime([1, 2, 3], unit='D', origin=pd.Timestamp('1960-01-01'))
Out[13]: DatetimeIndex(['1960-01-02', '1960-01-03', '1960-01-04'], dtype='datetime64[ns]', freq=None)
The default is set at origin='unix', which defaults to 1970-01-01 00:00:00, which is
commonly called ‘unix epoch’ or POSIX time. This was the previous default, so this is a backward compatible change.
In [14]: pd.to_datetime([1, 2, 3], unit='D')
Out[14]: DatetimeIndex(['1970-01-02', '1970-01-03', '1970-01-04'], dtype='datetime64[ns]', freq=None)
GroupBy enhancements#
Strings passed to DataFrame.groupby() as the by parameter may now reference either column names or index level names. Previously, only column names could be referenced. This allows to easily group by a column and index level at the same time. (GH5677)
In [15]: arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
....: ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
....:
In [16]: index = pd.MultiIndex.from_arrays(arrays, names=['first', 'second'])
In [17]: df = pd.DataFrame({'A': [1, 1, 1, 1, 2, 2, 3, 3],
....: 'B': np.arange(8)},
....: index=index)
....:
In [18]: df
Out[18]:
A B
first second
bar one 1 0
two 1 1
baz one 1 2
two 1 3
foo one 2 4
two 2 5
qux one 3 6
two 3 7
[8 rows x 2 columns]
In [19]: df.groupby(['second', 'A']).sum()
Out[19]:
B
second A
one 1 2
2 4
3 6
two 1 4
2 5
3 7
[6 rows x 1 columns]
Better support for compressed URLs in read_csv#
The compression code was refactored (GH12688). As a result, reading
dataframes from URLs in read_csv() or read_table() now supports
additional compression methods: xz, bz2, and zip (GH14570).
Previously, only gzip compression was supported. By default, compression of
URLs and paths is now inferred using their file extensions. Additionally,
support for bz2 compression in the python 2 C-engine improved (GH14874).
In [20]: url = ('https://github.com/{repo}/raw/{branch}/{path}'
....: .format(repo='pandas-dev/pandas',
....: branch='main',
....: path='pandas/tests/io/parser/data/salaries.csv.bz2'))
....:
# default, infer compression
In [21]: df = pd.read_csv(url, sep='\t', compression='infer')
# explicitly specify compression
In [22]: df = pd.read_csv(url, sep='\t', compression='bz2')
In [23]: df.head(2)
Out[23]:
S X E M
0 13876 1 1 1
1 11608 1 3 0
[2 rows x 4 columns]
Pickle file IO now supports compression#
read_pickle(), DataFrame.to_pickle() and Series.to_pickle()
can now read from and write to compressed pickle files. Compression methods
can be an explicit parameter or be inferred from the file extension.
See the docs here.
In [24]: df = pd.DataFrame({'A': np.random.randn(1000),
....: 'B': 'foo',
....: 'C': pd.date_range('20130101', periods=1000, freq='s')})
....:
Using an explicit compression type
In [25]: df.to_pickle("data.pkl.compress", compression="gzip")
In [26]: rt = pd.read_pickle("data.pkl.compress", compression="gzip")
In [27]: rt.head()
Out[27]:
A B C
0 -1.344312 foo 2013-01-01 00:00:00
1 0.844885 foo 2013-01-01 00:00:01
2 1.075770 foo 2013-01-01 00:00:02
3 -0.109050 foo 2013-01-01 00:00:03
4 1.643563 foo 2013-01-01 00:00:04
[5 rows x 3 columns]
The default is to infer the compression type from the extension (compression='infer'):
In [28]: df.to_pickle("data.pkl.gz")
In [29]: rt = pd.read_pickle("data.pkl.gz")
In [30]: rt.head()
Out[30]:
A B C
0 -1.344312 foo 2013-01-01 00:00:00
1 0.844885 foo 2013-01-01 00:00:01
2 1.075770 foo 2013-01-01 00:00:02
3 -0.109050 foo 2013-01-01 00:00:03
4 1.643563 foo 2013-01-01 00:00:04
[5 rows x 3 columns]
In [31]: df["A"].to_pickle("s1.pkl.bz2")
In [32]: rt = pd.read_pickle("s1.pkl.bz2")
In [33]: rt.head()
Out[33]:
0 -1.344312
1 0.844885
2 1.075770
3 -0.109050
4 1.643563
Name: A, Length: 5, dtype: float64
UInt64 support improved#
pandas has significantly improved support for operations involving unsigned,
or purely non-negative, integers. Previously, handling these integers would
result in improper rounding or data-type casting, leading to incorrect results.
Notably, a new numerical index, UInt64Index, has been created (GH14937)
In [1]: idx = pd.UInt64Index([1, 2, 3])
In [2]: df = pd.DataFrame({'A': ['a', 'b', 'c']}, index=idx)
In [3]: df.index
Out[3]: UInt64Index([1, 2, 3], dtype='uint64')
Bug in converting object elements of array-like objects to unsigned 64-bit integers (GH4471, GH14982)
Bug in Series.unique() in which unsigned 64-bit integers were causing overflow (GH14721)
Bug in DataFrame construction in which unsigned 64-bit integer elements were being converted to objects (GH14881)
Bug in pd.read_csv() in which unsigned 64-bit integer elements were being improperly converted to the wrong data types (GH14983)
Bug in pd.unique() in which unsigned 64-bit integers were causing overflow (GH14915)
Bug in pd.value_counts() in which unsigned 64-bit integers were being erroneously truncated in the output (GH14934)
GroupBy on categoricals#
In previous versions, .groupby(..., sort=False) would fail with a ValueError when grouping on a categorical series with some categories not appearing in the data. (GH13179)
In [34]: chromosomes = np.r_[np.arange(1, 23).astype(str), ['X', 'Y']]
In [35]: df = pd.DataFrame({
....: 'A': np.random.randint(100),
....: 'B': np.random.randint(100),
....: 'C': np.random.randint(100),
....: 'chromosomes': pd.Categorical(np.random.choice(chromosomes, 100),
....: categories=chromosomes,
....: ordered=True)})
....:
In [36]: df
Out[36]:
A B C chromosomes
0 87 22 81 4
1 87 22 81 13
2 87 22 81 22
3 87 22 81 2
4 87 22 81 6
.. .. .. .. ...
95 87 22 81 8
96 87 22 81 11
97 87 22 81 X
98 87 22 81 1
99 87 22 81 19
[100 rows x 4 columns]
Previous behavior:
In [3]: df[df.chromosomes != '1'].groupby('chromosomes', sort=False).sum()
---------------------------------------------------------------------------
ValueError: items in new_categories are not the same as in old categories
New behavior:
In [37]: df[df.chromosomes != '1'].groupby('chromosomes', sort=False).sum()
Out[37]:
A B C
chromosomes
4 348 88 324
13 261 66 243
22 348 88 324
2 348 88 324
6 174 44 162
... ... .. ...
3 348 88 324
11 348 88 324
19 174 44 162
1 0 0 0
21 0 0 0
[24 rows x 3 columns]
Table schema output#
The new orient 'table' for DataFrame.to_json()
will generate a Table Schema compatible string representation of
the data.
In [38]: df = pd.DataFrame(
....: {'A': [1, 2, 3],
....: 'B': ['a', 'b', 'c'],
....: 'C': pd.date_range('2016-01-01', freq='d', periods=3)},
....: index=pd.Index(range(3), name='idx'))
....:
In [39]: df
Out[39]:
A B C
idx
0 1 a 2016-01-01
1 2 b 2016-01-02
2 3 c 2016-01-03
[3 rows x 3 columns]
In [40]: df.to_json(orient='table')
Out[40]: '{"schema":{"fields":[{"name":"idx","type":"integer"},{"name":"A","type":"integer"},{"name":"B","type":"string"},{"name":"C","type":"datetime"}],"primaryKey":["idx"],"pandas_version":"1.4.0"},"data":[{"idx":0,"A":1,"B":"a","C":"2016-01-01T00:00:00.000"},{"idx":1,"A":2,"B":"b","C":"2016-01-02T00:00:00.000"},{"idx":2,"A":3,"B":"c","C":"2016-01-03T00:00:00.000"}]}'
See IO: Table Schema for more information.
Additionally, the repr for DataFrame and Series can now publish
this JSON Table schema representation of the Series or DataFrame if you are
using IPython (or another frontend like nteract using the Jupyter messaging
protocol).
This gives frontends like the Jupyter notebook and nteract
more flexibility in how they display pandas objects, since they have
more information about the data.
You must enable this by setting the display.html.table_schema option to True.
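A minimal sketch of turning it on:

import pandas as pd

# publish the Table Schema repr alongside the usual HTML repr
pd.set_option("display.html.table_schema", True)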
SciPy sparse matrix from/to SparseDataFrame#
pandas now supports creating sparse dataframes directly from scipy.sparse.spmatrix instances.
See the documentation for more information. (GH4343)
All sparse formats are supported, but matrices that are not in COOrdinate format will be converted, copying data as needed.
from scipy.sparse import csr_matrix
arr = np.random.random(size=(1000, 5))
arr[arr < .9] = 0
sp_arr = csr_matrix(arr)
sp_arr
sdf = pd.SparseDataFrame(sp_arr)
sdf
To convert a SparseDataFrame back to sparse SciPy matrix in COO format, you can use:
sdf.to_coo()
Excel output for styled DataFrames#
Experimental support has been added to export DataFrame.style formats to Excel using the openpyxl engine. (GH15530)
For example, after running the following, styled.xlsx renders as below:
In [41]: np.random.seed(24)
In [42]: df = pd.DataFrame({'A': np.linspace(1, 10, 10)})
In [43]: df = pd.concat([df, pd.DataFrame(np.random.RandomState(24).randn(10, 4),
....: columns=list('BCDE'))],
....: axis=1)
....:
In [44]: df.iloc[0, 2] = np.nan
In [45]: df
Out[45]:
A B C D E
0 1.0 1.329212 NaN -0.316280 -0.990810
1 2.0 -1.070816 -1.438713 0.564417 0.295722
2 3.0 -1.626404 0.219565 0.678805 1.889273
3 4.0 0.961538 0.104011 -0.481165 0.850229
4 5.0 1.453425 1.057737 0.165562 0.515018
5 6.0 -1.336936 0.562861 1.392855 -0.063328
6 7.0 0.121668 1.207603 -0.002040 1.627796
7 8.0 0.354493 1.037528 -0.385684 0.519818
8 9.0 1.686583 -1.325963 1.428984 -2.089354
9 10.0 -0.129820 0.631523 -0.586538 0.290720
[10 rows x 5 columns]
In [46]: styled = (df.style
....: .applymap(lambda val: 'color:red;' if val < 0 else 'color:black;')
....: .highlight_max())
....:
In [47]: styled.to_excel('styled.xlsx', engine='openpyxl')
See the Style documentation for more detail.
IntervalIndex#
pandas has gained an IntervalIndex with its own dtype, interval as well as the Interval scalar type. These allow first-class support for interval
notation, specifically as a return type for the categories in cut() and qcut(). The IntervalIndex allows some unique indexing, see the
docs. (GH7640, GH8625)
Warning
These indexing behaviors of the IntervalIndex are provisional and may change in a future version of pandas. Feedback on usage is welcome.
Previous behavior:
The returned categories were strings, representing Intervals
In [1]: c = pd.cut(range(4), bins=2)
In [2]: c
Out[2]:
[(-0.003, 1.5], (-0.003, 1.5], (1.5, 3], (1.5, 3]]
Categories (2, object): [(-0.003, 1.5] < (1.5, 3]]
In [3]: c.categories
Out[3]: Index(['(-0.003, 1.5]', '(1.5, 3]'], dtype='object')
New behavior:
In [48]: c = pd.cut(range(4), bins=2)
In [49]: c
Out[49]:
[(-0.003, 1.5], (-0.003, 1.5], (1.5, 3.0], (1.5, 3.0]]
Categories (2, interval[float64, right]): [(-0.003, 1.5] < (1.5, 3.0]]
In [50]: c.categories
Out[50]: IntervalIndex([(-0.003, 1.5], (1.5, 3.0]], dtype='interval[float64, right]')
Furthermore, this allows one to bin other data with these same bins, with NaN representing a missing
value similar to other dtypes.
In [51]: pd.cut([0, 3, 5, 1], bins=c.categories)
Out[51]:
[(-0.003, 1.5], (1.5, 3.0], NaN, (-0.003, 1.5]]
Categories (2, interval[float64, right]): [(-0.003, 1.5] < (1.5, 3.0]]
An IntervalIndex can also be used in Series and DataFrame as the index.
In [52]: df = pd.DataFrame({'A': range(4),
....: 'B': pd.cut([0, 3, 1, 1], bins=c.categories)
....: }).set_index('B')
....:
In [53]: df
Out[53]:
A
B
(-0.003, 1.5] 0
(1.5, 3.0] 1
(-0.003, 1.5] 2
(-0.003, 1.5] 3
[4 rows x 1 columns]
Selecting via a specific interval:
In [54]: df.loc[pd.Interval(1.5, 3.0)]
Out[54]:
A 1
Name: (1.5, 3.0], Length: 1, dtype: int64
Selecting via a scalar value that is contained in the intervals.
In [55]: df.loc[0]
Out[55]:
A
B
(-0.003, 1.5] 0
(-0.003, 1.5] 2
(-0.003, 1.5] 3
[3 rows x 1 columns]
Other enhancements#
DataFrame.rolling() now accepts the parameter closed='right'|'left'|'both'|'neither' to choose the rolling window-endpoint closedness. See the documentation (GH13965)
Integration with the feather-format, including a new top-level pd.read_feather() and DataFrame.to_feather() method, see here.
Series.str.replace() now accepts a callable as replacement, which is passed to re.sub (GH15055); see the sketch after this list
Series.str.replace() now accepts a compiled regular expression as a pattern (GH15446)
Series.sort_index accepts parameters kind and na_position (GH13589, GH14444)
DataFrame and DataFrame.groupby() have gained a nunique() method to count the distinct values over an axis (GH14336, GH15197).
DataFrame has gained a melt() method, equivalent to pd.melt(), for unpivoting from a wide to long format (GH12640).
pd.read_excel() now preserves sheet order when using sheetname=None (GH9930)
Multiple offset aliases with decimal points are now supported (e.g. 0.5min is parsed as 30s) (GH8419)
.isnull() and .notnull() have been added to Index object to make them more consistent with the Series API (GH15300)
New UnsortedIndexError (subclass of KeyError) raised when indexing/slicing into an
unsorted MultiIndex (GH11897). This allows differentiation between errors due to lack
of sorting or an incorrect key. See here
MultiIndex has gained a .to_frame() method to convert to a DataFrame (GH12397)
pd.cut and pd.qcut now support datetime64 and timedelta64 dtypes (GH14714, GH14798)
pd.qcut has gained the duplicates='raise'|'drop' option to control whether to raise on duplicated edges (GH7751)
Series provides a to_excel method to output Excel files (GH8825)
The usecols argument in pd.read_csv() now accepts a callable function as a value (GH14154)
The skiprows argument in pd.read_csv() now accepts a callable function as a value (GH10882)
The nrows and chunksize arguments in pd.read_csv() are supported if both are passed (GH6774, GH15755)
DataFrame.plot now prints a title above each subplot if subplots=True and title is a list of strings (GH14753)
DataFrame.plot can pass the matplotlib 2.0 default color cycle as a single string as color parameter, see here. (GH15516)
Series.interpolate() now supports timedelta as an index type with method='time' (GH6424)
Addition of a level keyword to DataFrame/Series.rename to rename
labels in the specified level of a MultiIndex (GH4160).
DataFrame.reset_index() will now interpret a tuple index.name as a key spanning across levels of columns, if this is a MultiIndex (GH16164)
Timedelta.isoformat method added for formatting Timedeltas as an ISO 8601 duration. See the Timedelta docs (GH15136)
.select_dtypes() now allows the string datetimetz to generically select datetimes with tz (GH14910)
The .to_latex() method will now accept multicolumn and multirow arguments to use the accompanying LaTeX enhancements
pd.merge_asof() gained the option direction='backward'|'forward'|'nearest' (GH14887)
Series/DataFrame.asfreq() have gained a fill_value parameter, to fill missing values (GH3715).
Series/DataFrame.resample.asfreq have gained a fill_value parameter, to fill missing values during resampling (GH3715).
pandas.util.hash_pandas_object() has gained the ability to hash a MultiIndex (GH15224)
Series/DataFrame.squeeze() have gained the axis parameter. (GH15339)
DataFrame.to_excel() has a new freeze_panes parameter to turn on Freeze Panes when exporting to Excel (GH15160)
pd.read_html() will parse multiple header rows, creating a MultiIndex header. (GH13434).
HTML table output skips colspan or rowspan attribute if equal to 1. (GH15403)
pandas.io.formats.style.Styler template now has blocks for easier extension, see the example notebook (GH15649)
Styler.render() now accepts **kwargs to allow user-defined variables in the template (GH15649)
Compatibility with Jupyter notebook 5.0; MultiIndex column labels are left-aligned and MultiIndex row-labels are top-aligned (GH15379)
TimedeltaIndex now has a custom date-tick formatter specifically designed for nanosecond level precision (GH8711)
pd.api.types.union_categoricals gained the ignore_ordered argument to allow ignoring the ordered attribute of unioned categoricals (GH13410). See the categorical union docs for more information.
DataFrame.to_latex() and DataFrame.to_string() now allow optional header aliases. (GH15536)
Re-enable the parse_dates keyword of pd.read_excel() to parse string columns as dates (GH14326)
Added .empty property to subclasses of Index. (GH15270)
Enabled floor division for Timedelta and TimedeltaIndex (GH15828)
pandas.io.json.json_normalize() gained the option errors='ignore'|'raise'; the default is errors='raise' which is backward compatible. (GH14583)
pandas.io.json.json_normalize() with an empty list will return an empty DataFrame (GH15534)
pandas.io.json.json_normalize() has gained a sep option that accepts str to separate joined fields; the default is “.”, which is backward compatible. (GH14883)
MultiIndex.remove_unused_levels() has been added to facilitate removing unused levels. (GH15694)
pd.read_csv() will now raise a ParserError error whenever any parsing error occurs (GH15913, GH15925)
pd.read_csv() now supports the error_bad_lines and warn_bad_lines arguments for the Python parser (GH15925)
The display.show_dimensions option can now also be used to specify
whether the length of a Series should be shown in its repr (GH7117).
parallel_coordinates() has gained a sort_labels keyword argument that sorts class labels and the colors assigned to them (GH15908)
Options added to allow one to turn on/off using bottleneck and numexpr, see here (GH16157)
DataFrame.style.bar() now accepts two more options to further customize the bar chart. Bar alignment is set with align='left'|'mid'|'zero'; the default is “left”, which is backward compatible. You can now pass a list of color=[color_negative, color_positive]. (GH14757)
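As a hedged sketch of the callable replacement for Series.str.replace() mentioned in the list above (the pattern and lambda are illustrative; the explicit regex=True keyword is needed on recent pandas versions):

import pandas as pd

s = pd.Series(["foo 123", "bar 45"])

# the callable receives each regex match object and returns the replacement text
s.str.replace(r"\d+", lambda m: "<" + m.group() + ">", regex=True)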
Backwards incompatible API changes#
Possible incompatibility for HDF5 formats created with pandas < 0.13.0#
pd.TimeSeries was officially deprecated in 0.17.0, though it had already been an alias since 0.13.0. It has
been dropped in favor of pd.Series (GH15098).
This may cause HDF5 files that were created in prior versions to become unreadable if pd.TimeSeries
was used. This is most likely to affect files written with pandas < 0.13.0. If you find yourself in this situation,
you can use a recent prior version of pandas to read in your HDF5 files,
then write them out again after applying the procedure below.
In [2]: s = pd.TimeSeries([1, 2, 3], index=pd.date_range('20130101', periods=3))
In [3]: s
Out[3]:
2013-01-01 1
2013-01-02 2
2013-01-03 3
Freq: D, dtype: int64
In [4]: type(s)
Out[4]: pandas.core.series.TimeSeries
In [5]: s = pd.Series(s)
In [6]: s
Out[6]:
2013-01-01 1
2013-01-02 2
2013-01-03 3
Freq: D, dtype: int64
In [7]: type(s)
Out[7]: pandas.core.series.Series
Map on Index types now return other Index types#
map on an Index now returns an Index, not a numpy array (GH12766)
In [56]: idx = pd.Index([1, 2])
In [57]: idx
Out[57]: Index([1, 2], dtype='int64')
In [58]: mi = pd.MultiIndex.from_tuples([(1, 2), (2, 4)])
In [59]: mi
Out[59]:
MultiIndex([(1, 2),
(2, 4)],
)
Previous behavior:
In [5]: idx.map(lambda x: x * 2)
Out[5]: array([2, 4])
In [6]: idx.map(lambda x: (x, x * 2))
Out[6]: array([(1, 2), (2, 4)], dtype=object)
In [7]: mi.map(lambda x: x)
Out[7]: array([(1, 2), (2, 4)], dtype=object)
In [8]: mi.map(lambda x: x[0])
Out[8]: array([1, 2])
New behavior:
In [60]: idx.map(lambda x: x * 2)
Out[60]: Index([2, 4], dtype='int64')
In [61]: idx.map(lambda x: (x, x * 2))
Out[61]:
MultiIndex([(1, 2),
(2, 4)],
)
In [62]: mi.map(lambda x: x)
Out[62]:
MultiIndex([(1, 2),
(2, 4)],
)
In [63]: mi.map(lambda x: x[0])
Out[63]: Index([1, 2], dtype='int64')
map on a Series with datetime64 values may return int64 dtypes rather than int32
In [64]: s = pd.Series(pd.date_range('2011-01-02T00:00', '2011-01-02T02:00', freq='H')
....: .tz_localize('Asia/Tokyo'))
....:
In [65]: s
Out[65]:
0 2011-01-02 00:00:00+09:00
1 2011-01-02 01:00:00+09:00
2 2011-01-02 02:00:00+09:00
Length: 3, dtype: datetime64[ns, Asia/Tokyo]
Previous behavior:
In [9]: s.map(lambda x: x.hour)
Out[9]:
0 0
1 1
2 2
dtype: int32
New behavior:
In [66]: s.map(lambda x: x.hour)
Out[66]:
0 0
1 1
2 2
Length: 3, dtype: int64
Accessing datetime fields of Index now return Index#
The datetime-related attributes (see here
for an overview) of DatetimeIndex, PeriodIndex and TimedeltaIndex previously
returned numpy arrays. They will now return a new Index object, except
in the case of a boolean field, where the result will still be a boolean ndarray. (GH15022)
Previous behavior:
In [1]: idx = pd.date_range("2015-01-01", periods=5, freq='10H')
In [2]: idx.hour
Out[2]: array([ 0, 10, 20, 6, 16], dtype=int32)
New behavior:
In [67]: idx = pd.date_range("2015-01-01", periods=5, freq='10H')
In [68]: idx.hour
Out[68]: Index([0, 10, 20, 6, 16], dtype='int32')
This has the advantage that specific Index methods are still available on the
result. On the other hand, this might have backward incompatibilities: e.g.
compared to numpy arrays, Index objects are not mutable. To get the original
ndarray, you can always convert explicitly using np.asarray(idx.hour).
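For example, continuing the snippet above, the explicit conversion back to an ndarray looks like this:
import numpy as np
import pandas as pd

idx = pd.date_range("2015-01-01", periods=5, freq="10H")

# .hour now returns an Index; np.asarray recovers the plain ndarray
np.asarray(idx.hour)   # array([ 0, 10, 20,  6, 16])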
pd.unique will now be consistent with extension types#
In prior versions, using Series.unique() and pandas.unique() on Categorical and tz-aware
data-types would yield different return types. These are now made consistent. (GH15903)
Datetime tz-aware
Previous behavior:
# Series
In [5]: pd.Series([pd.Timestamp('20160101', tz='US/Eastern'),
...: pd.Timestamp('20160101', tz='US/Eastern')]).unique()
Out[5]: array([Timestamp('2016-01-01 00:00:00-0500', tz='US/Eastern')], dtype=object)
In [6]: pd.unique(pd.Series([pd.Timestamp('20160101', tz='US/Eastern'),
...: pd.Timestamp('20160101', tz='US/Eastern')]))
Out[6]: array(['2016-01-01T05:00:00.000000000'], dtype='datetime64[ns]')
# Index
In [7]: pd.Index([pd.Timestamp('20160101', tz='US/Eastern'),
...: pd.Timestamp('20160101', tz='US/Eastern')]).unique()
Out[7]: DatetimeIndex(['2016-01-01 00:00:00-05:00'], dtype='datetime64[ns, US/Eastern]', freq=None)
In [8]: pd.unique([pd.Timestamp('20160101', tz='US/Eastern'),
...: pd.Timestamp('20160101', tz='US/Eastern')])
Out[8]: array(['2016-01-01T05:00:00.000000000'], dtype='datetime64[ns]')
New behavior:
# Series, returns an array of Timestamp tz-aware
In [69]: pd.Series([pd.Timestamp(r'20160101', tz=r'US/Eastern'),
....: pd.Timestamp(r'20160101', tz=r'US/Eastern')]).unique()
....:
Out[69]:
<DatetimeArray>
['2016-01-01 00:00:00-05:00']
Length: 1, dtype: datetime64[ns, US/Eastern]
In [70]: pd.unique(pd.Series([pd.Timestamp('20160101', tz='US/Eastern'),
....: pd.Timestamp('20160101', tz='US/Eastern')]))
....:
Out[70]:
<DatetimeArray>
['2016-01-01 00:00:00-05:00']
Length: 1, dtype: datetime64[ns, US/Eastern]
# Index, returns a DatetimeIndex
In [71]: pd.Index([pd.Timestamp('20160101', tz='US/Eastern'),
....: pd.Timestamp('20160101', tz='US/Eastern')]).unique()
....:
Out[71]: DatetimeIndex(['2016-01-01 00:00:00-05:00'], dtype='datetime64[ns, US/Eastern]', freq=None)
In [72]: pd.unique(pd.Index([pd.Timestamp('20160101', tz='US/Eastern'),
....: pd.Timestamp('20160101', tz='US/Eastern')]))
....:
Out[72]: DatetimeIndex(['2016-01-01 00:00:00-05:00'], dtype='datetime64[ns, US/Eastern]', freq=None)
Categoricals
Previous behavior:
In [1]: pd.Series(list('baabc'), dtype='category').unique()
Out[1]:
[b, a, c]
Categories (3, object): [b, a, c]
In [2]: pd.unique(pd.Series(list('baabc'), dtype='category'))
Out[2]: array(['b', 'a', 'c'], dtype=object)
New behavior:
# returns a Categorical
In [73]: pd.Series(list('baabc'), dtype='category').unique()
Out[73]:
['b', 'a', 'c']
Categories (3, object): ['a', 'b', 'c']
In [74]: pd.unique(pd.Series(list('baabc'), dtype='category'))
Out[74]:
['b', 'a', 'c']
Categories (3, object): ['a', 'b', 'c']
S3 file handling#
pandas now uses s3fs for handling S3 connections. This shouldn’t break
any code. However, since s3fs is not a required dependency, you will need to install it separately, as you did with boto
in prior versions of pandas. (GH11915).
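A minimal sketch, assuming s3fs has been installed (e.g. pip install s3fs) and that the bucket and key below are placeholders for your own data:
import pandas as pd

# Reading directly from S3; pandas delegates the connection handling to s3fs
df = pd.read_csv("s3://my-bucket/path/to/data.csv")  # hypothetical S3 path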
Partial string indexing changes#
DatetimeIndex Partial String Indexing now works as an exact match, provided that string resolution coincides with index resolution, including a case when both are seconds (GH14826). See Slice vs. Exact Match for details.
In [75]: df = pd.DataFrame({'a': [1, 2, 3]}, pd.DatetimeIndex(['2011-12-31 23:59:59',
....: '2012-01-01 00:00:00',
....: '2012-01-01 00:00:01']))
....:
Previous behavior:
In [4]: df['2011-12-31 23:59:59']
Out[4]:
a
2011-12-31 23:59:59 1
In [5]: df['a']['2011-12-31 23:59:59']
Out[5]:
2011-12-31 23:59:59 1
Name: a, dtype: int64
New behavior:
In [4]: df['2011-12-31 23:59:59']
KeyError: '2011-12-31 23:59:59'
In [5]: df['a']['2011-12-31 23:59:59']
Out[5]: 1
Concat of different float dtypes will not automatically upcast#
Previously, concat of multiple objects with different float dtypes would automatically upcast results to a dtype of float64.
Now the smallest acceptable dtype will be used (GH13247)
In [76]: df1 = pd.DataFrame(np.array([1.0], dtype=np.float32, ndmin=2))
In [77]: df1.dtypes
Out[77]:
0 float32
Length: 1, dtype: object
In [78]: df2 = pd.DataFrame(np.array([np.nan], dtype=np.float32, ndmin=2))
In [79]: df2.dtypes
Out[79]:
0 float32
Length: 1, dtype: object
Previous behavior:
In [7]: pd.concat([df1, df2]).dtypes
Out[7]:
0 float64
dtype: object
New behavior:
In [80]: pd.concat([df1, df2]).dtypes
Out[80]:
0 float32
Length: 1, dtype: object
pandas Google BigQuery support has moved#
pandas has split off Google BigQuery support into a separate package pandas-gbq. You can conda install pandas-gbq -c conda-forge or
pip install pandas-gbq to get it. The functionality of read_gbq() and DataFrame.to_gbq() remains the same with the
currently released version of pandas-gbq=0.1.4. Documentation is now hosted here (GH15347)
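A minimal sketch, assuming pandas-gbq is installed, credentials are configured, and "my-project" is a placeholder for your own Google Cloud project id; pd.read_gbq() continues to work as a thin wrapper around this package:
import pandas_gbq

# Run a query through the standalone pandas-gbq package
df = pandas_gbq.read_gbq("SELECT 1 AS x", project_id="my-project")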
Memory usage for Index is more accurate#
In previous versions, showing .memory_usage() on a pandas structure that has an index would only include actual index values and not include structures that facilitated fast indexing. This will generally differ for Index and MultiIndex and less so for other index types. (GH15237)
Previous behavior:
In [8]: index = pd.Index(['foo', 'bar', 'baz'])
In [9]: index.memory_usage(deep=True)
Out[9]: 180
In [10]: index.get_loc('foo')
Out[10]: 0
In [11]: index.memory_usage(deep=True)
Out[11]: 180
New behavior:
In [8]: index = pd.Index(['foo', 'bar', 'baz'])
In [9]: index.memory_usage(deep=True)
Out[9]: 180
In [10]: index.get_loc('foo')
Out[10]: 0
In [11]: index.memory_usage(deep=True)
Out[11]: 260
DataFrame.sort_index changes#
In certain cases, calling .sort_index() on a MultiIndexed DataFrame would return the same DataFrame without appearing to sort.
This would happen with lexsorted, but non-monotonic, levels. (GH15622, GH15687, GH14015, GH13431, GH15797)
This is unchanged from prior versions, but shown for illustration purposes:
In [81]: df = pd.DataFrame(np.arange(6), columns=['value'],
....: index=pd.MultiIndex.from_product([list('BA'), range(3)]))
....:
In [82]: df
Out[82]:
value
B 0 0
1 1
2 2
A 0 3
1 4
2 5
[6 rows x 1 columns]
In [87]: df.index.is_lexsorted()
Out[87]: False
In [88]: df.index.is_monotonic
Out[88]: False
Sorting works as expected
In [83]: df.sort_index()
Out[83]:
value
A 0 3
1 4
2 5
B 0 0
1 1
2 2
[6 rows x 1 columns]
In [90]: df.sort_index().index.is_lexsorted()
Out[90]: True
In [91]: df.sort_index().index.is_monotonic
Out[91]: True
However, this example, which has a non-monotonic 2nd level,
doesn’t behave as desired.
In [84]: df = pd.DataFrame({'value': [1, 2, 3, 4]},
....: index=pd.MultiIndex([['a', 'b'], ['bb', 'aa']],
....: [[0, 0, 1, 1], [0, 1, 0, 1]]))
....:
In [85]: df
Out[85]:
value
a bb 1
aa 2
b bb 3
aa 4
[4 rows x 1 columns]
Previous behavior:
In [11]: df.sort_index()
Out[11]:
value
a bb 1
aa 2
b bb 3
aa 4
In [14]: df.sort_index().index.is_lexsorted()
Out[14]: True
In [15]: df.sort_index().index.is_monotonic
Out[15]: False
New behavior:
In [94]: df.sort_index()
Out[94]:
value
a aa 2
bb 1
b aa 4
bb 3
[4 rows x 1 columns]
In [95]: df.sort_index().index.is_lexsorted()
Out[95]: True
In [96]: df.sort_index().index.is_monotonic
Out[96]: True
GroupBy describe formatting#
The output formatting of groupby.describe() now labels the describe() metrics in the columns instead of the index.
This format is consistent with groupby.agg() when applying multiple functions at once. (GH4792)
Previous behavior:
In [1]: df = pd.DataFrame({'A': [1, 1, 2, 2], 'B': [1, 2, 3, 4]})
In [2]: df.groupby('A').describe()
Out[2]:
B
A
1 count 2.000000
mean 1.500000
std 0.707107
min 1.000000
25% 1.250000
50% 1.500000
75% 1.750000
max 2.000000
2 count 2.000000
mean 3.500000
std 0.707107
min 3.000000
25% 3.250000
50% 3.500000
75% 3.750000
max 4.000000
In [3]: df.groupby('A').agg([np.mean, np.std, np.min, np.max])
Out[3]:
B
mean std amin amax
A
1 1.5 0.707107 1 2
2 3.5 0.707107 3 4
New behavior:
In [86]: df = pd.DataFrame({'A': [1, 1, 2, 2], 'B': [1, 2, 3, 4]})
In [87]: df.groupby('A').describe()
Out[87]:
B
count mean std min 25% 50% 75% max
A
1 2.0 1.5 0.707107 1.0 1.25 1.5 1.75 2.0
2 2.0 3.5 0.707107 3.0 3.25 3.5 3.75 4.0
[2 rows x 8 columns]
In [88]: df.groupby('A').agg([np.mean, np.std, np.min, np.max])
Out[88]:
B
mean std amin amax
A
1 1.5 0.707107 1 2
2 3.5 0.707107 3 4
[2 rows x 4 columns]
Window binary corr/cov operations return a MultiIndex DataFrame#
A binary window operation, like .corr() or .cov(), when operating on a .rolling(..), .expanding(..), or .ewm(..) object,
will now return a 2-level MultiIndexed DataFrame rather than a Panel, as Panel is now deprecated,
see here. These are equivalent in function,
but a MultiIndexed DataFrame enjoys more support in pandas.
See the section on Windowed Binary Operations for more information. (GH15677)
In [89]: np.random.seed(1234)
In [90]: df = pd.DataFrame(np.random.rand(100, 2),
....: columns=pd.Index(['A', 'B'], name='bar'),
....: index=pd.date_range('20160101',
....: periods=100, freq='D', name='foo'))
....:
In [91]: df.tail()
Out[91]:
bar A B
foo
2016-04-05 0.640880 0.126205
2016-04-06 0.171465 0.737086
2016-04-07 0.127029 0.369650
2016-04-08 0.604334 0.103104
2016-04-09 0.802374 0.945553
[5 rows x 2 columns]
Previous behavior:
In [2]: df.rolling(12).corr()
Out[2]:
<class 'pandas.core.panel.Panel'>
Dimensions: 100 (items) x 2 (major_axis) x 2 (minor_axis)
Items axis: 2016-01-01 00:00:00 to 2016-04-09 00:00:00
Major_axis axis: A to B
Minor_axis axis: A to B
New behavior:
In [92]: res = df.rolling(12).corr()
In [93]: res.tail()
Out[93]:
bar A B
foo bar
2016-04-07 B -0.132090 1.000000
2016-04-08 A 1.000000 -0.145775
B -0.145775 1.000000
2016-04-09 A 1.000000 0.119645
B 0.119645 1.000000
[5 rows x 2 columns]
Retrieving a correlation matrix for a cross-section
In [94]: df.rolling(12).corr().loc['2016-04-07']
Out[94]:
bar A B
bar
A 1.00000 -0.13209
B -0.13209 1.00000
[2 rows x 2 columns]
HDFStore where string comparison#
In previous versions, most types could be compared to a string column in an HDFStore,
usually resulting in an invalid comparison that returned an empty result frame. These comparisons will now raise a
TypeError (GH15492)
In [95]: df = pd.DataFrame({'unparsed_date': ['2014-01-01', '2014-01-01']})
In [96]: df.to_hdf('store.h5', 'key', format='table', data_columns=True)
In [97]: df.dtypes
Out[97]:
unparsed_date object
Length: 1, dtype: object
Previous behavior:
In [4]: pd.read_hdf('store.h5', 'key', where='unparsed_date > ts')
File "<string>", line 1
(unparsed_date > 1970-01-01 00:00:01.388552400)
^
SyntaxError: invalid token
New behavior:
In [18]: ts = pd.Timestamp('2014-01-01')
In [19]: pd.read_hdf('store.h5', 'key', where='unparsed_date > ts')
TypeError: Cannot compare 2014-01-01 00:00:00 of
type <class 'pandas.tslib.Timestamp'> to string column
Index.intersection and inner join now preserve the order of the left Index#
Index.intersection() now preserves the order of the calling Index (left)
instead of the other Index (right) (GH15582). This affects inner
joins, DataFrame.join() and merge(), and the .align method.
Index.intersection
In [98]: left = pd.Index([2, 1, 0])
In [99]: left
Out[99]: Index([2, 1, 0], dtype='int64')
In [100]: right = pd.Index([1, 2, 3])
In [101]: right
Out[101]: Index([1, 2, 3], dtype='int64')
Previous behavior:
In [4]: left.intersection(right)
Out[4]: Int64Index([1, 2], dtype='int64')
New behavior:
In [102]: left.intersection(right)
Out[102]: Index([2, 1], dtype='int64')
DataFrame.join and pd.merge
In [103]: left = pd.DataFrame({'a': [20, 10, 0]}, index=[2, 1, 0])
In [104]: left
Out[104]:
a
2 20
1 10
0 0
[3 rows x 1 columns]
In [105]: right = pd.DataFrame({'b': [100, 200, 300]}, index=[1, 2, 3])
In [106]: right
Out[106]:
b
1 100
2 200
3 300
[3 rows x 1 columns]
Previous behavior:
In [4]: left.join(right, how='inner')
Out[4]:
a b
1 10 100
2 20 200
New behavior:
In [107]: left.join(right, how='inner')
Out[107]:
a b
2 20 200
1 10 100
[2 rows x 2 columns]
Pivot table always returns a DataFrame#
The documentation for pivot_table() states that a DataFrame is always returned. Here a bug
is fixed that allowed this to return a Series under certain circumstances. (GH4386)
In [108]: df = pd.DataFrame({'col1': [3, 4, 5],
.....: 'col2': ['C', 'D', 'E'],
.....: 'col3': [1, 3, 9]})
.....:
In [109]: df
Out[109]:
col1 col2 col3
0 3 C 1
1 4 D 3
2 5 E 9
[3 rows x 3 columns]
Previous behavior:
In [2]: df.pivot_table('col1', index=['col3', 'col2'], aggfunc=np.sum)
Out[2]:
col3 col2
1 C 3
3 D 4
9 E 5
Name: col1, dtype: int64
New behavior:
In [110]: df.pivot_table('col1', index=['col3', 'col2'], aggfunc=np.sum)
Out[110]:
col1
col3 col2
1 C 3
3 D 4
9 E 5
[3 rows x 1 columns]
Other API changes#
The numexpr version is now required to be >= 2.4.6, and it will not be used at all if this requirement is not met (GH15213).
CParserError has been renamed to ParserError in pd.read_csv() and will be removed in the future (GH12665)
SparseArray.cumsum() and SparseSeries.cumsum() will now always return SparseArray and SparseSeries respectively (GH12855)
DataFrame.applymap() with an empty DataFrame will return a copy of the empty DataFrame instead of a Series (GH8222)
Series.map() now respects default values of dictionary subclasses with a __missing__ method, such as collections.Counter (GH15999)
.loc has compat with .ix for accepting iterators, and NamedTuples (GH15120)
interpolate() and fillna() will raise a ValueError if the limit keyword argument is not greater than 0. (GH9217)
pd.read_csv() will now issue a ParserWarning whenever there are conflicting values provided by the dialect parameter and the user (GH14898)
pd.read_csv() will now raise a ValueError for the C engine if the quote character is larger than one byte (GH11592)
inplace arguments now require a boolean value, else a ValueError is thrown (GH14189)
pandas.api.types.is_datetime64_ns_dtype will now report True on a tz-aware dtype, similar to pandas.api.types.is_datetime64_any_dtype
DataFrame.asof() will return a null-filled Series instead of the scalar NaN if a match is not found (GH15118)
Specific support for copy.copy() and copy.deepcopy() functions on NDFrame objects (GH15444)
Series.sort_values() accepts a one element list of bool for consistency with the behavior of DataFrame.sort_values() (GH15604)
.merge() and .join() on category dtype columns will now preserve the category dtype when possible (GH10409)
SparseDataFrame.default_fill_value will be 0, previously was nan in the return from pd.get_dummies(..., sparse=True) (GH15594)
The default behaviour of Series.str.match has changed from extracting
groups to matching the pattern. The extracting behaviour has been deprecated
since pandas version 0.13.0 and can be done with the Series.str.extract
method (GH5224). As a consequence, the as_indexer keyword is
ignored (no longer needed to specify the new behaviour) and is deprecated; a short illustration follows this list.
NaT will now correctly report False for datetimelike boolean operations such as is_month_start (GH15781)
NaT will now correctly return np.nan for Timedelta and Period accessors such as days and quarter (GH15782)
NaT will now return NaT for the tz_localize and tz_convert
methods (GH15830)
DataFrame and Panel constructors with invalid input will now raise ValueError rather than pandas.core.common.PandasError if called with scalar inputs and not axes; the PandasError exception has been removed as well. (GH15541)
The exception pandas.core.common.AmbiguousIndexError is removed as it is not referenced (GH15541)
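Here is the illustration promised above for the Series.str.match change (matching versus extracting); the data are illustrative only:
import pandas as pd

s = pd.Series(['a1', 'b2', 'c3'])

# New default: match returns a boolean Series indicating whether the pattern matches
s.str.match(r'[ab]\d')                     # 0 True, 1 True, 2 False

# To extract groups, use Series.str.extract instead
s.str.extract(r'([ab])(\d)', expand=True)  # DataFrame of captured groups (NaN where no match)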
Reorganization of the library: privacy changes#
Modules privacy has changed#
Some formerly public python/c/c++/cython extension modules have been moved and/or renamed. These are all removed from the public API.
Furthermore, the pandas.core, pandas.compat, and pandas.util top-level modules are now considered to be PRIVATE.
If indicated, a deprecation warning will be issued if you reference these modules. (GH12588)
Previous Location      New Location                         Deprecated
pandas.lib             pandas._libs.lib                     X
pandas.tslib           pandas._libs.tslib                   X
pandas.computation     pandas.core.computation              X
pandas.msgpack         pandas.io.msgpack
pandas.index           pandas._libs.index
pandas.algos           pandas._libs.algos
pandas.hashtable       pandas._libs.hashtable
pandas.indexes         pandas.core.indexes
pandas.json            pandas._libs.json / pandas.io.json   X
pandas.parser          pandas._libs.parsers                 X
pandas.formats         pandas.io.formats
pandas.sparse          pandas.core.sparse
pandas.tools           pandas.core.reshape                  X
pandas.types           pandas.core.dtypes                   X
pandas.io.sas.saslib   pandas.io.sas._sas
pandas._join           pandas._libs.join
pandas._hash           pandas._libs.hashing
pandas._period         pandas._libs.period
pandas._sparse         pandas._libs.sparse
pandas._testing        pandas._libs.testing
pandas._window         pandas._libs.window
Some new subpackages are created with public functionality that is not directly
exposed in the top-level namespace: pandas.errors, pandas.plotting and
pandas.testing (more details below). Together with pandas.api.types and
certain functions in the pandas.io and pandas.tseries submodules,
these are now the public subpackages.
Further changes:
The function union_categoricals() is now importable from pandas.api.types, formerly from pandas.types.concat (GH15998)
The type import pandas.tslib.NaTType is deprecated and can be replaced by using type(pandas.NaT) (GH16146)
The public functions in pandas.tools.hashing are deprecated from that location, but are now importable from pandas.util (GH16223); see the import sketch after this list
The modules in pandas.util: decorators, print_versions, doctools, validators, depr_module are now private. Only the functions exposed in pandas.util itself are public (GH16223)
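As mentioned above, a minimal import sketch of the relocated helpers:
# New public locations for these helpers
from pandas.api.types import union_categoricals   # formerly pandas.types.concat
from pandas.util import hash_pandas_object        # formerly pandas.tools.hashing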
pandas.errors#
We are adding a standard public module for all pandas exceptions & warnings, pandas.errors (GH14800). Previously
these exceptions & warnings could be imported from pandas.core.common or pandas.io.common. These exceptions and warnings
will be removed from the *.common locations in a future release. (GH15541)
The following are now part of this API:
['DtypeWarning',
'EmptyDataError',
'OutOfBoundsDatetime',
'ParserError',
'ParserWarning',
'PerformanceWarning',
'UnsortedIndexError',
'UnsupportedFunctionCall']
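A minimal sketch of catching one of the exceptions listed above from its new location:
import io

import pandas as pd
from pandas.errors import EmptyDataError

try:
    pd.read_csv(io.StringIO(""))          # an empty input has no columns to parse
except EmptyDataError as err:
    print("caught:", err)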
pandas.testing#
We are adding a standard module that exposes the public testing functions in pandas.testing (GH9895). Those functions can be used when writing tests for functionality using pandas objects.
The following testing functions are now part of this API:
testing.assert_frame_equal()
testing.assert_series_equal()
testing.assert_index_equal()
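A minimal usage sketch of the public testing helpers:
import pandas as pd
import pandas.testing as tm

# Passes silently when the two objects are equal; raises AssertionError otherwise
tm.assert_series_equal(pd.Series([1, 2, 3]), pd.Series([1, 2, 3]))
tm.assert_frame_equal(pd.DataFrame({"a": [1]}), pd.DataFrame({"a": [1]}))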
pandas.plotting#
A new public pandas.plotting module has been added that holds plotting functionality that was previously in either pandas.tools.plotting or in the top-level namespace. See the deprecations sections for more details.
Other development changes#
Building pandas for development now requires cython >= 0.23 (GH14831)
Require at least 0.23 version of cython to avoid problems with character encodings (GH14699)
Switched the test framework to use pytest (GH13097)
Reorganization of tests directory layout (GH14854, GH15707).
Deprecations#
Deprecate .ix#
The .ix indexer is deprecated, in favor of the more strict .iloc and .loc indexers. .ix offers a lot of magic on the inference of what the user wants to do. More specifically, .ix can decide to index positionally OR via labels, depending on the data type of the index. This has caused quite a bit of user confusion over the years. The full indexing documentation is here. (GH14218)
The recommended methods of indexing are:
.loc if you want to index by label
.iloc if you want to index by position.
Using .ix will now show a DeprecationWarning with a link to some examples of how to convert code here.
In [111]: df = pd.DataFrame({'A': [1, 2, 3],
.....: 'B': [4, 5, 6]},
.....: index=list('abc'))
.....:
In [112]: df
Out[112]:
A B
a 1 4
b 2 5
c 3 6
[3 rows x 2 columns]
Previous behavior, where you wish to get the 0th and the 2nd elements from the index in the ‘A’ column.
In [3]: df.ix[[0, 2], 'A']
Out[3]:
a 1
c 3
Name: A, dtype: int64
Using .loc. Here we will select the appropriate indexes from the index, then use label indexing.
In [113]: df.loc[df.index[[0, 2]], 'A']
Out[113]:
a 1
c 3
Name: A, Length: 2, dtype: int64
Using .iloc. Here we will get the location of the ‘A’ column, then use positional indexing to select things.
In [114]: df.iloc[[0, 2], df.columns.get_loc('A')]
Out[114]:
a 1
c 3
Name: A, Length: 2, dtype: int64
Deprecate Panel#
Panel is deprecated and will be removed in a future version. The recommended way to represent 3-D data is
with a MultiIndex on a DataFrame via the to_frame() method or with the xarray package. pandas
provides a to_xarray() method to automate this conversion (GH13563).
In [133]: import pandas._testing as tm
In [134]: p = tm.makePanel()
In [135]: p
Out[135]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 3 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-05 00:00:00
Minor_axis axis: A to D
Convert to a MultiIndex DataFrame
In [136]: p.to_frame()
Out[136]:
ItemA ItemB ItemC
major minor
2000-01-03 A 0.628776 -1.409432 0.209395
B 0.988138 -1.347533 -0.896581
C -0.938153 1.272395 -0.161137
D -0.223019 -0.591863 -1.051539
2000-01-04 A 0.186494 1.422986 -0.592886
B -0.072608 0.363565 1.104352
C -1.239072 -1.449567 0.889157
D 2.123692 -0.414505 -0.319561
2000-01-05 A 0.952478 -2.147855 -1.473116
B -0.550603 -0.014752 -0.431550
C 0.139683 -1.195524 0.288377
D 0.122273 -1.425795 -0.619993
[12 rows x 3 columns]
Convert to an xarray DataArray
In [137]: p.to_xarray()
Out[137]:
<xarray.DataArray (items: 3, major_axis: 3, minor_axis: 4)>
array([[[ 0.628776, 0.988138, -0.938153, -0.223019],
[ 0.186494, -0.072608, -1.239072, 2.123692],
[ 0.952478, -0.550603, 0.139683, 0.122273]],
[[-1.409432, -1.347533, 1.272395, -0.591863],
[ 1.422986, 0.363565, -1.449567, -0.414505],
[-2.147855, -0.014752, -1.195524, -1.425795]],
[[ 0.209395, -0.896581, -0.161137, -1.051539],
[-0.592886, 1.104352, 0.889157, -0.319561],
[-1.473116, -0.43155 , 0.288377, -0.619993]]])
Coordinates:
* items (items) object 'ItemA' 'ItemB' 'ItemC'
* major_axis (major_axis) datetime64[ns] 2000-01-03 2000-01-04 2000-01-05
* minor_axis (minor_axis) object 'A' 'B' 'C' 'D'
Deprecate groupby.agg() with a dictionary when renaming#
The .groupby(..).agg(..), .rolling(..).agg(..), and .resample(..).agg(..) syntax can accept a variety of inputs, including scalars,
lists, and a dict of column names to scalars or lists. This provides a useful syntax for constructing multiple
(potentially different) aggregations.
However, .agg(..) can also accept a dict that allows ‘renaming’ of the result columns. This is a complicated and confusing syntax and is not consistent
between Series and DataFrame. We are deprecating this ‘renaming’ functionality.
We are deprecating passing a dict to a grouped/rolled/resampled Series. This allowed
one to rename the resulting aggregation, but it had a completely different
meaning than passing a dictionary to a grouped DataFrame, which accepts a mapping of columns to aggregations.
We are deprecating passing a dict-of-dicts to a grouped/rolled/resampled DataFrame in a similar manner.
This is an illustrative example:
In [115]: df = pd.DataFrame({'A': [1, 1, 1, 2, 2],
.....: 'B': range(5),
.....: 'C': range(5)})
.....:
In [116]: df
Out[116]:
A B C
0 1 0 0
1 1 1 1
2 1 2 2
3 2 3 3
4 2 4 4
[5 rows x 3 columns]
Here is a typical and useful syntax for computing different aggregations for different columns. We aggregate
from the dict by taking the specified columns and applying the list of functions to each.
This returns a MultiIndex for the columns (this is not deprecated).
In [117]: df.groupby('A').agg({'B': 'sum', 'C': 'min'})
Out[117]:
B C
A
1 3 0
2 7 3
[2 rows x 2 columns]
Here’s an example of the first deprecation, passing a dict to a grouped Series. This
is a combination aggregation & renaming:
In [6]: df.groupby('A').B.agg({'foo': 'count'})
FutureWarning: using a dict on a Series for aggregation
is deprecated and will be removed in a future version
Out[6]:
foo
A
1 3
2 2
You can accomplish the same operation more idiomatically by:
In [118]: df.groupby('A').B.agg(['count']).rename(columns={'count': 'foo'})
Out[118]:
foo
A
1 3
2 2
[2 rows x 1 columns]
Here’s an example of the second deprecation, passing a dict-of-dict to a grouped DataFrame:
In [23]: (df.groupby('A')
...: .agg({'B': {'foo': 'sum'}, 'C': {'bar': 'min'}})
...: )
FutureWarning: using a dict with renaming is deprecated and
will be removed in a future version
Out[23]:
B C
foo bar
A
1 3 0
2 7 3
You can accomplish nearly the same by:
In [119]: (df.groupby('A')
.....: .agg({'B': 'sum', 'C': 'min'})
.....: .rename(columns={'B': 'foo', 'C': 'bar'})
.....: )
.....:
Out[119]:
foo bar
A
1 3 0
2 7 3
[2 rows x 2 columns]
Deprecate .plotting#
The pandas.tools.plotting module has been deprecated, in favor of the top level pandas.plotting module. All the public plotting functions are now available
from pandas.plotting (GH12548).
Furthermore, the top-level pandas.scatter_matrix and pandas.plot_params are deprecated.
Users can import these from pandas.plotting as well.
Previous script:
pd.tools.plotting.scatter_matrix(df)
pd.scatter_matrix(df)
Should be changed to:
pd.plotting.scatter_matrix(df)
Other deprecations#
SparseArray.to_dense() has deprecated the fill parameter, as that parameter was not being respected (GH14647)
SparseSeries.to_dense() has deprecated the sparse_only parameter (GH14647)
Series.repeat() has deprecated the reps parameter in favor of repeats (GH12662); see the example after this list
The Series constructor and .astype method have deprecated accepting timestamp dtypes without a frequency (e.g. np.datetime64) for the dtype parameter (GH15524)
Index.repeat() and MultiIndex.repeat() have deprecated the n parameter in favor of repeats (GH12662)
Categorical.searchsorted() and Series.searchsorted() have deprecated the v parameter in favor of value (GH12662)
TimedeltaIndex.searchsorted(), DatetimeIndex.searchsorted(), and PeriodIndex.searchsorted() have deprecated the key parameter in favor of value (GH12662)
DataFrame.astype() has deprecated the raise_on_error parameter in favor of errors (GH14878)
Series.sortlevel and DataFrame.sortlevel have been deprecated in favor of Series.sort_index and DataFrame.sort_index (GH15099)
importing concat from pandas.tools.merge has been deprecated in favor of imports from the pandas namespace. This should only affect explicit imports (GH15358)
Series/DataFrame/Panel.consolidate() has been deprecated as a public method. (GH15483)
The as_indexer keyword of Series.str.match() has been deprecated (ignored keyword) (GH15257).
The following top-level pandas functions have been deprecated and will be removed in a future version (GH13790, GH15940)
pd.pnow(), replaced by Period.now()
pd.Term is removed, as it is not applicable to user code. Instead use in-line string expressions in the where clause when searching in HDFStore
pd.Expr is removed, as it is not applicable to user code.
pd.match() is removed.
pd.groupby(), replaced by using the .groupby() method directly on a Series/DataFrame
pd.get_store(), replaced by a direct call to pd.HDFStore(...)
is_any_int_dtype, is_floating_dtype, and is_sequence are deprecated from pandas.api.types (GH16042)
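As an example of the keyword renames noted above (here for Series.repeat()), a minimal sketch with illustrative data:
import pandas as pd

s = pd.Series([1, 2])

# Use the new keyword name `repeats` rather than the deprecated `reps`
s.repeat(repeats=2)   # index 0, 0, 1, 1 with values 1, 1, 2, 2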
Removal of prior version deprecations/changes#
The pandas.rpy module is removed. Similar functionality can be accessed
through the rpy2 project.
See the R interfacing docs for more details.
The pandas.io.ga module with a google-analytics interface is removed (GH11308).
Similar functionality can be found in the Google2Pandas package.
pd.to_datetime and pd.to_timedelta have dropped the coerce parameter in favor of errors (GH13602); see the example after this list
pandas.stats.fama_macbeth, pandas.stats.ols, pandas.stats.plm and pandas.stats.var, as well as the top-level pandas.fama_macbeth and pandas.ols routines are removed. Similar functionality can be found in the statsmodels package. (GH11898)
The TimeSeries and SparseTimeSeries classes, aliases of Series
and SparseSeries, are removed (GH10890, GH15098).
Series.is_time_series is dropped in favor of Series.index.is_all_dates (GH15098)
The deprecated irow, icol, iget and iget_value methods are removed
in favor of iloc and iat as explained here (GH10711).
The deprecated DataFrame.iterkv() has been removed in favor of DataFrame.iteritems() (GH10711)
The Categorical constructor has dropped the name parameter (GH10632)
Categorical has dropped support for NaN categories (GH10748)
The take_last parameter has been dropped from duplicated(), drop_duplicates(), nlargest(), and nsmallest() methods (GH10236, GH10792, GH10920)
Series, Index, and DataFrame have dropped the sort and order methods (GH10726)
Where clauses in pytables are only accepted as strings and expressions types and not other data-types (GH12027)
DataFrame has dropped the combineAdd and combineMult methods in favor of add and mul respectively (GH10735)
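As noted above for pd.to_datetime, the errors keyword now covers what the removed coerce parameter used to do; a minimal sketch:
import pandas as pd

# Invalid entries become NaT instead of raising
pd.to_datetime(['2015-01-01', 'not a date'], errors='coerce')
# DatetimeIndex(['2015-01-01', 'NaT'], dtype='datetime64[ns]', freq=None)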
Performance improvements#
Improved performance of pd.wide_to_long() (GH14779)
Improved performance of pd.factorize() by releasing the GIL with object dtype when inferred as strings (GH14859, GH16057)
Improved performance of timeseries plotting with an irregular DatetimeIndex
(or with compat_x=True) (GH15073).
Improved performance of groupby().cummin() and groupby().cummax() (GH15048, GH15109, GH15561, GH15635)
Improved performance and reduced memory when indexing with a MultiIndex (GH15245)
When reading a buffer object in read_sas() without a specified format, the filepath string is inferred rather than the buffer object. (GH14947)
Improved performance of .rank() for categorical data (GH15498)
Improved performance when using .unstack() (GH15503)
Improved performance of merge/join on category columns (GH10409)
Improved performance of drop_duplicates() on bool columns (GH12963)
Improved performance of pd.core.groupby.GroupBy.apply when the applied
function used the .name attribute of the group DataFrame (GH15062).
Improved performance of iloc indexing with a list or array (GH15504).
Improved performance of Series.sort_index() with a monotonic index (GH15694)
Improved performance in pd.read_csv() on some platforms with buffered reads (GH16039)
Bug fixes#
Conversion#
Bug in Timestamp.replace now raises TypeError when incorrect argument names are given; previously this raised ValueError (GH15240)
Bug in Timestamp.replace with compat for passing long integers (GH15030)
Bug in Timestamp returning UTC based time/date attributes when a timezone was provided (GH13303, GH6538)
Bug in Timestamp incorrectly localizing timezones during construction (GH11481, GH15777)
Bug in TimedeltaIndex addition where overflow was being allowed without error (GH14816)
Bug in TimedeltaIndex raising a ValueError when boolean indexing with loc (GH14946)
Bug in catching an overflow in Timestamp + Timedelta/Offset operations (GH15126)
Bug in DatetimeIndex.round() and Timestamp.round() floating point accuracy when rounding by milliseconds or less (GH14440, GH15578)
Bug in astype() where inf values were incorrectly converted to integers. Now raises an error with astype() for Series and DataFrames (GH14265)
Bug in DataFrame(..).apply(to_numeric) when values are of type decimal.Decimal. (GH14827)
Bug in describe() when passing a numpy array which does not contain the median to the percentiles keyword argument (GH14908)
Cleaned up PeriodIndex constructor, including raising on floats more consistently (GH13277)
Bug in using __deepcopy__ on empty NDFrame objects (GH15370)
Bug in .replace() may result in incorrect dtypes. (GH12747, GH15765)
Bug in Series.replace and DataFrame.replace which failed on empty replacement dicts (GH15289)
Bug in Series.replace which replaced a numeric by string (GH15743)
Bug in Index construction with NaN elements and integer dtype specified (GH15187)
Bug in Series construction with a datetimetz (GH14928)
Bug in Series.dt.round() inconsistent behaviour on NaT ‘s with different arguments (GH14940)
Bug in Series constructor when both copy=True and dtype arguments are provided (GH15125)
An incorrectly dtyped Series was returned by comparison methods (e.g., lt, gt, …) against a constant for an empty DataFrame (GH15077)
Bug in Series.ffill() with mixed dtypes containing tz-aware datetimes. (GH14956)
Bug in DataFrame.fillna() where the argument downcast was ignored when fillna value was of type dict (GH15277)
Bug in .asfreq(), where frequency was not set for empty Series (GH14320)
Bug in DataFrame construction with nulls and datetimes in a list-like (GH15869)
Bug in DataFrame.fillna() with tz-aware datetimes (GH15855)
Bug in is_string_dtype, is_timedelta64_ns_dtype, and is_string_like_dtype in which an error was raised when None was passed in (GH15941)
Bug in the return type of pd.unique on a Categorical, which was returning an ndarray and not a Categorical (GH15903)
Bug in Index.to_series() where the index was not copied (and so mutating later would change the original), (GH15949)
Bug in indexing with partial string indexing with a len-1 DataFrame (GH16071)
Bug in Series construction where passing invalid dtype didn’t raise an error. (GH15520)
Indexing#
Bug in Index power operations with reversed operands (GH14973)
Bug in DataFrame.sort_values() when sorting by multiple columns where one column is of type int64 and contains NaT (GH14922)
Bug in DataFrame.reindex() in which method was ignored when passing columns (GH14992)
Bug in DataFrame.loc with indexing a MultiIndex with a Series indexer (GH14730, GH15424)
Bug in DataFrame.loc with indexing a MultiIndex with a numpy array (GH15434)
Bug in Series.asof which raised if the series contained all np.nan (GH15713)
Bug in .at when selecting from a tz-aware column (GH15822)
Bug in Series.where() and DataFrame.where() where array-like conditionals were being rejected (GH15414)
Bug in Series.where() where TZ-aware data was converted to float representation (GH15701)
Bug in .loc that would not return the correct dtype for scalar access for a DataFrame (GH11617)
Bug in output formatting of a MultiIndex when names are integers (GH12223, GH15262)
Bug in Categorical.searchsorted() where alphabetical instead of the provided categorical order was used (GH14522)
Bug in Series.iloc where a Categorical object was returned for list-like index input when a Series was expected. (GH14580)
Bug in DataFrame.isin comparing datetimelike to empty frame (GH15473)
Bug in .reset_index() when an all NaN level of a MultiIndex would fail (GH6322)
Bug in .reset_index() when raising error for index name already present in MultiIndex columns (GH16120)
Bug in creating a MultiIndex with tuples and not passing a list of names; this will now raise ValueError (GH15110)
Bug in the HTML display with a MultiIndex and truncation (GH14882)
Bug in the display of .info() where a qualifier (+) would always be displayed with a MultiIndex that contains only non-strings (GH15245)
Bug in pd.concat() where the names of MultiIndex of resulting DataFrame are not handled correctly when None is presented in the names of MultiIndex of input DataFrame (GH15787)
Bug in DataFrame.sort_index() and Series.sort_index() where na_position doesn’t work with a MultiIndex (GH14784, GH16604)
Bug in pd.concat() when combining objects with a CategoricalIndex (GH16111)
Bug in indexing with a scalar and a CategoricalIndex (GH16123)
IO#
Bug in pd.to_numeric() in which float and unsigned integer elements were being improperly casted (GH14941, GH15005)
Bug in pd.read_fwf() where the skiprows parameter was not being respected during column width inference (GH11256)
Bug in pd.read_csv() in which the dialect parameter was not being verified before processing (GH14898)
Bug in pd.read_csv() in which missing data was being improperly handled with usecols (GH6710)
Bug in pd.read_csv() in which a file containing a row with many columns followed by rows with fewer columns would cause a crash (GH14125)
Bug in pd.read_csv() for the C engine where usecols were being indexed incorrectly with parse_dates (GH14792)
Bug in pd.read_csv() with parse_dates when multi-line headers are specified (GH15376)
Bug in pd.read_csv() with float_precision='round_trip' which caused a segfault when a text entry is parsed (GH15140)
Bug in pd.read_csv() when an index was specified and no values were specified as null values (GH15835)
Bug in pd.read_csv() in which certain invalid file objects caused the Python interpreter to crash (GH15337)
Bug in pd.read_csv() in which invalid values for nrows and chunksize were allowed (GH15767)
Bug in pd.read_csv() for the Python engine in which unhelpful error messages were being raised when parsing errors occurred (GH15910)
Bug in pd.read_csv() in which the skipfooter parameter was not being properly validated (GH15925)
Bug in pd.to_csv() in which there was numeric overflow when a timestamp index was being written (GH15982)
Bug in pd.util.hashing.hash_pandas_object() in which hashing of categoricals depended on the ordering of categories, instead of just their values. (GH15143)
Bug in .to_json() where lines=True and contents (keys or values) contain escaped characters (GH15096)
Bug in .to_json() causing single byte ascii characters to be expanded to four byte unicode (GH15344)
Bug in .to_json() for the C engine where rollover was not correctly handled for case where frac is odd and diff is exactly 0.5 (GH15716, GH15864)
Bug in pd.read_json() for Python 2 where lines=True and contents contain non-ascii unicode characters (GH15132)
Bug in pd.read_msgpack() in which Series categoricals were being improperly processed (GH14901)
Bug in pd.read_msgpack() which did not allow loading of a dataframe with an index of type CategoricalIndex (GH15487)
Bug in pd.read_msgpack() when deserializing a CategoricalIndex (GH15487)
Bug in DataFrame.to_records() with converting a DatetimeIndex with a timezone (GH13937)
Bug in DataFrame.to_records() which failed with unicode characters in column names (GH11879)
Bug in .to_sql() when writing a DataFrame with numeric index names (GH15404).
Bug in DataFrame.to_html() with index=False and max_rows raising an IndexError (GH14998)
Bug in pd.read_hdf() passing a Timestamp to the where parameter with a non date column (GH15492)
Bug in DataFrame.to_stata() and StataWriter which produced incorrectly formatted files for some locales (GH13856)
Bug in StataReader and StataWriter which allows invalid encodings (GH15723)
Bug in the Series repr not showing the length when the output was truncated (GH15962).
Plotting#
Bug in DataFrame.hist where plt.tight_layout caused an AttributeError (use matplotlib >= 2.0.1) (GH9351)
Bug in DataFrame.boxplot where fontsize was not applied to the tick labels on both axes (GH15108)
Bug in the date and time converters pandas registers with matplotlib not handling multiple dimensions (GH16026)
Bug in pd.scatter_matrix() could accept either color or c, but not both (GH14855)
GroupBy/resample/rolling#
Bug in .groupby(..).resample() when passed the on= kwarg. (GH15021)
Properly set __name__ and __qualname__ for Groupby.* functions (GH14620)
Bug in GroupBy.get_group() failing with a categorical grouper (GH15155)
Bug in .groupby(...).rolling(...) when on is specified and using a DatetimeIndex (GH15130, GH13966)
Bug in groupby operations with timedelta64 when passing numeric_only=False (GH5724)
Bug in groupby.apply() coercing object dtypes to numeric types, when not all values were numeric (GH14423, GH15421, GH15670)
Bug in resample, where a non-string loffset argument would not be applied when resampling a timeseries (GH13218)
Bug in DataFrame.groupby().describe() when grouping on Index containing tuples (GH14848)
Bug in groupby().nunique() with a datetimelike-grouper where bins counts were incorrect (GH13453)
Bug in groupby.transform() that would coerce the resultant dtypes back to the original (GH10972, GH11444)
Bug in groupby.agg() incorrectly localizing timezone on datetime (GH15426, GH10668, GH13046)
Bug in .rolling/expanding() functions where count() was not counting np.Inf, nor handling object dtypes (GH12541)
Bug in .rolling() where pd.Timedelta or datetime.timedelta was not accepted as a window argument (GH15440)
Bug in Rolling.quantile function that caused a segmentation fault when called with a quantile value outside of the range [0, 1] (GH15463)
Bug in DataFrame.resample().median() if duplicate column names are present (GH14233)
Sparse#
Bug in SparseSeries.reindex on single level with list of length 1 (GH15447)
Bug in repr-formatting a SparseDataFrame after a value was set on (a copy of) one of its series (GH15488)
Bug in SparseDataFrame construction with lists not coercing to dtype (GH15682)
Bug in sparse array indexing in which indices were not being validated (GH15863)
Reshaping#
Bug in pd.merge_asof() where left_index or right_index caused a failure when multiple by was specified (GH15676)
Bug in pd.merge_asof() where left_index/right_index together caused a failure when tolerance was specified (GH15135)
Bug in DataFrame.pivot_table() where dropna=True would not drop all-NaN columns when the columns was a category dtype (GH15193)
Bug in pd.melt() where passing a tuple value for value_vars caused a TypeError (GH15348)
Bug in pd.pivot_table() where no error was raised when values argument was not in the columns (GH14938)
Bug in pd.concat() in which concatenating with an empty dataframe with join='inner' was being improperly handled (GH15328)
Bug with sort=True in DataFrame.join and pd.merge when joining on indexes (GH15582)
Bug in DataFrame.nsmallest and DataFrame.nlargest where identical values resulted in duplicated rows (GH15297)
Bug in pandas.pivot_table() incorrectly raising UnicodeError when passing unicode input for margins keyword (GH13292)
Numeric#
Bug in .rank() which incorrectly ranks ordered categories (GH15420)
Bug in .corr() and .cov() where the column and index were the same object (GH14617)
Bug in .mode() where mode was not returned if there was only a single value (GH15714)
Bug in pd.cut() with a single bin on an all 0s array (GH15428)
Bug in pd.qcut() with a single quantile and an array with identical values (GH15431)
Bug in pandas.tools.utils.cartesian_product() where large input could cause an overflow on Windows (GH15265)
Bug in .eval() which caused multi-line evals to fail with local variables not on the first line (GH15342)
Other#
Compat with SciPy 0.19.0 for testing on .interpolate() (GH15662)
Compat for 32-bit platforms for .qcut/cut; bins will now be int64 dtype (GH14866)
Bug in interactions with Qt when a QtApplication already exists (GH14372)
Use of np.finfo() during import pandas was removed to mitigate a deadlock caused by Python GIL misuse (GH14641)
Contributors#
A total of 204 people contributed patches to this release. People with a
“+” by their names contributed a patch for the first time.
Adam J. Stewart +
Adrian +
Ajay Saxena
Akash Tandon +
Albert Villanova del Moral +
Aleksey Bilogur +
Alexis Mignon +
Amol Kahat +
Andreas Winkler +
Andrew Kittredge +
Anthonios Partheniou
Arco Bast +
Ashish Singal +
Baurzhan Muftakhidinov +
Ben Kandel
Ben Thayer +
Ben Welsh +
Bill Chambers +
Brandon M. Burroughs
Brian +
Brian McFee +
Carlos Souza +
Chris
Chris Ham
Chris Warth
Christoph Gohlke
Christoph Paulik +
Christopher C. Aycock
Clemens Brunner +
D.S. McNeil +
DaanVanHauwermeiren +
Daniel Himmelstein
Dave Willmer
David Cook +
David Gwynne +
David Hoffman +
David Krych
Diego Fernandez +
Dimitris Spathis +
Dmitry L +
Dody Suria Wijaya +
Dominik Stanczak +
Dr-Irv
Dr. Irv +
Elliott Sales de Andrade +
Ennemoser Christoph +
Francesc Alted +
Fumito Hamamura +
Giacomo Ferroni
Graham R. Jeffries +
Greg Williams +
Guilherme Beltramini +
Guilherme Samora +
Hao Wu +
Harshit Patni +
Ilya V. Schurov +
Iván Vallés Pérez
Jackie Leng +
Jaehoon Hwang +
James Draper +
James Goppert +
James McBride +
James Santucci +
Jan Schulz
Jeff Carey
Jeff Reback
JennaVergeynst +
Jim +
Jim Crist
Joe Jevnik
Joel Nothman +
John +
John Tucker +
John W. O’Brien
John Zwinck
Jon M. Mease
Jon Mease
Jonathan Whitmore +
Jonathan de Bruin +
Joost Kranendonk +
Joris Van den Bossche
Joshua Bradt +
Julian Santander
Julien Marrec +
Jun Kim +
Justin Solinsky +
Kacawi +
Kamal Kamalaldin +
Kerby Shedden
Kernc
Keshav Ramaswamy
Kevin Sheppard
Kyle Kelley
Larry Ren
Leon Yin +
Line Pedersen +
Lorenzo Cestaro +
Luca Scarabello
Lukasz +
Mahmoud Lababidi
Mark Mandel +
Matt Roeschke
Matthew Brett
Matthew Roeschke +
Matti Picus
Maximilian Roos
Michael Charlton +
Michael Felt
Michael Lamparski +
Michiel Stock +
Mikolaj Chwalisz +
Min RK
Miroslav Šedivý +
Mykola Golubyev
Nate Yoder
Nathalie Rud +
Nicholas Ver Halen
Nick Chmura +
Nolan Nichols +
Pankaj Pandey +
Pawel Kordek
Pete Huang +
Peter +
Peter Csizsek +
Petio Petrov +
Phil Ruffwind +
Pietro Battiston
Piotr Chromiec
Prasanjit Prakash +
Rob Forgione +
Robert Bradshaw
Robin +
Rodolfo Fernandez
Roger Thomas
Rouz Azari +
Sahil Dua
Sam Foo +
Sami Salonen +
Sarah Bird +
Sarma Tangirala +
Scott Sanderson
Sebastian Bank
Sebastian Gsänger +
Shawn Heide
Shyam Saladi +
Sinhrks
Stephen Rauch +
Sébastien de Menten +
Tara Adiseshan
Thiago Serafim
Thoralf Gutierrez +
Thrasibule +
Tobias Gustafsson +
Tom Augspurger
Tong SHEN +
Tong Shen +
TrigonaMinima +
Uwe +
Wes Turner
Wiktor Tomczak +
WillAyd
Yaroslav Halchenko
Yimeng Zhang +
abaldenko +
adrian-stepien +
alexandercbooth +
atbd +
bastewart +
bmagnusson +
carlosdanielcsantos +
chaimdemulder +
chris-b1
dickreuter +
discort +
dr-leo +
dubourg
dwkenefick +
funnycrab +
gfyoung
goldenbull +
[email protected]
jojomdt +
linebp +
manu +
manuels +
mattip +
maxalbert +
mcocdawc +
nuffe +
paul-mannino
pbreach +
sakkemo +
scls19fr
sinhrks
stijnvanhoey +
the-nose-knows +
themrmax +
tomrod +
tzinckgraf
wandersoncferreira
watercrossing +
wcwagner
xgdgsc +
yui-knk
| 539
| 1,186
|
Accomplishing `A.merge(B).merge(C).merge(D) ....` using `pandas.concat()`
I have several dozen data frames like the following:
import pandas as pd
import numpy as np
A = pd.DataFrame({'col1': np.random.rand(5) ,'col2': np.random.rand(5)})
A.index = [11111, 22222, 33333, 44444, 55555]
B = pd.DataFrame({'col3': np.random.rand(5) ,'col4': np.random.rand(5)})
B.index = [77777, 22222, 33333, 55555, 88888]
I would like to do an outer join on the indices. I can obtain the desired result using A.merge(B) with the following:
A.merge(B, how='outer', left_index=True, right_index=True)
yielding
col1 col2 col3 col4
11111 0.195266 0.765243 NaN NaN
22222 0.524872 0.978260 0.769246 0.318719
33333 0.581588 0.391997 0.962788 0.864938
44444 0.490709 0.082014 NaN NaN
55555 0.339119 0.807546 0.545300 0.378834
77777 NaN NaN 0.345498 0.634918
88888 NaN NaN 0.976489 0.871800
This is what I want. Unfortunately, .merge() is very slow for large dataframes, and elsewhere on this site, I have read that I should use pd.concat() instead. But in this case, pd.concat([A, B])
does not work, because it does not accept the left_index and right_index keywords. Instead it just stacks the two on top of one another:
col1 col2 col3 col4
11111 0.195266 0.765243 NaN NaN
22222 0.524872 0.978260 NaN NaN
33333 0.581588 0.391997 NaN NaN
44444 0.490709 0.082014 NaN NaN
55555 0.339119 0.807546 NaN NaN
77777 NaN NaN 0.345498 0.634918
22222 NaN NaN 0.769246 0.318719
33333 NaN NaN 0.962788 0.864938
55555 NaN NaN 0.545300 0.378834
88888 NaN NaN 0.976489 0.871800
Is there a way to accomplish this join using pd.concat()? Or am I stuck with merge?
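A minimal sketch, assuming the frames A and B defined above: pd.concat aligns on the index when axis=1, and its default join='outer' reproduces the merged result (row order may differ).
# Outer join on the indices via concat instead of merge
pd.concat([A, B], axis=1, join='outer')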
|
63,640,545
|
Swapping values between two pandas columns
|
<p>How can I swap values in a dataframe based on a defined condition?</p>
<p>Given:</p>
<pre><code>DF[['Exchange','predictions']]
Exchange predictions
0 PINK <UNK>
1 PINK <UNK>
2 PINK <UNK>
3 PINK <UNK>
4 PINK <UNK>
... ... ...
490541 NASDAQ PINK
490542 NaN PINK
490543 NASDAQ PINK
490544 NaN PINK
490545 NASDAQ PINK
</code></pre>
<p>I would like Exchange replaced with value in predictions only if Exchange value is NaN and Prediction value is not < UNK >.</p>
| 63,640,595
| 2020-08-28T20:33:16.550000
| 1
| null | 0
| 28
|
python|pandas
|
<p>Let us try <code>fillna</code> with partial condition <code>Series</code></p>
<pre><code>df.Exchange.fillna(df.loc[df['predictions'].ne('<UNK>'), 'predictions'], inplace=True)
df
Out[210]:
Exchange predictions
0 PINK <UNK>
1 PINK <UNK>
2 PINK <UNK>
3 PINK <UNK>
4 PINK <UNK>
... ...
490541 NASDAQ PINK
490542 PINK PINK
490543 NASDAQ PINK
490544 PINK PINK
490545 NASDAQ PINK
</code></pre>
| 2020-08-28T20:37:29.527000
| 1
|
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.swaplevel.html
|
pandas.DataFrame.swaplevel#
pandas.DataFrame.swaplevel#
DataFrame.swaplevel(i=- 2, j=- 1, axis=0)[source]#
Swap levels i and j in a MultiIndex.
Default is to swap the two innermost levels of the index.
Let us try fillna with partial condition Series
df.Exchange.fillna(df.loc[df['predictions'].ne('<UNK>'), 'predictions'], inplace=True)
df
Out[210]:
Exchange predictions
0 PINK <UNK>
1 PINK <UNK>
2 PINK <UNK>
3 PINK <UNK>
4 PINK <UNK>
... ...
490541 NASDAQ PINK
490542 PINK PINK
490543 NASDAQ PINK
490544 PINK PINK
490545 NASDAQ PINK
Parameters
i, jint or strLevels of the indices to be swapped. Can pass level name as string.
axis{0 or ‘index’, 1 or ‘columns’}, default 0The axis to swap levels on. 0 or ‘index’ for row-wise, 1 or
‘columns’ for column-wise.
Returns
DataFrameDataFrame with levels swapped in MultiIndex.
Examples
>>> df = pd.DataFrame(
... {"Grade": ["A", "B", "A", "C"]},
... index=[
... ["Final exam", "Final exam", "Coursework", "Coursework"],
... ["History", "Geography", "History", "Geography"],
... ["January", "February", "March", "April"],
... ],
... )
>>> df
Grade
Final exam History January A
Geography February B
Coursework History March A
Geography April C
In the following example, we will swap the levels of the indices.
Here, we will swap the levels column-wise, but levels can be swapped row-wise
in a similar manner. Note that column-wise is the default behaviour.
By not supplying any arguments for i and j, we swap the last and second to
last indices.
>>> df.swaplevel()
Grade
Final exam January History A
February Geography B
Coursework March History A
April Geography C
By supplying one argument, we can choose which index to swap the last
index with. We can for example swap the first index with the last one as
follows.
>>> df.swaplevel(0)
Grade
January History Final exam A
February Geography Final exam B
March History Coursework A
April Geography Coursework C
We can also define explicitly which indices we want to swap by supplying values
for both i and j. Here, we for example swap the first and second indices.
>>> df.swaplevel(0, 1)
Grade
History Final exam January A
Geography Final exam February B
History Coursework March A
Geography Coursework April C
| 206
| 688
|
Swapping values between two pandas columns
How can I swap values in a dataframe based on a defined condition?
Given:
DF[['Exchange','predictions']]
Exchange predictions
0 PINK <UNK>
1 PINK <UNK>
2 PINK <UNK>
3 PINK <UNK>
4 PINK <UNK>
... ... ...
490541 NASDAQ PINK
490542 NaN PINK
490543 NASDAQ PINK
490544 NaN PINK
490545 NASDAQ PINK
I would like Exchange replaced with value in predictions only if Exchange value is NaN and Prediction value is not < UNK >.
|