What is the non-concatenation axis in pandas?

I am appending a dataframe df1 to a dataframe df2 with both having the same columns but not necessarily in the same order.
df = df1.append(df2)
I am seeing this warning as a result of the above operation.
"FutureWarning: Sorting because non-concatenation axis is not aligned. A future version of pandas will change to not sort by default."
I understand that the resulting dataframe has its columns in alphabetical order after this operation, but I am trying to understand the definition of the "non-concatenation axis" mentioned in the warning. What is it, and where is it significant in pandas?

It is the other axis—the axis along which you do not concatenate. If you're concatenating along the index (axis=0), then the non-concatenation axis would be 1 (i.e., the columns), and vice versa.
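For example, a minimal sketch with hypothetical frames (note that DataFrame.append has since been deprecated in favour of pd.concat, which takes a sort argument to silence this warning):

import pandas as pd

df1 = pd.DataFrame({'b': [1, 2], 'a': [3, 4]})
df2 = pd.DataFrame({'a': [5, 6], 'b': [7, 8]})

# Concatenating along axis=0 (rows): the non-concatenation axis is
# axis=1 (columns). The column orders differ, so older pandas versions
# sort the columns alphabetically and emit the FutureWarning above;
# sort=False keeps the column order of the first frame instead.
df = pd.concat([df1, df2], sort=False)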

Related

How do I append a column from a numpy array to a pd dataframe?

I have a numpy array of 100 predicted values called first_100. If I convert these to a dataframe, they are indexed 0, 1, 2, etc. However, the predictions are for rows whose indices are in a random order (66, 201, 32, etc.). I want to put the actual values and the predictions in the same dataframe, but I'm really struggling.
The real values are in a dataframe called first_100_train.
I've tried the following:
pd.concat([first_100, first_100_train], axis=1)
This doesn't work; for some reason it returns the entire dataframe, indexed from 0, so there are lots of NaNs...
first_100_train['Prediction'] = first_100[0]
This is almost what I want, but again because the indexes are different the data doesn't match up. I'd really appreciate any suggestions.
EDIT: After managing to join the dataframes I now have the result shown in the screenshot (omitted here); I'd like to be able to drop the final column.
Here are first_100.head() and first_100_train.head() (outputs omitted).
The problem is that index 2 from first_100 actually corresponds to index 480 of first_100_train.
Set default index values with DataFrame.reset_index and drop=True for correct alignment:
pd.concat([first_100.reset_index(drop=True),
           first_100_train.reset_index(drop=True)], axis=1)
Or, if the first DataFrame already has a default RangeIndex, the solution simplifies to:
pd.concat([first_100,
           first_100_train.reset_index(drop=True)], axis=1)
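As for the EDIT about dropping the final column after the join: the screenshot isn't shown, so this is a guess at the intent, but positional slicing drops the last column, and drop works if the label is known:

df = df.iloc[:, :-1]                   # drop the last column by position
# or, if the column label is known:
# df = df.drop(columns=['Prediction'])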

Resampling/interpolating/extrapolating columns of a pandas dataframe

I am interested in knowing how to interpolate/resample/extrapolate columns of a pandas dataframe for purely numerical and datetime-type indices. I'd like to perform this with either straightforward linear interpolation or spline interpolation.
Consider first a simple pandas data frame that has a numerical index (signifying time) and a couple of columns:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(10,2), index=np.arange(0,20,2))
print(df)
           0         1
0   0.937961  0.943746
2   1.687854  0.866076
4   0.410656 -0.025926
6  -2.042386  0.956386
8   1.153727 -0.505902
10 -1.546215  0.081702
12  0.922419  0.614947
14  0.865873 -0.014047
16  0.225841 -0.831088
18 -0.048279  0.314828
I would like to resample the columns of this dataframe over some denser grid of time indices which possibly extend beyond the last time index (thus requiring extrapolation).
Denote the denser grid of indices as, for example:
t = np.arange(0,40,.6)
The interpolate method for a pandas dataframe seems to interpolate only NaNs, and thus requires those new indices (which may or may not coincide with the original indices) to already be part of the dataframe. I guess I could append a dataframe of NaNs at the new indices to the original dataframe (excluding any indices appearing in both) and call interpolate, then remove the original time indices. Or I could do everything in scipy and create a new dataframe at the desired time indices.
Is there a more direct way to do this?
In addition, I'd like to know how to do this same thing when the indices are, in fact, datetimes. That is, when, for example:
df.index = np.array('2015-07-04 02:12:40', dtype=np.datetime64) + np.arange(0,20,2)
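A sketch of both routes, assuming the df and t defined above: pandas' interpolate can handle the union-reindex idea from the question, though it does not linearly extrapolate past the last index; scipy's interp1d with fill_value='extrapolate' covers the points beyond index 18.

import numpy as np
import pandas as pd
from scipy.interpolate import interp1d

df = pd.DataFrame(np.random.randn(10, 2), index=np.arange(0, 20, 2))
t = np.arange(0, 40, .6)

# Interpolation only: reindex onto the union of old and new indices,
# interpolate linearly in the index values, then keep just the new grid.
# Note: values beyond the last original index are not linearly extrapolated.
df_interp = (df.reindex(df.index.union(t))
               .interpolate(method='index')
               .loc[t])

# True linear extrapolation via scipy, wrapped back into a DataFrame.
f = interp1d(df.index.values, df.values, axis=0, fill_value='extrapolate')
df_new = pd.DataFrame(f(t), index=t, columns=df.columns)

For a datetime index, the same scipy route should work after converting the index to integers, e.g. df.index.astype('int64') for nanoseconds since the epoch.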

Does a DataFrame with a single row have all the attributes of a DataFrame?

I am slicing a DataFrame out of a large DataFrame, and the daughter df has only one row. Does a daughter df with a single row have the same attributes as the parent df?
import numpy as np
import pandas as pd
dates = pd.date_range('20130101',periods=6)
df = pd.DataFrame(np.random.randn(6,2),index=dates,columns=['col1','col2'])
df1=df.iloc[1]
type(df1)
>> pandas.core.series.Series
df1.columns
>> AttributeError: 'Series' object has no attribute 'columns'
Is there a way I can use all the attributes of a pd.DataFrame on a pd.Series?
Possibly what you are looking for is a dataframe with one row:
>>> pd.DataFrame(df1).T # T -> transpose
                col1      col2
2013-01-02 -0.428913  1.265936
What happens when you do df.iloc[1] is that pandas converts it to a series, which is one-dimensional, so the columns become the index. You can still do df1['col1'], but you can't do df1.columns, because a series is basically a single column, and the old column labels are now the new index.
As a result, you can recover the former columns like this:
>>> df1.index.tolist()
['col1', 'col2']
This used to confuse me quite a bit. I also expected df.iloc[1] to be a dataframe with one row, but it has always been the default behavior of pandas to automatically convert any one-dimensional dataframe slice (whether row or column) to a series. It's pretty natural for a row, less so for a column (since the columns become the index), but it really is not a problem once you understand what is happening.
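As an aside (an alternative not covered above): if you want the slice itself to stay two-dimensional, select with a list of positions, which skips the conversion to a series:

df1 = df.iloc[[1]]   # note the double brackets: a 1-row DataFrame
type(df1)            # pandas.core.frame.DataFrame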

Converting row of DataFrame to a Series (Pandas)

So I could not find how to do this in the documentation, but I am reading a row from a dataframe as such:
self.data = df[n:n+1]
But this results in self.data being a 1-row, 7-column dataframe instead of just a series. However, the test cases for my course depend on it being a series. Is there an easy way to make that conversion?
Just use .iloc:
df.iloc[n]
This selects the row by position, so it works regardless of the index labels. (.ix used to work here too, but it has been deprecated and removed from pandas.)
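For example, a quick sketch with a hypothetical frame and n = 1:

import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4., 5., 6.]})
n = 1

row_df = df[n:n+1]        # 1-row DataFrame, as in the question
row_series = df.iloc[n]   # Series: the column labels become the index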

Multiplying some columns of a dataframe with a series

I have a dataframe df1 where the index is a DatetimeIndex and there are 5 columns, col1, col2, col3, col4, col5.
I have another df2 which has an almost equal DatetimeIndex (some days of df1 may be missing from df2), and a single 'Value' column.
I would like to multiply df1 in place by the Value from df2 where the dates match, but not for all of the columns col1...col5, only col1...col4.
I can see it is possible to multiply col1*Value, then col2*Value and so on... and make up a new dataframe to replace df1.
Is there a more efficient way?
You can achieve this by reindexing the second dataframe so they are the same shape, and then using the DataFrame method mul:
Create two dataframes with datetime indices, the second using only business days to make sure we have gaps between the two. Set the dates as the indices.
import pandas as pd
# first frame
rng1 = pd.date_range('1/1/2011', periods=90, freq='D')
df1 = pd.DataFrame({'value': range(1, 91), 'date': rng1})
df1.set_index('date', inplace=True)
# second frame with a business-day date index
rng2 = pd.date_range('1/1/2011', periods=90, freq='B')
df2 = pd.DataFrame({'date': rng2})
df2['value_to_multiply_by'] = range(1, 91)
df2.set_index('date', inplace=True)
Reindex the second frame with the index from the first. df2 will now have the gaps for non-business days filled with the most recent valid observation.
# reindex the second dataframe to match the first
df2 = df2.reindex(index=df1.index, method='ffill')
Multiply df1 by df2['value_to_multiply_by']:
# multiply, filling NaNs with 1 to avoid propagating NaNs
# NaNs can still exist if there are no valid previous observations,
# such as at the beginning of the frame
df1.mul(df2['value_to_multiply_by'].fillna(1), axis=0)
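To restrict the multiplication to col1 through col4 only, as the question asks, one option (assuming the column names from the question rather than the toy frames above) is to apply mul to just that subset:

cols = ['col1', 'col2', 'col3', 'col4']
df1[cols] = df1[cols].mul(df2['value_to_multiply_by'].fillna(1), axis=0)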