I want to plot the graphs one by one from the dataframe using a for loop.
names_list = df.columns.tolist()
for name in names_list:
    df[name].plot(figsize=(25, 5))
This code is not doing what I want: the graphs are all drawn in one figure, but they should be in separate ones.
How can I get multiple charts instead of one?
Try the following:
import matplotlib.pyplot as plt

names_list = df.columns.tolist()
for name in names_list:
    fig, ax = plt.subplots(figsize=(25, 5))
    df[name].plot(ax=ax)
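If you also want each chart saved to its own file, here is a small sketch (the PNG filenames are just an assumption, not from the original question):
for name in names_list:
    fig, ax = plt.subplots(figsize=(25, 5))
    df[name].plot(ax=ax)
    ax.set_title(name)             # label each figure with its column name
    fig.savefig(f"{name}.png")     # hypothetical output path
    plt.close(fig)                 # release the figure once it is saved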
If you're able to use seaborn, here's an example using a FacetGrid:
import seaborn as sns, matplotlib.pyplot as plt
In [102]: df.head(3)
Out[102]:
Date Consumption Wind Solar Wind+Solar name
0 2006-01-01 1069.184 NaN NaN NaN mid
1 2006-01-02 1380.521 NaN NaN NaN mid
2 2006-01-03 1442.533 NaN NaN NaN high
g = sns.FacetGrid(data=df,col='name',col_wrap=1,hue='name')
g.fig.set_size_inches(6,3) # compressed just to show example
g.map(sns.lineplot,'Date','Consumption')
plt.show()
Result: (image: one Consumption vs Date line plot per name, stacked vertically)
I have a pandas df in the form of
       cell1  cell2  cell3
       1      Nan    Nan
Cell1  Nan    3      Nan
Cell2  3      Nan    2
Cell3  2      3      Nan
Cell4  Nan    2      3
Cell5  3      1      Nan
I want to put it in a plot like the format in the picture (some kind of heatmap?). It doesn't have to be exactly the same, just the same concept:
Each value has its own color (for example, 1=red, 2=yellow, 3=blue, NaN=white), changing the brightness/fading by the value, and adding a black outline to the cells by value.
If possible, I'd love the column names to be below and rotated (and also, if possible, without seeing the number in the cell, just the color).
(image: the desired plot format)
I've looked at packages like seaborn and style, but didn't manage to do it. Maybe I did color the cells, who knows, but I didn't manage to render the plot so I could see it. I'd appreciate any kind of help.
Importing the packages to be used:
import pandas
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import matplotlib.patches as mpatches
import matplotlib.colors as mcolors
First, you need to convert all the NaN values to 0 in your pandas DataFrame.
df = df.replace(np.nan, 0)
You need a colormap to map the values to colors starting with 0. You can create a colormap using the matplotlib.colors.LinearSegmentedColormap class. You can change the colors as you wish.
cmap = mcolors.LinearSegmentedColormap.from_list("", ["white","red","yellow","blue"])
Then, you can draw the heatmap using the seaborn.heatmap function. We leave annot at its default (False) so the values are not displayed in the cells.
sns.heatmap(df, cmap=cmap, linewidths=0.5, linecolor='black')
plt.show()
Alternatively, if you want to draw the heatmap without the color bar, you can set the cbar attribute to False. Instead, you can add a legend with a specific color for each value to the plot.
legend = ['1', '2', '3']
colors = ['red', 'yellow', 'blue']
patches = [mpatches.Patch(color=colors[i], label=legend[i]) for i in range(len(legend))]
sns.heatmap(df, cmap=cmap, linewidths=0.5, linecolor='black', cbar=False)
plt.legend(handles=patches, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
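Putting the pieces together, here is a minimal end-to-end sketch (assuming the DataFrame from the question is already loaded as df):
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import matplotlib.colors as mcolors
import seaborn as sns

df = df.replace(np.nan, 0)  # NaN -> 0 so missing cells map to white
cmap = mcolors.LinearSegmentedColormap.from_list("", ["white", "red", "yellow", "blue"])
sns.heatmap(df, cmap=cmap, linewidths=0.5, linecolor='black', cbar=False)
colors = ['red', 'yellow', 'blue']
legend = ['1', '2', '3']
patches = [mpatches.Patch(color=colors[i], label=legend[i]) for i in range(len(legend))]
plt.legend(handles=patches, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.xticks(rotation=45)  # rotate the column names below the plot
plt.show()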
I am receiving this error when I try to load a CSV file as a GeoDataFrame. According to resolutions of other questions on this site, this method should do the trick.
Here is the code that I am using to load the file as a gdf and then produce a subset dataframe with only some of the columns present.
cp_union = gpd.read_file(r'C:\Users\User\Desktop\CPAWS\terrestrial_outputs\cp_union.csv')
cp_union.crs = 'epsg:3005'
cp_trimmed = cp_union[['COSEWIC_status','reason_for_designation','cnm_eng','iucn_cat','mgmt_e','status_e','classification','sq_m']]
As stated in the title, the error that I am receiving is this: ValueError: GeoDataFrame does not support multiple columns using the geometry column name 'geometry'. Is there some part of the process of saving a gdf as a CSV and then reloading it as a gdf that would cause the creation of an additional geometry column?
EDIT
In another script, I loaded the same csv file as a pd dataframe. Here is the first row of data within that pd dataframe.
Unnamed: 0 0
fid_critic 0
scntfc_nm Castilleja victoriae
cnm_eng Victoria's Owl-clover
cnm_fren Castilléjie de Victoria
cswc_pop NaN
ch_stat Final
cb_site_nm Cattle Point
ch_detail Detailed Polygon
ch_variant NaN
ch_method NaN
hectares 0.8559
utm_zone 10
utm_east 478383
utm_north 5365043
latitude 48.438164
longitude -123.29226
shape_1 0.0
objectid 10251681.0
area_sqm 8478.6733
feat_len 326.5008
fid_protec -1
name_e NaN
name_f NaN
aichi_t11 NaN
iucn_cat NaN
oecm NaN
o_area 0.0
loc_e NaN
loc_f NaN
type_e NaN
mgmt_e NaN
gov_type NaN
legisl_e NaN
status_e NaN
protdate 0
delisdate 0
owner_e NaN
owner_f NaN
subs_right NaN
comments NaN
url NaN
shape_leng 0.0
protected 0
shape_le_1 320.859687
shape_area 6499.790343
geometry POLYGON ((1200735.4438 384059.0133999996, 1200...
COSEWIC_status Endangered
reason_for_designation This small annual herb is confined to a very s...
sq_m 6499.790343
classification c
Name: 0, dtype: object
So my only theory here is that, when you save a gdf as a CSV, the CSV contains a column called geometry, and when you load that CSV as a gdf, geopandas tries to create a new geometry column on top of the one that was already in the CSV. I could be completely wrong about this. Even if this is the case, I'm not sure how to go about resolving the issue.
Thanks for the help!
Using your sample data to create a CSV (I had to replace the geometry, as the sample is not a valid WKT string), I re-produced your error.
It is solved by loading with pandas and then converting to geopandas.
solution
df = pd.read_csv(f)
cp_union = gpd.GeoDataFrame(
    df.loc[:, [c for c in df.columns if c != "geometry"]],
    geometry=gpd.GeoSeries.from_wkt(df["geometry"]),
    crs="epsg:3005",
)
full code
import pandas as pd
import geopandas as gpd
import io
from pathlib import Path
# fmt: off
df_q = pd.read_csv(io.StringIO("""Unnamed: 0 0
fid_critic 0
scntfc_nm Castilleja victoriae
cnm_eng Victoria's Owl-clover
cnm_fren Castilléjie de Victoria
cswc_pop NaN
ch_stat Final
cb_site_nm Cattle Point
ch_detail Detailed Polygon
ch_variant NaN
ch_method NaN
hectares 0.8559
utm_zone 10
utm_east 478383
utm_north 5365043
latitude 48.438164
longitude -123.29226
shape_1 0.0
objectid 10251681.0
area_sqm 8478.6733
feat_len 326.5008
fid_protec -1
name_e NaN
name_f NaN
aichi_t11 NaN
iucn_cat NaN
oecm NaN
o_area 0.0
loc_e NaN
loc_f NaN
type_e NaN
mgmt_e NaN
gov_type NaN
legisl_e NaN
status_e NaN
protdate 0
delisdate 0
owner_e NaN
owner_f NaN
subs_right NaN
comments NaN
url NaN
shape_leng 0.0
protected 0
shape_le_1 320.859687
shape_area 6499.790343
geometry POLYGON ((5769135.557632876 7083849.386658552, 5843426.213336911 7098018.122146672, 5852821.812968816 7081377.7312996285, 5914814.478616157 7091734.620966213, 5883751.009067913 7017032.330573363, 5902031.719573214 6983898.953064103, 5864452.659165712 6922039.030140929, 5829585.402576889 6878872.269967912, 5835906.522449658 6846685.714836724, 5800391.382286092 6827305.509709548, 5765261.646424723 6876008.057438379, 5765261.402301509 6876010.894933639, 5765264.431247815 6876008.786040769, 5760553.056402712 6927522.42488809, 5720896.599172597 6983360.181762057, 5755349.303491102 7039380.015177476, 5769135.557632876 7083849.386658552))
COSEWIC_status Endangered
reason_for_designation This small annual herb is confined to a very s...
sq_m 6499.790343
classification c"""), sep=r"\s\s+", engine="python", header=None).set_index(0).T
# fmt: on
# generate a CSV file from sample data
f = Path.cwd().joinpath("SO_q.csv")
df_q.to_csv(f, index=False)
# replicate issue...
try:
    gpd.read_file(f)
except ValueError as e:
    print(e)
# now the actual solution
df = pd.read_csv(f)
cp_union = gpd.GeoDataFrame(
    df.loc[:, [c for c in df.columns if c != "geometry"]],
    geometry=gpd.GeoSeries.from_wkt(df["geometry"]),
    crs="epsg:3005",
)
GeoPandas automatically adds a geometry field when you read in your CSV.
As you already have a field called "geometry", GeoPandas raises an exception. GeoPandas doesn't read the WKT strings in the "geometry" field as geometry.
Some workarounds:
the GDAL/OGR CSV driver (which geopandas uses) supports the open option GEOM_POSSIBLE_NAMES which lets you specify a different field name to look for geometry in.
gdf = gpd.read_file('/path/to/gdf.csv',
                    GEOM_POSSIBLE_NAMES="geometry",
                    KEEP_GEOM_COLUMNS="NO")
The KEEP_GEOM_COLUMNS option is also required, otherwise GDAL/OGR will return a "geometry" column as well as the geometry, and you will still get the original ValueError: GeoDataFrame does not support multiple columns using the geometry column name 'geometry'.
Alternatively, if you have control over the original writing of the CSV, the GDAL/OGR CSV driver documentation notes that if a field is named "WKT" it will be read as geometry:
When reading a field named “WKT” is assumed to contain WKT geometry, but also is treated as a regular field.
So one option is to write out the CSV using "WKT" as the geometry column name and then it can be read straight back in:
gdf = gdf.rename_geometry('WKT')
gdf.to_csv('/path/to/gdf.csv', index=False)
# Then in later scripts you can just read it straight back in
gdf = gpd.read_file('/path/to/gdf.csv')
A final option is to read the CSV in as a Pandas DataFrame (per #RobRaymond's answer) then manually create the geometry from the WKT and convert to a GeoDataFrame. Here is an alternative way of doing so:
import geopandas as gpd
df = gpd.read_file('/path/to/gdf.csv', ignore_geometry=True)
# Create geometry objects from WKT strings
df['geometry'] = gpd.GeoSeries.from_wkt(df['geometry'])
# Convert to GDF
gdf = gpd.GeoDataFrame(df)
I have a situation where I need to merge multiple dataframes, which I can do easily using the below code:
# Merge all the datasets together
df_prep1 = df_prep.merge(df1,on='e_id',how='left')
df_prep2 = df_prep1.merge(df2,on='e_id',how='left')
df_prep3 = df_prep2.merge(df3,on='e_id',how='left')
df_prep4 = df_prep3.merge(df_4,on='e_id',how='left')
df_prep5 = df_prep4.merge(df_5,on='e_id',how='left')
df_prep6 = df_prep5.merge(df_6,on='e_id',how='left')
But what I want to understand is whether there is any other, more efficient way to perform this merge, maybe using a helper function? If yes, how could I achieve that?
You can use reduce from the functools module to merge multiple dataframes:
from functools import reduce
dfs = [df_1, df_2, df_3, df_4, df_5, df_6]
out = reduce(lambda dfl, dfr: pd.merge(dfl, dfr, on='e_id', how='left'), dfs)
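For a quick self-contained check, here is a tiny sketch with made-up frames (df_1, df_2 and df_3 are just illustrative):
import pandas as pd
from functools import reduce

df_1 = pd.DataFrame({'e_id': [1, 2, 3], 'a': [10, 20, 30]})
df_2 = pd.DataFrame({'e_id': [1, 2], 'b': ['x', 'y']})
df_3 = pd.DataFrame({'e_id': [3], 'c': [0.5]})

dfs = [df_1, df_2, df_3]
out = reduce(lambda dfl, dfr: pd.merge(dfl, dfr, on='e_id', how='left'), dfs)
print(out)  # columns a, b and c are all joined onto e_id; missing matches become NaN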
You can put all your dfs into a list, or pass them from a function, a loop, etc. and then have 1 main df that you merge everything onto.
You can start with an empty df and iterate through. In your case, since you are doing left merge, it looks like your df_prep should already have all of the e_id values that you want. You'll need to figure out what you want to do with any additional columns, e.g., you can have pandas add _x and _y after conflicting column names that you don't merge, or rename them, etc. See this toy example:
main_df = pd.DataFrame({'e_id': [0, 1, 2, 3, 4]})
for x in range(3):
    dfx = pd.DataFrame({'e_id': [x], 'another_col' + str(x): [x * 10]})
    main_df = main_df.merge(dfx, on='e_id', how='left')
to get:
e_id another_col0 another_col1 another_col2
0 0 0.0 NaN NaN
1 1 NaN 10.0 NaN
2 2 NaN NaN 20.0
3 3 NaN NaN NaN
4 4 NaN NaN NaN
I've got a pandas DataFrame filled mostly with real numbers, but there are a few NaN values in it as well.
How can I replace the NaNs with the averages of the columns where they are?
This question is very similar to this one: numpy array: replace nan values with average of columns but, unfortunately, the solution given there doesn't work for a pandas DataFrame.
You can simply use DataFrame.fillna to fill the NaNs directly:
In [27]: df
Out[27]:
A B C
0 -0.166919 0.979728 -0.632955
1 -0.297953 -0.912674 -1.365463
2 -0.120211 -0.540679 -0.680481
3 NaN -2.027325 1.533582
4 NaN NaN 0.461821
5 -0.788073 NaN NaN
6 -0.916080 -0.612343 NaN
7 -0.887858 1.033826 NaN
8 1.948430 1.025011 -2.982224
9 0.019698 -0.795876 -0.046431
In [28]: df.mean()
Out[28]:
A -0.151121
B -0.231291
C -0.530307
dtype: float64
In [29]: df.fillna(df.mean())
Out[29]:
A B C
0 -0.166919 0.979728 -0.632955
1 -0.297953 -0.912674 -1.365463
2 -0.120211 -0.540679 -0.680481
3 -0.151121 -2.027325 1.533582
4 -0.151121 -0.231291 0.461821
5 -0.788073 -0.231291 -0.530307
6 -0.916080 -0.612343 -0.530307
7 -0.887858 1.033826 -0.530307
8 1.948430 1.025011 -2.982224
9 0.019698 -0.795876 -0.046431
The docstring of fillna says that value should be a scalar or a dict; however, it seems to work with a Series as well. If you want to pass a dict, you could use df.mean().to_dict().
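For example, a quick sketch of the dict variant (same df as above); it gives the same filled result:
df.fillna(df.mean().to_dict())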
Try:
sub2['income'].fillna((sub2['income'].mean()), inplace=True)
In [16]: df = DataFrame(np.random.randn(10,3))
In [17]: df.iloc[3:5,0] = np.nan
In [18]: df.iloc[4:6,1] = np.nan
In [19]: df.iloc[5:8,2] = np.nan
In [20]: df
Out[20]:
0 1 2
0 1.148272 0.227366 -2.368136
1 -0.820823 1.071471 -0.784713
2 0.157913 0.602857 0.665034
3 NaN -0.985188 -0.324136
4 NaN NaN 0.238512
5 0.769657 NaN NaN
6 0.141951 0.326064 NaN
7 -1.694475 -0.523440 NaN
8 0.352556 -0.551487 -1.639298
9 -2.067324 -0.492617 -1.675794
In [22]: df.mean()
Out[22]:
0 -0.251534
1 -0.040622
2 -0.841219
dtype: float64
Apply, per column, the mean of that column and fill:
In [23]: df.apply(lambda x: x.fillna(x.mean()),axis=0)
Out[23]:
0 1 2
0 1.148272 0.227366 -2.368136
1 -0.820823 1.071471 -0.784713
2 0.157913 0.602857 0.665034
3 -0.251534 -0.985188 -0.324136
4 -0.251534 -0.040622 0.238512
5 0.769657 -0.040622 -0.841219
6 0.141951 0.326064 -0.841219
7 -1.694475 -0.523440 -0.841219
8 0.352556 -0.551487 -1.639298
9 -2.067324 -0.492617 -1.675794
Although the below code does the job, its performance takes a big hit when you deal with a DataFrame with 100k records or more:
df.fillna(df.mean())
In my experience, one should replace NaN values (be it with the mean or median) only where it is required, rather than applying fillna() all over the DataFrame.
I had a DataFrame with 20 variables, and only 4 of them required NaN value treatment (replacement). I tried the above code (Code 1), along with a slightly modified version of it (Code 2), where I ran it selectively, i.e. only on variables which had a NaN value.
#------------------------------------------------
#----(Code 1) Treatment on overall DataFrame-----
df.fillna(df.mean())
#------------------------------------------------
#----(Code 2) Selective Treatment----------------
for i in df.columns[df.isnull().any(axis=0)]:  #---Applying only on columns with NaN values
    df[i].fillna(df[i].mean(), inplace=True)
#---df.isnull().any(axis=0) gives a True/False flag (a Boolean Series),
#---which, when applied to df.columns[], identifies the columns with NaN values
Below is the performance I observed as I kept increasing the number of records in the DataFrame.
DataFrame with ~100k records
Code 1: 22.06 Seconds
Code 2: 0.03 Seconds
DataFrame with ~200k records
Code 1: 180.06 Seconds
Code 2: 0.06 Seconds
DataFrame with ~1.6 Million records
Code 1: code kept running endlessly
Code 2: 0.40 Seconds
DataFrame with ~13 Million records
Code 1: --did not even try, after seeing performance on 1.6 Mn records--
Code 2: 3.20 Seconds
Apologies for the long answer! Hope this helps!
If you want to impute missing values with mean and you want to go column by column, then this will only impute with the mean of that column. This might be a little more readable.
sub2['income'] = sub2['income'].fillna((sub2['income'].mean()))
import numpy as np
import pandas as pd

# To read data from a csv file
Dataset = pd.read_csv('Data.csv')
X = Dataset.iloc[:, :-1].values

# To calculate the mean, use the imputer class
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
imputer = imputer.fit(X[:, 1:3])
X[:, 1:3] = imputer.transform(X[:, 1:3])
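If you then want the imputed array back as a DataFrame, here is a small sketch (it assumes X still holds every Dataset column except the last, in the original order):
X_imputed = pd.DataFrame(X, columns=Dataset.columns[:-1])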
Directly use df.fillna(df.mean()) to fill all the null values with the mean.
If you want to fill the null values of one column with the mean of that column, you can use this.
Suppose x = df['Item_Weight'], where Item_Weight is the column name.
Here we are assigning back to the column (fill the null values of x with the mean of x, into x):
df['Item_Weight'] = df['Item_Weight'].fillna((df['Item_Weight'].mean()))
If you want to fill null values with some string, use the following (here Outlet_Size is the column name):
df.Outlet_Size = df.Outlet_Size.fillna('Missing')
Pandas: How to replace NaN (nan) values with the average (mean), median or other statistics of one column
Say your DataFrame is df and you have one column called nr_items. This is: df['nr_items']
If you want to replace the NaN values of your column df['nr_items'] with the mean of the column:
Use method .fillna():
mean_value=df['nr_items'].mean()
df['nr_item_ave']=df['nr_items'].fillna(mean_value)
I have created a new df column called nr_item_ave to store the new column with the NaN values replaced by the mean value of the column.
You should be careful when using the mean: if you have outliers, it is more advisable to use the median.
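For example, the same pattern with the median instead (reusing the nr_items column from above):
median_value = df['nr_items'].median()
df['nr_item_ave'] = df['nr_items'].fillna(median_value)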
Another option besides those above is:
df = df.groupby(df.columns, axis = 1).transform(lambda x: x.fillna(x.mean()))
It's less elegant than the previous answers for the mean, but it could be shorter if you want to replace nulls by some other column function.
Using the sklearn SimpleImputer class:
from sklearn.impute import SimpleImputer
missingvalues = SimpleImputer(missing_values=np.nan, strategy='mean')
missingvalues = missingvalues.fit(x[:, 1:3])
x[:, 1:3] = missingvalues.transform(x[:, 1:3])
Note: in the recent version the missing_values parameter value changed to np.nan from 'NaN', and SimpleImputer does not take an axis argument (it always imputes column-wise), so it is omitted here.
I use this method to fill missing values with the average of a column.
fill_mean = lambda col : col.fillna(col.mean())
df = df.apply(fill_mean, axis = 0)
You can also use value_counts to get the most frequent values. This would work on different datatypes.
df = df.apply(lambda x:x.fillna(x.value_counts().index[0]))
See the pandas value_counts API reference for details.
I'm trying to generate 4 plots from a DataFrame using Seaborn
Date A B C D
2019-04-05 330.665 161.975 168.69 0
2019-04-06 322.782 150.243 172.539 0
2019-04-07 322.782 150.243 172.539 0
2019-04-08 295.918 127.801 168.117 0
2019-04-09 282.674 126.894 155.78 0
2019-04-10 293.818 133.413 160.405 0
I have cast the dates using pd.to_datetime and the numbers using pd.to_numeric. Here is the df.info():
<class 'pandas.core.frame.DataFrame'>
Int64Index: 6 entries, 460 to 465
Data columns (total 5 columns):
Date 6 non-null datetime64[ns]
A 6 non-null float64
B 6 non-null float64
C 6 non-null float64
D 6 non-null float64
dtypes: datetime64[ns](1), float64(4)
memory usage: 288.0 bytes
I can do a wide column plot by just calling .plot() on df.
However,
The legend of the plot is covering the plot itself
I would instead like to have 4 separate plots in 1 diagram and have tried using lmplot to achieve this.
I would like to add labels to the plot like so:
(image: the desired plot with labels)
I first melted the data:
df=pd.melt(df,id_vars='Date', var_name='Var', value_name='Unit')
And then tried lmplot
sns.lmplot(x = df['Date'], y='Unit', col='Var', data=df)
However, I get the traceback:
TypeError: Invalid comparison between dtype=datetime64[ns] and str
I have also tried setting df.set_index['Date'] and replotting that using x=df.index and that gave me the same error.
The data can be plotted using Google Sheets but I am trying to automate a workflow where the chart can be generated and sent via Slack to selected recipients.
I hope I have expressed myself clearly enough as I am rather new to Python and Seaborn and hope to get some help from the experts here.
Regarding the legend, you can just use .legend(loc="upper left", bbox_to_anchor=(1,1)), as in this example:
%matplotlib inline
import pandas as pd
import numpy as np
data = np.random.rand(10,4)
df = pd.DataFrame(data, columns=["A", "B", "C", "D"])
df.plot()\
.legend(loc="upper left", bbox_to_anchor=(1,1));
For the second issue, IIUC, you can start from:
df.plot(subplots=True, layout=(2,2));
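Here is a minimal sketch applying that to the question's frame (assuming df is still in its original wide form, i.e. before the pd.melt step, and Date has already been parsed with pd.to_datetime):
import matplotlib.pyplot as plt

# One subplot per column, arranged in a 2x2 grid
df.set_index('Date')[['A', 'B', 'C', 'D']].plot(subplots=True, layout=(2, 2), figsize=(10, 6))
plt.tight_layout()
plt.show()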