Pandas replace with blanks [duplicate]

I am trying to create a dataframe in pandas using a CSV that is semicolon-delimited, and uses commas for the thousands separator on numeric data. Is there a way to read this in so that the type of the column is float and not string?

Pass param thousands=',' to read_csv to read those values as thousands:
In [27]:
import pandas as pd
import io
t="""id;value
0;123,123
1;221,323,330
2;32,001"""
pd.read_csv(io.StringIO(t), thousands=r',', sep=';')
Out[27]:
   id      value
0   0     123123
1   1  221323330
2   2      32001
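
To confirm the column came in as a numeric dtype rather than as strings, check its dtype (a quick follow-on to the example above; these sample values have no decimal part, so they parse as int64 rather than float):
df = pd.read_csv(io.StringIO(t), thousands=',', sep=';')
print(df['value'].dtype)  # int64 -- numeric, not object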

The answer to this question should be short:
df = pd.read_csv('filename.csv', thousands=',')

Take a look at the read_csv documentation: there is a keyword argument 'thousands' that you can pass the ',' into. Likewise, if you had European-formatted data, which uses '.' as the thousands separator (and ',' for the decimal point), you could pass thousands='.' and decimal=',' in the same way.
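
As a quick illustration of the European case, a minimal sketch with made-up values, using '.' for thousands and ',' for decimals:
import pandas as pd
import io
t = """id;value
0;1.234,5
1;2.000.000,75"""
df = pd.read_csv(io.StringIO(t), thousands='.', decimal=',', sep=';')
print(df['value'].dtype)  # float64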

Related

Extracting portions of the entries of Pandas dataframe

I have a Pandas dataframe with several columns, where the entries of each column are a combination of numbers, upper- and lower-case letters, and some special characters, i.e. "=A-Za-z0-9_|". Each entry of the column is of the form:
'x=ABCDefgh_5|123|'
I want to retain only the numbers 0-9 appearing between | | and strip out all other characters. Here is my code for one column of the dataframe:
list(map(lambda x: x.lstrip(r'\[=A-Za-z_|,]+'), df[1]))
However, the code returns the full entry 'x=ABCDefgh_5|123|' without stripping out anything. Is there an error in my code?
There is no regex involved here: str.lstrip does not take a pattern, it treats its argument as a plain set of characters to strip from the left, and since the leading 'x' is not in that set, nothing gets removed. Instead of working with these unreadable regex expressions, you might want to consider a simple split. For example:
import pandas as pd
d = {'col': ["x=ABCDefgh_5|123|", "x=ABCDefgh_5|123|"]}
df = pd.DataFrame(data=d)
output = df["col"].str.split("|").str[1]  # the piece between the first and second '|'
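
If you do want the numbers as numbers, a short regex with str.extract also works; here is a minimal sketch (assuming, as in the example above, exactly one |digits| group per entry):
import pandas as pd
d = {'col': ["x=ABCDefgh_5|123|", "x=ABCDefgh_5|456|"]}
df = pd.DataFrame(data=d)
# capture the digits between the two pipes, then convert to a numeric dtype
numbers = df["col"].str.extract(r'\|(\d+)\|', expand=False).astype(int)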

pandas remove spaces from Series

The question is how to access the strings in the first column so that string manipulations can be performed on each value, for example removing the spaces in front of each string.
import pandas as pd
data = pd.read_csv("adult.csv", sep='\t', index_col=0)
series = data['workclass'].value_counts()
print(series)
Here is the file:
Zipped csv file
The strings are in the index here, so use str.strip on series.index:
series.index = series.index.str.strip()
But if you need to convert the series to a two-column DataFrame, use:
df = series.rename_axis('a').reset_index(name='b')
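
A minimal, self-contained sketch of both steps (made-up counts stand in for the adult.csv data, which is an assumption here):
import pandas as pd
# stand-in for data['workclass'].value_counts()
series = pd.Series([3, 2], index=[' Private', ' State-gov'])
series.index = series.index.str.strip()  # ' Private' -> 'Private'
df = series.rename_axis('a').reset_index(name='b')  # columns 'a' and 'b'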

Using corrwith() on 2 time series data frames [duplicate]


PySpark Dataframe: append to each value of a column a word

I would like to append a word (for example, from a list of words) to each value of a column in a PySpark dataframe. I thought to just convert it to pandas because that is easier, but I need to do it in PySpark. Any ideas? Thank you :)
You can do it easily with the concat function:
from pyspark.sql import functions as F

for col in df.columns:
    # withColumn returns a new DataFrame, so reassign it
    df = df.withColumn(col, F.concat(F.col(col), F.lit("new_word")))
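
A minimal end-to-end sketch (the column name, sample rows, and appended word are made up for illustration):
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("a",), ("b",)], ["col1"])
for col in df.columns:
    df = df.withColumn(col, F.concat(F.col(col), F.lit("_suffix")))
df.show()  # 'a' -> 'a_suffix', 'b' -> 'b_suffix'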

pandas reading CSV data formatted with comma for thousands separator
