Convert floats to ints in pandas dataframe

I have a pandas dataframe with a column ‘distance’ and it is of datatype ‘float64’.
Distance
14.827379
0.754254
0.2284546
1.833768
I want to convert these numbers to whole numbers (14, 0, 0, 1). I tried the following, but I get the error “ValueError: Cannot convert NA to integer”:
df['distance(kmint)'] = result['Distance'].astype('int')
Any help would be appreciated!!

I filtered out the NaN's from the dataframe using this:
result = result[np.isfinite(result['distance(km)'])]
Then, I was able to convert from float to int.
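A minimal sketch of that filter-then-cast approach (the sample values are taken from the question; the column names are assumed):

```python
import numpy as np
import pandas as pd

result = pd.DataFrame({'Distance': [14.827379, 0.754254, np.nan, 1.833768]})

# Keep only rows where the distance is a finite number (drops NaN and inf)
result = result[np.isfinite(result['Distance'])]

# The cast now succeeds because no NA values remain;
# astype(int) truncates toward zero (14.83 -> 14, 0.75 -> 0)
result['distance(kmint)'] = result['Distance'].astype(int)
print(result['distance(kmint)'].tolist())  # [14, 0, 1]
```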

An alternative approach would be to convert the NaN values as part of your data import and cleaning process. A more generalized solution could involve specifying the values that count as NaN in the read_table command by setting the na_values flag. What you want to make sure of is that there isn't some malformed data like 1.5km in one of your fields that is getting picked up as a NaN value.
pandas.read_table(..., na_values=None, keep_default_na=True, na_filter=True, ....)
Subsequently, once the dataframe is populated and the NaN values are identified properly, you can use the fillna method to substitute in zeros or the values that you identified as your distances.
Finally, it would probably be best to use notnull rather than isfinite when converting the values over to integers.
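A hedged sketch of that import-time cleanup, using read_csv for the same idea (the 'missing' marker and the sample data are invented for illustration):

```python
import pandas as pd
from io import StringIO

csv = "Distance\n14.827379\nmissing\n1.833768\n"

# Tell the reader which strings should be treated as NaN at import time
df = pd.read_csv(StringIO(csv), na_values=['missing'])

# Substitute zeros for the NaNs, after which the integer cast is safe
df['Distance'] = df['Distance'].fillna(0).astype(int)
print(df['Distance'].tolist())  # [14, 0, 1]
```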

Related

Pandas Python, as a currency

Suppose I have a column with currency values as well as blank values. I want the blanks to be represented as 0.00 and the currency values to have two decimal places. How would I do this using pandas in Python?
If you just want to alter the visual appearance you can use
df.fillna(0).apply(lambda x: ["{0:.2f}".format(item) for item in x ])
This will fill all np.nan with 0 and then convert the full dataframe to strings with the given format specification. However, this is a very blunt approach, since you lose the ability to do any calculations with your data.
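If you need to keep the column numeric for later arithmetic, one alternative sketch (the column name and values are invented) is to fill and round, deferring string formatting to display time:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'price': [12.5, np.nan, 3.14159]})

# fillna + round keeps the column float64, so sums and means still work;
# only the precision is normalized to two decimal places
df['price'] = df['price'].fillna(0).round(2)
print(df['price'].tolist())  # [12.5, 0.0, 3.14]
```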

How to retain NaN values using pandas factorize()?

I have a Pandas data frame with several columns, with some columns comprising categorical entries. I convert (or, encode) these entries to numerical values using factorize() as follows:
for column in df.select_dtypes(['category']):
    df[column] = df[column].factorize(na_sentinel=None)[0]
The columns have several NaN entries, so I set na_sentinel=None to retain the NaN entries. However, the NaN values are not retained (they get converted to numerical entries), which is not what I want. My Pandas version is 1.3.5. Is there something I am missing?
Factorize converts NaN values to -1 by default. The NaN values are retained in this way, since they can still be identified by the -1. You would probably want to keep the default, which is:
na_sentinel=-1
see
https://pandas.pydata.org/docs/reference/api/pandas.factorize.html
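A small sketch of that default behavior (the sample values are invented; note that in later pandas versions the na_sentinel argument was replaced by use_na_sentinel, with the same -1 default):

```python
import numpy as np
import pandas as pd

# With the default sentinel, NaN entries map to -1 rather than a real code
codes, uniques = pd.factorize(pd.Series(['red', np.nan, 'blue', 'red']))
print(codes.tolist())  # [0, -1, 1, 0]  -- the NaN is marked by -1
print(list(uniques))   # ['red', 'blue']
```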

Pandas dataframe mixed dtypes when reading csv

I am reading in a large dataframe that is throwing a DtypeWarning (I understand this warning) but am struggling to prevent it (I don't want to set low_memory to False, as I would like to specify the correct dtypes).
For every column, the majority of rows are float values and the last 3 rows are strings (metadata, basically: information about each column). I understand that I can set the dtype per column when reading in the CSV; however, I do not know how to make rows 1:n float32, for example, and the last 3 rows strings. I would like to avoid reading in two separate CSVs. The resulting dtype of all columns after reading in the dataframe is 'object'. Below is a reproducible example. The dtype warning is not thrown when reading it in, I am guessing because of the small size of the dataframe; however, the result is exactly the same as the problem I am facing. I would like to make the first 3 rows float32 and the last 3 string so that they have the correct dtypes. Thank you!
reproducible example:
df = pd.DataFrame([[0.1, 0.2, 0.3], [0.1, 0.2, 0.3], [0.1, 0.2, 0.3],
                   ['info1', 'info2', 'info3'], ['info1', 'info2', 'info3'], ['info1', 'info2', 'info3']],
                  index=['index1', 'index2', 'index3', 'info1', 'info2', 'info3'],
                  columns=['column1', 'column2', 'column3'])
df.to_csv('test.csv')
df1 = pd.read_csv('test.csv', index_col=0)
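One hedged sketch of a fix, building on the reproducible example (recreated in memory here): read everything in as object, then split off the trailing metadata rows positionally and cast the data rows:

```python
import pandas as pd
from io import StringIO

# Recreate the question's CSV in memory
df = pd.DataFrame([[0.1, 0.2, 0.3]] * 3 + [['info1', 'info2', 'info3']] * 3,
                  index=['index1', 'index2', 'index3', 'info1', 'info2', 'info3'],
                  columns=['column1', 'column2', 'column3'])
buf = StringIO(df.to_csv())

df1 = pd.read_csv(buf, index_col=0)   # mixed rows -> every column is object

# Split: all rows except the last 3 are numeric data, the last 3 are metadata
data = df1.iloc[:-3].astype('float32')
meta = df1.iloc[-3:].astype(str)
print(data.dtypes.tolist())  # three float32 dtypes
```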

NaN output when multiplying row and column of dataframe in pandas

I have two data frames the first one looks like this:
and the second one like so:
I am trying to multiply the values in the 'number of donors' column of the second data frame (96 values) with the values in the first row of the first data frame, columns 0-95 (also 96 values).
Below is the code I have for multiplying the two right now, but as you can see the values are all NaN:
Does anyone know how to fix this?
Your second dataframe has dtype object; you must convert it to float:
df_sls.iloc[0,3:-1].astype(float)
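A hedged sketch of why that cast matters (the frame contents and names here are invented, since the question's screenshots are not included). Two things produce the all-NaN result: the object dtype, and the fact that multiplying two Series aligns them on index labels, so mismatched labels yield NaN everywhere:

```python
import pandas as pd

donors = pd.Series(['10', '20', '30'], index=[0, 1, 2])     # object dtype
rates = pd.Series([0.5, 0.25, 0.1], index=['a', 'b', 'c'])  # row from the other frame

# Cast the object column to float and drop the labels with .to_numpy(),
# so the multiplication is positional rather than label-aligned
result = rates.to_numpy() * donors.astype(float).to_numpy()
print(result.tolist())  # [5.0, 5.0, 3.0]
```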

Pandas DataFrame: sort_values by an index with empty strings

I have a pandas DataFrame with a multi-level index. I want to sort by one of the index levels. It has float values, but occasionally a few empty strings too, which I want to be treated as nan.
df = pd.DataFrame(dict(x=[1,2,3,4]), index=[1,2,3,''])
df.index.name = 'i'
df.sort_values('i')
TypeError: '<' not supported between instances of 'str' and 'int'
One way to solve the problem is to replace the empty strings with nan, do the sort, and then replace nan with empty strings again.
I am wondering if there is any way we could tweak sort_values to consider empty strings as nan.
Why there are empty strings in the first place?
In my application, the data actually has missing values, which are read as np.nan. But np.nan values cause problems with groupby, so they are replaced with empty strings. I wish there were a constant like nan that groupby treated like an empty string and numeric operations treated like nan.
I am wondering if there is any way we could tweak sort_values to consider empty strings as nan.
In pandas, missing values are not empty strings; only when you save a DataFrame with missing values are they written out as empty strings.
Btw, the main problem is the mixed values - numeric together with strings (the empty values). It is best to convert all strings to numeric to avoid this.
You can replace the empty values with missing values using rename:
df = pd.DataFrame(dict(x=[1,2,3,4]), index=[1,2,3,''])
df.index.name = 'i'
df = df.rename({'':np.nan})
df = df.sort_values('i')
print (df)
x
i
1.0 1
2.0 2
3.0 3
NaN 4
A possible solution, if the original data cannot be changed, is to get the positions of the sorted values with Index.argsort and change the order with DataFrame.iloc:
df = df.iloc[df.rename({'':np.nan}).index.argsort()]
print (df)
x
i
1 1
2 2
3 3
4
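As an aside, the asker's wish above - a value that groupby keeps as a group but numeric operations treat as nan - is partly addressed in newer pandas: since version 1.1, groupby accepts dropna=False, which keeps NaN keys as their own group and may remove the need for the empty-string workaround entirely. A small sketch with invented data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'key': [1.0, np.nan, 1.0], 'x': [10, 20, 30]})

# dropna=False keeps the NaN key as its own group instead of silently dropping it
out = df.groupby('key', dropna=False)['x'].sum()
print(out.tolist())  # [40, 20]  -- the NaN group sorts last
```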