How to randomly fill numbers and strings in Excel using pandas

I have 3000 rows of data in Excel:
id,product,store,revenue,data,state
1,Ball,,222,nil,
1,Pen,,234,nil,
2,Books,,543,nil,
2,Ink,,123,nil,
I need to fill the 3rd column, store, with a random number between 1 and 5.
My code fills the same number everywhere: df['store'] = df['store'].fillna(random.randint(1,5))
I need to fill the 5th column, state, with random strings from {'CA', 'WD', 'CH', 'AL'}
I need to create a 6th column, country: if the 5th column is 'CA' or 'CH', map to USA; 'WD' and 'AL' map to Japan
{'CA': 'USA', 'CH': 'USA', 'WD': 'Japan', 'AL': 'Japan'}

Let us try:
import numpy as np

n = len(df)
num = np.random.randint(1, 6, size=n)  # one draw per row, values 1..5
states = ['CA', 'WD', 'CH', 'AL']
state = np.random.choice(states, n)
df['store'] = df['store'].fillna(pd.Series(num, index=df.index))
df['state'] = df['state'].fillna(pd.Series(state, index=df.index))
df['country'] = df.state.map({'CA': 'USA', 'CH': 'USA', 'WD': 'Japan', 'AL': 'Japan'})
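The reason the original attempt writes the same number everywhere is that random.randint(1, 5) is evaluated once, before fillna ever runs, so a single scalar fills every hole; passing a Series of per-row draws, as the answer above does, lets the fills vary. A minimal sketch with a small hypothetical frame:

```python
import random

import numpy as np
import pandas as pd

df = pd.DataFrame({'store': [np.nan, 3.0, np.nan, np.nan]})

# Scalar: random.randint runs once, so every NaN receives the same draw.
same = df['store'].fillna(random.randint(1, 5))
assert same[0] == same[2] == same[3]

# Series: one draw per row, aligned on the index, so the fills can differ.
draws = pd.Series(np.random.randint(1, 6, size=len(df)), index=df.index)
varied = df['store'].fillna(draws)
assert varied[1] == 3.0  # existing values are untouched
```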

Related

Select column with the most unique values from csv, python

I'm trying to come up with a way to select from a csv file the one numeric column that shows the most unique values. If there are multiple with the same amount of unique values it should be the left-most one. The output should be either the name of the column or the index.
Position,Experience in Years,Salary,Starting Date,Floor,Room
Middle Management,5,5584.10,2019-02-03,12,100
Lower Management,2,3925.52,2016-04-18,12,100
Upper Management,1,7174.46,2019-01-02,10,200
Middle Management,5,5461.25,2018-02-02,14,300
Middle Management,7,7471.43,2017-09-09,17,400
Upper Management,10,12021.31,2020-01-01,11,500
Lower Management,2,2921.92,2019-08-17,11,500
Middle Management,5,5932.94,2017-11-21,15,600
Upper Management,7,10192.14,2018-08-18,18,700
So here I would want 'Floor' or 4 as my output, given that Floor and Room have the same number of unique values but Floor is the left-most one (I need it in pure Python; I can't use pandas).
I have this nested in a whole bunch of other code; I will spare you the details, but these are the elements used in the code:
new_types_list = [str, int, str, datetime.datetime, int, int]  # all the datatypes of the columns
l1_listed = ['Position', 'Experience in Years', 'Salary', 'Starting Date', 'Floor', 'Room']  # the header for each column
difference = [3, 5, 9, 9, 6, 7]  # basically the number of unique values each column has
And here I try to do exactly what I mentioned before:
another_list = []  # now I create another list
for i in new_types_list:  # this is where the error occurs: it only fills the list with the index of the first integer 3 times instead of the individual indices
    if i == int:
        another_list.append(new_types_list.index(i))
integer_listi = [difference[i] for i in another_list]  # the corresponding unique-value counts for the integer columns
for i in difference:  # now we want to find the highest one
    if i == max(integer_listi):
        chosen_one_i = difference.index(i)  # the index of the column with the most unique values is the chosen one
MUV_LMNC = l1_listed[chosen_one_i]
You can use .nunique() to get the number of unique values in each column:
df = pd.read_csv("your_file.csv")
print(df.nunique())
Prints:
Position 3
Experience in Years 5
Salary 9
Starting Date 9
Floor 7
Room 7
dtype: int64
Then to find max, use .idxmax():
print(df.nunique().idxmax())
Prints:
Salary
EDIT: To select only integer columns (select_dtypes sidesteps dtype-comparison quirks with np.integer):
print(df.select_dtypes('integer').nunique().idxmax())
Prints:
Floor
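Since the question rules out pandas, here is a pure-Python sketch of the same idea. The asker's bug is that list.index always returns the first match; enumerate yields each position instead. The unique counts below are taken from the .nunique() output above:

```python
import datetime

new_types_list = [str, int, str, datetime.datetime, int, int]
l1_listed = ['Position', 'Experience in Years', 'Salary', 'Starting Date', 'Floor', 'Room']
difference = [3, 5, 9, 9, 7, 7]  # unique-value counts, matching df.nunique() above

# Positions of the integer columns (enumerate, not list.index).
int_cols = [i for i, t in enumerate(new_types_list) if t is int]

# max() keeps the first maximal element, so on a tie the left-most column wins.
best = max(int_cols, key=lambda i: difference[i])
print(best, l1_listed[best])  # 4 Floor
```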

Creating a new column to get the rupee converted value of different currencies using pandas

I have a column of type object containing different currencies. I want to create a new column that converts the amounts into rupees. I also have a dictionary holding the conversion rates to Indian rupees, i.e. what 1$ etc. is worth in rupees (example: 1$:70, 1€:79, 1€:90, 1¥:0.654).
The dictionary looks like this:
d1 = {'1$':70, '1€':79,'1€':90,'1¥':0.654}
The dataframe looks like this:
Currency
'20$'
'30€'
'40€'
'35¥'
I want to get:
Currency  Rup_convrtd
'20$'     1400
'30€'     2700
'40€'     3600
'35¥'     22.89
Could somebody please help me build the Rup_convrtd column using pandas?
Use:
d1 = {'1$': 70, '1€': 79, '1€': 90, '1¥': 0.654}
# first remove the leading "1" from each key (the duplicate '1€' key collapses; the last value wins)
d = {k[1:]: v for k, v in d1.items()}
print(d)
{'$': 70, '€': 90, '¥': 0.654}
# map the last character of each value through the dict and multiply by the leading number
df['Rup_convrtd'] = df['Currency'].str[-1].map(d).mul(df['Currency'].str[:-1].astype(int))
print(df)
  Currency  Rup_convrtd
0      20$      1400.00
1      30€      2700.00
2      40€      3600.00
3      35¥        22.89
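If the amounts could be more than one digit wide or carry stray whitespace, a slightly more defensive variant is to split each value explicitly. A sketch using str.extract with named groups on the same sample data:

```python
import pandas as pd

d = {'$': 70, '€': 90, '¥': 0.654}
df = pd.DataFrame({'Currency': ['20$', '30€', '40€', '35¥']})

# Split each value into a numeric part and a trailing currency symbol.
parts = df['Currency'].str.extract(r'(?P<amount>\d+)\s*(?P<symbol>\D)$')
df['Rup_convrtd'] = parts['amount'].astype(int) * parts['symbol'].map(d)
```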

Replacing substrings based on lists

I am trying to replace substrings in a data frame using the lists "name" and "lemma". As long as I enter the lists manually, the code delivers the result in the dataframe m.
name=['Charge','charge','Prepaid']
lemma=['Hallo','hallo','Hi']
m=sdf.replace(regex= name, value =lemma)
As soon as I read both lists in from an Excel file, my code no longer replaces the substrings. I need to use an Excel file, since the lists live in one very large table.
sdf= pd.read_excel('training_data.xlsx')
synonyms= pd.read_excel('synonyms.xlsx')
lemma=synonyms['lemma'].tolist()
name=synonyms['name'].tolist()
m=sdf.replace(regex= name, value =lemma)
Thanks for your help!
df.replace()
Replace values given in to_replace with value.
Values of the DataFrame are replaced with other values dynamically. This differs from updating with .loc or .iloc, which require you to specify a location to update with some value.
In short, this method doesn't change the Series itself, only its values.
This may achieve what you want; make sure the columns read from Excel are strings and contain no NaNs:
m = sdf.replace(regex=synonyms['name'].dropna().astype(str).tolist(),
                value=synonyms['lemma'].dropna().astype(str).tolist())
If you are just trying to replace 'Charge' with 'Hallo', 'charge' with 'hallo', and 'Prepaid' with 'Hi', then you can use replace() and pass the list of words to find as the first argument and the list of replacement words as the value keyword argument.
Try this:
df = df.replace(name, value=lemma)
Example:
name=['Charge','charge','Prepaid']
lemma=['Hallo','hallo','Hi']
df = pd.DataFrame([['Bob', 'Charge', 'E333', 'B442'],
                   ['Karen', 'V434', 'Prepaid', 'B442'],
                   ['Jill', 'V434', 'E333', 'charge'],
                   ['Hank', 'Charge', 'E333', 'B442']],
                  columns=['Name', 'ID_First', 'ID_Second', 'ID_Third'])
df = df.replace(name, value=lemma)
print(df)
Output:
    Name ID_First ID_Second ID_Third
0    Bob    Hallo      E333     B442
1  Karen     V434        Hi     B442
2   Jill     V434      E333    hallo
3   Hank    Hallo      E333     B442
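An equivalent approach is to pass replace() a single old-to-new dict, which also makes it easy to filter out incomplete pairs coming from a spreadsheet. A sketch on the same sample data:

```python
import pandas as pd

name = ['Charge', 'charge', 'Prepaid']
lemma = ['Hallo', 'hallo', 'Hi']
df = pd.DataFrame({'ID_First': ['Charge', 'V434'],
                   'ID_Third': ['B442', 'charge']})

# Build the mapping once; pairs with a missing side could be dropped here.
mapping = {old: new for old, new in zip(name, lemma) if old and new}
df = df.replace(mapping)
```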

How to do this operation in pandas

I have a data frame with a country column. Unfortunately the country names are fully capitalized and I need them as ISO3166_1_Alpha_3 codes;
as an example, United States of America is going to be U.S.A,
United Kingdom is going to be U.K, and so on.
Fortunately, I found a data frame on the internet that contains the two important columns: the country name and the ISO3166_1_Alpha_3 code.
You can find the data frame on this website:
https://datahub.io/JohnSnowLabs/iso-3166-country-codes-itu-dialing-codes-iso-4217-currency-codes
So I wrote this code:
data_geo = pd.read_excel("tab0.xlsx")  # the data frame containing all the capitalized country names
country_iso = pd.read_csv(r"https://datahub.io/JohnSnowLabs/iso-3166-country-codes-itu-dialing-codes-iso-4217-currency-codes/r/iso-3166-country-codes-itu-dialing-codes-iso-4217-currency-codes-csv.csv",
                          usecols=['Official_Name_English', 'ISO3166_1_Alpha_3'])
s = pd.Series(data_geo.countery_name_e).str.lower().str.title()  # lowercase everything except the first letter of each word
y = pd.Series([], dtype=object)
Now I want to make a loop: where a value of s equals an Official_Name_English, I want to append the corresponding ISO3166_1_Alpha_3 from country_iso to the y series; if the country name isn't in the list, append NaN.
These are 20 rows of s:
['Diffrent Countries', 'Germany', 'Other Countries', 'Syria',
 'Jordan', 'Yemen', 'Sudan', 'Somalia', 'Australia',
 'Other Countries', 'Syria', 'Lebanon', 'Jordan', 'Yemen', 'Qatar',
 'Sudan', 'Ethiopia', 'Djibouti', 'Somalia', 'Botswana Land']
How can I do this?
You could try map:
data_geo = pd.read_excel("tab0.xlsx")
country_iso = pd.read_csv(r"https://datahub.io/JohnSnowLabs/iso-3166-country-codes-itu-dialing-codes-iso-4217-currency-codes/r/iso-3166-country-codes-itu-dialing-codes-iso-4217-currency-codes-csv.csv",
                          usecols=['Official_Name_English', 'ISO3166_1_Alpha_3'])
s = pd.Series(data_geo.countery_name_e).str.lower().str.title()
mapper = (country_iso.drop_duplicates('Official_Name_English')
                     .dropna(subset=['Official_Name_English'])
                     .set_index('Official_Name_English')['ISO3166_1_Alpha_3'])
y = s.map(mapper)  # map the title-cased names; unmatched names become NaN
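The remote CSV isn't needed to see the mechanics. A self-contained sketch with small hypothetical stand-ins for both frames:

```python
import pandas as pd

# Hypothetical stand-ins for the real files.
data_geo = pd.DataFrame({'countery_name_e': ['GERMANY', 'JORDAN', 'Other Countries']})
country_iso = pd.DataFrame({'Official_Name_English': ['Germany', 'Jordan', 'Qatar'],
                            'ISO3166_1_Alpha_3': ['DEU', 'JOR', 'QAT']})

s = data_geo['countery_name_e'].str.lower().str.title()
mapper = (country_iso.drop_duplicates('Official_Name_English')
                     .set_index('Official_Name_English')['ISO3166_1_Alpha_3'])
y = s.map(mapper)  # names missing from the table become NaN automatically
```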

Create new column on pandas DataFrame in which the entries are randomly selected entries from another column

I have a DataFrame with the following structure.
df = pd.DataFrame({'tenant_id': [1, 1, 1, 2, 2, 2, 3, 3, 7, 7],
                   'user_id': ['ab1', 'avc1', 'bc2', 'iuyt', 'fvg', 'fbh', 'bcv', 'bcb', 'yth', 'ytn'],
                   'text': ['apple', 'ball', 'card', 'toy', 'sleep', 'happy', 'sad', 'be', 'u', 'pop']})
df = df[['tenant_id', 'user_id', 'text']]
This gives the following output:
tenant_id user_id text
1 ab1 apple
1 avc1 ball
1 bc2 card
2 iuyt toy
2 fvg sleep
2 fbh happy
3 bcv sad
3 bcb be
7 yth u
7 ytn pop
I would like to groupby on tenant_id and create a new column which is a random selection of strings from the user_id column.
Thus, I would like my output to look like the following:
tenant_id user_id text new_column
1 ab1 apple [ab1, bc2]
1 avc1 ball [ab1]
1 bc2 card [avc1]
2 iuyt toy [fvg, fbh]
2 fvg sleep [fbh]
2 fbh happy [fvg]
3 bcv sad [bcb]
3 bcb be [bcv]
7 yth u [pop]
7 ytn pop [u]
Here, random ids from the user_id column have been selected; these ids can be repeated, as "fvg" is repeated for tenant_id=2. I would like a threshold of not more than ten ids. This data is just a sample and has only 10 ids to start with, so in general any number much less than the total number of user_ids; in this case, say one less than the total user_ids that belong to a tenant.
I first tried figuring out how to select a random subset of varying length with df.sample:
new_column = df.user_id.sample(n=np.random.randint(1, 10))
I am kind of lost after this; assigning it to my df results in NaNs, probably because the samples are of variable length. Please help.
Thanks.
Per my comment:
Your 'new column' is not a new column, it's a new cell for a single row.
If you want to assign the result to a new column, you need to create a new column and apply the cell computation to it:
df['new column'] = df['user_id'].apply(lambda x: df.user_id.sample(n=np.random.randint(1, 10)).tolist())
It doesn't really matter which column you use for the apply, since the variable is not used in the computation.
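The apply above still draws from the whole user_id column; to sample peers within each tenant, as the expected output shows, one sketch (assuming a cap of ten ids per cell and sampling without replacement inside each group) is:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng()

df = pd.DataFrame({'tenant_id': [1, 1, 1, 2, 2, 2, 3, 3, 7, 7],
                   'user_id': ['ab1', 'avc1', 'bc2', 'iuyt', 'fvg', 'fbh',
                               'bcv', 'bcb', 'yth', 'ytn'],
                   'text': ['apple', 'ball', 'card', 'toy', 'sleep', 'happy',
                            'sad', 'be', 'u', 'pop']})

parts = []
for _, group in df.groupby('tenant_id'):
    ids = group['user_id'].tolist()
    cap = min(len(ids), 10)  # never more than ten ids per cell
    # For each row in the group, pick a random-length subset of that tenant's ids.
    picks = [list(rng.choice(ids, size=rng.integers(1, cap + 1), replace=False))
             for _ in ids]
    parts.append(pd.Series(picks, index=group.index))
df['new_column'] = pd.concat(parts)  # concat restores the original row order via the index
```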