Multi-Index: how can I group all these columns together?

I have a DataFrame which is not a MultiIndex; it looks like this: [screenshot of the DataFrame]
As you can see, it has 127 rows and 1328 columns. I'm pretty new to all of this, and I've been trying to 'group' the 1328 columns under a single name, like this: [screenshot of the desired MultiIndex layout]
I've tried playing around with set_index and reset_index, but I can't figure out how to do it!
If you guys have any ideas, thanks!
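A minimal sketch of one way to do this, assuming the goal is to nest all existing columns under a single top-level label (the label 'data' below is hypothetical):

import numpy as np
import pandas as pd

# toy stand-in for the 127 x 1328 frame
df = pd.DataFrame(np.random.rand(127, 1328))

# nest every existing column under one top-level label,
# turning the flat columns into a two-level MultiIndex
df.columns = pd.MultiIndex.from_product([['data'], df.columns])

print(df['data'].shape)  # (127, 1328) -- the original columns, grouped

Note this changes only the column index; set_index and reset_index operate on the row index, which is why they didn't help here.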

Related

How to read csv files correctly using pandas?

I have a CSV file like the one below. I need to check whether any row has more fields than the header does. For example:
name,age,profession
"a","24","teacher","cake"
"b",31,"Doctor",""
"c",27,"Engineer","tea"
If I try to read it using
print(pd.read_csv('test.csv'))
it will print as below.
name age profession
a 24 teacher cake
b 31 Doctor NaN
c 27 Engineer tea
But that's wrong: it only parsed this way because the header has fewer columns than the data rows. So I need to identify this scenario as an invalid CSV format. What is the best way to test for this, other than reading the file as strings and checking the length of each row?
An important point: the columns can differ from file to file; there are no mandatory columns.
You can try passing header=None to .read_csv. Then pandas will throw a ParserError if the number of fields in some row doesn't match the first row. For example:
try:
    df = pd.read_csv("your_file.csv", header=None)
except pd.errors.ParserError:
    print("File Invalid")

Encode all data in one column and assign the same code if data has the same value

I have a DataFrame which has approximately 100 columns and 20000 rows. Now I want to encode one categorical column so that it has a numerical code. Checking its value counts gives something like this:
df['name'].value_counts()
aaa 650
baa 350
cad 50
dae 10
ef3 1
....
There are about 3300 unique values in total, so I might have codes ranging from 1 to 3300. I will normalize the numerical code before training on it. As I already have many columns in the dataset, I prefer not to use one-hot encoding. So how can I do it? Thank you!
You can enumerate each group using ngroup(). It would look something like:
df.assign(num_code=lambda x: x.groupby(['name']).ngroup())
I don't know what kind of information the column contains; however, I'm not sure it makes sense to assign an incremental numerical code to a column that appears to be categorical when training models.
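A small demo of ngroup() on toy data (note the codes start at 0 and follow the sorted order of the group keys):

import pandas as pd

df = pd.DataFrame({'name': ['aaa', 'baa', 'aaa', 'cad', 'baa']})
df = df.assign(num_code=lambda x: x.groupby(['name']).ngroup())
print(df)
#   name  num_code
# 0  aaa         0
# 1  baa         1
# 2  aaa         0
# 3  cad         2
# 4  baa         1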

groupby 2 columns and count into separate columns based on one column's cases

I'm trying to group by 2 columns, of which the first has 5 different values and the second has 2.
My data looks like this: [screenshot of the data]
and using
df_counted = (
    df_analysis
    .groupby(['TYPE', 'RESULT'])
    .size()
    .sort_values(ascending=False)
    .reset_index(name='COUNT')
)
I was able to transform it into the cases I want: [screenshot of the grouped counts]
However, I don't want a column for RESULT, just the counts.
It's supposed to look like this:
COUNT_TRUE COUNT_FALSE
FORWARD 21 182
BACKWARD 34 170
RIGHT 24 298
LEFT 20 242
NEUTRAL 16 82
The best I could do there was this: [screenshot of the intermediate result]. How do I get there?
Pandas can build a pivot table from a DataFrame, and your task can be done by making one:
df_counted.pivot_table(index="TYPE", columns="RESULT", values="COUNT")
Result: [screenshot of the pivoted table]
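A minimal end-to-end sketch with two of the TYPE values from the question, renaming the pivoted columns to match the desired output:

import pandas as pd

df_counted = pd.DataFrame({
    'TYPE':   ['FORWARD', 'FORWARD', 'BACKWARD', 'BACKWARD'],
    'RESULT': [True, False, True, False],
    'COUNT':  [21, 182, 34, 170],
})

out = (
    df_counted
    .pivot_table(index='TYPE', columns='RESULT', values='COUNT')
    .rename(columns={True: 'COUNT_TRUE', False: 'COUNT_FALSE'})
    .rename_axis(columns=None)  # drop the leftover 'RESULT' columns name
)
print(out)
#           COUNT_FALSE  COUNT_TRUE
# TYPE
# BACKWARD          170          34
# FORWARD           182          21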
Solved it by going kind of full SQL. It's not elegant, but it works (df_counted here is the last df from the question, with the NaN values):
# keep the first row per TYPE and drop its COUNT_POS column
df_pos = df_counted.drop_duplicates(subset=['TYPE'], keep='first').drop(columns=['COUNT_POS'])
# keep the last row per TYPE and drop its COUNT_NEG column
df_neg = df_counted.drop_duplicates(subset=['TYPE'], keep='last').drop(columns=['COUNT_NEG'])
# join on TYPE
df = df_pos.set_index('TYPE').join(df_neg.set_index('TYPE'))
If someone has a more elegant way of doing this, I'd be super interested to see it.
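For the curious, a possibly more compact route (a sketch, assuming the tidy TYPE/RESULT/COUNT frame from the groupby step, which has unique TYPE/RESULT pairs) is unstack:

out = (
    df_counted
    .set_index(['TYPE', 'RESULT'])['COUNT']
    .unstack()
    .rename(columns={True: 'COUNT_TRUE', False: 'COUNT_FALSE'})
)

This reshapes directly without an aggregation step, which is all pivot_table adds here.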

Need explanation on how pandas.drop is working here

I have a DataFrame, let's say xyz. I have written code to find the percentage of null values each column has in the DataFrame. My code is below:
round(100*(xyz.isnull().sum()/len(xyz.index)), 2)
Let's say I got the following results:
abc 26.63
def 36.58
ghi 78.46
I want to drop column ghi because more than 70% of its values are null.
I achieved it using the following code:
xyz = xyz.drop(xyz.loc[:, round(100*(xyz.isnull().sum()/len(xyz.index)), 2) > 70].columns, 1)
but I don't understand how this code works. Can anyone please explain it?
The code is doing the following:
xyz.drop([...], 1)
removes the specified elements along a given axis, either rows or columns. In this particular case, df.drop(..., 1) means you're dropping along axis 1, i.e., columns.
xyz.loc[:, ...].columns
returns the column names resulting from your slicing condition.
round(100*(xyz.isnull().sum()/len(xyz.index)), 2) > 70
This instruction counts the nulls in each column and normalizes by the number of rows, effectively computing the percentage of NaN per column. The amount is then rounded to 2 decimal places, and finally it returns True if the share of NaN is more than 70%. Hence, you get a mapping between columns and a True/False array.
Putting everything together: you first produce a Boolean array marking which columns have more than 70% NaN; then, using .loc, you apply Boolean indexing to look only at the columns you want to drop (NaN % > 70); then, using .columns, you recover the names of those columns, which are finally passed to .drop.
Hopefully this clears things up!
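Breaking the one-liner into steps on a toy frame (hypothetical values) makes the mechanics visible; note that newer pandas wants the axis spelled out as axis=1 or columns= rather than the positional 1:

import numpy as np
import pandas as pd

xyz = pd.DataFrame({
    'abc': [1, 2, np.nan, 4],            # 25% null -> kept
    'ghi': [np.nan, np.nan, np.nan, 1],  # 75% null -> dropped
})

pct_null = round(100 * (xyz.isnull().sum() / len(xyz.index)), 2)
mask = pct_null > 70             # Boolean Series indexed by column name
cols = xyz.loc[:, mask].columns  # names of the columns over the threshold
xyz = xyz.drop(columns=cols)     # equivalent to .drop(cols, 1)
print(xyz.columns.tolist())      # ['abc']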
If the code is hard to understand, you can just use dropna with thresh, since pandas already covers this case. thresh is the minimum number of non-NA values a column needs in order to be kept, so requiring at least 30% non-NA drops every column with more than 70% nulls:
df = df.dropna(axis=1, thresh=round(len(df) * 0.3))

PIG - Converting elements of bags into fields

I have a Pig query with the following output (one row)
(6,{(6,76,35,1565),(6,76,76,920),(6,35,76,906),(6,177,35,822),(6,268,35,720),(6,35,177,701),(6,35,268,694),(6,35,35,656),(6,85,85,611),(6,35,90,559)})
I would like to transform each element of my bag into a field, so
(6,(6,76,35,1565),(6,76,76,920),(6,35,76,906),(6,177,35,822),(6,268,35,720),(6,35,177,701),(6,35,268,694),(6,35,35,656),(6,85,85,611),(6,35,90,559))
where I can give every field a different name: x1, x2, x3, ...
I tried flattening but that made one row for each element of the bag:
6,(6,76,35,1565)
6,(6,76,76,920)
6,(6,35,76,906)
And I want all the elements to remain in one single row.
Any ideas?
Try the following links; they may be helpful:
1. How to convert fields to rows in Pig?
2. write array from pig
You will have to use BagToTuple. Assuming you have a relation A with 2 fields:
B = FOREACH A GENERATE A.$0, FLATTEN(BagToTuple(A.$1));