Can I use a CSV in Spark MLLib? - dataframe

I'm new to using Spark's MLlib Python API. I have my data in CSV format like so:
Label 0 1 2 3 4 5 6 7 8 9 ... 758 759 760 761 762 763 764 765 766 767
0 -0.168307 -0.277797 -0.248202 -0.069546 0.176131 -0.152401 0.12664 -0.401460 0.125926 0.279061 ... -0.289871 0.207264 -0.140448 -0.426980 -0.328994 0.328007 0.486793 0.222587 0.650064 -0.513640
3 -0.313138 -0.045043 0.279587 -0.402598 -0.165238 -0.464669 0.09019 0.008703 0.074541 0.142638 ... -0.094025 0.036567 -0.059926 -0.492336 -0.006370 0.108954 0.350182 -0.144818 0.306949 -0.216190
2 -0.379293 -0.340999 0.319142 0.024552 0.142129 0.042989 -0.60938 0.052103 -0.293400 0.162741 ... 0.108854 -0.025618 0.149078 -0.917385 0.110629 0.146427
Can I use this as is by loading it using df = spark.read.format("csv").option("header", "true").load("file.csv")? I'm attempting to train a Random Forest model. I've tried researching it, but it doesn't seem to be a big topic. I don't want to just attempt it without being fully sure it would work because the cluster I use has long queue times.

Yes! You'll want to infer the schema too.
df = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("file.csv")
If you have many files with the same column names and data types, save the schema to reuse.
schema = df.schema
And then the next time you read a CSV file with the same columns, you can pass the saved schema with .schema() instead of inferring it again:
df = spark.read.format("csv").option("header", "true").schema(schema).load("file.csv")
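For the Random Forest part, the DataFrame-based pyspark.ml API expects all features packed into a single vector column, so a VectorAssembler step is needed before fitting. A minimal sketch, assuming the column names from your sample (Label plus 0-767), an existing SparkSession named spark, and an arbitrary numTrees:

from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier

df = (spark.read.format("csv")
      .option("header", "true")
      .option("inferSchema", "true")
      .load("file.csv"))

# Collect the numeric columns into a single "features" vector column.
feature_cols = [c for c in df.columns if c != "Label"]
assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
assembled = assembler.transform(df)

rf = RandomForestClassifier(labelCol="Label", featuresCol="features", numTrees=100)
model = rf.fit(assembled)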

Related

How can I detect similarity of names in the same columns

Guys I have a dataset like this:
df = pd.DataFrame(data = ['John','gal britt','mona','diana','molly','merry','mony','molla','johnathon','dina'],\
columns = ['Name'])
df
it gives this output
Name
0 John
1 gal britt
2 mona
3 diana
4 molly
5 merry
6 mony
7 molla
8 johnathon
So I imagine that, to compare every name against every other and detect similarity, I would use df.merge(df, how="cross").
The thing is, the real data is 40000 rows, and performing this would produce a very big dataset which I don't have the memory for.
Any algorithm or idea would really help, and I'll adjust the logic to my purposes.
I tried working with vaex instead of pandas to handle this amount of data, but I still run into insufficient memory.
In short: I KNOW that this algorithm, or this way of thinking about such a problem, is wrong and inefficient.
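One way to avoid materializing the full 40000 x 40000 cross join is to compare names only within smaller blocks (for example, bucketed by first letter) so similarity is computed pair by pair inside each bucket. A rough sketch of that idea with the standard-library difflib, on the sample names above; the bucketing key and the 0.8 threshold are arbitrary choices for illustration, not a tuned solution:

import difflib
from collections import defaultdict

names = ['John', 'gal britt', 'mona', 'diana', 'molly', 'merry',
         'mony', 'molla', 'johnathon', 'dina']

# Bucket names by a cheap key (lowercased first letter) so we only compare
# candidates inside the same bucket instead of every possible pair.
buckets = defaultdict(list)
for name in names:
    buckets[name.strip().lower()[0]].append(name)

similar_pairs = []
for bucket in buckets.values():
    for i, a in enumerate(bucket):
        for b in bucket[i + 1:]:
            score = difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= 0.8:
                similar_pairs.append((a, b, round(score, 2)))

print(similar_pairs)

Libraries such as rapidfuzz provide much faster scorers, but the main memory saving comes from never building the full cross product.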

Search values in a Pandas DataFrame with values from another DataFrame

I have 2 dataframes.
df_dora

       content         feature              id
1      cyber hygien    risk management      1
2      cyber risk      risk management      2
...    ...             ...                  ...
59     intellig share  information sharing  63
60     inform share    information sharing  64

df_corpus

        content                                           id                                meta.name          meta._split_id
0       market grow cyber attack...                       56a2a2e28954537131a4aa734f49e361  14_Group_AG_2021   0
1       sec form file index                               7aedfd4df02687d3dff9897c925da508  14_Group_AG_2021   1
...     ...                                               ...                               ...                ...
213769  cyber secur alert parent compani fina...          ab10325601597f203f3f0af7aa647112  17_La_Banque_2021  8581
213770  intellig share statement parent compani fina...   6af5687ac31849d19d2048e0b2ca472d  17_La_Banque_2021  8582
I am trying to extract a count of each term listed in df_dora.content within df_corpus.content, grouped by the meta.name column of df_corpus.
I tried to use isin
df = df_corpus[df_corpus.content.isin(df_dora.content)]
len(df)
Returns only 17 rows
        content      id                                meta.name                  meta
41474   incid        a4c478e0fad1b9775c05e01d871b3aaf  3_Agricole_2021            10185
68690   oper risk    2e5139d82c242c89523110cc1110647a  10_Banking_Group_PLC_2021  5525
...     ...          ...                               ...                        ...
99259   risk report  a84eefb9a4772d13eb67f2d6ae5215cb  31_Building_Society_2021   4820
105662  risk manag   e8050be841fedb6dd10599e8b4892a9f  43_Bank_SA_2021            131
df_corpus.loc[df_corpus.content.isin(df_dora.content), 'content'].tolist()
also returns 17 rows
If I search directly in df_corpus for two of the terms that exist in df_dora:
resiliency_term = df_corpus.loc[df_corpus['content'].str.contains("cyber risk|inform share", case=False)]
print(resiliency_term)
I get 243 rows (which matches what was in the original file.)
So given the above, my question is: how do I extract a count of each term listed in df_dora.content within df_corpus.content, grouped by the meta.name column of df_corpus?
Thanks in advance for any help.
unique_vals = '|'.join(df_dora.content.unique())
df_corpus.groupby('meta.name').apply(lambda x: x.content.str.findall(unique_vals).explode().value_counts())
Output given your four lines of each:
17_La_Banque_2021 intellig share 1
Name: content, dtype: int64
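For completeness: isin only matches when a whole cell in df_corpus.content equals one of the df_dora terms, while str.contains and str.findall match substrings, which is why the counts differ so much. A variation on the answer above that also escapes any regex metacharacters in the terms before joining them with "|" (a sketch, assuming the column names shown):

import re

pattern = '|'.join(re.escape(t) for t in df_dora['content'].unique())

counts = (
    df_corpus
    .groupby('meta.name')['content']
    .apply(lambda s: s.str.findall(pattern).explode().value_counts())
)
print(counts)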

Averaging dataframes with many string columns and display back all the columns

I have struggled with this even after looking at the various past answers to no avail.
My data consists of numeric and non-numeric columns. I'd like to average the numeric columns and display my data on the GUI together with the information in the non-numeric columns. The non-numeric columns hold info such as names, roll number and stream, while the numeric columns contain students' marks for various subjects. It works well when dealing with one dataframe, but fails when I combine two or more dataframes: it returns only the average of the numeric columns and displays that, leaving the non-numeric columns out. Below is one of the codes I've tried so far.
df = pd.concat((df3, df5))
dfs = df.groupby(df.index, level=0).mean()
headers = list(dfs)
self.marks_table.setRowCount(dfs.shape[0])
self.marks_table.setColumnCount(dfs.shape[1])
self.marks_table.setHorizontalHeaderLabels(headers)
df_array = dfs.values
for row in range(dfs.shape[0]):
    for col in range(dfs.shape[1]):
        self.marks_table.setItem(row, col, QTableWidgetItem(str(df_array[row, col])))
Working code should return averages looking something like this:
STREAM ADM NAME KCPE ENG KIS
0 EAGLE 663 FLOYCE ATI 250 43 5
1 EAGLE 664 VERONICA 252 32 33
2 EAGLE 665 MACREEN A 341 23 23
3 EAGLE 666 BRIDGIT 286 23 2
Rather than
ADM KCPE ENG KIS
0 663.0 250.0 27.5 18.5
1 664.0 252.0 26.5 33.0
2 665.0 341.0 17.5 22.5
3 666.0 286.0 38.5 23.5
Sample data
Df1 = pd.DataFrame({
'STREAM':[NORTH,SOUTH],
'ADM':[437,238,439],
'NAME':[JAMES,MARK,PETER],
'KCPE':[233,168,349],
'ENG':[70,28,79],
'KIS':[37,82,79],
'MAT':[67,38,29]})
Df2 = pd.DataFrame({
'STREAM':[NORTH,SOUTH],
'ADM':[437,238,439],
'NAME':[JAMES,MARK,PETER],
'KCPE':[233,168,349],
'ENG':[40,12,56],
'KIS':[33,43,43],
'MAT':[22,58,23]})
Your question is not clear; however, guessing the origin of the question from its content, I have modified your dataframes (which were not well formed) by adding a stream called 'CENTRAL', see:
Df1 = pd.DataFrame({'STREAM':['NORTH','SOUTH', 'CENTRAL'],'ADM':[437,238,439], 'NAME':['JAMES','MARK','PETER'],'KCPE':[233,168,349],'ENG':[70,28,79],'KIS':[37,82,79],'MAT':[67,38,29]})
Df2 = pd.DataFrame({ 'STREAM':['NORTH','SOUTH','CENTRAL'],'ADM':[437,238,439], 'NAME':['JAMES','MARK','PETER'],'KCPE':[233,168,349],'ENG':[40,12,56],'KIS':[33,43,43],'MAT':[22,58,23]})
I have assumed you want to merge the two dataframes and find the average:
df3=Df2.append(Df1)
df3.groupby(['STREAM','ADM','NAME'],as_index=False).sum()
Outcome
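Since the question asks for averages rather than totals, and DataFrame.append has since been deprecated in favour of pd.concat, here is a variation on the same idea (a sketch, using the corrected sample frames above):

import pandas as pd

Df1 = pd.DataFrame({'STREAM': ['NORTH', 'SOUTH', 'CENTRAL'], 'ADM': [437, 238, 439],
                    'NAME': ['JAMES', 'MARK', 'PETER'], 'KCPE': [233, 168, 349],
                    'ENG': [70, 28, 79], 'KIS': [37, 82, 79], 'MAT': [67, 38, 29]})
Df2 = pd.DataFrame({'STREAM': ['NORTH', 'SOUTH', 'CENTRAL'], 'ADM': [437, 238, 439],
                    'NAME': ['JAMES', 'MARK', 'PETER'], 'KCPE': [233, 168, 349],
                    'ENG': [40, 12, 56], 'KIS': [33, 43, 43], 'MAT': [22, 58, 23]})

# Using the non-numeric columns as grouping keys keeps them in the result,
# so the GUI table can still show STREAM, ADM and NAME next to the averages.
combined = pd.concat([Df1, Df2])
averages = combined.groupby(['STREAM', 'ADM', 'NAME'], as_index=False).mean()
print(averages)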

Reformat wide Excel table to more SQL-friendly structure

I have a very wide Excel sheet, from Column A - DIE (about 2500 columns wide), of survey data. Each column is a question, and each row is a response. I'm trying to upload the data to SQL and convert it to a more SQL-friendly format using the UNPIVOT function, but I can't even get it loaded into SQL because it exceeds the 1024-column limit.
Basically, I have an Excel sheet that looks like this:
But I want to convert it to look like this:
What options do I have to make this change, either in Excel (prior to upload) or SQL (while circumventing the 1024 column limit)?
I have had to do this quite a bit. My solution was to write a Python script that un-crosstabs a CSV file (typically exported from Excel), creating another CSV file. The Python code is here: https://pypi.python.org/pypi/un-xtab/ and the documentation is here: http://pythonhosted.org/un-xtab/. I've never run it on a file with 2500 columns, but I don't see why it wouldn't work.
R has a function for exactly this in one of its libraries. You can also connect to, read from, and write to a database from R. I would suggest downloading R and RStudio.
Here is a working script to get you started that does what you need:
Sample data:
df <- data.frame(id = c(1,2,3), question_1 = c(1,0,1), question_2 = c(2,0,2))
df
Input table:
id question_1 question_2
1 1 1 2
2 2 0 0
3 3 1 2
Code to reshape the data from wide to long:
df2 <- gather(df, key = question, value = values, -id)
df2
Output:
  id   question values
1  1 question_1      1
2  2 question_1      0
3  3 question_1      1
4  1 question_2      2
5  2 question_2      0
6  3 question_2      2
Some helper functions for you to import and export the csv data:
# Install and load the necessary libraries
install.packages(c('tidyr','readr'))
library(tidyr)
library(readr)
# to read a csv file
df <- read_csv('[some directory][some filename].csv')
# To output the csv file
write.csv(df2, '[some directory]data.csv', row.names = FALSE)
Thanks for all the help. I ended up using Python due to limitations in both SQL (over 1024 columns wide) and Excel (well over 1 million rows in the output). I borrowed the concepts from rd_nielson's code, but that was a bit more complicated than I needed. In case it's helpful to anyone else, this is the code I used. It outputs a csv file with 3 columns and 14 million rows that I can upload to SQL.
import csv

with open('Responses.csv') as f:
    reader = csv.reader(f)
    headers = next(reader)  # capture current field headers
    newHeaders = ['ResponseID', 'Question', 'Response']  # establish new header names

    with open('PythonOut.csv', 'w') as outputfile:
        writer = csv.writer(outputfile, dialect='excel', lineterminator='\n')
        writer.writerow(newHeaders)  # write new headers to output

        QuestionHeaders = headers[1:len(headers)]  # slice the question headers from the original header list
        for row in reader:
            questionCount = 0  # counter to loop through each question (column) for every response (row)
            while questionCount <= len(QuestionHeaders) - 1:
                newRow = [row[0], QuestionHeaders[questionCount], row[questionCount + 1]]
                writer.writerow(newRow)
                questionCount += 1
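For reference, the same un-crosstab can also be written with pandas.melt; reading the wide file in chunks keeps memory bounded for output this large. A sketch, assuming the same Responses.csv layout with the response ID in the first column:

import pandas as pd

# Stream the wide file in chunks so the full 2500-column table never has to
# sit in memory at once, melting each chunk to ResponseID/Question/Response.
first = True
for chunk in pd.read_csv('Responses.csv', chunksize=10000):
    id_col = chunk.columns[0]
    long = chunk.melt(id_vars=id_col, var_name='Question', value_name='Response')
    long = long.rename(columns={id_col: 'ResponseID'})
    long.to_csv('PythonOut.csv', mode='w' if first else 'a',
                header=first, index=False)
    first = False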

Changing values in pandas dataframe does not work

I'm having a problem changing values in a dataframe. I'd also like advice on a problem I need to solve and on the proper way to use pandas to solve it. I'd appreciate help on both.
I have a file containing information about the matching degree of audio files to speakers. The file looks something like this:
wave_path spk_name spk_example# score mark comments isUsed
190 122_65_02.04.51.800.wav idoD idoD 88 NaN NaN False
191 121_110_20.17.27.400.wav idoD idoD 87 NaN NaN False
192 121_111_00.34.57.300.wav idoD idoD 87 NaN NaN False
193 103_31_18.59.12.800.wav idoD idoD_0 99 HIT VP False
194 131_101_02.08.06.500.wav idoD idoD_0 96 HIT VP False
What I need to do is some kind of sophisticated counting. I need to group the results by speaker and compute a certain calculation for each speaker. I then proceed with the speaker whose calculation came out best, but before proceeding I need to mark all the files I used for that calculation as used, i.e. change the isUsed value to True in every row in which they appear (files can appear more than once). Then I make another iteration: calculate for each speaker, mark the used files, and so on until there are no more speakers left to calculate.
I thought a lot about how to implement that process using pandas (it is quite easy to implement in plain Python, but that would take a lot of looping and data structuring which, my guess is, would slow the process down significantly; I'm also using this task to learn pandas more deeply).
I came out with the following solution. As preparation steps, I’ll group by speaker name and set the file name as index by the set_index method. I will then iterate over the groupbyObj and apply the calculation function, which will return the selected speaker and the files to be marked as used.
Then I’ll iterate over the files and mark them as used (this would be fast and simple since I set them as indexes beforehand), and so on until I finish calculating.
First, I’m not sure about this solution, so feel free to tell me your thoughts on it.
Now, I’ve tried implementing this, and got into trouble:
First I indexed by file name, no problem here:
In [53]:
marked_results['isUsed'] = False
ind_res = marked_results.set_index('wave_path')
ind_res.head()
Out[53]:
spk_name spk_example# score mark comments isUsed
wave_path
103_31_18.59.12.800.wav idoD idoD 99 HIT VP False
131_101_02.08.06.500.wav idoD idoD 99 HIT VP False
144_35_22.46.38.700.wav idoD idoD 96 HIT VP False
41_09_17.10.11.700.wav idoD idoD 93 HIT TEST False
122_188_03.19.20.400.wav idoD idoD 93 NaN NaN False
Then I chose a file and checked that I get the entries relevant to that file:
In [54]:
example_file = ind_res.index[0];
ind_res.ix[example_file]
Out[54]:
spk_name spk_example# score mark comments isUsed
wave_path
103_31_18.59.12.800.wav idoD idoD 99 HIT VP False
103_31_18.59.12.800.wav idoD idoD_0 99 HIT VP False
103_31_18.59.12.800.wav idoD idoD_1 97 HIT VP False
103_31_18.59.12.800.wav idoD idoD_2 95 HIT VP False
No problems here either. Then I tried to change the isUsed value for that file to True, and that's where I hit the problem:
In [56]:
ind_res.ix[example_file]['isUsed'] = True
ind_res.ix[example_file].isUsed = True
ind_res.ix[example_file]
Out[56]:
spk_name spk_example# score mark comments isUsed
wave_path
103_31_18.59.12.800.wav idoD idoD 99 HIT VP False
103_31_18.59.12.800.wav idoD idoD_0 99 HIT VP False
103_31_18.59.12.800.wav idoD idoD_1 97 HIT VP False
103_31_18.59.12.800.wav idoD idoD_2 95 HIT VP False
So, you see the problem: nothing has changed. What am I doing wrong? And should the problem described above be solved with pandas at all?
And also:
1. How can I access a specific group of a groupby object? Because I thought that, instead of setting the files as the index, I could group by file and use that groupby object to apply a changing function to all of a file's occurrences. But I didn't find a way to reach a specific group, and passing the group name as a parameter, calling apply on all the groups and then acting on only one of them didn't seem "right" to me.
I hope it is not too long... :)
Indexing Pandas objects can return two fundamentally different objects: a view or a copy.
If mask is a basic slice, then df.ix[mask] returns a view of df. Views share the same underlying data as the original object (df). So modifying the view, also modifies the original object.
If mask is something more complicated, such as an arbitrary sequence of indices, then df.ix[mask] returns a copy of some rows in df. Modifying the copy has no effect on the original.
In your case, since the rows which share the same wave_path occur at arbitrary locations, ind_res.ix[example_file] returns a copy. So
ind_res.ix[example_file]['isUsed'] = True
has no effect on ind_res.
Instead, you could use
ind_res.ix[example_file, 'isUsed'] = True
to modify ind_res. However, see below for a groupby suggestion which I think might be closer to what you really want.
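(Side note: .ix has since been deprecated and removed from pandas; in current versions the label-based equivalent of the assignment above would be:)

ind_res.loc[example_file, 'isUsed'] = True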
Jeff has already provided a link to the Pandas docs which state that
The rules about when a view on the data is returned are entirely dependent on NumPy.
Here are the (complicated) rules which describe when a view or copy is returned. Basically, however, the rule is if the index is requesting a regularly spaced slice of the underlying array then a view is returned, otherwise a copy (out of necessity) is returned.
Here is a simple example which uses basic slice. A view is returned by df.ix, so modifying subdf modifies df as well:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.arange(12).reshape(4,3),
                  columns=list('ABC'), index=[0,1,2,3])
subdf = df.ix[0]
print(subdf.values)
# [0 1 2]
subdf.values[0] = 100
print(subdf)
# A 100
# B 1
# C 2
# Name: 0, dtype: int32
print(df) # df is modified
# A B C
# 0 100 1 2
# 1 3 4 5
# 2 6 7 8
# 3 9 10 11
Here is a simple example which uses "fancy indexing" (arbitrary rows selected). A copy is returned by df.ix. So modifying subdf does not affect df.
df = pd.DataFrame(np.arange(12).reshape(4,3),
                  columns=list('ABC'), index=[0,1,0,3])
subdf = df.ix[0]
print(subdf.values)
# [[0 1 2]
# [6 7 8]]
subdf.values[0] = 100
print(subdf)
# A B C
# 0 100 100 100
# 0 6 7 8
print(df) # df is NOT modified
# A B C
# 0 0 1 2
# 1 3 4 5
# 0 6 7 8
# 3 9 10 11
Notice the only difference between the two examples is that in the first, where a view is returned, the index was [0,1,2,3], whereas in the second, where a copy is returned, the index was [0,1,0,3].
In the first example, since the rows where the index is 0 form a regular slice, we can select them with a basic slice and get a view. In the second example, the rows where the index equals 0 can appear at arbitrary locations, so a copy has to be returned.
Despite having ranted on about the subtlety of Pandas/NumPy slicing, I really don't think that
ind_res.ix[example_file, 'isUsed'] = True
is what you are ultimately looking for. You probably want to do something more like
import pandas as pd
import numpy as np
df = pd.DataFrame(np.arange(12).reshape(4,3),
                  columns=list('ABC'))
df['A'] = df['A']%2
print(df)
# A B C
# 0 0 1 2
# 1 1 4 5
# 2 0 7 8
# 3 1 10 11
def calculation(grp):
    grp['C'] = True
    return grp
newdf = df.groupby('A').apply(calculation)
print(newdf)
which yields
A B C
0 0 1 True
1 1 4 True
2 0 7 True
3 1 10 True
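Mapped back to the frames in the question, the same ideas might look like the sketch below (it reuses marked_results, ind_res and example_file from above; get_group also answers the side question about reaching one specific group):

# Mark every occurrence of one file as used with a boolean mask:
marked_results.loc[marked_results['wave_path'] == example_file, 'isUsed'] = True

# Or, with the file names as the index, label-based assignment hits all duplicate rows:
ind_res.loc[example_file, 'isUsed'] = True

# Side question 1: a single group can be pulled straight out of a groupby object.
one_file_rows = marked_results.groupby('wave_path').get_group(example_file)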