I'm trying to select the rows whose index values are congruent to 1 mod 24. How can I best do this?
This is my dataframe:
ticker date open high low close volume momo nextDayLogReturn
335582 ETH/USD 2021-11-05 00:00:00+00:00 4535.3 4539.3 4495.8 4507.1 9.938260e+06 9.094134 -9.160928
186854 BTC/USD 2021-11-05 00:00:00+00:00 61437.0 61528.0 61111.0 61170.0 1.191233e+07 10.640513 -10.825763
186853 BTC/USD 2021-11-04 23:00:00+00:00 61190.0 61541.0 61130.0 61437.0 1.395133e+07 10.645757 -10.842114
335581 ETH/USD 2021-11-04 23:00:00+00:00 4518.8 4539.4 4513.6 4535.3 1.296507e+07 9.087243 -9.139240
186852 BTC/USD 2021-11-04 22:00:00+00:00 61393.0 61426.0 61044.0 61190.0 1.360557e+07 10.639201 -10.812127
This was my attempt:
newindex = []
for i in range(df2.shape[0]):      # range(0, df2.shape[0] + 1) would overshoot by one
    if i % 24 == 1:
        newindex.append(i)
df2.iloc[newindex]                 # pass the list itself, not a nested list
Essentially, I need to select the rows using a boolean mask, but I'm not sure how to do it.
Many thanks
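For reference, a minimal vectorized sketch (assuming df2 is the DataFrame above; which variant applies depends on whether 'index' means row position, as in the loop, or the actual integer index labels):

# (a) positional rows 1, 25, 49, ... (the same rows the loop above collects)
subset_by_position = df2.iloc[1::24]

# (b) boolean mask on the integer index labels
subset_by_label = df2[df2.index % 24 == 1]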
In my dataset, I have one variable that contains 30% missing values.
I am trying to use a tree-based model, but I am not getting a clear picture of how to implement it.
data['X'].value_counts()
Output:
? 39454
MC 32223
HM 6197
SP 4892
BC 4569
MD 3473
CP 2493
UN 2366
CM 1932
OG 1020
PO 585
DM 536
CH 145
WC 130
OT 94
MP 79
SI 52
FR 1
The approach I am trying to implement is:
Suppose this variable has 24 distinct categories, and the above is the value_counts output. ? is the missing value, and I should impute it from among the remaining values with the help of a tree-based model.
The categories are MC HM SP BC MD CP UN CM OG PO DM CH WC OT MP SI FR ?, and the count of ? is 39454. So we have 39454 missing values that we should impute with the help of a tree-based model.
Now, using the rows whose values are known, I have to train a model and predict the missing values.
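A minimal sketch of that approach (assuming the frame is called data, that the other columns can be turned into numeric features with a naive get_dummies and contain no missing values themselves, and that scikit-learn is acceptable; these names are placeholders, not from the question):

import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# '?' is the missing marker, as in the value_counts output above.
known = data['X'] != '?'

# Naive feature matrix built from every other column; replace with your own preprocessing.
features = pd.get_dummies(data.drop(columns=['X']))

model = DecisionTreeClassifier()
model.fit(features[known], data.loc[known, 'X'])

# Predict the 39454 '?' rows and write the predictions back.
data.loc[~known, 'X'] = model.predict(features[~known])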
I would recommend the following (a rough sketch is below):
Take the non-missing data and perform clustering.
Assign labels to the missing rows using the appropriate cluster.
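A rough sketch of that idea (again assuming a frame called data with '?' as the missing marker, other columns encodable with get_dummies, and an arbitrary cluster count; none of these names come from the question):

import pandas as pd
from sklearn.cluster import KMeans

known = data['X'] != '?'
features = pd.get_dummies(data.drop(columns=['X']))

# Cluster the rows where X is known; n_clusters=18 is an arbitrary choice here.
kmeans = KMeans(n_clusters=18, random_state=0).fit(features[known])

# Most frequent X value per cluster.
cluster_modes = (
    pd.Series(data.loc[known, 'X'].values, index=kmeans.labels_)
      .groupby(level=0)
      .agg(lambda s: s.mode().iloc[0])
)

# Assign each missing row to its nearest cluster and take that cluster's most frequent value.
missing_clusters = kmeans.predict(features[~known])
data.loc[~known, 'X'] = cluster_modes.loc[missing_clusters].values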
I have struggled with this even after looking at various past answers, to no avail.
My data consists of numeric and non-numeric columns. I'd like to average the numeric columns and display the result on the GUI together with the information in the non-numeric columns. The non-numeric columns hold information such as name, roll number and stream, while the numeric columns contain students' marks for various subjects. It works well when dealing with one dataframe, but when I combine two or more dataframes it returns only the average of the numeric columns and displays that, leaving the non-numeric columns undisplayed. Below is one of the codes I've tried so far.
df = pd.concat((df3, df5))
dfs = df.groupby(level=0).mean()   # .mean() keeps only the numeric columns, which is the problem described

headers = list(dfs)
self.marks_table.setRowCount(dfs.shape[0])
self.marks_table.setColumnCount(dfs.shape[1])
self.marks_table.setHorizontalHeaderLabels(headers)
df_array = dfs.values
for row in range(dfs.shape[0]):
    for col in range(dfs.shape[1]):
        self.marks_table.setItem(row, col, QTableWidgetItem(str(df_array[row, col])))
Working code should return averages looking something like this:
STREAM ADM NAME KCPE ENG KIS
0 EAGLE 663 FLOYCE ATI 250 43 5
1 EAGLE 664 VERONICA 252 32 33
2 EAGLE 665 MACREEN A 341 23 23
3 EAGLE 666 BRIDGIT 286 23 2
Rather than
ADM KCPE ENG KIS
0 663.0 250.0 27.5 18.5
1 664.0 252.0 26.5 33.0
2 665.0 341.0 17.5 22.5
3 666.0 286.0 38.5 23.5
Sample data
Df1 = pd.DataFrame({
    'STREAM':['NORTH','SOUTH'],
    'ADM':[437,238,439],
    'NAME':['JAMES','MARK','PETER'],
    'KCPE':[233,168,349],
    'ENG':[70,28,79],
    'KIS':[37,82,79],
    'MAT':[67,38,29]})
Df2 = pd.DataFrame({
    'STREAM':['NORTH','SOUTH'],
    'ADM':[437,238,439],
    'NAME':['JAMES','MARK','PETER'],
    'KCPE':[233,168,349],
    'ENG':[40,12,56],
    'KIS':[33,43,43],
    'MAT':[22,58,23]})
Your question is not clear; however, I am guessing the intent from the content. I have modified your dataframes, which were not well formed, by adding a stream called 'CENTRAL'; see:
Df1 = pd.DataFrame({'STREAM':['NORTH','SOUTH', 'CENTRAL'],'ADM':[437,238,439], 'NAME':['JAMES','MARK','PETER'],'KCPE':[233,168,349],'ENG':[70,28,79],'KIS':[37,82,79],'MAT':[67,38,29]})
Df2 = pd.DataFrame({ 'STREAM':['NORTH','SOUTH','CENTRAL'],'ADM':[437,238,439], 'NAME':['JAMES','MARK','PETER'],'KCPE':[233,168,349],'ENG':[40,12,56],'KIS':[33,43,43],'MAT':[22,58,23]})
I have assumed you want to merge the two dataframes and find the average:
df3 = pd.concat([Df2, Df1])
df3.groupby(['STREAM', 'ADM', 'NAME'], as_index=False).mean()
Outcome
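The grouped frame can then be fed back into the table-filling loop from the question so the non-numeric STREAM and NAME columns stay visible. A sketch, assuming the asker's marks_table widget code runs inside the same class and that QTableWidgetItem comes from PyQt5 (both assumptions, not confirmed by the question):

from PyQt5.QtWidgets import QTableWidgetItem  # assuming a PyQt5 GUI

dfs = df3.groupby(['STREAM', 'ADM', 'NAME'], as_index=False).mean(numeric_only=True)

headers = list(dfs)
self.marks_table.setRowCount(dfs.shape[0])
self.marks_table.setColumnCount(dfs.shape[1])
self.marks_table.setHorizontalHeaderLabels(headers)
for row in range(dfs.shape[0]):
    for col in range(dfs.shape[1]):
        self.marks_table.setItem(row, col, QTableWidgetItem(str(dfs.iat[row, col])))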
I know there are a few threads on this topic, but none of their solutions seems to work for me. I have a table in a PDF document from which I would like to extract information. I can copy and paste the text into TextEdit and it is legible, but not really usable: all the text is readable, but the data is separated by spaces with no way to differentiate column breaks from spaces within the text of a cell.
But whenever I try tools like Tabula or ScraperWiki, the extracted text is garbage.
Is anyone able to give me any pointers as to how I might go about this?
Here's a solution using Python and Unix
In Python:
import urllib.request

# download the PDF
url = 'http://www.european-athletics.org/mm/Document/EventsMeetings/General/01/27/52/10/EICH-FinalEntriesforwebsite_Neutral.pdf'
urllib.request.urlretrieve(url, 'test.pdf')
In Unix:
$ pdftotext -layout test.pdf
Snippet of output to test.txt:
Lastname Firstname Country DOB PB SB
1500m Men
Rowe Brenton AUT 17/08/1987
Vojta Andreas AUT 09/06/1989 3:38.99 3:41.09
Khadiri Amine CYP 20/11/1988 3:45.16 3:45.16
Friš Jan CZE 19/12/1995 3:43.76 3:43.76
Holuša Jakub CZE 20/02/1988 3:38.79 3:41.54
Kocourek Milan CZE 06/12/1987 3:43.97 3:43.97
Bueno Andreas DEN 07/07/1988 3:42.78 3:42.78
Alcalá Marc ESP 07/11/1994 3:41.79 3:41.79
Mechaal Adel ESP 05/12/1990 3:38.30 3:38.30
Olmedo Manuel ESP 17/05/1983 3:39.82 3:40.66
Ruíz Diego ESP 05/02/1982 3:36.42 3:40.60
Kowal Yoann FRA 28/05/1987 3:38.07 3:39.22
Grice Charlie GBR 07/11/1993 3:39.44 3:39.44
O'Hare Chris GBR 23/11/1990 3:37.25 3:40.42
Orth Florian GER 24/07/1989 3:39.97 3:40.20
Tesfaye Homiyu GER 23/06/1993 3:34.13 3:34.13
Kazi Tamás HUN 16/05/1985 3:44.28 3:44.28
Mooney Danny IRL 20/06/1988 3:42.69 3:42.69
Travers John IRL 16/03/1991 3:42.52 3:43.74
Bussotti Neves Junior Joao Capistrano M. ITA 10/05/1993 3:47.58 3:47.58
Jurkēvičs Dmitrijs LAT 07/01/1987 3:45.95 3:45.95
Ingebrigtsen Henrik NOR 24/02/1991 3:44.00
Ingebrigtsen Filip NOR 20/04/1993
Krawczyk Szymon POL 29/12/1988 3:41.64 3:41.64
Ostrowski Artur POL 10/07/1988 3:41.36 3:41.36
ebrowski Krzysztof POL 09/07/1990 3:41.49 3:41.49
Smirnov Valentin RUS 13/02/1986 3:37.55 3:38.74
Nava Goran SRB 15/04/1981 3:40.65 3:44.49
Pelikán Jozef SVK 29/07/1984 3:43.85 3:45.51
Ek Staffan SWE 13/11/1991 3:43.54 3:43.54
Rogestedt Johan SWE 27/01/1993 3:40.03 3:40.03
Özbilen lham Tanui TUR 05/03/1990 3:34.76 3:38.05
Özdemir Ramazan TUR 06/07/1991 3:44.35 3:44.35
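If the goal is to get this text back into a table structure, a small post-processing sketch may help (the regular expression and column names are illustrative assumptions based on the layout shown above, not part of pdftotext):

import re
import pandas as pd

# One row per athlete: free-form name, 3-letter country code, date of birth,
# then optional PB and SB times.
row_re = re.compile(
    r'^(?P<name>.+?)\s+(?P<country>[A-Z]{3})\s+(?P<dob>\d{2}/\d{2}/\d{4})'
    r'(?:\s+(?P<pb>\d:\d{2}\.\d{2}))?'
    r'(?:\s+(?P<sb>\d:\d{2}\.\d{2}))?\s*$')

records = []
with open('test.txt', encoding='utf-8') as fh:
    for line in fh:
        match = row_re.match(line.strip())
        if match:
            records.append(match.groupdict())

table = pd.DataFrame(records)
print(table.head())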
You can also download a simple command-line tool to deal with the PDF file you linked to. Then run this command to extract the table(s) on the first page:
pdftotext \
-enc UTF-8 \
-l 1 \
-table \
EICH-FinalEntriesforwebsite_Neutral.pdf \
EICH-FinalEntriesforwebsite_Neutral.txt
-enc UTF-8: sets the text encoding so that the Ö, Ä, Ü and İ (as well as ö, ä, ü, ß, á, š, ē, í and č) characters in the text get correctly extracted.
-l 1: tells the command to treat page 1 as the last page to extract (so only the first page is processed).
-table: this is the decisive parameter.
The command produces this output:
EUROPEAN ATHLETICS INDOOR CHAMPIONSHIPS
PRAGUE / CZE, 6-8 MARCH 2015
FINAL ENTRIES - MEN
Lastname Firstname Country DOB PB SB
1500m Men
Rowe Brenton AUT 17/08/1987
Vojta Andreas AUT 09/06/1989 3:38.99 3:41.09
Khadiri Amine CYP 20/11/1988 3:45.16 3:45.16
Friš Jan CZE 19/12/1995 3:43.76 3:43.76
Holuša Jakub CZE 20/02/1988 3:38.79 3:41.54
Kocourek Milan CZE 06/12/1987 3:43.97 3:43.97
Bueno Andreas DEN 07/07/1988 3:42.78 3:42.78
Alcalá Marc ESP 07/11/1994 3:41.79 3:41.79
Mechaal Adel ESP 05/12/1990 3:38.30 3:38.30
Olmedo Manuel ESP 17/05/1983 3:39.82 3:40.66
Ruíz Diego ESP 05/02/1982 3:36.42 3:40.60
Kowal Yoann FRA 28/05/1987 3:38.07 3:39.22
Grice Charlie GBR 07/11/1993 3:39.44 3:39.44
O'Hare Chris GBR 23/11/1990 3:37.25 3:40.42
Orth Florian GER 24/07/1989 3:39.97 3:40.20
Tesfaye Homiyu GER 23/06/1993 3:34.13 3:34.13
Kazi Tamás HUN 16/05/1985 3:44.28 3:44.28
Mooney Danny IRL 20/06/1988 3:42.69 3:42.69
Travers John IRL 16/03/1991 3:42.52 3:43.74
Bussotti Neves Junior Joao Capistrano M. ITA 10/05/1993 3:47.58 3:47.58
Jurkēvičs Dmitrijs LAT 07/01/1987 3:45.95 3:45.95
Ingebrigtsen Henrik NOR 24/02/1991 3:44.00
Ingebrigtsen Filip NOR 20/04/1993
Krawczyk Szymon POL 29/12/1988 3:41.64 3:41.64
Ostrowski Artur POL 10/07/1988 3:41.36 3:41.36
Żebrowski Krzysztof POL 09/07/1990 3:41.49 3:41.49
Smirnov Valentin RUS 13/02/1986 3:37.55 3:38.74
Nava Goran SRB 15/04/1981 3:40.65 3:44.49
Pelikán Jozef SVK 29/07/1984 3:43.85 3:45.51
Ek Staffan SWE 13/11/1991 3:43.54 3:43.54
Rogestedt Johan SWE 27/01/1993 3:40.03 3:40.03
Özbilen İlham Tanui TUR 05/03/1990 3:34.76 3:38.05
Özdemir Ramazan TUR 06/07/1991 3:44.35 3:44.35
3000m Men
Rowe Brenton AUT 17/08/1987
Vojta Andreas AUT 09/06/1989 7:59.95 7:59.95
Note, however:
The -table parameter to the pdftotext command line tool is only available in the XPDF-version 3.04, which you can download here: www.foolabs.com/xpdf/download.html. It is NOT (yet) available in Poppler's fork of pdftotext (latest version of which is 0.43.0).
If you only have Poppler's pdftotext, you'd have to use the -layout parameter (instead of -table), which gives you a similarly good result for the PDF file in question:
pdftotext \
-enc UTF-8 \
-l 1 \
-layout \
EICH-FinalEntriesforwebsite_Neutral.pdf \
EICH-FinalEntriesforwebsite_Neutral.txt
However, I have seen PDFs where the result is much better with -table (and XPDF) than it is with -layout (and Poppler).
(XPDF has the -layout parameter too -- so you can see the difference if you try both.)
I have this set of data:
TestSystems
[1] 0013-021 0013-022 0013-031 0013-032 0013-033 0013-034
Levels: 0013-021 0013-022 0013-031 0013-032 0013-033 0013-034
Utilization
[1] 61.42608 64.95802 31.51387 45.11971 43.66110 63.68363
Availability
[1] 28.92506 32.58015 11.86372 16.22164 36.23264 40.54977
str(TestSystems)
Factor w/ 6 levels "0013-021","0013-022",..: 1 2 3 4 5 6
str(Utilization)
num [1:6] 61.4 65 31.5 45.1 43.7 ...
str(Availability)
num [1:6] 28.9 32.6 11.9 16.2 36.2 ...
I would like to have a plot as below:
http://imgur.com/snPOVW5
The plot is not from R but from other software; I would like to produce the same plot in R. I appreciate any help.
thanks.
I have tried the code below and it works:
library(reshape2)   # for melt()
library(ggplot2)
df <- data.frame(TestSystems, Availability, Utilization)
df.long <- melt(df, id.vars = "TestSystems")
p <- ggplot(df.long, aes(TestSystems, value, fill = variable)) +
  geom_bar(stat = "identity", position = "dodge")
how do I display those 12 values at the top of each bar?
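One common way (a sketch, untested against this exact data) is to add dodged text labels on top of the existing plot object p:

p + geom_text(aes(label = round(value, 1)),
              position = position_dodge(width = 0.9),
              vjust = -0.3)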