Given a large data frame (in my case 250M rows and 30 cols), why is it so slow to just change the name of a column?
I am using df.rename(columns={'oldName': 'newName'}, inplace=True), so this should not make any copies of the data, yet it takes over 30 seconds, while I would have expected this to be on the order of milliseconds (as it's just replacing one string with another).
I know that's a huge table, more than most people have RAM in their machine (hence I'm not going to add example code either), but this still shouldn't take any significant amount of time, as it's not actually touching any of the data. Why does this take so long, i.e. why is renaming a column doing work proportional to the number of rows of my dataframe?
I don't think inplace=True means your data isn't copied. There are some discussions on SO saying it actually does copy and then assign back. Also see this github issue.
You can just overwrite the columns with:
df.columns = df.columns.to_series().replace({'a':'b'})
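For illustration, here's a minimal sketch on a toy frame (the frame and the 'oldName'/'other' labels are made up for this example); assigning a plain list of labels works the same way and likewise only replaces the column index rather than the data:

import pandas as pd

# Toy frame standing in for the 250M-row table
df = pd.DataFrame({'oldName': [1, 2, 3], 'other': [4, 5, 6]})

# Overwrite the labels directly; only the column index is replaced
df.columns = df.columns.to_series().replace({'oldName': 'newName'})

# Equivalent, using a plain list comprehension
df.columns = ['newName' if c == 'oldName' else c for c in df.columns]
print(df.columns.tolist())  # ['newName', 'other']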
Related
I have a rather large data frame (~400M rows). For each row, I calculate which elements in another data frame (~200M rows) I need to increment by 1. This second data frame is indexed by three parameters. So the line I use to do this looks like:
total_counts[i].loc[zipcode,j,g_durs[j][0]:g_durs[j][-1]+1] += 1
The above line works, but it appears to be a horrendous bottleneck and takes almost 1 second to run. Unfortunately, I cannot wait 11 years for this to finish.
Any insights on how I might speed up this operation? Do I need to split up the 200M row data frame into an array of smaller objects? I don't understand exactly what is making this so slow.
Thanks in advance.
Edit (Oct 16 2017): I think I found the problem; it seems to be a bug in pandas core. It can't merge/join anything over 145k rows; 144k rows it can do without an issue. Pandas version 0.20.3, running on Fedora 26.
----Original post----
I have a medium-sized amount of data to process (about 200k rows with about 40 columns). I've optimised a lot of the code, but the only trouble I have now is joining the columns.
I receive the data in an unfortunate structure and need to extract the data in a certain way, then put it all into a dataframe.
Basically I extract 2 arrays at a time (each 200k rows long). One array is the timestamp, the other array is the values.
Here I create a dataframe, and use the timestamp as the index.
When I extract the second block of data, I do the same and create a new dataframe using the new values + timestamp.
I need to join the two dataframes on the index. The timestamps can be slightly different, so I use a join with the 'outer' method to keep the new timestamps. Basically I follow the documentation below.
result = left.join(right, how='outer')
https://pandas.pydata.org/pandas-docs/stable/merging.html#joining-on-index
This however is way too slow. I left it for about 15 mins and it still hadn't finished processing, so I killed the process.
Can anyone help? Any hints/tips?
edit:
It's a work thing, so I can't give out the data, sorry. But it's just two long dataframes, each with a timestamp as the index and a single column for the values.
The code is just as described above.
data_df.join(variable_df, how='outer')
I forgot to answer this. It's not really a bug in pandas.
The timestamp was a nanosecond timestamp, and joining the frames on the index like this was causing a massive slowdown. Basically it was better to join on a column - that made it all much faster.
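As a rough sketch of that fix (the data and the column names 'ts', 'value_a', 'value_b' below are made up), merging on the timestamp column rather than joining on a timestamp index looks roughly like this:

import numpy as np
import pandas as pd

# Two made-up frames, each with a nanosecond-resolution timestamp and one value column
left = pd.DataFrame({'ts': pd.to_datetime(np.arange(200_000), unit='ns'),
                     'value_a': np.random.rand(200_000)})
right = pd.DataFrame({'ts': pd.to_datetime(np.arange(100, 200_100), unit='ns'),
                      'value_b': np.random.rand(200_000)})

# Instead of left.set_index('ts').join(right.set_index('ts'), how='outer'),
# merge on the timestamp column directly
result = left.merge(right, on='ts', how='outer')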
I have a "Appeared date" column A and next to it i have a ">180" date column B. There is also "CONCAT" column C and a "ATTR" column D.
What I want to do is, for each date in the "Appeared date" column, find the latest date that is 180 or more days in the past and write it into the ">180" column, considering only rows where the CONCAT column values are the same.
The date in the ">180" column should be more than 180 days in the past relative to the "Appeared date" column, but it should also be the earliest such date, taken only from the "Appeared date" column.
Based on this I would like to check whether a particular product also had "ATTR" = 'NEW' more than 180 days earlier, i.e. was it launched 180 days or more ago and is it appearing again recently?
Is there an Excel formula which can pick the nearest date (>180) from the "Appeared date" column and show it in the ">180" column?
Will it involve a mix of SMALL(), FREQUENCY(), MATCH(), INDEX() etc.?
Or is a VBA procedure required?
To do this efficiently with formulas, you can use something called Range Slicing to reduce the size of the arrays to be processed, by efficiently truncating them so that they contain just the subset of those 3,000 to 50,000 rows that could possibly hold the correct answer, and THEN doing the actual equality check. (As opposed to your MAX/Array approach, which does computationally expensive array operations on all the rows, even though most of the rows have no relationship with the current row that you seek an answer for.)
Here's my approach. First, here's my table layout:
...and here are my formulas:
180: =[#Appeared]-180
Start: =MATCH([#CONCAT],[CONCAT],0)
End: =MATCH([#CONCAT],[CONCAT],1)
LastRow: =MATCH(1,--(OFFSET([Appeared],[#Start],,[#End]-[#Start])>[#180]),0)+[#Start]-1
LastItem: =INDEX([Appeared],[#LastRow])
LastDate > 180: =IF([#Appeared]-[#LastItem]>180,[#LastItem],"")
Days: =IFERROR([#Appeared]-[#[LastDate > 180]],"")
Even with this small data set, my approach is around twice as fast as your MAX approach. And as the size of the data grows, your approach is going to slow down much faster than linearly, because more and more processing power is wasted on crunching rows that can't possibly contain the answer, whereas mine will slow down only linearly. We're probably talking a difference of minutes, or perhaps even an hour or so at the extremes.
Note that while you could do my approach with a single mega-formula, you would be wise not to: it won't be anywhere near as efficient. Splitting your mega-formulas into separate cells is a good idea in any case because it may help speed up calculation due to something called multithreading. Here's what Diego Oppenheimer, a former program manager for Microsoft Excel, had to say on the subject back in 2005:
Multithreading enables Excel to spot formulas that can be calculated concurrently, and then run those formulas on multiple processors simultaneously. The net effect is that a given spreadsheet finishes calculating in less time, improving Excel’s overall calculation performance. Excel can take advantage of as many processors (or cores, which to Excel appear as processors) as there are on a machine—when Excel loads a workbook, it asks the operating system how many processors are available, and it creates a thread for each processor. In general, the more processors, the better the performance improvement.
Diego went on to outline how spreadsheet design has a direct impact on any performance increase:
A spreadsheet that has a lot of completely independent calculations should see enormous benefit. People who care about performance can tweak their spreadsheets to take advantage of this capability.
The bottom line: Splitting formulas into separate cells increases the chances of calculating formulas in parallel, as further outlined by Excel MVP and calculation expert Charles Williams at the following links:
Decision Models: Excel Calculation Process
Excel 2010 Performance: Performance and Limit Improvements
I think I found the answer. Earlier I was using the MIN function, though incorrectly, as the dates in the array formula (when you select it and hit the F9 key) were coming out in descending order. So I finally used the MAX function to find the earliest date that was more than 180 days in the past.
=IF(MAX(IF(--(A2-$A$2:$A$33>=180)*(--(C2=$C$2:$C$33))*(--($D$2:$D$33="NEW")),$A$2:$A$33))=0,"",MAX(IF(--(A2-$A$2:$A$33>=180)*(--(C2=$C$2:$C$33))*(--($D$2:$D$33="NEW")),$A$2:$A$33)))
Check the revised Sample.xlsx, which is self-explanatory. I have added the Attr='NEW' criterion in the formula for the final workaround, to find whether there were any new items that appeared 180 days ago or earlier.
An ADO query alternative may still be required, though, to process larger amounts of data.
I wrote a program to compute similarities among a set of 2 million documents. The program works, but I'm having trouble storing the results. I won't need to access the results often, but will occasionally need to query them and pull out subsets for analysis. The output basically looks like this:
1,2,0.35
1,3,0.42
1,4,0.99
1,5,0.04
1,6,0.45
1,7,0.38
1,8,0.22
1,9,0.76
.
.
.
Columns 1 and 2 are document ids, and column 3 is the similarity score. Since the similarity scores are symmetric I don't need to compute them all, but that still leaves me with 2000000*(2000000-1)/2 ≈ 2,000,000,000,000 lines of records.
A text file with 1 million lines of records is already 9MB. Extrapolating, that means I'd need 17 TB to store the results like this (in flat text files).
Are there more efficient ways to store these sorts of data? I could have one row for each document and get rid of the repeated document ids in the first column. But that'd only go so far. What about file formats, or special database systems? This must be a common problem in "big data"; I've seen papers/blogs reporting similar analyses, but none discuss practical dimensions like storage.
DISCLAIMER: I don't have any practical experience with this, but it's a fun exercise and after some thinking this is what I came up with:
Since you have 2,000,000 documents you're kind of stuck with an integer for the document ids; that makes 4 bytes + 4 bytes. The similarity seems to be between 0.00 and 1.00, so I guess a byte would do by encoding 0.00-1.00 as 0..100.
So your table would be : id1, id2, relationship_value
That brings it to exactly 9 bytes per record. Thus (without any overhead) ((2 * 10^6)^2) * 9 / 2 bytes are needed, which is about 17 TB.
Of course that's if you have just a basic table. Since you don't plan on querying it very often, I guess performance isn't that much of an issue. So you could get 'creative' by storing the values 'horizontally'.
Simplifying things, you would store the values in a 2 million by 2 million square, and each 'intersection' would be a byte representing the relationship between the documents at its coordinates. This would "only" require about 3.6 TB, but it would be a pain to maintain, and it also doesn't make use of the fact that the relations are symmetrical.
So I'd suggest using a hybrid approach: a table with 2 columns. The first column would hold the 'left' document id (4 bytes), and the 2nd column would hold, as a varbinary, a string of all the values for documents whose id is above the id in the first column. Since a varbinary only takes the space that it needs, this helps us win back some of the space offered by the symmetry of the relationship (see the sketch after the list below).
In other words,
record 1 would have a string of (2.000.000-1) bytes as value for the 2nd column
record 2 would have a string of (2.000.000-2) bytes as value for the 2nd column
record 3 would have a string of (2.000.000-3) bytes as value for the 2nd column
etc
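As a rough illustration of that packing scheme (this is just the byte layout sketched in Python with made-up helper names, not the SQL/varbinary implementation the answer has in mind):

# Pack the similarities of document i with documents i+1 .. N-1 into one
# byte per pair, scaling 0.00-1.00 to 0..100 as described above
def pack_row(scores):
    return bytes(int(round(s * 100)) for s in scores)

# Look up the similarity between documents i and j (with j > i) from the
# byte string stored for document i
def similarity(packed_row_for_i, i, j):
    return packed_row_for_i[j - i - 1] / 100.0

row0 = pack_row([0.35, 0.42, 0.99])   # document 0 vs documents 1, 2, 3
print(similarity(row0, 0, 3))         # 0.99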
That way you should be able to get away with something like 2 TB (including overhead) to store the information. Add compression to it and I'm pretty sure you can store it on a modern disk.
Of course the system is far from optimal. In fact, querying the information will require some patience, as you can't approach things set-based and you'll pretty much have to scan things byte by byte. A nice 'benefit' of this approach is that you can easily add new documents by adding a new byte to the string of EACH record + 1 extra record at the end. Operations like that will be costly though, as they will result in page splits; but at least it will be possible without having to completely rewrite the table. It will cause quite a bit of fragmentation over time though, and you might want to rebuild the table once in a while to make it more 'aligned' again. Ah.. technicalities.
Selecting and Updating will require some creative use of SubString() operations, but nothing too complex..
PS: Strictly speaking, for 0..100 you only need 7 bits, so if you really want to squeeze the last bit out of it you could actually store 8 values in 7 bytes and save roughly another 250 GB, but it would make things quite a bit more complex... then again, it's not like the data is going to be human-readable anyway =)
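To make that bit-squeezing idea concrete, here is a toy sketch (purely hypothetical helpers, not something the answer prescribes): 8 values of 7 bits each fit in 56 bits, i.e. 7 bytes.

def pack8(values):
    # values: exactly 8 ints in 0..127 (here 0..100)
    n = 0
    for v in values:
        n = (n << 7) | v
    return n.to_bytes(7, 'big')

def unpack8(blob):
    n = int.from_bytes(blob, 'big')
    return [(n >> (7 * k)) & 0x7F for k in range(7, -1, -1)]

print(unpack8(pack8([0, 100, 35, 42, 99, 4, 45, 38])))  # round-trips the 8 values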
PS: this line of thinking is completely geared towards reducing the amount of space needed while remaining practical in terms of updating the data. I'm not saying it's going to be fast; in fact, if you'd go searching for all documents that have a relation-value of 0.89 or above the system will have to scan the entire table and even with modern disks that IS going to take a while.
Mind you that all of this is the result of half an hour brainstorming; I'm actually hoping that someone might chime in with a neater approach =)
I'm currently writing code to combine two worksheets containing different versions of data.
To do this, I first want to sort both via a key column, combine them and subsequently mark changes between the versions in the output worksheet.
As the data already amounts to several tens of thousands of lines and might some day exceed Excel's lines-per-worksheet limit, I want these calculations to run outside of a worksheet. It should also perform better that way.
Currently I'm thinking of a quicksort of the first and second data sets and then comparing them per key/line, using the result of the comparison to format the cells accordingly.
Question
I'd just love to know whether I should use:
List OR Array OR Collection OR Dictionary
OF Lists OR Arrays OR Collections OR Dictionaries
I have as of now been unable to determine the differences in codability and performance between these 16 possibilities. Currently I'm implementing an Array OF Arrays approach, constantly wondering whether this makes sense at all.
Thanks in advance, appreciate your input and wisdom!
Some time ago, I had the same problem with a client's macro. In addition to the really big number of rows (over 50000 and growing), it had the problem of becoming tremendously slow from a certain row (around 5000) onwards when a "standard approach" was taken, that is, when the inputs for the calculations on each row were read from the same worksheet (a couple of rows above); this process of reading and writing was what made it slower and slower (apparently, Excel starts from row 1, and the further down the row, the longer it takes to reach it).
I improved this situation by relying on two different solutions. Firstly, I set a maximum number of rows per worksheet; once it was reached, a new worksheet was created and the reading/writing continued there (from the first rows). The other change was moving the reading/writing in Excel to reading from temporary .txt files and writing to Excel (all the lines were read right at the start to populate the files). These two modifications improved the speed a lot (from half an hour to a couple of minutes).
Regarding your question, I wouldn't rely too much on arrays with a macro (although I am not sure how much information each of these 10000 lines contains); but I guess that this is a personal decision. I don't like collections too much because they are less efficient than arrays, and the same goes for dictionaries.
I hope that this "short" comment will be of some help.