VBA Tying Up in a Do While Loop

I'm using Excel 2010. I have a Do While loop processing a large table over 100,000 rows long. If it finds a particular cell in the table, it inserts two rows after that cell and copies the contents of that row into the two new blank rows just created. The loop works fine until it gets to about the 20,000th row, and then it locks up. Up to that point it processes perfectly, and it does not always lock up on the same row. I'm using a copy, then a paste special, to duplicate the row. After the copy/paste is done for the row, I clear the clipboard with "Application.CutCopyMode = False". If I comment out the copy/paste, the loop completes successfully.
For the amount of data that I'm working with, I would guess that it will insert about 30,000 rows based on the original table. Is there anything odd about copy/paste special that I should know about?

You are working with a table, so why do you need to insert rows at all?
Append them at the end of the table instead. If order is an issue, a table can be put back into a particular order with a sort (the sort key is probably already implicit in your table).
Better yet, append the new rows to an in-memory table object and paste the entire object onto the end of the original table when your loop is complete. This way you also avoid processing your inserted rows, get much simpler logic in the loop, and the process will probably run faster.
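A minimal sketch of that idea, assuming the data starts at A2 on the active sheet, the trigger value lives in column A, and each row spans columns A:L; the FLAG value, the column count, and the sub name are all placeholders to adjust:
Sub AppendDuplicatesAtEnd()
    Const FLAG As String = "DUPLICATE_ME"   ' placeholder trigger value
    Const LAST_COL As Long = 12             ' table assumed to span columns A:L
    Dim ws As Worksheet, lastRow As Long, r As Long, writeRow As Long
    Dim rowVals As Variant
    Set ws = ActiveSheet
    lastRow = ws.Cells(ws.Rows.Count, "A").End(xlUp).Row
    writeRow = lastRow + 1
    For r = 2 To lastRow                    ' row 1 assumed to hold headers
        If ws.Cells(r, "A").Value = FLAG Then
            rowVals = ws.Cells(r, 1).Resize(1, LAST_COL).Value
            ' write the two copies past the bottom instead of inserting rows
            ws.Cells(writeRow, 1).Resize(1, LAST_COL).Value = rowVals
            ws.Cells(writeRow + 1, 1).Resize(1, LAST_COL).Value = rowVals
            writeRow = writeRow + 2
        End If
    Next r
End Sub
Re-sort on an order key afterwards if the original row order matters. Because nothing is copied or inserted, the clipboard never comes into play, which should sidestep the lock-up entirely.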

First, I commented out the pastes and it still failed; the insert of the two rows also failed without the copy and the pastes. My solution was to first add an order column and sort the starting table. When I found a row that needed duplication, I read that row into an array and then wrote the array to the row past the bottom of the table. After processing the table, I reordered the rows using the order column I had created and then deleted that column. It runs faster than the copy/paste, but I lost the cell formatting in the new rows.
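If the lost formatting matters, one hedged refinement is to paste formats only onto each pair of appended rows, inside the If block of the sketch above, right after the values are written (r and writeRow are the counters from that sketch):
ws.Rows(r).Copy                                   ' formats travel with the copy
ws.Rows(writeRow).Resize(2).PasteSpecial Paste:=xlPasteFormats
Application.CutCopyMode = False                   ' clear the clipboard, as in the question
A formats-only paste is much lighter than the full-row copy/paste, though it does touch the clipboard again, so watch whether the hang returns.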

Related

Refresh rows in results window

I have some rows that all contain the same flag value in a column and don't have anything else in common. When I run my program, the flag is going to be updated, so re-running the query will no longer find the same rows.
Is there a way to run a query, and then later just refresh the rows as long as they are still in the results window?
Not really. You can cut and paste what is in the results window and create INSERT statements out of that data. Unlike xBase solutions, once you commit a delete your data is gone; you have to have an external copy or backup to restore from to put the data back.

VB.NET Deleting row from dataTable is removing all rows

I am having one heck of a time deleting one row from a datatable.
I'm using this code:
Dim foundRow As DataRow() = nodes.Select("identifier LIKE '*Scene Root*'")
If foundRow.Count > 0 Then foundRow(0).Delete()
nodes.AcceptChanges()
The problem is this is removing ALL rows from the datatable.
Dset.Tables("node").Rows(0).Delete()
That also deletes all rows from the table. I am a little confused as to why this is happening.
Help me regain my sanity!
I should add: I have single-stepped the first example, and it finds one row, and it IS the row I want to delete, but the actual .Delete is deleting every row in the table.
Maybe it's what's in the table?
It seems likely your code is running repeatedly. Try putting a Debug.WriteLine("Hello, world") in there and see how many times the message appears each time it's supposed to delete one row.

VBA - Need help to average rows if data is present in another column

I have an Excel sheet that we may keep adding rows to or deleting rows from, and I have an average value present in some cell. I want the formula to check whether there is text in another column before including a row's data in the average.
Right now, if I insert another row, I have to manually update the average formula.
Is there a way to write a formula that, wherever column A is not empty, considers the corresponding data in column G for the average?
There are a lot of approaches to this. My current favourite is a cell:INDEX(...) range expression. For instance, to find the last populated cell in the first continuously populated range between B1 and B500, I would use (probably as a named range) $B$1:INDEX($B$1:$B$500,MATCH(TRUE,$B$1:$B$500="",0)-1), entered as an array formula.
This approach is great because it's non-volatile, so it shouldn't bog your worksheet down. It might be vulnerable to the $B$500 cap gradually shrinking if you're only ever deleting rows, though. The alternatives are referencing the whole column ($B:$B), which is usually dog slow in modern Excel, or using OFFSET, which never shrinks but is volatile.
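For the conditional average itself, assuming the test column is A and the values to average are in G (both assumptions taken from the question), a single AVERAGEIF may be all that's needed:
=AVERAGEIF($A$1:$A$500, "<>", $G$1:$G$500)
The "<>" criterion matches non-blank cells in column A, so rows inserted or deleted inside the range need no manual formula edits.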

Delete Entire Rows That Have Multiple Matching Cells

I created a macro that copies some information from one sheet in my workbook to another to match some criteria, so I can import the info into a program. The only problem is that after the macro runs there are some blank rows and a couple of duplicates. I have 12 columns of info, and I would like the macro to compare the entries in columns D, E, F, G and L with the row above them, so D2, E2, F2, G2 and L2 would be compared to D1, E1, F1, G1 and L1. If all five of the entries in these cells match those of the previous row, then delete the entire row.
I've found some code that matches one cell or looks for duplicates in a certain column, but nothing that looks at and matches multiple columns, and I'm so new to this that I'm having trouble even getting started.
Any and all input is welcome.
You're going to have to put in the logic of your program yourself, but use something like:
Worksheets("Sheet1").Range("A1").Offset(i, 0).Resize(1, colnum).Delete Shift:=xlUp
An easy way to find the commands you need is to record a macro and see what Excel uses to build that macro.
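A hedged sketch of that comparison logic, assuming the data sits on Sheet1 and starts in row 1 as in the question; it walks from the bottom up so a deletion never shifts rows that are still waiting to be checked:
Sub DeleteRowsMatchingRowAbove()
    Dim ws As Worksheet, r As Long
    Set ws = Worksheets("Sheet1")           ' assumed sheet name
    For r = ws.Cells(ws.Rows.Count, "D").End(xlUp).Row To 2 Step -1
        If ws.Cells(r, "D").Value = ws.Cells(r - 1, "D").Value _
           And ws.Cells(r, "E").Value = ws.Cells(r - 1, "E").Value _
           And ws.Cells(r, "F").Value = ws.Cells(r - 1, "F").Value _
           And ws.Cells(r, "G").Value = ws.Cells(r - 1, "G").Value _
           And ws.Cells(r, "L").Value = ws.Cells(r - 1, "L").Value Then
            ws.Rows(r).Delete Shift:=xlUp   ' all five columns match the row above
        End If
    Next r
End Sub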

Speed up the deletion of duplicates

I coded a VBA script in Excel which adds new data into a Datasheet with previous information. Before doing that, the new data is copied into a provisional Datasheet. To prevent duplicates, I create an additional column and do a VLOOKUP of IDs. If the ID from the new imported data is already in the Datasheet with the old data, this row is marked as duplicated and will be deleted. The "non-duplicated rows" are then copied into the final Datasheet, where all the data is stored.
Right now I use a whole-column reference (A:A) in the VLOOKUP, and I don't know if this is the reason the VBA script needs more resources and time to run each day. When I coded it for the first time, I tested with no more than 4,000 rows in the original Datasheet and 4,000 rows of imported data, and the macro finished after 90 seconds. Right now it needs more than 5 minutes, while the Datasheet holding the data is only about 40,000 rows and the new data is always around 4,000 rows.
Should I dynamically reference the range in the VLOOKUP instead of using A:A, or does it not matter in terms of speed?
As mentioned in my comment, there certainly is a way to accomplish this task using VBA, but sometimes the simplest solution is best. I would recommend combining all of the records each time and then using the "Remove Duplicates" function under the "Data" ribbon, keyed on the column that holds your unique value.
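That step can also be scripted; a minimal sketch, assuming the combined data sits in a contiguous block starting at A1 on a sheet named "Data", with headers and the unique ID in column A (the sheet name and sub name are illustrative):
Sub RemoveDuplicateIDs()
    With Worksheets("Data")                 ' assumed sheet name
        ' dedupe on column 1 (the ID); no helper column or VLOOKUP needed
        .Range("A1").CurrentRegion.RemoveDuplicates Columns:=1, Header:=xlYes
    End With
End Sub
RemoveDuplicates keeps the first occurrence of each value, so append the new rows below the old data if the existing versions should win.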