Refresh rows in results window - sql

I have some rows that all contain the same flag value in one column, and they don't have anything else in common. When I run my program, the flag will be updated, so re-running the query will no longer find the same rows.
Is there a way to run a query, and then later just refresh the rows as long as they are still in the results window?

Not really. You can cut and paste what is in the results window and create INSERT statements out of that data. Unlike xBase solutions, once you commit a delete your data is gone; you have to have an external copy or backup to restore from if you want to put it back.

Related

Add new column to existing table Pentaho

I have a table input and I need to add a calculation to it, i.e. add a new column. I have tried:
doing the calculation and then feeding it back. Obviously, that appended the new data on top of the old data.
doing the calculation and then feeding it back, but truncating the table first. The process got stuck at some point; I assume I was truncating the table while data was still being extracted from it.
using a stream lookup and then feeding back. Of course, that also piled the data on top of the existing data.
using a stream lookup where I pull the data from the table input, do the calculation, at the same time pull the data from the same table and do a lookup based on the unique combination of date and id, and then use the 'Update' step.
The last attempt has been running for a while, so I am fairly sure it is not the right option either, but I have exhausted my ideas.
It seems that you need to update the table your data came from with this new field. Use the Update step with fields A and B as the keys.
Actually, once you connect the hop, the result of the first step is automatically carried forward to the next step. So, say you have a Table Input step and then add a Calculator step where you create the third column: after writing the logic, right-click the Calculator step and click Preview, and you will get the result with all three columns.
I'd say your issue is not ONLY in the Pentaho implementation; there are some things you can do before reaching data staging in Pentaho.
'Workin Hard' is correct when he says you shouldn't use the same table. Instead, leave the input untouched and just insert the new values into a new table. It doesn't have to be a new table EVERY time; instead of truncating the original, you truncate the staging (output) table.
How many 'new columns' will you need? Will every iteration of this run create a new column in the output, or will you always have a 'C' column that is A+B or some other calculation? I'm sorry, but this isn't clear. If it is the latter, you don't need Pentaho for the transformation: updating the 'C' column with a formula based on A+B can be done directly in most relational DBMSs with a simple UPDATE statement. Yes, it can be done in Pentaho, but you're adding a lot of overhead and processing time.

Microsoft Access Table Shows Up Blank, But Query Correctly Pulls Data From Table

I am having an issue with my Microsoft Access database. One of my tables looks completely blank, but it shows 11632 records in the record count at the bottom. Though the table shows up blank, when I run the query it pulls the correct data from this table, so I know the data is there; it is just not appearing for some reason. I have tried Access 2013 and 2016 on a different computer, and both have the same result. I have also tried compacting and repairing, and exporting the table, but the file it exports to also appears blank aside from the field names. Any ideas on what I could try?
Thanks!
Turn your import into a two-step process (or more). Write it raw into a scratch-pad table, then fire an append query that has the appropriate criteria so only valid records go into the final table.
This isn't unusual at all given that some outside data sources may not have the controls to prevent bad records. Sometimes one must "wash" the data with several different query criteria/steps in order to filter out the bad guys.
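As a rough sketch of that two-step idea in Access VBA (the table names tblScratch and tblFinal and the field names ID and CustomerName are placeholders, not details from the question):

Sub AppendValidRecords()
    ' Step 1: the raw file has already been imported into tblScratch.
    ' Step 2: append only rows that pass the criteria into the final table.
    Dim db As DAO.Database
    Set db = CurrentDb
    db.Execute "INSERT INTO tblFinal (ID, CustomerName) " & _
               "SELECT ID, CustomerName FROM tblScratch " & _
               "WHERE ID Is Not Null AND (CustomerName & '') <> ''", dbFailOnError
    ' Optionally clear the scratch table for the next import.
    db.Execute "DELETE FROM tblScratch", dbFailOnError
End Sub

The WHERE clause is where you "wash" the data; add whatever conditions define a valid record for your file.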

VBA Tying Up in a Do While Loop

I'm using Excel 2010. I have a Do While loop processing a large table over 100,000 rows long. If it finds a particular cell in the table, it inserts two rows after that cell and copies the contents of that row to the two new blank rows just created. The loop works fine until it gets to about the 20,000th row and then it locks up; up to that point it processes perfectly. It does not always lock up on the same row. I'm using a Copy, then a Paste Special, to duplicate the row. After the copy/paste is done for the row, I clear the clipboard with "Application.CutCopyMode = False". If I comment out the copy/paste, the loop completes successfully.
For the amount of data that I'm working with, I would guess that it will insert about 30,000 rows based on the original table. Is there anything odd about copy/paste special that I should know about?
You are working with a table, so why do you need to insert rows?
Append them to the end of the table. If the order is an issue, the table can be put back into a particular order with a sort (using a sort key that is probably already implicit in your table).
Better yet, append the new rows to an in-memory table object and paste the entire object onto the end of the original table when your loop is complete. That way you also avoid processing your inserted rows, get much simpler logic in the loop, and the process will probably run faster.
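A minimal sketch of that collect-then-append idea, assuming the value being searched for is the literal "FLAG" in column A of a sheet named Data, with headers in row 1 (all of those are assumptions, not details from the question):

Sub DuplicateFlaggedRows()
    ' Collect each flagged row's values in memory and write them below the
    ' existing data instead of inserting rows in the middle of the table.
    Dim ws As Worksheet, lastRow As Long, lastCol As Long
    Dim r As Long, outRow As Long, rowVals As Variant
    Set ws = ThisWorkbook.Worksheets("Data")
    lastRow = ws.Cells(ws.Rows.Count, "A").End(xlUp).Row
    lastCol = ws.Cells(1, ws.Columns.Count).End(xlToLeft).Column
    outRow = lastRow + 1                               ' first empty row below the table
    For r = 2 To lastRow                               ' row 1 assumed to be headers
        If ws.Cells(r, "A").Value = "FLAG" Then
            rowVals = ws.Range(ws.Cells(r, 1), ws.Cells(r, lastCol)).Value
            ws.Range(ws.Cells(outRow, 1), ws.Cells(outRow, lastCol)).Value = rowVals
            ws.Range(ws.Cells(outRow + 1, 1), ws.Cells(outRow + 1, lastCol)).Value = rowVals
            outRow = outRow + 2
        End If
    Next r
    ' Re-sort on an order column afterwards if the original order matters.
End Sub

Since only values are written, the new rows will not keep the source formatting, which is the same trade-off described in the follow-up below.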
First, I commented out the pastes and it still failed; the insert of the two rows also failed without the copy and pastes. My solution was to first add an order column and sort the starting table. When I found a row that needed duplication, I read that row into an array and then wrote the array to the row past the bottom of the table. After processing the table, I reordered the rows using the order column I had created and then deleted that column. It runs faster than the copy/paste, but I lost the cell formatting in the new rows.

Trying to use VBA to delete an entire row if one field is null/blank

So I am working on a project where I import an Excel file into Access. When I do the import (the tables in Excel have the same headings as in Access), I tend to get a bunch of extra rows, because my Excel file has formulas behind most of the cells; even though a cell appears to be empty, Access transfers the row even though nothing is actually entered in it.
So my question is: is there a way, using VBA in Access, that once the Excel file is imported I can automatically loop through all the rows in a specific table and delete an entire row based on specific criteria? For example, if a row contains an empty field, delete the whole row. This would save me a lot of time compared with manually searching through the table for blank fields and deleting the rows myself.
I do have some experience with VBA, but I'm unsure how to go about this. I was trying to use a DELETE SQL statement but couldn't figure out how to do it properly, since I need it to run from VBA.
Create a SELECT query that pulls back the rows you want to delete.
Once the query is finished to your liking, go into SQL View for the query and remove everything before the FROM clause; type DELETE there instead, then save the query. It should look like "DELETE FROM blah WHERE blah, blah, blah".
Now you have a query that will delete the rows you want it to. To run it through VBA (note that you can also run it manually just by double-clicking it in the Queries pane), put the following line wherever you want it to run:
DoCmd.OpenQuery "yourDeleteQueryName"
Note that yourDeleteQueryName should be replaced with the name of your delete query.
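If you would rather skip the saved query and run the SQL straight from VBA, a sketch along these lines should work; the table name tblImport and field name RequiredField are placeholders for whatever your imported table actually uses:

Sub DeleteBlankRows()
    ' Remove rows where the required field is Null or an empty string.
    CurrentDb.Execute "DELETE FROM tblImport " & _
                      "WHERE RequiredField Is Null OR RequiredField = ''", dbFailOnError
End Sub

CurrentDb.Execute avoids the confirmation prompts you would get from DoCmd.RunSQL, and dbFailOnError raises an error if the delete cannot complete.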

Read Excel rows, mark rows read

I need to use VBA to read rows out of Excel into another application, but if the process dies in the middle, I need to know which rows were read.
Is the best way to put something in a column, per row, that says the row was done, and then save after every row is read?
That doesn't seem like a great way to do it.
Any help would be great. Thanks everybody.
I usually write the row to another sheet, then delete it from the original. That way your original is always what's unwritten, but you still have the written data. If you can't muck with the original, start by copying everything to a new sheet, then clearing contents as you process.
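A rough sketch of that pattern, assuming the unprocessed data lives on a sheet named Source (headers in row 1) and processed rows are moved to a sheet named Done; the commented-out ProcessRow call stands in for whatever actually sends the row to the other application:

Sub ProcessAndMoveRows()
    Dim wsSrc As Worksheet, wsDone As Worksheet
    Dim doneRow As Long
    Set wsSrc = ThisWorkbook.Worksheets("Source")
    Set wsDone = ThisWorkbook.Worksheets("Done")
    ' Always work on the first data row; Source shrinks as rows are processed,
    ' so a crash leaves exactly the unprocessed rows behind.
    Do While Application.WorksheetFunction.CountA(wsSrc.Rows(2)) > 0
        ' ProcessRow wsSrc.Rows(2)            ' placeholder: send the row to the other application
        doneRow = wsDone.Cells(wsDone.Rows.Count, "A").End(xlUp).Row + 1
        wsSrc.Rows(2).Copy wsDone.Rows(doneRow)
        wsSrc.Rows(2).Delete
    Loop
End Sub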
Read the entire range into an array (matrix), and keep track of the position (index), saving the position value to a file or database.
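As a sketch of that idea, the index of the last row that finished can be written to a small checkpoint file after each row, so a restart can resume from the next one; the sheet name, the A2:C range, and the checkpoint path are all assumptions:

Sub ProcessWithCheckpoint()
    Dim data As Variant, startAt As Long, i As Long, f As Integer
    Dim wsSrc As Worksheet
    Const CHECKPOINT As String = "C:\temp\lastrow.txt"    ' assumed path for the saved position
    Set wsSrc = ThisWorkbook.Worksheets("Source")
    data = wsSrc.Range("A2:C" & wsSrc.Cells(wsSrc.Rows.Count, "A").End(xlUp).Row).Value
    startAt = 1
    If Dir(CHECKPOINT) <> "" Then                         ' resume after the last finished row
        f = FreeFile: Open CHECKPOINT For Input As #f
        Input #f, startAt
        Close #f
        startAt = startAt + 1
    End If
    For i = startAt To UBound(data, 1)
        ' ProcessRow data(i, 1), data(i, 2), data(i, 3)   ' placeholder for the real write
        f = FreeFile: Open CHECKPOINT For Output As #f    ' record progress after each row
        Print #f, i
        Close #f
    Next i
End Sub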
Not knowing what application you are writing to, it might be best to check that the data does not already exist in the destination application before each and every write. Make an index or some other autonumber (to use a database term) in your destination app. This way you don't have to store state data in your spreadsheet and save it after every row, which will slow down your VBA big time.