I'm working on building reports outside of NetSuite (in order to join this data with data from other source systems) using data pulled into Redshift from the NetSuite back-end tables. I have several tables that have been completely piped into Redshift, against which I write my queries. In trying to recreate some values from the monthly P&Ls, I noticed my totals were not tying out with what is shown in the NetSuite UI. After troubleshooting with our finance team, it appears that there are 3 invoices they deleted that are still showing up in the Transactions table. I do not see an IsDeleted field or anything similar. How can I identify which records in the table have been deleted, so I can filter them out of my results?
For Transactions, as other posters have said, use Deleted Records, but a word to the wise here: Deleted Records only tracks the transactions themselves. Therefore, if your end users delete some lines within a transaction, those line deletions WON'T show up in Deleted Records.
Some commenters said to look in System Notes, but in our case we only get the new and old version_id for that type of change in System Notes. Moreover, we never found a way to get what it mapped to on the ODBC side. (Correct me if I am wrong; I would be more than happy to hear of a better way than the ugly hack we found.)
The only workaround we found in our process here is to load all transactions with last_modified_date > {ts'last Import date'} into a temporary table and check whether some lines for those transactions were deleted, in addition to looking into Deleted Records to find the deleted transactions themselves. That is the only way we were able to match our P&L reports long term.
The logic behind this is that, luckily, in our processes the end user must always edit the transaction itself to delete some lines. Therefore, when they save their changes, the transaction itself gets a new modified date.
We asked NetSuite support directly, and they told us they do not have an official table to track the deletion of lines.
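To make the line-level diff concrete, here is a rough sketch of the comparison on the Redshift side. The table and column names (stg_transaction_lines, transaction_lines, transaction_id, line_id) are illustrative, not NetSuite's actual ODBC schema; the assumption is that stg_transaction_lines holds the freshly re-pulled lines for every transaction whose last_modified_date is later than the previous import:

-- Warehouse lines for recently modified transactions that are missing from the
-- fresh pull are assumed to have been deleted in NetSuite.
SELECT t.transaction_id, t.line_id
FROM transaction_lines t
LEFT JOIN stg_transaction_lines s
       ON s.transaction_id = t.transaction_id
      AND s.line_id = t.line_id
WHERE t.transaction_id IN (SELECT transaction_id FROM stg_transaction_lines)
  AND s.line_id IS NULL;
-- Transactions deleted outright won't appear in the fresh pull at all;
-- those are the ones you find via Deleted Records instead.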
You can create a saved search of Deleted Records (record type of Invoice) in NetSuite. Export it to CSV or Excel, then use that file to update the Redshift table and tag the deleted records.
In the future you could make an API call to Redshift (if available) whenever a NetSuite record gets deleted, to update/tag the deleted record in Redshift. That way you don't need to generate the deleted-records export.
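A minimal sketch of the Redshift side of that, assuming the saved-search export has been loaded into a staging table named deleted_invoices_stg and that you are willing to add an is_deleted flag to your warehouse copy of the transactions table; the bucket path, IAM role and column names are placeholders:

ALTER TABLE transactions ADD COLUMN is_deleted BOOLEAN DEFAULT FALSE;

-- load the exported CSV of deleted records into the staging table
COPY deleted_invoices_stg
FROM 's3://your-bucket/deleted_invoices.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/your-redshift-role'
CSV IGNOREHEADER 1;

-- tag the matching transactions so reports can filter them out
UPDATE transactions
SET is_deleted = TRUE
WHERE transaction_id IN (SELECT transaction_id FROM deleted_invoices_stg);

Your P&L queries then just add WHERE NOT is_deleted.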
DELETE is logged in TRANSACTION_HISTORY
Related
I am working on a program that is supposed to insert hundreds of rows into the database per run.
The problem is: if the inserted data turns out to be wrong, how can we recover from that run? Currently I only have a log file (in a format I created) that records the raw data that gets inserted (no metadata, no primary keys). Is there a way to create a log the database can understand, so that when we want to undo the insertion we can feed the database that log file?
Or, if there is an alternative mechanism for undoing an operation from a program, kindly let me know. Thanks.
The fact that this is only hundreds of rows makes it susceptible to the great-grandmother of all undo mechanisms:
have a table importruns with a row for each run you do. I assume it has an integer auto-increment PK
add a field to your data table that carries the PK of the import run
for insert-only runs, you just need to DELETE FROM sometable WHERE importid=$whatever
If you also have replace/update imports, go one step further
for each data table, have a corresponding table that has one more field: superseededby
for each row you update/replace, place an original copy of the row in this table plus the import id in superseededby
to revert, you additionally run INSERT INTO originaltable SELECT * FROM superseededtable WHERE superseededby=$whatever
You can clean up superseededtable for known-good imports, to make sure storage doesn't grow without bound.
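Put together as a sketch in generic SQL, with a hypothetical data table sometable(id, payload) and the auto-increment syntax left to your RDBMS; the revert assumes the update/replace path also stamps the new row versions with the run's importid:

CREATE TABLE importruns (
    importid   INTEGER PRIMARY KEY,    -- make this auto-increment in your RDBMS of choice
    started_at TIMESTAMP
);

CREATE TABLE sometable (
    id       INTEGER PRIMARY KEY,
    payload  VARCHAR(200),
    importid INTEGER REFERENCES importruns (importid)
);

-- shadow table: same columns plus the run that superseded the row
CREATE TABLE superseededtable (
    id            INTEGER,
    payload       VARCHAR(200),
    importid      INTEGER,
    superseededby INTEGER REFERENCES importruns (importid)
);

-- revert run 42: drop everything it inserted or replaced...
DELETE FROM sometable WHERE importid = 42;
-- ...then restore the original versions of the rows it replaced
INSERT INTO sometable (id, payload, importid)
SELECT id, payload, importid
FROM superseededtable
WHERE superseededby = 42;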
You have several options, depending on when you notice the error.
If you know there is an error with the data while the run is still in progress, you can use the transactions API to roll back the changes of the current transaction.
In case you only find out about the error later, you can create your own log. Create an identifier for each run, and add a field to the relevant table where that id is inserted. This allows you to identify exactly which run each row came from. You can also create a stored procedure that deletes rows for a given run id.
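Hedged sketches of both ideas, with made-up table and column names (the transaction syntax varies slightly by RDBMS):

-- Option 1: the error is detected during the run itself
BEGIN TRANSACTION;
INSERT INTO target_table (id, payload) VALUES (1, 'row one');
INSERT INTO target_table (id, payload) VALUES (2, 'row two');
-- validation failed? throw everything away; otherwise COMMIT
ROLLBACK;

-- Option 2: the error is only discovered later
-- target_table is assumed to carry a run_id column stamped by the program
DELETE FROM target_table WHERE run_id = 17;   -- 17 = the id of the bad run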
I have a problem that I haven't been able to come up with a solution for yet. I have a database (actually thousands of them at customer sites) that I want to extract data from periodically. I'd like to do a full data extract one time (select * from table) then after that only get rows that have changed.
The challenge is that there aren't any updated-date columns in most of the tables that could be used to constrain the SQL query. I can't use a trigger-based approach, nor change the application that writes to the database, since it's another group that develops the app and they are way backed up already.
I may be able to write to the database tables when doing the data extract, but would prefer not to do that. Does anyone have any ideas for how we might be able to do this?
You will have to programmatically mark the records. I see suggestions of an auto-incrementing field, but that will only catch newly inserted records. How will you track updated or deleted records?
If you only want newly inserted records, an auto-incrementing field will do the job; on subsequent data dumps, grab everything since the last value of the auto-increment field and then record the current value.
If you want updates, the minimum I can see is a last_update field and probably a trigger to populate it. If the last_update is later than the last data dump, grab that record. This will get inserts and updates, but not deletes.
You could try something like an 'instead of delete' trigger, if your RDBMS supports it, that NULLs the last_update field. On subsequent data dumps, grab all records where this field is NULL and then delete them. But there would be problems with this (e.g. how to stop the app from seeing them between the logical and physical delete).
The most foolproof method I can see is a set of history (audit) tables to which each change gets written. Then you select your data dump from there.
By the way, do you only care about knowing that updates have happened? What if 2 (or more) updates have happened? The history table is the only way I can see you capturing that scenario.
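For the last_update variant, a sketch in SQL Server syntax; the table and column names are made up, and since the original poster ruled out triggers, treat this as an illustration of the idea only:

ALTER TABLE Orders ADD last_update DATETIME NULL;
GO
CREATE TRIGGER trg_Orders_LastUpdate
ON Orders
AFTER INSERT, UPDATE
AS
BEGIN
    -- stamp every inserted/updated row (direct trigger recursion is off by default)
    UPDATE o
    SET last_update = GETDATE()
    FROM Orders o
    JOIN inserted i ON i.OrderID = o.OrderID;
END;
GO
-- extract: everything touched since the previous dump
SELECT * FROM Orders
WHERE last_update > '2015-06-01T00:00:00';   -- timestamp recorded at the last dump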
This should isolate rows that have changed since your last backup, assuming DestinationTable is a copy of SourceTable including the key fields; if not, you could list out the important fields.
SELECT * FROM SourceTable
EXCEPT
SELECT * FROM DestinationTable
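The same construct run in the other direction surfaces rows that no longer exist in the source (plus the pre-update versions of changed rows), again assuming identical layouts; you would need the key columns to tell a delete from an update:

SELECT * FROM DestinationTable
EXCEPT
SELECT * FROM SourceTable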
Let's say I want to import all the customers (or all the rows in some other specific table) into some external system. Not all at once, but each one after it has been created in the database. To do that I have to keep a record of all the rows that have already been reported, because I want to find only the ones that have not been reported yet. Is it generally better to add a column for that, or to create some kind of batch-log table?
I'm using MS SQL Server, if that is relevant.
A simplified example:
select * from Customer where reportedToExternalSystem is null
or
select * from Customer where cus_id not in (select cus_id from integrationBatchLog)
Or are there maybe more ways to do this that might be even better? This is the first time I've done something like this, so I don't know the best practice yet.
The simple solution is to add a column that marks the row as imported: a status int (0/1) or, if you want to keep track of when it was imported, an imported date. This solution does have some limitations:
You can only import the row once. Do you need to import the customer again when the record is updated? Are you going to clear the imported flag when the customer is updated?
It causes the row to be locked when you update the row status. Are you sure the application that inserts the customer record will be happy with your code locking the records?
On some systems it causes the entire row to be written to the log for recovery. Depending on the size of the row, this can be a lot of log writing for just one field.
In a highly parallel import system you can have a lot of contention for resources. If one import program is locking the table, think how bad it would be if many import programs are locking the table at the same time.
If the customer data is updated several times between your import polling intervals, you will only see the latest data and will skip over the intermediate updates. This is only an issue if you care about the intermediate updates. For customers you might not care; for order statuses you might care a lot.
You have to modify the table structure. This might not be allowed by the source application due to data/support/political issues.
Besides putting a status column in the table, one technique that works well is to put a trigger on the table and mirror the import data to a second table. You would then 'consume' the data in the second table. This has several advantages:
It keeps the locking issues contained to the second table.
It allows you to process every update to the main table.
You can add an index to the second table that is used to keep track of the update statuses without the issues of changing the main table.
If you delete the rows from the second table (either immediately as they are consumed or after a short audit period), the size of the table/index will be kept to a minimum.
When I use this technique in SQL Server, I put the second table in a separate schema. Since most apps store their tables in dbo, you end up with dbo.Customers and Import.Customers. This helps you keep track of which tables you are importing and keeps you from having to come up with new names for your import tables.
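A minimal sketch of that pattern in SQL Server, with made-up column names for the customer table:

CREATE SCHEMA Import;
GO
CREATE TABLE Import.Customers (
    CustomerID   INT           NOT NULL,
    CustomerName NVARCHAR(200) NULL,
    ChangedAt    DATETIME2     NOT NULL DEFAULT SYSDATETIME(),
    Consumed     BIT           NOT NULL DEFAULT 0
);
GO
CREATE TRIGGER trg_Customers_Mirror
ON dbo.Customers
AFTER INSERT, UPDATE
AS
BEGIN
    -- mirror every insert/update into the import schema
    INSERT INTO Import.Customers (CustomerID, CustomerName)
    SELECT i.CustomerID, i.CustomerName
    FROM inserted i;
END;
GO
-- the import job consumes and flags rows here, never locking dbo.Customers
SELECT CustomerID, CustomerName FROM Import.Customers WHERE Consumed = 0;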
Unless you have to, don't complicate the implementation; go with the simplest solution possible. One important thing you should consider is how hard it would be to refactor this simple solution into a more general one, in case you need it.
In your case I see only one problem with upgrading from a column to a table: if you need a history of imports. Solution: make the reportedToExternalSystem column a DateTime (or Timestamp) type.
I would use a separate table indicating, say, the import date cross-referenced to the key of the record in the table you're tracking. In other words, a table with 3 columns: auto-increment key, record-id-from-other-table, import-date. Something like that. This also handles the case where a record is re-imported later; you'd have a record of all the imports by date.
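That is essentially the integrationBatchLog option from the question; a sketch in SQL Server terms, with column names assumed from the question's example:

CREATE TABLE integrationBatchLog (
    logId      INT IDENTITY(1,1) PRIMARY KEY,
    cus_id     INT      NOT NULL,
    importDate DATETIME NOT NULL DEFAULT GETDATE()
);

-- customers not yet reported (NOT EXISTS avoids the NULL pitfalls of NOT IN)
SELECT c.*
FROM Customer c
WHERE NOT EXISTS (SELECT 1 FROM integrationBatchLog l WHERE l.cus_id = c.cus_id);

-- after a successful export, record what was sent
INSERT INTO integrationBatchLog (cus_id)
SELECT c.cus_id
FROM Customer c
WHERE NOT EXISTS (SELECT 1 FROM integrationBatchLog l WHERE l.cus_id = c.cus_id);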
I prefer having a column for import status. Maintaining a separate log leads to time-consuming queries as the table grows. I only have a conceptual idea of how this works on SQL Server, but it seems to work. Keep posting!
I have a database that keeps record history. For each update to a record, the system will "deactivate" the previous record (along with all its children) by setting the "Status" column to "0".
Now it's not a problem yet... but eventually this system is going to have a lot of records, and history is more important than speed right now. Still, the more records are inserted, the slower searches become.
What is the best approach to archiving the records? I've had suggestions to create a cloned archive database to hold the data. I've also had the idea of storing all previous records in an XML file that can be read/loaded later if we need to dig up archived records.
You could create a separate partition containing only the active records, if your DBMS supports it. You can also add an index on Status so that the select ... from tbl where status=1 isn't incredibly slow.
http://msdn.microsoft.com/en-us/library/ms187802.aspx
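If the database happens to be SQL Server (the link above is SQL Server's partitioning documentation), a filtered index is an even lighter-weight option; the table and column names here are made up:

-- index only the active rows, so "where status = 1" stays fast as history piles up
CREATE NONCLUSTERED INDEX IX_Records_Active
ON dbo.Records (RecordID)
WHERE Status = 1;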
I'm trying to figure out what would be the best way to have a history on a database, to track any Insert/Delete/Update that is done. The history data will need to be coded into the front-end since it will be used by the users. Creating "history tables" (a copy of each table used to store history) is not a good way to do this, since the data is spread across multiple tables.
At this point in time, my best idea is to create a few history tables whose structure reflects the output I want to show to the users. Whenever a change is made to specific tables, I would update the history table with the data as well.
I'm trying to figure out what the best way to go about would be. Any suggestions will be appreciated.
I am using Oracle + VB.NET
I have very successfully used a model where every table has an audit copy - the same table with a few additional fields (timestamp, user id, operation type) - and 3 triggers on the first table for insert/update/delete.
I think this is a very good way of handling this, because tables and triggers can be generated from a model and there is little overhead from a management perspective.
The application can use the tables to show an audit history to the user (read-only).
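A compressed sketch of that model in Oracle, with a hypothetical two-column CUSTOMERS table and the three triggers collapsed into one for brevity:

-- audit copy: same columns plus timestamp, user and operation type
CREATE TABLE customers_audit (
    cust_id    NUMBER,
    cust_name  VARCHAR2(100),
    audit_ts   TIMESTAMP,
    audit_user VARCHAR2(30),
    audit_op   CHAR(1)            -- 'I', 'U' or 'D'
);

CREATE OR REPLACE TRIGGER customers_aud
AFTER INSERT OR UPDATE OR DELETE ON customers
FOR EACH ROW
BEGIN
    IF DELETING THEN
        INSERT INTO customers_audit VALUES (:OLD.cust_id, :OLD.cust_name, SYSTIMESTAMP, USER, 'D');
    ELSIF UPDATING THEN
        INSERT INTO customers_audit VALUES (:NEW.cust_id, :NEW.cust_name, SYSTIMESTAMP, USER, 'U');
    ELSE
        INSERT INTO customers_audit VALUES (:NEW.cust_id, :NEW.cust_name, SYSTIMESTAMP, USER, 'I');
    END IF;
END;
/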
We've got that requirement in our systems. We added two tables, one header and one detail, called AuditRow and AuditField. AuditRow contains one row per row changed in any other table, and AuditField contains one row per column changed, with the old value and new value.
We have a trigger on every table that writes a header row (AuditRow) and the needed detail rows (one per changed column) on each insert/update/delete. This system does rely on the fact that we have a GUID on every table that can uniquely represent the row. It doesn't have to be the "business" or "primary" key, but it's a unique identifier for that row so we can identify it in the audit tables. Works like a champ. Overkill? Perhaps, but we've never had a problem with auditors. :-)
And yes, the Audit tables are by far the largest tables in the system.
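Roughly, the two tables might look like the following; the column names are guesses at the shape described above, not the poster's actual schema (Oracle types, since the question is Oracle):

CREATE TABLE AuditRow (
    AuditRowId NUMBER        PRIMARY KEY,
    TableName  VARCHAR2(128) NOT NULL,
    RowGuid    RAW(16)       NOT NULL,   -- the per-row GUID mentioned above
    Operation  CHAR(1)       NOT NULL,   -- 'I', 'U' or 'D'
    ChangedBy  VARCHAR2(30)  NOT NULL,
    ChangedAt  TIMESTAMP     NOT NULL
);

CREATE TABLE AuditField (
    AuditFieldId NUMBER        PRIMARY KEY,
    AuditRowId   NUMBER        NOT NULL REFERENCES AuditRow (AuditRowId),
    ColumnName   VARCHAR2(128) NOT NULL,
    OldValue     VARCHAR2(4000),
    NewValue     VARCHAR2(4000)
);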
If you are lucky enough to be on Oracle 11g, you could also use the Flashback Data Archive
Personally, I would stay away from triggers. They can be a nightmare when it comes to debugging, and they're not necessarily the best choice if you are looking to scale out.
If you are using a PL/SQL API to do the INSERTs/UPDATEs/DELETEs, you could manage this with a simple shift in design, without the need (up front) for history tables.
All you need are 2 extra columns, DATE_FROM and DATE_THRU. When a record is INSERTed, DATE_THRU is left NULL. If that record is UPDATEd or DELETEd, just "end date" the record by setting DATE_THRU to the current date/time (SYSDATE). Showing the history is as simple as selecting from the table; the one record where DATE_THRU is NULL will be your current or active record.
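A hedged sketch of what that end-dating looks like inside such an API, using a hypothetical CUSTOMERS table; note the primary key must include DATE_FROM or be a surrogate, since several versions of the same cust_id coexist:

CREATE OR REPLACE PROCEDURE update_customer (
    p_cust_id   IN NUMBER,
    p_cust_name IN VARCHAR2
) AS
BEGIN
    -- end-date the current version instead of overwriting it
    UPDATE customers
       SET date_thru = SYSDATE
     WHERE cust_id = p_cust_id
       AND date_thru IS NULL;

    -- insert the new current version
    INSERT INTO customers (cust_id, cust_name, date_from, date_thru)
    VALUES (p_cust_id, p_cust_name, SYSDATE, NULL);
END update_customer;
/

-- the active record is simply the one that has not been end-dated
SELECT * FROM customers WHERE cust_id = 42 AND date_thru IS NULL;

A delete is just the same UPDATE without the re-insert.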
Now if you expect a high volume of changes, writing the old records off to a history table would be preferable, but I still wouldn't manage it with triggers; I'd do it with the API.
Hope that helps.