I have a simple package where I pull down every table from a remote source DB into my local server DB.
Every data flow task is simple source to destination.
The problem I have is that occasionally the download will stop and I won't get all the tables, or sometimes a table won't pull down all of its data.
All I want to do is have a table with all table names that I need to pull down.
After each table in my data flow completes I need to update a flag in my new table of table names so that there is a 1 for every table that fully downloads from the source to the destination. Any help would be appreciated.
A simple approach is to add an Execute SQL Task after each Data Flow Task that updates an audit table containing a LastExecutionDate column, which holds the last successful execution time.
UPDATE AuditTable
SET LastExecutionDate = GETDATE()
WHERE TableName = ?
The ? is the SSIS parameter mapped to the name of the destination table of the data flow concerned.
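If you want the 1/0 flag the question asks for rather than a date, a minimal sketch could look like this (the table and column names here are assumptions, not from the original package):

```sql
-- Hypothetical audit table: one row per table to copy
CREATE TABLE dbo.AuditTable (
    TableName         SYSNAME  NOT NULL PRIMARY KEY,
    Downloaded        BIT      NOT NULL DEFAULT 0,  -- 1 = fully copied this run
    LastExecutionDate DATETIME NULL
);

-- First step of the package: reset all flags
UPDATE dbo.AuditTable SET Downloaded = 0;

-- Execute SQL Task placed after each Data Flow Task succeeds
-- (? is the SSIS parameter mapped to the destination table name)
UPDATE dbo.AuditTable
SET Downloaded = 1,
    LastExecutionDate = GETDATE()
WHERE TableName = ?;
```

After the package finishes, any row still showing Downloaded = 0 identifies a table that did not complete.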
I need to perform some calculations using a few columns from a table. This table is updated every couple of hours and, every other day, generates duplicates on a couple of columns. There is no way to tell which row was inserted first, which affects my calculations.
Is there a way to copy these rows into a new table automatically as data gets added every couple of hours and perform calculations on the fly? This way whatever comes first will be captured into a new table for a dashboard and for other business use cases.
I thought of creating a stored procedure and using a job scheduler to run it, but I do not have admin access and cannot schedule jobs. Is there another way of doing this efficiently? Much appreciated!
Edit: My request for admin access is being approved.
Another way, as stated in the other answers, is:
Make a temp table.
Make a prod table.
Use a stored procedure to copy everything from the temp table into the prod table after each load.
Use the same stored procedure to clean out the temp table once the copy is done.
I don't know if this will work for you, but this is in general how we deal with large daily loads.
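The steps above could be sketched as a single stored procedure. Everything here is an assumption about your schema (TempLoad, ProdData, the key columns, and a LoadedAt timestamp used to decide which duplicate came first):

```sql
CREATE PROCEDURE dbo.MoveTempToProd
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- Keep only the first-arriving row per duplicate key before copying
    ;WITH ranked AS (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY KeyCol1, KeyCol2
                                  ORDER BY LoadedAt) AS rn
        FROM dbo.TempLoad
    )
    INSERT INTO dbo.ProdData (KeyCol1, KeyCol2, LoadedAt)
    SELECT KeyCol1, KeyCol2, LoadedAt
    FROM ranked
    WHERE rn = 1;

    -- Clean the temp table for the next load
    TRUNCATE TABLE dbo.TempLoad;

    COMMIT TRANSACTION;
END
```

Running the copy and the truncate in one transaction means a failure leaves the temp table intact for a retry.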
How do I create a trigger in Microsoft SQL Server to keep track of all deleted data from any table in the database in a single audit table? I do not want to write a trigger for each and every table in the database. There will be only one audit table, which keeps track of all the deleted data from any table.
For example:
If a row is deleted from the Person table, capture all of that row's data and store it in XML format in the audit table.
Please check my solution that I tried to describe at SQL Server Log Tool for Capturing Data Changes
The solution is built on dynamically creating triggers on selected tables to capture data changes (after insert, update, and delete) and store them in a general table.
Then a job executes periodically and parses the data captured in this general table. After the data is parsed, it is easier for a human to see which table field changed, along with its old and new values.
I hope this proposed solution helps you build your own.
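For the delete-only case in the question, one such generated trigger could look like the sketch below. The audit table layout and the Person table are assumptions; in practice you would generate a trigger like this per table, all writing to the same audit table:

```sql
-- Single audit table shared by all per-table triggers
CREATE TABLE dbo.DeletedRowsAudit (
    AuditId   INT IDENTITY(1,1) PRIMARY KEY,
    TableName SYSNAME  NOT NULL,
    DeletedAt DATETIME NOT NULL DEFAULT GETDATE(),
    RowData   XML      NOT NULL   -- the deleted row(s), serialized as XML
);
GO

-- Example trigger for one table (repeat/generate per table)
CREATE TRIGGER dbo.trg_Person_Delete
ON dbo.Person
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.DeletedRowsAudit (TableName, RowData)
    SELECT 'Person',
           (SELECT * FROM deleted FOR XML PATH('row'), ROOT('rows'), TYPE);
END
```

The `deleted` pseudo-table holds every row removed by the statement, so multi-row deletes are captured in one XML document.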
I have a staging table in a SQL Server job that dumps data into another table; once the data is in the transaction table, I truncate the staging table.
Now, the problem occurs if the job fails: the insert into the transaction table is rolled back, and all the data stays in the staging table. So the staging table already contains data, and if I rerun the job it merges the new data with the existing data in the staging table.
I want the staging table to be empty when the job runs.
Can I make use of temp table in this scenario?
This is a common scenario in data warehousing projects, and the answer is logical rather than technical. You have two approaches to deal with this scenario:
If your data is important, first check whether staging is empty. If the table is not empty, the last job failed; in that case, instead of a plain insert into staging, do an insert/update (upsert) operation and then continue with the job steps. If the table is empty, the last job succeeded, and the new data can simply be inserted.
If you can afford to lose the data from the last job, make a habit of truncating the table before running your package.
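The second approach can be a single T-SQL step placed at the start of the job (dbo.Staging is an assumed name; adapt the branch if you need to reprocess rather than discard):

```sql
-- First job step: ensure staging is empty before the new load
IF EXISTS (SELECT 1 FROM dbo.Staging)
BEGIN
    -- A previous failed run left data behind; either reprocess it
    -- (upsert into the transaction table) here, or simply discard it:
    TRUNCATE TABLE dbo.Staging;
END
```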
I want to create a PL/SQL procedure that executes at 6 pm every day and migrates the whole day's data, which is in a temporary table, to a base table. After a successful migration it should display a count of the rows processed.
Your question is not very clear; what I understood is that you want some data operation performed at a specific time. For that I would suggest you use Oracle Scheduler jobs. For more details on creating/scheduling/stopping jobs, refer to http://docs.oracle.com/cd/E11882_01/server.112/e25494/scheduse.htm#ADMIN12384
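A minimal sketch of the two pieces, assuming hypothetical table names temp_table and base_table (enable DBMS_OUTPUT in your session to see the row count):

```sql
-- Procedure that migrates the day's data and reports the row count
CREATE OR REPLACE PROCEDURE migrate_temp_to_base AS
  v_rows NUMBER;
BEGIN
  INSERT INTO base_table
  SELECT * FROM temp_table;
  v_rows := SQL%ROWCOUNT;                  -- rows just migrated
  DELETE FROM temp_table;                  -- clear the day's data
  COMMIT;
  DBMS_OUTPUT.PUT_LINE('Rows processed: ' || v_rows);
END;
/

-- Scheduler job that runs the procedure at 18:00 every day
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'MIGRATE_TEMP_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'MIGRATE_TEMP_TO_BASE',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=18; BYMINUTE=0; BYSECOND=0',
    enabled         => TRUE);
END;
/
```

Note that creating a scheduler job requires the CREATE JOB privilege, which ties back to the admin-access question above.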
I'm trying to figure out if there's a method for copying the contents of a table in a main schema into a table in another schema, and then somehow updating or "refreshing" that copy as the main schema's table gets updated.
For example:
schema "BBLEARN" has a users table:
SELECT * INTO SIS_temp_data.dbo.bb_users FROM BBLEARN.dbo.users
This selects and inserts 23k rows into the table bb_users in my placeholder schema SIS_temp_data.
Thing is, the users table in the BBLEARN schema gets updated on a constant basis, whether or not new users get added, or there are updates to accounts or disables or enables, etc. The main reason for copying the table into a temp table is for data integration purposes and is unrelated to the question at hand.
So, is there a method in SQL Server that will allow me to "update" my new table in the spare schema based on when the data in the main schema gets updated? Or do I just need to run a scheduled task that does a SELECT * INTO every few hours?
Thank you.
You could create a trigger that updates the spare table whenever an update or insert is performed on the main schema's table.
see http://msdn.microsoft.com/en-us/library/ms190227.aspx
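A rough sketch of such a trigger, assuming the users table has a user_id primary key (adjust the key column to match your schema, and add a matching AFTER DELETE trigger if removals should propagate too):

```sql
CREATE TRIGGER trg_users_sync
ON BBLEARN.dbo.users
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Remove any stale copies of the affected rows...
    DELETE t
    FROM SIS_temp_data.dbo.bb_users AS t
    JOIN inserted AS i ON i.user_id = t.user_id;

    -- ...then insert the current versions
    INSERT INTO SIS_temp_data.dbo.bb_users
    SELECT * FROM inserted;
END
```

This keeps the copy current row-by-row instead of re-running SELECT * INTO on a schedule, at the cost of a small overhead on every write to the source table.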