Problem with reading a database table between truncate/refresh - sql

We need to call an external web service to get some data and store it in a table locally. This process needs to be repeated every 10 minutes because the data that the external web service publishes changes rapidly. As part of this, we need to clear the entire table and re-insert the current data published by the web service.
The tricky situation is: what if, at the moment the table is truncated, a user queries the table and gets no results? This leads to invalid results being displayed to the user.
Can anyone please give me advice on this?

Use a transaction around both operations. Something like this (table name is a placeholder):
BEGIN TRANSACTION;
TRUNCATE TABLE your_table;
-- re-insert the freshly fetched data here
INSERT INTO your_table (...) SELECT ...;
COMMIT TRANSACTION;
Snapshot isolation guarantees that the data you see will be consistent.
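If snapshot isolation is not already enabled, it can be turned on per database; a SQL Server example, with a placeholder database name:
ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;
With READ_COMMITTED_SNAPSHOT on, readers at the default isolation level see the last committed rows instead of blocking on, or seeing the gap left by, the uncommitted truncate.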

If you can create a view over your table, then you can load the data into a new table, take however long you need to populate it, and then just alter the view to reference the new table.
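A minimal sketch of that approach in SQL Server syntax, with hypothetical view and table names:
-- readers always query the view, never the tables directly
CREATE VIEW dbo.CurrentFeed AS SELECT * FROM dbo.FeedData_A;
GO
-- refresh cycle: load the inactive table, then repoint the view
TRUNCATE TABLE dbo.FeedData_B;
-- load dbo.FeedData_B here; however long this takes, readers are unaffected
GO
ALTER VIEW dbo.CurrentFeed AS SELECT * FROM dbo.FeedData_B;
GO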

Related

DB2: Working with concurrent DDL operations

We are working on a data warehouse using IBM DB2 and we wanted to load data by partition exchange. That means we prepare a temporary table with the data we want to load into the target table and then use that entire table as a data partition in the target table. If there was previous data we just discard the old partition.
Basically you just do "ALTER TABLE target_table ATTACH PARTITION pname [starting and ending clauses] FROM temp_table".
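For illustration, one load might look roughly like this (table, partition name, and boundary values are placeholders; the SET INTEGRITY step is typically needed before the attached rows become visible):
ALTER TABLE target_table
  ATTACH PARTITION p_2024_06
  STARTING FROM ('2024-06-01') ENDING AT ('2024-06-30')
  FROM temp_table;
SET INTEGRITY FOR target_table IMMEDIATE CHECKED;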
It works wonderfully, but only for one operation at a time. If we do multiple loads in parallel or try to attach multiple partitions to the same table it's raining deadlock errors from the database.
From what I understand, the problem isn't necessarily parallel access to the target table itself (locking it changes nothing), but access to the system catalog tables in the background.
I have combed through the DB2 documentation, but the only reference to concurrent DDL statements I found at all was advice to avoid them. Surely the answer to this question can't simply be to not attempt it?
Does anyone know a way to deal with this problem?
I tried having a global, single synchronization table to lock before attaching any partitions, but it didn't help either. Either I'm missing something (implicit commits somewhere?) or some of the data catalog updates even happen asynchronously, which makes the whole problem much worse. If that is the case, is there any chance at all to query whether the attach is safe to perform at any given moment?

SQL Server: copy newly added rows from one table and insert into another automatically

I need to perform some calculations using a few columns from a table. This database table gets updated every couple of hours and generates duplicates on a couple of columns every other day. There is no way to tell which one was inserted first, which affects my calculations.
Is there a way to copy these rows into a new table automatically as data gets added every couple of hours and perform calculations on the fly? This way whatever comes first will be captured into a new table for a dashboard and for other business use cases.
I thought of creating a stored procedure and using a job scheduler to perform this, but I do not have admin access and cannot schedule jobs. Is there another way of doing this efficiently? Much appreciated!
Edit: My request for admin access is being approved.
Another way, in addition to what is stated in the answers, is:
Make a temp table.
Make a prod table.
Use a stored procedure to copy everything from the temp table into the prod table after any load has been done.
Use the same stored procedure to clean the temp table after the load is done.
Don't know if this will work for you, but this is in general how we deal with a huge amount of load on a daily basis.
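A rough T-SQL sketch of that pattern, with hypothetical table, column, and procedure names:
CREATE PROCEDURE dbo.MoveLoadedRows
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
        -- copy whatever the load left in the temp/staging table
        INSERT INTO dbo.ProdTable (Col1, Col2)
        SELECT Col1, Col2 FROM dbo.TempTable;
        -- clean the staging table for the next load
        TRUNCATE TABLE dbo.TempTable;
    COMMIT TRANSACTION;
END;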

MuleSoft ESB Batch Transactions

Thanks in advance for any help. Here is the scenario that I am trying to recreate in MuleSoft.
1,500,000 records in a table. Here is the current process that we use:
Start a transaction.
Delete all records from the table.
Reload the table from a flat file.
Commit the transaction.
In the end we need the table in a good state, hence the use of the transaction. If there is any failure, the data in the table will be rolled back to the initial valid state.
I was able to get the speed that we needed by using the Batch element (under 10 minutes), but it appears that transactions are not supported around the whole batch flow.
Any ideas how I could get this to work in MuleSoft?
Thanks again.
A slightly different workflow, but how about:
Load a temp table from the flat file.
If successful, drop the original table.
Rename the temp table to the original table name.
You can keep your Mule batch processing workflow to load the temp table and forget about rolling back.
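On Oracle, for example, the swap could be as simple as this (table names are placeholders):
-- after the Mule batch flow has loaded records_temp successfully
DROP TABLE records_table;
ALTER TABLE records_temp RENAME TO records_table;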
For this you might try the following:
Use XA transactions (since more than one connector will be used, regardless of whether the same transport is used or not).
Enlist the resource used in the custom Java code in the transaction.
This can also be applied within the same transport (e.g. JDBC on the Mule configuration and also on the Java component), so it's not restricted to the case demonstrated in the PoC, which is only given as a reference.
Please refer to this article: https://dzone.com/articles/passing-java-arrays-in-oracle-stored-procedure-fro
Poll records from the temp table. You can construct an array with any number of records; with a batch size of 100K, 1,500,000 records will involve only 15 round trips in total.
To determine error records you can insert them into an error table, but that has to be implemented in the database procedure.
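A minimal PL/SQL sketch of a procedure that accepts an array of records for bulk insert (the type, procedure, table, and column names are hypothetical, not taken from the linked article):
CREATE OR REPLACE TYPE record_payload_tab AS TABLE OF VARCHAR2(4000);
/
CREATE OR REPLACE PROCEDURE load_record_batch (p_rows IN record_payload_tab) AS
BEGIN
  -- one call from the client inserts the whole array in a single round trip
  FORALL i IN 1 .. p_rows.COUNT
    INSERT INTO target_table (payload) VALUES (p_rows(i));
END;
/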

Get SQL Logs/Error as output of the script

I am working on a web application. In this application we are using inline SQL queries for database operations. The reason is that we have a number of different databases with the same structure, and we need to work across all of them. If we proceeded with stored procedures, there would be a chance of exceptions if we missed executing a script on any of the databases.
In the existing application, employee details are inserted using separate database calls: one call to insert the details into the EmployeeMaster table, another call for the Address table, the next for the Qualification table, and so on.
Now I am combining these multiple database calls into a single call. I would like to get all the exception/log details from the executed script.
I am familiar with RAISERROR, but that way we can only get the details of a single error.
I want to get messages like:
1 emp100 Created
2 Details Inserted to Address table
3 Qualification1 not exist In the Qualification Master table
I know we can achieve it using a temporary table.
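For reference, the temporary-table approach mentioned above might look roughly like this in T-SQL (table names and messages are placeholders):
CREATE TABLE #ScriptLog (Seq INT IDENTITY(1,1), Msg NVARCHAR(400));
BEGIN TRY
    -- ... insert into EmployeeMaster ...
    INSERT INTO #ScriptLog (Msg) VALUES ('emp100 Created');
    -- ... insert into Address ...
    INSERT INTO #ScriptLog (Msg) VALUES ('Details Inserted to Address table');
END TRY
BEGIN CATCH
    INSERT INTO #ScriptLog (Msg) VALUES (ERROR_MESSAGE());
END CATCH;
-- return the whole log to the caller
SELECT Seq, Msg FROM #ScriptLog;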
Is there any better way to solve this issue?
Any help would be appreciated.
Regards,
Ranish

In Oracle can you create a table that only exists while the database is running?

Is there a way in Oracle to create a table that only exists while the database is running and is only stored in memory? So if the database is restarted I will have to recreate the table?
Edit:
I want the data to persist across sessions. The reason being that the data is expensive to recreate but is also highly sensitive.
Using a temporary table would probably help performance compared to what happens today, but it's still not a great solution.
You can create a 100% ephemeral table, usable for the duration of a session (typically shorter than the duration of the database run time), called a TEMPORARY table. The entire purpose of a table in memory is to make it faster to read from. You will have to re-populate the table for each session, as the table will be forgotten (both structure and data) once the session completes.
Not exactly, no.
Oracle has the concept of a "global temporary table". With a global temporary table, you create the table once, as with any other table. The table definition will persist permanently, as with any other table.
The contents of the table, however, will not be permanent. Depending on how you define it, the contents will persist for either the life of the session (ON COMMIT PRESERVE ROWS) or the life of the transaction (ON COMMIT DELETE ROWS).
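For example (the table name and columns are placeholders):
CREATE GLOBAL TEMPORARY TABLE work_data (
  id      NUMBER,
  payload VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;  -- rows survive commits but disappear when the session ends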
See the documentation for all the details:
http://docs.oracle.com/cd/E11882_01/server.112/e25494/tables003.htm#ADMIN11633
Hope that helps.
You can use Oracle's trigger mechanism to invoke a stored procedure when the database starts up or shuts down.
That way you could have the startup trigger create the table, and the shutdown trigger drop it.
You'd probably also want the startup trigger to handle cases where the table exists and truncate it just in case the server stopped suddenly and the shutdown trigger wasn't called.
Oracle trigger documentation
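A rough sketch of that idea, with a schema and table name chosen purely for illustration:
CREATE OR REPLACE TRIGGER create_runtime_table
AFTER STARTUP ON DATABASE
BEGIN
  -- create the working table when the instance comes up;
  -- if it survived an abnormal shutdown, clear it instead
  EXECUTE IMMEDIATE 'CREATE TABLE app_user.runtime_data (id NUMBER, payload VARCHAR2(100))';
EXCEPTION
  WHEN OTHERS THEN
    EXECUTE IMMEDIATE 'TRUNCATE TABLE app_user.runtime_data';
END;
/
CREATE OR REPLACE TRIGGER drop_runtime_table
BEFORE SHUTDOWN ON DATABASE
BEGIN
  EXECUTE IMMEDIATE 'DROP TABLE app_user.runtime_data';
END;
/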
Using Oracle's Global Temporary Tables, you can create a table in memory and have it delete the data at the end of the transaction, or the end of the session.
If I understand correctly, you have some data that needs to be processed when the database is brought online and left available only as long as the database is online. The only use-case I can think of that would require this is if you're encrypting some data and you want to ensure that the unencrypted data is never written to disk.
If this is actually your use-case, I would recommend forgetting about trying to create your own solution for this and, instead, make use of Oracle's encrypted tablespaces or Transparent Data Encryption.