Query about Sequence containers in SSIS and using Transactions

I have an SSIS package which contains 3 different Execute SQL tasks that are basically data stages; each stage is a stored procedure wrapped in a serializable transaction. I have wrapped all 3 into a Sequence Container, but I'm not sure whether the transaction properties of the container would interact with the transactions already opened inside the stored procedures that the Execute SQL Tasks call. The default properties on the sequence container are:
TransactionOption: Supported
IsolationLevel: Serializable
Also, would these settings cause a deadlock?
Thanks
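For illustration, each staging procedure is roughly of this shape (a simplified sketch; procedure and table names are made up). The point is that the procedure already opens and commits its own serializable transaction, independently of the container's TransactionOption.

-- Hedged sketch of one staging procedure; all names are hypothetical.
CREATE PROCEDURE dbo.usp_Stage1
AS
BEGIN
    SET NOCOUNT ON;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

    BEGIN TRANSACTION;

    -- stage the data, e.g. move rows from a staging table into the target
    INSERT INTO dbo.Target (Id, Payload)
    SELECT Id, Payload FROM dbo.Staging;

    COMMIT TRANSACTION;
END;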

Related

Autonomous transaction analogue in ABAP

I'm trying to commit a DML update to a database table while the main program is still running, without committing the main transaction, since there may be errors later and a need to roll it back, while the internal (already saved) updates should stay.
Much like Oracle's autonomous transactions.
CALL FUNCTION ... STARTING NEW TASK ... and SUBMIT ... AND RETURN don't work, as they affect the main transaction.
Is there a way to start a nested database LUW and commit it without interrupting the main LUW?
I am not aware of a way to do this with Open SQL. But when you are using the ADBC framework, each instance of the class CL_SQL_CONNECTION operates within a separate database LUW.
I would generally not recommend using ADBC unless you have to, because:
You are now writing SQL statements as strings, which means you don't have compile-time syntax checking.
You can't put variables into the SQL code anymore (OK, you can, but you shouldn't, because you are probably creating SQL injection vulnerabilities that way). You need to pass all variables using statement->set_param.
You are now writing Native SQL, which means that you might inadvertently write SQL that can't be ported to other database backends.
You can create a separate function module for saving your changes and call it with STARTING NEW TASK, like below.
CALL FUNCTION 'ZFUNCTION'
  STARTING NEW TASK 'SAVECHANGES'
  EXPORTING
    param = value.

Parallel execution of stored procedures in job (SQL Server)

Brief:
I have five stored procedures, none of which has dependencies on the others. What they have in common is that each pulls data from one of five different servers; we are just collating the data and feeding it into our server.
Question:
We have scheduled all five of them in a single job with 5 different steps. I want to execute them in parallel instead of in sequential order.
Extra:
I can actually collate all five stored procs into one, if it is possible to run them in parallel at the stored procedure level (if it is not possible at the job level).
Thanks
The job steps are always executed in succession (hence 'steps').
To parallelize it, create five jobs with the same schedule.
There are several options, two of which are:
Create an SSIS package to run the 5 stored procedures in parallel.
Create 5 different SQL Server Agent jobs scheduled at the same time.
It's also possible to create a main job and add steps that call child jobs (each running its own stored procedure) asynchronously, as suggested at https://dba.stackexchange.com/questions/31104/calling-a-sql-server-job-within-another-job/31105#31105
However, for more complicated flows it's better to use SSIS packages, which are designed to handle different workflows.
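A minimal sketch of the asynchronous child-job approach mentioned above (the job names are made up); msdb.dbo.sp_start_job returns as soon as the child job has started, so all five children run concurrently:

-- Single step of the main job: kick off all five child jobs.
-- sp_start_job is asynchronous, so these calls return immediately.
EXEC msdb.dbo.sp_start_job @job_name = N'Load_Server1';
EXEC msdb.dbo.sp_start_job @job_name = N'Load_Server2';
EXEC msdb.dbo.sp_start_job @job_name = N'Load_Server3';
EXEC msdb.dbo.sp_start_job @job_name = N'Load_Server4';
EXEC msdb.dbo.sp_start_job @job_name = N'Load_Server5';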

What does "query" mean in SQL Server in "use query governor"

If I am executing a stored procedure containing a number of successive query statements, does "use query governor" apply to each statement executed in the stored procedure, or does it apply to a single executed statement - in this case the entire stored procedure?
I think you are mixing up some concepts.
The first concept is: what is a transaction? That depends on whether you are using explicit transactions or letting each statement run on its own.
By default, SQL Server runs in autocommit mode, so every individual statement is its own transaction (the IMPLICIT_TRANSACTIONS option is OFF).
If you want all the statements in a stored procedure to either commit or roll back together, you will have to use BEGIN TRANSACTION, COMMIT and/or ROLLBACK statements with error checking in the stored procedure, as sketched below.
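A minimal sketch of that pattern (procedure and table names are made up):

-- All-or-nothing stored procedure: any error rolls back every statement.
CREATE PROCEDURE dbo.usp_TransferFunds
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;

        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE Id = 1;
        UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE Id = 2;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        THROW;  -- re-raise the original error to the caller
    END CATCH
END;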
Now let's talk about the second concept. The Resource Governor is used to limit the amount of resources given to a particular group of users.
Basically, a login is mapped by a classifier function to a workload group and resource pool. This allows you to put all your users in a low-priority group, giving them only a small slice of the CPU and memory, while your production jobs can be in a high-priority group with the lion's share of the resources.
This prevents a typical user from writing a report that has a huge CROSS JOIN and causes a performance issue on the production database.
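A hedged sketch of the Resource Governor setup described above (pool, group, function, and login names are made up); the classifier function is created in master and runs at login time:

-- Low-priority pool and group for ad-hoc report users.
CREATE RESOURCE POOL LowPriorityPool
    WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 20);
CREATE WORKLOAD GROUP ReportUsers USING LowPriorityPool;
GO
-- Classifier: route the reporting login to the low-priority group,
-- everything else to the default group.
CREATE FUNCTION dbo.fn_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'report_user'
        RETURN N'ReportUsers';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;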
I hope this clears up the confusion. If not, please ask exactly what you are looking for.
It appears the answer to my question is that a stored procedure counts as a query in this context.
We spent some time examining an issue with a stored procedure comprising a number of EXEC'd DML statements, which timed out when "Use Query Governor" was selected, according to the configured number of seconds. Deselecting "Use Query Governor" resolved the problem.
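For reference, the setting behind that checkbox is the server-wide "query governor cost limit" option, which can also be inspected or changed with sp_configure (the threshold below is just an example value of estimated cost in "seconds"; 0 disables the limit):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'query governor cost limit', 300;
RECONFIGURE;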

Insert concurrently in database

I have a SQL Server database in which I use stored procedures to insert new data from the application. I wonder what I have to do to ensure that calling these stored procedures from multiple threads concurrently is correct/safe.
A possible concurrency issue is about correctness. I call multiple stored procedures from the data layer interface, and each stored procedure (and the other stored procedures that they call) performs its updates on multiple tables in ways that can break correctness if done concurrently in any "interleaving way" (e.g. from SP_1 I insert different elements into Table1 or Table2 depending on conditions related to the existence of some element y in Table2).
It seems to me that (given the way the tables are defined), in order to make the program correct, every stored procedure's actions have to happen in isolation from any other operation that could be called concurrently (operations called through the data access layer interface).
Can one big transaction (that includes everything done as part of a DAL interface call) help make the program correct from the point of view of the database in the presence of concurrent inserts? If that solution is viable, I cannot see how it would improve on, say, a mutual exclusion approach where only a single thread at a time is able to make the necessary inserts into the tables...
if done concurrently in any "interleaving way"
This is exactly what transactions are used to avoid.
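As a sketch of that idea under made-up names (mirroring the SP_1/Table1/Table2 example from the question), the existence check and the insert run inside one transaction, with UPDLOCK/HOLDLOCK hints so that concurrent callers serialize on the rows they check:

-- Hypothetical version of SP_1 that stays correct under concurrent callers.
CREATE PROCEDURE dbo.SP_1
    @y       int,
    @payload nvarchar(100)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- UPDLOCK + HOLDLOCK keep the checked key locked until COMMIT, so two
    -- concurrent callers cannot both conclude that y is missing from Table2.
    IF EXISTS (SELECT 1 FROM dbo.Table2 WITH (UPDLOCK, HOLDLOCK) WHERE KeyCol = @y)
        INSERT INTO dbo.Table1 (KeyCol, Payload) VALUES (@y, @payload);
    ELSE
        INSERT INTO dbo.Table2 (KeyCol, Payload) VALUES (@y, @payload);

    COMMIT TRANSACTION;
END;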

How do transactions within Oracle stored procedures work? Is there an implicit transaction?

In an Oracle stored procedure, how do I write a transaction? Do I need to do it explicitly or will Oracle automatically lock rows?
You might want to browse the concept guide, in particular the chapter about transactions:
A transaction is a logical unit of work that comprises one or more SQL statements run by a single user. [...] A transaction begins with the user's first executable SQL statement. A transaction ends when it is explicitly committed or rolled back by that user.
You don't have to explicitly start a transaction; it is done automatically. You will have to mark the end of the transaction with a COMMIT (or a ROLLBACK).
The locking mechanism is a fundamental part of the DB, read about it in the chapter Data Concurrency and Consistency.
Regarding stored procedures
A stored procedure is a set of statements, they are executed in the same transaction as the calling session (*). Usually, transaction control (commit and rollback) belongs to the calling application. The calling app has a wider vision of the process (which may involve several stored procedures) and is therefore in a better position to determine if the data is in a consistent state. While you can commit in a stored procedure, it is not the norm.
(*) except if the procedure is declared as an autonomous transaction, in which case the procedure is executed as an independent session (thanks be here now, now I see your point).
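A minimal sketch of such an autonomous transaction procedure (procedure and table names are made up); its COMMIT affects only its own work, not the caller's transaction:

CREATE OR REPLACE PROCEDURE log_message (p_text IN VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO app_log (logged_at, message) VALUES (SYSDATE, p_text);
  COMMIT;  -- commits only this procedure's insert, not the caller's changes
END;
/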
@AdamStevenson Concerning DDL, here is a quote from the Concepts Guide:
If the current transaction contains any DML statements, Oracle first commits the transaction, and then runs and commits the DDL statement as a new, single statement transaction.
So if you have started a transaction before the DDL statement (e.g. issued an INSERT, UPDATE, DELETE, or MERGE statement), that transaction will be implicitly committed - you should always keep that in mind when processing DML statements.
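For example (table and index names are made up):

INSERT INTO employees (id, name) VALUES (1, 'Smith');  -- transaction starts here
CREATE INDEX emp_name_idx ON employees (name);         -- DDL: the INSERT above is committed now
ROLLBACK;                                              -- too late, the inserted row stays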
I agree with Vincent Malgrat; you might find some very useful information about transaction processing in the Concepts Guide.