SQL Server, Remote Stored Procedure, and DTC Transactions

Our organization has a lot of its essential data in a mainframe Adabas database. We have ODBC access to this data and from C# have queried/updated it successfully using ODBC/Natural "stored procedures".
What we'd like to be able to do now is to query a mainframe table from within SQL Server 2005 stored procs, dump the results into a table variable, massage it, and join the result with native SQL data as a result set.
The execution of the Natural proc from SQL works fine when we simply select from it; however, when we insert the result into a table variable, SQL seems to start a distributed transaction that in turn seems to wreak havoc with our connections.
Given that we're not performing updates, is it possible to turn off this DTC-escalation behavior?
Any tips on getting DTC set up properly to talk to DataDirect's (formerly Neon Systems) Shadow ODBC driver?

Check out SET REMOTE_PROC_TRANSACTIONS OFF, which should disable it.
Or use sp_serveroption to configure the linked server generally, rather than per batch.
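A minimal sketch of the session-level option (it applies to the current connection only):
-- Stop remote stored procedure calls from being promoted to DTC transactions
SET REMOTE_PROC_TRANSACTIONS OFF;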
Because you are writing on the MS SQL side, you start a transaction.
By default, it escalates whether it needs to or not, even though the table variable does not participate in the transaction.
I've had similar issues where the MS SQL side behaves differently depending on whether MS SQL writes, whether the call is inside a stored proc, and other factors. The most reliable way I found was to use dynamic SQL calls against my Sybase linked server, as sketched below...
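For example, a rough sketch of a pass-through call (SYBASE_LINKED is a hypothetical linked-server name and needs the rpc out option enabled):
-- The query string is sent as-is and runs entirely on the linked server
EXEC ('SELECT col1, col2 FROM remote_table') AT SYBASE_LINKED;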

The following code sets the "Enable Promotion of Distributed Transactions" option for a linked server:
USE [master]
GO
EXEC master.dbo.sp_serveroption @server=N'REMOTE_SERVER', @optname=N'remote proc transaction promotion', @optvalue=N'false'
GO
This will allow you to insert the results of a linked server stored procedure call into a table variable.
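For example, a hypothetical sketch (the linked-server name, proc name, and column shapes are placeholders):
DECLARE @results TABLE (acct_no int, balance decimal(12,2));
-- With promotion disabled, this insert no longer escalates to a DTC transaction
INSERT INTO @results
EXEC MAINFRAME.SomeDb.dbo.NaturalProc;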

I'm not sure about DTC, but SSIS (Integration Services) may be useful for moving the data. However, if you can simply query the data, you may want to look at adding a linked server for direct access. You could then write a simple query to populate your table based on a select from the linked server's table.

That's true. As you might guess, the Natural procedures we want to call do lookups and calculations that we'd like to keep at that level if possible.

Related

Check database / server before executing query

I am frequently testing certain areas on a development server and so running a pre-defined SQL statement to truncate the tables in question before testing again. It would only be a slip of a key to switch to the live server.
I'm looking for an IF statement or similar to prevent that.
Either to check the server name, database name, or even that a certain record in a different table exists before running the query.
Any help appreciated
For such cases I use stored procedures. I'd call them TestTruncateTables, etc.
Then instead of calling TRUNCATE TABLE you would CALL TestTruncateTables.
Just make sure that the procedures are not created on the live server. If by any chance you happen to run CALL TestTruncateTables on the live server, you only get an error about a non-existent proc.
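If you do want the inline check the question asks about, here is a minimal T-SQL sketch (the question doesn't name the DBMS; the server, database, and table names are hypothetical):
-- Abort unless we are on the expected development server and database
IF @@SERVERNAME <> N'DEVSQL01' OR DB_NAME() <> N'DevDb'
BEGIN
    RAISERROR('Not the development environment - aborting.', 16, 1);
    RETURN;
END
TRUNCATE TABLE dbo.TestOrders;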

oracle sql how to do asynchronous queries

I have multiple select queries which I want to execute asynchronously.
How can I do this in Oracle SQL?
I basically want to test something, so I want to simulate workload. I don't really care about the results, and I know I could do this from multiple threads, but for this specific test I'd prefer to do it entirely in SQL; procedures are fine, though.
NOTE: there are no update queries, only selects.
I read about NOWAIT but am not sure how to use it in Oracle.
I tried something like -
select * from foo with(nowait) where col1="something";
This is the error I got -
with(nowait)
*
ERROR at line 3:
ORA-00933: SQL command not properly ended
The Oracle info on NOWAIT says:
Specify NOWAIT if you want the database to return control to you immediately
if the specified table, partition, or table subpartition is already locked by
another user. In this case, the database returns a message indicating that the
table, partition, or subpartition is already locked by another user.
This will not do what you want.
Asynchronous queries are an application thing, not a SQL thing. For example, I can open TOAD, open a dozen windows, run long queries in all of them, and still open another window and run another query. I could open a dozen instances of SQL*Plus and do the same thing. Nothing in the query lets me do this; it's in the application.
I think you could use DBMS_SCHEDULER to schedule SQL or procedures that execute SQL; however, this is probably not the best way to do it.
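A minimal sketch of the scheduler approach (run_test_query is a hypothetical procedure wrapping one of the selects):
BEGIN
  -- CREATE_JOB returns immediately; the job runs asynchronously in its own session
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'ASYNC_SELECT_1',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN run_test_query; END;',
    enabled    => TRUE,
    auto_drop  => TRUE);
END;
/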
There are tools for this. The best way may be to write a procedure you can call from the web; then you can use any performance-testing tool that can make a web call. It's worked for me before.
You may also consider:
http://sqlmag.com/database-performance-tuning/testing-heavy-load-simulating-multiple-concurrent-operations

multiple select statements in a single OdbcDataAdapter

I am trying to use an OdbcDataAdapter in C# to run a query that needs to select some data into a temporary table as a preliminary step. However, this initial select statement causes the query to terminate, so data gets put into the temp table but I can't run the second query to get it out. I have determined that the problem is the presence of two select statements in a single data adapter query. That is to say, the following code only runs the first select:
select 1
select whatever from wherever
When I run my query directly through SQL Server Management Studio it works fine. Has anyone encountered this sort of issue before? I have tried the exact same query previously on similar databases using the same C# code (only the connection string is different) and had no problems.
Before you ask, the temp table is helpful because otherwise I would be running a whole lot of inner select statements which would bog down the database.
Assuming you're executing a Command whose CommandType is CommandText, you need a ; to separate the statements:
select 1;
select whatever from wherever;
You might also want to consider using a stored procedure if possible. You should also use the SQL client objects instead of the ODBC client; that way you can take advantage of additional methods that aren't available otherwise, and you're supposed to get better performance as well.
If you need to support multiple databases, you can use the DbDataAdapter base class and a factory to create the concrete types. This gives you the benefits of the native drivers without being tied to a specific backend. ORMs that support multiple backends typically do this; the Enterprise Library Data Access Application Block, while not an ORM, does this as well.
Unfortunately I do not have write access to the DB, as my organization has been contracted just to extract information to a data warehouse. The program is generalized for use on multiple systems, which is why we went with ODBC. I suppose it would not be terrible to rewrite it using SQL Management Objects.
An ODBC connection expects a single select statement and a single result set back from SQL Server.
If you need this kind of functionality, a hack can do the job: put
SET NOCOUNT ON
at the top of your batch.
When SET NOCOUNT is ON, the count (indicating the number of rows affected by a Transact-SQL statement) is not returned.
When SET NOCOUNT is OFF, the count is returned. It applies to any SELECT, INSERT, UPDATE, or DELETE statement.
The setting of SET NOCOUNT is set at execute or run time and not at parse time.
SET NOCOUNT ON mainly improves stored procedure (SP) performance.
Syntax:
SET NOCOUNT { ON | OFF }
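Putting it together, a minimal sketch of the two-statement batch (table and column names hypothetical):
SET NOCOUNT ON;
SELECT id, name INTO #staging FROM dbo.SourceRows;  -- preliminary step
SELECT id, name FROM #staging;                      -- the result set the adapter reads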

Is it possible to create a temp table on a linked server?

I'm doing some fairly complex queries against a remote linked server, and it would be useful to be able to store some information in temp tables and then perform joins against it - all with the remote data. Creating the temp tables locally and joining against them over the wire is prohibitively slow.
Is it possible to force the temp table to be created on the remote server? Assume I don't have sufficient privileges to create my own real (permanent) tables.
This works from SQL 2005 SP3 linked to SQL 2005 SP3 in my environment. However, if you inspect tempdb you will find that the table is actually on the local instance, not the remote one. I have seen this offered as a resolution on other forums and wanted to steer you away from it:
create table SecondServer.#Doll
(
name varchar(128)
)
GO
insert SecondServer.#Doll
select name from sys.objects where type = 'u'
select * from SecondServer.#Doll
I am two years late to the party, but you can accomplish this using sp_executesql and feeding it a dynamic query to create the table remotely:
EXEC RemoteServer.RemoteDatabase.sys.sp_executesql N'CREATE TABLE ...'
This executes the table creation at the remote location.
It's not possible to directly create temporary tables on a linked remote server. In fact, you can't use any DDL against a linked server.
For more info on the guidelines and limitations of using linked servers see:
Guidelines for Using Distributed Queries (SQL 2008 Books Online)
One workaround, off the top of my head (and this would only work if you had permissions on the remote server):
on the remote server, have a stored procedure that creates a persistent table, with a name based on an IN parameter
the remote stored procedure runs a query and inserts the results into that table
you then query locally against that table, performing any joins to local tables required
call another stored procedure on the remote server to drop the remote table when you're done
Not ideal, but a possible workaround; a rough sketch follows.
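All names below are hypothetical, and QUOTENAME guards the generated table name; this is only a sketch of the idea:
-- On the remote server:
CREATE PROCEDURE dbo.BuildWorkTable @suffix sysname
AS
BEGIN
    DECLARE @sql nvarchar(max);
    SET @sql = N'SELECT name, object_id INTO dbo.' + QUOTENAME(N'Work_' + @suffix)
             + N' FROM sys.objects WHERE type = ''U'';';
    EXEC sys.sp_executesql @sql;
END
GO
-- From the local server, via a linked server named RemoteServer:
EXEC RemoteServer.RemoteDb.dbo.BuildWorkTable @suffix = N'session42';
SELECT * FROM RemoteServer.RemoteDb.dbo.Work_session42;
-- ...and a matching drop procedure would clean up Work_session42 afterwards.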
Yes you can, but it only lasts for the duration of the connection.
You need to use the EXECUTE ... AT syntax:
EXECUTE('SELECT * INTO ##example FROM sys.objects; WAITFOR DELAY ''00:01:00''') AT [SERVER2]
On SERVER2 the following will work (for one minute):
SELECT * FROM ##example
but it will not work on the local server.
Incidentally, if you open a transaction on the second server that uses ##example, the object remains until the transaction is closed. It also stops the creating statement on the first server from completing; i.e., run the following on server2 and the statement on server1 will continue indefinitely:
BEGIN TRAN
SELECT * FROM ##example WITH (TABLOCKX)
This is more academic than of practical use!
If memory is not much of an issue, you could also use table variables as an alternative to temporary tables. This worked for me when running a stored procedure that needed temporary data storage against a linked server.
More info: e.g. this comparison of table variables and temporary tables, including the drawbacks of using table variables.
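A minimal illustration (LinkedServer and RemoteDb are placeholders):
DECLARE @t TABLE (name sysname);
-- Table variables don't participate in the surrounding transaction the way temp tables do
INSERT INTO @t
SELECT name FROM LinkedServer.RemoteDb.sys.tables;
SELECT name FROM @t;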

User Granted Access to Stored Procedure but Can't Run Query

I am working on a product that runs a SQL Server which allows some applications to log in; their logins are granted permission to run a stored procedure, and nothing else. The stored procedure is owned by an admin; the stored procedure takes a query and executes it, then the results are returned to the application.
Unfortunately I can't figure out why the application can call the stored procedure to which it's granted access, but the stored procedure cannot execute the SQL statement which was passed into it.
The stored procedure executes the passed in query when I'm logged in as an admin, but when I log in as the limited user it throws an exception in the execute statement.
For example:
EXEC [Admin].[STORED_PROC] @SQL_STATEMENT = 'SELECT * FROM table_x'
the STORED_PROC looks something like this:
BEGIN TRY
EXEC (@SQL_STATEMENT)
END TRY
BEGIN CATCH
-- some logging when an exception is caught, and the exception is caught here!!!
END CATCH
There is nothing inside the try/catch except that EXEC... and the SQL_STATEMENT works when I'm logged in as the Admin, but not when I'm logged in as the User.
Can anybody help me figure out what permissions I need to set in order to allow the User to run queries through the stored proc only?
So there have been some comments that allowing raw SQL statements to be executed via a stored proc defeats the purpose of using a stored proc... but in reality what we're actually doing is passing an encrypted SQL statement into the stored proc; the stored proc decrypts the statement and THEN executes it.
So yes, in reality raw SQL statements are not secure and they defeat the purpose of stored procs, but I don't know how to encrypt SQL queries that are passed through ODBC and run against a pre-2005 SQL Server.
In any case, I tried to put up some minimal safeguards to at least have some basic security.
Since you are using dynamic SQL, SQL Server can't tell which tables you are using, so the user also needs SELECT rights granted on those tables.
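For example (the user name is hypothetical; table_x is from the question):
GRANT SELECT ON dbo.table_x TO LimitedUser;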
Allowing raw SQL to be passed into a stored procedure and then executing is the very essence of data insecurity.
SQL Server security is structured so that arbitrary bits of SQL execute in their own security context. If you don't have the permission to run the query ad hoc, you also don't have the permission to run it through a stored procedure. In this, SQL Server is saving you from yourself.
Since your system allows access to stored procs and nothing else (which is good for security purposes and should not be changed), you simply cannot, under any circumstances, use dynamic SQL, because the rights are not at the table level and your DBAs are unlikely to change that. This is not only to prevent SQL injection attacks but to prevent possible internal fraud, so any workplace that has considered this important will not be willing to compromise to make life easier for you. You simply need to redesign to never do anything dynamically; you have no other choice. If you write the procs to do what you want them to do in the first place, there is no need to send encrypted SQL.
When dynamic SQL is used through EXEC or sp_executesql within an SP, the EXECUTE permission on the SP does not allow you to run arbitrary code in the dynamic SQL. You either need to grant SELECT (yuck), or you might be able to impersonate another user using EXECUTE AS or SETUSER.
When normal SQL is used, EXECUTE permission works fine, overriding ungranted SELECT permissions. If you have a DENY, though, I believe that trumps it.
Having said that, I'm still not sure you should use EXECUTE AS when the source of the SQL is outside the SP (or outside the database). For code generation or dynamic SQL which is safe from outside influence, EXECUTE AS can be a useful tool; a sketch follows.
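A minimal sketch of that approach (requires SQL Server 2005 or later; the proc and user names are hypothetical, and the owner must hold the needed SELECT rights):
CREATE PROCEDURE [Admin].[RUN_QUERY]
    @SQL_STATEMENT nvarchar(max)
WITH EXECUTE AS OWNER  -- the dynamic SQL now runs under the owner's permissions
AS
BEGIN
    EXEC sys.sp_executesql @SQL_STATEMENT;
END
GO
GRANT EXECUTE ON [Admin].[RUN_QUERY] TO LimitedUser;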
This is most likely because of different schemas, i.e. the user who logs in is not part of the Admin schema (or at least I would hope not).
The security technique that permits the type of access you are looking to achieve, i.e. to permit access to objects that are owned by the same schema, is called Ownership Chaining.
This principle is not easily explained in a single post.
Here is a link from Microsoft that explains the concept:
http://msdn.microsoft.com/en-us/library/ms188676(SQL.90).aspx
Here is an outstanding article on security that provides examples and walkthroughs for ownership chaining, amongst other techniques:
http://www.sommarskog.se/grantperm.html
I hope this is clear and assists you, but please feel free to pose further questions.
Cheers, John