I'm trying to execute a stored procedure and simply insert its results into a temporary table, and I'm getting the following message:
The operation could not be performed because OLE DB provider "SQLNCLI"
for linked server "MyServerName" was unable to begin a distributed
transaction. OLE DB provider "SQLNCLI" for linked server
"MyServerName" returned message "No transaction is active.".
My query looks like this:
INSERT INTO #TABLE
EXEC MyServerName.MyDatabase.dbo.MyStoredProcedure Param1, Param2, Param3
The column count and names match exactly; the problem is not the result set.
MSDTC is enabled and started on both computers, and remote procedure calls are allowed too.
The machines are not in the same domain, but I can execute remote queries from my machine and get the results. I can even execute the stored procedure and see its results; I just can't insert them into another table.
EDIT
Oh, I forgot to mention: the stored procedure doesn't fire any triggers. It only inserts records into temporary tables which it creates itself for data processing.
Well, after following lots of tutorials and researching a lot about it, I had changed all the configuration I thought was necessary for it to work, but it still didn't.
Today we had to force a power reboot on our development server because of a faulty UPS, and when we booted the server back up, guess what? It works!
So just for the record: I changed some specific MSDTC configuration, added the server as a linked server and allowed RPC IN and OUT, and changed the RPC security configuration to 'No Authentication Required' (or something like that).
I remember reading somewhere that after you changed this configuration, a reboot was required, even though Windows says that it has already restarted the service.
I had rebooted my server like... twice since I changed it, and it still didn't work. But today, after a complete power off and on, it works!
As for the syntax, I kept the same.
You also have to check the DNS name resolution in the IP network configuration.
For example, say you have a server called server-a.mydomain.com and another called server-b.otherdomain.com. Log in to server-a and do a "ping server-b" (without the domain).
If it responds "Ping request could not find host server-b. Please check the name and try again." that is the problem.
Go to Control Panel > Network Connections > right-click the network card > Properties > Internet Protocol > Properties > Advanced > DNS > "Append these DNS suffixes (in order)".
Here add the local domain (mydomain.com) and then the remote domain (otherdomain.com).
Click OK until you exit.
Now if you do "ping server-b" it should respond with something like:
Pinging server-b.otherdomain.com [192.168.1.2] with 32 bytes of data:
Reply from 192.168.1.2: bytes=32 time=12ms TTL=64
Reply from 192.168.1.2: bytes=32 time=9ms TTL=64
Now try to execute the distributed transaction again.
I had the luxury of safely restarting the SQL Server services on both sides of the Linked Server connection. I did not have to reboot the machines.
Have you tried using openquery?
INSERT INTO #TABLE
SELECT * FROM OPENQUERY(MyServerName, 'EXEC MyDatabase.dbo.MyStoredProcedure Param1, Param2, Param3')
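Note that OPENQUERY only accepts a literal pass-through string, so if the parameters live in T-SQL variables you have to splice them in with dynamic SQL. An untested sketch (names mirror the question; the parameter types and the #TABLE definition are assumptions):

CREATE TABLE #TABLE (Col1 int)  -- placeholder; match your procedure's result set

DECLARE @p1 int, @p2 int, @p3 int
SELECT @p1 = 1, @p2 = 2, @p3 = 3

DECLARE @sql nvarchar(max)
SET @sql = N'INSERT INTO #TABLE '
         + N'SELECT * FROM OPENQUERY(MyServerName, '
         + N'''EXEC MyDatabase.dbo.MyStoredProcedure '
         + CAST(@p1 AS nvarchar(10)) + N', '
         + CAST(@p2 AS nvarchar(10)) + N', '
         + CAST(@p3 AS nvarchar(10)) + N''')'

EXEC sys.sp_executesql @sql  -- #TABLE created above is visible inside the dynamic batch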
Related
I've been running into an issue in R Studio with a SQL connection.
We've had an on-prem SQL Server that's been upgraded over the years, and the colleague who set it up is no longer with the organization.
We also have an Azure server loaded with SQL Server as well, which was set up much more recently, before they departed.
We have a GUI program we're currently developing, and one of the early steps is a SQL login for the user. The user variable (db_user) is declared and changes with their login, and the password is passed correctly through system variables defined in .Renviron, as documented on RStudio's site.
Our initial connection string looks like this, and this is the line of code that starts the connection and where I believe the issue may lie first:
db_conn_onprem <- DBI::dbConnect(odbc::odbc(),
                                 Driver   = "SQL Server",
                                 Server   = Sys.getenv("server"),
                                 Database = Sys.getenv("database"),
                                 UID      = Sys.getenv("db_user"),
                                 PWD      = Sys.getenv("PWD"))
When the Azure connection succeeds, it connects as dbo#Azure\Azure vs. the on-prem's guest#Server\Server.
(I can't post in-line screenshots yet)
On-Prem Connection Screenshot: https://i.ibb.co/PmbGt5y/RStudio-SQL.png
Azure Connection Screenshot: https://i.ibb.co/WFY3FqZ/azure1.png
I feel this is something dbo-related since that's where the connection drops.
(variable names anonymized)
Now for the issue:
Whenever we attempt to run a series of queries, our on-prem server errors out with this:
Error: nanodbc/nanodbc.cpp:1655: 42000: [Microsoft][SQL Server][SQL Server]Cannot execute as the server principal because the principal "db_user" does not exist, this type of principal cannot be impersonated, or you do not have permission.
<SQL> 'EXECUTE AS LOGIN = 'db_user' SELECT name FROM master.sys.sysdatabases WHERE dbid > 4 AND HAS_DBACCESS(name) = 1 ORDER BY name ASC'
However, run the exact same procedure on the SQL Server in Azure, with essentially no special configuration, and it succeeds.
Here's the SQL Code we run:
EXECUTE AS LOGIN = 'db_user'
SELECT name
FROM master.sys.sysdatabases
WHERE dbid > 4
  AND HAS_DBACCESS(name) = 1
ORDER BY name ASC
I feel like I've exhausted my resources on this. First I thought it was the initial R code or possibly the SQL drivers, but I don't believe that's the issue, since the driver pulls a list of names into RStudio's Connections context menu yet bounces back the error when attempting to complete the query.
Whenever I search for references to this error, I see
Cannot execute as the server principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission.
listed as the most commonly related error to the one I'm experiencing. I've tried a number of the suggested fixes (from blank DB ownerships to unrelated solutions), but I've mostly hit a wall here.
Any assistance would be greatly appreciated.
Again, I feel this is something dbo-related since that's where the connection drops, but I have no clue where to go from here on this issue.
Yep.
This
EXECUTE AS LOGIN = 'db_user'
requires IMPERSONATE permission on the login, which is what the error message is clearly telling you. It's unclear why you want to impersonate that login instead of simply connecting as the login to begin with.
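If the impersonation really is needed, a sysadmin can grant the permission; a minimal sketch ([connecting_login] is a placeholder for whatever login the R connection uses):

USE master
-- Allow the connecting login to impersonate db_user; run as a sysadmin.
GRANT IMPERSONATE ON LOGIN::[db_user] TO [connecting_login]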
I would like to start monitoring our system closely, to see who ran a query and at what time.
Currently, in the tables from the HIST DB, we are able to see the query text, username, date, time, and client IP. But what we are more interested in is the client host machine name.
When we run a query requesting the client hostname, the output comes back as unknown.
Below is the query that we are running to get our required information:
SELECT *
FROM NZ_QUERY_HISTORY
Is there anything else that we can look at or implement to be able to see the client machine name?
FYI: when we run show session all; we do in fact see the client host machine.
We built our own view on top of the query history database when we started using Netezza a few years ago, and it does not include the DNS name (hostname) of the client. I guess we left it out because it is empty most of the time. Another guess is that our DNS setup doesn't allow reverse DNS lookups for all IP addresses from the Netezza host.
Instead we rely on:
the client IP address and Netezza username
the username & IP address on the client machine
In total that is quite powerful.
Furthermore we add a bit of 'pre-SQL' to the connection configuration of the client tools we use (SAS, PowerCenter, Business Objects, etc.) and add as much info as we can to the four 'client_application_*' variables. See here for syntax: https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.dbu.doc/r_dbuser_set.html
For PowerCenter we add the workflow, session, and other names.
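As a loose illustration of that pre-SQL (the variable name and value here are assumptions on my part; verify the exact client_application_* names against the IBM SET reference linked above):

-- Illustrative, untested pre-SQL attached to a client tool's connection;
-- the variable name is an assumption, check the linked SET reference.
SET CLIENT_APPLICATION_NAME = 'pwc_wf_daily_sales'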
I'm writing an insert into a linked-server table that includes the IP of the local server. When I connect to the server in SSMS and exec the SP, it inserts the correct information into the linked table.
When the SQL Agent runs the job itself, it returns NULL and inserts NULL into the remote table instead of the local IP. I'm sure this is because there is no "local" IP being used, as the Agent connects through its own ports, etc.
Specifically, I'm talking about CONNECTIONPROPERTY('local_net_address'):
SET @vcLocalIP = CONVERT(varchar(48), CONNECTIONPROPERTY('local_net_address'))
Any Help or Ideas on this would be greatly appreciated. Just trying to craft this SP so it can be put on different servers and all return the relevant information with as little "manual" intervention as possible.
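For what it's worth, a rough, untested sketch of a fallback I've been considering (it assumes at least one TCP session exists on the instance):

DECLARE @vcLocalIP varchar(48)

-- NULL for non-TCP sessions, e.g. the Agent's shared-memory connection.
SET @vcLocalIP = CONVERT(varchar(48), CONNECTIONPROPERTY('local_net_address'))

IF @vcLocalIP IS NULL
    -- Borrow the local endpoint from any TCP session on the instance.
    SELECT TOP (1) @vcLocalIP = local_net_address
    FROM sys.dm_exec_connections
    WHERE net_transport = 'TCP'
      AND local_net_address IS NOT NULL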
We have a system with an Oracle back end to which we have access (though possibly not administrative access) and a front end to which we do not have the source code. The database is quite large and not easily understood; we have no documentation. I'm also not particularly knowledgeable about Oracle in general.
One aspect of the front end queries the database for a particular set of data and displays it. We have a need to determine what query is being made so that we can replicate and automate it without the front end (e.g. by generating a csv file periodically).
What methods would you use to determine the SQL required to retrieve this set of data?
Currently I'm leaning towards the use of an EeePC, Wireshark and a hub (installing Wireshark on the client machines may not be possible), but I'm curious to hear any other ideas and whether anyone can think of any pitfalls with this particular approach.
Clearly there are many methods. The one that I find easiest is:
(1) Connect to the database as SYS or SYSTEM
(2) Query V$SESSION to identify the database session you are interested in.
Record the SID and SERIAL# values (a sample V$SESSION query is sketched after these steps).
(3) Execute the following commands to activate tracing for the session:
exec sys.dbms_system.set_bool_param_in_session( <sid>, <serial#>, 'timed_statistics', true )
exec sys.dbms_system.set_int_param_in_session( <sid>, <serial#>, 'max_dump_file_size', 2000000000 )
exec sys.dbms_system.set_ev( <sid>, <serial#>, 10046, 5, '' )
(4) Perform some actions in the client app
(5) Either terminate the database session (e.g. by closing the client) or deactivate tracing ( exec sys.dbms_system.set_ev( sid, serial#, 10046, 0, '' ) )
(6) Locate the udump folder on the database server. There will be a trace file for the database session showing the statements executed and the bind values used in each execution.
This method does not require any access to the client machine, which could be a benefit. It does require access to the database server, which may be problematic if you're not the DBA and they don't let you onto the machine. Also, identifying the proper session to trace can be difficult if you have many clients or if the client application opens more than one session.
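For step (2), a query along these lines can help pin down the session ('APP_USER' is a placeholder; filter on whatever you know about the client):

-- Identify the session to trace; filter values are placeholders.
select sid, serial#, username, osuser, machine, program, logon_time
from v$session
where username = 'APP_USER'
order by logon_time;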
Start with querying Oracle system views like V$SQL, v$sqlarea and v$sqltext.
Which version of Oracle? If it is 10+ and you have administrative access (sysdba), then you can relatively easily find executed queries through Oracle Enterprise Manager.
For older versions, you'll need access to the views that tuinstoel mentioned in his answer.
You can get the same data through TOAD for Oracle, which is quite a capable piece of software, but expensive.
Wireshark is indeed a good idea; it has Oracle support and nicely displays the whole conversation.
A packet sniffer like Wireshark is especially interesting if you don't have admin access to the database server but do have access to the network (for instance because there is port mirroring on the Ethernet switch).
I have used these instructions successfully several times:
http://www.orafaq.com/wiki/SQL_Trace#Tracing_a_SQL_session
"though possibly not administrative access". Someone should have administrative access, probably whoever is responsible for backups. At the very least, I expect you'd have a user with root/Administrator access to the machine on which the oracle database is running. Administrator should be able to login with a
"SQLPLUS / AS SYSDBA" syntax which will give full access (which can be quite dangerous). root could 'su' to the oracle user and do the same.
If you really can't get admin access then as an alternative to wireshark, if your front-end connects to the database through an Oracle client, look for the file sqlnet.ora. You can set trace_level_client, trace_file_client and trace_directory_client and get it to log the Oracle network traffic between the client and database server.
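A sketch of what those client-side sqlnet.ora entries can look like (the trace directory and file name are placeholders):

# Client-side SQL*Net tracing; directory and file name are placeholders.
TRACE_LEVEL_CLIENT = SUPPORT
TRACE_FILE_CLIENT = front_end
TRACE_DIRECTORY_CLIENT = /tmp/oratrace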
However, it is possible that the client calls a stored procedure and retrieves the data as output parameters or a ref cursor, which means you may not see the query being executed through that mechanism. If so, you will need admin access to the DB server, and should trace as per Dave Costa's answer.
A quick and dirty way to do this, if you can catch the SQL statement(s) in the act, is to run this in SQL*Plus:-
set verify off lines 140 head on pagesize 300
column sql_text format a65
column username format a12
column osuser format a15
break on username on sid on osuser
select s.username, s.sid, s.osuser, t.sql_text
from v$sqltext_with_newlines t, v$session s
where t.address = s.sql_address
and t.hash_value = s.sql_hash_value
order by s.sid, t.piece
/
You need access to those v$ views for this to work. Generally that means connecting as SYSTEM.
We have a database running on SQL Server 2005. One of the stored procedures looks up a user's email address in Active Directory using a linked server. The call to the linked server occurs in a database function.
I'm able to call it successfully from my ASP.NET application the first time, but periodically after that it fails with the following error:
{"The requested operation could not be performed because OLE DB provider \"ADsDSOObject\" for linked server \"ADSI\" does not support the required transaction interface."}
It appears that the amount of time between calls to the function affects whether the linked server query works correctly. I am not using any transactions. When I try calling the function in a quick makeshift SQL script, it runs fine every time (even when tested in quick succession).
Is there some sort of transaction being left open that naturally dies if I don't try calling the procedure again? I'm at a loss here.
Here is the simple call in the stored procedure:
DECLARE @email varchar(50)

SELECT @email = LEFT(mail, 50)
FROM OPENQUERY (
    ADSI,
    'SELECT mail, sAMAccountName FROM ''LDAP://DC=Katz,DC=COM'' WHERE objectCategory = ''Person'' AND objectClass = ''User'''
)
WHERE sAMAccountName = CAST(@LoginName AS varchar(35))

RETURN @email
I've worked with SQL Server linked servers often, though rarely with LDAP queries... but I got curious and read the Microsoft support page linked in Ric Tokyo's previous post. Towards the bottom it reads:
It is typical for a directory server to enforce a server limitation on the number of objects that will be returned for a given query. This is to prevent denial-of-service attacks and network overloading. To properly query the directory server, large queries should be broken up into many smaller ones. One way to do this is through a process called paging. While paging is available through ADSI's OLEDB provider, there is currently no way available to perform it from a SQL distributed query. This means that the total number of objects that can be returned for a query is the server limit. In the Windows 2000 Active Directory, the default server limit is 1,000 objects.
I'm thinking that the reason it fails for you (or not) depending on whether you call it from the app or from a "quick makeshift SQL script" (as you put it) might be related to the security context under which the operation is executing. Depending on how the linked server connection was set up, the operation could be executed under a variety of possible credentials depending on how you initiate the query.
I don't know, but that's my best guess. I'd look at the linked server configuration, in particular the settings for which credentials are used as the security context under which operations executed across the linked server run.
Rather than query Active Directory through a linked server, you might be better off caching your AD data in a SQL database and then querying that instead. You could use Integration Services by creating an OLE DB connection using "OLE DB Provider for Microsoft Directory Services" and having a DataReader source with a query like:
SELECT physicalDeliveryOfficeName, department, company, title, displayName, SN,
givenName, sAMAccountName, manager, mail, telephoneNumber, mobile
FROM 'LDAP://DC=SOMECO,DC=COM'
WHERE objectClass='User' and objectCategory = 'Person'
order by mail
Using this method you will still run into the 1,000-row limit for results from an AD query (note it is NOT advisable to try to increase this limit in AD; it is there to prevent the domain controller from becoming overloaded). Sometimes it's possible to use a combination of queries to return the full data set, e.g. names A-L and M-Z.
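A rough illustration of that splitting (untested; it assumes the ADSI SQL dialect accepts range comparisons on sAMAccountName, and reuses the placeholder domain from above):

-- First half of the directory; run a second query with
-- sAMAccountName >= 'M' for the rest. Untested sketch.
SELECT sAMAccountName, mail, displayName
FROM 'LDAP://DC=SOMECO,DC=COM'
WHERE objectClass = 'User' AND objectCategory = 'Person'
AND sAMAccountName < 'M'
ORDER BY mail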
Alternatively you could use the CSVDE command line utility in Windows Server to export your directory information to a CSV file and then import it into a SQL database (see http://computerperformance.co.uk/Logon/Logon_CSVDE_Export.htm for more info on exporting AD data with CSVDE).
Please read the support page from Microsoft.
I suspect that it might be the cached query plan, given your statement that "when I try calling the function in a quick makeshift SQL script, it runs fine every time (even when tested in quick succession)."
Could you try executing your stored procedure like so:
EXEC usp_MyProcedure WITH RECOMPILE
This question appears at the top of the first Google results page when searching for the error string, but has no valid answer.
This error happens intermittently when the isolation level is not specified in the .NET code nor in the stored procedure.
This error also happens in SQL Server 2008.
The fix is to force SET TRANSACTION ISOLATION LEVEL READ COMMITTED (or READ UNCOMMITTED), because any higher isolation level is not supported by Active Directory, and SQL Server is trying to use SERIALIZABLE.
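In practice that means pinning the isolation level just before the ADSI call; a minimal sketch reusing the question's query ('jdoe' is a placeholder login):

-- Anything stricter than READ COMMITTED fails against the ADSI provider.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

SELECT LEFT(mail, 50) AS mail
FROM OPENQUERY(ADSI,
    'SELECT mail, sAMAccountName FROM ''LDAP://DC=Katz,DC=COM''
     WHERE objectCategory = ''Person'' AND objectClass = ''User''')
WHERE sAMAccountName = 'jdoe'   -- placeholder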
Now, since this error is intermittent: why is ADO.NET or SQL Server switching its default isolation to SERIALIZABLE sometimes and sometimes not? What triggers this switching?