I have a scalar-valued function, func-A, and an inline table-valued function, func-B. func-A calls func-B, and func-B in turn calls func-A recursively. But the recursion will never be deep; it is always exactly 2 levels. For example, func-A calls func-B, func-B calls func-A again, and that is the end.
This works fine on my local SQL Server 2008 R2 but fails on the production server. The error message is "Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 32).". But strangely, on the production server this problem happens on certain database instances only; some instances work fine.
How do I overcome this problem? (I think I may need to turn on some option, for example something like RECURSIVE_TRIGGERS.) Thanks in advance.
Here are some simple steps to diagnose the recursive calls:
Use SQL Server Profiler to capture a set of inputs that causes the issue to manifest.
Connect via Management Studio, open a new query window, and execute the command to verify it fails.
Create a SQL Server Profiler session, with the following options:
Column Filter - SPID Is Equal to the SPID for your SSMS window
Include Event: SP:StmtStarting
Include Event: SP:StmtCompleted
This will show you the individual statements in your UDFs as they execute, allowing you to home in on the path. Another option is simply to edit the procedure to PRINT its parameters at the top, allowing you to home in on the recursion depth issue at the data level.
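If the depth is indeed running away, one defensive pattern is to thread an explicit depth parameter through the calls and stop well before the hard limit of 32. A minimal sketch, using a hypothetical self-recursive scalar function dbo.GuardedCalc (the same idea applies across a func-A/func-B pair):

CREATE FUNCTION dbo.GuardedCalc (@Value int, @Depth int)
RETURNS int
AS
BEGIN
    -- Expected depth is 2; anything beyond that indicates bad data or logic.
    IF @Depth >= 2
        RETURN @Value;

    -- In the real pair, func-A would call func-B here, passing @Depth + 1.
    RETURN dbo.GuardedCalc(@Value + 1, @Depth + 1);
END

Calling SELECT dbo.GuardedCalc(0, 0) then bottoms out at depth 2 instead of walking into the nesting limit.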
I executed a script on Unix that called a function in an Oracle DB, but I didn't give the Unix script a logfile. Usually, when I run a script that calls a DB function, I give the script a logfile and monitor it to know whether the function is still running or is done. The logfile also records whether the function executed successfully or not.
I have the following concerns, based on the above situation:
Can I monitor whether the function is still running using Oracle SQL Developer?
Can I find out whether the function executed successfully in the Oracle DB? If Oracle saves a log of function executions that I could access, that would be great.
Thank you.
Yes, you can monitor whether the function is still running by checking the session's status in v$session. See this answer for information on how: How to list active / open connections in Oracle?
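For example, a minimal sketch (assuming you have SELECT privileges on the v$ views; the schema name is a placeholder):

SELECT s.sid, s.serial#, s.username, s.status, s.sql_id, q.sql_text
FROM   v$session s
LEFT   JOIN v$sql q ON q.sql_id = s.sql_id
WHERE  s.username = 'YOUR_SCHEMA'  -- hypothetical; use the account running the function
AND    s.status = 'ACTIVE';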
As for what the execution result was... probably not.
The PL/SQL you executed won't directly appear in dba_audit_trail, but any queries it ran as part of execution might. The audit trail will show if the queries were successful or not, but it won't show the query results or the final result of the function execution.
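If auditing is enabled and you can read the audit trail, a sketch of that kind of check (schema name again a placeholder) might look like:

SELECT username, timestamp, action_name, returncode  -- returncode 0 means success
FROM   dba_audit_trail
WHERE  username = 'YOUR_SCHEMA'
ORDER  BY timestamp DESC;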
In an SSIS ETL, I have a query that I need to run against a server/DB that does not allow us to create stored procedures.
I would normally use the stored procedure call held in a variable as the source for my OLE DB Source.
However, since we can't put the stored procedure on this server, I was going to store the stored procedure's code in a variable instead, by executing a SQL statement that retrieves the text from our home database, and then use the text stored in this variable as the SQL command for the source.
This way, I can still remotely change the SSIS OLE DB Source object WHERE clause (as long as I don't change the SELECT portion).
I can't imagine that this is very common, so I wanted to get some opinions - is there a better way to do this? I don't want to put all of the SP's code directly into the OLE DB Source editor, because we can't afford to redeploy every time the WHERE clause changes.
You've got the part down that many folks don't do and that's using Variables to drive your package execution. You are further correct in that you can't exactly swap out your columns. To be pedantic, which I am, you can completely change out the query as long as the same metadata is presented.
So, then this question becomes how best to accomplish allowing a package to have a query's filter driven by an external force. Factoring in maintainability, ease of debugging, etc.
My gut reaction is 3 Variables
QueryBase: String. Hardcoded. SELECT * FROM MyTable (except, of course, I'd enumerate my columns)
Query: String. EvaluateAsExpression = True. Expression: @[User::QueryBase] + @[User::QueryFilter]
QueryFilter: String
So, we use Query in the OLE DB Source, much as you have your longer variable name in there now. The only downside to this approach, pre-SSIS 2012, is the limitation on string length in an expression - it was 4,000 characters, I believe. If you assign a 5,000-character value to a variable, that's fine; it's just that in the expression language, the result of concatenating two strings can't exceed 4,000 characters.
I haven't specified what QueryFilter will contain or the magic that gets it there; that I would base on the bigger picture of your environment, usage, etc. The general concept is that it will eventually turn into WHERE Condition1 IS NOT NULL, but in a full-reload situation it might become an empty string.
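To illustrate with hypothetical values (column and table names invented), the concatenation evaluates like so:

QueryBase   : SELECT col1, col2 FROM MyTable
QueryFilter :  WHERE Condition1 IS NOT NULL
Query       : SELECT col1, col2 FROM MyTable WHERE Condition1 IS NOT NULL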
So, what are our options for changing the value of QueryFilter?
/SET is an optional argument passed to the invoking process (dtexec.exe) that makes SSIS packages go. If you have a very limited set of choices and aren't interested in building out additional infrastructure to support the parameters, just hard-code some examples. Approximately: dtexec /file p1.dtsx /set \Package.Variables[User::QueryFilter].Properties[Value];" WHERE Condition1 IS NOT NULL". Save them into .bat files, different SQL Agent jobs, whatever. Click and run and you're done.
Configuration approach. SSIS offers the native ability to read configurations from a SQL Server table, XML, the registry, a parent package, or an environment variable, from 2005 through the current edition. The only downside to this approach is that it does not support concurrent execution with different parameters the way the first one does.
Environment approach. 2012 and 2014, with their new Project Deployment Model, give us the concept of Environments within the SSISDB catalog, which is similar to configuration with a SQL Server table, but it is done after development is complete and the packages are deployed. It's rather nice, as it builds out a history of the values used, so if someone asks why the data is all wrong, you can write a query to pull back the parameters used - and oh look, someone used the initial-load filter instead of the daily one. Whoopsidaisy. Same concern as above over concurrent execution and changing values.
Table-driven approach. Instead of using the Configuration option with a SQL Server table backing it, you roll your own table and then add an Execute SQL Task to your package to retrieve the filter, Single row result set, into our QueryFilter variable (see the sketch after this list).
Script Task. Use whatever floats your boat to determine what the filter should be.
Message Queue. There is a built-in Message Queue Task that might be of use here if you're already using queues. Otherwise, it's too much effort to manage.
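For the table-driven approach, a minimal sketch (table, column, and package names are all hypothetical) of the backing table and the lookup the Execute SQL Task would map into QueryFilter:

CREATE TABLE dbo.PackageFilter
(
    PackageName nvarchar(128) NOT NULL PRIMARY KEY,
    QueryFilter nvarchar(4000) NOT NULL
);

INSERT INTO dbo.PackageFilter (PackageName, QueryFilter)
VALUES (N'MyPackage', N' WHERE Condition1 IS NOT NULL');

-- Execute SQL Task: ResultSet = Single row, result column mapped to User::QueryFilter
SELECT QueryFilter
FROM   dbo.PackageFilter
WHERE  PackageName = N'MyPackage';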
When trying to automate reading out constraint information using sp_helpconstraint, I got the bright idea of pulling out the source code of the built-in SP directly and running it myself (since it returns multiple result sets, those can't be stored in a temp table). So I ran exec sp_helptext 'sp_helpconstraint' (on SQL Azure) to generate the source code, and copied it into a new query window.
However, when I run the SP's source (on SQL Azure), I get lots of error messages - for example, that the object syscomments doesn't exist - even though I am using the exact same source that runs perfectly when calling sp_helpconstraint directly. Just to make sure it wasn't an anomaly with the procedure or a mistake in my copy/paste execution, I tested the exact same procedure on SQL Server 2008, and there, if I directly copy the SP source into a new query window, it runs perfectly (obviously after removing the RETURN statements and manually setting the input parameters).
What gives?? Do built-in SPs run in a special context on SQL Azure where more commands are available than normal? Is sp_helptext not returning the actual source that is being run on SQL Azure?
If you want me to try anything out, give a suggestion and I can try it on our SQL Azure Development instance. Thanks!
I'm trying to test a proposition that one of our vendors presented to us for accessing their product database, and it involves queries and transactions that span multiple servers. I've never done this directly on the database before and, to be frank, I'm clueless, so I'm trying to mock up a proof that this works, at least conceptually.
I've got two SQL Server 2005 servers. Let's, for argument's sake, call them Server1 and Server2 [hold your applause], each containing a dummy database. The dummy database on Server1 is called Source and the one on Server2 is called Destination, just to keep things simple. The databases each hold a single table, called Input and Output respectively, so the structure is quasi-explained like so:
Server1.Source.dbo.Input
Server2.Destination.dbo.Output
I have a stored procedure on Server2 called WriteDataToOutput that receives a single varchar argument and writes its content to the Output table.
Now the trickiness starts:
I want to create a stored procedure on Server1.Source that calls the WriteDataToOutput stored procedure defined on Server2, which seems like the simple step.
I want this call to be part of a transaction, so that if the procedure that invokes it fails, the entire transaction is rolled back.
And here endeth my knowledge of what to do. Can anyone point me in the right direction? I tried this with two different databases on the same server and it worked just fine, leading me to assume it will work across different servers. The question is: how do I go about doing such a thing? Where do I start?
As others have noted, I agree that a linked server is the best way to go.
Here are a couple of pointers that snagged me the first time I dealt with linked servers:
If the linked server is an instance, make sure you bracket the name. For example [SERVERNAME\INSTANCENAME].
Use an alias for the table or view from the linked server, or you will get a "multi-part identifier cannot be bound" error. Four-part naming is the limit: SERVER.DATABASE.dbo.TABLE.FIELD has five parts and will give an error, whereas SELECT linked.FieldName FROM SERVER.DATABASE.dbo.TABLE AS linked will work fine.
You will want to link the servers:
http://msdn.microsoft.com/en-us/library/aa213778.aspx
For step 2 you need the Distributed Transaction Coordinator (MSDTC) service running, and you also need to use SET XACT_ABORT ON to make sure it all rolls back.
You also need to enable RPC, which is turned off by default in 2005 and up.
There is a whole bunch of stuff that can bite you in the neck here.
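Putting those pieces together, a minimal sketch of the calling procedure on Server1.Source (the linked server name [Server2] and the Input column are assumptions):

CREATE PROCEDURE dbo.WriteInputToBoth
    @Data varchar(50)
AS
BEGIN
    SET XACT_ABORT ON;  -- any failure aborts and rolls back the whole distributed transaction

    BEGIN DISTRIBUTED TRANSACTION;

    -- Local write on Server1.Source (assumes Input has a single varchar column)
    INSERT INTO dbo.Input (Data) VALUES (@Data);

    -- Remote call via the linked server; requires MSDTC running and RPC Out enabled
    EXEC [Server2].Destination.dbo.WriteDataToOutput @Data;

    COMMIT TRANSACTION;
END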
MSDN says you can have transactions across linked servers if you use the command BEGIN DISTRIBUTED TRANSACTION.
I remember, though, that I had problems calling a stored procedure on a linked server, but I worked around it rather than solving it.
Using linked servers, you can run stored procedures on either server within a single transaction using the DTC (Distributed Transaction Coordinator). You will definitely want to do some performance analysis: I have found that some SPs using links can drastically slow down database performance, especially if you try to join result sets from the two servers.
Set up a linked server, then you should be able to execute selects/inserts/updates across the servers. Something like:
INSERT INTO Server2.Destination.dbo.Output
SELECT * FROM Input
WHERE <Criteria>
This assumes you are running the query from Server1.Source, so you wouldn't need to fully qualify.
We have a database running on SQL Server 2005. One of the stored procedures looks up a user's email address in Active Directory using a linked server. The call to the linked server occurs in a database function.
I'm able to call it successfully from my ASP.NET application the first time, but periodically after that it fails with the following error:
{"The requested operation could not be performed because OLE DB provider \"ADsDSOObject\" for linked server \"ADSI\" does not support the required transaction interface."}
It appears that the amount of time between calls to the function affects whether the linked server query will work correctly. I am not using any transactions. When I try calling the function in a quick makeshift SQL script, it runs fine every time (even when tested in quick succession).
Is there some sort of transaction being left open that naturally dies if I don't try calling the procedure again? I'm at a loss here.
Here is the simple call in the stored procedure:
DECLARE @email varchar(50)

SELECT @email = LEFT(mail, 50)
FROM OPENQUERY (
    ADSI,
    'SELECT mail, sAMAccountName FROM ''LDAP://DC=Katz,DC=COM'' WHERE objectCategory = ''Person'' AND objectClass = ''User'''
)
WHERE sAMAccountName = CAST(@LoginName AS varchar(35))

RETURN @email
I've worked with SQL Server linked servers often, though rarely with LDAP queries... but I got curious and read the Microsoft support page linked to in Ric Tokyo's previous post. Towards the bottom it reads:
It is typical for a directory server to enforce a server limitation on the number of objects that will be returned for a given query. This is to prevent denial-of-service attacks and network overloading. To properly query the directory server, large queries should be broken up into many smaller ones. One way to do this is through a process called paging. While paging is available through ADSI's OLEDB provider, there is currently no way available to perform it from a SQL distributed query. This means that the total number of objects that can be returned for a query is the server limit. In the Windows 2000 Active Directory, the default server limit is 1,000 objects.
I'm thinking that the reason it fails (or not) depending on whether you call it from the app or from a "quick makeshift SQL script" (as you put it) might be related to the security context under which the operation executes. Depending on how the linked server connection was set up, the operation could run under a variety of possible credentials, depending on how you initiate the query.
I don't know, but that's my best guess. I'd look at the linked server configuration - in particular, the settings for which credentials are used as the security context under which operations executed across the linked server run.
Rather than query Active Directory through a linked server, you might be better off caching your AD data in a SQL database and querying that instead. You could use Integration Services by creating an OLE DB connection using the "OLE DB Provider for Microsoft Directory Services" and having a DataReader source with a query like:
SELECT physicalDeliveryOfficeName, department, company, title, displayName, SN,
givenName, sAMAccountName, manager, mail, telephoneNumber, mobile
FROM 'LDAP://DC=SOMECO,DC=COM'
WHERE objectClass='User' and objectCategory = 'Person'
order by mail
Using this method you will still run into the 1,000-row limit for results from an AD query (note that it is NOT advisable to try to increase this limit in AD; it is there to prevent the domain controller from becoming overloaded). Sometimes it's possible to use a combination of queries to return the full data set, e.g. names A - L and M - Z.
Alternatively, you could use the CSVDE command-line utility in Windows Server to export your directory information to a CSV file and then import it into a SQL database (see http://computerperformance.co.uk/Logon/Logon_CSVDE_Export.htm for more info on exporting AD data with CSVDE).
Please read the support page from Microsoft.
I suspect that it might be the cached query plan due to your statement that "When I try calling the function in a quick make-shift SQL script, it runs fine everytime (even when tested in quick succession)."
Could you try executing your stored procedure like so:
EXEC usp_MyProcedure WITH RECOMPILE
This question appears at the top of the first Google results page when searching for the error string, but it has no valid answer.
This error happens intermittently when the isolation level is not specified in the .NET code or in the stored procedure.
This error also happens in SQL Server 2008.
The fix is to force SET TRANSACTION ISOLATION LEVEL READ COMMITTED (or READ UNCOMMITTED), because any higher isolation level is not supported by Active Directory and SQL Server is trying to use SERIALIZABLE.
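For example, a sketch of the fix in the calling stored procedure (SET statements are not allowed inside functions, so the isolation level has to be pinned by the caller; dbo.GetUserEmail is a hypothetical wrapper around the OPENQUERY call shown in the question):

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- anything higher is rejected by the ADSI provider

DECLARE @LoginName varchar(35);
DECLARE @email varchar(50);
SET @LoginName = 'jdoe';  -- example input
SELECT @email = dbo.GetUserEmail(@LoginName);  -- hypothetical wrapper around the question's OPENQUERY call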
Now, as this error is intermittent: why is ADO.NET or SQL Server switching its default isolation level to SERIALIZABLE sometimes and not others? What triggers this switching?