I have a question regarding SSIS packages. I have an SSIS package with OnError, OnPreExecute and OnPostExecute event handlers. In these event handlers I have SQL tasks that perform different tasks and update different tables. My question is this: there are some system variables that I make use of, like SourceName and SourceDescription (the current SQL task's name and description). I notice there aren't any variables for the connection (server name, database name) of the "Source", i.e. the step. Is there any way to get the database name and server name that the Source/Step used? Any help will be much appreciated; thanking you in advance.
I don't think there is a system variable carrying the server name and DB name. In an SSIS package, any number of connections can be created in the connection manager, so making them all available in a system variable would not be possible. I think such a thing needs to be managed by the developer. Maybe these steps will help you:
1. Add a parameter (one parameter per server).
This parameter will be available throughout the package.
2. When the package is running, if required, set the parameter's value from the SSIS catalog (through the SSIS stored procedures), as sketched below.
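A minimal T-SQL sketch of step 2, assuming the project deployment model (SSIS 2012+); the folder, project, package and parameter names are placeholders:
DECLARE @execution_id BIGINT;

-- Create an execution for the package in the SSIS catalog
EXEC SSISDB.catalog.create_execution
    @folder_name  = N'MyFolder',
    @project_name = N'MyProject',
    @package_name = N'MyPackage.dtsx',
    @execution_id = @execution_id OUTPUT;

-- Set the parameter before starting (object_type 30 = package parameter,
-- 20 = project parameter)
EXEC SSISDB.catalog.set_execution_parameter_value
    @execution_id,
    @object_type     = 30,
    @parameter_name  = N'ServerName',
    @parameter_value = N'PRODSQL01';

EXEC SSISDB.catalog.start_execution @execution_id;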
For the server name, you can use the system variable called MachineName; it gives you the name of the machine the package is running on. As for the DB, you will have to capture it using an Execute SQL Task and store it in a variable of your choice.
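A minimal sketch: run a one-row query like this in the Execute SQL Task against the connection in question, with the result set (Single row) mapped to SSIS string variables:
-- DB_NAME() returns the current database, @@SERVERNAME the instance name
SELECT DB_NAME() AS DatabaseName,
       @@SERVERNAME AS ServerName;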
I am a newbie to QlikView and looking for some guidance on how to pass an external parameter to a QV script, i.e. a .qvw file.
Below is the scenario on which I am working:
We are creating a report whose source is a database, and we will be using automation tools to trigger the script from Linux servers. After doing a bit of research I found two ways to connect to the database from a QV script.
1) Use the connection string in the script to connect to the DB. But in our case the passwords are changed every 3 months, so that cancels out this option.
2) Create a text file on the QV server from the Linux job which will hold the connection string, and include that text file in the script. This option is ruled out in my case because our QV server is shared by other teams and it is not secure to have a password hard-coded in a file on a common server.
Now I am thinking of passing the connection string, or the user name and password, as a parameter to the script from the automation tool.
Is it possible to pass external parameters to a QV script from a Linux server? And if yes, how do I do it?
Something like below:
ODBC CONNECT TO server (XUserId is $(vuser), XPassword is $(vpwd));
SQL SELECT * FROM db.table;
$(vuser) and $(vpwd) are variables.
Thanks in advance for your time and please let me know if you need more clarification on this.
Chapter 7.1, Command Line Syntax, of the QlikView Reference Manual (which I strongly recommend getting and using heavily) says:
/v
If this switch is immediately followed by a variable name and an assignment, the variable will obtain the assigned value before the script execution starts.
What the manual will not tell you is that the variable has to exist in the script, i.e. you add the variable via Settings -> Variable Overview (Ctrl-Alt-V), and then you can pass it via:
qv /r /vvuser=user1 file.qvw
I use a different solution. In every QlikView file I add the line
$(must_include=.\etc\DBConnect.txt);
The connection is then defined in the text file DBConnect.txt which may look like this:
ODBC CONNECT TO [conn] (XUserId is cRQCaaaaaaaaaaabbbbbbbROaA, XPassword is YaaaaaaBBBBBBBBZ);
This way all users in the company may use the same QlikView files and refresh them using their own credentials. However, it is necessary that they all use the same name for the ODBC connection to the server.
In an SSIS ETL, I have a query that I need to run on a server/DB that does not allow us to create stored procedures.
I would normally use the stored procedure name held in my variable as the source for my OLE DB Source.
However, since we can't put the stored procedure on this server, I was going to store the code of the stored procedure in a variable by executing a SQL statement that retrieves the text from our home database, then use the text stored in this variable as the SQL command for the source.
This way, I can still remotely change the WHERE clause of the SSIS OLE DB Source (as long as I don't change the SELECT portion).
I can't imagine that this is very common, so I wanted to get some opinions - is there a better way to do this? I don't want to put all of the code for this SP into the OLE DB Source editor directly, because we can't afford to redeploy in case of a WHERE clause update.
You've got the part down that many folks don't do and that's using Variables to drive your package execution. You are further correct in that you can't exactly swap out your columns. To be pedantic, which I am, you can completely change out the query as long as the same metadata is presented.
So, then this question becomes how best to accomplish allowing a package to have a query's filter driven by an external force. Factoring in maintainability, ease of debugging, etc.
My gut reaction is 3 Variables
QueryBase: String. Hardcoded. SELECT * FROM MyTable except of course I'd enumerate my columns
Query: String. EvaluateAsExpression = True. Expression: @[User::QueryBase] + @[User::QueryFilter]
QueryFilter: String
So, we use Query in the OLE DB Source, much as you have your longer variable name in there. The only downside to this approach, pre SSIS 2012, is the limitation on string length in an expression. It was ... 4k I believe. If you assign a value of 5k characters, it's fine; it's just that in the expression language, adding two strings together can't exceed 4k characters.
I didn't specify what QueryFilter is going to have in it or the magic to get it there. That, I would base on the bigger picture of your environment, usage, etc. but the general concept is that it will eventually turn into WHERE Condition1 IS NOT NULL but maybe in a full reload situation, it becomes an empty string.
So, what are our options for changing the value of QueryFilter?
/SET is an optional parameter passed to the invoking process (dtexec.exe) that makes SSIS packages go. If you have a very limited set of choices and aren't interested in building additional infrastructure out to support the parameters, just hard code some examples. Approximately: dtexec /file p1.dtsx /set \Package.Variables[User::QueryFilter].Properties[Value];" WHERE Condition1 IS NOT NULL" Save it into .bat files, different SQL Agent jobs, whatever. Click and run and you're done.
Configuration approach. SSIS offers native ability to use configurations from a SQL Server table, XML, Registry, Parent Package and Environment Variable for 2005 to current edition. The only downside to this approach is that it would not support concurrent execution with different parameters like the first would.
Environment approach. 2012 and 2014, with their new Project Deployment Model, give us the concept of Environments within the SSISDB catalog which is similar to configuration with a SQL Server table but it is done after development is complete and the packages are deployed. It's rather nice as it builds out a history of values used so if someone asks why is the data all wrong, you can write a query to pull back the parameters used and Oh look someone used the initial load filter instead of the daily. Whoopsidaisy. Same concern over concurrent execution and changing values.
Table driven approach. Instead of using the Configuration with a SQL Server table backing it, you roll your own table and then add an Execute SQL Task into your package to retrieve the filter, Single Row, into our QueryFilter variable (see the sketch after this list).
Script Task. Use whatever floats your boat to determine what the filter should be.
Message Queue. They have built in a Message Queue Task and might be of use here if you're already doing it. Otherwise, too much effort to manage
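A minimal sketch of the table driven approach; the table and names are hypothetical:
-- One row per package; the Execute SQL Task (ResultSet = Single row)
-- reads QueryFilter into the SSIS variable of the same name.
CREATE TABLE dbo.PackageConfig
(
    PackageName NVARCHAR(128) NOT NULL PRIMARY KEY,
    QueryFilter NVARCHAR(4000) NOT NULL
);

INSERT INTO dbo.PackageConfig (PackageName, QueryFilter)
VALUES (N'MyPackage', N' WHERE Condition1 IS NOT NULL');

-- Query used by the Execute SQL Task:
SELECT QueryFilter
FROM dbo.PackageConfig
WHERE PackageName = N'MyPackage';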
I am creating an MSSQL2008 SSIS package to generate and email reports from database tables. It works perfectly on a single database. The client is running 3 different databases used by 3 different divisions. The database structure is exactly the same. All three databases are located on the same server, same security / credentials are used.
I created a "For Each Loop Container" in my SSIS package that loops through the list of 3 items and populates it into a variable. How do I now take that and pass it to the "Execute SQL Task" to run three times (once for each database)?
Thank you for your time!
It was a lot easier than I expected.
I went to the Properties of the "Execute SQL Task" and under "Expressions", for "Connection" I specified @varDBName, which was the variable I populated in the outer "for each" loop. I also needed to set the "DelayValidation" property to "True" so it's only evaluated at run time.
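An alternative sketch, with hypothetical names: since all three databases sit on the same server with the same credentials, you can instead put an expression on the connection manager's ConnectionString property so that only the catalog changes per iteration:
"Data Source=MyServer;Initial Catalog=" + @[User::varDBName] + ";Provider=SQLNCLI10.1;Integrated Security=SSPI;"
With that in place, the Execute SQL Task inside the loop just keeps using that connection manager; DelayValidation = True is still needed for the same reason.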
I hope this helps somebody else.
I have a database connection to database DB1. The only thing I can do is execute T-SQL statements, including using stored procedures. I want to export a specific table (or even specific rows of a specific table) to my local database. As you can read above, the DBs are on different servers, meaning no direct connection is possible. Therefore the question: is it possible to write a query that returns another query to execute on the local server to get the data? Also note that the table contains BLOBs. Thanks.
If you have SQL Server Management Studio, you can use the data import function on your local database to get the data. It works as long as you have Read/Select access on the tables you are trying to copy.
If you have Visual Studio you can use the database tools in there to move data between two servers as long as you can connect to both from your workstation.
Needs Ultimate or Premium though:
http://msdn.microsoft.com/en-us/library/dd193261.aspx
RedGate has some useful tools too:
http://www.red-gate.com/products/sql-development/sql-compare/features
Maybe you should ask at https://dba.stackexchange.com/ instead.
If you can log in to the remote DB (where you can only issue T-SQL), you may create a linked server on your local server pointing to the remote one, and use it later directly in queries, like:
select * from [LinkedServerName].[DatabaseName].[SchemaName].[TableName]
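A minimal sketch of the linked-server setup on the local server; all names are placeholders, and the login must be valid on the remote server:
-- Register the remote server as a linked server
EXEC sp_addlinkedserver
    @server     = N'LinkedServerName',
    @srvproduct = N'',
    @provider   = N'SQLNCLI',
    @datasrc    = N'RemoteServerName';

-- Map local logins to a remote SQL login
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'LinkedServerName',
    @useself     = 'FALSE',
    @locallogin  = NULL,
    @rmtuser     = N'remote_login',
    @rmtpassword = N'remote_password';

-- Pull the rows (BLOB columns come across too)
SELECT *
INTO dbo.LocalCopy
FROM [LinkedServerName].[DatabaseName].[SchemaName].[TableName];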
We have a database running on SQL 2005. One of the stored procedures looks up a user's email address from Active Directory using a linked server. The call to the linked server occurs in a database function.
I'm able to call it successfully from my ASP.NET application the first time, but periodically after that, it fails with the following error:
{"The requested operation could not be performed because OLE DB provider \"ADsDSOObject\" for linked server \"ADSI\" does not support the required transaction interface."}
It appears that the amount of time between calls to the function affects whether the linked server query will work correctly. I am not using any transactions. When I try calling the function in a quick make-shift SQL script, it runs fine every time (even when tested in quick succession).
Is there some sort of transaction being left open that naturally dies if I don't try calling the procedure again? I'm at a loss here.
Here is the simple call in the stored procedure:
DECLARE @email varchar(50)
SELECT @email = LEFT(mail, 50)
FROM OPENQUERY (
    ADSI,
    'SELECT mail, sAMAccountName FROM ''LDAP://DC=Katz,DC=COM'' WHERE objectCategory = ''Person'' AND objectClass = ''User'''
)
WHERE sAMAccountName = CAST(@LoginName AS varchar(35))
RETURN @email
I've worked with SQL Server linked servers often, though rarely with LDAP queries... but I got curious and read the Microsoft support page linked in Ric Tokyo's previous post. Towards the bottom it reads:
It is typical for a directory server to enforce a server limitation on the number of objects that will be returned for a given query. This is to prevent denial-of-service attacks and network overloading. To properly query the directory server, large queries should be broken up into many smaller ones. One way to do this is through a process called paging. While paging is available through ADSI's OLEDB provider, there is currently no way available to perform it from a SQL distributed query. This means that the total number of objects that can be returned for a query is the server limit. In the Windows 2000 Active Directory, the default server limit is 1,000 objects.
I'm thinking that the reason it fails (or not) depending on whether you call it from the app or from a "quick make-shift SQL script" (as you put it) might be related to the security context under which the operation is executing. Depending on how the linked server connection was set up, the operation could be executed under a variety of possible credentials, depending on how you initiate the query.
I don't know, but that's my best guess. I'd look at the linked server configuration, in particular the settings for which credentials are used as the security context under which operations executed across the linked server run.
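For example, this (using the ADSI linked server name from the question) shows which local logins are mapped to which remote credentials:
-- Lists the login mappings configured for the linked server
EXEC sp_helplinkedsrvlogin 'ADSI';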
Rather than query Active Directory through a linked server, you might be better off caching your AD data in a SQL database and then querying that instead. You could use Integration Services by creating an OLE DB connection using the "OLE DB Provider for Microsoft Directory Services" and having a DataReader source with a query like:
SELECT physicalDeliveryOfficeName, department, company, title, displayName, SN,
givenName, sAMAccountName, manager, mail, telephoneNumber, mobile
FROM 'LDAP://DC=SOMECO,DC=COM'
WHERE objectClass='User' and objectCategory = 'Person'
order by mail
Using this method you will still run into the 1,000-row limit for results from an AD query (note that it is NOT advisable to try and increase this limit in AD; it is there to prevent the domain controller from becoming overloaded). Sometimes it's possible to use a combination of queries to return the full data set, e.g. names A-L and M-Z.
Alternatively you could use the CSVDE command line utility in Windows Server to export your directory information to a CSV file and then import it into a SQL database (see http://computerperformance.co.uk/Logon/Logon_CSVDE_Export.htm for more info on exporting AD data with CSVDE).
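For instance, a hypothetical CSVDE export limited to the attributes used above (run on a domain-joined Windows server; the file name and DN are placeholders):
csvde -f ad_users.csv -d "DC=SOMECO,DC=COM" -r "(&(objectClass=user)(objectCategory=person))" -l "sAMAccountName,mail,displayName,department"
The resulting CSV can then be loaded with BULK INSERT or an SSIS flat file source.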
Please read the support page from Microsoft.
I suspect that it might be the cached query plan, due to your statement that "When I try calling the function in a quick make-shift SQL script, it runs fine every time (even when tested in quick succession)."
Could you try executing your stored procedure like so:
EXEC usp_MyProcedure WITH RECOMPILE
This question appears at the top of the first Google page when searching for the error string, but has no valid answer.
This error happens intermittently when the isolation level is not specified in the .NET code or in the stored procedure.
This error also happens in SQL Server 2008.
The fix is to force SET TRANSACTION ISOLATION LEVEL READ COMMITTED (or READ UNCOMMITTED), because any higher isolation level is not supported by Active Directory, and SQL Server is otherwise trying to use SERIALIZABLE.
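A minimal sketch, reusing the ADSI query from the question (the login name is a placeholder):
-- Pin the isolation level before touching the linked server; the ADSI
-- provider cannot enlist in a SERIALIZABLE transaction.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

SELECT mail
FROM OPENQUERY (
    ADSI,
    'SELECT mail, sAMAccountName FROM ''LDAP://DC=Katz,DC=COM'' WHERE objectCategory = ''Person'' AND objectClass = ''User'''
)
WHERE sAMAccountName = 'jdoe';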
Now, as this error is intermittent: why is ADO.NET or SQL Server switching its default isolation level to SERIALIZABLE sometimes and not other times? What triggers this switching?