I am trying to get all the SQL statements present in an Informatica workflow (these can be from Post SQL, Pre SQL, Source Qualifier, etc.). Could somebody guide me on how to go about it?
At the session level you can run the session once, copy the session log contents into a text editor (I use Notepad++) and find all instances of SELECT, DELETE and UPDATE.
If you know all your connection names you can search for those too, as the query statement is usually preceded by a mention of the connection name.
I am trying to create a pretty complex database in MS Access 2013, so I wanted to type it directly in SQL. The script has no errors, as other DBMSs can fully build the database from it (for example, phpMyAdmin imports it with no difficulty).
This tutorial shows how to write a SQL query in order to build tables. I thought this approach matched my goal well, as I could copy-paste my script into the query and run it to create the whole thing.
But when I try to open/double-click the query, a pop-up appears saying "Select data source", waiting for me to select an ODBC source, either from a file or a host, before continuing and executing the query.
I tried other types of queries (creating only one table at a time, trying on a blank file, or even SELECT * FROM *), but this message keeps showing up and I really don't know how to deal with it, as I don't want to connect to anything other than the database in the file itself.
Does anyone have a hint about what to do in this case?
Or, even better, how could Access import my SQL script in order to create the database?
You should configure the database connection in ODBC and check whether the connection is established. Once the connection is established, you can run the query to fetch data or create tables as per your requirement.
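If the goal is to build the tables inside the local .accdb rather than against an external server, it also helps to know that an Access data definition query runs only one statement at a time, and the dialect differs a little from MySQL. A minimal sketch, with invented table and column names, of the kind of statement Access will accept:

    CREATE TABLE Customers
    (
        CustomerID COUNTER CONSTRAINT pk_customers PRIMARY KEY,
        CustomerName TEXT(100) NOT NULL,
        CreatedOn DATETIME
    );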
In an SSIS ETL, I have a query that I need to run on a server/db that does not allow us to create stored procedures.
I would normally use the stored procedure in my variable as the source for my OLE DB source:
However, since we can't put the stored procedure on this server, I was going to store the stored procedure's code in a variable by executing a SQL statement that retrieves the text from our home database, and then use the text stored in that variable as the SQL command for the source:
This way, I can still remotely change the SSIS OLE DB Source object WHERE clause (as long as I don't change the SELECT portion).
I can't imagine that this is very common, so I wanted to get some opinions - is there a better way to do this? I don't want to put all of the code for this SP into the OLE DB Source editor directly because we can't afford to redeploy in case of a WHERE clause update.
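For illustration of the setup described above: if the query text lives as an actual stored procedure on the home database, the SQL statement that loads the variable could be something as simple as the following (the procedure name is invented):

    -- Hypothetical: fetch the stored procedure body from the home database
    SELECT OBJECT_DEFINITION(OBJECT_ID(N'dbo.usp_SourceQuery')) AS ProcedureText;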
You've got the part down that many folks don't do and that's using Variables to drive your package execution. You are further correct in that you can't exactly swap out your columns. To be pedantic, which I am, you can completely change out the query as long as the same metadata is presented.
So, then this question becomes how best to accomplish allowing a package to have a query's filter driven by an external force. Factoring in maintainability, ease of debugging, etc.
My gut reaction is 3 Variables
QueryBase: String. Hardcoded. SELECT * FROM MyTable except of course I'd enumerate my columns
Query: String. EvaluateAsExpression = True. Expression: @[User::QueryBase] + @[User::QueryFilter]
QueryFilter: String
So, we use Query in the OLE DB Source much as you have your longer variable name in there. The only downside to this approach, pre-2012 SSIS, is the limitation on string length in an expression. It was ... 4k I believe. If you assign a value of 5k characters to a variable, that's fine; it's just that in the expression language, adding two strings together can't exceed 4k.
I didn't specify what QueryFilter is going to have in it or the magic to get it there. That, I would base on the bigger picture of your environment, usage, etc. but the general concept is that it will eventually turn into WHERE Condition1 IS NOT NULL but maybe in a full reload situation, it becomes an empty string.
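For illustration, with made-up table and column names, the three variables might evaluate to something like this at run time:

    -- QueryBase (hardcoded in the package)
    SELECT Condition1, Condition2, LoadDate FROM dbo.MyTable

    -- QueryFilter (supplied from outside; an empty string for a full reload)
     WHERE Condition1 IS NOT NULL

    -- Query (the EvaluateAsExpression result fed to the OLE DB Source)
    SELECT Condition1, Condition2, LoadDate FROM dbo.MyTable WHERE Condition1 IS NOT NULL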
So, what are our options for changing the value of QueryFilter?
/SET is an optional parameter passed to the invoking process (dtexec.exe) that makes SSIS packages go. If you have a very limited set of choices and aren't interested in building out additional infrastructure to support the parameters, just hard code some examples. Approximately dtexec /file p1.dtsx /set \Package.Variables[User::QueryFilter].Properties[Value];" WHERE Condition1 IS NOT NULL". Save it into .bat files, different SQL Agent jobs, whatever. Click and run and you're done.
Configuration approach. SSIS offers native ability to use configurations from a SQL Server table, XML, Registry, Parent Package and Environment Variable for 2005 to current edition. The only downside to this approach is that it would not support concurrent execution with different parameters like the first would.
Environment approach. 2012 and 2014, with their new Project Deployment Model, give us the concept of Environments within the SSISDB catalog, which is similar to configuration with a SQL Server table but is done after development is complete and the packages are deployed. It's rather nice as it builds out a history of values used, so if someone asks why the data is all wrong, you can write a query to pull back the parameters used (see the catalog sketch after this list) and, oh look, someone used the initial load filter instead of the daily one. Whoopsidaisy. Same concern over concurrent execution and changing values.
Table driven approach. Instead of using the Configuration with a SQL Server table backing it, you roll your own table and then add an Execute SQL Task to your package to retrieve the filter, Single Row result set, into the QueryFilter variable (a sketch follows this list as well).
Script Task. Use whatever floats your boat to determine what the filter should be.
Message Queue. There is a built-in Message Queue Task that might be of use here if you're already doing MSMQ. Otherwise, too much effort to manage.
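A minimal sketch of the table driven approach, with invented table and column names (none of this comes from the original answer):

    -- A hand-rolled configuration table, one row per package
    CREATE TABLE dbo.PackageFilter
    (
        PackageName nvarchar(260) NOT NULL PRIMARY KEY,
        QueryFilter nvarchar(4000) NOT NULL
    );

    INSERT INTO dbo.PackageFilter (PackageName, QueryFilter)
    VALUES ('p1.dtsx', ' WHERE Condition1 IS NOT NULL');

    -- Statement for the Execute SQL Task (ResultSet = Single row),
    -- with the one returned column mapped to User::QueryFilter
    SELECT QueryFilter
    FROM dbo.PackageFilter
    WHERE PackageName = ?;  -- parameter mapped to the package name

And for the Environment approach, a hedged sketch of the kind of query you could run against the SSISDB catalog views to see which parameter values an execution actually used (the package name is made up):

    SELECT e.execution_id,
           e.package_name,
           e.start_time,
           p.parameter_name,
           p.parameter_value
    FROM SSISDB.catalog.executions AS e
    JOIN SSISDB.catalog.execution_parameter_values AS p
        ON p.execution_id = e.execution_id
    WHERE e.package_name = 'p1.dtsx'
    ORDER BY e.start_time DESC;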
Using Informatica designer, is there a way to run a complex SELECT statement as-is against a source database, and workflow it into a target table?
For example, SQL Server Integration Services makes it really easy to create source/target connections, paste your source SQL, and map the results to the target table. When the package is run, SQL runs against the source, and results are dumped into the target.
Yes, it is possible.
You need to create a source definition with ports that reflect the columns in your SELECT statement and override the generated query with yours by putting it into the SQL Query field of the Source Qualifier transformation.
Then link the ports to the target, generate the session and workflow, configure the connections, and you're done.
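For example, a sketch of the sort of override that could go into the SQL Query field (the table and port names here are invented; the column list, order and data types have to line up with the Source Qualifier ports):

    SELECT c.customer_id,
           c.customer_name,
           SUM(o.order_total) AS total_spend
    FROM customers c
    JOIN orders o
        ON o.customer_id = c.customer_id
    GROUP BY c.customer_id,
             c.customer_name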
Yes, it is possible. Informatica generates a query of its own for the columns that you propagate from the Source Qualifier, and you can override this query at 2 levels:
1. Mapping level: in the Source Qualifier you can override it, and you can validate the query.
2. Session level: in the session you can use the SQL Query attribute for your source to override the default query, and validate that as well. Also, at the session level you can supply this query through a parameter, giving you the flexibility to change the source query as and when you desire (see the sketch below).
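As a hedged illustration of the parameter idea (the parameter, table and column names are invented, not taken from the answer above): a mapping parameter such as $$REGION_CODE, defined in the parameter file, can be referenced inside the override so the filter changes per run without editing the session:

    SELECT order_id,
           customer_id,
           order_date,
           order_total
    FROM orders
    WHERE region_code = '$$REGION_CODE'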
I use an application connected to a SQL database. Using the profiler, I found that the application runs an UPDATE query with a syntax error. I don't have access to the application's source code. The result is that the record is not updated. Is there a way to modify the query every time it is executed, with something like a trigger? I can't use INSTEAD OF because no record is updated or inserted.
This answer (https://stackoverflow.com/a/3319031/1359088) suggests a way to log all the errors to a text file. You could write a little utility and schedule it to run every hour or whatever, which could read through this log, find the erroneous SQL statements, fix them, and then run them itself.
I wanted to know if there is a SQL command which can disable/hide headers (field names) from a SELECT query. Right now, each time I run my query and generate a .csv, it shows the field names in the very first row. I am on a platform where I do not have direct access to the database.
This is not a SQL feature, but a feature of the environment where you run the SQL and then export it to CSV. Some environments offer this feature, some do not.
Yeeeep, MySQL has a skip-column-names option that can be applied when you invoke your MySQL connection.
http://dev.mysql.com/doc/refman/5.0/en/mysql-command-options.html
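If the export runs on the database server itself rather than through the client, another route worth naming (sketched here with an invented table and file path; it requires the FILE privilege and write access on the server) is SELECT ... INTO OUTFILE, which writes the rows with no header line:

    SELECT col1, col2, col3
        INTO OUTFILE '/tmp/export.csv'
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
    FROM my_table;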