Informatica - How to pass queries from a table to a SQL Transformation and get the results

So here's what I am trying to do.
I have a table with two columns: QC_Check and Query. For each QC_Check there is a query, and there are several records like this.
Is there a way, using a SQL transformation, to fetch the SQL query stored in the Query column into Informatica, run the queries against Teradata, and store the results somewhere?

Although I have not tried it myself, this should be possible using a SQL transformation in Query mode with dynamic SQL queries.
Use the table with the Query column as a source. Create a SQL transformation in Query mode and connect the Query column to it.
Write ~Query_Port~ in the SQL editor of the SQL transformation (assuming the input port is named Query_Port), so that each incoming row's query text is substituted and executed.
If you want to capture the results of your query, you have to configure output ports for the columns you retrieve from the database.
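For example, a row in the source table might store a query like the one below (the table and column names here are hypothetical, just to illustrate the shape of a QC check). At runtime the SQL transformation substitutes this text for ~Query_Port~ and runs it against Teradata, so the output port name must match the column alias:
-- Hypothetical QC query stored in the Query column
SELECT COUNT(*) AS qc_result
FROM staging.orders
WHERE order_date IS NULL;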

Related

Query optimization and comparison with Impala

I am working on Power BI and use SQL Server as the database. I use views or direct tables as the source for Power BI. My views are simple SELECT queries with simple joins. I am not finding any scope for query optimization. Query execution takes time in SQL Server, and the tables have millions of rows, increasing day by day.
Now I am thinking of using Impala alongside SQL Server. I am getting clean data from RapidMiner. I have not used Impala before, so I have some doubts. Please answer if you can; I have zero knowledge of Impala.
1. Can we create a connection between RapidMiner and Impala? If so, what are the steps? Google gives me some steps which are difficult to understand.
2. Can we create a connection between Impala and SQL Server?
3. Can we create views in Impala and create joins in those views? I know we can create views as well as joins in Impala, but my question is whether we can create them together.
4. Suppose the SQL Server and Impala connection is made, and I have one table from Impala and one table from SQL Server Management Studio. Can I join both tables in Impala? For this, can we create a connection between Impala and SQL Server Management Studio?
5. Can I use all tables or views created in SQL Server from Impala (after making the connection between SQL Server and Impala)? That means my tables and views are in SQL Server, but I am fetching the data in Impala.
6. All tables are stored in SQL Server. Can I do join operations on these tables in Impala?
7. Can I make views in Impala using tables which are stored in SQL Server?
8. Can I create all tables in Impala and do ETL operations like SUM, ADD, and DATEADD in Impala?
9. Can I create all tables in Impala and do ETL operations like SUM, ADD, and DATEADD in Power Query?
10. Can I create views in SQL Server, put them in an Impala table, and use them in Power Query?
11. Can I create all tables and views with joins in Impala?
12. How can I optimize my query in SQL Server, and if I run the same query on the same data in Impala, will my execution time be reduced?
My SQL query is like this:
CREATE VIEW test AS
SELECT *
FROM table_a a
INNER JOIN table_b b ON a.id = b.id
INNER JOIN table_c c ON b.name = c.name;
GO
The output is about 3,000,000 rows and increasing day by day.
I have also tried using the tables directly instead of the view, but the execution time does not decrease.
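Regarding question 12, one common first step on the SQL Server side (a sketch only, reusing the hypothetical table and column names from the view above) is to index the join columns, and to select only the columns the report needs rather than SELECT *:
-- Index the join columns used by the view (names are illustrative)
CREATE NONCLUSTERED INDEX ix_table_b_id ON table_b (id);
CREATE NONCLUSTERED INDEX ix_table_c_name ON table_c (name);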

Output DAX Evaluate results to a SQL Server Database table

So I have a DAX query that I have built in SSAS. But since my application can't consume DAX directly, I thought pushing the set of DAX results to a database server table would be the ideal solution for my application to read.
How do I output the contents of a DAX query to a SQL Server database table? And, if possible, truncate the contents of the table before each run?
I'm using SQL Server 2016 if that helps.
I got around this by using a linked server to output the contents of an OPENQUERY call to a table in SQL Server.
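A minimal sketch of that approach, assuming a linked server named SSAS_LINK pointing at the SSAS instance and a target table dbo.DaxResults whose columns match the DAX result set (the linked server name, table, and DAX query here are all hypothetical):
-- Clear the target table before each run
TRUNCATE TABLE dbo.DaxResults;

-- Run the DAX query through the linked server and capture the result set
INSERT INTO dbo.DaxResults (CustomerName, SalesAmount)
SELECT *
FROM OPENQUERY(SSAS_LINK,
    'EVALUATE SUMMARIZECOLUMNS(''Customer''[Name], "SalesAmount", SUM(''Sales''[Amount]))');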

In terms of the queries we write, what is the difference between queries written in SQL and Spark SQL?

We have a Python script that parses SQL scripts (SELECT, INSERT) to get the source and target columns when inserting data from one table to another. Right now we parse only SQL queries. If we want some Spark SQL queries to be parsed using the same model, will the structure of the queries change from SQL to Spark SQL? An example of the kind of statement being parsed is shown below.
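For illustration (a hypothetical statement; the table names are made up), the basic INSERT INTO ... SELECT form that such a parser handles is written the same way in both SQL and Spark SQL, although Spark SQL also supports variants such as INSERT OVERWRITE TABLE:
INSERT INTO target_db.customers
SELECT id, name
FROM source_db.raw_customers;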

Dynamic SQL Query for Matrix

I need some help creating a stored procedure or a dynamic SQL query to query a matrix and produce output in a required format.
Database: SQL Server 2014
States table (screenshot not included):
Required output (screenshot not included):
More states will be added to the table, so the script should be able to dynamically pick up all the rows and columns.
The output should be written to a new table.
Thanks in advance.
Try the links below for reference; they may help you with the logic:
How to pivot table in SQL Server with a stored procedure?
Dynamic Pivot Queries with dynamic dates as column header in SQL Server
Note: The above links cover pivoting; refer to them and adapt the approach if you need to unpivot.
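A minimal sketch of a dynamic pivot along those lines (the schema is hypothetical, since the screenshots are not available; assume a States table with columns StateName, Metric, and Value):
DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- Build the column list from the distinct state names
SELECT @cols = STUFF((
    SELECT ',' + QUOTENAME(StateName)
    FROM (SELECT DISTINCT StateName FROM dbo.States) AS s
    ORDER BY StateName
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '');

-- Pivot the values so each state becomes a column, writing into a new table
SET @sql = N'SELECT Metric, ' + @cols + N'
INTO dbo.StatesPivoted
FROM (SELECT StateName, Metric, Value FROM dbo.States) AS src
PIVOT (MAX(Value) FOR StateName IN (' + @cols + N')) AS p;';

EXEC sp_executesql @sql;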

How can I do a dynamic pivot in SQL Server 2000

I want to write a SQL statement that uses a pivot in SQL Server 2000. The PIVOT keyword is not available in SQL Server 2000, so I found some examples that use a CASE statement, but those require that you know the column names beforehand, which I won't. How do I do a pivot which dynamically generates the column names from the data available to it?
We create SQL commands with the CASE statements from our application and fire them at the database (any database, not specifically SQL Server). First we determine the number of pivot columns and their names using one query; from those results we generate the next query.
So the first query to determine the columns looks somewhat like:
SELECT DISTINCT myField FROM myTable
Then we use all the values in this result to construct an SQL command in which a CASE statement is generated for each value, as in the sketch below.
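A sketch of the generated command, assuming the distinct values returned were 'A' and 'B' and a hypothetical table myTable(rowKey, myField, amount); this runs on SQL Server 2000 as well as any other database:
SELECT rowKey,
       SUM(CASE WHEN myField = 'A' THEN amount ELSE 0 END) AS A,
       SUM(CASE WHEN myField = 'B' THEN amount ELSE 0 END) AS B
FROM myTable
GROUP BY rowKey;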
We wanted a database-agnostic solution, so we do this processing outside the database, but I'm sure you could do the same in a stored procedure within SQL Server itself.
I have not tried to replicate PIVOT on SQL Server 2000, but what I have done is use PIVOT when I do not know the column names beforehand: I used ROW_NUMBER() to determine the column names instead. You can try that, as in the sketch below.
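A sketch of that ROW_NUMBER() approach with hypothetical table and column names (note that PIVOT and ROW_NUMBER() require SQL Server 2005 or later, so this is a workaround for newer versions rather than for 2000 itself):
SELECT rowKey, [1] AS Col1, [2] AS Col2, [3] AS Col3
FROM (
    SELECT rowKey, amount,
           ROW_NUMBER() OVER (PARTITION BY rowKey ORDER BY myField) AS rn
    FROM myTable
) AS src
PIVOT (MAX(amount) FOR rn IN ([1], [2], [3])) AS p;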