informatica multi correlated subquery implementation - sql

I am facing a task that, due to my lack of experience with Informatica components (in particular the SQL transformation), I have not implemented yet.
So what would be the best approach in PowerCenter to implement this kind of subquery:
SELECT
    A.ID,
    NVL2(A.SACHKONTO, B.KLAMMER, A.ID) AS KLAMMER
FROM
    Table1 A,
    (SELECT
        A.ID AS KLAMMER,
        B.ID
    FROM
        (SELECT
            ID,
            ID AS VON_ID,
            LEAD(ID, 1) OVER (ORDER BY ID) - 1 AS BIS_ID
        FROM
            Table1
        WHERE
            SACHKONTO IS NULL) A,
        Table1 B
    WHERE
        B.ID BETWEEN A.VON_ID AND A.BIS_ID
    ) B
WHERE
    A.ID = B.ID
So I tried different approaches, with little success.
The first was to "decompose" the SQL into its smaller parts (I will refer to them if necessary, or edit the question).
I also tried to put in the whole query (after having adapted it to Informatica's SQL "language"), but without success.
This is the closest I got to replicating the query, without considering performance (I used a SQL override in the Source Qualifier). But as the result table shows, the join is not processed properly, so I believe I need to add another pipeline to execute the join in the proper order:
My mapping solution:

You could put the subquery into a lookup transformation and then match the records coming through from the outer query (which I assume is what you're putting into your source qualifier, per the A.ID = B.ID from the original query), and then filter the nulls later.
Or you could similarly use 2 source qualifiers (one for the parent query and another for the subquery) followed by a joiner transformation with a normal join type, as sketched below. This way will filter your source data sooner than my first option, so performance-wise it should be better.
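For the second approach, a minimal sketch of what the override for the second source qualifier could look like (this is just the derived table B lifted out of the original query; the joiner then matches its ID port against Table1's ID as a normal join):
SELECT
    A.ID AS KLAMMER,
    B.ID
FROM
    (SELECT
        ID,
        ID AS VON_ID,
        LEAD(ID, 1) OVER (ORDER BY ID) - 1 AS BIS_ID
    FROM Table1
    WHERE SACHKONTO IS NULL) A,
    Table1 B
WHERE B.ID BETWEEN A.VON_ID AND A.BIS_ID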
The final option is to use a source qualifier with a SQL override, same as your current query... it filters records out right at the database, so that's a plus (so long as the query itself is optimal), but SQL overrides are not validated in the wider mapping context and can spring unexpected surprises, so they should be used with a lot of caution.

Related

Can I leverage BigQuery (BQ) partition via a join?

I am a Tableau designer, and we are building some views that get filtered by category a lot. Because of this, we tried to create a category_id that would serve as a partition. The problem seems to be that if I filter by category only, the partition doesn't get used, and we get billed for scanning the full table's GB.
Our team is trying to see if this could be minimized by using a nested query as follows:
SELECT *
FROM table a
INNER JOIN (
    SELECT DISTINCT category_id, category
    FROM table
) b
    ON a.category_id = b.category_id
WHERE b.category = 'Category A'
The idea is that we could show the user b.category; they select it in Tableau, and then the inner join would trigger the partition pruning and limit the bytes returned. But when I try this in the BQ interface, the estimated returned size comes back the same.
You'll need to filter on the partitioned field before you make the inner join.
I haven't used Tableau before, so I don't know if this is possible, but just an idea: you could create a parameter which is set by the chosen category in Tableau and referenced in the WHERE clause against the partitioned table.
SELECT *
FROM table a
INNER JOIN (
    SELECT DISTINCT category_id, category
    FROM table
    WHERE category = #chosen_category
) b
    ON a.category_id = b.category_id;
When you say that when you filter only by category the partition isn't used, have you actually tested the query from the console to confirm whether the partition is being used or not? If it isn't, you need to look at the partition definition; if it is, you need to take another look at your Tableau query.
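For example, you can sanity-check the pruning from the console by filtering directly on the partitioned field and comparing the estimated bytes against an unfiltered scan (a sketch; the category_id value here is hypothetical):
SELECT *
FROM table
WHERE category_id = 123
If the estimated bytes drop, the partition works and the problem lies in the SQL Tableau generates.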
VizQL (Viz Query Language) is Tableau's query layer that converts your Tableau viz into SQL for execution, so whilst you cannot really modify the outgoing SQL, you can at least capture and test it, which enables you to identify poorly performing calculations and/or vizzes, as well as optimise the backend for the queries that Tableau will send.
I've written an article about this here: https://datawonders.atlassian.net/wiki/spaces/TABLEAU/pages/1290600449/Let+s+Talk+Errors+Tuning+6+minute+read
The thing about Tableau is that it treats the source as a derived table, with all filters placed at the top level of the query immediately before streaming, so your query:
Select *
From table a
Join (
    Select Distinct Category_ID, Category
    From table
) b On a.category_id = b.category_id
Where b.category = 'Category A'
Will actually look like this (assuming you just select everything):
Select a1.*
From (
    Select *
    From table a
    Join (
        Select Distinct Category_ID, Category
        From table
    ) b On a.category_id = b.category_id
) a1
Where a1.category = <your selected category>
So you can see from here that, being two levels deep, your category filter just won't reach the partition; instead everything is read into the spool, the join takes place in tempdb, and only the complete set is filtered immediately before streaming to Tableau.
Bad, underperforming SQL it most certainly is.
And this is where the relationship model of v2020.2 comes into play: it has been designed to treat each table as a separate exclusive entity, with joins only made at execution time, so you could build a view that uses data from table a while using table b to provide the filtering.
As an alternative, my preferred overall method is to switch entirely to Custom SQL, used together with parameters. This enables you to craft and test your own high-performance, low-loading query, and because parameters are parsed before the query is executed, you can place the filtering deep down in the query without the need for a secondary look-up table or a filtered derived statement. A SELECT DISTINCT, as you are currently using it, is still going to produce a large plan: unless the category column is indexed, the engine will still need to read every record in the table.
So using parameters, your new query will look something like:
Select a1.*
From (
    Select *
    From table a
    Join lookup_table b On a.category_id = b.category_id
        And b.category = <parameters.pCategory>
) a1
(I've placed the filter condition directly onto the join as this can improve performance in some circumstances, though this actually shouldn't make much difference)
And when used in conjunction with the Set Parameter action, you can now use parameters as in/out updateable variables which update as the user interacts directly with the viz, instead of the user needing to update them manually as they go. If you haven't used these before, I wrote an article about it here: https://community.tableau.com/s/news/a0A4T00000313S0UAI/psst-have-you-had-a-go-with-variables-in-tableau-yet
Steve

Teiid not performing optimal join

For our Teiid Springboot project we use a row filter in a where clause to determine what results a user gets.
Example:
SELECT * FROM very_large_table WHERE id IN ('01', '03')
We want the contents of the IN clause to be dynamic, like so:
SELECT * FROM very_large_table WHERE id IN (SELECT other_id from very_small_table)
The problem now is that Teiid fetches all the data from very_large_table and only then tries to filter with the WHERE clause, which makes the query 10-20 times slower. The data in this very_small_table is only about 1-10 records, and it is based on the user context we get from Java.
The very_large_table is located in an Oracle database and the very_small_table is on the Teiid pod/container. Somehow I can't force Teiid to ship the small table's data to Oracle and perform the filtering there.
Things that I have tried:
I have specified the foreign data wrapper as follows:
CREATE FOREIGN DATA WRAPPER "oracle_override" TYPE "oracle" OPTIONS (EnableDependentsJoins 'true');
CREATE SERVER server_name FOREIGN DATA WRAPPER "oracle_override";
I also tried an EXISTS statement, and a join clause instead of the WHERE clause, to see if pushdown happened. Join hints don't seem to matter either.
Sadly the performance impact at the moment is so high that we can't reach our performance targets.
Are there any cardinalities set on very_small_table and very_large_table? If not, the planner will assume a default plan.
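For example, cardinality can be declared as a table option in DDL-based VDB metadata; a sketch (the column definitions here are hypothetical, and the option should be checked against your Teiid version):
CREATE FOREIGN TABLE very_small_table (
    other_id string
) OPTIONS (CARDINALITY 10);

CREATE FOREIGN TABLE very_large_table (
    id string
) OPTIONS (CARDINALITY 5000000);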
You can also use a dependent join hint:
SELECT * FROM very_large_table WHERE id IN /*+ dj */ (SELECT other_id from very_small_table)
Often, exists performs better than in:
SELECT vlt.*
FROM very_large_table vlt
WHERE EXISTS (SELECT 1 FROM very_small_table vst WHERE vst.other_id = vlt.id);
However, this might end up scanning the large table.
If id is unique in vlt and there are no duplicates in vst, then a JOIN might optimize better:
select vlt.*
from very_small_table vst
join very_large_table vlt
    on vst.other_id = vlt.id;

Determine datatypes of columns - SQL selection

Is it possible to determine the data type of each column after a SQL selection, based on the received results? I know it is possible through information_schema.columns, but the data I receive comes from multiple tables joined together, and the columns are renamed. Besides that, I'm not able to see or use the source query, or execute other queries myself.
My job is to store this received data in another table, but without knowing beforehand what I will receive. I'm obviously able to check, for example, whether a certain column contains numbers or text, but not whether it was originally stored as a TINYINT(1) or a BIGINT(128). How should I approach this? To clarify, it is alright if the data types of the source and destination columns aren't exactly the same, but I don't want to reserve too much space beforehand (or too little, for that matter).
As I'm typing, I realize I'm formulating the question wrong. What would be the best approach to handle the described situation? I thought about altering tables on the run (e.g. increasing column sizes when needed), but that seems a bit, well, wrong and not the proper way.
Thanks
Can you issue the following query about your new table after you create it?
SELECT *
INTO JoinedQueryResults
FROM TableA AS A
INNER JOIN TableB AS B ON A.ID = B.ID;

SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'JoinedQueryResults';
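If you only need the type details, INFORMATION_SCHEMA.COLUMNS exposes them directly through its standard columns; a sketch:
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, NUMERIC_PRECISION, NUMERIC_SCALE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'JoinedQueryResults';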
Is the query too big to run before knowing how big the results will be? Get an idea of how many rows it may return; the trick with join queries is to group on the columns you are joining on, to make the estimate return more quickly. Here's an example that just returns the row count of the query that created the JoinedQueryResults table above.
SELECT SUM(A.NumRows * B.NumRows)
FROM (SELECT ID, COUNT(*) AS NumRows
      FROM TableA
      GROUP BY ID) AS A
INNER JOIN (SELECT ID, COUNT(*) AS NumRows
            FROM TableB
            GROUP BY ID) AS B ON A.ID = B.ID
The query above will run faster if all you need is a record count to help you estimate a size.
Also try instantiating a table for your results with a query like this.
SELECT TOP 0 *
INTO JoinedQueryResults
FROM TableA AS A
INNER JOIN TableB AS B ON A.ID = B.ID

SQL - Relationship between a SubQuery and an Outer Table

Problem
I need to better understand the rules about when I can reference an outer table in a subquery and when (and why) that is an inappropriate request. I've discovered duplication in an Oracle SQL query I'm trying to refactor, but I'm running into issues when I try to turn my referenced table into a grouped subquery.
The following statement works appropriately:
SELECT t1.*
FROM table1 t1
INNER JOIN table2 t2
    on t1.id = t2.id
    and t2.date = (SELECT max(date)
                   FROM table2
                   WHERE id = t1.id) --This subquery has access to t1
Unfortunately table2 sometimes has duplicate records, so I need to aggregate t2 before I join it to t1. However, when I try to wrap it in a subquery to accomplish this, suddenly the SQL engine can't recognize the outer table any longer.
SELECT t1.*
FROM table1 t1
INNER JOIN (SELECT *
            FROM table2 t2
            WHERE t1.id = t2.id --This loses access to t1
            and t2.date = (SELECT max(date)
                           FROM table2
                           WHERE id = t1.id)) sub on t1.id = sub.id
--Subquery loses access to t1
I know these are fundamentally different queries I'm asking the compiler to put together but I'm not seeing why the one would work but not the other.
I know I can duplicate the table references in my subquery and effectively detach my subquery from the outer table but that seems like a really ugly way of accomplishing this task (what with all the duplication of code and processing).
Helpful References
I found this fantastic description of the order in which clauses are evaluated in SQL Server: (INNER JOIN ON vs WHERE clause). I'm using Oracle, but I would think this is standard across the board. There is a clear order to clause evaluation (with FROM being first), so I would think that any clause occurring further down the list would have access to all information previously processed. I can only assume my second query somehow changes that ordering so that my subquery is being evaluated too early?
In addition, I found a similar question (Referencing outer query's tables in a subquery), but while the input was good, they never really explained why he couldn't do what he was doing and just gave alternative solutions to his problem. I've tried their alternate solutions, but it's causing me other issues. Namely, the subquery with the date reference is fundamental to the entire operation, so I can't get rid of it.
Questions
I want to understand what I've done here... Why can my initial subquery see the outer table but not after I wrap the entire statement in a subquery?
That said, if what I'm trying to do can't be done, what is the best way of refactoring the first query to eliminate the duplication? Should I reference table1 twice (with all the duplication that requires)? Or is there (probably) a better way of tackling this problem?
Thanks in advance!
------EDIT------
As some have surmised, the queries above are not the actual query I'm refactoring but an example of the problem I'm running into. The query I'm working with is a lot more complicated, so I'm hesitant to post it here as I'm afraid it will get people off track.
------UPDATE------
So I ran this by a fellow developer and he had one possible explanation for why my subquery is losing access to t1. Because I'm wrapping this subquery in parentheses, he thinks it is being evaluated before my table t1 is evaluated. This would certainly explain the 'ORA-00904: "t1"."id": invalid identifier' error I've been receiving. It would also suggest that, like the arithmetic order of operations, adding parens to a statement gives it priority within certain clause evaluations. I would still love for an expert to weigh in on whether they agree that this is a logical explanation for what I'm seeing here.
So I figured this out based on the comment that Martin Smith made above (THANKS MARTIN!) and I wanted to make sure I shared my discovery for anyone else who trips across this issue.
Technical Considerations
Firstly, it would certainly help if I used the proper terminology to describe my problem: My first statement above uses a correlated subquery:
http://en.wikipedia.org/wiki/Correlated_subquery
http://www.programmerinterview.com/index.php/database-sql/correlated-vs-uncorrelated-subquery/
This is actually a fairly inefficient way of pulling back data, as it reruns the subquery for every row in the outer table. For this reason I'm going to look for ways of eliminating this type of subquery from my code:
https://blogs.oracle.com/optimizer/entry/optimizer_transformations_subquery_unesting_part_1
My second statement, on the other hand, was using what Oracle calls an inline view, also known as a derived table in SQL Server:
http://docs.oracle.com/cd/B19306_01/server.102/b14200/queries007.htm
http://www.programmerinterview.com/index.php/database-sql/derived-table-vs-subquery/
An inline view / derived table creates a temporary unnamed view at the beginning of your query and then treats it like another table until the operation is complete. Because the compiler needs to create a temporary view when it sees one of these subqueries in the FROM clause, those subqueries must be entirely self-contained, with no references outside the subquery.
Why what I was doing was stupid
What I was trying to do in that second query was essentially create a view based on an ambiguous reference to another table that was outside the knowledge of my statement. It would be like trying to reference a field in a table that you hadn't explicitly stated in your query.
Workaround
Lastly, it's worth noting that Martin suggested a fairly clever but ultimately inefficient way to accomplish what I was trying to do. The APPLY operator is proprietary SQL Server syntax, but it allows you to talk to objects outside of your derived table:
http://technet.microsoft.com/en-us/library/ms175156(v=SQL.105).aspx
Likewise this functionality is available in Oracle through different syntax:
What is the equivalent of SQL Server APPLY in Oracle?
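For illustration, a sketch of what the APPLY form could look like in SQL Server syntax (hypothetical, reusing the example tables from this question; Oracle 12c offers CROSS APPLY and LATERAL as equivalents):
SELECT t1.*
FROM table1 t1
CROSS APPLY (SELECT MAX(t2.date) AS max_date
             FROM table2 t2
             WHERE t2.id = t1.id) sub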
Ultimately I'm going to re-evaluate my entire approach to this query, which means I'll have to rebuild it from scratch (believe it or not, I didn't create this monstrosity originally - I swear!). A big thanks to everyone who commented - this was definitely stumping me, but all of the input helped put me on the right track!
How about the following query? (Note that RANK should partition only by id, so the most recent date per id ranks first; partitioning by id and date together would make every row rank 1.)
SELECT t1.*
FROM (
    SELECT *
    FROM (
        SELECT t2.id,
               RANK() OVER (PARTITION BY t2.id ORDER BY t2.date DESC) AS R
        FROM table2 t2
    )
    WHERE R = 1
) sub
INNER JOIN table1 t1
    ON t1.id = sub.id
In your second example you are trying to pass the t1 reference down 2 levels... you can't do that; you can only pass it down 1 level (which is why the 1st works). If you give a better example of what you are trying to do, we can help you rewrite your query as well.
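For instance, a common uncorrelated rewrite of the first query is to aggregate table2 once in a derived table and join on the key (a sketch based on the example tables; it assumes you only need t1's columns, with at most one match per id):
SELECT t1.*
FROM table1 t1
INNER JOIN (SELECT id, MAX(date) AS max_date
            FROM table2
            GROUP BY id) sub
    ON t1.id = sub.id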

Multiple Joins in Teradata SQL - Faster to Use Subqueries or Temp Tables?

I am writing SQL for Teradata. I need to use joins to connect data from multiple tables. Is it typically faster to use subqueries or create temporary tables and append columns one join at a time? I'm trying to test it myself but network traffic makes it hard for me to tell which is faster.
Example A:
SELECT a.ID, a.Date, b.Gender, c.Age
FROM mainTable AS a
LEFT JOIN (subquery 1) AS b ON b.ID = a.ID
LEFT JOIN (subquery 2) AS c ON c.ID = a.ID
Or I could...
Example B:
CREATE TABLE a AS (
    SELECT mainTable.ID, mainTable.Date, sq.Gender
    FROM mainTable
    LEFT JOIN (subquery 1) AS sq ON sq.id = mainTable.ID
) WITH DATA;

CREATE TABLE b AS (
    SELECT a.ID, a.Date, a.Gender, sq.Age
    FROM a
    LEFT JOIN (subquery 2) AS sq ON sq.id = a.ID
) WITH DATA;
Assuming I clean everything up afterward, is one approach preferable to another? Again, I would like to just test this myself but the network traffic is kind of messing me up.
EDIT: The main table has anywhere from 100k to 5 million rows. The subqueries return a 1:1 relationship to the main table's IDs, but require WHERE clauses to filter dates. The subquery SQL isn't trivial, I guess is what I'm trying to convey.
Of course it's recommended to write joins; that's why there's an optimizer :-)
If you create temporary tables you force a specific order of processing instead of letting the optimizer decide which is the best plan.
Creating temporary tables might be useful in some rare cases, when you have a really complex query with dozens of joins and you need to break it into more easily maintainable parts, or when you would like to get a specific PI (primary index) for further processing.
Regarding testing different approaches:
Runtime should never be used for that, as it can vary greatly based on the load on the server. You need to access Teradata's query log (DBQL: dbc.QryLogV, etc.) to get details about actual CPU/IO/spool usage. If you don't have access to it, you might ask your DBA to grant it to you.
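A sketch of the kind of DBQL query you might run to compare the two approaches (standard DBQL view columns; adjust the filter to your site's setup):
SELECT StartTime, AMPCPUTime, TotalIOCount, SpoolUsage, QueryText
FROM dbc.QryLogV
WHERE UserName = USER
ORDER BY StartTime DESC;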
Btw, instead of real tables you should create VOLATILE TABLES, which are automatically dropped when you log off.
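A minimal sketch of the first step of Example B rewritten as a volatile table (standard Teradata syntax; ON COMMIT PRESERVE ROWS keeps the rows available across statements within the session, and "(subquery 1)" stands for your actual subquery as in the question):
CREATE VOLATILE TABLE a AS (
    SELECT mainTable.ID, mainTable.Date, sq.Gender
    FROM mainTable
    LEFT JOIN (subquery 1) AS sq ON sq.id = mainTable.ID
) WITH DATA
ON COMMIT PRESERVE ROWS;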