NHibernate - ICriteria for two tables?

I'm having a heck of a time creating the ICriteria for two tables.
The SQL I am trying to mimic is below.
Is there any way to do this?
I've been trying CreateAlias, Subqueries, and a bunch of other stuff, but I always end up with errors.
I have tried posting this on the nhusers Google group, but I'm not getting much help.
Thanks.
Kim
SELECT *
FROM Echo_File_status efs, Data_DELETION_PARAMETER ddp
WHERE
efs.EFS_PRODUCT_CODE = DDP.DDP_PRODUCT_CODE(+)
AND
DDP.DDP_PROCESS_TYPE = 'D'
AND
( ( trunc(nvl(efs.efs_file_create_date, sysdate)) > sysdate - ddp.DDP_DAYS_ON_LINE ) or
( efs.efs_status_code != 'ACKED' ) )
ORDER BY efs.efs_product_code, decode(efs.efs_status_code, 'READY', 1, 'TRANS', 2 , 'FAERR', 3, 'FCERR', 4, 'PRERR', 5, 'TRERR', 6, 'PREP', 7, 'PRCOM', 8, 'FCREA', 9 , 'TRCOM', 10, 'ACKED', 11, 1),
efs.efs_file_create_date DESC

Why use ICriteria when HQL would be easy to use? Join the objects on that code property.

My personal opinion on this kind of problem is that you should just keep writing queries like that in pure SQL, since they will only be harder to read and maintain through the NHibernate criteria API.
As long as you have test coverage on the query, you can safely hide that implementation detail away.
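If you do stay with raw SQL, here is a minimal sketch of the same statement using an ANSI join (assuming the dp alias in the original was meant to be ddp), which may be easier to read and to port later:
-- ANSI equivalent of the Oracle (+) outer join above
SELECT *
FROM Echo_File_Status efs
LEFT OUTER JOIN Data_Deletion_Parameter ddp
  ON efs.efs_product_code = ddp.ddp_product_code
WHERE ddp.ddp_process_type = 'D'
  AND ( trunc(nvl(efs.efs_file_create_date, sysdate)) > sysdate - ddp.ddp_days_on_line
        OR efs.efs_status_code != 'ACKED' )
ORDER BY efs.efs_product_code,
         decode(efs.efs_status_code, 'READY', 1, 'TRANS', 2, 'FAERR', 3, 'FCERR', 4, 'PRERR', 5,
                'TRERR', 6, 'PREP', 7, 'PRCOM', 8, 'FCREA', 9, 'TRCOM', 10, 'ACKED', 11, 1),
         efs.efs_file_create_date DESC
Note that, exactly as in the original, filtering on ddp_process_type in the WHERE clause effectively turns the outer join back into an inner join; if that is not intended, that predicate belongs in the ON clause.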

Related

Insert query failed in Vertica with ERROR code 4534 when triggered from RStudio

I am executing an insert query on a Vertica DB, and it works fine when triggered from a SQL client (SQuirreL). But when I trigger the same query from RStudio, it returns the following error:
Error in .local(conn, statement, ...) : execute JDBC update query
failed in dbSendUpdate ([Vertica]VJDBC ERROR: Receive on
v_default_node0005: Message receipt from v_default_node0008 failed [])
The SQL query looks roughly like this:
insert into SCHEMA1.TEMP_NEW(
SELECT C.PROGRAM_GROUP_ID,
C.POPULATION_ID,
C.PROGRAM_ID,
C.FULLY_QUALIFIED_NAME,
C.STATE,
C.DATA_POINT_TYPE,
C.SOURCE_TYPE,
B.SOURCE_DATA_PARTITION_ID AS DATA_PARTITION_ID,
C.PRIMARY_CODE_PRIMARY_DISPLAY,
C.PRIMARY_CODE_ID,
C.PRIMARY_CODING_SYSTEM_ID,
C.PRIMARY_CODE_RAW_CODE_DISPLAY,
C.PRIMARY_CODE_RAW_CODE_ID,
C.PRIMARY_CODE_RAW_CODING_SYSTEM_ID,
(C.COMPONENT_QUALIFIED_NAME)||('/2') AS SPLIT_PART,
Count(*) AS RECORD_COUNT
from (SELECT DPL.PROGRAM_GROUP_ID,
DPL.POPULATION_ID,
DPL.PROGRAM_ID,
DPL.FULLY_QUALIFIED_NAME,
'MET' AS STATE,
DPL.DATA_POINT_TYPE,
DPL.IDENTIFIER_SOURCE_TYPE AS SOURCE_TYPE,
DPL.IDENTIFIER_SOURCE_DATA_PARTITION_ID AS DATA_PARTITION_ID,
DPL.PRIMARY_CODE_PRIMARY_DISPLAY,
DPL.PRIMARY_CODE_ID,
DPL.PRIMARY_CODING_SYSTEM_ID,
DPL.PRIMARY_CODE_RAW_CODE_DISPLAY,
DPL.PRIMARY_CODE_RAW_CODE_ID,
DPL.PRIMARY_CODE_RAW_CODING_SYSTEM_ID,
DPL.supporting_data_point_lite_id,
DPL.COMPONENT_QUALIFIED_NAME,
COUNT(*) AS RECORD_COUNT
FROM SCHEMA2.TABLE1 DPL
WHERE DPL.DATA_POINT_TYPE <> 'PREFERRED_DEMOGRAPHICS'
AND DPL.DATA_POINT_TYPE <> 'PERSON_DEMOGRAPHICS'
AND DPL.DATA_POINT_TYPE <> 'CALCULATED_RISK_SCORE'
AND DPL.DATA_POINT_TYPE <> '_NOT_RECOGNIZED'
AND DPL.POPULATION_ID NOT ILIKE '%ARCHIVE%'
AND DPL.POPULATION_ID NOT ILIKE '%SNAPSHOT%'
AND DPL.PROGRAM_GROUP_ID = '<PROGRAM_GROUP_ID>'
AND PROGRAM_GROUP_ID IS NOT NULL
AND DPL.IDENTIFIER_SOURCE_DATA_PARTITION_ID IS NULL
AND DPL.PRIMARY_CODE_RAW_CODE_ID IS NOT NULL
AND DPL.PRIMARY_CODE_ID IS NOT NULL
AND EXISTS (SELECT 1
FROM SCHEMA2.TABLE2 MO
WHERE MO.STATE = 'MET'
AND MO.POPULATION_ID NOT ILIKE '%ARCHIVE%'
AND MO.POPULATION_ID NOT ILIKE '%SNAPSHOT%'
AND DPL.PROGRAM_GROUP_ID = MO.PROGRAM_GROUP_ID
AND DPL.PROGRAM_ID = MO.PROGRAM_ID
AND DPL.FULLY_QUALIFIED_NAME = MO.FULLY_QUALIFIED_NAME
AND DPL.OUTCOME_SEQUENCE = MO.MEASURE_OUTCOME_SEQ
AND MO.PROGRAM_GROUP_ID = '<PROGRAM_GROUP_ID>')
GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16) AS C
Left Join
(SELECT DISTINCT SOURCE_DATA_PARTITION_ID,
supporting_data_point_lite_id
FROM SCHEMA2.TABLE3 DPI
where DPI.SOURCE_DATA_PARTITION_ID is not null
AND EXISTS (SELECT 1
FROM (SELECT DPL.supporting_data_point_lite_id
FROM SCHEMA2.TABLE1 DPL
WHERE DPL.DATA_POINT_TYPE <> 'PREFERRED_DEMOGRAPHICS'
AND DPL.DATA_POINT_TYPE <> 'PERSON_DEMOGRAPHICS'
AND DPL.DATA_POINT_TYPE <> 'CALCULATED_RISK_SCORE'
AND DPL.DATA_POINT_TYPE <> '_NOT_RECOGNIZED'
AND DPL.POPULATION_ID NOT ILIKE '%ARCHIVE%'
AND DPL.POPULATION_ID NOT ILIKE '%SNAPSHOT%'
AND DPL.PROGRAM_GROUP_ID = '<PROGRAM_GROUP_ID>'
AND PROGRAM_GROUP_ID IS NOT NULL
AND DPL.IDENTIFIER_SOURCE_DATA_PARTITION_ID IS NULL
AND DPL.PRIMARY_CODE_RAW_CODE_ID IS NOT NULL
AND DPL.PRIMARY_CODE_ID IS NOT NULL
AND EXISTS (SELECT 1
FROM SCHEMA2.TABLE2 MO
WHERE MO.STATE = 'MET'
AND MO.POPULATION_ID NOT ILIKE '%ARCHIVE%'
AND MO.POPULATION_ID NOT ILIKE '%SNAPSHOT%'
AND DPL.PROGRAM_GROUP_ID = MO.PROGRAM_GROUP_ID
AND DPL.PROGRAM_ID = MO.PROGRAM_ID
AND DPL.FULLY_QUALIFIED_NAME = MO.FULLY_QUALIFIED_NAME
AND DPL.OUTCOME_SEQUENCE = MO.MEASURE_OUTCOME_SEQ
AND MO.PROGRAM_GROUP_ID = '<PROGRAM_GROUP_ID>')) SDP
WHERE DPI.supporting_data_point_lite_id = SDP.supporting_data_point_lite_id)) AS B
on C.supporting_data_point_lite_id = B.supporting_data_point_lite_id
group by 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15)
Only the schema and table names have been replaced; all other details are the same.
Can someone please help me fix this error?
This error means some node-to-node communication that happened during the processing of your query failed for some reason.
There are many possible reasons this could happen. Sometimes a poor network or other environmental issues can cause it. If v_default_node0008 was taken down while this query was running, for example, you may see this message. Other times it can be a sign of a Vertica bug, in which case you'd have to take it up with support and/or your administrator.
Normally when a query plan is executing, the control flow happens from the bottom up. At the lowest levels of the plan, various scan(s) read from projections, and when there's no data left to feed to operators above the scan(s), they stop, which causes their neighboring operators to stop, until ultimately the root operator stops and the query finishes.
Occasionally, there is a need to end the query in a top-down fashion. When you have many nodes, each passing data between multiple threads in service of your query, it can be tricky for Vertica to tear down everything atomically in a deterministic fashion. If a thread sending data stops before the thread receiving data was expecting it to (because the receiver hasn't realized the plan is being stopped yet), then it may log this error message. Usually when that happens it is innocuous; you'll see it in vertica.log but it doesn't bubble all the way up to the application. If one of those is making its way to the application then it is probably a Vertica bug.
So when can this happen?
One common scenario is when you have a LIMIT clause. The different scans producing rows on different nodes can't coordinate directly, so they have to be told by operators higher up in the plan when the limit has been reached.
It also happens when a query gets canceled. Cancellation can happen for many reasons: at the request of the application, from the DBA running interrupt_statement on your query, or via resource pool policy. If you exceed the RUNTIMECAP for your resource pool, for example, the query is automatically cancelled once it passes that execution-time threshold.
There may be others too, but those are the most common cases. It won't always be obvious that either limits or cancels are happening to you. The query may be rewritten to include a limit at various stages, and the application's and/or the DBA's policy may be affecting things under the covers.
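One quick way to tell whether a RUNTIMECAP is even in play is to look at the catalog; a minimal sketch (column names as in recent Vertica versions; adjust as needed):
-- Pools whose RUNTIMECAP could cancel a long-running statement
SELECT name, runtimecap
FROM v_catalog.resource_pools
WHERE runtimecap IS NOT NULL;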
While this doesn't directly solve your problem, it hopefully gives you some additional context and ideas for further troubleshooting. The problem is likely going to be very specific to your use case, environment and data, and could be a bug. If you can't make progress I'd suggest taking it to Vertica support, since they will be more equipped to help you make sense of this further.
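Before going that route, you could check whether the failure lines up with errors logged on the nodes named in the message; a rough sketch, assuming you can query the monitoring tables (table and column names may vary by version):
-- Recent node-level errors around the time the statement failed
SELECT event_timestamp, node_name, message
FROM v_monitor.error_messages
WHERE node_name IN ('v_default_node0005', 'v_default_node0008')
ORDER BY event_timestamp DESC
LIMIT 50;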

Merging two queries that are the same BUT their GROUP BY?

So as the title implies, I have the exact same structure for two queries but just need to change the grouping. I know I can do a union; however, to keep the script concise, is there a way to combine both groupings in one query? For example:
SELECT shtuff FROM report_tickets WHERE something = something GROUP BY companies
//Query two
SELECT shtuff FROM report_tickets WHERE something = something GROUP BY ticket_id
Can I do this?:
SELECT * FROM report_tickets WHERE something = something GROUP BY companies AND ticket_id
I know this doesn't work, but that's the concept: have the query take both groupings into account (a sketch of this follows the snippet below). ^^
I would need to add the following to line 30 of the linked code below, which may determine whether it can work or not.
SUM(IF(ROUND((SELECT difference)) < 1, 1, 0)) AS zero_to_one,
SUM(IF(ROUND((SELECT difference)) BETWEEN 1 AND 8, 1, 0)) AS one_to_eight,
SUM(IF(ROUND((SELECT difference)) BETWEEN 8 AND 24, 1, 0)) AS eight_to_twentyfour,
SUM(IF(ROUND((SELECT difference)) > 24, 1, 0)) AS over_twentyfour,
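For what it's worth, a minimal sketch of the usual way to get both groupings out of one statement is a UNION ALL with a label column (column names taken from the snippets above; the aggregate is a placeholder):
-- One result set, two groupings, tagged so the rows can be told apart
SELECT 'by_company' AS grouped_by, companies AS group_value, COUNT(*) AS ticket_count
FROM report_tickets
WHERE something = something
GROUP BY companies
UNION ALL
SELECT 'by_ticket', ticket_id, COUNT(*)
FROM report_tickets
WHERE something = something
GROUP BY ticket_id;
By contrast, GROUP BY companies, ticket_id would return one row per company/ticket combination, which is a different result; GROUPING SETS would produce both groupings in one pass, but only on databases that support it.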

What does Google BigQuery's string "t0" mean?

I'm trying to understand Google BigQuery and I've seen this in a query: AS t0
I also see t0 attached to some metrics or dimensions, like this: t0.postId
Here is the full query I'm trying to understand:
SELECT t0.Author, COUNT(DISTINCT t0.postId, 50000) AS t0.calc_FPB538 FROM (SELECT
MAX(IF (hits.customDimensions.index = 10, hits.customDimensions.value, NULL)) WITHIN RECORD AS postId,
date(MAX(IF (hits.customDimensions.index = 4, hits.customDimensions.value, NULL))) WITHIN RECORD AS Datepublished,
MAX(IF (hits.customDimensions.index = 1, hits.customDimensions.value, NULL)) WITHIN RECORD AS Country,
MAX(IF (hits.customDimensions.index = 7, hits.customDimensions.value, NULL)) WITHIN RECORD AS Author,
FROM
[My_data.ga_sessions_20161104]) AS t0
WHERE (STRFTIME_UTC_USEC(TIMESTAMP_TO_USEC(TIMESTAMP(STRING(t0.Datepublished))), '%Y%m%d') >= '20161102'
  AND STRFTIME_UTC_USEC(TIMESTAMP_TO_USEC(TIMESTAMP(STRING(t0.Datepublished))), '%Y%m%d') <= '20161108')
GROUP EACH BY t0.Author
ORDER BY t0.calc_FPB538 DESC
What does it mean, and how should I use it?
Thanks.
I think you really need to find a tutorial on basic SQL/query terms and methods, but in general (and I'm going to use general terms like object, since this applies whether it's a table or not), when you see syntax like this:
[My_data.ga_sessions_20161104]) AS t0
You are saying: look at this object/table [My_data.ga_sessions_20161104] and give it a label of t0 so I can reference columns/data points on that object. Then when you later see things like t0.postId you know that you are referencing [My_data.ga_sessions_20161104]. This way, if you reference another similar table that has a data point/column of postId, both you and the engine running the query know what the heck you are talking about.
You can also label columns/data points, as you see in your query with COUNT(DISTINCT t0.postId, 50000) AS t0.calc_FPB538. This says: count the distinct postId values and label the result t0.calc_FPB538, because I will want to reference it as such later (or you just like your results to have specific names).
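A stripped-down illustration of both kinds of aliasing (the table and column names here are hypothetical, and the syntax is generic SQL rather than BigQuery-specific):
-- t0 labels the derived table; postHits labels the aggregate column
SELECT t0.postId,
       COUNT(*) AS postHits
FROM (SELECT postId FROM some_table) AS t0
GROUP BY t0.postId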

Translate this SQL to jOOQ

I'm failing to translate this SQL into working jOOQ:
SELECT * FROM product p JOIN
( SELECT * FROM
(SELECT max(product_rev_id) as maxi
FROM product_rev
GROUP BY product_id) as a
JOIN product_rev as t ON t.product_rev_id = maxi
WHERE valid_to IS NOT NULL
) as z ON z.product_id = p.product_id
WHERE p.product_id in(1,2,3,4,5);
Here's the SQL-Fiddle:
http://sqlfiddle.com/#!9/d7816/1
I tried for several hours, but it turns into a mess with all those aliases in jOOQ.
The easiest way to translate such nested queries to jOOQ is to look at jOOQ subqueries as composable, reusable elements, i.e.:
// Assuming this
import static org.jooq.impl.DSL.*;

// Then write the inner-most derived table
Table<?> a = table(
    select(max(PRODUCT_REV.PRODUCT_REV_ID).as("maxi"))
    .from(PRODUCT_REV)
    .groupBy(PRODUCT_REV.PRODUCT_ID)
).as("a");

// Then use a in the middle derived table
ProductRev t = PRODUCT_REV.as("t");
Table<?> z = table(
    select()
    .from(a)
    .join(t).on(t.PRODUCT_REV_ID.eq(a.field("maxi", PRODUCT_REV.PRODUCT_REV_ID.getType())))
    .where(t.VALID_TO.isNotNull())
).as("z");

// Finally, the outer-most query
Product p = PRODUCT.as("p");
DSL.using(configuration)
   .select()
   .from(p)
   .join(z).on(z.field(PRODUCT_REV.PRODUCT_ID).eq(p.PRODUCT_ID))
   .where(p.PRODUCT_ID.in(1, 2, 3, 4, 5))
   .fetch();
Alternative, using views
From your query, I suspect that the only really dynamic part is
WHERE p.product_id in (1, 2, 3, 4, 5)
This means that you might as well create a view in your database for the rest of the query, and query that view from your client.
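A sketch of what that view could look like, using the table and column names from the question (the view name is made up); the jOOQ side then shrinks to a plain join against the view plus the dynamic IN list:
-- Static part of the query, kept in the database
CREATE VIEW current_product_rev AS
SELECT t.*
FROM (SELECT max(product_rev_id) AS maxi
      FROM product_rev
      GROUP BY product_id) AS a
JOIN product_rev AS t ON t.product_rev_id = a.maxi
WHERE t.valid_to IS NOT NULL;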

Coldfusion: SELECT FROM QoQ WHERE X in (SELECT Y FROM QoQ) not working

In CF I am trying to do a QoQ where the rows are filtered by a list of values produced by another QoQ. Basically I'm moving some code from cftags to cfscript (not important why). In tags we have a main query and several nested ones that do some heavy lifting. I am moving this to cfscript and have the following syntax, which works:
var submissionList = new Query(dbtype="query", QoQsrcTable=ARGUMENTS.answers, sql="
SELECT submission_id FROM QoQsrcTable GROUP BY submission_id
").execute().getResult();
var submissions = new Query(dbtype="query", QoQsrcTable=ARGUMENTS.answers, sql="
SELECT * FROM QoQsrcTable WHERE submission_id IN (#submissionList.submission_id#)
").execute().getResult();
I have tried the following but it fails to work:
var submissions = new Query(dbtype="query", QoQsrcTable=ARGUMENTS.answers, sql="
SELECT * FROM QoQsrcTable WHERE submission_id IN (SELECT submission_id FROM QoQsrcTable GROUP BY submission_id)
").execute().getResult();
I think the second example should work. I've tried messing with it in various ways, but I can't seem to figure out what I am doing wrong. Maybe a nested QoQ doesn't work like that. Is there another way I can accomplish what I'm trying to do without two chunks of code, just so it's more readable and I don't have to assign variables twice?
QoQ doesn't support subqueries. That's the long and the short of it.
Docs
In ColdFusion 10 or Railo 4, you can use the groupBy function of Underscore.cfc to accomplish what you want in much less code:
_ = new Underscore();// instantiate the library
submissions = _.groupBy(arguments.answers, 'submission_id');
groupBy() returns a structure where the keys are the values of the group element (in this case, submission_id).
(Disclaimer: I wrote Underscore.cfc)