We are moving data from an Oracle database in one geography to a SQL Server database in another geography. We are noticing that the time-related columns for various objects differ between the two geographies by 2 hours:
At source: 2022/05/13 12:01:00
At target: 2022/05/13 10:01:00
I am using this SQL to extract the data from the source Oracle database:
select distinct item.pitem_id || '~'
|| TO_CHAR(paoItem.PCREATION_DATE, 'dd-Mon-YYYY HH24:MI') || '~'
|| puItemOwner.puser_id || '~'
|| pgItem.pname || '~'
|| puItemLMU.puser_id || '~'
|| TO_CHAR(paoItem.PLAST_MOD_DATE, 'dd-Mon-YYYY HH24:MI')
-- Item Information
from infodba.PITEM item
inner join infodba.PPOM_APPLICATION_OBJECT paoItem on paoItem.puid = item.puid
inner join infodba.PPOM_GROUP pgItem on pgItem.puid = paoItem.ROWNING_GROUPU
inner join infodba.PPOM_USER puItemOwner on puItemOwner.puid = paoItem.ROWNING_USERU
inner join infodba.PPOM_USER puItemLMU on puItemLMU.puid = paoItem.RLAST_MOD_USERU
where item.pitem_id in ('3204-001-0613-C');
This query gives me the correct result - 2022/05/13 12:01:00. But when I import this data into the target system, the date gets updated to 2022-05-13 10:01:00.000.
I am assuming that this 2-hour difference is due to the time zone difference between the two geographies. If so, what can I do to ensure that the data gets persisted correctly?
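For reference, here is how the time zone settings can be checked on the Oracle side (a minimal sketch using standard Oracle functions; the SQL Server side would need a similar check):
-- Compare the Oracle database and session time zones with the current timestamps
select dbtimezone, sessiontimezone, systimestamp, current_timestamp
from dual;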
Please share some ideas on which options to consider or where to look for this sort of issue.
Thanks,
Pavan.
So I have a query like this:
select
    p.identifier,
    GROUP_CONCAT(
        '[' ||
        '{"thumbnail":' || '"' || ifnull(s.thumbnail,'null') || '"' ||
        ',"title:' || '"' || s.title || '","content": [' ||
        GROUP_CONCAT(
            '{"text":' || ifnull(c.text,'null') ||
            '", "image":' || ifnull(c.image,'null') ||
            '", "caption": "' || ifnull(c.caption,'null') ||
            '"},'
        ) ||
        ']},'
    )
from pois as p
join stories as s on p.identifier = s.poiid
join content c on s.storyid = c.storyid
group by s.storyid
And I got an error:
in prepare, misuse of aggregate function GROUP_CONCAT()
To explain more clearly: I have a big object named POIS; every POI has multiple STORIES, and every STORY has multiple CONTENTS. I want to display x rows (one per POI), and inside a column have every story connected to its POI (and every content inside the stories). I need this in JSON format so I can parse the query result back into my object.
I hope my problem is clear and that you can help me.
So I changed the query to something like this:
SELECT p.identifier, (
SELECT json_group_array(json_object('id', s.storyid))
FROM stories AS s
WHERE s.poiid=p.identifier
) as stories,
(
SELECT json_group_array(json_object('id', c.contentid, 'storyId', s.storyid))
FROM content AS c
JOIN stories AS s ON c.storyid=s.storyid
WHERE s.poiid=p.identifier
) as contents
FROM pois AS p
GROUP BY p.identifier
This is my result:
(screenshot of the result set: one row per POI, with the stories and contents arrays in two separate columns)
But I would like to put the 3rd column inside the second (every POI has multiple stories, and every story has one or multiple contents, so the contents should be nested inside their stories).
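What I am after is something like this nesting (a rough sketch using SQLite's json1 functions, untested; json() wraps the inner subquery so its array is embedded as JSON rather than as an escaped string):
-- Sketch: nest each story's contents inside the story object
SELECT p.identifier,
       (SELECT json_group_array(
                   json_object(
                       'id', s.storyid,
                       'thumbnail', s.thumbnail,
                       'title', s.title,
                       'content', json((SELECT json_group_array(
                                                   json_object('text', c.text,
                                                               'image', c.image,
                                                               'caption', c.caption))
                                          FROM content AS c
                                         WHERE c.storyid = s.storyid))))
          FROM stories AS s
         WHERE s.poiid = p.identifier) AS stories
  FROM pois AS p;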
I have a script that runs a SAS passthrough query connecting to an Oracle database. This was part of a cronjob that runs on a Unix server and has had no issues for years. In the past few weeks, however, the job has started hanging on this one particular step - according to the logs it used to take about 15 seconds to run, but it now runs indefinitely until we have to kill the job. There are no associated errors or warnings in the log - the job creates a lockfile and just runs indefinitely until we kill it.
The step where the job hangs up is pasted in below. There are two macro variables &start_dt and &end_dt, which represent the date range the job is pulling sales data for.
While investigating, we tried a few different approaches, and were able to get this step to run successfully, in its usual time, by changing any of three things:
- running the script through an Enterprise Guide client connected to the same server, as opposed to running the script via a CLI / shell script
- changing the library the step writes to to work, instead of writing the dataset to the salesdata library (as seen in the code below)
- changing the dates to hardcoded values instead of macro variables
As for the date variables themselves, they are quoted strings in a DD-Mon-YY style format, e.g. &start_dt = '08-May-22', &end_dt = '14-May-22'. Initially I suspected the issue was related to the way the dates are structured, since this is an older project that I inherited, but I am confused as to why the job ran without issue for so long up until a few weeks ago, even with these oddly formatted date macro vars.
The other possibility I considered was that some resource on the Unix server was getting locked when the job reached this step, potentially from some hanging job or a conflict with an older file, such as a log or a previous SAS dataset.
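One way to check that locking theory on the Oracle side while the job is hanging is to look for blocked sessions (a sketch against Oracle's standard v$session view; requires the appropriate privileges):
-- Sessions currently waiting on another session
select sid, serial#, status, event, blocking_session
  from v$session
 where blocking_session is not null;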
The problematic version of the step is pasted below:
PROC SQL;
connect to oracle(user=&uid pass=&pwd path='#dw');
create table salesdata.shipped as
Select
SKN_NBR,
COLOR_NBR,
SIZE_NBR,
SALESDIV_KEY,
ORDER_LINE_QTY as QUANTITY label="SUM(ORDER_LINE_QTY)",
EX1 as DOLLARS label="SUM(EX1)" from connection to oracle(
select
A1."SKN_NBR",
A1."COLOR_NBR",
A1."SIZE_NBR",
decode(A1."SALESDIV_KEY", 'ILB', 'IQ',
'IQ ', 'IQ',
'IQC', 'IQ',
'ISQ', 'IQ',
'IWC', 'IQ',
'QVC'),
SUM(A1."ORDER_LINE_QTY"),
SUM(A1."ORDER_LINE_QTY" * A1."ORDER_LINE_PRICE_AMT")
from DW.ORDERLINE A1, DISTINCT_SKN A2, DW.ORDERSTATUSTYPE A3
where
A2."SKN_NBR" = A1."SKN_NBR" AND
A1."CURRENT_STATUS_DATE" Between &start_dt and &end_dt AND
A1."ORDERLINESTATUS_KEY" = A3."ORDERLINESTATUS_KEY" AND
A3."ORDERSTATUS_SHIPPED" = 'Y' AND
A1."ORDER_LINE_PRICE_AMT" > 0
group by A1."SKN_NBR",
A1."COLOR_NBR",
A1."SIZE_NBR",
decode(A1."SALESDIV_KEY", 'ILB', 'IQ',
'IQ ', 'IQ',
'IQC', 'IQ',
'ISQ', 'IQ',
'IWC', 'IQ',
'QVC')
order by A1."SKN_NBR",
A1."COLOR_NBR",
A1."SIZE_NBR",
decode(A1."SALESDIV_KEY", 'ILB', 'IQ',
'IQ ', 'IQ',
'IQC', 'IQ',
'ISQ', 'IQ',
'IWC', 'IQ',
'QVC')
) as t1(SKN_NBR, COLOR_NBR, SIZE_NBR, SALESDIV_KEY, ORDER_LINE_QTY, EX1)
;
disconnect from oracle;
quit;
The style you need to use for date constants in Oracle depends on your NLS settings in Oracle. But normally you can use expressions like one of these:
date '2022-05-14'
'2022-05-14'
You seem to claim that on your system you can use values like
'14-May-22'
(how does Oracle know what century you mean by that?).
Note that in Oracle it is important to use single quotes around constants as it interprets strings in double quotes as object names.
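For illustration, here are a few unambiguous ways to write such a date (a small sketch, runnable against dual; the DD-Mon-RR mask shows how Oracle windows two-digit years):
-- Explicit format masks and the ANSI date literal avoid NLS ambiguity
select to_date('14-May-22',   'DD-Mon-RR')   as rr_windowed,   -- RR guesses the century
       to_date('14-May-2022', 'DD-Mon-YYYY') as explicit_year,
       date '2022-05-14'                     as ansi_literal
from dual;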
So if you have a date value in SAS, just make sure to make the macro variable value look like what Oracle wants.
For example, to set ENDDT to today's date you could use:
data _null_;
call symputx('enddt',quote(put(today(),date11.),"'"));
run;
Which would be the same as:
%let enddt='17-MAY-2022';
So @Tom's answer was helpful - it appears that our DBAs updated some settings a few weeks back that affected how stringent Oracle is about which date formats are accepted.
For what it's worth, the date macro vars were being constructed on the fly using a clunky data step that reads off of a date key dataset, shown below.
You'll notice that the last piece of the date string, for both variables, is put together using the year2. format, so just the last two digits of the year.
To @Tom's point, this was apparently confusing Oracle about which century the date is in, so the job got hung up.
data dateparm;
set salesdata.week_end_date;
start = "'" || put(day(week_end_date - 6), z2.) || '-' || put(week_end_date - 6, monname3.) || '-' ||
put(week_end_date - 6, year2.) || "'";
end = "'" || put(day(week_end_date), z2.) || '-' || put(week_end_date, monname3.) || '-' ||
put(week_end_date, year2.) || "'";
call symput('start_dt', start);
call symput('end_dt', end);
run;
Once I changed this step to use the year4. format for the last piece, the job ran fine without incident from both Unix and Enterprise Guide. Example below:
data dateparm;
set npdd.week_end_date;
start = "'" || put(day(week_end_date - 6), z2.) || '-' || put(week_end_date - 6, monname3.) || '-' ||
put(week_end_date - 6, year4.) || "'";
end = "'" || put(day(week_end_date), z2.) || '-' || put(week_end_date, monname3.) || '-' ||
put(week_end_date, year4.) || "'";
call symput('start_dt', start);
call symput('end_dt', end);
run;
I have the following SQL query:
SELECT att.prod_name, att.prod_group, att.prod_size, obj.physical_id, obj.variant, max(obj.last_updated_date)
FROM Table1 obj
join Table2 att
on obj.prod_name = att.prod_name
where
obj.access_state = 'cr'
AND obj.variant in ('Front')
AND obj.size_code in ('LARGE')
AND att.prod_name in ('prod_1','prod_2')
group by 1,2,3,4,5
The output currently looks like this:
prod_name   prod_group   prod_size      physical_id     variant   max
prod_1      1            Large - 2 Oz   jnjnj3lnzhmui   Front     8/8/2020
prod_1      1            Large - 2 Oz   pokoknujyguin   Front     6/8/2020
prod_2      1            Large - 3 Oz   oijwu8ygtoiim   Front     4/2/2018
prod_2      1            Large - 3 Oz   ytfbeuxxxx2u2   Front     7/2/2018
prod_2      1            Large - 3 Oz   rtyferqdctyyx   Front     4/4/2020
How can I convert this to nested JSON in the query itself?
Required output (variant and max date can be ignored):
{"prod_name":"prod_1" , "prod_group":"1", "prod_size":"Large - 2 Oz", "physical_id":{"physical_id_1":"jnjnj3lnzhmui", "physical_id2" : "pokoknujyguin"}}
{"prod_name":"prod_2" , "prod_group":"1", "prod_size":"Large - 3 Oz", "physical_id":{"physical_id_1":"oijwu8ygtoiim", "physical_id2" : "ytfbeuxxxx2u2", "physical_id3" : "rtyferqdctyyx"}}
As stated, there aren't built-in JSON statements in Redshift like there are in BigQuery (TO_JSON()) or SQL Server (FOR JSON).
So you are stuck with either writing a conversion yourself in a language like Java or Python, or writing a bunch of string manipulation code to "fake it" directly in Redshift.
Something akin to:
SELECT CHR(123) || '"prod_name"' || ':' || '"' || nvl(prod_name,'') || '"' || ',' ||
       '"prod_group"' || ':' || '"' || nvl(prod_group,'') || '"' || ',' ||
       '"prod_size"' || ':' || '"' || nvl(prod_size,'') || '"' || CHR(125)
FROM TABLE1
The nvl protects you from null values if present. The nesting aspects get a little harder, but with enough patience you should get there.
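For the nested physical_id object, one possible approach (a sketch reusing the question's table and column names, untested) is to number the ids per product with ROW_NUMBER() and aggregate them with LISTAGG:
-- Number each physical_id within its product, then fold them into one object
WITH ids AS (
    SELECT att.prod_name, att.prod_group, att.prod_size, obj.physical_id,
           ROW_NUMBER() OVER (PARTITION BY att.prod_name
                              ORDER BY obj.physical_id) AS seq
    FROM Table1 obj
    JOIN Table2 att ON obj.prod_name = att.prod_name
    WHERE obj.access_state = 'cr'
      AND obj.variant IN ('Front')
      AND obj.size_code IN ('LARGE')
      AND att.prod_name IN ('prod_1', 'prod_2')
)
SELECT CHR(123)
       || '"prod_name": "'  || prod_name  || '", '
       || '"prod_group": "' || prod_group || '", '
       || '"prod_size": "'  || prod_size  || '", '
       || '"physical_id": ' || CHR(123)
       || LISTAGG('"physical_id_' || seq::varchar || '": "' || physical_id || '"', ', ')
          WITHIN GROUP (ORDER BY seq)
       || CHR(125) || CHR(125) AS json_row
FROM ids
GROUP BY prod_name, prod_group, prod_size;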
Good luck!
I have gone over the SQL for an hour and can't find why the error ("missing expression") is being raised. I have checked all the basic reasons this error can occur and found nothing. I'm suspicious of the CASE expression, but it appears to be correct. Can anyone spot the problem or point me in a direction? Thanks
INSERT INTO RPT_HOUSEHLDBATCH
(CUSTOMERKEY,HOUSEHOLDNBR,CUSTOMERTYPE,LASTNAME,FIRSTNAME,ADDRNBR,AddressLine1,AddressLine2,AddressLine3,
CITYNAME, STATECD, ZIPCD, SCORE, DATECREATED, RUNDATE, TYPECD, PREVIOUSHHLDNBR)
SELECT CustomerKey,' || in_HHNbr || ',
CASE SUBSTR(CUSTOMERKEY,1,1)
WHEN ''P'' THEN ''I''
WHEN ''O'' THEN ''B''
END CASE,
a.LastName,
a.FirstName,
AddrNbr,
AddressLine1,
AddressLine2,
AddressLine3,
Cityname,
StateCd,
ZipCd,
2, b.AddDate, SYSDATE, ''' || in_NewUpd || ''', HouseHoldNbr
FROM rpt_HouseHldBatchwrk a
JOIN PERS b
ON SUBSTR(a.CUSTOMERKEY,2) = b.PersNbr
WHERE CUSTOMERKEY = ''P' || in_PersNbr || '''
UNION
SELECT CustomerKey,' || in_HHNbr || ',
CASE SUBSTR(CUSTOMERKEY,1,1)
WHEN ''P'' THEN ''I''
WHEN ''O'' THEN ''B''
END CASE,
a.LastName,
a.FirstName,
AddrNbr,
AddressLine1,
AddressLine2,
AddressLine3,
Cityname,
StateCd,
ZipCd,
2, b.AddDate, SYSDATE, ''' || in_NewUpd || ''', HouseHoldNbr
FROM rpt_HouseHldBatchwrk a
JOIN ORG b
ON SUBSTR(a.CUSTOMERKEY,2) = b.OrgNbr
WHERE CUSTOMERKEY = ''O' || in_OrgNbr || '''
A good strategy to debug such a statement is to pare it down, as @OldProgrammer suggested.
In your case, I'd try to ignore the INSERT and get the SELECT running first.
As it's a UNION of two SELECTs, I'd split them into separate statements, too.
The in_NewUpd and in_PersNbr look strange, just like parameters in a procedure. I'd replace them with fixed known values, like WHERE CUSTOMERKEY LIKE ''P1234''
And please, please don't store SQL in a string variable and execute it based on a condition. The better approach is to use bind placeholders (like :1 or ?) in the SQL instead of string concatenation with ||. You'll flood the cursor cache if you do it wrong.
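For instance, a simplified sketch of the bind-variable approach in PL/SQL (the input values are hypothetical; the table and column names come from the question):
-- Bind variables instead of concatenating values into the SQL text
DECLARE
    in_HHNbr   NUMBER       := 42;      -- hypothetical inputs
    in_PersNbr VARCHAR2(10) := '1234';
BEGIN
    EXECUTE IMMEDIATE
        'INSERT INTO rpt_househldbatch (customerkey, householdnbr)
         SELECT customerkey, :hh
           FROM rpt_househldbatchwrk
          WHERE customerkey = :ck'
        USING in_HHNbr, 'P' || in_PersNbr;
END;
/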
Turns out the in_HHNbr in some cases was coming in as null. Concatenating a null into the dynamic SQL string leaves a gap, which throws the missing expression error.
As for how the code is structured: I inherited this code and can't make major changes. I was sent in to fix the bug.
Does anyone know how to improve the below Oracle SQL query, which has multiple IS NOT NULL predicates combined with the OR operator:
select count(1)
from s_srv_req sr, s_evt_act act, s_bu bu
where sr.row_id = act.sra_sr_id(+)
and sr.bu_id = bu.row_id
and sr.last_upd > to_date('31-DEC-2013','DD-MON-YYYY')
and (X_REASON_CODE1 is not null
     OR X_REASON_CODE2 is not null
     OR X_CONCERN_CODE1 is not null
     OR X_CONCERN_CODE2 is not null
     OR X_COMPONENT_CODE is not null)
The purpose here is to fetch all records where at least one of the code columns is not null.
Note: this query is taking a long time, and I cannot make progress with such slow queries. Thanks in advance.
You should use the COALESCE function:
select count(1)
from s_srv_req sr, s_evt_act act, s_bu bu
where sr.row_id = act.sra_sr_id(+)
and sr.bu_id = bu.row_id
and sr.last_upd > to_date('31-DEC-2013','DD-MON-YYYY')
and COALESCE(X_REASON_CODE1, X_REASON_CODE2, X_CONCERN_CODE1, X_CONCERN_CODE2, X_COMPONENT_CODE) is not null
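A side benefit of collapsing the ORs into one COALESCE expression is that the predicate becomes indexable: Oracle b-tree indexes do not store entries whose key is entirely null, so a function-based index on the COALESCE contains exactly the rows you want. This assumes all five code columns live on the same table (they are unqualified in the question), e.g.:
-- Hypothetical function-based index matching the COALESCE predicate
create index s_srv_req_any_code_ix on s_srv_req (
    coalesce(x_reason_code1, x_reason_code2, x_concern_code1,
             x_concern_code2, x_component_code)
);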
Have you tried using an excluding method?
Take the total table and subtract the records where all of those columns are null at the same time.
Here is some pseudocode:
Method 1, using MINUS:
select * from table_a
minus
select * from table_a
where X_REASON_CODE1 is null
  and X_REASON_CODE2 is null
  and X_CONCERN_CODE1 is null
  and X_CONCERN_CODE2 is null
  and X_COMPONENT_CODE is null
Method 2, using NOT EXISTS or NOT IN (id here stands for the table's key column):
select * from table_a a
where a.id not in (
    select b.id from table_a b
    where b.X_REASON_CODE1 is null
      and b.X_REASON_CODE2 is null
      and b.X_CONCERN_CODE1 is null
      and b.X_CONCERN_CODE2 is null
      and b.X_COMPONENT_CODE is null
)