Quick one, I hope... I am struggling with an Access query.
I need to copy values from Table A into Table B ONLY if they don't already exist in Table B, using the MTR# field to determine whether a row already exists.
The query will also need to increment tB.ImgRefNum by one from the previous record when inserting.
I need to copy
tA.MTR# to tB.MTR#
tA.MTRF1 to tB.Item
tA.MTRF2 to tB.PONum
tA.MTRF3 to tB.DateRecv (BUT need to cast from text YYYYMMDD to a date format)
Table A
TRX Number (number)
MTR# (number)
MTRF1 (text)
MTRF2 (text)
MTRF3 (text) *A date is stored here but textually as YYYYMMDD
Table B
ImgRefNum (number)
MTR# (number)
Item (text)
W (number)
L (number)
Vendor (text)
PONum (number)
DateRecv (date)
Anyone give me a hand?
You can use the following SQL query (don't know exactly what part you were struggling with, so can't provide a specific explanation):
INSERT INTO tB (tB.ImgRefNum, tB.MTR#, tB.Item, tB.PONum, tB.DateRecv)
SELECT (SELECT Max(tB.ImgRefNum) + 1 FROM tB) AS NewRef, tA.MTR#, tA.MTRF1, tA.MTRF2, DateSerial(CInt(Mid(tA.MTRF3, 1, 4)), CInt(Mid(tA.MTRF3, 5, 2)), CInt(Mid(tA.MTRF3, 7, 2)))
FROM tA
WHERE (SELECT Count(s.MTR#) FROM tB AS s WHERE s.MTR# = tA.MTR#) = 0
Obviously, it is essential that MTRF3 always contains a valid date string exactly formatted YYYYMMDD, else you will run into errors.
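If you cannot guarantee that, one option (just a sketch, assuming the Access Len and IsNumeric functions are usable in your query) is to skip malformed rows in the WHERE clause:
WHERE Len(tA.MTRF3) = 8 AND IsNumeric(tA.MTRF3)
  AND (SELECT Count(s.MTR#) FROM tB AS s WHERE s.MTR# = tA.MTR#) = 0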
Simply use one of the familiar NOT EXISTS, LEFT JOIN ... IS NULL, or NOT IN patterns, with a little wrangling for your date conversion and the running max. The version below uses the NOT EXISTS approach and assumes month-first MM/DD/YYYY dates (US locale):
INSERT INTO tB ([ImgRefNum], [MTR#], [Item], [PONum], [DateRecv])
SELECT (SELECT Max(sub.[ImgRefNum]) FROM tB sub) + 1,
tA.[MTR#], tA.[MTRF1], tA.[MTRF2],
CDate(Mid(tA.[MTRF3], 5, 2) & "/" & Mid(tA.[MTRF3], 7, 2) & "/" &
LEFT(tA.[MTRF3], 4))
FROM tA
WHERE NOT EXISTS
(SELECT 1 FROM tB sub WHERE sub.[MTR#] = tA.[MTR#])
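For completeness, a sketch of the LEFT JOIN ... IS NULL variant mentioned above, with the same date-format assumption. Note that with either version the Max()+1 subquery is typically evaluated once for the whole statement, so several rows inserted in one run can end up with the same ImgRefNum:
INSERT INTO tB ([ImgRefNum], [MTR#], [Item], [PONum], [DateRecv])
SELECT (SELECT Max(sub.[ImgRefNum]) FROM tB sub) + 1,
       tA.[MTR#], tA.[MTRF1], tA.[MTRF2],
       CDate(Mid(tA.[MTRF3], 5, 2) & "/" & Mid(tA.[MTRF3], 7, 2) & "/" & Left(tA.[MTRF3], 4))
FROM tA LEFT JOIN tB ON tA.[MTR#] = tB.[MTR#]
WHERE tB.[MTR#] IS NULL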
I have a query that I am running on AWS Athena that should return all the filenames that are not contained in the second table. I am basically trying to find all the filenames that are not in the ejpos landing table.
The one table looks like this (item sales):
origin_file                                run_id
/datarite/ejpos/8023/20220706/filename1    8035
/datarite/ejpos/8023/20220706/filename2    8035
/datarite/ejpos/8023/20220706/filename3    8035
The other table looks like this (ejpos_files_landing):
filename
filename1
filename2
filename3
filename4
They don't have the same number of rows, hence I am trying to find the filenames that are in ejpos_files_landing but not in the item sales table.
I get this error when I run:
mismatched input 'from'. Expecting: ',', <expression>
The query is here:
SELECT trim("/datarite/ejpos/8023/20220706/" from "validated"."datarite_ejpos_itemsale" where
run_id = '8035') as origin_file,
FROM "validated"."datarite_ejpos_itemsale"
LEFT JOIN "landing"."ejpos_landing_files" ON "landing"."ejpos_landing_files".filename =
"validated"."datarite_ejpos_itemsale".origin_file
WHERE "landing"."ejpos_landing_files".filename IS NULL;
The expected result would be:
|filename4|
Because it is not in the other table
Can anyone assist?
There is a lot wrong in your query, given the example data and the stated goals.
trim("/datarite/ejpos/8023/20220706/" from "validated"."datarite_ejpos_itemsale" where run_id = '8035') as origin_file is not valid SQL.
ON "landing"."ejpos_landing_files".filename = "validated"."datarite_ejpos_itemsale".origin_file will not work because origin_file is prefixed with the directory path. You can use strpos if the filename should appear only once in origin_file.
Your join and filtering condition are built to find items present in datarite_ejpos_itemsale and missing from ejpos_landing_files, while you state the reverse is needed.
There is also the extra comma mentioned in the comments.
Try this:
-- sample data
WITH item_sales(origin_file, run_id) AS (
VALUES ('/datarite/ejpos/8023/20220706/filename1', 8035),
('/datarite/ejpos/8023/20220706/filename2', 8035),
('/datarite/ejpos/8023/20220706/filename3', 8035),
('/datarite/ejpos/8023/20220706/filename4', 8036)
),
ejpos_files_landing(filename) as(
VALUES ('filename1'),
('filename2'),
('filename3'),
('filename4')
)
-- query
select filename
from ejpos_files_landing l
left outer join item_sales s -- reverse the join
on strpos(s.origin_file, l.filename) >= 1 -- assuming that filename should be present only one time in the string
and s.run_id = 8035 -- if you need to filter out run id
where s.origin_file is null
Output:
filename
filename4
Alternative approach you can try:
-- query
select filename
from ejpos_files_landing l
where filename not in (
select element_at(split(origin_file, '/'), -1) -- split by '/' and get last
from item_sales
where run_id = 8035
)
I am working with the JSON_VALUE function and I need a kind of dynamic query
I have a column called Criteria and sometimes it has 1 value but sometimes it has 2 or 3 values, like:
Example of 1 value: $.IRId = 1
Example of 2 values: $.IROwner = 'james.jonson#domain.com' AND DaysTillDue < 10
So, in order to read the values from a JSON column using the Criteria column, I am using this logic:
DECLARE @CriteriaValue INT
       ,@CriteriaStatement VARCHAR(50)
SELECT @CriteriaValue = SUBSTRING(Criteria, CHARINDEX('=', Criteria) + 1, LEN(Criteria)) FROM #SubscriptionCriteria;
SELECT @CriteriaStatement = SUBSTRING(Criteria, 0, CHARINDEX('=', Criteria)) FROM #SubscriptionCriteria;
SELECT @CriteriaValue, @CriteriaStatement
SELECT *
FROM [SAAS].[ObjectEvent]
WHERE
JSON_VALUE(JSONMessageData, @CriteriaStatement) = @CriteriaValue
That SQL code only handles a Criteria column with a single value ($.IRId = 1), but the idea is to have something that reads the criteria no matter how many filters there are and applies them all in the final query. The idea I have is that the query would look like this:
SELECT *
FROM [SAAS].[ObjectEvent]
WHERE
JSON_VALUE(JSONMessageData, @CriteriaStatement1) = @CriteriaValue1 AND JSON_VALUE(JSONMessageData, @CriteriaStatement2) = @CriteriaValue2 AND
JSON_VALUE(JSONMessageData, @CriteriaStatement3) = @CriteriaValue3
etc.
Any suggestion?
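One possible direction, since the number of criteria varies, is to build the WHERE clause as a string and run it with sp_executesql. This is only a rough sketch: it assumes every criterion has the form $.Path = value, that multiple criteria are joined with ' AND ', that '|' never appears inside a criterion, and that STRING_SPLIT and STRING_AGG are available (SQL Server 2016/2017 or later):
DECLARE @where NVARCHAR(MAX), @sql NVARCHAR(MAX);
-- turn each "$.Path = value" fragment into "JSON_VALUE(JSONMessageData, '$.Path') = value";
-- string values in Criteria already carry their own quotes, so the right-hand side is kept as written
SELECT @where = STRING_AGG(
           'JSON_VALUE(JSONMessageData, '''
           + LTRIM(RTRIM(LEFT(s.value, CHARINDEX('=', s.value) - 1)))
           + ''') = '
           + LTRIM(RTRIM(SUBSTRING(s.value, CHARINDEX('=', s.value) + 1, LEN(s.value)))),
           ' AND ')
FROM #SubscriptionCriteria c
CROSS APPLY STRING_SPLIT(REPLACE(c.Criteria, ' AND ', '|'), '|') s;
SET @sql = N'SELECT * FROM [SAAS].[ObjectEvent] WHERE ' + @where;
EXEC sp_executesql @sql;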
I have a query like this
select
t.tiid, t.employeeid, t.remarks,
dd.DocID, dd.Document, dd.DocuName
from
ti t
inner join
History cth on cth.tiid = t.tiid
inner join
Downloads dd on dd.DocID = cth.DocID
My data in the tables is like this:
History:
DocID    DocuName
1,2      abc.docx,def.docx
Downloads:
DocID    DocuName    Document
1        abc.docx    x3400000efg..
2        def.docx    xc445560000...
but when I execute this query, it shows an error:
Conversion failed when converting the varchar value '1,2' to data type int.
The DocID in History is multiple DocIDs combined with commas, so you cannot compare the values directly (one value vs. multiple values).
You can check whether the comma-separated list contains the specific value using CHARINDEX.
To make sure the substring matches a complete value, you need a delimiter around each value, otherwise you can get wrong results.
For example:
CHARINDEX('1', '12,2,3') returns 1, but in fact there is no value 1 in the list; with delimiters, CHARINDEX(',1,', ',12,2,3,') correctly returns 0.
select
t.tiid,
t.employeeid,
t.remarks,
dd.DocID,
dd.Document,
dd.DocuName
from ti t
inner join History cth on cth.tiid=t.tiid
inner join Downloads dd on CHARINDEX(','+LTRIM(dd.DocID)+',',','+cth.DocID+',')>0
As the error says, you are trying to compare a string with an int. You need to convert the int DocID to a string and check whether it's present in the comma-separated DocID. Something like:
SELECT t.tiid,
t.employeeid,
t.remarks,
dd.DocID,
dd.Document,
dd.DocuName
FROM ti t
INNER JOIN History cth ON cth.tiid=t.tiid
INNER JOIN Downloads dd ON CHARINDEX(',' + CAST(dd.DocID AS VARCHAR(10)) + ',',',' + cth.DocID + ',')>0
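If you are on SQL Server 2016 or later, another option (a sketch, assuming the History.DocID lists are plain comma-separated integers) is to split the list with STRING_SPLIT instead of the delimiter trick:
SELECT t.tiid,
       t.employeeid,
       t.remarks,
       dd.DocID,
       dd.Document,
       dd.DocuName
FROM ti t
INNER JOIN History cth ON cth.tiid = t.tiid
CROSS APPLY STRING_SPLIT(cth.DocID, ',') s
INNER JOIN Downloads dd ON dd.DocID = CAST(s.value AS INT)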
http://sqlfiddle.com/#!9/b98ea/1 (Sample Table)
I have a table with the following fields:
transfer_id
src_path
DH_USER_ID
email
status_state
ip_address
The src_path field contains a couple of duplicate filename values but with a different folder name at the beginning of the string.
Example:
191915/NequeVestibulumEget.mp3
/191918/NequeVestibulumEget.mp3
191920/NequeVestibulumEget.mp3
I am trying to do the following:
Set status_state field to 'canceled' for all the duplicate filenames within (src_path) field except for one.
I want the results to look like this:
http://sqlfiddle.com/#!9/5e65f/2
*I apologize in advance for being a complete noob, but I am taking SQL at college and I need help.
SQL Fiddle Demo
fix_os_name: fix the Windows path string to Unix format.
file_name: split the path on '/', and use char_length to pick the last piece.
drank: create a sequence number for each filename, so a unique filename only gets rn = 1, while duplicates also get 2, 3, ...
UPDATE: if a row has rn > 1, it is a duplicate.
Take note the syntax highlighting is wrong, but the code runs OK.
with fix_os_name as (
SELECT transfer_id, replace(src_path,'\','/') src_path,
DH_USER_ID, email, status_state, ip_address
FROM priority_transfer p
),
file_name as (
SELECT
fon.*,
split_part(src_path,
'/',
char_length(src_path) - char_length(replace(src_path,'/','')) + 1
) sfile
FROM fix_os_name fon
),
drank as (
SELECT
f.*,
row_number() over (partition by sfile order by sfile) rn
from file_name f
)
UPDATE priority_transfer p
SET status_state = 'canceled'
WHERE EXISTS ( SELECT *
FROM drank d
WHERE d.transfer_id = p.transfer_id
AND d.rn > 1);
ADD: one row per filename is left untouched.
Use the regexp_matches function to separate the file name from the directory.
From there you can use DISTINCT to build a table with unique values for the filename.
select
regexp_matches(src_path, '[a-zA-Z.0-9]*$') , *
from priority_transfer
;
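Building on that, a rough sketch of the full update, assuming PostgreSQL, that everything after the last '/' is the filename, and that you want to keep the row with the lowest transfer_id for each filename:
-- keep the first transfer_id per filename, cancel the rest
UPDATE priority_transfer p
SET status_state = 'canceled'
WHERE p.transfer_id NOT IN (
    SELECT DISTINCT ON (substring(src_path from '[a-zA-Z.0-9]*$')) transfer_id
    FROM priority_transfer
    ORDER BY substring(src_path from '[a-zA-Z.0-9]*$'), transfer_id
);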
Good morning everyone!
Below is a piece of code I stitched together: I used a CTE to grab the records (data) from a linked table and then convert strings to dates, then use the MERGE statement to get the data into a local table.
I am having a problem with the column (field) LAST_RACE_DATE: this field allows NULL and is not required, but it does not update with my current setup. What I am trying to accomplish is for this field to populate when data is entered, but also to update, meaning it should also update to NULL.
So if the field has a specific date and a new date is entered in the remote database, this field should update as well; and if the data is deleted in the back end, the local table's value for this field should be removed too.
WITH CTE AS(
SELECT MEMBER_ID
,[MEMBER_DATE] = MAX(CONVERT(DATE, MEMBER_DATE))
,RACE_DATE = MAX(CONVERT(DATE, RACE_DATE))
,LAST_RACE_DATE = MAX(CONVERT(DATE, LAST_RACE_DATE))
FROM [EXAMPLE].[dbo].[LINKED_MEMBER_DATA]
WHERE (MEMBER_DATE IS NOT NULL) AND (ISDATE(MEMBER_DATE)<> 0) AND (RACE_DATE IS NOT NULL) AND (ISDATE(RACE_DATE)<> 0)
AND (LAST_RACE_DATE IS NULL) OR (ISDATE(LAST_RACE_DATE)<> 0)
GROUP BY MEMBER_ID)
MERGE dbo.LINKED_MEMBER_DATA AS Target
USING (SELECT
MEMBER_ID, MEMBER_DATE, RACE_DATE, LAST_RACE_DATE
FROM CTE
GROUP BY MEMBER_ID, RACE_DATE, LAST_RACE_DATE)AS SOURCE ON (Target.MEMBER_ID = SOURCE.MEMBER_ID)
WHEN MATCHED AND
(Target.MEMBER_DATE) <> (SOURCE.MEMBER_DATE)
OR (Target.RACE_DATE) <> (SOURCE.RACE_DATE)
OR ISNULL(TARGET.LAST_RACE_DATE , Target.LAST_RACE_DATE) <> ISNULL(SOURCE.LAST_RACE_DATE, SOURCE.LAST_RACE_DATE)
THEN UPDATE SET
Target.MEMBER_DATE = SOURCE.MEMBER_DATE
,Target.RACE_DATE = SOURCE.RACE_DATE
,Target.LAST_RACE_DATE = SOURCE.LAST_RACE_DATE
WHEN NOT MATCHED BY TARGET THEN
INSERT(
MEMBER_ID, MEMBER_DATE, RACE_DATE, LAST_RACE_DATE)
VALUES (Source.MEMBER_ID, Source.MEMBER_DATE, Source.RACE_DATE, Source.LAST_RACE_DATE);
I also tried this:
ISNULL(Target.LAST_RACE_DATE,'N/A') <> ISNULL(SOURCE.LAST_RACE_DATE,'N/A')
But it generates the error below for the date conversion:
Conversion failed when converting date and/or time from character string.
Thanks a Million!!
Your current statement is failing because the ISNULLs that you have don't do anything (if one of the values is NULL the expression will evaluate to NULL), and NULL values don't compare. Your second attempt doesn't work because ISNULL requires the data types of the two values to be the same, so you could try eg ISNULL(Target.LAST_RACE_DATE, '1970-01-01') <> ISNULL(Source.LAST_RACE_DATE, '1970-01-01').
Another option would be to simply enumerate the different cases (e.g., (Source.LAST_RACE_DATE IS NULL AND Target.LAST_RACE_DATE IS NOT NULL) OR (Source.LAST_RACE_DATE IS NOT NULL AND Target.LAST_RACE_DATE IS NULL) OR (Source.LAST_RACE_DATE <> Target.LAST_RACE_DATE)). Enumerating the different situations makes the code a bit more verbose, but it can result in better performance (whether it is measurably better really depends on how much data you are processing).
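For example, the WHEN MATCHED branch of the MERGE could spell the cases out like this (just a sketch of the predicate; the rest of the statement stays the same):
WHEN MATCHED AND
    (   Target.MEMBER_DATE <> Source.MEMBER_DATE
     OR Target.RACE_DATE <> Source.RACE_DATE
     OR (Source.LAST_RACE_DATE IS NULL AND Target.LAST_RACE_DATE IS NOT NULL)
     OR (Source.LAST_RACE_DATE IS NOT NULL AND Target.LAST_RACE_DATE IS NULL)
     OR Source.LAST_RACE_DATE <> Target.LAST_RACE_DATE)
THEN UPDATE SET
     Target.MEMBER_DATE = Source.MEMBER_DATE
    ,Target.RACE_DATE = Source.RACE_DATE
    ,Target.LAST_RACE_DATE = Source.LAST_RACE_DATE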