If condition in a Vertica SQL file

I need to execute insertions into around 10 tables. Before inserting I have to check a condition, and the condition is the same for each of the tables. Instead of repeating that condition inside every insert query, I would like to express it once as an if condition (a select query): if it is satisfied, execute the insert statements; if it is not satisfied, I don't want to execute any of the insert queries. Is there a way to write an if condition in a Vertica SQL file?

If the condition is, for example, that you only insert the data on a Sunday, try this:
a) a test table:
CREATE LOCAL TEMPORARY TABLE input(id,name)
ON COMMIT PRESERVE ROWS AS
SELECT 42,'Arthur Dent'
UNION ALL SELECT 43,'Ford Prefect'
UNION ALL SELECT 44,'Tricia McMillan'
KSAFE 0;
From this table, select with a WHERE condition that tests whether it's Sunday -- that's all, see here:
SELECT
*
FROM input
WHERE TRIM(TO_CHAR(CURRENT_DATE,'Day'))='Sunday'
;
id|name
42|Arthur Dent
43|Ford Prefect
44|Tricia McMillan
With a different value for the week day (I'm writing this on a Sunday...), you get this:
SELECT
*
FROM input
WHERE TRIM(TO_CHAR(CURRENT_DATE,'Day'))='Monday'
;
id|name
select succeeded; 0 rows fetched
I use this technique in SQL-generating SQL: depending on the circumstances, generate either a real script or an empty file, then call that script (full or empty). That implements conditional SQL execution.
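A minimal sketch of that approach in vsql (the file path and the target_table/staging_table names are my assumptions; \t and \a switch to unaligned, tuples-only output so the generated file contains nothing but SQL):
\t
\a
\o /tmp/run_inserts.sql
SELECT 'INSERT INTO target_table SELECT * FROM staging_table;'
FROM dual
WHERE TRIM(TO_CHAR(CURRENT_DATE,'Day'))='Sunday';
\o
\i /tmp/run_inserts.sql
On any other day the generated file is empty, so the \i call is a no-op.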

I know it is an old post, but I want to share how I recently solved my problem.
I need to insert into one table or another based on some condition. You first need a field or value that will serve as your search condition.
create table tmp1 (
Col1 int null
,Col2 varchar(100) null
)
--Insert values
insert into tmp1 (Col1,Col2) Values
(1,'Text1')
,(2,'Text2')
--Insert into table001
insert into table001
select
t.field1
,t.field2
,......
from table1 t
inner join tmp t2
on t2.col1 = t.ColX
where 1 = case when t2.Col2 = 'Text1' then 1 else 0 end --Search condition: when t2.Col2 = 'Text1' this evaluates to 1 = 1 and the rows are inserted; otherwise it is 1 = 0 and nothing is inserted.
--Insert into table002
insert into table002
select
t.field1
,t.field2
,......
from table2 t
inner join tmp t2
on t2.col1 = t.ColX
where 1 = case when t2.Col2 = 'Text2' then 1 else 0 end --Search condition: when t2.Col2 = 'Text2' this evaluates to 1 = 1 and the rows are inserted; otherwise it is 1 = 0 and nothing is inserted.
Or, if both inserts target the same table, you can combine them with a UNION/UNION ALL based on this, as sketched below.
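A hedged sketch of that UNION ALL variant, assuming table001 is the common target for both source tables:
insert into table001
select t.field1, t.field2
from table1 t
inner join tmp1 t2 on t2.Col1 = t.ColX
where t2.Col2 = 'Text1'
union all
select t.field1, t.field2
from table2 t
inner join tmp1 t2 on t2.Col1 = t.ColX
where t2.Col2 = 'Text2'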
Regards!

Related

Save value in local variable HANA SQL Script

I'm trying to take the value from a non-empty row and carry it down into the subsequent rows until another non-empty row appears, then carry that one down, and so on. Coming from an ABAP background, I'm not sure how to accomplish this in HANA SQL Script. Here's a picture to show what the data looks like.
Basically 'Doe, John' should be overwritten into all the empty rows until 'Doe, Jane' appears and then 'Doe, Jane' should be overwritten into empty rows until another name appears.
My idea is to store the non-empty row in a local variable, but I haven't had much success so far. Here's my code:
tempTab1 = SELECT
CASE WHEN EMPLOYEE <> ''
THEN lv_emp = EMPLOYEE
ELSE EMPLOYEE
END AS EMPLOYEE,
FROM :tempTab;
In general, the rows of a dataset are unordered until you explicitly specify an ORDER BY clause. If you observe some order, it may be a side effect and can vary. So first of all you have to explicitly create a row number column (assume its name is RECORD).
Then you should go this way:
1. Select only the rows with non-empty data in the column.
2. Use LEAD(RECORD) OVER (ORDER BY RECORD) to identify the next non-empty record number.
3. Join your source dataset to the dataset from step 2 with a BETWEEN-style condition on the RECORD field.
with a as (
select 1 as record, 'Val1' as field1 from dummy union
select 2 as record, '' as field1 from dummy union
select 3 as record, '' as field1 from dummy union
select 4 as record, 'Val2' as field1 from dummy union
select 5 as record, '' as field1 from dummy union
select 6 as record, '' as field1 from dummy union
select 7 as record, '' as field1 from dummy union
select 8 as record, 'Val3' as field1 from dummy
)
, fill_base as (
select field1, record, lead(record, 1, record) over(order by record asc) as next_record
from a
where field1 <> '' and field1 is not null
)
select
a.record
, case
when a.field1 = '' or a.field1 is null
then f.field1
else a.field1
end as field1
, a.field1 as field1_original
from a
left join fill_base as f
on a.record > f.record
and a.record < f.next_record
The performance in HANA may be bad in some cases, since it handles window functions rather poorly.
Here is another, more elegant solution with two nested window functions that does not force you to write multiple selects for each column: How to make LAG() ignore NULLS in SQL Server? A sketch of that pattern follows.
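A hedged sketch of that nested-window-function pattern, applied to the table a from above: the inner running COUNT of non-empty values builds a group id that increments at every non-empty row, and the outer MAX pulls each group's single non-empty value.
select record, max(field1) over (partition by grp) as field1_filled
from (
select record, nullif(field1, '') as field1,
count(nullif(field1, '')) over (order by record) as grp
from a
) sub;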
You can use the window aggregate function LAST_VALUE to impute the missing values.
Sample Data
CREATE TABLE sample (id integer, sort integer, value varchar(10));
INSERT INTO sample VALUES (4711, 1, 'Hello');
INSERT INTO sample VALUES (4712, 2, null);
INSERT INTO sample VALUES (4713, 3, null);
INSERT INTO sample VALUES (4714, 4, 'World');
INSERT INTO sample VALUES (4715, 5, null);
INSERT INTO sample VALUES (4716, 6, '!');
Generate a new column with imputed values
SELECT base.*, LAST_VALUE(fill.value ORDER BY fill.sort) AS value_imputed
FROM sample base
LEFT JOIN sample fill ON fill.sort <= base.sort AND fill.value IS NOT NULL
GROUP BY base.id, base.sort, base.value
ORDER BY base.id, base.sort
Result:
id|sort|value|value_imputed
4711|1|Hello|Hello
4712|2|NULL|Hello
4713|3|NULL|Hello
4714|4|World|World
4715|5|NULL|World
4716|6|!|!
Note that sort could be anything determining the order (e.g. a timestamp).

SQL Server Merge - Output returning null deleted rows as inserted

I'm using MERGE to sync data in a table, and I'm seeing behavior from SQL Server that looks wrong to me. When I OUTPUT the INSERTED.* values and rows were deleted, the MERGE command returns a row with all NULL columns for each row that was deleted.
For example, take this schema:
CREATE TABLE tbl
(
col1 INT NOT NULL,
col2 INT NOT NULL
);
I do an initial load of data, and all 4 rows are outputted as expected.
WITH data1 AS (
SELECT 1 [col1],1 [col2]
UNION ALL SELECT 2 [col1],2 [col2]
UNION ALL SELECT 3 [col1],3 [col2]
UNION ALL SELECT 4 [col1],4 [col2]
)
MERGE tbl t
USING data1 s
ON t.col1 = s.col1 AND t.col2 = s.col2
WHEN NOT MATCHED BY TARGET
THEN INSERT (col1,col2) VALUES (s.col1,s.col2)
WHEN NOT MATCHED BY SOURCE
THEN DELETE
OUTPUT INSERTED.*;
Now, say I remove 2 rows from the data I'm syncing with the table (in my CTE) and run the same MERGE: I see 2 rows of all-NULL columns returned.
WITH data1 as (
SELECT 1 [col1],1 [col2]
UNION ALL SELECT 2 [col1],2 [col2]
)
MERGE tbl t
USING data1 s
ON t.col1 = s.col1 AND t.col2 = s.col2
WHEN NOT MATCHED BY TARGET
THEN INSERT (col1,col2) VALUES (s.col1,s.col2)
WHEN NOT MATCHED BY SOURCE
THEN DELETE
OUTPUT INSERTED.*;
To me, this seems like wrong behavior because A) I didn't ask for any deleted rows and B) this makes it seem like I inserted these 2 NULL rows into my table, which I clearly did not. Can anyone shed some light on what's happening?
From the documentation:
output_clause - Returns a row for every row in target_table that is updated, inserted, or deleted, in no particular order. $action can be specified in the output clause. $action is a column of type nvarchar(10) that returns one of three values for each row: 'INSERT', 'UPDATE', or 'DELETE', according to the action that was performed on that row.
It seems that SQL Server outputs one row for every row that changed (by an insert or delete). When I specify OUTPUT INSERTED.*, I'm really only asking for the inserted data, which is NULL for the 2 rows that were deleted. If I specify OUTPUT INSERTED.col1 [InsCol1],INSERTED.col2 [InsCol2],DELETED.col1 [DelCol1],DELETED.col2 [DelCol2],$action then I can see a much better picture of what's happening, as sketched below.
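For illustration, here is the second MERGE again with that fuller OUTPUT clause; deleted rows then show NULLs on the INSERTED side, their old values on the DELETED side, and 'DELETE' in $action:
WITH data1 AS (
SELECT 1 [col1],1 [col2]
UNION ALL SELECT 2 [col1],2 [col2]
)
MERGE tbl t
USING data1 s
ON t.col1 = s.col1 AND t.col2 = s.col2
WHEN NOT MATCHED BY TARGET
THEN INSERT (col1,col2) VALUES (s.col1,s.col2)
WHEN NOT MATCHED BY SOURCE
THEN DELETE
OUTPUT INSERTED.col1 [InsCol1],INSERTED.col2 [InsCol2],DELETED.col1 [DelCol1],DELETED.col2 [DelCol2],$action;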
Thanks to Laurence for your comment.

SQL Server query to Oracle query conversion

I need to write a query in Oracle, but I'm more familiar with SQL Server.
In SQL Server, the query would look as follows: (simplified)
if exists (
select * from table where a=1
)
begin
update table set b=1 where a=1
end else
begin
insert table (a,b) values(1,1)
end
Thanks for any help :)
===============================================================================
This is the MERGE option (I think):
MERGE INTO table T
USING (
SELECT a,b
FROM table
) Q
ON T.a = Q.a
WHEN MATCHED THEN
UPDATE SET T.a = 1
WHEN NOT MATCHED THEN
INSERT table (a,b) VALUES (1,1);
Is this correct?
This should be the correct syntax for Oracle 11g. I'm not an expert on Oracle, so maybe someone else can explain it better, but I believe the DUAL table is used when inserting literal values instead of merging rows from another table.
MERGE INTO table1 T
USING (
SELECT 1 a, 1 b FROM dual
) Q
ON (T.a = Q.a)
WHEN MATCHED THEN
UPDATE SET b = 1
WHEN NOT MATCHED THEN
INSERT (a,b) VALUES (Q.a,Q.b);
Working example

I am trying to return certain values in each row which depend on whether other values in that row are already in a different table

I'm still a n00b at SQL and am running into a snag. What I have is an initial selection of certain IDs into a temp table based upon certain conditions:
SELECT DISTINCT ID
INTO #TEMPTABLE
FROM ICC
WHERE ICC_Code = 1 AND ICC_State = 'CA'
Later in the query I SELECT a different and much longer list of IDs along with other data from other tables. That SELECT is about 20 columns wide and is my result set. What I would like to do is add an extra column to that result set whose value in each row is either TRUE or FALSE: if the ID in the row is in #TEMPTABLE, the value of the additional column should read TRUE; if not, FALSE. This way the added column will read TRUE or FALSE on each row, depending on whether the ID in that row is in #TEMPTABLE.
The second SELECT would be something like:
SELECT ID,
ColumnA,
ColumnB,
...
NEWCOLUMN
FROM ...
NEWCOLUMN's value for each row would depend on whether the ID in that row is in #TEMPTABLE.
Does anyone have any advice here?
Thank you,
Matt
If you LEFT JOIN to the #TEMPTABLE you'll get a NULL where the IDs don't exist:
SELECT X.ID,
ColumnA,
ColumnB,
...
CASE WHEN T.ID IS NOT NULL THEN 1 ELSE 0 END AS NEWCOLUMN -- gives 1 or 0 as a true/false flag
FROM ... X
LEFT JOIN #TEMPTABLE T
ON T.ID = X.ID -- define how the two rows relate uniquely
You need to LEFT JOIN your results query to #TEMPTABLE ON ID; this will give you the ID if there is one and NULL if there isn't. If you want 1 or 0, wrap the test in a CASE (for SQL Server): CASE WHEN ISNULL(T.ID,0) <> 0 THEN 1 ELSE 0 END.
A few notes on coding for performance:
By definition an ID column is unique, so the DISTINCT is redundant and causes unnecessary processing (unless it is an ID from another table).
Why would you store this in a temporary table rather than just using it in the query directly? See the sketch below.
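For example, a hedged sketch of folding the condition straight into the main query (FROM ... X stands for your existing result query, as in the other answers):
SELECT X.ID,
ColumnA,
ColumnB,
...
CASE WHEN EXISTS (SELECT 1 FROM ICC
WHERE ICC.ID = X.ID AND ICC.ICC_Code = 1 AND ICC.ICC_State = 'CA')
THEN 1 ELSE 0 END AS NEWCOLUMN
FROM ... X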
You could use a union and a subquery.
Select . . . . , 'TRUE'
From . . .
Where ID in
(Select id FROM #temptable)
UNION
SELECT . . . , 'FALSE'
FROM . . .
WHERE ID NOT in
(Select id FROM #temptable)
So the top part, SELECT ... FROM ... WHERE ID IN (Subquery), does a SELECT if the ID is in your temptable.
The bottom part does a SELECT if the ID is not in the temptable.
The UNION operator joins the two results nicely, since both SELECT statements will return the same number of columns.
To expand on what someone else was saying with UNION, just do something like this:
SELECT id, 'TRUE' AS myColumn FROM table1
UNION
SELECT id, 'FALSE' AS myColumn FROM table2

Move SQL data from one table to another

I was wondering if it is possible to move all rows of data from one table to another that match a certain query?
For example, I need to move all table rows from Table1 to Table2 where their username = 'X' and password = 'X', so that they will no longer appear in Table1.
I'm using SQL Server 2008 Management Studio.
Should be possible using two statements within one transaction, an insert and a delete:
BEGIN TRANSACTION;
INSERT INTO Table2 (<columns>)
SELECT <columns>
FROM Table1
WHERE <condition>;
DELETE FROM Table1
WHERE <condition>;
COMMIT;
This is the simplest form. If you have to worry about new matching records being inserted into Table1 between the two statements, you can add an AND EXISTS check against Table2 to the DELETE, as sketched below.
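A sketch of that safeguard (the <key> column is an assumption about your schema): only rows that actually made it into Table2 get deleted, so rows inserted into Table1 in between survive.
DELETE FROM Table1
WHERE <condition>
AND EXISTS (SELECT 1 FROM Table2 WHERE Table2.<key> = Table1.<key>);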
This is an ancient post, sorry, but I only came across it now and I wanted to give my solution to whoever might stumble upon this one day.
As some have mentioned, performing an INSERT and then a DELETE might lead to integrity issues, so perhaps a way to get around it, and to perform everything neatly in a single statement, is to take advantage of the [deleted] temporary table.
DELETE FROM [source]
OUTPUT [deleted].<column_list>
INTO [destination] (<column_list>)
All these answers run the same query for the INSERT and the DELETE. As mentioned previously, this risks the DELETE picking up records inserted between the statements, and it could be slow if the query is complex (although clever engines "should" make the second call fast).
The correct way (assuming the INSERT is into a fresh table) is to do the DELETE against table1 using the key field of table2.
The delete should be:
DELETE FROM tbl_OldTableName WHERE id in (SELECT id FROM tbl_NewTableName)
Excuse my syntax, I'm jumping between engines but you get the idea.
A cleaner representation of what some other answers have hinted at:
DELETE sourceTable
OUTPUT DELETED.*
INTO destTable (Comma, separated, list, of, columns)
WHERE <conditions (if any)>
Yes it is. First INSERT + SELECT, and then DELETE the originals.
INSERT INTO Table2 (UserName,Password)
SELECT UserName,Password FROM Table1 WHERE UserName='X' AND Password='X'
then delete the originals:
DELETE FROM Table1 WHERE UserName='X' AND Password='X'
You may want to preserve UserID or some other primary key; then you can use IDENTITY_INSERT to preserve the key.
see more on SET IDENTITY_INSERT on MSDN
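A hedged sketch of that (assuming UserID is the identity column and exists on both tables):
SET IDENTITY_INSERT Table2 ON;
INSERT INTO Table2 (UserID, UserName, Password)
SELECT UserID, UserName, Password
FROM Table1
WHERE UserName = 'X' AND Password = 'X';
SET IDENTITY_INSERT Table2 OFF;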
You should be able to with a subquery in the INSERT statement.
INSERT INTO table2(column1, column2) SELECT column1, column2 FROM table1 WHERE ...;
followed by deleting from table1.
Remember to run it as a single transaction so that if anything goes wrong you can roll the entire operation back.
Use this single SQL statement; it is atomic, so there is no need for commit/rollback handling around multiple statements.
INSERT Table2 (
username,password
) SELECT username,password
FROM (
DELETE Table1
OUTPUT
DELETED.username,
DELETED.password
WHERE username = 'X' and password = 'X'
) AS RowsToMove ;
This works on SQL Server; make the appropriate changes for MySQL.
Try this
INSERT INTO TABLE2 (Cols...) SELECT Cols... FROM TABLE1 WHERE Criteria
Then
DELETE FROM TABLE1 WHERE Criteria
You could try this:
SELECT * INTO tbl_NewTableName
FROM tbl_OldTableName
WHERE Condition1=@Condition1Value
Then run a simple delete:
DELETE FROM tbl_OldTableName
WHERE Condition1=@Condition1Value
You may use "Logical Partitioning" to switch data between tables:
By updating the partition column, the data will automatically be moved to the other table.
Here is the sample:
CREATE TABLE TBL_Part1
(id INT NOT NULL,
val VARCHAR(10) NULL,
PartitionColumn VARCHAR(10) CONSTRAINT CK_Part1 CHECK(PartitionColumn = 'TBL_Part1'),
CONSTRAINT TBL_Part1_PK PRIMARY KEY(PartitionColumn, id)
);
CREATE TABLE TBL_Part2
(id INT NOT NULL,
val VARCHAR(10) NULL,
PartitionColumn VARCHAR(10) CONSTRAINT CK_Part2 CHECK(PartitionColumn = 'TBL_Part2'),
CONSTRAINT TBL_Part2_PK PRIMARY KEY(PartitionColumn, id)
);
GO
CREATE VIEW TBL(id, val, PartitionColumn)
WITH SCHEMABINDING
AS
SELECT id, val, PartitionColumn FROM dbo.TBL_Part1
UNION ALL
SELECT id, val, PartitionColumn FROM dbo.TBL_Part2;
GO
--Insert sample to TBL ( will be inserted to Part1 )
INSERT INTO TBL
VALUES(1, 'rec1', 'TBL_Part1');
INSERT INTO TBL
VALUES(2, 'rec2', 'TBL_Part1');
GO
--Query sub table to verify
SELECT * FROM TBL_Part1
GO
--move the data to table TBL_Part2 by Logical Partition switching technique
UPDATE TBL
SET
PartitionColumn = 'TBL_Part2';
GO
--Query sub table to verify
SELECT * FROM TBL_Part2
Here is how to do it with a single statement (PostgreSQL, using a writable CTE):
WITH deleted_rows AS (
DELETE FROM source_table WHERE id = 1
RETURNING *
)
INSERT INTO destination_table
SELECT * FROM deleted_rows;
EXAMPLE:
postgres=# select * from test1 ;
id | name
----+--------
1 | yogesh
2 | Raunak
3 | Varun
(3 rows)
postgres=# select * from test2;
id | name
----+------
(0 rows)
postgres=# WITH deleted_rows AS (
postgres(# DELETE FROM test1 WHERE id = 1
postgres(# RETURNING *
postgres(# )
postgres-# INSERT INTO test2
postgres-# SELECT * FROM deleted_rows;
INSERT 0 1
postgres=# select * from test2;
id | name
----+--------
1 | yogesh
(1 row)
postgres=# select * from test1;
id | name
----+--------
2 | Raunak
3 | Varun
If the two tables use the same ID or have a common UNIQUE key:
1) Insert the selected records into table2:
INSERT INTO table2 SELECT * FROM table1 WHERE (conditions)
2) Delete the selected records from table1 if present in table2 (MySQL-style multi-table DELETE):
DELETE A FROM table1 AS A INNER JOIN table2 AS B ON A.ID = B.ID WHERE (A.conditions)
This will create a new table and copy all the data from the old table into it:
SELECT * INTO event_log_temp FROM event_log
Then you can clear out the old table's data:
DELETE FROM event_log
For some scenarios, it might be easiest to script out Table1, rename the existing Table1 to Table2, and run the script to recreate an empty Table1, as sketched below.
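A hedged sketch of that in T-SQL (the column list is a placeholder for whatever your scripted-out definition contains):
EXEC sp_rename 'Table1', 'Table2';
-- then re-run the scripted-out definition to recreate an empty Table1, e.g.:
CREATE TABLE Table1 (UserName varchar(50), Password varchar(50));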