Looping over HANA table - hana

I'm trying to create a stored procedure in HANA that should loop through a table. Let's say I have 3 columns: ColumnA, ColumnB and ColumnC. In ColumnA, I have the identifiers that I would like to loop over. In ColumnB, I have related identifiers, but in some cases the identifiers in ColumnB can be the same as what's in ColumnA. In ColumnC I have a COUNT.
So the table looks like:
ColumnA | ColumnB | ColumnC
0001 | 0002 | 0
0003 | 0004 | 0
0002 | 0005 | 6
The process should loop over each row and check whether the value in ColumnC is greater than 0. If it's not, then take the related identifier from ColumnB and look for it in ColumnA. If that related row has a value greater than 0, the loop should insert that line into a table and break.
Any suggestion would be useful, I'm also open to using different methods, besides a procedure.

BEGIN
DECLARE V_NO INT;
DECLARE LV_LOOP INT;
DECLARE LV_COLUMNA NVARCHAR(10);
DECLARE LV_COLUMNB NVARCHAR(10);
DECLARE LV_COLUMNC INT;
DECLARE LV_MATCH INT;
SELECT COUNT(*) INTO V_NO FROM "YOURTABLE";
FOR LV_LOOP IN 1..:V_NO DO
    -- read the row for the current loop index
    SELECT "COLUMNA", "COLUMNB", "COLUMNC"
        INTO LV_COLUMNA, LV_COLUMNB, LV_COLUMNC
        FROM "YOURTABLE"
        ORDER BY "COLUMNA"
        LIMIT 1 OFFSET :LV_LOOP - 1;
    IF :LV_COLUMNC > 0 THEN
        INSERT INTO "YOURTABLE1" VALUES (.....);
    ELSE
        -- look for the related identifier from COLUMNB in COLUMNA
        SELECT COUNT(*) INTO LV_MATCH
            FROM "YOURTABLE"
            WHERE "COLUMNA" = :LV_COLUMNB AND "COLUMNC" > 0;
        IF :LV_MATCH > 0 THEN
            SELECT 'PASS' FROM DUMMY;
        END IF;
    END IF;
END FOR;
END;
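Since the question also allows approaches other than a procedure, here is a minimal set-based sketch of the same lookup, assuming the placeholder table names above (YOURTABLE, YOURTABLE1) and an illustrative target column list, which is not specified in the question:

-- For rows whose COLUMNC is not greater than 0, follow COLUMNB to a row
-- whose COLUMNA matches it and has COLUMNC > 0, and copy that related row.
INSERT INTO "YOURTABLE1" ("COLUMNA", "COLUMNB", "COLUMNC")
SELECT r."COLUMNA", r."COLUMNB", r."COLUMNC"
FROM "YOURTABLE" t
JOIN "YOURTABLE" r
  ON  r."COLUMNA" = t."COLUMNB"
  AND r."COLUMNC" > 0
WHERE t."COLUMNC" <= 0;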
Please let me know if this resolves your issue.

Related

how to use case to do multiple insert in a table in postgres

I am using a case like this in a stored procedure.
BEGIN
FOREACH abc IN ARRAY abcarray
LOOP
  CASE abc
    WHEN abc1 THEN INSERT INTO table_a VALUES (1);
    WHEN abc2 THEN INSERT INTO table_a VALUES (2);
    ELSE
      NULL; -- do nothing
  END CASE;
END LOOP;
END;
Is this the proper way to use case if the requirement is to insert both 1 and 2 in the table, or should I try something else?
You don't need a loop or PL/pgSQL for that. You can embed a CASE expression into a SELECT that acts as the source for the INSERT:
insert into table_a (some_column)
select case
when abc = 'abc1' then 1
when abc = 'abc2' then 2
end
from unnest(abcarray) as x(abc);
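Note that array elements matching neither branch are still inserted, as NULL. If they should be skipped instead, a WHERE clause can filter them out; a small sketch under the same assumed table and column names:

insert into table_a (some_column)
select case
         when abc = 'abc1' then 1
         when abc = 'abc2' then 2
       end
from unnest(abcarray) as x(abc)
where abc in ('abc1', 'abc2');  -- skip values with no matching branch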

Write a cursor to search and insert

I have tables like this:
Main
id name isedit
1 kyle 0
2 jhon 1
3 dave 0
EditHistory
id idmain name isedit Begin end
1 2 jhon 0 28.05.2020 18:30 28.05.2020 18:35
2 2 jhon 0 28.05.2020 18:35 NULL
3 1 kyle 0 27.05.2020 12:03 NULL
I currently use trigger:
(…) if update(isedit) and exists (
select 1
from Inserted I
where design = 0
) begin
Insert into dbo.HistoryEdit
([idmain],[name],[isedit],[Begin],[end]) SELECT id, name, isedit, GETDATE(), null
from Inserted
end;
I need to create a cursor that will check EditHistory for previous rows with the same idmain and, if there is such a row, set its end date to GETDATE() and then insert into HistoryEdit as in my current insert.
I know it can easily be done with IFs, and that's how I would do it. But I have to use a cursor for this, and I have never used a cursor before.
I have never used a cursor before.
Well, don't start now. Just update the older rows before you insert the newer ones:
declare @d datetime = GetDate()
update EditHistory set [end] = @d
where idmain in (select id from inserted)
and [end] is null;
Insert into dbo.HistoryEdit
([idmain],[name],[isedit],[Begin],[end])
SELECT id, name, isedit, @d, null
from Inserted
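For orientation, here is a minimal sketch of how those two statements could sit inside the trigger from the question (the trigger name is hypothetical; table and column names are taken from the question, including its mixed use of EditHistory and dbo.HistoryEdit):

CREATE TRIGGER trg_Main_EditHistory   -- hypothetical trigger name
ON Main
AFTER UPDATE
AS
BEGIN
    IF UPDATE(isedit)
    BEGIN
        DECLARE @d datetime = GETDATE();

        -- close any still-open history rows for the rows being updated
        UPDATE EditHistory SET [end] = @d
        WHERE idmain IN (SELECT id FROM inserted)
          AND [end] IS NULL;

        -- then log the new state with an open-ended period
        INSERT INTO dbo.HistoryEdit ([idmain],[name],[isedit],[Begin],[end])
        SELECT id, name, isedit, @d, NULL
        FROM inserted;
    END
END;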

Sql Developer - Can you use a case statement in a cursor to return multiple values

I've been working through a task of trying to classify several million rows of data into a variety of different topics. The data involves calls from our customer support, and we're trying to find a way to classify each call into one of 109 topics. Due to the confidentiality of the data I can't disclose any of the actual data, but will try to give a relatable subset of data that other people could compare to.
DATA:
Incident_Number | Call_Description
000123456 | Issue with oranges and apples
000987654 | oranges
004567891 | with apples and kiwis
026589741 | Issue with kiwis
SQL:
select
Incident_Number,
Call_Description,
(case
when call_description like '%oranges%' then 'oranges'
when call_description like '%apples%' then 'apples'
when call_description like '%kiwis%' then 'kiwis'
else 'Unclassified' end) Topic
from DATA
Question
My hope would be to have Incident 000123456 classified as both oranges and apples and Incident 004567891 get classified as apples and kiwis
Desired Output
Incident_Number | Call_Description             | Topic
000123456 | Issue with oranges and apples | oranges
000123456 | Issue with oranges and apples | apples
000987654 | oranges | oranges
004567891 | with apples and kiwis | apples
004567891 | with apples and kiwis | kiwis
026589741 | Issue with kiwis | kiwis
Wrapup
From my limited knowledge and what I've gathered from research, a simple case statement can't do this because it short-circuits after finding the first true value. My question is whether it is possible to make some alterations to my code, OR instead to somehow set up a cursor to run through my initial table and give me the desired output noted above.
I appreciate any help or advice and hope that I've adhered to the rules of this website (which has honestly saved my butt before!)
Regards,
Richard
I use Microsoft SQL Server instead of Oracle, so I'm not sure about the Oracle syntax, but one solution I have used in the past is to create a temporary table:
CREATE GLOBAL TEMPORARY TABLE my_temp_table (
groupName varchar(50)
) ON COMMIT DELETE ROWS;
Insert Into my_temp_table (groupName) VALUES('oranges')
Insert Into my_temp_table (groupName) VALUES('apples')
Insert Into my_temp_table (groupName) VALUES('kiwis')
then I would inner join to the table to duplicate the records:
select
Incident_Number,
Call_Description,
my_temp_table.groupName Topic
from DATA
inner join my_temp_table
on Data.call_description like '%' + my_temp_table.groupName + '%'
One problem with this method is that if a record doesn't fall into any categories, it will be excluded completely.
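One way around that exclusion, sketched under the same assumed table and column names as above, is to switch to a left join and label the rows that match no keyword:

select
    Incident_Number,
    Call_Description,
    coalesce(my_temp_table.groupName, 'Unclassified') as Topic
from DATA
left join my_temp_table
    on DATA.call_description like '%' + my_temp_table.groupName + '%'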
One option would be to create (physically or virtually) a table of keywords and use that to join to your table of data. Something like
WITH keywords AS (
SELECT 'apples' topic FROM dual UNION
SELECT 'oranges' FROM dual UNION
SELECT 'kiwis' FROM dual
)
SELECT data.incident_number,
data.incident_description,
keywords.topic
FROM data
JOIN keywords
ON( data.incident_description LIKE '%' || keywords.topic || '%' )
This will work but it's not the most efficient or flexible approach in the world. It doesn't handle different forms of the word well (if a description references the singular "apple" for example). It doesn't handle words that appear within other words (if a description talks about "crabapples" for example). And it's doing a relatively slow match based on scanning through the entire incident_description.
An alternative approach would be to use Oracle Text to index your data. That's likely to be a more complex solution but it would be much more flexible and should be more efficient.
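As a rough illustration of the Oracle Text route (the index name here is an assumption, not from the question), you would index the description column once and then match keywords with CONTAINS instead of LIKE:

-- requires the Oracle Text option (CTXSYS); index name is hypothetical
CREATE INDEX data_desc_txt ON data (incident_description)
  INDEXTYPE IS CTXSYS.CONTEXT;

-- a keyword match then uses the text index rather than a full scan
SELECT incident_number, incident_description, 'apples' AS topic
FROM   data
WHERE  CONTAINS(incident_description, 'apples') > 0;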
My example is for MS SQL Server: http://www.sqlfiddle.com/#!3/8e904/2
You could create a table with the details and use a simple cross join with it:
create table data(
Incident_Number varchar(9),
Call_Description varchar(50))
create table detail(detail varchar(20))
insert into data select
'000123456','Issue with oranges and apples'
union select
'000987654', 'oranges '
union select
'004567891', 'with apples and kiwis'
union select
'026589741', 'Issue with kiwis'
insert into detail select 'kiwis'
union select 'oranges'
union select 'apples'
select * from data a
cross join detail b
where a.call_description like '%'||b.detail||'%'
order by a.Incident_Number
I added a new version: http://www.sqlfiddle.com/#!3/b3378/1
It handles all incidents, identifying those without any match with null:
select * from data a
left join detail b
on a.call_description like '%'||b.detail||'%'
I have gone for the CURSOR option to loop through the records in the table and update each row with the category information.
Because you mentioned your rules are quite complicated, I think each one might need to be hand-written in this procedure rather than stored in a table.
Sorry, this is MSSQL so it might need converting a little for Oracle.
/*BEGIN SETUP TEMP DATA*/
CREATE TABLE #tmp1 (
idno int,
title nvarchar(100),
Categories nvarchar(100) --I have added this column to my temp data; you could store this in another table if needed.
)
INSERT INTO #tmp1 (idno, title) VALUES (1, 'problem apples and oranges')
INSERT INTO #tmp1 (idno, title) VALUES (2, 'problem with apples')
INSERT INTO #tmp1 (idno, title) VALUES (3, 'problem with oranges')
INSERT INTO #tmp1 (idno, title) VALUES (4, 'problem with kiwis')
INSERT INTO #tmp1 (idno, title) VALUES (5, 'problem with something')
/*END SETUP TEMP DATA*/
/*SETUP VARIABLES TO USE IN CURSOR*/
DECLARE @idno int,
@title nvarchar(100)
/*DECLARE THE CURSOR, OPEN IT AND FETCH DATA INTO IT*/
DECLARE incident_cursor CURSOR FOR
SELECT idno, title
FROM #tmp1 --You could add WHERE Categories IS NULL to only update records that have not been processed
OPEN incident_cursor
FETCH NEXT FROM incident_cursor
INTO @idno, @title
/*LOOP THROUGH CURSOR*/
WHILE @@FETCH_STATUS = 0
BEGIN
DECLARE @allCategories nvarchar(100)
SET @allCategories = ''
/*WRITE RULES HERE TO WORK OUT WHETHER CATEGORY NEEDS ADDING*/
IF (@title LIKE '%apples%') BEGIN SET @allCategories = @allCategories + 'Apples ' END
IF (@title LIKE '%oranges%') BEGIN SET @allCategories = @allCategories + 'Oranges ' END
IF (@title LIKE '%kiwis%') BEGIN SET @allCategories = @allCategories + 'Kiwis ' END
IF @allCategories = '' BEGIN SET @allCategories = 'Uncategorised' END
/*UPDATE ORIGINAL TABLE WITH CATEGORY INFORMATION*/
UPDATE #tmp1 SET Categories = @allCategories WHERE idno = @idno
FETCH NEXT FROM incident_cursor
INTO @idno, @title
END
CLOSE incident_cursor;
DEALLOCATE incident_cursor;
/*THESE ARE JUST TO DISPLAY OUTPUT AND CLEAN UP TEST DATA*/
SELECT * FROM #tmp1
DROP TABLE #tmp1
You could set this to run in a scheduled job to update your records as frequently as you see fit.
It might not be a perfect solution for you, but hopefully it's a starting point.
Sorry for how long it's taken me to come back to this. I got pulled into some more pressing deliverables. I ended up figuring out how to get it working by storing the queries in one table as records, then running them using an execute immediate command. I've included the code below in case someone can use it in the future.
create table text_kw_search_queries
(
incident_number varchar2(15),
description varchar2(100),
topic_level_1 varchar2(89),
topic_level_2 varchar2(89)
);
DECLARE
V_SQL VARCHAR2(1000);
CURSOR UPTO IS
SELECT TOPIC_level_1, TOPIC_level_2, description FROM tcstopic
;
BEGIN
FOR i IN UPTO LOOP
V_SQL := 'INSERT /*+APPEND PARALLEL(TEXT_KW_SEARCH_LOOP,2)*/ INTO text_kw_search_queries
SELECT
t2.Incident_Number,
t2.description,
t2.TOPIC_level_1,
t2.TOPIC_level_2
FROM (
SELECT
t1.Incident_number,
t1.description,
'''||i.TOPIC_level_1||''' AS TOPIC_level_1,
'''||i.TOPIC_level_2||''' AS TOPIC_level_2,
(CASE WHEN '|| i.description ||' THEN 1 ELSE 0 END) as FLAG
FROM cso_text_query t1
) t2
WHERE t2.FLAG = 1';
EXECUTE IMMEDIATE V_SQL;
DBMS_OUTPUT.PUT_LINE(V_SQL);
COMMIT WORK;
END LOOP;
END;
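One detail worth spelling out: in this approach the description column of tcstopic is expected to hold a SQL predicate, which the dynamic statement splices into the CASE expression. A hypothetical rule row (names and values here are illustrative, not from the original post) might look like this:

-- the description column stores the predicate used in
-- "CASE WHEN <description> THEN 1 ELSE 0 END" of the generated INSERT
INSERT INTO tcstopic (topic_level_1, topic_level_2, description)
VALUES ('Produce', 'Apples', 'UPPER(t1.description) LIKE ''%APPLE%''');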

How to select as many records as indicated by a value from the database

I have one table in a Sybase ASE relational database, with a few columns. Three of them look like this example:
_____________
| Product |
---------------
| ProductId |
| Name |
| Quantity |
_____________
So we have some records:
__________________________________
| ProductId | Name | Quantity |
----------------------------------
| 1 | pants | 2 |
| 2 | shirt | 1 |
| 3 | sweater | 3 |
----------------------------------
I need to get every name as many times as the 'Quantity' of that product.
So the result should look like:
pants
pants
shirt
sweater
sweater
sweater
If somebody has any idea how I can do this, please help me.
EDIT
2014-01-24 14:17 UTC+1
I'd like to thank everybody. Gordon's solution is really nice, but for my situation (bigger Quantity) I can't use that SQL. I tried to do something like 333kenshin's and simon's solutions, but without a cursor. I did something like this:
IF OBJECT_ID('#TEMP') is not null
DROP TABLE #TEMP
create TABLE #TEMP (Name varchar(255))
DECLARE @Name varchar(255)
DECLARE @Quant INT
DECLARE @prodId INT
SET @prodId = 1
WHILE (EXISTS(SELECT 1 FROM product WHERE productID = @prodId))
BEGIN
SELECT
@Name = Name,
@Quant = Quantity
FROM Product
WHERE productID = @prodId
DECLARE @i INT
SET @i = 1
WHILE @i <= @Quant
BEGIN
insert into #TEMP
values(@Name)
SELECT @i = @i + 1
END
SELECT @prodId = @prodId + 1
END
select * from #TEMP
drop table #TEMP
For me and my DB it was the fastest solution. So thanks a lot for all the answers.
To do this, you need a series of integers. You can generate one manually:
select p.name
from product p join
(select 1 as n union all select 2 union all select 3 union all select 4
) n
on n.n <= p.quantity;
This will work if the quantity is not too big and you can list the values in n.
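If Quantity can get larger than a hand-typed list comfortably covers, a common workaround is to build the series from a small cross join of digits; here is a sketch in generic SQL (the derived-table form is an assumption about what your ASE version accepts):

-- n runs from 1 to 100; add another digit set to go higher
select p.name
from product p
join (select d1.d + 10 * d2.d + 1 as n
      from (select 0 as d union all select 1 union all select 2 union all select 3 union all select 4
            union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) d1,
           (select 0 as d union all select 1 union all select 2 union all select 3 union all select 4
            union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) d2
     ) n
  on n.n <= p.quantity;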
The correct way to do this is temp table + cursors:
create a temp table
create cursor to iterate through Product table
within the cursor, create an inner WHILE loop
exit the loop and finally select the temp table
The following isn't 100% correct Sybase syntax, but it's pretty close.
-- 1: temp table (create it empty, with just the column we need)
select productName into #TEMP from Product where 1 = 0
-- 2: cursor
declare
@productName char(10),
@quantity int
declare ProductRead CURSOR for
select
productName,
quantity
from
Product
OPEN ProductRead
FETCH ProductRead
INTO
@productName,
@quantity
WHILE (@@sqlstatus = 0)
BEGIN
-- 3: inner while loop
DECLARE @i INT
SET @i = 1
WHILE @i <= @quantity
BEGIN
insert into #TEMP values (@productName)
SELECT @i = @i + 1
END
-- fetch the next product before the outer loop repeats
FETCH ProductRead
INTO
@productName,
@quantity
END
-- 4: final result set
select productName from #TEMP
You could use a temporary table. Select all your product rows, then loop through each row returned. For each product, loop Quantity times and in the loop insert the product into the temp table. I'm not a Sybase user, so I can't give you the syntax, but you would have a stored procedure which would do something like:
select all rows from product into a cursor
for each row
for i = 1 to row.Quantity
insert into temp (Name) values (row.Name)
next
end loop
select * from temp and return it
It's a pretty eccentric requirement, though.
EDIT: Gordon's solution is neat though! If n never gets too big I'd go with that!

SQL SELECT statement, column names as values from another table

I'm working on a database which has the following table:
id location
1 Singapore
2 Vancouver
3 Egypt
4 Tibet
5 Crete
6 Monaco
My question is, how can I produce a query from this which would result in column names like the following without writing them into the query:
Query result:
Singapore , Vancouver, Egypt, Tibet, ...
< values >
how can I produce a query which would result in column names like the
following without writing them into the query:
Even with crosstab() (from the tablefunc extension), you have to spell out the column names.
Except if you create a dedicated C function for your query. The tablefunc extension provides a framework for this; the output columns (the list of countries) have to be stable, though. I wrote up a "tutorial" for a similar case a few days ago:
PostgreSQL row to columns
The alternative is to use CASE expressions like this:
SELECT sum(CASE WHEN t.id = 1 THEN o.ct END) AS "Singapore"
, sum(CASE WHEN t.id = 2 THEN o.ct END) AS "Vancouver"
, sum(CASE WHEN t.id = 3 THEN o.ct END) AS "Egypt"
-- more?
FROM tbl t
JOIN (
SELECT id, count(*) AS ct
FROM other_tbl
GROUP BY id
) o USING (id);
ELSE NULL is optional in a CASE expression. The manual:
If the ELSE clause is omitted and no condition is true, the result is null.
Basics for both techniques:
PostgreSQL Crosstab Query
You could do this with some really messy dynamic SQL, but I wouldn't recommend it.
However, you could produce something like the output below; let me know if that structure is acceptable and I will post some SQL.
Location | Count
---------+------
Singapore| 1
Vancouver| 0
Egypt | 2
Tibet | 1
Crete | 3
Monaco | 0
if object_id('tempdb..#yourtable') is not null
drop table #yourtable;
create table #yourtable(id int, location varchar(25));
insert into #yourtable values
('1','Singapore'),
('2','Vancouver'),
('3','Egypt'),
('4','Tibet'),
('5','Crete'),
('6','Monaco');
if object_id('tempdb..#temp') is not null
drop table #temp;
create table #temp( col1 int );
Declare @Script as Varchar(8000);
Declare @Script_prepare as Varchar(8000);
Set @Script_prepare = 'Alter table #temp Add [?] varchar(100);'
Set @Script = ''
Select
@Script = @Script + Replace(@Script_prepare, '?', [location])
From
#yourtable
Where
[id] is not null
Exec (@Script);
ALTER TABLE #temp DROP COLUMN col1 ;
select * from #temp;