I have a piece of code that runs on a variety of databases. It simply runs a configurable SQL query which returns a number of rows. From each row, I pull some text and a number to create a new object. Our latest client has decided to put all the text/number combinations in a single row of the database, i.e.
text_1, num_1, text_2, num_2, text_3, num_3
Is there a clever way I can query this to return
text_1,num_1
text_2,num_2
text_3,num_3
so that I don't have to re-code the section for this client.
EDIT:
(Different databases means different RDBMSs)
(the commas delimit different columns within a table)
SELECT
CASE row.id WHEN 1 THEN field1
WHEN 2 THEN field3
ELSE field5
END AS new_field_1,
CASE row.id WHEN 1 THEN field2
WHEN 2 THEN field4
ELSE field6
END AS new_field_2
FROM
myTable
CROSS JOIN
(SELECT 1 AS id UNION ALL SELECT 2 UNION ALL SELECT 3) AS row
This should work for most RDBMSs, though it may still need a little modification (such as adding FROM dual to the UNIONs for Oracle...).
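For example, on Oracle each branch of the derived row table needs FROM dual, the AS keyword is dropped from the table alias, and the alias is renamed to r here since ROW is reserved in Oracle. A sketch:
SELECT
    CASE r.id WHEN 1 THEN field1
              WHEN 2 THEN field3
              ELSE field5
    END AS new_field_1,
    CASE r.id WHEN 1 THEN field2
              WHEN 2 THEN field4
              ELSE field6
    END AS new_field_2
FROM
    myTable
CROSS JOIN
    (SELECT 1 AS id FROM dual
     UNION ALL SELECT 2 FROM dual
     UNION ALL SELECT 3 FROM dual) r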
Alternatively, just UNION three queries together...
SELECT field1, field2 FROM myTable
UNION ALL
SELECT field3, field4 FROM myTable
UNION ALL
SELECT field5, field6 FROM myTable
You can also create a function or stored procedure that returns a result set shaped the way you need.
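A minimal sketch of that idea, using SQL Server syntax and a hypothetical procedure name:
-- wraps the normalization so the application just calls one procedure
CREATE PROCEDURE get_text_num_pairs
AS
BEGIN
    SELECT field1 AS text_val, field2 AS num_val FROM myTable
    UNION ALL
    SELECT field3, field4 FROM myTable
    UNION ALL
    SELECT field5, field6 FROM myTable;
END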
I need help creating a query that compares the data and fetches the relevant rows from a table:
Input:

Field1                 Field2
abc_ID                 ID
abc                    abc
abc_id_test            test
abc_id_test            abc test
abc_id_test_scenario   scenario
abc_id_test_scenario   cde test
Output:

Field1                 Field2
abc_id_test_scenario   cde test
If every word in Field2 matches part of Field1, the record should be filtered out; I need the records where Field2 contains a word that does not appear in Field1.
You can do it in PostgreSQL using unnest and string_to_array combined to expand a string into a set of rows, then keep the rows where a word from Field2 does not match Field1:
with cte as (
select Field1, Field2, unnest(string_to_array(Field2, ' ')) as item
from mytable
)
select Field1, Field2
from cte
where Field1 not like concat('%',item,'%')
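On the sample data this returns the single row abc_id_test_scenario | cde test, because cde is the only word that does not appear in its Field1. A row is emitted once per non-matching word, so add DISTINCT if duplicates matter.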
I'm trying to take the value from a non-empty row and overwrite it into the subsequent rows until another non-empty row appears, and then carry that value forward in the same way. Coming from an ABAP background, I'm not sure how to accomplish this in HANA SQLScript.
Basically 'Doe, John' should be overwritten into all the empty rows until 'Doe, Jane' appears and then 'Doe, Jane' should be overwritten into empty rows until another name appears.
My idea is to store the non-empty row in a local variable, but I haven't had much success so far. Here's my code:
tempTab1 = SELECT
CASE WHEN EMPLOYEE <> ''
THEN lv_emp = EMPLOYEE
ELSE EMPLOYEE
END AS EMPLOYEE,
FROM :tempTab;
In general, rows in a dataset are unordered unless you explicitly specify an ORDER BY clause in your SQL. If you observe some order, it may be a side effect and can vary. So first of all you have to explicitly create a row number column (assume its name is RECORD).
Then you should go this way:
1. Select only the rows with non-empty data in the column.
2. Use LEAD(RECORD) OVER (ORDER BY RECORD) to identify the next non-empty record number.
3. Join your source dataset to the dataset from step 2 on a BETWEEN-style condition on the RECORD field.
with a as (
select 1 as record, 'Val1' as field1 from dummy union
select 2 as record, '' as field1 from dummy union
select 3 as record, '' as field1 from dummy union
select 4 as record, 'Val2' as field1 from dummy union
select 5 as record, '' as field1 from dummy union
select 6 as record, '' as field1 from dummy union
select 7 as record, '' as field1 from dummy union
select 8 as record, 'Val3' as field1 from dummy
)
, fill_base as (
select field1, record, lead(record, 1, record) over(order by record asc) as next_record
from a
where field1 <> '' and field1 is not null
)
select
a.record
, case
when a.field1 = '' or a.field1 is null
then f.field1
else a.field1
end as field1
, a.field1 as field1_original
from a
left join fill_base as f
on a.record > f.record
and a.record < f.next_record
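For the sample data, this fills records 2 and 3 with Val1 and records 5 through 7 with Val2; the non-empty records 1, 4, and 8 keep their own values.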
The performance in HANA may be bad in some cases, since it handles window functions poorly.
Here is another, more elegant solution with two nested window functions that does not force you to write multiple selects for each column: How to make LAG() ignore NULLS in SQL Server?
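A minimal sketch of that technique against the sample table a above (assuming record gives the order and empty strings mark the gaps):
select
    record
    -- MAX() per group returns the one non-empty value
    -- (empty strings sort first; NULLs are ignored)
    , max(field1) over (partition by grp_id) as field1_filled
from (
    -- the running count of non-empty values assigns every row to the
    -- group opened by the most recent non-empty row
    select
        record
        , field1
        , count(case when field1 <> '' then 1 end) over (order by record) as grp_id
    from a
) g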
You can use the window aggregate function LAST_VALUE to achieve the imputation of missing values.
Sample Data
CREATE TABLE sample (id integer, sort integer, value varchar(10));
INSERT INTO sample VALUES (4711, 1, 'Hello');
INSERT INTO sample VALUES (4712, 2, null);
INSERT INTO sample VALUES (4713, 3, null);
INSERT INTO sample VALUES (4714, 4, 'World');
INSERT INTO sample VALUES (4715, 5, null);
INSERT INTO sample VALUES (4716, 6, '!');
Generate a new column with imputed values
SELECT base.*, LAST_VALUE(fill.value ORDER BY fill.sort) AS value_imputed
FROM sample base
LEFT JOIN sample fill ON fill.sort <= base.sort AND fill.value IS NOT NULL
GROUP BY base.id, base.sort, base.value
ORDER BY base.id, base.sort
Result:

id    sort  value  value_imputed
4711  1     Hello  Hello
4712  2     null   Hello
4713  3     null   Hello
4714  4     World  World
4715  5     null   World
4716  6     !      !
Note that sort could be anything determining the order (e.g. a timestamp).
I'm running this query 50+ times, and I want to abstract the AND conditions below and store them in one global table, so in the future I only have to edit one place (vs. 50) if I want to change any of the AND conditions. What's the most efficient way to store the AND conditions in a separate table and then call them in the query below?
SELECT
Field,
Field2,
Field3
INTO table1
FROM table2
WHERE (DESCRIPTION ILIKE '%ADVANCE%AUTO%Pa%')
AND is_duplicate != 1
AND amount > 0
AND currency_id = 152
AND transaction_base_type = 'debit'
AND TRANSACTION_STATUS <> 'D'
You could create a view/materialized view:
CREATE VIEW my_view
AS
SELECT
Field,
Field2,
Field3,
DESCRIPTION
FROM table2
WHERE is_duplicate != 1
AND amount > 0
AND currency_id = 152
AND transaction_base_type = 'debit'
AND TRANSACTION_STATUS <> 'D'
and then:
SELECT Field, Field2, Field3
FROM my_view
WHERE (DESCRIPTION ILIKE '%ADVANCE%AUTO%Pa%')
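If the shared filter is expensive and table2 changes rarely, the materialized variant (PostgreSQL syntax, same hypothetical view name) would be a sketch like:
CREATE MATERIALIZED VIEW my_view AS
SELECT
    Field,
    Field2,
    Field3,
    DESCRIPTION
FROM table2
WHERE is_duplicate != 1
AND amount > 0
AND currency_id = 152
AND transaction_base_type = 'debit'
AND TRANSACTION_STATUS <> 'D';

-- re-run after table2 changes to pick up new rows
REFRESH MATERIALIZED VIEW my_view;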
EDIT
I simply need to store WHERE clauses in one place so I can update them once and call them in 50 queries vs. including them in every query and updating them 50 times. Is it really that complicated?
As I wrote in a comment, you cannot simply parametrize a table name (and needing to may indicate that the schema is flawed). SQL is a powerful language, though, so you could use dynamic SQL and functions.
CREATE OR REPLACE FUNCTION my_func(tab_name text)
RETURNS TABLE (
id INT, -- here goes common column list shared across all 50 tables
col1 INT,
col2 INT
)
LANGUAGE plpgsql
AS $BODY$
BEGIN
RETURN QUERY
EXECUTE format('SELECT * from %I where col2 > 0',tab_name);
-- here goes shared conditions
END
$BODY$;
SELECT * FROM my_func('tab1');
SELECT * FROM my_func('tab2') WHERE col2 = 2;
-- condition that is not shared
I have a field which holds a short list of fixed-length ids.
e.g. aab:aac:ada:afg
The field is intended to hold at most 5 ids, growing gradually. I update it by adding from a similarly constructed field that may partially overlap with my existing set, e.g. ada:afg:fda:kfc.
The field expands when joined to an "update" table, as in the following example.
Here, id_list is the aforementioned list I want to "merge", and table_update is a table with new values I want to "merge" into table1.
insert overwrite table table1
select
id,
field1,
field2,
case
when (some condition) then a.id_list
else merge(a.id_list, b.id_list)
end as id_list
from table1 a
left join
table_update b
on a.id = b.id;
I'd like to produce a combined field with the following value:
aab:aac:ada:afg:fda.
The challenge is that I don't know whether, or by how much, the strings overlap until execution, and I cannot run any external code or create UDFs.
Any suggestions how I could approach this?
Split to get arrays, explode them, select existing UNION ALL new, aggregate using collect_set() (which produces an array of unique values), then concatenate the array into a string using concat_ws(). Not tested:
select concat_ws(':',collect_set(id))
from
(
select explode(split('aab:aac:ada:afg',':')) as id --existing
union all
select explode(split('ada:afg:fda:kfc',':')) as id --new
) s;
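With the two sample lists this yields aab:aac:ada:afg:fda:kfc, although collect_set() does not guarantee the order of the elements.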
You can use UNION instead of UNION ALL to get distinct values before aggregating into the array. Or you can concatenate the new and existing strings into one, then do the same:
select concat_ws(':',collect_set(id))
from
(
select explode(split(concat('aab:aac:ada:afg',':','ada:afg:fda:kfc'),':')) as id --existing+new
) s;
Most probably you will need to use a lateral view with explode in the real query, as in the update below.
Update:
insert overwrite table table1
select
id,
field1,
field2,
concat_ws(':',collect_set(a.idl)) as id_list -- columns ordered to match table1 (id, field1, field2, id_list)
from
(
select
id,
field1,
field2,
split(
case
when (some condition) then a.id_list
when b.id_list is null then a.id_list
else concat(a.id_list,':',b.id_list)
end,':') as id_list_array
from table1 a
left join table_update b on a.id = b.id
)s
LATERAL VIEW OUTER explode(id_list_array ) a AS idl
group by
id,
field1,
field2
;
I'm building a report and exporting it to GSheets. However, instead of running four to six calls to BQ (different projects), I'd like to make one call and extract the result as something like
T1.field1 | T1.field2 | T2.field3 | T2.field4 | etc.
The point is that these output data are not related to each other, and the sizes of the output tables differ as well. I thought of having nulls in the shorter tables.
The only solution I could think of is to add another column with row number and make a full join on the row number.
If you have better solution, I'd love to hear.
Thanks!
Instead of joining, you can consider a union, as in the simplified example below. The results are not laid out horizontally, but it is still one call, and friendly enough for a spreadsheet to manipulate.
SELECT output, field1, field2, field3, field4, field5, field6
FROM
(SELECT 't1' AS output, field1, field2, field3
FROM (SELECT 1 AS field1, 2 AS field2, 3 AS field3)),
(SELECT 't2' AS output, field4, field5
FROM (SELECT 4 AS field4, 5 AS field5)),
(SELECT 't3' AS output, field6
FROM (SELECT 6 AS field6))
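Note this is BigQuery legacy SQL, where the comma between subselects means UNION ALL and columns missing from a branch are padded with NULLs automatically. A sketch of the same shape in standard SQL, where every branch must list all columns explicitly:
SELECT 't1' AS output, 1 AS field1, 2 AS field2, 3 AS field3,
       NULL AS field4, NULL AS field5, NULL AS field6
UNION ALL
SELECT 't2', NULL, NULL, NULL, 4, 5, NULL
UNION ALL
SELECT 't3', NULL, NULL, NULL, NULL, NULL, 6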