I am running this query in Snowflake SQL
SELECT field('q', 's', 'q', 'l');
However I get this error:
SQL compilation error: Unknown function FIELD
Is there any way I can find the position of something in an "IN" statement?
Ideally in a statement such as:
SELECT position_of_letter_in_in_statement, letter
from my_table a
where letter in ( 'q', 's', 'q', 'l');
with the output being as follows:
position_of_letter_in_in_statement | letter
1 | 'q'
2 | 's'
3 | 'q'
4 | 'l'
I don't understand the business requirement but here are two solutions.
1- Concat values in IN statement and use POSITION:
select position(letter, 'abcd'),
letter
from my_table
where letter in ( 'a','b','c','d');
2- Use an ARRAY:
select ARRAY_POSITION( letter::variant, ARRAY_CONSTRUCT( 'a','b','c','d')) + 1 position_of_letter_in_in_statement,
letter
from my_table
where position_of_letter_in_in_statement is NOT NULL;
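As a runnable sketch of the second approach with the letters from the original question (my example; an inline VALUES list stands in for my_table), keep in mind that ARRAY_POSITION is 0-based, hence the +1, and that it returns the position of the first occurrence for a duplicated value such as 'q':
select array_position(letter::variant, array_construct('q', 's', 'q', 'l')) + 1
         as position_of_letter_in_in_statement,
       letter
from (values ('q'), ('s'), ('l')) as my_table(letter)  -- stand-in rows for my_table
where position_of_letter_in_in_statement is not null;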
I have been trying to write a SQL query to get city names that start and end with vowels only.
Code:
select distinct city
from station
where REGEXP_LIKE (city, '^(a|e|i|o|u).*(a|e|i|o|u)$');
The above query gives me the wrong answer. I am using Oracle.
Here is a more concise way to write your query using REGEXP_LIKE:
SELECT DISTINCT city
FROM station
WHERE REGEXP_LIKE(city, '^[aeiou].*[aeiou]$', 'i');
The third match parameter, 'i', tells Oracle to do a case-insensitive search. This frees us from having to list out both uppercase and lowercase vowels.
You can use regular expressions, but you can also use the substr function. For example:
SQL> with city (name) as
2 (select 'BOSTON' from dual union all
3 select 'ALBUQUERQUE' from dual
4 )
5 select name,
6 case when substr(upper(name), 1, 1) in ('A', 'E', 'I', 'O', 'U') and
7 substr(upper(name), -1) in ('A', 'E', 'I', 'O', 'U')
8 then 'OK'
9 else 'Not OK'
10 end as result_1,
11 --
12 case when regexp_like(name, '^[aeiou].*[aeiou]$', 'i') then 'OK'
13 else 'Not OK'
14 end as result_2
15 from city;
NAME RESULT_1 RESULT_2
----------- ---------- ----------
BOSTON Not OK Not OK
ALBUQUERQUE OK OK
SQL>
I was able to write an answer for it.
Query:
select distinct city from station where regexp_like(city, '^(a|e|i|o|u|A|E|I|O|U).*(a|e|i|o|u|A|E|I|O|U)$');
I have a query that returns many columns concatenated with ':':
SELECT DECODE(ship_ps.STATUS, 'A', 'Y', 'N') AS isactive_ship
,ship_ps.party_site_id
,ship_ps.party_site_number AS site_number
,ship_ps.col1 || ship_ps.col2
from ...
where ....
and I have a separate query:
(SELECT hp.party_name
FROM apps.hz_cust_accounts hca,apps.hz_parties hp
WHERE 1=1
AND hp.party_id=hca.party_id
AND hca.status='A'
AND hca.cust_account_id=:p_sold_to_org_id6)
I want to concatenate its result with ship_ps.col1 || ship_ps.col2 || THE_QUERY.
How can I achieve that?
Did you try to run the SQL and get an error? For example:
SQL> select dummy||' '||(select 1 from dual) from dual;
DUMMY||''||(SELECT1FROMDUAL)
------------------------------------------
X 1
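Applied to the columns from the question, the same pattern would look roughly like the sketch below (an illustration only; the FROM/WHERE clauses stay elided as in the original query, and the concatenated_value alias is just an example name):
SELECT DECODE(ship_ps.STATUS, 'A', 'Y', 'N') AS isactive_ship
      ,ship_ps.party_site_id
      ,ship_ps.party_site_number AS site_number
      ,ship_ps.col1 || ship_ps.col2 ||
       (SELECT hp.party_name                            -- scalar subquery: must return a single row
          FROM apps.hz_cust_accounts hca, apps.hz_parties hp
         WHERE hp.party_id = hca.party_id
           AND hca.status = 'A'
           AND hca.cust_account_id = :p_sold_to_org_id6) AS concatenated_value
from ...
where ....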
In SQL Server, I have a field (D1) with values 101, 102, 103 in a database table: mastersupport.
I have this query:
select right(D1, R)
from mastersupport
It will return results like 1, 2, 3.
But my requirement is that I want to show the result as A, B, C instead of 1, 2, 3. Please suggest a query.
I tried the below but got a syntax error.
SELECT DISTINCT
REPLACE(REPLACE((RIGHT(D1, 1)), '1' , ‘A’), '2', ‘B’, ) AS ExtractString
FROM
master_support;
Any other query to derive the result as A, B, C ......
You can use a case expression:
select case right(d1, 1)
when '1' then 'A'
when '2' then 'B'
when '3' then 'C'
...
end as extract_string
from master_support
Note that if d1 is of a numeric datatype, using arithmetic seems like a more natural approach:
select case d1 % 10
when 1 then 'A'
when 2 then 'B'
when 3 then 'C'
...
end extract_string
from master_support
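If the intent really is just "last digit N becomes the Nth letter of the alphabet", a related arithmetic sketch (my addition, not part of the answer above) uses CHAR(), since CHAR(65) is 'A' in SQL Server; it only makes sense while the last digit stays between 1 and 9:
select char(64 + d1 % 10) as extract_string
from master_support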
I am using Oracle 10g and I need to write a query where the table that is to be considered for producing the output is based on user input.
I have written it in the following manner, but I am getting an error.
UNDEFINE CDR
SELECT F.EMPLOYEE_ID FROM
( SELECT DECODE(&&CDR,25,'TABLE 1' ,22,'TABLE 2' ,19,'TABLE 3' ,16,'TABLE 4') FROM DUAL ) F
WHERE F.FLAG='G';
The closest that you can come without dynamic SQL is:
select EMPLOYEE_ID
from table1
where flag = 'G' and &&CDR = 25
union all
select EMPLOYEE_ID
from table2
where flag = 'G' and &&CDR = 22
union all
select EMPLOYEE_ID
from table3
where flag = 'G' and &&CDR = 19
union all
select EMPLOYEE_ID
from table4
where flag = 'G' and &&CDR = 16
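If dynamic SQL is an option after all, a minimal PL/SQL sketch of that route (my addition; it assumes the table/value mapping from the question's DECODE and a numeric EMPLOYEE_ID column in each table):
DECLARE
  -- pick the table name from the substitution variable, mirroring the DECODE
  v_table VARCHAR2(30) := CASE &&CDR
                            WHEN 25 THEN 'TABLE1'
                            WHEN 22 THEN 'TABLE2'
                            WHEN 19 THEN 'TABLE3'
                            WHEN 16 THEN 'TABLE4'
                          END;
  TYPE t_ids IS TABLE OF NUMBER;
  v_ids t_ids;
BEGIN
  -- build and run the statement against the chosen table
  EXECUTE IMMEDIATE
    'SELECT employee_id FROM ' || v_table || ' WHERE flag = ''G'''
    BULK COLLECT INTO v_ids;
  FOR i IN 1 .. v_ids.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(v_ids(i));
  END LOOP;
END;
/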
I have a table with test fields, Example
id | test1 | test2 | test3 | test4 | test5
+----------+----------+----------+----------+----------+----------+
12345 | P | P | F | I | P
So for each record I want to know how many are Passed, Failed or Incomplete (P, F or I).
Is there a way to GROUP BY value?
Pseudo:
SELECT ('P' IN (fields)) AS pass
WHERE id = 12345
I have about 40 test fields that I need to somehow group together and I really don't want to write this super ugly, long query. Yes I know I should rewrite the table into two or three separate tables but this is another problem.
Expected Results:
passed | failed | incomplete
+----------+----------+----------+
3 | 1 | 1
Suggestions?
Note: I'm running PostgreSQL 7.4 and yes we are upgrading
I may have come up with a solution:
SELECT id
,l - length(replace(t, 'P', '')) AS nr_p
,l - length(replace(t, 'F', '')) AS nr_f
,l - length(replace(t, 'I', '')) AS nr_i
FROM (SELECT id, test::text AS t, length(test::text) AS l FROM test) t
The trick works like this:
Transform the rowtype into its text representation.
Measure character-length.
Replace the character you want to count and measure the change in length.
Compute the length of the original row in the subselect for repeated use.
This requires that P, F, I are present nowhere else in the row. Use a sub-select to exclude any other columns that might interfere.
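A quick sanity check of the arithmetic on the sample row (my example; it assumes the row renders as the text '(12345,P,P,F,I,P)', which is 17 characters long):
SELECT length('(12345,P,P,F,I,P)')
     - length(replace('(12345,P,P,F,I,P)', 'P', '')) AS nr_p;  -- 17 - 14 = 3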
Tested in 8.4 - 9.1. Nobody uses PostgreSQL 7.4 any more these days, so you'll have to test it yourself. I only use basic functions, but I am not sure whether casting the rowtype to text is feasible in 7.4. If that doesn't work, you'll have to concatenate all test columns by hand once:
SELECT id
,length(t) - length(replace(t, 'P', '')) AS nr_p
,length(t) - length(replace(t, 'F', '')) AS nr_f
,length(t) - length(replace(t, 'I', '')) AS nr_i
FROM (SELECT id, test1||test2||test3||test4||test5 AS t FROM test) t
This requires all columns to be NOT NULL.
Essentially, you need to unpivot your data by test:
id | test | result
+----------+----------+----------+
12345 | test1 | P
12345 | test2 | P
12345 | test3 | F
12345 | test4 | I
12345 | test5 | P
...
- so that you can then group it by test result.
Unfortunately, PostgreSQL doesn't have pivot/unpivot functionality built in, so the simplest way to do this would be something like:
select id, 'test1' test, test1 result from mytable union all
select id, 'test2' test, test2 result from mytable union all
select id, 'test3' test, test3 result from mytable union all
select id, 'test4' test, test4 result from mytable union all
select id, 'test5' test, test5 result from mytable union all
...
There are other ways of approaching this, but with 40 columns of data this is going to get really ugly.
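To round that off, a minimal sketch of the grouping step over those unpivoted rows (table and column names as in the snippet above, restricted to the sample id):
select result, count(*) as cnt
from (
  select id, 'test1' test, test1 result from mytable union all
  select id, 'test2' test, test2 result from mytable union all
  select id, 'test3' test, test3 result from mytable union all
  select id, 'test4' test, test4 result from mytable union all
  select id, 'test5' test, test5 result from mytable
) unpivoted
where id = 12345
group by result;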
EDIT: an alternative approach -
select r.result, sum(char_length(replace(replace(test1||test2||test3||test4||test5,excl1,''),excl2,'')))
from mytable m,
(select 'P' result, 'F' excl1, 'I' excl2 union all
select 'F' result, 'P' excl1, 'I' excl2 union all
select 'I' result, 'F' excl1, 'P' excl2) r
group by r.result
You could use an auxiliary on-the-fly table to turn columns into rows, then you would be able to apply aggregate functions, something like this:
SELECT
SUM(CASE WHEN fields = 'P' THEN 1 ELSE 0 END) AS passed,
SUM(CASE WHEN fields = 'F' THEN 1 ELSE 0 END) AS failed,
SUM(CASE WHEN fields = 'I' THEN 1 ELSE 0 END) AS incomplete
FROM (
SELECT
t.id,
CASE x.idx
WHEN 1 THEN t.test1
WHEN 2 THEN t.test2
WHEN 3 THEN t.test3
WHEN 4 THEN t.test4
WHEN 5 THEN t.test5
END AS fields
FROM atable t
CROSS JOIN (
SELECT 1 AS idx
UNION ALL SELECT 2
UNION ALL SELECT 3
UNION ALL SELECT 4
UNION ALL SELECT 5
) x
WHERE t.id = 12345
) s
Edit: just saw the comment about 7.4, I don't think this will work with that ancient version (unnest() came a lot later). If anyone thinks this is not worth keeping, I'll delete it.
Taking Erwin's idea to use the "row representation" as a base for the solution a bit further and automatically "normalize" the table on-the-fly:
select id,
sum(case when flag = 'F' then 1 else null end) as failed,
sum(case when flag = 'P' then 1 else null end) as passed,
sum(case when flag = 'I' then 1 else null end) as incomplete
from (
select id,
unnest(string_to_array(trim(trailing ')' from substr(all_column_values,strpos(all_column_values, ',') + 1)), ',')) flag
from (
SELECT id,
not_normalized::text AS all_column_values
FROM not_normalized
) t1
) t2
group by id
The heart of the solution is Erwin's trick to make a single value out of the complete row using the cast not_normalized::text. The string functions are applied to strip off the leading id value and the brackets around it.
The result of that is transformed into an array and that array is transformed into a result set using the unnest() function.
To understand that part, simply run the inner selects step by step.
Then the result is grouped and the corresponding values are counted.
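For the sample row above, a rough walk-through of the intermediate values (my illustration; it assumes the id column comes first in not_normalized, so the row renders as '(12345,P,P,F,I,P)'):
not_normalized::text              -> '(12345,P,P,F,I,P)'
after substr/strpos and the trim  -> 'P,P,F,I,P'
string_to_array(..., ',')         -> {P,P,F,I,P}
unnest(...)                       -> one row per flag: P, P, F, I, P
The conditional sums then yield passed = 3, failed = 1, incomplete = 1 for id 12345.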