PostgreSQL query to select records where a specific value isn't included in a text array - sql

I have a table like this
| id | data |
|---------------|---------------------|
| org:abc:basic | {org,org:abc:basic} |
| org:xyz:basic | {org,basic} |
| org:efg:basic | {org} |
I need to write a query to select all the rows that don't have the id inside the data column.
Or at least I need to query all the records that don't have a text starting with org: and ending with :basic within data.
Currently I'm trying to run the
SELECT * FROM t_permission WHERE 'org:%:basic' NOT LIKE ANY (data)
query, which returns everything, even the first row.

You can use the <> operator with ALL against the array:
select *
from the_table
where id <> all(data);
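For the second requirement (excluding rows whose data array contains any element starting with org: and ending with :basic), a minimal sketch, assuming the table is named t_permission as in the question, is to unnest the array and test each element with LIKE:

-- keep only rows where no element of data matches the org:...:basic pattern
SELECT *
FROM t_permission p
WHERE NOT EXISTS (
    SELECT 1
    FROM unnest(p.data) AS elem
    WHERE elem LIKE 'org:%:basic'
);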


Snowflake returns 'invalid query block' error when using `=ANY()` subquery operator

I'm trying to filter a table with a list of strings as a parameter, but since I want to make the parameter optional (in a Python SQL use case) I can't use the IN operator.
With postgresql I was able to build the query like this:
SELECT *
FROM table1
WHERE (id = ANY(ARRAY[%(param_id)s]::INT[]) OR %(param_id)s IS NULL)
;
Then in Python one could choose to pass a list of param_id or just None, which will return all results from table1. E.g.
pandas.read_sql(query, con=con, params={param_id: [id_list or None]})
However, I couldn't do the same with Snowflake because even the following query fails:
SELECT *
FROM table1
WHERE id = ANY(param_id)
;
Does Snowflake not have the ANY operator? Because it is in their documentation.
If the parameter is a single string literal such as '1,2,3', then it first needs to be split into multiple rows with SPLIT_TO_TABLE:
SELECT *
FROM table1
WHERE id IN (SELECT s.value
             FROM TABLE(SPLIT_TO_TABLE(%(param_id)s, ',')) AS s);
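To keep the optional-parameter behaviour from the PostgreSQL version, one possible sketch (assuming %(param_id)s is bound either as a comma-separated string or as NULL to return everything):

SELECT *
FROM table1
WHERE %(param_id)s IS NULL             -- NULL parameter: no filtering
   OR id IN (SELECT s.value            -- otherwise filter on the split values
             FROM TABLE(SPLIT_TO_TABLE(%(param_id)s, ',')) AS s);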
Agree with @Yuya. This is not very clear in the documentation. As per the doc:
"IN is shorthand for = ANY, and is subject to the same restrictions as ANY subqueries."
However, it does not work this way: IN works with an IN list, whereas ANY only works with a subquery.
Example -
select * from values (1,2),(2,3),(4,5);
+---------+---------+
| COLUMN1 | COLUMN2 |
|---------+---------|
|       1 |       2 |
|       2 |       3 |
|       4 |       5 |
+---------+---------+
IN works fine with a list of literals:
select * from values (1,2),(2,3),(4,5) where column1 in (1,2);
+---------+---------+
| COLUMN1 | COLUMN2 |
|---------+---------|
|       1 |       2 |
|       2 |       3 |
+---------+---------+
The below gives an error (though as per the doc, IN and = ANY are the same):
select * from values (1,2),(2,3),(4,5) where column1 = ANY (1,2);
002076 (42601): SQL compilation error:
Invalid query block: (.
Using ANY with a subquery runs fine:
select * from values (1,2),(2,3),(4,5) where column1 = ANY (select column1 from values (1),(2));
+---------+---------+
| COLUMN1 | COLUMN2 |
|---------+---------|
|       1 |       2 |
|       2 |       3 |
+---------+---------+
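For comparison, IN with the same subquery should also run fine, since the doc describes IN as shorthand for = ANY:

select * from values (1,2),(2,3),(4,5) where column1 in (select column1 from values (1),(2));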
Would it not make more sense for both Snowflake and PostgreSQL to have two functions/stored procedures, one with one parameter and one with two? Then the one with the “default” simply does not ask this fake question (is the IN/ANY/SOME parameter none?) and is simpler. That said, your question is interesting.

HIVE SQL: Select rows whose values contain string in a column

I want to select rows whose values contain a string in a column.
For example, I want to select all rows whose values contain a string '123' in the column 'app'.
table:
app          id
123helper    xdas
323helper    fafd
2123helper   dsaa
3123helper   fafd
md5321       asdx
md5123       dsad
result:
app          id
123helper    xdas
2123helper   dsaa
3123helper   fafd
md5123       dsad
I am not familiar with SQL queries.
Could anyone help me?
Thanks in advance.
In a number of ways:
like:
select * from table
where app like '%123%'
rlike:
...
where app rlike '123'
instr:
...
where instr(app, '123')>0
locate:
...
where locate('123', app)>0
Or invent your own way.
Read the manual: String Functions and Operators.
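Written out as a full query against the sample table, the instr variant, for example (assuming the table is called your_table), would be:

-- rows where '123' occurs anywhere in app
select *
from your_table
where instr(app, '123') > 0;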
Try the following using like
select *
from yourTable
where app like '%123%'
Output:
| app | id |
| ---------- | ---- |
| 123helper | xdas |
| 2123helper | dsaa |
| 3123helper | fafd |
| md5123 | dsad |
Please use the below query:
select app, id from table where app like '%123%';
Below is some additional information:
like '123%'  --> Starts with 123
like '%123'  --> Ends with 123
like '%123%' --> Contains 123 anywhere in the string

Postgres jsonb. Heterogeneous json fields

If I have a table with a single jsonb column and the table has data like this:
[{"body": {"project-id": "111"}},
{"body": {"my-org.project-id": "222"}},
{"body": {"other-org.project-id": "333"}}]
Basically it stores project-id differently for different rows.
Now I need a query where the data->'body'->'etc' values from different rows would coalesce into a single field 'project-id'. How can I do that?
e.g.: if I do something like this:
select data->'body'->'project-id' projectid from mytable
it will return something like:
| projectid |
| 111 |
But I also want the project-ids in the other rows too, and I don't want additional columns in the results, i.e. I want this:
| projectid |
| 111 |
| 222 |
| 333 |
I understand that each of your rows contains a json object, with a nested object whose key varies over rows, and whose value you want to acquire.
Assuming the 'body' always has a single key, you could do:
select jsonb_extract_path_text(t.js -> 'body', x.k) projectid
from t
cross join lateral jsonb_object_keys(t.js -> 'body') as x(k)
The lateral join on jsonb_object_keys() extracts all keys in the object as rows. Then we use jsonb_extract_path_text() to get the corresponding value.
Demo on DB Fiddle:
with t as (
select '{"body": {"project-id": "111"}}'::jsonb js
union all select '{"body": {"my-org.project-id": "222"}}'::jsonb
union all select '{"body": {"other-org.project-id": "333"}}'::jsonb
)
select jsonb_extract_path_text(t.js -> 'body', x.k) projectid
from t
cross join lateral jsonb_object_keys(t.js -> 'body') as x(k)
| projectid |
| :--------- |
| 111 |
| 222 |
| 333 |
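An alternative sketch, reusing the t/js names from the demo above, is jsonb_each_text, which also copes with a body object holding more than one key by filtering on the key suffix (assuming every relevant key ends in project-id):

select v.value as projectid
from t
cross join lateral jsonb_each_text(t.js -> 'body') as v  -- one row per key/value pair in body
where v.key like '%project-id';                          -- keep only the project-id variants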

Search an SQL table that already contains wildcards?

I have a table that contains patterns for phone numbers, where x can match any digit.
+----+--------------+----------------------+
| ID | phone_number | phone_number_type_id |
+----+--------------+----------------------+
| 1  | 1234x000x    | 1                    |
| 2  | 87654311100x | 4                    |
| 3  | x111x222x    | 6                    |
+----+--------------+----------------------+
Now, I might have 511132228, which will match row 3, and it should return its type. So it's kind of like SQL wildcards, but the other way around, and I'm confused about how to achieve this.
Give this a go:
select * from my_table
where '511132228' like replace(phone_number, 'x', '_')
select *
from yourtable
where '511132228' like (replace(phone_number, 'x','_'))
Try the below query:
SELECT ID,phone_number,phone_number_type_id
FROM TableName
WHERE '511132228' LIKE REPLACE(phone_number,'x','_');
Query with test data:
With TableName as
(
SELECT 3 ID, 'x111x222x' phone_number, 6 phone_number_type_id from dual
)
SELECT 'true' value_available
FROM TableName
WHERE '511132228' LIKE REPLACE(phone_number,'x','_');
The above query will return data if a pattern match is found, and will not return any rows if there is no match.
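If the DBMS supports regular expressions (the FROM dual in the test query suggests Oracle), a regex variant is another option; a sketch, assuming the pattern column only ever contains digits and the literal x:

SELECT ID, phone_number, phone_number_type_id
FROM TableName
WHERE REGEXP_LIKE('511132228', '^' || REPLACE(phone_number, 'x', '[0-9]') || '$');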

Access query to grab +5 or more duplicates

I have a little problem with an Access query (don't ask me why, but I cannot use a true DBMS, only Access).
I have a huge table with around 920k records.
I have to loop through all that data and grab the refs that occur more than 5 times on the same date.
table = myTable
--------------------------------------------
| id  | ref       | date       | C_ERR_ANO |
--------------------------------------------
| 1   | A12345678 | 2012/02/24 | A 4565    |
| 2   | D52245708 | 2011/05/02 | E 5246    |
| ... | ......... | ..../../.. | . ....    |
--------------------------------------------
So to sum it up a bit, I have 900000+ records.
There are duplicates on the SAME DATE (oh, by the way, there is another column I forgot to add, named C_ERR_ANO).
So I have to loop through all those rows and grab each ref based on date AND error number,
and if a ref occurs MORE than 5 times with the same error number I have to grab those rows and display them in the result.
I ended up using this query:
SELECT DISTINCT Centre.REFERENCE, Centre.DATESE, Centre.C_ERR_ANO
FROM Centre INNER JOIN (SELECT
Centre.[REFERENCE],
COUNT(*) AS `toto`,
Centre.DATESE
FROM Centre
GROUP BY REFERENCE
HAVING COUNT(*) > 5) AS Centre_1
ON Centre.REFERENCE = Centre_1.REFERENCE
AND Centre.DATESE <> Centre_1.DATESE;
but this query isn't good.
I then tried:
SELECT DATESE, REFERENCE, C_ERR_ANO, COUNT(REFERENCE) AS TOTAL
FROM (
SELECT *
FROM Centre
WHERE (((Centre.[REFERENCE]) NOT IN (SELECT [REFERENCE]
FROM [Centre] AS Tmp
GROUP BY [REFERENCE],[DATESE],[C_ERR_ANO]
HAVING Count(*)>1 AND [DATESE] = [Centre].[DATESE]
AND [C_ERR_ANO] = [Centre].[C_ERR_ANO]
AND [LIBELLE] = [Centre].[LIBELLE])))
ORDER BY Centre.[REFERENCE], Centre.[DATESE], Centre.[C_ERR_ANO])
GROUP BY REFERENCE, DATESE, C_ERR_ANO
Still not working.
I'm struggling.
Your GROUP BY clause needs to include all of the items in your SELECT. Why not use:
select Centre.DATESE, Centre.C_ERR_ANO, Count(*)
from Centre
group by Centre.DATESE, Centre.C_ERR_ANO
having Count(*) > 5
If you need other fields then you can add them, as long as you ensure the same fields appear in the select as the group by.
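If the goal is to also list the matching rows rather than just the counts, a sketch (using the Centre / REFERENCE / DATESE / C_ERR_ANO names from the attempts above) is to join the table back to that aggregate:

SELECT Centre.REFERENCE, Centre.DATESE, Centre.C_ERR_ANO
FROM Centre INNER JOIN (
    SELECT REFERENCE, DATESE, C_ERR_ANO
    FROM Centre
    GROUP BY REFERENCE, DATESE, C_ERR_ANO
    HAVING COUNT(*) > 5
) AS Dup
ON (Centre.REFERENCE = Dup.REFERENCE
    AND Centre.DATESE = Dup.DATESE
    AND Centre.C_ERR_ANO = Dup.C_ERR_ANO);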