SELECT with lower case + unaccent + multiple columns in PostgreSQL

Table schools
id | address | name
1 | Rybničná 59, Bratislava | Stredná odborná škola elektrotechnická
2 | Ul. Sibírska 1, Trnava | Stredná odborná škola elektrotechnická
What I want
From the client, if I type:
Stredná odborná
stredná odborná
stredna odborna
It must find the rows with id 1 and 2.
If I type Bratislava or bratis, it must find the row with id 1.
What I have
SELECT * FROM schools WHERE unaccent(address) LIKE ('%' || 'bratis' || '%');
I need to select from 2 columns (address and name)

To make the search case-insensitive, use ILIKE instead of LIKE. You will also want to remove the accents from the input string. Finally, combine the two criteria with AND or OR (note that you can use the same search term for both columns; use OR in that case, as sketched after the query below):
SELECT * FROM schools
WHERE unaccent(address) ILIKE ('%' || unaccent('bratis') || '%')
AND unaccent(name) ILIKE ('%' || unaccent('Stredná odborná') || '%')
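If the client sends a single search term that should match either column, the OR form mentioned above could look roughly like this (just a sketch; it assumes the unaccent extension is installed, e.g. via CREATE EXTENSION IF NOT EXISTS unaccent, and uses a placeholder term):
SELECT * FROM schools
WHERE unaccent(address) ILIKE ('%' || unaccent('stredna odborna') || '%')
OR unaccent(name) ILIKE ('%' || unaccent('stredna odborna') || '%');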

I hope this works
SELECT * FROM schools
WHERE unaccent(address || ' ' || name) ILIKE ('%' || 'bratis' || '%');
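If the search term itself can contain accents (like Stredná odborná from the question), it presumably needs to go through unaccent as well, otherwise it won't match the unaccented column text; a sketch of that variant:
SELECT * FROM schools
WHERE unaccent(address || ' ' || name) ILIKE ('%' || unaccent('Stredná odborná') || '%');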

Related

SQL query value (more chars) longer than actual value in table

I'm trying to run a Postgres query where the query string (value) has more characters than the actual value in the column:
| id | firstName |
|:---|:----------|
| 1  | bee       |
| 2  | beeWaxer  |
So, for example, if I query beeWax, I would like it to return both bee (because beeWax contains bee) and beeWaxer.
If I use the ILIKE operator it will only return beeWaxer (obviously):
SELECT * FROM table WHERE firstName ILIKE '%beeWax%';
Is there a query that will return both rows?
It seems you want a LIKE in both directions: show all names that include the search word and all names that are included in the search word. Something along the lines of:
SELECT *
FROM table
WHERE firstName ILIKE '%' || :searchword || '%'
OR :searchword ILIKE '%' || firstName || '%';
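For example, with the literal search word from the question (and the placeholder table name used above), this would presumably return both rows:
SELECT *
FROM table
WHERE firstName ILIKE '%' || 'beeWax' || '%'   -- matches beeWaxer
OR 'beeWax' ILIKE '%' || firstName || '%';     -- matches bee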
I agree with the existing answer, but if you want slightly cleaner code you can also use the case-insensitive regex operator:
SELECT *
FROM table
WHERE firstName ~* 'beeWax' OR 'beeWax' ~* firstName
You can also check, for each row, whether the search word contains that row's firstName:
select *
from table x
where exists (
    select 1 from table y
    where 'beeWax' like '%' || x.firstName || '%'
);

PostgreSQL select where array overlaps using LIKE

Is it possible to determine if an ARRAY column contains overlapping values from another array with the LIKE clause?
The && operator works, but the strings have to be exact matches:
q = """select * from articles where keywords && '{"mortgage brokers"}';""" // Exact match
Is it possible to filter rows where keywords contains values matching a substring, not the full string? Something like:
q = """select * from articles where keywords && LIKE '{"mortgage"}';""" // HOW TO FILTER keywords containing value with substring
LIKE operates on strings. To check whether two arrays overlap, you can use &&.
From the Array Functions and Operators documentation:
&& : overlap (have elements in common)
SELECT ARRAY[1,4,3] && ARRAY[2,1] arrays_overlap;
| arrays_overlap |
| -------------- |
| true |
To see if there are values in an array that are LIKE those from another array, one solution would be to unnest both arrays and compare the results with LIKE:
SELECT EXISTS (
    SELECT 1
    FROM unnest(ARRAY['abc', 'def']) my_array(x)
    INNER JOIN unnest(ARRAY['a', 'z']) my_keywords(x)
        ON my_array.x LIKE '%' || my_keywords.x || '%'
) arrays_have_similar_elements;
| arrays_have_similar_elements |
| ---------------------------- |
| true |
SELECT EXISTS (
    SELECT 1
    FROM unnest(ARRAY['abc', 'def']) my_array(x)
    INNER JOIN unnest(ARRAY['y', 'z']) my_keywords(x)
        ON my_array.x LIKE '%' || my_keywords.x || '%'
) arrays_have_similar_elements;
| arrays_have_similar_elements |
| ---------------------------- |
| false |
Demo on DB Fiddle
Thanks to #GMB for the guidance. I was able to solve the problem using the query below.
SELECT * from articles, unnest(keywords) my_array(x)
INNER JOIN unnest(ARRAY['broker']) my_keywords(x)
ON my_array.x LIKE '%' || my_keywords.x || '%';
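One note on that form: the join returns an article once per matching keyword, so an article can show up more than once. If that matters, an EXISTS variant (just a sketch against the same articles.keywords column) avoids the duplicates:
SELECT a.*
FROM articles a
WHERE EXISTS (
    SELECT 1
    FROM unnest(a.keywords) my_array(x)
    INNER JOIN unnest(ARRAY['broker']) my_keywords(x)
    ON my_array.x LIKE '%' || my_keywords.x || '%'
);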

Postgres 9.1's concat_ws equivalent in Amazon Redshift

I've got a query originally written for pg9.1 that I'm trying to fix for use on Redshift, as follows:
select concat_ws(' | ', p.gb_id, p.aro_id, p.gb_name) c
from (
    select ca.p_id,
           avg(ca.ab) as ab
    from public.fca
    join temp_s_ids s on ca.s_id = s.s_id
    group by ca.p_id
) as x
join public.dim_protein as p on x.protein_id = p.protein_id;
I've been trying to test it out on my own, but since it is built from temporary tables created by a PHP session, I haven't had any luck yet. However, my guess is that the concat_ws function isn't working as expected in Redshift.
I don't believe there is an equivalent in Redshift; you will have to roll your own. If there are no NULLs you can just use the concatenation operator ||:
SELECT p.gb_id || ' | ' || p.aro_id || ' | ' || p.gb_name c
FROM...
If you have to worry about NULLs (and their separators):
SELECT CASE WHEN p.gb_id IS NOT NULL THEN p.gb_id || ' | ' ELSE '' END
       || CASE WHEN p.aro_id IS NOT NULL THEN p.aro_id || ' | ' ELSE '' END
       || COALESCE(p.gb_name, '') AS c
FROM ...
Perhaps that can be simplified, but I believe it will do the trick.
To handle NULLs, you can do:
select trim('|' from
         coalesce('|' || p.gb_id, '') ||
         coalesce('|' || p.aro_id, '') ||
         coalesce('|' || p.gb_name, '')
       )
from . . .
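As an illustration with hypothetical values (gb_id = 'G1', aro_id missing, gb_name = 'protA'), each NULL piece collapses to an empty string and the leading separator is trimmed off:
-- '|' || NULL is NULL, so coalesce turns the missing piece into ''
select trim('|' from
         coalesce('|' || 'G1', '') ||
         coalesce('|' || NULL::varchar, '') ||
         coalesce('|' || 'protA', '')
       ) as c;
-- result: 'G1|protA'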

ORACLE SQL CSV Column comparison

I have a table with CSV values in a column. I want to use that column in a WHERE clause to check whether a subset of the CSV is present or not. For example, the table has values like:
1| 'A,B,C,D,E'
Query:
select id from tab where csv_column contains 'A,C';
This query should return 1.
How to achieve this in SQL?
You can handle this using LIKE, making sure to search for the three types of pattern for each letter/substring which you intend to match:
SELECT id
FROM yourTable
WHERE (csv_column LIKE 'A,%' OR csv_column LIKE '%,A,%' OR csv_column LIKE '%,A')
AND
(csv_column LIKE 'C,%' OR csv_column LIKE '%,C,%' OR csv_column LIKE '%,C')
Note that a match for the substring A means that the column either starts with A,, contains ,A,, or ends with ,A.
We could also write a structurally similar query using INSTR() in place of LIKE, which might even give a performance boost over using wildcards.
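For instance, one common INSTR() variant wraps the column in commas first, so every value is delimited on both sides and a single pattern per value is enough (a sketch, not tested against the original data):
SELECT id
FROM yourTable
WHERE INSTR(',' || csv_column || ',', ',A,') > 0
AND INSTR(',' || csv_column || ',', ',C,') > 0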
There's probably something funky you can do with regular expressions (sketched after the examples below), but in simple terms... if A and C will always be in that order:
csv_column LIKE '%A%C%'
otherwise
(csv_column LIKE '%A%' AND csv_column LIKE '%C%')
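As for the regular expressions, a REGEXP_LIKE sketch that bounds each value by a comma or by the start/end of the string could look like this (again, order-independent):
SELECT id
FROM yourTable
WHERE REGEXP_LIKE(csv_column, '(^|,)A(,|$)')
AND REGEXP_LIKE(csv_column, '(^|,)C(,|$)')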
If you don't want to edit your search string, this could be a way:
select *
from yourTable
where csv like '%' || replace('A,C', ',', '%') || '%'
For example:
with yourTable(id, csv) as (
    select 1, 'A,B,C,D,E' from dual union all
    select 2, 'A,C,D,E' from dual union all
    select 3, 'B,C,D,E' from dual
)
select *
from yourTable
where csv like '%' || replace('A,C', ',', '%') || '%'
gives:
ID CSV
---------- ---------
1 A,B,C,D,E
2 A,C,D,E
Consider that this will only work if the characters in the search string appear in the same order as in the CSV column; for example:
with yourTable(id, csv) as (
    select 1, 'C,A,B' from dual
)
select *
from yourTable
where csv like '%' || replace('A,C', ',', '%') || '%'
will give no results.
Why not store the values as separate columns, and then use simple predicate filtering?

Merge multiple rows into a single row

I need to merge multiple rows into a single row, with the data concatenated in the columns.
These three lines are the result of my query with an INNER JOIN:
Name | SC | Type
----------------------
name1 | 121212 | type1
name2 | 123456 | null
name3 | null | type1
I want to display the result like this:
Name | SC | Type
----------------------
name1; 121212; type1;
name2; 123456; ;
name3; ; type1;
It's a single row, with each column's data concatenated with ; and a \n at the end of each value.
The final query needs to run in both SQL Server and Oracle.
I honestly doubt you can use the same query in both Oracle and SQL Server, since they have different functions when it comes to dealing with NULL values.
For Oracle:
SELECT NVL(Name, '') || ';' as name,
       NVL(SC, '') || ';' as SC,
       NVL(type, '') || ';' as type
FROM (YourQueryHere)
For SQL Server:
SELECT isnull(Name, '') + ';' as name,
       isnull(SC, '') + ';' as SC,
       isnull(type, '') + ';' as type
FROM (YourQueryHere)
Note that, as #jarlh said, for the concatenation you can use concat(value1, value2), which should work on both SQL Server and Oracle, depending on your version; a sketch follows below.
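A sketch of that CONCAT form (assuming SQL Server 2012+, where CONCAT treats NULL as an empty string; Oracle's CONCAT only takes two arguments, which is enough here, and its concatenation simply drops NULLs):
SELECT CONCAT(Name, ';') as name,
       CONCAT(SC, ';') as SC,
       CONCAT(Type, ';') as type
FROM (YourQueryHere) t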
You could simply concatenate the fields:
SELECT ISNULL(Name, '') + ';' as Name,
       ISNULL(SC, '') + ';' as SC,
       ISNULL(Type, '') + ';' as Type
FROM
(
    -- whatever your query is goes here...
) t;