How do you query a table filtering on a substring of one of the columns?

I have a table I wish to query. It has a string column called comment which contains an ID along with other things (e.g. "123456;varA;varB"):
rowNo | comment
------+-------------------
1     | "123456;varA;varB"
2     | "987654;varA;varB"
I want to filter based on the first substring in the comment variable.
That is, I want to filter the table on rows where the first substring of comment is "123456" (which in the example would return the first row).
How do I do this?
I was thinking something along the lines of the code below, using the "string_split" function, but it doesn't work.
SELECT *,
FROM table
WHERE (SELECT value FROM STRING_SPLIT(comment,';',1)="123456")
Does anyone have any ideas?
Note, I am querying in SQL in SAS, and this is on a large dataset, so I don't want to create a new table with a new column to then query on instead. Ideally I'd want to query on the existing table directly.

You can use the SCAN() function to parse a string.
WHERE '123456'=scan(comment,1,';')

Related

DAX: How to get distinct values from a column

This is the query I'm trying.
EVALUATE
SELECTCOLUMNS('MyTable',"col1",DISTINCT(VALUES('MyTable'[Email])))
If you are trying to simply create a new, single-column table with the distinct values of an existing column, you can use the formula below.
Starting with a Fruit table that has a Location column, simply create a new table with this formula to get a list of distinct values.
Locations = DISTINCT(Fruit[Location])
This will also work (note that VALUES, unlike DISTINCT, also returns the blank row that DAX adds for referential-integrity violations, if one exists):
EVALUATE
VALUES('Table'[Column])

PostgreSQL: return row where any column value like variable

I am trying to have the user search for a value in a SQL table and be returned any row that contains that value. At the moment I can make it work with code like this:
SELECT * FROM table WHERE lower('foo') in (lower(col1), lower(col2), etc)
However, I would like it to be able to search every column and return any row LIKE 'foo'. For instance,
SELECT * FROM table WHERE (lower(col1), lower(col2), etc) like lower('%foo%')
But that doesn't work.
Any suggestions?
I believe you need multiple conditions combined with OR instead of grouping all the columns into one expression. Try this:
SELECT * FROM table
WHERE lower(col1) like lower('%foo%')
OR lower(col2) like lower('%foo%')
OR etc like lower('%foo%')
You can convert the whole row to a string and then use LIKE on the result of that:
select *
from the_table
where lower(the_table::text) like '%foo%';
the_table::text returns all columns of each row as a comma-separated list enclosed in parentheses, e.g. (42,Arthur,Dent). So the above is not 100% identical to a LIKE condition applied to each column, but it probably does what you want.

SQL: match a search string's characters in any order

I would like to ask if it is possible to do this:
For example, the search string is '009' (consider the digits as a string).
Is it possible to have a query that will return any occurrence of this in the database, not considering the order?
For this example it would return
'009'
'090'
'900'
given these exist in the database. Thanks!
Use the LIKE operator.
For example:
SELECT Marks FROM Report WHERE Marks LIKE '%009%' OR Marks LIKE '%090%' OR Marks LIKE '%900%'
Split the string into individual characters, select all rows containing the first character and put them in a temporary table, then select all rows from the temporary table that contain the second character and put these in a temporary table, then select all rows from that temporary table that contain the third character.
Of course, there are probably many ways to optimize this, but I see no reason why it would not be possible to make a query like that work.
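One way to avoid the temporary tables altogether is to encode any repeated character in the pattern itself. A minimal sketch in SQL Server syntax (use length() instead of LEN() elsewhere), reusing the Report/Marks names from the answer above purely for illustration:
SELECT Marks
FROM Report
WHERE LEN(Marks) = 3         -- same length as the search string
  AND Marks LIKE '%0%0%'     -- two '0' characters, in any positions
  AND Marks LIKE '%9%';      -- at least one '9' character
With the length fixed at 3 and two 0s required, the remaining character must be the 9, so this matches exactly '009', '090' and '900'.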
It cannot be achieved in a straightforward way, as there is no function that sorts the characters of a value the way lower() and upper() transform them.
But there are workarounds, such as:
Suppose you are running the query against COL A; maintain another column SORTED_A where, at the application level, you keep the sorted characters of COL A.
Then when you execute the query, sort the search token and run the SELECT matching the sorted search token against the SORTED_A column.
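For example (a sketch; my_table, COL_A and SORTED_A are hypothetical names): the application stores '900' with SORTED_A = '009', and sorts the search token the same way before querying.
SELECT COL_A
FROM my_table              -- hypothetical table
WHERE SORTED_A = '009';    -- the search token '900' sorted is '009'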

SQLite WHERE-Clause for every column?

Does SQLite offer a way to search every column of a table for a searchkey?
SELECT * FROM table WHERE id LIKE ...
selects all rows where ... was found in the column id. But instead of searching only the column id, I want to search every column for the search string. I believe this does not work:
SELECT * FROM table WHERE * LIKE ...
Is that possible? Or what would be the next easy way?
I use Python 3 to query the SQLite database. Should I instead search through the returned data after the query has executed?
A simple trick you can do is:
SELECT *
FROM table
WHERE ((col1 || col2 || col3 || col4) LIKE '%something%')
This will select the record if any of these 4 columns contains the word "something" (|| is SQLite's string-concatenation operator; + would attempt numeric addition).
No; you would have to list or concatenate every column in the query, or reorganize your database so that you have fewer columns.
SQLite has full-text search tables where you can search all columns at once, but such tables do not work efficiently with any other queries.
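If full-text search fits your case, a minimal FTS5 sketch could look like this (the docs table and its columns are illustrative, not from the question):
CREATE VIRTUAL TABLE docs USING fts5(col1, col2, col3);
-- populate docs, then search all columns at once:
SELECT * FROM docs WHERE docs MATCH 'something';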
I could not comment on #raging-bull's answer, so I had to write a new one. My problem was that I have columns with NULL values and got no results, because the concatenated "search string" was NULL.
Using coalesce solved that problem: SQLite takes the column content, or an empty string ('') if it is NULL, so there is always an actual search string available.
SELECT *
FROM table
WHERE (coalesce(col1,'') || coalesce(col2,'') || coalesce(col3,'') || coalesce(col4,'')) LIKE '%something%'
I'm not quite sure if I understood your question.
If you want the whole row returned, when id=searchkey, then:
select * from table where id=searchkey;
If you want to have specific columns from the row with the correct searchkey:
select col1, col2, col3 from table where id=searchkey;
If you want to search multiple columns for the "id": First narrow down which columns this could be found in - you don't want to search the whole table! Then:
select * from table where col1=searchkey or col2=searchkey or col3=searchkey;

SQL Query: Modify records based on a secondary table

I have two tables in a PostgreSQL database.
The first table contains an ID and a text field with up to 200 characters, and the second table is a data definition table with one column that contains smileys or acronyms and a second column that translates them into plain readable English.
The number of records in table 1 is about 1200 and the number in table two is about 300.
I wish to write a SQL statement which will convert any text speak in column 1 in table one into normal readable language based on the definitions in Table 2.
So, for example, if the value in table 1 reads: Finally Finished :)
the transformed text would be something like: Finally Finished Smiles or smiling,
where the definition is pulled from the second table.
Note the smiley could be anywhere in the text in column one and could be any one of the three hundred entries.
Does anyone know if this is possible?
Yes. Do you want to do it entirely in SQL, or are you writing a brief bit of code to do this? I'm not entirely sure how to do it all in SQL, but I would consider something like the pseudocode below:
FOR each row IN (SELECT textToTranslate FROM Table_1):
    oldText = row.textToTranslate
    modifiedText = oldText
    FOR each word IN split(oldText, ' '):        -- split on some delimiter
        queryResult = SELECT translation FROM Table_2 WHERE pretranslate = word
        IF queryResult != NULL:
            modifiedText = modifiedText.replace(word, queryResult)
    UPDATE Table_1 SET translatedText = modifiedText WHERE textToTranslate = oldText
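Since this is PostgreSQL, the whole thing can also be done server-side by looping over the definitions table instead of over the words. A minimal sketch in PL/pgSQL, assuming the table and column names from the pseudocode above (they stand in for the real schema):
DO $$
DECLARE
    d RECORD;
BEGIN
    -- start from the original text, then apply each definition in turn
    UPDATE Table_1 SET translatedText = textToTranslate;
    FOR d IN SELECT pretranslate, translation FROM Table_2 LOOP
        UPDATE Table_1
        SET translatedText = replace(translatedText, d.pretranslate, d.translation)
        WHERE translatedText LIKE '%' || d.pretranslate || '%';
    END LOOP;
END $$;
With roughly 300 definitions and 1200 rows this runs 300 small UPDATEs, which should be perfectly manageable at that size.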