Query MySQL with LIKE %...% and not pull false records - sql

I have a database that contains two fields that collect multiple values. For instance, one is colors, where one row might be "red, blue, navyblue, lightblue, orange". The other field uses numbers, we'll call it colorID, where one row might be "1, 10, 23, 110, 239."
Now, let's say I want to SELECT * FROM my_table WHERE colors LIKE '%blue%'; That query will give me all the rows with "blue," but also rows with "navyblue" or "lightblue" that may or may not contain the standalone value "blue." Likewise, with colorID, a query for WHERE colorID LIKE '%1%' will pull up a lot more rows than I want.
What's the correct syntax to properly query the database and only return correct results? FWIW, the fields are both set as TEXT (due to the commas). Is there a better way to store the data that would make searching easier and more accurate?

You really should look at changing your db schema. One option would be to create a table that holds colours with an INT as the primary key. You could then create a pivot table to link my_table to colours:
CREATE TABLE `colours` (
  `id` INT NOT NULL,
  `colour` VARCHAR(255) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE = MYISAM;

CREATE TABLE `mytable_to_colours` (
  `mytable_id` INT NOT NULL,
  `colour_id` INT NOT NULL
) ENGINE = MYISAM;
So your query could look like this, where '1' is the id of blue (and more likely how you would be referencing it):
SELECT *
FROM my_table
JOIN mytable_to_colours ON (my_table.id = mytable_to_colours.mytable_id)
WHERE colour_id = '1'
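If you would rather look colours up by name than by id, you can also join the colours table itself; a small sketch against the schema above (assuming my_table has an id column, as the join already implies):
SELECT my_table.*
FROM my_table
JOIN mytable_to_colours ON (my_table.id = mytable_to_colours.mytable_id)
JOIN colours ON (colours.id = mytable_to_colours.colour_id)
WHERE colours.colour = 'blue'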

If you want to search in your existing table you can use the following query:
SELECT *
FROM my_table
WHERE colors LIKE 'blue,%'
OR colors LIKE '%,blue'
OR colors LIKE '%,blue,%'
OR colors = 'blue'
However, it is much better to create separate colors and numbers tables and set up many-to-many relationships, as described above.
EDITED: Just like #seengee has written.
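Alternatively, MySQL's built-in FIND_IN_SET can do the same whole-value check without the chain of ORs; a minimal sketch, assuming the colors column from the question (the REPLACE strips the space that follows each comma):
SELECT *
FROM my_table
WHERE FIND_IN_SET('blue', REPLACE(colors, ', ', ',')) > 0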

MySQL has a REGEXP operator that will allow you to match something like "[^a-z]blue|^blue". But you should really consider not doing it this way at all. A single table containing one row for each color (with multiple rows groupable by a common ID) would be far more scalable.
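For illustration, a whole-word match might look like the sketch below; [[:<:]] and [[:>:]] are word-boundary markers in MySQL's older regex library, while MySQL 8.0+ (ICU) uses \b instead:
SELECT *
FROM my_table
WHERE colors REGEXP '[[:<:]]blue[[:>:]]'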

The standard answer would be to normalize the data by putting a colorSelID (or whatever) in this table, then having another table with two columns, mapping from 'colorSelID' to the individual colorIDs, so your data above would turn into something like:
other columns | colorSelId
other data    | 1
Then in the colors table, you'd have:
colorSelId | ColorId
1 | 1
1 | 10
1 | 23
1 | 110
1 | 239
Then, when you want to find all the items that match colorID 10, you just search on colorID, and join that ColorSelId back to your main table to get all the items with a colorID of 10:
select *
from main_table
join color_table on main_table.ColorSelId = color_table.ColorSelId
where color_table.colorId = 10
Edit: note that this will also probably speed up your searches a lot, at least assuming you index on ColorId in the color table, and ColorSelId in the main table. A search on '%x%' will (almost?) always do a full table scan, whereas this will use the index.
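A quick sketch of the indexes mentioned above (the index names are made up; adjust to your actual schema):
create index idx_color_table_colorid on color_table (colorId);
create index idx_main_table_colorselid on main_table (ColorSelId);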

Perhaps this will help you:
SELECT * FROM table WHERE column REGEXP '(^|, ?)X(,|$)'; -- where X is a number; returns all rows containing X as a whole value in your column
SELECT * FROM table WHERE column REGEXP '^X(,|$)'; -- where X is a number; returns all rows containing X as the first value in your column
Good luck!

None of the solutions suggested so far seem likely to work, assuming I understand your question. Short of splitting the comma-delimited string into a table and joining, you can do this (using 'blue' as an example):
WHERE CONCAT(', ', myTable.ValueList, ',') LIKE '%, blue,%'
If you aren't meticulous about spaces after commas, you would need to replace spaces in ValueList with empty strings as part of this code (and remove the space in ', ').
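Putting that together for MySQL, a sketch against the colors column from the question, stripping spaces so the delimiters are consistent:
SELECT *
FROM my_table
WHERE CONCAT(',', REPLACE(colors, ' ', ''), ',') LIKE '%,blue,%'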

Related

SQL Server where condition on column with separated values

I have a table with a column that can have values separated by ",".
Example column "group":
id | group
---+----------
 1 | 10,20,30
 2 | 280
 3 | 20
I want to create a SELECT with a WHERE condition on the group column, so that searching for example 20 returns rows 1 and 3, or searching by 20,280 returns rows 1 and 2.
Can you help me please?
As pointed out in the comments, storing multiple values in a single row is not a good idea.
Coming to your question, you can use one of the split string functions from here to split the comma-separated values into a table and then query them:
create table #temp
(
    id int,
    columnss varchar(100)
)

insert into #temp
values
(1, '10,20,30'),
(2, '280'),
(3, '20')

select *
from #temp
cross apply
(
    select * from dbo.SplitStrings_Numbers(columnss, ',')
) b
where item in (20)

id  columnss  Item
1   10,20,30  20
3   20        20
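If you are on SQL Server 2016 or later, the built-in STRING_SPLIT can stand in for the custom splitter; a sketch against the #temp table above:
select t.id, t.columnss, s.value as item
from #temp t
cross apply string_split(t.columnss, ',') s
where s.value = '20'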
The short answer is: don't do it.
Instead normalize your tables to at least 3NF. If you don't know what database normalization is, you need to do some reading.
If you absolutely have to do it (e.g. this is a legacy system and you cannot change the table structure), there are several articles on string splitting with TSQL and at least a couple that have done extensive benchmarks on various methods available (e.g. see: http://sqlperformance.com/2012/07/t-sql-queries/split-strings)
Since you only want to search, you don't really need to split the strings, so you can write something like:
SELECT id, list
FROM t
WHERE ',' + list + ',' LIKE '%,' + @searchValue + ',%'
Where t(id int, list varchar(max)) is the table to search and @searchValue is the value you are looking for. If you need to search for more than one value, you have to add those to a table and use a join or subquery.
E.g. if s(searchValue varchar(max)) is the table of values to search then:
SELECT distinct t.id, t.list
FROM t INNER JOIN s
ON ','+t.list+',' LIKE '%,'+s.searchValue+',%'
If you need to pass those search values from ADO.NET, consider table-valued parameters.
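For example, a minimal sketch using a table variable to hold the search values (not a table-valued parameter, but the same join pattern applies; the variable and column names here are assumptions, matching the t(id, list) shape above):
DECLARE @s TABLE (searchValue varchar(max));
INSERT INTO @s (searchValue) VALUES ('20'), ('280');

SELECT DISTINCT t.id, t.list
FROM t INNER JOIN @s s
ON ',' + t.list + ',' LIKE '%,' + s.searchValue + ',%';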

Assign unique IDs to three tables in a SELECT query; the IDs should not overlap

I am working on SQL Server and I want to assign unique IDs to rows being pulled from three tables, but the IDs should not overlap.
Let's say table one contains cars data, table two contains house data, and table three contains city data. I want to pull all this data into a single table with a unique ID for each row: say cars from 1-100, houses from 101-200, and cities from 300-400.
How can I achieve this using only SELECT queries? I can't use INSERT statements.
To be more precise,
I have one table with computer systems/servers host information, which has IDs from 500-700.
I have other tables: storage devices (IDs from 200-600) and routers (IDs from 700-900). I have already collected the systems data. Now I want to pull the storage and router data in such a way that the consolidated data at my end has a unique ID for every record. This needs to be done only by using SELECT queries.
I was using SELECT ABS(CAST(CAST(NEWID() AS VARBINARY) AS INT)) AS UniqueID and storing it in temp tables (separate for storage and routers). But I believe that this may lead to some overlapping. Please suggest any other way to do this.
An extension to this question:
Creating consistent integer from a string:
All I have is various strings like this
String1
String2Hello123
String3HelloHowAreYou
I need to convert them into positive integers, something like
String1 = 12
String2Hello123 = 25
String3HelloHowAreYou = 4567
Note that I am not expecting the numbers in any order. The only requirement is that the number generated for one string should not conflict with the others.
Now, later, after a reboot, suppose I no longer have the 2nd string and instead there is a new string:
String1 = 12
String3HelloHowAreYou = 4567
String2Hello123HowAreyou = 28
Note that the number 25 generated for the 2nd string earlier cannot be used for the new string.
Using extra storage (temp tables) is not allowed.
If you don't care where the data comes from:
with dat as (
    select 't1' src, id from table1
    union all
    select 't2' src, id from table2
    union all
    select 't3' src, id from table3
)
select *
     , id2 = row_number() over (order by _some_column_)
from dat
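If the new ids must also sit above an already-used range (the question mentions existing ids up to 900), one hedged variation is to add an offset in the final select above; the 900 here is just taken from the question:
select *
     , id2 = 900 + row_number() over (order by src, id)
from dat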

order by field with more than 10000 ids

I need to do a specific ordering using ORDER BY FIELD.
select * from table order by field(id,3,4,1,2.......upto 10000 ids)
As the required ordering cannot be computed within SQL, how much will this affect performance, and is it feasible to do?
Updates from the comments:
Ordering depends on user and category IDs and can be anything the user wants.
The ordering specification changes (about) daily.
So, we need a custom ordering that depends on the user and category and this ordering needs to change daily.
The easiest way would be to put your ordering in a separate table (called ordering_table in this example):
id | position
----+----------
1 | 11
2 | 42
3 | 23
etc.
The above would mean "put an id of 1 at position 11, 2 at position 42, 3 at position 23, ...". Then you can join that ordering table in:
SELECT t.id, t.col1, t.col2
FROM some_table t
JOIN ordering_table o ON (t.id = o.id)
ORDER BY o.position
Where ordering_table is the table (as above) that defines your strange ordering. This approach simply represents your ordering function as a table (any function with a finite domain is, essentially, just a table after all).
This "ordering table" approach should work fine as long as the ordering table is complete.
If you only need this strange ordering in one place then you could merge the position column into your main table and add NOT NULL and UNIQUE constraints on that column to make sure you cover everything and have a consistent ordering.
Further commenting indicates that you want different orderings for different users and categories and that the ordering will change on a daily basis. You could make separate tables for each condition (which would lead to a combinatorial explosion) or, as Mikael Eriksson and ypercube suggest, add a couple more columns to the ordering table to hold the user and category:
CREATE TABLE ordering_table (
thing_id INT NOT NULL,
position INT NOT NULL,
user_id INT NOT NULL,
category_id INT NOT NULL
);
The thing_id, user_id, and category_id would be foreign keys to their respective tables, and you'd probably want to index all the columns in ordering_table, but spending a couple of minutes looking at the query plans to see whether the indexes actually get used would be worthwhile. You could also make all four columns the primary key to avoid duplicates. Then, the lookup query would be something like this:
SELECT t.id, t.col1, t.col2
FROM some_table t
LEFT JOIN ordering_table o
ON (t.id = o.thing_id AND o.user_id = $user AND o.category_id = $cat)
ORDER BY COALESCE(o.position, 99999)
Where $user and $cat are the user and category IDs (respectively). Note the change to a LEFT JOIN and the addition of COALESCE to allow for missing rows in ordering_table, these changes will push anything that doesn't have a specified position in the order to the bottom of the list rather than removing them from the results completely.
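A rough sketch of the constraints and index suggested above, assuming MySQL and the ordering_table definition given earlier (the index name is made up):
ALTER TABLE ordering_table
  ADD PRIMARY KEY (thing_id, user_id, category_id, position);
CREATE INDEX idx_ordering_user_cat
  ON ordering_table (user_id, category_id, thing_id);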

Selecting distinct rows based on values from left table

Using Postgres. Here's my scenario:
I have three different tables. One is a title table. The second is a genre table. The third table is used to join the two. When I designed the database, I expected that each title would have one top-level genre. After filling it with data, I discovered that there were titles that had two, and sometimes three, top-level genres.
I wrote a query that retrieves titles and their top level genres. This obviously requires that I join the two tables. For those that only have one top level genre, there is one record. For those that have more, there are multiple records.
I realize I'll probably have to write a custom function of some kind that will handle this for me, but I thought I'd ask if it's possible to do this without doing so just to make sure I'm not missing anything.
Is it possible to write a query that will allow me to select all of the distinct titles regardless of the number of genres that it has, but also include the genre? Or even better, a query that would give me a comma delimited string of genres when there are multiples?
Thanks in advance!
Sounds like a job for array_agg to me. With tables like this:
create table t (id int not null, title varchar not null);
create table g (id int not null, name varchar not null);
create table tg (t int not null, g int not null);
You could do something like this:
SELECT t.title, array_agg(g.name)
FROM t, tg, g
WHERE t.id = tg.t
AND tg.g = g.id
GROUP BY t.title, t.id
to get:
title | array_agg
-------+-----------------------
one | {g-one,g-two,g-three}
three | {g-three}
two | {g-two}
Then just unpack the arrays as needed. If for some reason you really want a comma delimited string instead of an array, then string_agg is your friend:
SELECT t.title, string_agg(g.name, ',')
FROM t, tg, g
WHERE t.id = tg.t
AND tg.g = g.id
GROUP BY t.title, t.id
and you'll get something like this:
title | string_agg
-------+---------------------
one | g-one,g-two,g-three
three | g-three
two | g-two
I'd go with the array approach so that you wouldn't have to worry about reserving a character for the delimiter or having to escape (and then unescape) the delimiter while aggregating.
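If you also need to filter on the aggregated genres, for example keeping only titles that include a particular genre, a sketch along the same lines (reusing the g-two value from the sample output; adjust to your real genre names):
SELECT t.title, array_agg(g.name)
FROM t, tg, g
WHERE t.id = tg.t
AND tg.g = g.id
GROUP BY t.title, t.id
HAVING 'g-two' = ANY (array_agg(g.name))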
Have a look at this thread which might answer your question.

Fetch a single field from DB table into itab

I want to fetch a field, say excep_point, from a transparent table z_accounts for the combination of company_code and account_number. How can I do this in ABAP SQL?
Assume that table structure is
|company_code | account_number | excep_point |
Assuming you have the full primary key...
data: gv_excep_point type zaccounts-excep_point.
select single excep_point
into gv_excep_point
from zaccounts
where company_code = some_company_code
and account_number = some_account_number.
If you don't have the full PK and there could be multiple values for excep_point:
data: gt_excep_points type table of zaccounts-excep_point.
select excep_point
into table gt_excep_points
from zaccounts
where company_code = some_company_code
and account_number = some_account_number.
There is at least another variation, but those are 2 I use most often.
For information only: when you select data into an internal table, you can write expressions that combine different source fields into one target field. For example, you have an internal table (itab) with two fields "A" and "B", and you are going to select data from a DB table (dbtab) which has 6 columns - "z", "x", "y", "u", "v", "w" - each of type CHAR2. You aim to combine "z", "x", "y", "u" into the "A" field of the internal table and "v", "w" into the "B" field. You can write simple code:
select z as A+0(2)
x as A+2(2)
y as A+4(2)
u as A+6(2)
v as B+0(2)
w as B+2(2) FROM dbtab
INTO CORRESPONDING FIELDS OF TABLE itab
WHERE <where condition>.
This simple code gets the job done very simply.
In addition to Bryan's answer, here is the official online documentation about Open SQL.