**EDIT**
Ok, so I found the problem: the minimum word length for the search was 4. I changed it to 3, but now it only finds the row 1 data and not the row 2 data as well...
-----original question:----
I have a MyISAM table in phpMyAdmin like this:
table name: `users`
column name: `name`
row 1 data: 'dan'
row 2 data: 'dan252'
(it's just the important part of it)
The `name` column has a FULLTEXT index. I'm using this query:
SELECT * FROM `users` WHERE MATCH(`name`) AGAINST('dan')
but phpmyadmin returns:
MySQL returned an empty result set (i.e. zero rows). ( Query took 0.0004 sec )
Why is it not finding anything?
MATCH only works on columns with a FULLTEXT index, and FULLTEXT indexing only works on MyISAM tables.
Secondly, 'dan' is probably too short to use with MATCH (the default minimum indexed word length is 4).
Thirdly, if your search term matches more than 50% of your rows, the term is considered too common and the search fails.
Have a read here.
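For reference, a rough sketch of how the minimum word length is usually lowered (the exact config file and section depend on your setup); note that after changing it, the MyISAM FULLTEXT index has to be rebuilt before shorter words like 'dan' are actually indexed:
-- In my.cnf (or my.ini on Windows), under the [mysqld] section, set:
--     ft_min_word_len = 3
-- then restart MySQL. After the restart, rebuild the FULLTEXT index so the
-- shorter words get indexed (QUICK works for MyISAM tables):
REPAIR TABLE users QUICK;
-- Check the value currently in effect:
SHOW VARIABLES LIKE 'ft_min_word_len';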
Try this.
SELECT * FROM users WHERE MATCH (name) AGAINST ('dan');
I guess you are using apostrophes for the table name and in MATCH. This is wrong. Try it and let me know if this works.
I guess you are trying to return all rows with 'dan' in them.
You could try the LIKE operator:
SELECT * FROM users WHERE name LIKE '%dan%'
That will return 'dan' and 'dan252', and (if it were in the table) '123dan456'.
You could also try
SELECT * FROM users WHERE MATCH(name) AGAINST('+dan*' IN BOOLEAN MODE)
That should return the same thing and is probably more efficient.
If you are only trying to match 'dan', then use the query #Kirishna suggests:
SELECT * FROM users WHERE MATCH (name) AGAINST ('dan')
So in Hue I've entered a simple query (it has to be as simple as possible, as others will run it too) to just get a limit of 20 records. The query is:
Select * from tablename Limit 20
The problem is that the query returns column names in this format: tablename.columnname
I need JUST the column name to be returned, NOT the table name referenced at all. How is this achieved without writing out a long statement spelling out all of the columns (the only other way I currently know)?
Thanks in advance!
Not the best, but you could right-click the '*' in SELECT * and then expand all the column names.
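For example, after expanding, the query might end up looking roughly like this (the column names here are hypothetical placeholders, since the real table definition isn't shown); with the columns listed explicitly, the result headers should show only the column names:
Select id, name, created_date from tablename Limit 20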
I am trying to write code that allows me to check if there are any cases of a particular pattern inside a table.
The way I am currently doing it is with something like
select count(*)
from database.table
where column like (some pattern)
and seeing if the count is greater than 0.
I am curious to see if there is any way to speed up this process, as this type of pattern check happens in a loop in my query and all I need to know is whether there is even one such case, rather than the total number of cases.
Any suggestions will be appreciated.
EDIT: I am running this inside a Teradata stored procedure for the purpose of data quality validation.
Using EXISTS will be faster if you don't actually need to know how many matches there are. Something like this would work:
IF EXISTS (
SELECT *
FROM bigTbl
WHERE label LIKE '%test%'
)
SELECT 'match'
ELSE
SELECT 'no match'
This is faster because once it finds a single match it can return a result.
If you don't need the actual count, the most efficient way in Teradata will use EXISTS:
select 1
where exists
( select *
from database.table
where column like (some pattern)
)
This will return an empty result set if the pattern doesn't exist.
In terms of performance, a better approach is to:
Select the result set based on your pattern;
Limit the result set's size to 1;
Check whether a result was returned.
Doing this lets the database engine stop as soon as the first matching record is encountered, rather than scanning the whole table to count every match.
The actual query depends on the database you're using. In MySQL, it would look something like:
SELECT id FROM database.table WHERE column LIKE '%some pattern%' LIMIT 1;
In Oracle it would look like this:
SELECT id FROM database.table WHERE column LIKE '%some pattern%' AND ROWNUM = 1;
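Since the edit mentions Teradata, a rough equivalent there (reusing the same placeholder table and column names) would use TOP:
SELECT TOP 1 id FROM database.table WHERE column LIKE '%some pattern%';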
Does SQLite offer a way to search every column of a table for a searchkey?
SELECT * FROM table WHERE id LIKE ...
Selects all rows where ... was found in the column id. But instead of searching only the column id, I want to search every column for the search string. I believe this does not work:
SELECT * FROM table WHERE * LIKE ...
Is that possible? Or what would be the next easiest way?
I use Python 3 to query the SQLite database. Should I go the route of searching through the dictionary after the query has been executed and the data returned?
A simple trick you can do is:
SELECT *
FROM table
WHERE ((col1 || col2 || col3 || col4) LIKE '%something%')
This will select the record if any of these 4 columns contain the word "something".
No; you would have to list or concatenate every column in the query, or reorganize your database so that you have fewer columns.
SQLite has full-text search tables where you can search all columns at once, but such tables do not work efficiently with any other queries.
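For completeness, a minimal sketch of that full-text route using FTS5 (the table and column names are hypothetical, and FTS5 availability depends on how your SQLite was built); remember the caveat above that such tables only pay off for MATCH queries:
CREATE VIRTUAL TABLE docs_fts USING fts5(col1, col2, col3);
INSERT INTO docs_fts (col1, col2, col3) SELECT col1, col2, col3 FROM docs;
SELECT * FROM docs_fts WHERE docs_fts MATCH 'something';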
I could not comment on #raging-bull's answer, so I had to write a new one. My problem was that I have columns with NULL values and got no results, because the concatenated search string became NULL.
Using coalesce solved that problem: SQLite takes the column content, or an empty string ('') if it is NULL, so there is always an actual string to search.
SELECT *
FROM table
WHERE (coalesce(col1,'') || coalesce(col2,'') || coalesce(col3,'') || coalesce(col4,'')) LIKE '%something%'
I'm not quite sure if I understood your question.
If you want the whole row returned, when id=searchkey, then:
select * from table where id=searchkey;
If you want to have specific columns from the row with the correct searchkey:
select col1, col2, col3 from table where id=searchkey;
If you want to search multiple columns for the searchkey: first narrow down which columns it could be found in (you don't want to search the whole table!). Then:
select * from table where col1=searchkey or col2=searchkey or col3=searchkey;
I have a SQLite table containing records of variable-length number prefixes. I want to be able to find the most complete prefix against another variable-length number in the most efficient way.
E.g. the table contains a column called prefix with the following numbers:
1. 1234
2. 12345
3. 123456
What would be an efficient SQLite query to find the second record as the most complete match against 12345999?
Thanks.
A neat trick here is to reverse the LIKE clause: rather than saying
WHERE prefix LIKE '...something...'
as you would often do, turn the prefix into the pattern by appending a % to the end and comparing it to your input as the fixed string. Order by length of prefix descending, and pick the top 1 result.
I've never used SQLite before, but I just downloaded it and this works fine:
sqlite> CREATE TABLE whatever(prefix VARCHAR(100));
sqlite> INSERT INTO WHATEVER(prefix) VALUES ('1234');
sqlite> INSERT INTO WHATEVER(prefix) VALUES ('12345');
sqlite> INSERT INTO WHATEVER(prefix) VALUES ('123456');
sqlite> SELECT * FROM whatever WHERE '12345999' LIKE (prefix || '%')
ORDER BY length(prefix) DESC LIMIT 1;
output:
12345
Personally I use the following method; it will use indexes:
The IN list ('1','12','123','1234','12345','123459','1234599','12345999','123459999') should be generated by the client from the input number.
SELECT * FROM whatever WHERE prefix in
('1','12','123','1234','12345','123459','1234599','12345999','123459999')
ORDER BY length(prefix) DESC LIMIT 1;
select foo, 1 quality from bar where foo like '123%'
union
select foo, 2 quality from bar where foo like '1234%'
order by quality desc limit 1
I haven't tested it, but the idea would work in other dialects of SQL
A couple of assumptions:
You are joining with some other table, so you want to know the largest variable-length prefix for each record in the table you are joining with.
Your table of prefixes actually contains more than just the three you provide in your example... otherwise you could hardcode the logic and move on.
prefix_table.prefix
1234
12345
123456
etc.
foo.field
12345999
123999
select
  a.field,
  b.prefix,
  max(length(b.prefix)) as prefix_length
from
  foo a inner join prefix_table b on b.prefix = substr(a.field, 1, length(b.prefix))
group by
  a.field
-- grouping by a.field alone keeps one row per record; in SQLite the bare
-- b.prefix column is taken from the row that produced the max(), i.e. the
-- longest matching prefix
Note that this is untested, but logically it should make sense.
Without resorting to a specialized index, the best performing strategy may be to hunt for the answer.
Issue a LIKE query for each possible prefix, starting with the longest. Stop once you get rows returned.
It's certainly not the prettiest way to achieve what you want, but as opposed to the other suggestions, indexes will be considered by the query planner. As always, it depends on your actual data; in particular, on how many rows are in your table and how long the average hunt will be.
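A sketch of that hunt for the input 12345999 (run these one at a time, longest candidate first, and stop at the first query that returns a row; each lookup is an exact match, so an index on prefix can be used):
SELECT * FROM whatever WHERE prefix = '12345999';
SELECT * FROM whatever WHERE prefix = '1234599';
SELECT * FROM whatever WHERE prefix = '123459';
-- ...and so on, shortening the candidate until a row comes back (or you reach '1')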
I have a table, users, in an Oracle 9.2.0.6 database. Two of the fields are varchar - last_name and first_name.
When rows are inserted into this table, the first name and last name fields are supposed to be in all upper case, but somehow some values in these two fields are mixed case.
I want to run a query that will show me all of the rows in the table that have first or last names with lowercase characters in it.
I searched the net and found REGEXP_LIKE, but that must be for newer versions of Oracle; it doesn't seem to work for me.
Another thing I tried was to translate "abcde...z" to "$$$$$...$" and then search for a '$' in my field, but there has to be a better way?
Thanks in advance!
How about this:
select id, first, last from mytable
where first != upper(first) or last != upper(last);
I think BQ's SQL and Justin's second SQL will work, because in this scenario:
first_name last_name
---------- ---------
bob johnson
Bob Johnson
BOB JOHNSON
I want my query to return the first 2 rows.
I just want to make sure that this will be an efficient query though - my table has 500 million rows in it.
When you say upper(first_name) != first_name, is "first_name" always referring to the current row that Oracle is looking at? I was afraid to use this method at first because I was worried I would end up joining this table to itself, but the way you both wrote the SQL, it appears that the equality check only operates on a row-by-row basis, which would work for me.
If you are on Oracle 10g or higher you can use the example below. Consider that you need to find the rows where any of the letters in a column are lowercase.
Column1
.......
MISS
miss
MiSS
In the above example, if you need to find the values miss and MiSS, then you could use the query below:
SELECT * FROM YOU_TABLE WHERE REGEXP_LIKE(COLUMN1,'[a-z]');
Try this:
SELECT * FROM YOU_TABLE WHERE REGEXP_LIKE(COLUMN1,'[a-z]','c'); -- returns rows containing lowercase letters (Miss, miss)
SELECT * FROM YOU_TABLE WHERE REGEXP_LIKE(COLUMN1,'[A-Z]','c'); -- returns rows containing uppercase letters (Miss, MISS)
SELECT *
FROM mytable
WHERE first_name IN (SELECT first_name
                     FROM mytable
                     MINUS
                     SELECT UPPER(first_name)
                     FROM mytable)
For SQL Server, where the DB collation setting is case-insensitive, use the following:
SELECT * FROM tbl_user WHERE LEFT(username,1) COLLATE Latin1_General_CS_AI <> UPPER(LEFT(username,1))
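If you want to compare the whole value rather than just the first character, the same collation trick should work on the full string (an untested sketch, using the same hypothetical table and column):
SELECT * FROM tbl_user WHERE username COLLATE Latin1_General_CS_AI <> UPPER(username)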