I have a table with two fields defined as varchar(15). I want to know which records have the same value in both fields:
select * from table where field1 = field2
This returns an empty result, though I know there are records that match. What am I doing wrong?
Without further information (see my comments), the problem might be a corrupted index. What happens when you try:
select * from table where field1 || '' = field2 || ''
Using this query will make Firebird ignore the index (if any) and do a full table scan.
If this does return a result, you will want to validate and maybe repair the database (using gfix), and then either back up and restore the database or drop and recreate the index.
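For reference, a validate-and-repair pass with gfix looks roughly like this (the database path and credentials are placeholders):
gfix -validate -full /path/to/database.fdb -user SYSDBA -password masterkey
gfix -mend /path/to/database.fdb -user SYSDBA -password masterkey
-validate -full checks record and page structures and reports corruption; -mend marks corrupt records as unavailable so a subsequent gbak backup/restore can skip them.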
If I wanted to move data from one field into another field within the same record, and then remove the data from the original field, can this reliably be done in one SQL statement? And is it dependent on the database provider?
An example of the SQL I am thinking about is below
UPDATE table SET field2 = field1, field1 = '' WHERE keyField = 1
Thanks for any help
Yes, it is reliable, because major databases like Oracle, SQL Server, MySQL, etc. are transactional, which ensures the ACID properties. In other words, it's an all-or-nothing scenario: all changes happen or none do (if the transaction is rolled back, for example).
In standard SQL it is also not sequential: every expression on the right-hand side of SET is evaluated against the original row values, so this would yield the same result:
UPDATE table SET field1 = '', field2 = field1 WHERE keyField = 1
Rather, your SQL is doing something like this:
UPDATE table
SET field2_NewValue = field1_OriginalValue
, field1_NewValue = ''
WHERE keyField = 1
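One caveat: MySQL documents that single-table UPDATE assignments are generally evaluated from left to right, so there the reversed order would not swap the values. A sketch of the difference, using the same hypothetical table:
UPDATE table SET field1 = '', field2 = field1 WHERE keyField = 1
In MySQL, field1 is overwritten first, so field2 also ends up as ''; in a standard-compliant engine both orders perform the swap.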
So I am comparing two Oracle databases by grabbing random rows in database A and searching for these rows in database B based on their key columns. Then I compare the returned rows in Java.
I am using the following query to find rows in database B using the key columns from database A:
select * from mytable
Where (Key_Column_A,Key_Column_B,Key_Column_C)
in (('1','A', 'cat'),('2','B', 'dog'),('3','C', ''));
This works just fine for the first two sets of keys, but the third key ('3','C', '') does not work, because there is a null value in the third column (Oracle treats the empty string '' as NULL). Changing the statement to ('3','C', NULL), or changing the SQL to
select * from mytable
Where (Key_Column_A,Key_Column_B,Key_Column_C)
in ((('1','A', 'cat'),('2','B', 'dog'),('3','C', ''))
OR (Key_Column_A,Key_Column_B,Key_Column_C) IS NULL);
will not work either.
Is there a way to include a null column in an IN clause? And if not, is there a way to do the same thing efficiently? (My only solution currently is to add a check that there are no nullable columns in my keys, which would make this process rather inefficient and somewhat messy.)
You can work around it with NVL, but note that in Oracle the empty string '' is itself NULL, so NVL(col, '') changes nothing. Use a sentinel value on both sides instead, such as '~' here (assuming it cannot occur in your real data):
select * from mytable
Where (NVL(Key_Column_A,'~'), NVL(Key_Column_B,'~'), NVL(Key_Column_C,'~'))
in (('1','A', 'cat'),('2','B', 'dog'),('3','C', '~'));
I am not sure about (Key_Column_A, Key_Column_B, Key_Column_C) IS NULL. Wouldn't this imply that all of the columns (A, B, C) are NULL?
Does SQLite offer a way to search every column of a table for a searchkey?
SELECT * FROM table WHERE id LIKE ...
Selects all rows where ... is found in the column id. But instead of searching only in the column id, I want to search every column for the search string. I believe this does not work:
SELECT * FROM table WHERE * LIKE ...
Is that possible? Or what would be the next easy way?
I use Python 3 to query the SQLite database. Should I instead go the route of searching through the returned data after the query has executed?
A simple trick you can do is to concatenate the columns (note that in SQLite the string concatenation operator is ||; + would attempt numeric addition):
SELECT *
FROM table
WHERE ((col1 || col2 || col3 || col4) LIKE '%something%')
This will select the record if any of these 4 columns contains the word "something". Two caveats: a match can also span the boundary between two columns, and if any column is NULL the whole concatenation becomes NULL (see the coalesce answer below).
No; you would have to list or concatenate every column in the query, or reorganize your database so that you have fewer columns.
SQLite has full-text search tables where you can search all columns at once, but such tables do not work efficiently with any other queries.
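For illustration, a minimal FTS5 sketch (the table and column names are made up):
CREATE VIRTUAL TABLE docs USING fts5(col1, col2, col3);
SELECT * FROM docs WHERE docs MATCH 'something';
MATCH searches all indexed columns at once, which is the behaviour asked for, but as noted such a table only supports full-text queries efficiently.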
I could not comment on #raging-bull's answer, so I had to write a new one. My problem was that I have columns with NULL values and got no results, because the concatenated "search string" was NULL.
Using coalesce solved that problem: SQLite takes the column content, or an empty string ('') if it is NULL, so there is always an actual string to search.
SELECT *
FROM table
WHERE (coalesce(col1,'') || coalesce(col2,'') || coalesce(col3,'') || coalesce(col4,'')) LIKE '%something%'
I'm not quite sure I understood your question.
If you want the whole row returned, when id=searchkey, then:
select * from table where id=searchkey;
If you want to have specific columns from the row with the correct searchkey:
select col1, col2, col3 from table where id=searchkey;
If you want to search multiple columns for the searchkey: first narrow down which columns it could be found in - you don't want to search the whole table! Then:
select * from table where col1=searchkey or col2=searchkey or col3=searchkey;
I have been trying to find an optimal solution for selecting the unique values from each column. My problem is that I don't know the column names in advance, since different tables have different numbers of columns. So first I have to find the column names, and I could use the query below to do that:
select column_name from information_schema.columns
where table_name='m0301010000_ds' and column_name like 'c%'
Sample output for column names:
c1, c2a, c2b, c2c, c2d, c2e, c2f, c2g, c2h, c2i, c2j, c2k, ...
Then I would use the returned column names to get the unique/distinct values in each column, not just distinct rows.
I know the simplest (and lousy) way is to write select distinct <column_name> from table for every single column (around 20-50 times), which is also very time-consuming. Since I can't apply more than one DISTINCT per query this way, I am stuck with this old-school solution.
I am sure there would be a faster and elegant way to achieve this, and I just couldn't figure how. I will really appreciate any help on this.
You can't just return rows, since distinct values don't go together any more.
You could return arrays, which is simpler to get than you may have expected:
SELECT array_agg(DISTINCT c1) AS c1_arr
,array_agg(DISTINCT c2a) AS c2a_arr
,array_agg(DISTINCT c2b) AS c2b_arr
, ...
FROM m0301010000_ds;
This returns distinct values per column. One array (possibly big) for each column. All connections between values in columns (what used to be in the same row) are lost in the output.
Build SQL automatically
CREATE OR REPLACE FUNCTION f_build_sql_for_dist_vals(_tbl regclass)
RETURNS text AS
$func$
SELECT 'SELECT '
    || string_agg(format('array_agg(DISTINCT %1$I) AS %1$I_arr', attname)
                , E'\n     ,' ORDER BY attnum)
    || E'\nFROM ' || _tbl
FROM pg_attribute
WHERE attrelid = _tbl -- valid, visible table name
AND attnum >= 1 -- exclude tableoid & friends
AND NOT attisdropped -- exclude dropped columns
$func$ LANGUAGE sql;
Call:
SELECT f_build_sql_for_dist_vals('public.m0301010000_ds');
Returns an SQL string as displayed above.
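If you work in psql (9.6 or later), you can build and execute it in one step, since \gexec runs each value in the result as a new statement:
SELECT f_build_sql_for_dist_vals('public.m0301010000_ds') \gexec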
I use the system catalog pg_attribute instead of the information schema. And the object identifier type regclass for the table name. More explanation in this related answer:
PLpgSQL function to find columns with only NULL values in a given table
If you need this in "real time", you won't be able to achieve it with a query that needs a full table scan.
I would advise you to create a separate table containing the distinct values for each column (initialized with the SQL from #Erwin Brandstetter ;) and to maintain it using triggers on the original table.
Your new table will have one column per field. The number of rows will equal the maximum number of distinct values in any one field.
On insert: for each maintained field, check whether the value is already there; if not, add it.
On update: for each maintained field whose old value differs from the new value, check whether the new value is already there; if not, add it. For the old value, check whether any other row still has it, and if not, remove it from the list (set the field to null).
On delete: for each maintained field, check whether any other row still has the value, and if not, remove it from the list (set the field to null).
This way the load is moved mainly to the triggers, and queries on the value-list table will be super fast (a simplified sketch of the insert case is shown below).
P.S.: Make sure to run all the SQL in your triggers through the explain plan, to confirm it uses the best possible indexes and execution plans. For update/delete, just check whether the old value exists (limit 1).
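A minimal sketch of the on-insert case for one column, in PL/pgSQL (PostgreSQL 9.5+ for ON CONFLICT). It deviates from the padded table described above by using one small helper table per column with a primary key, so the "already there" check becomes an ON CONFLICT clause; all names are made up:
CREATE TABLE dist_vals_c1 (c1 text PRIMARY KEY);

CREATE OR REPLACE FUNCTION trg_track_dist_c1()
  RETURNS trigger AS
$$
BEGIN
   IF NEW.c1 IS NOT NULL THEN
      -- add the value only if it is not in the list yet
      INSERT INTO dist_vals_c1 (c1) VALUES (NEW.c1)
      ON CONFLICT (c1) DO NOTHING;
   END IF;
   RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER track_dist_c1
AFTER INSERT ON m0301010000_ds
FOR EACH ROW EXECUTE PROCEDURE trg_track_dist_c1();
The update and delete cases follow the same pattern, plus the "does any other row still have the old value" check before removing an entry.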
I have created a table with 200 columns and inserted data.
Now I need to check whether 100 specific columns in one row are filled or not. How can I check this with a MySQL query? The primary key is defined. Please help me work out how to resolve this.
select * from tablename where column1 is not null and column2 is not null ...
(Note that column1 != null is never true; NULL can only be tested with IS NULL / IS NOT NULL.)
That is a lot of columns, so, at the risk of being MySQL-server-version specific, you can use the information schema to get the column names and then write an SQL procedure, or something in your chosen shell or language, that iterates over them and performs the test.
select distinct COLUMN_NAME as 'Field', IS_NULLABLE from information_schema.columns where TABLE_SCHEMA="YourDatabase" and TABLE_NAME="YourTableName" and TABLE_NAME not like "%view%" escape '!' ;
The example above returns the column name as "Field" and tells you whether it can hold a NULL. Having the field names may give you a better way of automating a field-specific test, as sketched below.
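For example, a rough sketch of that automation in pure MySQL, building the test with a prepared statement (YourDatabase and YourTableName are placeholders, and group_concat_max_len is raised because 100+ columns can exceed the default GROUP_CONCAT limit):
SET SESSION group_concat_max_len = 100000;

SELECT GROUP_CONCAT(CONCAT('`', COLUMN_NAME, '` IS NOT NULL') SEPARATOR ' AND ')
INTO @cond
FROM information_schema.columns
WHERE TABLE_SCHEMA = 'YourDatabase' AND TABLE_NAME = 'YourTableName';

SET @sql = CONCAT('SELECT * FROM YourTableName WHERE ', @cond);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
This returns the rows in which every listed column is filled; invert the conditions (IS NULL joined with OR) to find the rows with gaps instead.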