I have a Ruby on Rails app in which I am querying a boolean column, Flag. The code is:
Merchant.where("Flag=?",false)
However, this does not work at all; the only result is an error saying the Merchants table does not have a column named "flag". Is there any way to fix this? The column name starts with an uppercase letter, but the search is being done for a lowercase "flag".
If the column name was quoted when the table was created, then you will have to quote it forever. So, if you started with this:
create table merchants (
-- ...
"Flag" boolean
-- ...
)
Then you'll have to refer to it using
Merchant.where('"Flag" = ?', false)
PostgreSQL normalizes all unquoted identifiers to lower case (not upper case as the standard says); that's why the error message complains about flag rather than Flag.
If you can, you might want to rebuild your table with only lower case column names.
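A minimal sketch of that cleanup, assuming a schema change is an option (table and column names match the example above):
-- Rename the quoted mixed-case column to an unquoted lower-case name.
-- Afterwards, flag, Flag, and FLAG all resolve to the same column.
ALTER TABLE merchants RENAME COLUMN "Flag" TO flag;
After the rename, the original Merchant.where("flag = ?", false) works without any identifier quoting.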
Related
I have a table named Posts that I would like to count and profile in Snowflake using the current Snowsight UI.
When I run EXPLAIN USING TABULAR, I am able to return the result set with a combination of the TABLE, RESULT_SCAN, and LAST_QUERY_ID functions, but any predicate, filter, or column reference seems to fail.
Is there a valid way to do this in Snowflake with the TABLE function, or is there another way to query the output of EXPLAIN USING TABULAR?
-- Works
EXPLAIN using TABULAR SELECT COUNT(*) from Posts;
-- Works
SELECT t.* FROM TABLE(RESULT_SCAN(LAST_QUERY_ID())) as t;
-- Does not work
SELECT t.* FROM TABLE(RESULT_SCAN(LAST_QUERY_ID())) as t where operation = 'GlobalStats';
-- Fails with: invalid identifier 'OPERATION'; the column does not seem to be recognized.
I tried the third example and expected the predicate to apply to the function output. I don't understand why a filter works on some TABLE() results and not others.
You need to double-quote the column name:
where "operation" = 'GlobalStats'
From the documentation:
Note that because the output column names from the DESC USER command were generated in lowercase, the commands use delimited identifier notation (double quotes) around the column names in the query to ensure that the column names in the query match the column names in the output that was scanned.
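Applied to the failing example above, the filter works once the lowercase column name is quoted (a sketch; 'GlobalStats' is the value from the question):
EXPLAIN USING TABULAR SELECT COUNT(*) FROM Posts;
-- Quote "operation" so it matches the lowercase column in the scanned output.
SELECT t.* FROM TABLE(RESULT_SCAN(LAST_QUERY_ID())) AS t WHERE t."operation" = 'GlobalStats';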
I am using the regexp_like function to search for specific patterns in a column. But I see this query is not using the index created on the column and instead does a full table scan. Is there any option to create a function-based index for regexp_like so that my query will use that index? Here, the pattern SV4889 is not a constant expression; it will vary every time.
select * from test where regexp_like(id,'SV4889')
Yup. Regular expressions do not use indexes. What can you do?
Well, if you are just looking for equality, then use equality:
where id = 'SV4889'
This will use an index on (id).
If you are looking for a leading value, then use like:
where id like 'SV4889%'
This will use an index because the wildcard is at the end of the pattern.
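For completeness, a sketch of the plain index that both of the predicates above can use (the index name is an assumption):
-- An ordinary b-tree index on id; equality and leading-prefix LIKE can both seek into it.
CREATE INDEX idx_test_id ON test (id);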
If you are storing multiple values in the column, say 'SV4889,SV4890', then fix your data model. It is broken! You should have another table with one row per id.
Finally, if you really need more sophisticated full text capabilities, then look into Oracle's support for full text indexes. However, such capabilities are usually not needed on a column called id.
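If you do go down that road, a minimal sketch with Oracle Text (the index name is an assumption, and the CTXSYS components must be installed):
-- Create an Oracle Text context index on the column...
CREATE INDEX idx_test_id_txt ON test (id) INDEXTYPE IS CTXSYS.CONTEXT;
-- ...then query with CONTAINS instead of REGEXP_LIKE.
SELECT * FROM test WHERE CONTAINS(id, 'SV4889') > 0;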
You can add a virtual column to your table to determine if the substring you're interested in exists in the field, then index the virtual column. For example:
ALTER TABLE TEST
ADD SV4889_FLAG CHAR(1)
GENERATED ALWAYS AS (CASE
WHEN REGEXP_LIKE(ID,'SV4889') THEN 'Y'
ELSE 'N'
END) VIRTUAL;
This adds a field named SV4889_FLAG to your table which will contain Y if the text SV4889 exists in the ID field, and N if it doesn't. Then you can create an index on the new field:
CREATE INDEX IDX_TEST_SV4889_FLAG
ON TEST (SV4889_FLAG);
So to determine if a row has 'SV4889' in it you can use a query such as:
SELECT *
FROM TEST
WHERE SV4889_FLAG = 'Y'
db<>fiddle here
We have an Oracle table whose field names include a reserved keyword (i.e. in as a field name). Now I am querying the table but am unable to extract that specific field's data.
select a.filename, a.in from table a
The following error appears: "invalid field name".
Try using double quotes.
select a."IN" from table a
You can use Oracle reserved keywords as column names, but it is not advisable.
Anyway, if you want to use Oracle reserved keywords, you have to enclose them in double quotes.
Note that Oracle treats object names as case-insensitive unless they are wrapped in double quotes. If you enclose an object name in double quotes, you must refer to it everywhere in the entire DB in a case-sensitive manner.
So if your table definition is:
CREATE TABLE YOUR_TABLE ("IN" NUMBER);
Then you need to use "IN" wherever you want to refer the column but if your table definition is:
CREATE TABLE YOUR_TABLE ("in" NUMBER);
Then you need to use "in" wherever you want to refer the column. -- case sensitive names.
I hope it will clear all your doubts.
Cheers!!
I have a table in Postgres which currently has a NOT NULL constraint on its email column. The table also has a phone column, which is optional. I would like the system to accept some records without an email, but only if those records have a non-NULL phone. In other words, I need a database constraint such that create or update queries succeed without errors if either or both of the email and phone fields are present.
Extending this further, is it possible in Postgres to specify a set of column names, one or more of which must be NOT NULL for the record to be successfully updated or created?
@Igor is quite right: a couple of OR'ed expressions are fast and simple.
For a long list of columns (a, b, c, d, e, f, g in the example), this is shorter and just as fast:
CHECK (NOT (a,b,c,d,e,f,g) IS NULL)
db<>fiddle here
Old sqlfiddle
How does it work?
A more verbose form of the above would be:
CHECK (NOT ROW(a,b,c,d,e,f,g) IS NULL)
ROW is redundant syntax here.
Testing a ROW expression with IS NULL only reports TRUE if every single column is NULL - which happens to be exactly what we want to exclude.
It's not possible to simply reverse this expression with (a,b,c,d,e,f,g) IS NOT NULL, because that would test that every single column IS NOT NULL. Instead, negate the whole expression with NOT. Voilà.
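To make the semantics concrete, a quick sketch you can run directly:
SELECT (NULL, NULL) IS NULL;   -- true: every column is NULL
SELECT (1, NULL) IS NULL;      -- false: one column is not NULL
SELECT (1, NULL) IS NOT NULL;  -- false: not every column is non-NULL
SELECT NOT (1, NULL) IS NULL;  -- true: this row would pass the CHECK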
More details in the manual here and here.
An expression of the form:
CHECK (COALESCE(a,b,c,d,e,f,g) IS NOT NULL)
would achieve the same, less elegantly and with a major restriction: it only works for columns of matching data types, while the check on a ROW expression works with any mix of columns.
You can use a CHECK constraint for this.
Something like:
CHECK (email is not null OR phone is not null)
Details on constraints can be found here
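A sketch of adding the constraint to an existing table (the table and constraint names are assumptions):
ALTER TABLE users
  ADD CONSTRAINT email_or_phone_present
  CHECK (email IS NOT NULL OR phone IS NOT NULL);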
I have a table with a column mapping which stores records like "IV=>0J,IV=>0Q,IV=>2,V=>0H,V=>0K,VI=>0R,VI=>1,".
What is the SQL to check whether or not a substring is in the mapping column?
So, I would like this:
if I have "IV=>0J" it would return true, because IV=>0J appears exactly in the mapping string
if I have "IV=>01" it would return false. And so on...
I tried this:
SELECT * FROM table WHERE charindex('IV=>0J',mapping)
But when I have "IV=>0", it returns TRUE. It should return FALSE.
Thank you.
You can search with the commas included. Just also add one at the beginning and end of mapping:
SELECT * FROM table WHERE charindex(',IV=>0J,',',' + mapping + ',') <> 0
or
SELECT * FROM table WHERE ',' + mapping + ',' LIKE '%,IV=>0J,%'
This should do the trick:
SELECT * FROM table
WHERE
mapping LIKE '%,IV=>0J,%'
OR mapping LIKE '%,IV=>0J'
OR mapping LIKE 'IV=>0J,%'
OR mapping = 'IV=>0J'
But you should really normalize the database - you are currently violating the principle of atomicity, and therefore first normal form (1NF). Your current difficulties in querying and the future difficulties with performance that you are about to encounter all stem from this root problem...
While you can search by including a comma in the string, this is a bad design for several reasons.
You are unable to take advantage of indexing
You force a full scan of the table, which will lead to bad performance AND excessive blocking.
You have to make sure that there is always a leading or a trailing comma (depends on what you expect in your LIKE expression).
You are no longer able to edit a single entry, you'll have to replace the entire string each time you want to change even a single mapping.
You open yourself to a concurrency nightmare if more than one user tries to update different mappings that just happen to be stored in the same column.
Your table isn't even in first normal form any more, which is why you have such difficulties.
You should normalize your mapping column by extracting the data to a separate mapping table, with at least the From and To columns you require. You can then add these columns to an index and convert your query to a single index seek.
You can also add the ID values of your source table to the Mappings table and the index. This will allow you to convert the lookup for a source row into a join between the two tables that takes advantage of indexing, as in the sketch below.
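A minimal sketch of that normalization (all table and column names are assumptions; SQL Server syntax to match the charindex usage above):
-- One row per mapping instead of one delimited string.
CREATE TABLE Mappings (
    SourceId INT NOT NULL,          -- points back at the original table's row
    MapFrom  VARCHAR(10) NOT NULL,  -- e.g. 'IV'
    MapTo    VARCHAR(10) NOT NULL   -- e.g. '0J'
);
CREATE INDEX IX_Mappings_FromTo ON Mappings (MapFrom, MapTo, SourceId);
-- The substring test becomes an index seek:
SELECT SourceId FROM Mappings WHERE MapFrom = 'IV' AND MapTo = '0J';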
charindex returns the position of the text, not a Boolean.
To check whether the text exists, compare the result to 0:
SELECT * FROM table WHERE charindex('IV=>0J',mapping) <> 0
I think you're missing something here: the charindex function does not return TRUE or FALSE.
It returns the starting position of the substring inside the master string, or 0 if the substring is not present.
So your query should read:
SELECT * FROM table WHERE charindex('IV=>0J',mapping) > 0