Getting all rows from a table where the column contains only 0 - SQL

I have a small problem: I need a SQL query that returns all rows whose column value consists only of the character 0.
The column is defined as varchar2(6), and the values in the column look like this:
Row  Value
1    0
2    00
3    00
4    100
5    bc00
6    000000
7    00000
My first solution would be like this:
Oracle:
substr('000000' || COLUMN_NAME, -6) = '000000'
SQL Server:
right('000000' + COLUMN_NAME, 6) = '000000'
Is there another way? (It needs to work on both systems.)
The expected output is rows 1, 2, 3, 6 and 7.

This is the simplest one:
select * from tbl where replace(col,'0','') = ''
(Caveat: in Oracle the empty string is NULL, so there you would test replace(col,'0','') IS NULL and additionally require col IS NOT NULL.)
If you don't want to create a computed column for that expression, you can opt for a function-based index to make it performant (note: Oracle and Postgres already support this; SQL Server, as of version 2008, does not):
create index ix_tbl on tbl(replace(col,'0',''))
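For SQL Server, a rough equivalent (a sketch of mine, not part of the original answer; names are illustrative) is to index a computed column on the same expression:
-- SQL Server: index a computed column instead of a function-based index
ALTER TABLE tbl ADD col_zeros_stripped AS replace(col, '0', '');
CREATE INDEX ix_tbl_zeros_stripped ON tbl (col_zeros_stripped);
-- then filter on the computed column: WHERE col_zeros_stripped = ''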
[EDIT]
I'm keeping the answer below for posterity; in it I tried to explain how to make the query use an index via a computed column.
Use this:
select * from tbl
where ISNUMERIC(col) = 1 and cast(col as int) = 0
For ISNUMERIC-like functionality on Oracle, see: http://www.oracle.com/technology/oramag/oracle/04-jul/o44asktom.html
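That article boils down to a small PL/SQL function; a minimal sketch of the idea (the function name is mine):
-- minimal ISNUMERIC equivalent for Oracle; returns 1 for numeric text, else 0
create or replace function is_number(p_str in varchar2) return number
as
  l_num number;
begin
  l_num := to_number(p_str);  -- conversion failure raises VALUE_ERROR
  return 1;
exception
  when value_error then
    return 0;
end;
/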
[EDIT]
@Charles, re: computed columns on Oracle:
For an RDBMS that supports computed columns but has no PERSISTED option, yes, it will make a function call for every row. If it supports persisted columns, it won't make the function call; you have a real column on the table that is precomputed from that function. Now, if the data could make the function raise an exception, there are two scenarios.
First, if you don't specify PERSISTED, it will let you add the computed column (ALTER TABLE tbl ADD numeric_equivalent AS cast(col as int)) even if evaluating it on the existing data would raise an exception, but then you cannot unconditionally select that column. This will raise an exception:
select * from tbl
This won't raise an exception:
select * from tbl where is_col_numeric = 1
This will:
select * from tbl where numeric_equivalent = 0 and is_col_numeric = 1
This won't (SQL Server short-circuits the evaluation here):
select * from tbl where is_col_numeric = 1 and numeric_equivalent = 0
For reference, the is_col_numeric above was created using this:
ALTER TABLE tbl ADD
is_col_numeric AS isnumeric(col)
And this is is_col_numeric's index:
create index ix_is_col_numeric on tbl(is_col_numeric)
Now for the second scenario: if you add a computed column with the PERSISTED option to a table that already contains data (e.g. 'ABXY', 'X1', 'ETC') that raises an exception when the function/expression (e.g. cast) is applied to it, your RDBMS will not allow you to create the computed column. If the table has no data, it will let you add the PERSISTED column, but afterwards, when you attempt to insert data that raises an exception (e.g. insert into tbl(col) values('ABXY')), your RDBMS will not allow you to save it. Thereby only numeric text can be saved in your table; your PERSISTED computed column degenerates into a constraint check, albeit a roundabout one.
For reference, here's the persisted computed column sample:
ALTER TABLE tbl ADD
numeric_equivalent AS cast(col as int) persisted
Now, some of us might be tempted not to put the PERSISTED option on the computed column. That would be a self-defeating endeavor performance-wise, because you might not be able to create an index on it later. If you later want to create an index on the unpersisted computed column and the table already contains data like 'ABXY', the database won't allow it. Index creation needs to evaluate the column for every row, and if that evaluation raises an exception, the index cannot be created.
If we attempt to cheat a bit, i.e. we immediately create an index on that unpersisted computed column upon table creation, the database will allow it. But when we later insert 'ABXY' into the table, the row will not be saved: the database maintains the index(es) as part of every insert, the index maintenance receives an exception instead of a value, so it cannot make an index entry for the row we tried to insert, and the insert fails.
So how can we attain index nirvana on a computed column? First of all, we make sure the computed column is PERSISTED; doing so ensures that errors kick in immediately. If we don't use the PERSISTED option, anything that could raise an exception is deferred to index construction, just making things fail later. Bugs are easier to find when they happen sooner. After making the column persisted, put an index on it.
So if the existing data is '00', '01', '2', we can create the persisted computed column. After that, if we insert 'ABXY', it will not be inserted; the database cannot persist anything from a computed column that raised an exception. So we will just roll our own cast that doesn't raise an exception.
To wit (just translate this into the Oracle equivalent):
create function cast_as_int(@n varchar(20)) returns int with schemabinding
as
begin
begin try
return cast(@n as int);
end try
begin catch
return null;
end catch
end;
Please note that catching exceptions inside a UDF does not yet work in SQL Server, though Microsoft has plans to support it.
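Note that on SQL Server 2012 and later, TRY_CAST returns NULL instead of raising an error, which would sidestep the hand-rolled UDF entirely. A sketch, assuming your version accepts TRY_CAST in a persisted computed column:
-- SQL Server 2012+ sketch: TRY_CAST yields NULL for non-numeric input
ALTER TABLE tbl ADD numeric_equivalent AS try_cast(col as int) persisted
create index ix_num_equiv on tbl(numeric_equivalent)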
This is now our non-exception-raising persisted computed column:
ALTER TABLE tbl ADD
numeric_equivalent AS dbo.cast_as_int(col) persisted
Drop the existing index, then recreate it:
create index ix_num_equiv on tbl(numeric_equivalent)
Now this query becomes an index-abiding citizen, performant, and won't raise an exception even when the order of the conditions is reversed:
select * from tbl where numeric_equivalent = 0 and is_col_numeric = 1
To make it more performant: since the numeric_equivalent column no longer raises exceptions, we have no more use for is_col_numeric, so just use this:
select * from tbl where numeric_equivalent = 0

Do you like this? (Oracle syntax; in SQL Server, REPLACE with a NULL argument always returns NULL, so this test won't work there.)
SELECT * FROM MY_TABLE
WHERE REPLACE (MY_COLUMN, '0', NULL) IS NULL
AND MY_COLUMN IS NOT NULL;

This would also work in Oracle (but not in SQL Server):
REPLACE(column_name, '0') IS NULL
This will work in Oracle (and perhaps also in SQL Server, you will have to check):
LTRIM(column_name, '0') IS NULL
Alternatively, since it is a VARCHAR(6) column, you could also just check:
column_name IN ('0', '00', '000', '0000', '00000', '000000')
This is not pretty but it is probably the most efficient if there is an index on the column.

Building off KM's answer, you can do the same thing in Oracle without needing to create an actual table.
SELECT y.*
FROM YourTable y
WHERE YourColumn IN
(SELECT LPAD('0',level,'0') FROM dual CONNECT BY LEVEL <= 6)
or
SELECT y.*
FROM YourTable y
INNER JOIN
(SELECT LPAD('0',level,'0') zeros FROM dual CONNECT BY LEVEL <= 6) z
ON y.YourColumn = z.zeros
I think this is the most flexible answer because if the maximum length of the column changes, you just need to change 6 to the new length.

How about using a regular expression? (Supported by Oracle; I think MSSQL has some support too.)
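For example (Oracle syntax, plus a LIKE-based approximation for SQL Server, which has no built-in regex predicate):
-- Oracle: rows consisting of one or more '0' characters only
SELECT * FROM tbl WHERE REGEXP_LIKE(col, '^0+$');
-- SQL Server approximation without regex
SELECT * FROM tbl WHERE col NOT LIKE '%[^0]%' AND col <> '';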

Another SQL version would be:
...
where len(COLUMN_NAME) > 0
and len(replace(COLUMN_NAME, '0', '')) = 0
i.e., where there is at least one character in the column and all of them are 0. Toss in TRIM (or REPLACE, as sketched below) if there can be leading, trailing, or embedded spaces.
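A sketch of that variant (my reading of the suggestion, SQL Server syntax); stripping spaces with REPLACE also covers the embedded case:
-- strip spaces first, then apply the same all-zeros test
where len(replace(COLUMN_NAME, ' ', '')) > 0
and len(replace(replace(COLUMN_NAME, ' ', ''), '0', '')) = 0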

Try this, which should be able to use an index on YourTable.COLUMN_NAME if one exists:
--SQL Server syntax, but should be similar in Oracle
--you could make this a temp or permanent table
CREATE TABLE Zeros (Zero varchar(6))
INSERT INTO Zeros VALUES ('0')
INSERT INTO Zeros VALUES ('00')
INSERT INTO Zeros VALUES ('000')
INSERT INTO Zeros VALUES ('0000')
INSERT INTO Zeros VALUES ('00000')
INSERT INTO Zeros VALUES ('000000')
SELECT
y.*
FROM YourTable y
INNER JOIN Zeros z On y.COLUMN_NAME=z.Zero
EDIT
or even just this:
SELECT
*
FROM YourTable
WHERE COLUMN_NAME IN ('0','00','000','0000','00000','000000')
Building off of Dave Costa's answer:
Oracle:
SELECT
*
FROM YourTable
WHERE YourColumn IN
(SELECT LPAD('0',level,'0') FROM dual CONNECT BY LEVEL <= 6)
SQL Server 2005 and up:
;WITH Zeros AS
(SELECT
CONVERT(varchar(6),'0') AS Zero
UNION ALL
SELECT '0'+CONVERT(varchar(5),Zero)
FROM Zeros
WHERE LEN(CONVERT(varchar(6),Zero))<6
)
SELECT
y.*
FROM YourTable y
WHERE y.COLUMN_NAME IN (SELECT Zero FROM Zeros)

Related

Does Oracle allow an SQL INSERT INTO using a SELECT statement for VALUES if the destination table has a GENERATED ALWAYS AS IDENTITY column

I am trying to insert rows into an Oracle 19c table to which we recently added a GENERATED ALWAYS AS IDENTITY column (column name "ID"). The column should auto-increment and not need to be specified explicitly in an INSERT statement. Typical INSERT statements work, e.g. INSERT INTO table_name (field1,field2) VALUES ('f1', 'f2') (merely an example), and the ID field increments when such an INSERT is executed. But the query below, which was working before the addition of the identity column, now fails with the error: ORA-00947: not enough values.
The field counts are identical except for the new ID identity field, which I am expecting to auto-increment. Is this statement not allowed with an identity column?
Is it the INSERT INTO using a SELECT from another table that disallows this and produces the error?
INSERT INTO T.AUDIT
(SELECT r.IDENTIFIER, r.SERIAL, r.NODE, r.NODEALIAS, r.MANAGER, r.AGENT, r.ALERTGROUP,
r.ALERTKEY, r.SEVERITY, r.SUMMARY, r.LASTMODIFIED, r.FIRSTOCCURRENCE, r.LASTOCCURRENCE,
r.POLL, r.TYPE, r.TALLY, r.CLASS, r.LOCATION, r.OWNERUID, r.OWNERGID, r.ACKNOWLEDGED,
r.EVENTID, r.DELETEDAT, r.ORIGINALSEVERITY, r.CATEGORY, r.SITEID, r.SITENAME, r.DURATION,
r.ACTIVECLEARCHANGE, r.NETWORK, r.EXTENDEDATTR, r.SERVERNAME, r.SERVERSERIAL, r.PROBESUBSECONDID
FROM R.STATUS r
JOIN
(SELECT SERVERSERIAL, MAX(LASTOCCURRENCE) as maxlast
FROM T.AUDIT
GROUP BY SERVERSERIAL) gla
ON r.SERVERSERIAL = gla.SERVERSERIAL
WHERE (r.LASTOCCURRENCE > SYSDATE - (1/1440)*5 AND gla.maxlast < r.LASTOCCURRENCE)
)
Thanks for any help.
Yes, it does; your example insert
INSERT INTO table_name (field1,field2) VALUES ('f1', 'f2')
would also work as
INSERT INTO table_name (field1,field2) SELECT 'f1', 'f2' FROM DUAL
db<>fiddle demo
Your problematic real insert statement is not specifying the target column list, so when it used to work it was relying on the columns in the table (and their data types) matching the results of the query. (This is similar to relying on select *, and potentially problematic for some of the same reasons.)
Your query selects 34 values, so your table had 34 columns. You have now added a 35th column to the table, your new ID column. You know that you don't want to insert directly into that column, but Oracle doesn't know that, at least not at the point it's comparing the query against the table's columns. The table has 35 columns, and as you haven't said otherwise as part of the statement, it is expecting 35 values in the select list.
There's no way for Oracle to know which of the 35 columns you're skipping. Arguably it could guess based on the identity column, but that would be more work and inconsistent, and it's not unreasonable for it to insist you do the work to make sure it's right. It's expecting 35 values, it sees 34, so it throws an error saying there are not enough values - which is true.
Your question sort of implies you think Oracle might be doing something special to prevent the insert ... select ... syntax if there is an identity column, but in fact it's the opposite - it isn't doing anything special, and it's reporting the column/value count mismatch as it usually would.
So you have to list the columns you are populating - you can't automatically skip one. Your statement needs to be:
INSERT INTO T.AUDIT (IDENTIFIER, SERIAL, NODE, ..., PROBESUBSECONDID)
SELECT r.IDENTIFIER, r.SERIAL, r.NODE, ..., r.PROBESUBSECONDID
FROM ...
using the actual column names of course if they differ from the query column names.
If you can't change that insert statement then you could make the ID column invisible; but then you would have to specify it explicitly in queries, as select * won't see it - but then you shouldn't rely on * anyway.
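For illustration (assuming Oracle 12c or later, where invisible columns exist):
-- invisible columns are skipped by SELECT * and by INSERTs that omit the column list
ALTER TABLE T.AUDIT MODIFY (ID INVISIBLE);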
db<>fiddle

UPDATE two columns with new value under large size table

We have a table like:
mytable (pid, string_value, int_value)
This table has more than 20M rows in total. We now have a feature that needs to mark all the rows in this table as invalid, so we need to update the columns to string_value = NULL and int_value = 0, which indicates an invalid row (we still want to keep the pid, as it is important to us).
So what is the best way?
I use the following SQL:
UPDATE Mytable
SET string_value = NULL,
int_value = 0;
but this query takes more than 4 minutes in my test environment. Is there a better way to do this?
Updating all the rows can be quite expensive. Often, it is faster to empty the table and reload it.
In generic SQL this looks like:
create table mytable_temp as
select pid
from mytable;
truncate table mytable; -- back it up first!
insert into mytable (pid, string_value, int_value)
select pid, null, 0
from mytable_temp;
The creation of the temporary table may use different syntax, depending on your database.
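For example (my addition), in SQL Server the CREATE TABLE ... AS SELECT step is spelled SELECT ... INTO:
-- SQL Server equivalent of CREATE TABLE ... AS SELECT
SELECT pid
INTO mytable_temp
FROM mytable;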
Updates can take time to complete. Another way of achieving this is to follow these steps:
Add new columns with the values you need set as the default value
Drop the original columns
Rename the new columns with the names of the original columns.
You can then drop the default values on the new columns.
This needs to be tested, as different DBMSs allow different levels of table alteration (i.e. not all DBMSs allow dropping a default or dropping a column); a sketch of the idea follows below.
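A sketch of those steps in SQL Server syntax; the column types here are assumptions, so adjust them to your actual schema:
-- 1) add replacement columns whose defaults are the desired values
ALTER TABLE mytable ADD string_value_new varchar(255) NULL;   -- NULL by default
ALTER TABLE mytable ADD int_value_new int NOT NULL DEFAULT 0; -- every row gets 0
-- 2) drop the original columns
ALTER TABLE mytable DROP COLUMN string_value;
ALTER TABLE mytable DROP COLUMN int_value;
-- 3) rename the replacements to the original names
EXEC sp_rename 'mytable.string_value_new', 'string_value', 'COLUMN';
EXEC sp_rename 'mytable.int_value_new', 'int_value', 'COLUMN';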

Insert Column with same value

I am running a query on the table "performance" and I want to add a column with the same value for all the rows, without using ALTER, UPDATE, etc.
I wrote a CASE statement and it works, but is there a more refined way?
Here is a short query:
SELECT id, name, class,
CASE
WHEN id IS NOT NULL THEN 'Actuals'
ELSE 'Forecast'
END AS type
FROM performance
Basically I need all the values to be labeled "Actuals". There are many other datasets for which I will use different labels, and I will then append all of them.
Just to be clear - I don't need to update the performance table itself.
Use a common table expression for your case.
It will add the new column to your existing data, and you can use it for further processing.
To your point, it does not add or insert anything into your existing DB structure.
with CTE as (
SELECT id, name, class,
CASE WHEN id IS NOT NULL THEN 'Actuals' ELSE 'Forecast' END AS type
FROM table_performance
)
select * from CTE ----- gives you all the columns from the table, plus the added column you needed
OR
You may create a view for the same thing, if this condition is fixed.
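A minimal sketch of such a view (the view name is mine):
-- wraps the same CASE expression so every query sees the extra column
CREATE VIEW performance_labeled AS
SELECT id, name, class,
CASE WHEN id IS NOT NULL THEN 'Actuals' ELSE 'Forecast' END AS type
FROM table_performance;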

Create an INT index on a VARCHAR column

I have a unique design where I will need to store all data as VARCHAR. I can't go into details why. I would like to index some fields as a different data type. Is this possible? If so, will there be any gotchas doing this? What is the syntax to do this, if it's possible?
I will be using both SQL Server and PostgreSQL for this project.
In PostgreSQL, you can create functional indexes ("indexes on expressions"), which occupy less storage than creating redundant columns.
CREATE INDEX tbl_intasvarchar_idx ON tbl (cast(intasvarchar AS int));
Keep in mind that queries have to match the expression to allow the use of such an index. Like:
SELECT *
FROM tbl
WHERE intasvarchar::int = 123;
(The :: shorthand syntax for a cast works the same as cast().)
Of course, all varchar values must be valid to cast to int, and if that's the case, the superior approach would be to change the column type to integer to begin with. In any RDBMS.
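For reference (my addition), in PostgreSQL that type change would be:
-- convert the column in place, reusing the existing values
ALTER TABLE tbl ALTER COLUMN intasvarchar TYPE int USING intasvarchar::int;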
PostgreSQL:
Create a function based index like so:
create index int_index on tbl (cast(cast(num_as_string as decimal) as integer));
Fiddle: http://sqlfiddle.com/#!15/d0f46/1/0
Later, when you run a query such as:
select *
from tbl
where cast(cast(num_as_string as decimal) as integer) = 12
The index will be used, because the index is on the result of that function applied to the column, rather than the column itself.
SQL Server:
In SQL Server you can add a computed column and index that computed column like so:
create table tbl (num_as_string varchar(10));
insert into tbl (num_as_string) values ('12.3');
alter table tbl add num_as_string_int as cast(cast(num_as_string as decimal) as integer);
create index int_index on tbl (num_as_string_int);
Then query against num_as_string_int to use the index.
Fiddle: http://sqlfiddle.com/#!6/1f378/2/0

Change the type of a computed column to uncomputed in SQL

I have some computed columns in my DB with data. Is there any way to change those columns to uncomputed without dropping them and copying their data into new uncomputed columns?
For example I want to change
[Fee] AS CONVERT([decimal](19,4),(case when [Quantity]=(0) then (0) else [Price]/[Quantity] end)) PERSISTED,
to
[Fee] [decimal](26, 16) NOT NULL,
The exact answer is "it depends." MySQL doesn't even have computed columns. In SQL Server, I don't think it is possible. In Oracle it can be done with alter table t1 modify fee DECIMAL( m, n ).
However, even when it is allowed, the DBMS is probably, behind the scenes, creating a new column, moving the computed value to the new column, dropping the computed column, and renaming the new column to the computed column's name. So even if the conversion is not explicitly allowed, you can still get it done yourself.
Computed columns do not store data in themselves (unless they are PERSISTED).
When you select the column in a query, it computes the value and shows it to you. Also, you cannot modify a computed column into an uncomputed one.
But you can do this instead:
Create Table Temp (ID BigInt, value Computed_Column_DataType)
Go
Insert Temp(ID, Value)
Select ID, ComputedColumnName
From Your_Table
Go
Alter Table Your_Table Drop Column ComputedColumnName
Go
Alter Table Your_Table Add ComputedColumnName Computed_Column_DataType
Go
Update Your_Table Set ComputedColumnName = A.Value From Temp A Where A.ID = Your_Table.ID