What is the expected behavior in SQL Server 2005 of an insert into table ... select query where one of the columns attempts to convert a null value? - sql-server-2005

We have a statement in some legacy SQL Server 2005 code like
insert into myTable
select distinct
    wherefield1,
    wherefield2,
    anotherfield,
    convert(numeric(10,2), varcharfield1),
    convert(numeric(10,2), varcharfield2),
    convert(numeric(10,2), varcharfield3),
    convert(datetime, varcharfield4),
    otherfields
from myStagingTable
where insertflag = 'true'
    and wherefield1 = #wherevalue1
    and wherefield2 = #wherevalue2
Earlier in the code, a variable is set to determine whether varcharfield1 or varcharfield2 is null, and the insert is programmed to execute as long as one of them is not null.
We know that if varcharfield1, varcharfield2, or varcharfield3 contains a nonnumeric character string, an exception is thrown and the insert does not occur. But I am perplexed by the behavior when one of these values is null, as it often is (in fact, one of them is always null), yet the insert does seem to take place. It looks like the legacy code relies on this to block only nonnumeric character data while allowing null or empty values (in an earlier step, all empty strings in these fields of myStagingTable are replaced with nulls).
This has been running on a Production SQL Server 2005 instance with all default settings for a number of years. Is this behavior we can rely on if we upgrade to a newer version of SQL Server?
Thanks,
Rebeccah

Conversion of NULL to anything is still NULL. If the column allows NULL, that's what you'll get. If the column is not nullable, the insert will fail.
You can see this yourself without even doing an INSERT. Just run this:
SELECT CONVERT(numeric(10,2), NULL)
and note how it produces a NULL result. Then run this:
SELECT CONVERT(numeric(10,2), 'x')
and note how it throws an error message instead of returning anything.
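To see the same thing in the context of an INSERT ... SELECT, here is a minimal sketch (the temp table and column names are made up for illustration, but the pattern mirrors the original statement):
-- nullable numeric column, as in myTable
create table #demo (amount numeric(10,2) null)
-- converting a NULL varchar succeeds and inserts a NULL row
insert into #demo (amount)
select convert(numeric(10,2), v)
from (select cast(null as varchar(20)) as v) as src
-- converting a nonnumeric string throws an error and nothing is inserted
insert into #demo (amount)
select convert(numeric(10,2), v)
from (select 'abc' as v) as src
select * from #demo   -- one row, amount = NULL
drop table #demo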

Related

Handling/Ignoring Empty Values in Porting SQL Data

I am in the process of porting over some data from a SQL environment to my MongoDB backend. I'm familiar with using a NULL check with your SELECT statement, like so:
SELECT * FROM clients WHERE note is not NULL ORDER BY id_number
... but in this old SQL database table I'm noticing a lot of rows where the value is not null, it's simply empty. It would be pointless to port these across. So what would it look like to prevent pulling those over -- in terms of the SELECT statement syntax?
To clarify, "note" values are of type varchar. In JavaScript I would just guard against an empty string "". Is there a way to do this with a SQL statement?
Something like this:
SELECT * FROM clients
WHERE note is not NULL
AND TRIM(note) <> ''
ORDER BY id_number;
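Note that TRIM() is not available everywhere (SQL Server only added it in 2017, for example); on engines without it, the same check can be written with LTRIM/RTRIM:
SELECT * FROM clients
WHERE note is not NULL
AND LTRIM(RTRIM(note)) <> ''
ORDER BY id_number;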

How to insert values into a new column in SQL when other columns are defined as NOT NULL?

I have created a table with 3 columns, 2 of which are defined as NOT NULL in SQL. Now I have added a new column and want to insert values into only that new column, but I'm getting an error when using the INSERT INTO statement.
If you are using MySQL, here is your answer:
Inserting NULL into a column that has been declared NOT NULL. For multiple-row INSERT statements or INSERT INTO ... SELECT statements, the column is set to the implicit default value for the column data type. This is 0 for numeric types, the empty string ('') for string types, and the “zero” value for date and time types. INSERT INTO ... SELECT statements are handled the same way as multiple-row inserts because the server does not examine the result set from the SELECT to see whether it returns a single row. (For a single-row INSERT, no warning occurs when NULL is inserted into a NOT NULL column. Instead, the statement fails with an error.)
The documentation is here: MySQL 5.7 Reference Manual: Insert
As said above, just use '' or 0 depending on the type of column you have. I believe it's the same for most other DBMSs.
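As a quick sketch of what the quoted manual text describes (the table name is hypothetical, and this assumes strict SQL mode is not enabled; with STRICT_TRANS_TABLES, the default in recent MySQL versions, all of these inserts fail instead):
CREATE TABLE demo_defaults (
  n INT NOT NULL,
  s VARCHAR(10) NOT NULL
);
-- multiple-row INSERT: the NULLs are replaced by the implicit defaults (0 and '') with a warning
INSERT INTO demo_defaults (n, s) VALUES (NULL, NULL), (1, 'x');
-- single-row INSERT: fails with an error instead of substituting defaults
INSERT INTO demo_defaults (n, s) VALUES (NULL, NULL);
SELECT * FROM demo_defaults;   -- rows: (0, ''), (1, 'x')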
Disregarding that this normally indicates you should rethink your database design, some database engines will let you do this through temporary schema manipulation (removing and afterwards re-adding the 'not null' constraint).
Alternatively, you could define default values for the NOT NULL columns, pass appropriate "default" values in the INSERT statement itself, or select a database mode in which the checks are not enforced (thus allowing NULLs) or default values are generated automatically (e.g. MySQL in non-strict mode, which replaces the NULLs with a calculated default value). See the sketch after this answer.
The only valid use case I can think of is replicating the situation where a database has rows with NULLs in certain fields, and the schema is then changed to make those columns NOT NULL. MySQL, for instance, will allow you to add a NOT NULL constraint to a column that already contains NULLs.
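As a sketch of the default-value approach (MySQL syntax, with hypothetical table and column names), you can either supply placeholder values for the NOT NULL columns or give them defaults so they can be omitted:
-- hypothetical table: col_a and col_b are NOT NULL, new_col is the newly added column
ALTER TABLE my_table ADD COLUMN new_col VARCHAR(50);
-- option 1: supply placeholder values for the NOT NULL columns in the insert itself
INSERT INTO my_table (col_a, col_b, new_col) VALUES (0, '', 'value for the new column');
-- option 2: give the NOT NULL columns defaults so they can be left out of the insert
ALTER TABLE my_table ALTER col_a SET DEFAULT 0;
ALTER TABLE my_table ALTER col_b SET DEFAULT '';
INSERT INTO my_table (new_col) VALUES ('value for the new column');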

Puzzling SQL Server behaviour: results in different formats if there is a 1<>2 expression in the WHERE clause

I have two almost identical SELECT statements. I am running them on SQL Server 2012 with server collation Danish_Norwegian_CI_AS and database collation Danish_Norwegian_CI_AS. The database runs at compatibility level SQL Server 2005 (90).
I run both queries from the same client via SQL Server 2012 Management Studio. The client is a Windows 8.1 laptop.
The puzzling part is that although the statements are almost identical, the result sets are different, as shown below (one returns 24-hour time, the other AM/PM format, which gets truncated to 'P' in this case). The only difference is the 'and 1<>2' expression in the WHERE clause. I looked up and down, searched Google, dug as deep as I could, and cannot find an explanation. I tried COLLATE to force conversion; it did not help. If I use 108 to force formatting in the CONVERT call, then the result sets are alike. But not knowing why this happens is eating me alive.
Issue recreated on SqlFiddle, SQL Server 2008:
http://sqlfiddle.com/#!3/a97f8/1
Does anyone have an explanation for this?
The SQL DDL and statements after the results can be used to recreate the issue. The script creates a table with two columns, inserts some rows, and runs two selects.
On my machine the sql without the 1<>2 expression returns:
Id StartTime
----------- ---------
2 2:00P
2 2:14P
The sql with the 1<>2 expression returns:
Id StartTime
----------- ---------
2 14:00
2 14:14
if not exists (select * from sysobjects where name = 'timeVarchar')
begin
    create table timeVarchar (
        Id int not null,
        timeTest datetime not null
    )
end

if not exists (select * from timeVarchar)
begin
    -- delete from timeVarchar
    print 'inserting'
    insert into timeVarchar (Id, timeTest) values (1, '2014-04-09 11:37:00')
    insert into timeVarchar (Id, timeTest) values (2, '1901-01-01 14:00:00')
    insert into timeVarchar (Id, timeTest) values (3, '2014-04-05 15:00:00')
    insert into timeVarchar (Id, timeTest) values (2, '1901-01-01 14:14:14')
end

select
    Id,
    convert(varchar(5), convert(time, timeTest)) as 'StartTime'
from
    timeVarchar
where
    Id = 2

select
    Id,
    convert(varchar(5), convert(time, timeTest)) as 'StartTime'
from
    timeVarchar
where
    Id = 2
    and 1 <> 2
I can't answer why this is happening (at least not at the moment), but setting the conversion format explicitly does solve the issue:
select Id,
convert (varchar(5), convert (time, timeTest), 14) as "StartTime"
from timeVarchar
where Id = 2;
select Id,
convert (varchar(5), convert (time, timeTest), 14) as "StartTime"
from timeVarchar
where Id = 2
and 1 <> 2;
Going through the execution plans, the two queries end up very different indeed.
The first one passes 2 as a parameter and (!) does a CONVERT_IMPLICIT on the value. The second one embeds the value in the query text itself.
In the end, the query that actually runs in the first case explicitly does CONVERT(x, y, 0). For a US locale this is not a problem, since style 0 is the invariant (roughly US) culture, but outside the US you suddenly get style 0 instead of, say, 4 (for Germany).
So, definitely, one thing to take from this is that queries that look very much alike can execute completely differently.
The second thing is to always use CONVERT with an explicit format; the defaults don't seem to be entirely reliable.
EDIT: Ah, finally fished the thing out of the MSDN:
http://msdn.microsoft.com/en-us/library/ms187928.aspx
In earlier versions of SQL Server, the default style for CAST and
CONVERT operations on time and datetime2 data types is 121 except when
either type is used in a computed column expression. For computed
columns, the default style is 0. This behavior impacts computed
columns when they are created, used in queries involving
auto-parameterization, or used in constraint definitions.
Since the first query is invoked as a parameterized query, it gets the default style 0 rather than 121. This behaviour is fixed in compatibility level 110+ (i.e. SQL Server 2012+); on those servers the default is always 121.
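For what it's worth, you can reproduce the difference directly by forcing each style, which matches the two result sets above:
-- style 0 (what the auto-parameterized query ends up using) vs. style 121 (the documented default)
select convert(varchar(5), convert(time, '14:00:00'), 0)    -- '2:00P'  (hh:miAM/PM, truncated to 5 characters)
select convert(varchar(5), convert(time, '14:00:00'), 121)  -- '14:00'  (hh:mi:ss.nnnnnnn, truncated to 5 characters)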
It seems the problem is solved in SQL Server 2012; see this link:
http://sqlfiddle.com/#!6/a97f8/4
P.S. The SQL Fiddle you mentioned is running on SQL Server 2008.

SQL Server CE 2005 data conversion fails based on data in table

I have a query like :
select * from table where varchar_column=Numeric_value
that is fine until I run an insert script. After the new data is inserted, I must use this query:
select * from table where varchar_column='Numeric_value'
Can inserting a certain kind of data cause it to no longer implicitly convert?
After the insert script, the error is "Data conversion fails OLEDB Status = 2", and the second query does work.
I'm not certain of this, but the first query may be doing an implicit conversion of varchar_column to a numeric value, not the other way around. When you insert a value into that column that can no longer be converted, it fails. With the second query, you're doing a varchar-to-varchar comparison and all is right with the world again. My guess.
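A small sketch of that guess (hypothetical table and data, written against full SQL Server, though the same data type precedence rule is what bites in CE): numeric types have higher precedence than varchar, so the column is converted to match the literal, not the other way around, and one bad row breaks the comparison for the whole table:
create table conv_demo (varchar_column varchar(10))
insert into conv_demo values ('123')
select * from conv_demo where varchar_column = 123    -- works: '123' is implicitly converted to 123
insert into conv_demo values ('abc')
select * from conv_demo where varchar_column = 123    -- fails: 'abc' cannot be converted to a number
select * from conv_demo where varchar_column = '123'  -- works: plain varchar-to-varchar comparison
drop table conv_demo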

NULL in query values resulting in 0.00 in MySQL

I have a query that's written dynamically (OO PHP via Joomla) to insert some values into a MySQL database. The form that a user fills out has a field on it for dollar amount, and if they leave that blank I want the value going into the system to be NULL. I've written out the query to the error log as it's running; this is what the query looks like:
INSERT INTO arrc_Voucher
(VoucherNbr,securityCode,sequentialNumber, TypeFlag, CreateDT, ActivatedDT, BalanceInit, BalanceCurrent, clientName)
VALUES
('6032100199108006', '99108006','12','V','2010-10-29 12:50:01','NULL','NULL','NULL','')
When I look in the database table, though, although ActivatedDT is set correctly to NULL, BalanceInit and BalanceCurrent are both 0.00. The ActivatedDT field is a datetime, while the other two are decimal(18,2), and all three are set in the table structure as default value NULL.
If I run a query like this:
UPDATE arrc_Voucher
SET BalanceInit = null
WHERE BalanceInit like "0%"
...it does set the value to null, so why isn't the initial insert query doing so? Is it because null is in quotes? And if so, why is it setting correctly for ActivatedDT?
Remove the quotes around NULL. What's actually happening is that it's trying to insert the string 'NULL' as a number, and since it can't be converted to a number it uses the default value 0.
As for why ActivatedDT works, I'm guessing that's a date field. Failure to convert a string into a date would normally result in setting the value to 0 (which gets formatted as something like '1969-12-31'), but if you have NO_ZERO_DATE mode enabled, then it would be set to NULL instead.
If you'd like MySQL to throw an error in cases like this, when invalid values are passed, you can set STRICT_ALL_TABLES or STRICT_TRANS_TABLES (make sure you read the part about the difference between them) or one of the emulation modes, like TRADITIONAL.
You can try this with the command SET sql_mode='TRADITIONAL', or by adding sql-mode="TRADITIONAL" in my.cnf.
When you insert NULL into a MySQL database, you cannot insert it with quotes around it. It tries to insert the varchar 'NULL'. If your idea worked, you would never be able to insert the actual word NULL into the DB.
Remove the single quotes when you want to insert NULL.
You are not setting the fields to NULL but to strings ('NULL').
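Putting that together, the original insert with the quotes around NULL removed would look like this:
INSERT INTO arrc_Voucher
(VoucherNbr, securityCode, sequentialNumber, TypeFlag, CreateDT, ActivatedDT, BalanceInit, BalanceCurrent, clientName)
VALUES
('6032100199108006', '99108006', '12', 'V', '2010-10-29 12:50:01', NULL, NULL, NULL, '');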