I'm trying to create a view where I want a column to be only true or false. However, it seems that no matter what I do, SQL Server (2008) believes my bit column can somehow be null.
I have a table called "Product" with the column "Status" which is INT, NULL. In a view, I want to return a row for each row in Product, with a BIT column set to true if the Product.Status column is equal to 3, otherwise the bit field should be false.
Example SQL
SELECT CAST( CASE ISNULL(Status, 0)
WHEN 3 THEN 1
ELSE 0
END AS bit) AS HasStatus
FROM dbo.Product
If I save this query as a view and look at the columns in Object Explorer, the column HasStatus is set to BIT, NULL. But it should never be NULL. Is there some magic SQL trick I can use to force this column to be NOT NULL?
Notice that, if I remove the CAST() around the CASE, the column is correctly set as NOT NULL, but then the column's type is set to INT, which is not what I want. I want it to be BIT. :-)
You can achieve what you want by re-arranging your query a bit. The trick is that the ISNULL has to be on the outside before SQL Server will understand that the resulting value can never be NULL.
SELECT ISNULL(CAST(
CASE Status
WHEN 3 THEN 1
ELSE 0
END AS bit), 0) AS HasStatus
FROM dbo.Product
One reason I actually find this useful is when using an ORM and you do not want the resulting value mapped to a nullable type. It can make things easier all around if your application sees the value as never possibly being null. Then you don't have to write code to handle null exceptions, etc.
FYI, for people running into this message, adding the ISNULL() around the outside of the cast/convert can mess up the optimizer on your view.
We had 2 tables using the same value as an index key but with types of different numerical precision (bad, I know) and our view was joining on them to produce the final result. But our middleware code was looking for a specific data type, and the view had a CONVERT() around the column returned.
I noticed, as the OP did, that the column descriptors of the view result defined it as nullable, and I wondered: it's a primary/foreign key on 2 tables, so why would we want the result defined as nullable?
I found this post, threw ISNULL() around the column and voila - not nullable anymore.
Problem was the performance of the view went straight down the toilet when a query filtered on that column.
For some reason, an explicit CONVERT() on the view's result column didn't screw up the optimizer (it was going to have to do that anyway because of the different precisions) but adding a redundant ISNULL() wrapper did, in a big way.
All you can do in a Select statement is control the data that the database engine sends to you as a client. The select statement has no effect on the structure of the underlying table. To modify the table structure you need to execute an Alter Table statement.
First make sure that there are currently no nulls in that bit field in the table
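For example, something along these lines could be used to find and then clear any existing NULLs (the 0 default here is only an assumption; use whatever value fits your data):
SELECT COUNT(*) AS NullStatusRows FROM dbo.Product WHERE Status IS NULL;
UPDATE dbo.Product SET Status = 0 WHERE Status IS NULL;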
Then execute the following ddl statement:
Alter Table dbo.Product Alter column status bit not null
If, on the other hand, all you are trying to do is control the output of the view, then what you are doing is sufficient. Your syntax will guarantee that the HasStatus column in the view's resultset will in fact never be null; it will always be either bit value 1 or bit value 0. Don't worry about what Object Explorer says...
Related
I have a table with 2 varchar columns (name and value)
and I have a query like this:
select * from attribute
where name = 'width' and cast( value as integer) > 12
This query works, but I suppose there could be an issue with the execution plan because of the index built over the value column: it is technically varchar, but we convert it to integer.
Are there ways to fix it?
P.S. I can't change type to int because the database design implies that value could be any type.
Performance should not be your first worry here.
Your statement is prone to failures. You read this as:
read all rows with name = 'width'
of these rows cast all values to integer and only keep those with a value greater than 12
But the DBMS is free to check conditions in the WHERE clause in any order. If the DBMS does that:
cast all values to integer and only keep the rows with a value greater than 12
of these rows keep all with name = 'width'
the first step will already cause a runtime error if there is a non-integer value in that table, which is likely.
So first get your query safe. The following should work:
select *
from
(
select *
from attribute
where name = 'width'
) widths
where cast(value as integer) > 12;
This will still fail when your width values contain non-integers. So, to make this safe even in the case of invalid data in the table, you may want to add a check in the subquery that the value only contains digits, as sketched below.
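In PostgreSQL, that digit check could look something like this (assuming plain non-negative integer strings; values with signs, spaces or decimals would need a broader pattern):
select *
from
(
  select *
  from attribute
  where name = 'width'
    and value ~ '^[0-9]+$'  -- keep only values made up entirely of digits
) widths
where cast(value as integer) > 12;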
And yes, this won't become super-fast. You sacrifice speed (and data consistency checks) for flexibility with this data model.
What you can do, however, is create an index on both columns, so the DBMS can quickly find all width rows and then have the value directly at hand, before it accesses the table:
create index idx on attribute (name, value);
As far as I know, there is no fail-safe cast function in PostgreSQL. Otherwise you could use this and have a function index instead. I may be wrong, so maybe someone can come up with a better solution here.
When performing a select query against a database, the returned result will have columns of a certain type.
If you perform a simple query like
select name as FirstName
from database
then the type of the resulting FirstName column will be that of database.name.
If you perform a query like
select age*income
from database
then the resulting data type will be that of the return value from the age*income expression.
What happens when you use something like
select try_convert(float, mycolumn)
from database
where database.mycolumn has type nvarchar? I assume that the resulting column has type float, which is decided by the return type of the call to try_convert.
But consider this example
select coalesce(try_convert(float, mycolumn), mycolumn)
from database
which should give a column with the values of mycolumn unchanged if try_convert fails, but mycolumn as a float when/if that is possible.
Is this determination made as the first row is handled?
Or will the type always be determined by the function called independently of the data in the rows?
Is it possible to conditionally perform a conversion?
I would like to convert to float in the case where this is possible for all rows and leave unchanged in case it fails for any row.
Update 1
It seems that the answer to the first part of the question is that the column type is determined by the expression at compile time, which means that you cannot have a dynamic column type that depends on the data.
I see two workarounds for this.
Option 1
For each column, count the number of non-null rows of try_convert(float, mycolumn), and if this number is 0 then do not perform the conversion. This will of course read the rows many times and might be inefficient.
Option 2
Simply repeat all columns: once without conversion and once with conversion, and then use the interesting one.
One could also perform another select statement where only columns with non-null values are included.
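A minimal sketch of option 2, with illustrative table and column names:
select mycolumn as mycolumn_raw,
       try_convert(float, mycolumn) as mycolumn_converted
from mytable;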
Background
I have a dynamically generated pivot table with many (~200) columns, of which some have string values and others have numbers.
I would like to cast all columns as float where this is possible and leave the other columns unchanged (or cast as nvarchar).
The data
The data is mostly NULL values, with some columns having text strings and other columns having numbers. There are no columns with "mixed" content.
The types are determined at compile time, not at execution. try_convert(float, ...) knows the exact type at parse/compile time, because float here is a keyword, not a value. For expressions like COALESCE(foo, bar) the type is similarly determined at compile time, following the rules of data type precedence that lad already linked.
When you build your dynamic pivot you'll have to know the result type, using the same inference rules the SQL parser/compiler uses. I understand some rules are counter-intuitive; when in doubt, test it out.
For the detail oriented: some expression types can be determined at parse time, e.g. N'foo'. But most have to be resolved at compile time, when the names of tables and columns are bound to actual objects in the database, because only then is the type discovered.
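One way to test what the compiler infers is to ask SQL Server to describe the result set. This is a sketch assuming SQL Server 2012+, where sys.dm_exec_describe_first_result_set is available, and a hypothetical table dbo.SomeTable with an nvarchar column mycolumn:
SELECT name, system_type_name, is_nullable
FROM sys.dm_exec_describe_first_result_set(
    N'SELECT COALESCE(TRY_CONVERT(float, mycolumn), mycolumn) AS converted FROM dbo.SomeTable',
    NULL, 0);
Because float outranks nvarchar in data type precedence, the COALESCE result is typed as float, so any non-numeric strings in mycolumn would raise a conversion error at run time rather than fall back to nvarchar.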
I'm using SQL Server 2005 with asp.net C#.
There is a search query on my site with different parameters.
fromAge as tinyint
toAge as tinyint
fromHeight as tinyint
toHeight as tinyint
gender as tinyint
withImage as bit
region as tinyint
astrologicaSign as tinyint
I get these parameters the first time a user performs a search, save his search preferences in a search table, and then use them against the Users table, from which I select users that meet the requirements.
The problem is that some values are conditional. For example, with withImage (bit) I need an IF statement that checks whether 0 or 1 was provided and then performs the appropriate SELECT, i.e. if withImage = 1 the query's WHERE clause would include picture1 <> '0'; otherwise there would be no such condition at all.
I ended up with 10 nested IF statements around the initial query (which I simplified for the sake of the example).
Is there a way to avoid this other than dynamic SQL?
This is quite simply achieved using an AND statement
SELECT * FROM User
WHERE (withImage = 1 AND picture1 <> '0') OR withImage = 0
You can then add similar clauses for each element. Note that if the logic gets more complicated you can also use CASE statements in the WHERE clause.
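The same pattern can be extended to the other optional parameters, for example (the Users table, the column names and the convention that unused parameters are passed as NULL are all assumptions here):
SELECT * FROM Users
WHERE (@withImage = 0 OR picture1 <> '0')
  AND (@fromAge IS NULL OR Age >= @fromAge)
  AND (@toAge IS NULL OR Age <= @toAge)
  AND (@gender IS NULL OR Gender = @gender)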
If you can align the parameter values you are passing to be equal to the values you want to retrieve (or at least always do an equals comparison), then you can use CASE WHEN quite effectively like this
SELECT * FROM User
WHERE picture1 = CASE WHEN @WithImage = 1 THEN @WithImage ELSE picture1 END
That way it is comparing the picture1 field with the parameter if it is 1 or comparing the field with itself if it is not.
Others have given you a solution, but honestly, this is one case where dynamic SQL is likely to improve performance. I'm not a big dynamic SQL fan, but this is one case where it does a better job than almost anything else.
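A minimal sketch of the dynamic SQL route with sp_executesql, assuming parameter names like @withImage and @fromAge and a dbo.Users table (only two of the ten parameters are shown):
DECLARE @sql nvarchar(max)
SET @sql = N'SELECT * FROM dbo.Users WHERE 1 = 1'
IF @withImage = 1
    SET @sql = @sql + N' AND picture1 <> ''0'''
IF @fromAge IS NOT NULL
    SET @sql = @sql + N' AND Age >= @fromAge'
-- ...repeat for the remaining optional parameters...
EXEC sp_executesql @sql, N'@fromAge tinyint', @fromAge = @fromAge
Because the unused conditions never make it into the final statement, the optimizer only has to deal with the filters that actually apply.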
The question
Is it possible to ask SSIS to cast a value and return NULL in case the cast is not allowed instead of throwing an error ?
My environment
I'm using Visual Studio 2005 and Sql Server 2005 on Windows Server 2003.
The general context
Just in case you're curious, here is my use case. I have to store data coming from somewhere in a generic table (key/value structure with history) which contains values that can be strings, numbers or dates. The structure is something like this:
CREATE TABLE [Values] (
    Id int,
    Date datetime, -- for history
    [Key] nvarchar(50) not null,
    Value nvarchar(50),
    DateValue datetime,
    NumberValue numeric(19,9)
)
I want to put the raw value in the Value column and try to put the same value
in the DateValue column when I'm able to cast it to datetime
in the NumberValue column when I'm able to cast it to a number
Those two typed columns would make all sort of aggregation and manipulation much easier and faster later.
That's it, now you know why I'm asking this strange question.
Thanks in advance for your help.
You could also try a Derived Column component and test the value of the potential date/number field, or simply cast it and redirect any rows that fail so they become the NULL values for these two fields.
(1) If you simply cast the field every time with an expression like this in the Derived Column component: (DT_DATE)[MYPOTENTIALDATE], you can redirect the rows that fail this cast and manipulate the data from there.
OR
(2) You can do something like this in the Derived Column component: ISNULL([MYPOTENTIALDATE]) ? (DT_DATE)"2099-01-01" : (DT_DATE)[MYPOTENTIALDATE]. I generally send through '2099-01-01' when a date is NULL rather than messing with NULL (works better with cubes, etc).
Of course (2) won't work if the [MYPOTENTIALDATE] field comes through as something other than a DATETIME or NULL, e.g. sometimes it is a word like "hello".
Those are the options I would explore, good luck!
In dealing with this same sort of thing I found the error handling in SSIS was not specific enough. My approach has been to actually create an errors table, query a source table where the data is stored as varchar, and log errors to the error table with something like the statement below. I have one of these statements for each column, because it was important for me to know which column failed. Then, after I log all errors, I do an INSERT where I select those records in SomeInfo that do not have any errors (see the sketch after the query below). In your case you could do more advanced things based on the ColumnName in the errors table to insert default values.
INSERT INTO SomeInfoErrors
([SomeInfoId]
,[ColumnName]
,[Message]
,FailedValue)
SELECT
SomeInfoId,
'PeriodStartDate',
'PeriodStartDate must be in the format MM/DD/YYYY',
PeriodStartDate
FROM
SomeInfo
WHERE
ISDATE(PeriodStartDate) = 0 AND [PeriodStartDate] IS NOT NULL;
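A rough sketch of that follow-up INSERT, loading only the rows that logged no errors (the destination table SomeInfoTyped and its columns are purely illustrative):
INSERT INTO SomeInfoTyped (SomeInfoId, PeriodStartDate)
SELECT s.SomeInfoId,
       CONVERT(datetime, s.PeriodStartDate)
FROM SomeInfo s
WHERE NOT EXISTS (SELECT 1 FROM SomeInfoErrors e WHERE e.SomeInfoId = s.SomeInfoId);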
Try using a Conditional Split and have the records where the data is a date go along one path, and the others go along a different path where they are updated to NULL before being inserted.