Difference between a value and operand - operators

The question is pretty simple: what is the difference between a value and an operand? They are very similar; however, they must have a slight difference, as they have different names.

Related

Calculate into new column as percentage

Very new and learning SQL. Trying to calculate a percentage from two columns as such:
Select (total_deaths/total_cases)*100 AS death_percentage
From covid_deaths
I’m getting the column but it’s showing as an Integer and all values are zero.
I’ve tried using CAST to make it a decimal but I don’t have the syntax right. Very noob question, but it seems simple enough. Do I have to declare the numeric type of all calculated columns?
In addition to the answer linked by Stefan Zivkovik in a comment above, it may be good to handle division by zero. Even if you don't ever anticipate total_cases will be zero, someone may reuse this part of the code (for instance, if total_cases is later broken into subcategories).
You probably also want to ROUND to a certain number of decimal places
SELECT
    CASE WHEN total_cases > 0 THEN
        ROUND((total_deaths::NUMERIC / total_cases) * 100, 1)
    END AS death_percentage
FROM covid_deaths
By not specifying an ELSE clause, the column will be null when total_cases is zero. If this doesn't work for your purposes, you could specify another default value (like zero) with ELSE.
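For comparison, a minimal sketch of the same idea using the CAST syntax the question was reaching for, with NULLIF guarding the division (standard SQL, equivalent to the :: shorthand above):
SELECT
    ROUND(CAST(total_deaths AS NUMERIC) / NULLIF(total_cases, 0) * 100, 1)
        AS death_percentage
FROM covid_deaths;
NULLIF(total_cases, 0) returns NULL when total_cases is zero, so the division yields NULL instead of an error - the same outcome as the CASE version.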

What is the purpose of using `timestamp(nullif('',''))`

Folks
I am in the process of moving a decade-old back-end from DB2 9.5 to Oracle 19c.
I frequently see in SQL queries and view definitions bizarre timestamp(nullif('','')) constructs used instead of a plain null.
What is the point of doing so? Why would anyone in their right mind want to do so?
Disclaimer: my SQL skills are fairly mediocre. I might well miss something obvious.
It appears to create a NULL value with a TIMESTAMP data type.
The TIMESTAMP DB2 documentation states:
TIMESTAMP scalar function
The TIMESTAMP function returns a timestamp from a value or a pair of values.
TIMESTAMP(expression1, [expression2])
expression1 and expression2
The rules for the arguments depend on whether expression2 is specified and the data type of expression2.
If only one argument is specified it must be an expression that returns a value of one of the following built-in data types: a DATE, a TIMESTAMP, or a character string that is not a CLOB.
If you try to pass an untyped NULL to the TIMESTAMP function:
TIMESTAMP(NULL)
Then you get the error:
The invocation of routine "TIMESTAMP" is ambiguous. The argument in position "1" does not have a best fit.
To invoke the function, you need to pass a DATE, a TIMESTAMP, or a non-CLOB string, which means that you need to coerce the NULL to have one of those types.
This could be:
TIMESTAMP(CAST(NULL AS VARCHAR(14)))
TIMESTAMP(NULLIF('',''))
Using NULLIF is more confusing but, if I have to make an excuse for using it, it is slightly less to type than casting a NULL to a string.
The equivalent in Oracle would be:
CAST(NULL AS TIMESTAMP)
This also works in DB2 (and is even less to type).
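For reference, a quick side-by-side sketch, using each database's one-row dummy table (SYSIBM.SYSDUMMY1 in DB2, DUAL in Oracle):
-- DB2: both produce a NULL with the TIMESTAMP data type
SELECT TIMESTAMP(NULLIF('','')) FROM SYSIBM.SYSDUMMY1;
SELECT CAST(NULL AS TIMESTAMP) FROM SYSIBM.SYSDUMMY1;
-- Oracle 19c equivalent
SELECT CAST(NULL AS TIMESTAMP) FROM DUAL;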
It is not clear why - in any SQL dialect, no matter how old - one would use an argument like nullif('',''). Regardless of the result, that is a constant that can be computed once and for all and given as the argument to timestamp(). Very likely it evaluates to null in any dialect and any version, so it should be the same as timestamp(null). The code you found suggests that whoever wrote it didn't know what they were doing.
One might need to write something like that - rather than a plain null - to get a null of a specific data type. Even though "theoretical" SQL says null does not have a data type, a typed null may be needed, for example in a view, to define the data type of the column produced by such an expression.
In Oracle you can use the cast() function, as MT0 demonstrated already - that is by far the most common and most elegant equivalent.
If you want something much closer in spirit to what you saw in that old code, to_timestamp(null) will have the same effect. There is no reason, though, to write something as convoluted as that nullif() call when null is given as the argument.
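To make the view scenario concrete, here is a hypothetical sketch (the orders table and column names are invented for illustration). Presumably the old code used the construct to give an always-null view column a definite TIMESTAMP type; after migration, the Oracle version can use cast():
-- DB2 (old style): shipped_at is typed as TIMESTAMP even though always NULL
CREATE VIEW v_orders AS
SELECT order_id,
       TIMESTAMP(NULLIF('','')) AS shipped_at
FROM orders;
-- Oracle 19c: same effect, stated directly
CREATE VIEW v_orders AS
SELECT order_id,
       CAST(NULL AS TIMESTAMP) AS shipped_at
FROM orders;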

Parsing a large value that includes 3 smaller values of scientific notation

I'm using VB.Net 2013 and really could use some help. Perhaps I have been staring at it too long. I am presented with a value from a variable. The specific value is this:
3.190E+01+3.366E+01+8.036E+00
The value is actually 3 smaller values in scientific notation, as follows:
3.190E+01
3.366E+01
8.036E+00
I need to get the individual values into individual variables. Once I have the individual values, I need to calculate the notation of each value, so 3.190E+01 is equivalent to 3.190*10^1 and 8.036E+00 is equivalent to 8.036*10^0. I can probably figure out the last part of this question if I can just get the individual values. The caveat is that the numbers will vary in size and the scientific notation part will not always be the same. I do believe it will always be E+XX, though, so it may be possible to use some regex stuff that I don't fully understand.
Thank you, I look forward to your help and it is very much appreciated.

SQL Server 2008 - Default column value - should I use null or empty string?

For some time I've been debating whether, for columns that may not receive data, I should set the value to an empty string ('') or just allow null.
I would like to hear what the recommended practice is here.
If it makes a difference, I'm using C# as the consuming application.
I'm afraid that...
it depends!
There is no single answer to this question.
As indicated in other responses, at the level of SQL, NULL and empty string have very different semantics: the former indicates that the value is unknown, the latter indicates that the value is this "invisible thing" (in displays and reports) but nonetheless a known value. An example commonly given in this context is that of the middle name. A null value in the middle_name column would indicate that we do not know whether the underlying person has a middle name at all, and if so what that name is; an empty string would indicate that we "know" that this person does not have a middle name.
This said, two other kinds of factors may help you choose between these options, for a given column.
The very semantics of the underlying data, at the level of the application.
Some considerations in the way SQL works with null values
Data semantics
For example, it is important to know whether the empty string is a valid value for the underlying data. If it is, we may lose information if we also use the empty string for "unknown info". Another consideration is whether some alternate value may be used when we have no info for the column; maybe 'n/a', 'unspecified', or 'tbd' are better values.
SQL behavior and utilities
Considering SQL behavior, the choice of using or not using NULL may be driven by space considerations, by the desire to create a filtered index, or by the convenience of the COALESCE() function (which can be emulated with CASE expressions, but in a more verbose fashion). Another consideration is whether any query may attempt to concatenate multiple columns (as in SELECT name + ', ' + middle_name AS LongName), since concatenating a NULL yields NULL.
Beyond the validity of the choice of NULL vs. empty string in a given situation, a general consideration is to try to be as consistent as possible, i.e. to stick to ONE particular way, and to depart from it only purposely and explicitly, for good reasons and in few cases.
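To illustrate the two SQL utilities mentioned above, a short sketch in SQL Server syntax (the people table and its columns are hypothetical):
-- COALESCE convenience: with a NULL middle name, ', ' + middle_name is NULL
-- and COALESCE drops it; with an empty-string middle name you would instead
-- get a dangling ', ' to clean up
SELECT name + COALESCE(', ' + middle_name, '') AS LongName
FROM people;
-- Filtered index (SQL Server 2008+): only rows with a known middle name are
-- indexed, which is only possible when the unknowns are stored as NULL
CREATE NONCLUSTERED INDEX ix_people_middle_name
    ON people (middle_name)
    WHERE middle_name IS NOT NULL;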
Don't use empty string if there is no value. If you need to know if a value is unknown, have a flag for it. But 9 times out of 10, if the information is not provided, it's unknown, and that's fine.
NULL means unknown value. An empty string means a known value - a string with length zero. These are totally different things.
Empty string when I want a valid default value that may or may not be changed, for example, a user's middle name.
NULL when it is an error if the ensuing code does not set the value explicitly.
However, by initializing strings with the Empty value instead of null, you can reduce the chances of a NullReferenceException occurring.
Theory aside, I tend to view:
Empty string as a known value
NULL as unknown
In this case, I'd probably use NULL.
One important thing is to be consistent: mixing NULLs and empty strings will end in tears.
On a practical implementation level, an empty string takes 2 bytes in SQL Server, whereas NULLs are bitmapped. In some conditions, and for wide/large tables, this makes a difference in performance, because it's more data to shift around.

MySQL Type Conversion: Why is float the lowest common denominator type?

I recently ran into an issue where a query was causing a full table scan, and it came down to a column having a different definition than I thought: it was a VARCHAR, not an INT. When queried with "string_column = 17", the query ran; it just couldn't use the index. That really threw me for a loop.
So I went searching and found what happened; the behavior I was seeing is consistent with what MySQL's documentation says:
In all other cases, the arguments are compared as floating-point (real) numbers.
So my question is... why a float?
I could see trying to convert numbers to strings (although the points in the MySQL page linked above are good reasons not to). I could also understand throwing some sort of error, or generating a warning (my preference). Instead it happily runs.
So why convert everything to a float? Is that from the SQL standard, or based on some other reason? Can anyone shed some light on this choice for me?
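A minimal sketch of the behavior in question, with a hypothetical table, which you can verify with EXPLAIN:
CREATE TABLE t (string_column VARCHAR(20), KEY idx_sc (string_column));
-- Compared as floats: the index on string_column cannot be used, and
-- '17', '17.0' and '17e0' all compare equal to 17
EXPLAIN SELECT * FROM t WHERE string_column = 17;
-- Compared as strings: idx_sc can be used
EXPLAIN SELECT * FROM t WHERE string_column = '17';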
I feel your pain. We have a column in our DB that holds what is well known in the company as an "order number". But it's not always a number; in certain circumstances it can contain other characters too, so we keep it in a varchar. With SQL Server 2000, this means that selecting on "order_number = 123456" is bad. SQL Server effectively rewrites the predicate as "CAST(order_number AS INT) = 123456", which has two undesirable effects:
the index is on order_number as a varchar, so it starts a full scan
those non-numeric order numbers eventually cause a conversion error to be thrown to the user, with a rather unhelpful message.
In a way it's good that we do have those non-numeric "numbers", since at least badly-written queries that pass the parameter as a number get trapped rather than just sucking up resources.
I don't think there is a standard. I seem to remember PostgreSQL 8.3 dropped some of the default casts between number and text types so that this kind of situation would throw an error when the query was being planned.
Presumably "float" is considered to be the widest-ranging numeric type and therefore the one that all numbers can be silently promoted to?
Oh, and there are similar problems (but no conversion errors) when you have varchar columns and a Java application that passes all string literals as nvarchar: suddenly your varchar indices are no longer used, and good luck finding the occurrences of that happening. Of course you can tell the Java app to send strings as varchar, but now we're stuck with only using characters in windows-1252, because that's what the DB was created with 5-6 years ago when it was just a "stopgap solution", ah-ha.
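For columns like that order_number varchar, the usual workaround is to keep both sides of the comparison as strings so the varchar index stays usable - a sketch with a hypothetical orders table:
-- Full scan plus possible conversion errors: the predicate is effectively
-- rewritten as CAST(order_number AS INT) = 123456
SELECT * FROM orders WHERE order_number = 123456;
-- Index seek, no conversion: pass the value as a string instead
SELECT * FROM orders WHERE order_number = '123456';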
Well, it's easily understandable: float is able to hold the greatest range of numbers.
If the underlying datatype is datetime, for instance, it can be simply converted to a float number that has the same intrinsic value.
If the datatype is a string, it is easy to parse it to a float, the performance degradation notwithstanding.
So float is the better datatype to fall back on.
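One more wrinkle with that fallback, sketched below under the assumption that the comparison happens in IEEE 754 double precision (which carries only about 15-17 significant decimal digits): sufficiently large distinct values can compare equal.
-- Integer-to-integer comparison is exact:
SELECT 9007199254740993 = 9007199254740992;   -- 0 (false)
-- String-to-integer comparison happens as floats; 2^53 + 1 is not
-- representable as a double and rounds to 2^53, so these compare equal:
SELECT '9007199254740993' = 9007199254740992; -- 1 (true)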