What is returned when I count nulls? - sql

If I have a table:
ID | field
---+-------
 1 | NULL
 2 | NULL
 3 | NULL
All the field records are null. I want to know what is returned when I query:
SELECT Count(field)
FROM table
EDIT
I want to know what to expect on various implementations that use SQL. Since I don't have access to other implementations besides Access, I cannot check myself, but it is important for me to know this. The Count query is just an example to show how an implementation treats null values. Please don't show me any workarounds, how to count nulls, or how to ignore nulls. Just answer what will happen when I run it. Thank you.

What you can do is use a conditional SUM:
SELECT
  SUM(CASE WHEN field IS NULL THEN 1 ELSE 0 END) AS numNULL,
  SUM(CASE WHEN field IS NULL THEN 0 ELSE 1 END) AS numNOT_NULL
FROM table
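Given the sample table above (three rows, all with field NULL), this should return:
numNULL | numNOT_NULL
--------+------------
      3 |           0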
EDIT
Sorry if I misunderstood your question, but your comment is very different from your original question.
http://www.w3schools.com/sql/sql_func_count.asp
SQL COUNT(column_name) Syntax: The COUNT(column_name) function returns the number of values (NULL values will not be counted) of the specified column.
On every platform, COUNT(field) here should return 0, since COUNT(column_name) ignores NULLs.
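To make the difference explicit, here is a minimal sketch (assuming a scratch table named t that mirrors the sample above; the syntax is deliberately generic):
CREATE TABLE t (id int, field varchar(10));
INSERT INTO t (id, field) VALUES (1, NULL);
INSERT INTO t (id, field) VALUES (2, NULL);
INSERT INTO t (id, field) VALUES (3, NULL);

SELECT COUNT(*) FROM t;      -- 3: COUNT(*) counts rows, NULL or not
SELECT COUNT(field) FROM t;  -- 0: COUNT(column) skips NULL values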
You can use SQLFiddle to test your query on different databases (MySQL, SQLite, MS SQL Server, PostgreSQL, Oracle). Use its Text to DDL function to create the table easily. Take into consideration that the site sometimes times out, so you may need to try again later.
Here is a MySQL example: http://sqlfiddle.com/#!9/413ea7/1

Related

Oracle SQL: Select *, statement problem (vs SQL Server)

In SQL Server, I can write some code with a select * statement, but the same code returns an error in Oracle.
Here is an example - let's say I have a table Order which contains these columns:
[Date] | [Order_ID] | [Amt] | [Salesman]
In SQL Server, I can write code like this:
SELECT
*,
CASE WHEN [Amt] >= 0 THEN [Order_ID] END AS [Order_with_Amt]
FROM Order
The result will be:
Date | Order_ID | Amt | Salesman | Order_with_Amt
-----------+----------+-----+----------+---------------
01/01/2022 | A123 | 100 | Peter | A123
01/01/2022 | A124 | 0 | Sam | null
However, in Oracle, I cannot write the code as:
SELECT
*,
CASE WHEN "Amt" >= 0 THEN "Order_ID" END AS "Order_with_Amt"
FROM Order
It will throw an error :
ORA-00923: FROM keyword not found where expected
Any suggestion on this issue?
In Oracle's dialect of SQL, if you combine * with anything else then it has to be prefixed with the table name:
SELECT
Order.*,
CASE WHEN "Amt" >= 0 THEN "Order_ID" END AS "Order_with_Amt"
FROM Order
or if you alias the table (note there is no AS keyword for table aliases):
SELECT
o.*,
CASE WHEN "Amt" >= 0 THEN "Order_ID" END AS "Order_with_Amt"
FROM Order o
That is shown in the railroad diagram in the documentation. The top branch has a plain * but it can't be combined with anything else - there is no loop back around to the other options. The branches that do allow you to loop and add comma-separated terms have .* prefixed by a table (or view) name or a table alias.
You are also using quoted identifiers, both for your column names and column expression aliases. It might be worth reading up on Oracle's object name rules, and seeing if you really need and want to use those.
If you create a table with a column with a quoted mixed-case name like "Amt" then you have to refer to it with quotes and exactly the same casing everywhere, which is a bit of a pain and easy to get wrong.
If you create it with an unquoted identifier like amt or Amt or AMT (or even quoted uppercase as "AMT") then it is stored in the data dictionary in the same uppercase form, and you can refer to it without quotes and with any case - select amt, select Amt, select AMT, etc.
But order is a reserved word, as @Joel mentioned, so if you really do (and must) have a table with that name then it has to be a quoted identifier. I would strongly suggest you call it something else though, like orders.
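As an illustrative sketch of those identifier rules (hypothetical table name demo):
CREATE TABLE demo ("Amt" NUMBER, amt2 NUMBER);

SELECT "Amt" FROM demo;   -- works: quoted and in exactly the created case
SELECT amt FROM demo;     -- fails with ORA-00904: unquoted names fold to AMT
SELECT Amt2 FROM demo;    -- works: unquoted identifiers match in any case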
I see five things.
1. The two databases use different dialects of SQL, so of course some features work differently between them, even if this particular feature works fine in both.
2. The sample for Postgresql is using string literals instead of column names. It is comparing the string 'Amt' to the value 0, instead of the value from a column named Amt.
3. ORDER is a reserved word, and therefore you need to take extra steps when using it as a table name. For SQL Server, this is square brackets ([Order]). For Postgresql, it's double quotes ("Order"). See the sketch after this list.
4. Postgresql is sometimes case sensitive about these table names (SQL Server is not; it doesn't care).
5. SELECT * is poor practice in the first place. I know many of us often use it as a placeholder while building a complex query, but we should always fill in real column names once the query is ready for use.
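Putting the reserved-word and alias points together, a hedged sketch of a version that should run in Oracle (assuming the table and columns really were created with those exact quoted, mixed-case names):
SELECT
    o.*,
    CASE WHEN o."Amt" >= 0 THEN o."Order_ID" END AS "Order_with_Amt"
FROM "Order" o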

Replace NULL in my Table with <SOME VALUE> in PostgreSQL

Upon searching ways to replace NULL values in my table with 0 on Stack Overflow, it appears that many threads I've found point to using the COALESCE function. E.g. postgresql return 0 if returned value is null
I understand that the COALESCE function "replaces" null values for your specific query; however, the table itself remains untouched. That is, if you queried the table again in a separate query without COALESCE, null values would still exist.
My question is, is there a way for me to replace NULL values permanently in my table with a specified value (like 0) so I don't have to COALESCE in every query? And as an extension to my question, is it considered bad practice to modify the original table instead of doing manipulations in queries?
You can just do an UPDATE:
UPDATE table SET col1 = 0 WHERE col1 IS NULL;
This will update every row in your table where col1 is currently NULL.
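If you also want to keep new NULLs from appearing later (a step beyond the question, sketched here with hypothetical names), PostgreSQL lets you set a default and a NOT NULL constraint after the backfill:
UPDATE mytable SET col1 = 0 WHERE col1 IS NULL;
ALTER TABLE mytable ALTER COLUMN col1 SET DEFAULT 0;
ALTER TABLE mytable ALTER COLUMN col1 SET NOT NULL;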
I understand you already got your answer, but in further queries you can also use the COALESCE function (nvl is the Oracle equivalent; in PostgreSQL, COALESCE is the standard choice). You can replace the NULL values with 0 at runtime, just to be sure your query works as expected.
SELECT
  id,
  COALESCE(col1, 0)
FROM
...
It doesn't update the table, but you can be sure that all NULL values are displayed as 0 like you want, in case you forgot to run the update.

Find Top 1 best matching string in SQL server

I have a table 'MyTable' which holds some business logic. This table has a column called Expression which contains a pattern string built from other columns.
My query is
Select Value from MyTable where @Parameters_Built like Expression
The variable @Parameters_Built is built from the input parameters by concatenating them all together.
In my current scenario,
@Parameters_Built = '1|2|Computer IT/Game Design & Dev (BS)|0|1011A|1|0|'
Below are the expressions:
Expression
-------------------
%%|%%|%%|0|%%|%%|0|
1|2|%%|0|%%|%%|0|
1|%%|%%|0|%%|%%|0|
So my query above matches all three rows, but it should return only the second row (the maximum match).
I don't just need a fix for this specific scenario; it's only an example. I need a general solution for choosing the best match. Any ideas?
Try:
Select top 1 * from MyTable
where @Parameters_Built like Expression
order by len(Expression)-len(replace(Expression,'%',''))
- this orders the results by the number of % wildcard characters in Expression, ascending, so the pattern with the fewest wildcards (the most specific match) comes first.
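As a self-contained sketch (T-SQL, SQL Server 2008+ for the inline DECLARE, using the sample value from the question):
DECLARE @Parameters_Built varchar(100) =
    '1|2|Computer IT/Game Design & Dev (BS)|0|1011A|1|0|';

SELECT TOP 1 Value
FROM MyTable
WHERE @Parameters_Built LIKE Expression
ORDER BY LEN(Expression) - LEN(REPLACE(Expression, '%', ''));
-- LEN(x) - LEN(REPLACE(x, '%', '')) counts the % characters,
-- so the least-wildcarded (most specific) expression wins.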

How can I check identity in 1 column in SQL query?

I need a way to get a true/false reply from a SQL query that tells me whether all the values in one column are the same or not.
This will work if there are no null values in the column; null values are ignored by this solution.
SELECT
CASE
WHEN MIN(Column1) <> MAX(Column1) THEN 'FALSE'
ELSE 'TRUE'
END
FROM MyTable
I tested this on SQL Server with Column1 as both varchar and int.
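If you want NULLs to count as a mismatch instead of being ignored, one alternative sketch (not part of the original answer) compares the distinct count against the row count:
SELECT
  CASE
    WHEN COUNT(DISTINCT Column1) = 1 AND COUNT(*) = COUNT(Column1) THEN 'TRUE'
    ELSE 'FALSE'
  END
FROM MyTable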
If I understand the question correctly, you're looking to compare two strings? In MySQL, that would be STRCMP().
Update:
Based upon feedback in replies to my answer, the following should work in most SQL variants.
SELECT COUNT(*) = 1 FROM (SELECT Column1 FROM MyTable GROUP BY Column1) AS g
This groups the rows by their value of Column1 and then counts how many groups there are; if there is only one group, all rows share the same value. (The boolean expression in the select list works in MySQL and PostgreSQL; in SQL Server, wrap it in a CASE.)

how to filter in sql script to not include any column null

Imagine there are 50 columns. I don't want any row that includes a null value. Is there any tricky way to do this?
SQL Server 2005
Sorry, not really. All 50 columns have to be checked in one form or another.
Column1 IS NOT NULL AND ... AND Column50 IS NOT NULL
Of course, under these conditions, why not disallow NULLs in the first place by declaring the columns NOT NULL in the table definition?
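For illustration, a sketch of that table-definition approach (hypothetical column names and types):
CREATE TABLE MyTable (
    Column1 int NOT NULL,
    Column2 varchar(50) NOT NULL
    -- ... repeat for the remaining columns
);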
If it's SQL Server 2005+ you can do something like:
SELECT fields
FROM MyTable
WHERE stuff
EXCEPT -- This excludes the below results
SELECT fields
FROM MyTable
WHERE (Col1 + Col2 + Col3....) IS NULL
Adding NULL to a value results in NULL, so if any column is NULL, the combined expression is NULL and the row is excluded.
This may need adjusting based on your data types, but adding NULL to either a char/varchar or a number results in another NULL.
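A quick demonstration of that propagation rule (SQL Server syntax):
SELECT 1 + NULL;      -- NULL: arithmetic with NULL yields NULL
SELECT 'abc' + NULL;  -- NULL: string concatenation with + yields NULL too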
If you just don't want NULLs to appear in the output (rather than filtering the rows out), you can handle it in the select statement:
SELECT ISNULL(firstname,''), ISNULL(lastname,'') FROM TABLE WHERE SOMETHING=1
This replaces nulls with string blanks. If you want another value, use ISNULL(firstname,'empty') for example; you can put anything where the word empty is.
I prefer this query
select *
from table
where column1>''
and column2>''
and (column3>'' or column3<'')
This allows SQL Server to use an index seek if the proper index(es) exist. You would have to use the column3-style syntax (>'' or <'') for any numeric column that could hold negative values.