Oracle SQL: Select *, statement problem (vs SQL Server) - sql

In SQL Server, I can write a query with a SELECT * statement plus an extra computed column, but the same query returns an error in Oracle.
Here is an example - let's say I have a table Order which contains these columns:
[Date] | [Order_ID] | [Amt] | [Salesman]
In SQL Server, I can write code like this :
SELECT
*,
CASE WHEN [Amt] >= 0 THEN [Order_ID] END AS [Order_with_Amt]
FROM Order
The result will be :
Date       | Order_ID | Amt | Salesman | Order_with_Amt
-----------+----------+-----+----------+---------------
01/01/2022 | A123     | 100 | Peter    | A123
01/01/2022 | A124     | 0   | Sam      | null
However, in Oracle, I cannot write the code as :
SELECT
*,
CASE WHEN "Amt" >= 0 THEN "Order_ID" END AS "Order_with_Amt"
FROM Order
It will throw an error :
ORA-00923: FROM keyword not found where expected
Any suggestion on this issue?

In Oracle's dialect of SQL, if you combine * with anything else then it has to be prefixed with the table name:
SELECT
Order.*,
CASE WHEN "Amt" >= 0 THEN "Order_ID" END AS "Order_with_Amt"
FROM Order
or if you alias the table (note there is no AS keyword for table aliases):
SELECT
o.*,
CASE WHEN "Amt" >= 0 THEN "Order_ID" END AS "Order_with_Amt"
FROM Order o
That is shown in the railroad diagram in the documentation.
The top branch has a plain * that can't be combined with anything else - there is no loop around to the other options. The branches that do allow you to loop and add comma-separated terms have .* prefixed by a table (or view) name or a table alias.
You are also using quoted identifiers, both for your column names and column expression aliases. It might be worth reading up on Oracle's object name rules, and seeing if you really need and want to use those.
If you create a table with a column with a quoted mixed-case name like "Amt" then you have to refer to it with quotes and exactly the same casing everywhere, which is a bit of a pain and easy to get wrong.
If you create it with an unquoted identifier like amt or Amt or AMT (or even quoted uppercase as "AMT") then those would all be stored in the data dictionary in the same form, and you could refer to it without quotes and with any case - select amt, select Amt, select AMT, etc.
But order is a reserved word, as @Joel mentioned, so if you really do (and must) have a table with that name then it would have to be a quoted identifier. I would strongly suggest you call it something else though, like orders.
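The aliased form is portable across engines; here is a quick sketch of it using SQLite from Python (the schema and data mirror the question's example, and a strict > 0 comparison is assumed here so that a zero amount yields NULL, matching the sample output above):

```python
import sqlite3

# Illustrative schema; "orders" avoids the reserved word ORDER,
# as suggested above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (dt TEXT, order_id TEXT, amt INTEGER, salesman TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)",
                 [("01/01/2022", "A123", 100, "Peter"),
                  ("01/01/2022", "A124", 0, "Sam")])

# The o.* form combined with extra expressions is accepted by Oracle
# as well as most other engines. A strict "> 0" is assumed so that a
# zero amount produces NULL, as in the question's sample result.
rows = conn.execute("""
    SELECT o.*,
           CASE WHEN o.amt > 0 THEN o.order_id END AS order_with_amt
    FROM orders o
""").fetchall()
for row in rows:
    print(row)
```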

I see five things.
The two databases are different dialects of SQL, so of course some features work differently between them, even when both support the feature.
Watch the quoting: in Oracle, single quotes ('Amt') make a string literal, which would compare the string 'Amt' to the value 0 instead of the value from a column named Amt, while double quotes make a case-sensitive identifier.
ORDER is a reserved word, and therefore you need to take extra steps when using it as a table name. For SQL Server, this is square brackets ([Order]). For Oracle, it's double quotes ("Order").
Oracle is case sensitive about these quoted names (SQL Server is not; it doesn't care).
SELECT * is poor practice in the first place. I know many of us often use it as a placeholder while building a complex query, but we should always fill in real column names once the query is ready for use.

Related

What is returned when I count nulls?

IF I have a table:
ID | field
---+------
1  |
2  |
3  |
All the field records are null. I want to know what is returned when I query:
SELECT Count(field)
FROM table
EDIT
I want to know what to expect on various implementations that use SQL. Since I don't have access to other implementations besides Access, I cannot check myself, but it is important for me to know. The COUNT query is just an example to show how an implementation treats NULL values. Please don't show me any workarounds for counting or ignoring NULLs - just answer what will happen when I run the query. Thank you.
What you can do is use a conditional SUM:
SELECT
SUM(CASE WHEN field IS NULL THEN 1 ELSE 0 END) AS numNULL,
SUM(CASE WHEN field IS NULL THEN 0 ELSE 1 END) AS numNOT_NULL
FROM table
EDIT
Sorry if I misunderstood your question, but your comment is very different from your original question.
http://www.w3schools.com/sql/sql_func_count.asp
SQL COUNT(column_name) Syntax The COUNT(column_name) function returns
the number of values (NULL values will not be counted) of the
specified column:
On every platform, COUNT(field) here should return 0.
You can use SQL Fiddle to test your query in different databases (MySQL, SQLite, SQL Server, PostgreSQL, Oracle). Use the "Text to DDL" function to create the table easily. Note that the site sometimes times out, so you may need to try again later.
Here is MySQL: http://sqlfiddle.com/#!9/413ea7/1
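The behavior is easy to check with SQLite from Python; COUNT(column) skips NULLs on every mainstream engine, while COUNT(*) counts rows, and the conditional SUM above gives the full breakdown:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, field TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, None), (2, None), (3, None)])

# COUNT(column) ignores NULLs; COUNT(*) counts rows.
count_field, count_star = conn.execute(
    "SELECT COUNT(field), COUNT(*) FROM t").fetchone()
print(count_field, count_star)  # 0 3

# The conditional-SUM breakdown from the answer above.
num_null, num_not_null = conn.execute("""
    SELECT SUM(CASE WHEN field IS NULL THEN 1 ELSE 0 END),
           SUM(CASE WHEN field IS NULL THEN 0 ELSE 1 END)
    FROM t
""").fetchone()
print(num_null, num_not_null)  # 3 0
```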

Combining concatenation with ORDER BY

I have trouble combining concatenation with ORDER BY in PostgreSQL (9.1.9).
Let's say, I have a table borders with 3 fields:
   Table "borders"
  Column  |         Type         | Modifiers
----------+----------------------+-----------
 country1 | character varying(4) | not null
 country2 | character varying(4) | not null
 length   | numeric              |
The first two fields are codes of the countries and the third one is the length of the border between those countries.
The primary key is defined on the first two fields.
I need to compose a select of a column that would have unique values for the whole table, in addition this column should be selected in decreasing order.
For this I concatenate the key fields with a separator character; otherwise two different rows might give the same result, like (AB, C) and (A, BC).
So I run the following query:
select country1||'_'||country2 from borders order by 1;
However, in the result I see that the '_' character is omitted from the sorting.
The results looks like this:
?column?
----------
A_CH
A_CZ
A_D
AFG_IR
AFG_PK
AFG_TAD
AFG_TJ
AFG_TM
AFG_UZB
A_FL
A_H
A_I
.
.
You can see that the result is sorted as if '_' didn't exist in the strings.
If I use a letter (say 'x') as a separator, the order is correct. But I must use some special character that doesn't appear in the country1 and country2 fields, to avoid collisions.
What should I do to make the '_' character be taken into account during the sorting?
EDIT
It turned out that the concatenation has nothing to do with the problem. The problem is that the order by simply ignores '_' character.
select country1 || '_' || country2 collate "C" as a
from borders
order by 1
sql fiddle demo
Notes according to discussion in comments:
1.) COLLATE "C" applies in the ORDER BY clause as long as it references the expression in the SELECT clause by positional parameter or alias. If you repeat the expression in ORDER BY you also need to repeat the COLLATE clause if you want to affect the sort order accordingly.
sql fiddle demo
2.) In collations where _ does not influence the sort order, it is more efficient to use fog's query, even more so because that one makes use of the existing index (primary key is defined on the first two fields).
However, if _ has an influence, one needs to sort on the combined expression:
sql fiddle demo
Query performance (tested in Postgres 9.2):
sql fiddle demo
PostgreSQL Collation Support in the manual.
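Why COLLATE "C" fixes it: a byte-wise collation compares code points, and '_' (U+005F) sorts after the uppercase letters A-Z (U+0041-U+005A), unlike linguistic collations that may skip punctuation. Python's default string sort compares code points the same way, which makes the effect easy to see:

```python
# Python compares strings by code point, like COLLATE "C".
# '_' is U+005F, which sorts after uppercase A-Z, so AFG_IR
# comes before A_CH under a byte-wise collation - unlike the
# linguistic sort shown in the question, which ignored '_'.
codes = ["A_CH", "A_CZ", "AFG_IR", "A_D", "AFG_PK"]
result = sorted(codes)
print(result)  # ['AFG_IR', 'AFG_PK', 'A_CH', 'A_CZ', 'A_D']
```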
Just order by the two columns:
SELECT country1||'_'||country2 FROM borders ORDER BY country1, country2;
Unless you use aggregates or window functions, PostgreSQL allows ordering by columns even if you don't include them in the SELECT list.
As suggested in another answer you can also change the collation of the combined column but, if you can, sorting on plain columns is faster, especially if you have an index on them.
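A sketch of this approach using SQLite from Python (the border lengths are made up for illustration): ordering by the two key columns gives a stable, collation-independent order even though only the concatenation appears in the SELECT list.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE borders (country1 TEXT, country2 TEXT, length NUMERIC)")
# Illustrative data; the lengths are invented.
conn.executemany("INSERT INTO borders VALUES (?, ?, ?)",
                 [("AFG", "IR", 936), ("A", "CH", 164), ("A", "CZ", 362)])

# Order by the key columns themselves; they need not be selected.
rows = [r[0] for r in conn.execute("""
    SELECT country1 || '_' || country2
    FROM borders
    ORDER BY country1, country2
""")]
print(rows)  # ['A_CH', 'A_CZ', 'AFG_IR']
```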
What happens when you do the following?
select country1||'_'||country2 from borders order by country1||'_'||country2
As far as I know, ORDER BY 1 does an ordinal sort on the first item in the select list; I didn't think it would work on concatenated columns. Granted, I'm speaking from SQL Server knowledge, so let me know if I'm way off base.
Edited: OK, I just saw Parado's post as I posted mine. Maybe you could create a view from this query (give the column a name) and then query the view, ordering by that column? Or do the following:
select country_group from (
select country1||'_'||country2 as country_group from borders
) a
order by country_group

Find Top 1 best matching string in SQL server

I have a table 'MyTable' which holds some business logic. It has a column called Expression which contains a string built from other columns.
My query is
Select Value from MyTable where #Parameters_Built like Expression
The variable #Parameters_Built is built from the input parameters by concatenating them all together.
In my current scenario,
#Parameters_Built='1|2|Computer IT/Game Design & Dev (BS)|0|1011A|1|0|'
Below are the expressions
---------------------
%%|%%|%%|0|%%|%%|0|
---------------------
1|2|%%|0|%%|%%|0|
---------------------
1|%%|%%|0|%%|%%|0|
---------------------
So my query above matches all three rows, but it should return only the second row (the maximum match).
I don't just need a fix for this specific scenario - it's only an example. I need a general solution for choosing the best match. Any idea?
Try:
Select top 1 * from MyTable
where #Parameters_Built like Expression
order by len(Expression)-len(replace(Expression,'%',''))
- this orders the results by the number of % wildcard characters in Expression, fewest first, so the most specific matching pattern comes out on top.
SQLFiddle here.
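The same technique can be sketched in SQLite from Python, since its LIKE and replace()/length() functions behave the same way here (table and data taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (Expression TEXT)")
conn.executemany("INSERT INTO MyTable VALUES (?)", [
    ("%%|%%|%%|0|%%|%%|0|",),
    ("1|2|%%|0|%%|%%|0|",),
    ("1|%%|%%|0|%%|%%|0|",),
])

params = "1|2|Computer IT/Game Design & Dev (BS)|0|1011A|1|0|"

# length(e) - length(replace(e, '%', '')) counts the % wildcards;
# ordering ascending puts the most specific matching pattern first.
best = conn.execute("""
    SELECT Expression FROM MyTable
    WHERE ? LIKE Expression
    ORDER BY length(Expression) - length(replace(Expression, '%', ''))
    LIMIT 1
""", (params,)).fetchone()[0]
print(best)  # 1|2|%%|0|%%|%%|0|
```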

SQL with LIMIT1 returns all records

I made a mistake and entered:
SELECT * FROM table LIMIT1
instead of
SELECT * FROM table LIMIT 1 (note the space between LIMIT and 1)
in the MySQL CLI. I expected to receive some kind of parse error, but I was surprised, because the query returned all of the records in the table. My first thought was "stupid MySQL, I bet this will return an error in PostgreSQL" - but PostgreSQL also returned all records. Then I tested it with SQLite, with the same result.
After some digging, I realized that it doesn't matter what I enter after the table name, as long as there are no WHERE/ORDER BY/GROUP BY clauses:
SELECT * FROM table SOMETHING -- works and returns all records in table
SELECT * FROM table WHERE true SOMETHING -- doesn't work - returns parse error
I guess that this is a standardized behavior, but I couldn't find any explanation why's that. Any ideas?
Your first query is equivalent to this query using a table alias:
SELECT * FROM yourtable AS LIMIT1
The AS keyword is optional. The table alias allows you to refer to columns of that table using the alias LIMIT1.foo rather than the original table name. It can be useful to use aliases if you wish to give tables a shorter or a more descriptive alias within a query. It is necessary to use aliases if you join a table to itself.
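This is easy to reproduce with SQLite from Python, which parses LIMIT1 as a table alias exactly as described:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

# LIMIT1 parses as a table alias (AS is optional), so all rows come back...
all_rows = conn.execute("SELECT * FROM t LIMIT1").fetchall()
# ...while LIMIT 1 is the actual clause and returns a single row.
one_row = conn.execute("SELECT * FROM t LIMIT 1").fetchall()
print(len(all_rows), len(one_row))  # 3 1
```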
This ambiguity is why some people want the DB engine to force the usage of the keyword AS for alias names:
http://beyondrelational.com/modules/2/blogs/70/posts/10814/should-alias-names-be-preceded-by-as.aspx
SELECT * FROM table LIMIT1;
Here LIMIT1 is taken as an alias, because LIMIT1 is not a reserved word in SQL.
Anything after the table name that is not a reserved keyword is treated as a table alias.
SELECT * FROM table LIMIT 1;
When you use LIMIT just after the table name, SQL recognizes it as a reserved keyword and applies its usual behavior. If you want to use reserved words as identifiers, put them in backquotes (in MySQL), like:
SELECT * FROM table `LIMIT`;
OR
SELECT * FROM table `LIMIT 1`;
Now any word wrapped in backquotes is treated as a user-defined identifier.
We commonly make this mistake with keywords such as date, timestamp, and limit by using them as column names.

Oracle - Select where field has lowercase characters

I have a table, users, in an Oracle 9.2.0.6 database. Two of the fields are varchar - last_name and first_name.
When rows are inserted into this table, the first name and last name fields are supposed to be in all upper case, but somehow some values in these two fields are mixed case.
I want to run a query that will show me all of the rows in the table that have first or last names with lowercase characters in it.
I searched the net and found REGEXP_LIKE, but that must be for newer versions of Oracle - it doesn't seem to work for me.
Another thing I tried was to translate "abcde...z" to "$$$$$...$" and then search for a '$' in the field, but surely there is a better way?
Thanks in advance!
How about this:
select id, first, last from mytable
where first != upper(first) or last != upper(last);
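This comparison can be sketched with SQLite from Python, whose default BINARY collation makes != case-sensitive, as Oracle's default comparison is (column and table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER, first TEXT, last TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?, ?)",
                 [(1, "bob", "johnson"),
                  (2, "Bob", "Johnson"),
                  (3, "BOB", "JOHNSON")])

# Rows where either name differs from its uppercased form,
# i.e. contains at least one lowercase letter.
rows = conn.execute("""
    SELECT id, first, last FROM mytable
    WHERE first != upper(first) OR last != upper(last)
""").fetchall()
print(rows)  # [(1, 'bob', 'johnson'), (2, 'Bob', 'Johnson')]
```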
I think BQ's SQL and Justin's second SQL will work, because in this scenario:
first_name last_name
---------- ---------
bob johnson
Bob Johnson
BOB JOHNSON
I want my query to return the first 2 rows.
I just want to make sure this will be an efficient query, though - my table has 500 million rows in it.
When you say upper(first_name) != first_name, does "first_name" always pertain to the current row that Oracle is looking at? I was afraid to use this method at first because I thought I would end up joining the table to itself, but the way you both wrote the SQL, the equality check operates on a row-by-row basis, which works for me.
If you are on Oracle 10g or higher, you can use the example below. Consider that you need to find the rows where any letter in a column is lowercase.
Column1
.......
MISS
miss
MiSS
In the above example, if you need to find the values miss and MiSS, you could use this query:
SELECT * FROM YOU_TABLE WHERE REGEXP_LIKE(COLUMN1,'[a-z]');
Try this:
SELECT * FROM YOU_TABLE WHERE REGEXP_LIKE(COLUMN1,'[a-z]','c'); => miss, MiSS (rows containing a lowercase letter)
SELECT * FROM YOU_TABLE WHERE REGEXP_LIKE(COLUMN1,'[A-Z]','c'); => MISS, MiSS (rows containing an uppercase letter)
The 'c' match parameter forces case-sensitive matching.
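The same character-class logic can be checked with Python's re module, which is case-sensitive by default, like REGEXP_LIKE with the 'c' match parameter:

```python
import re

values = ["MISS", "miss", "MiSS"]

# Case-sensitive character classes, like REGEXP_LIKE(col, '[a-z]', 'c').
has_lower = [v for v in values if re.search(r"[a-z]", v)]
has_upper = [v for v in values if re.search(r"[A-Z]", v)]
print(has_lower)  # ['miss', 'MiSS']
print(has_upper)  # ['MISS', 'MiSS']
```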
SELECT *
FROM my_table
WHERE first_name IN (SELECT first_name
                     FROM my_table
                     MINUS
                     SELECT UPPER(first_name)
                     FROM my_table)
For SQL Server, where the DB collation setting is case-insensitive, use the following:
SELECT * FROM tbl_user WHERE LEFT(username,1) COLLATE Latin1_General_CS_AI <> UPPER(LEFT(username,1))