Oracle case-sensitive SQL formatting

I need to format a SQL statement for an Oracle DB, and I don't want to change the case of the identifiers. For example:
CREATE TABLE DPAuditTrailDetail
(
ID NUMBER (19, 0) DEFAULT 0 NOT NULL,
AuditTrail NUMBER (19, 0) DEFAULT 0 NOT NULL,
sysObjectField NUMBER (19, 0) DEFAULT 0 NOT NULL,
OldValue NCLOB DEFAULT NULL ,
NewValue NCLOB DEFAULT '' NOT NULL,
Referenced NUMBER (19, 0) DEFAULT NULL
);
I believe that, to create a table with this table name and these column names in Oracle, I will have to add double quotes ("") around each name. I have a big script and I would like to do this as quickly as possible.
Please suggest a quick way to do it.
Thanks.

Just use the CREATE statement as-is. The table will be created so that all of the following will work just fine:
select AuditTrail from DPAuditTrailDetail where ID=1;
select AUDITTRAIL from DPAUDITTRAILDETAIL where ID=1;
select aUdITtraIL from dpaudittraildetaiL where id=1;
Oracle identifiers are case-insensitive by default (unquoted names are folded to uppercase), and your life (and that of those maintaining your code when you're gone) will be easier if you stick to this default.

If you really have to use case-sensitive table/column names, the only way is to wrap the table/column names in double quotes. But as the commenters said, using case-sensitive names is not good practice.
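The same behavior is easy to check locally; here is a minimal sketch using Python's built-in sqlite3 module (SQLite also treats unquoted identifiers case-insensitively, though it preserves the spelling as written rather than folding it to uppercase the way Oracle does):

```python
import sqlite3

# SQLite, like Oracle, matches unquoted identifiers case-insensitively.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE DPAuditTrailDetail (ID INTEGER, AuditTrail INTEGER)")
conn.execute("INSERT INTO DPAuditTrailDetail VALUES (1, 42)")

# All three spellings from the answer above hit the same table and column.
results = [
    conn.execute(query).fetchone()
    for query in (
        "select AuditTrail from DPAuditTrailDetail where ID=1",
        "select AUDITTRAIL from DPAUDITTRAILDETAIL where ID=1",
        "select aUdITtraIL from dpaudittraildetaiL where id=1",
    )
]
print(results)  # [(42,), (42,), (42,)]
```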

Related

SQL Concatenate column values and store in an extra column

I am using SQL Server 2019 (v15.0.2080.9) and I've got a simple task but somehow I am not able to solve it...
I have a little database with one table containing a first name and a last name column:
CREATE TABLE [dbo].[person]
(
[first_name] [nchar](200) NULL,
[last_name] [nchar](200) NULL,
[display_name] [nchar](400) NULL
) ON [PRIMARY]
GO
and I want to store the combination of first name and last name, with a single space in between, in the third column (yes, I really have to do that...).
So I thought I might use the CONCAT function:
UPDATE [dbo].[person]
SET display_name = CONCAT(first_name, ' ', last_name)
But my display_name column is only showing me the first name... so what's wrong with my SQL?
Kind regards
Sven
Your method should work and does work. The issue, though, is that the data types are nchar() instead of nvarchar(). That means the values are padded with trailing spaces, so the last name does not start until position 202 in the concatenated string.
Just fix the data type.
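The padding effect behind this can be simulated with plain Python strings (the names here are made up; the column widths are the ones from the question):

```python
# nchar(200) stores the value padded with trailing spaces to the full width.
first_name = "Sven".ljust(200)   # 'Sven' followed by 196 spaces
last_name = "Miller".ljust(200)  # hypothetical last name, padded the same way

# What CONCAT(first_name, ' ', last_name) effectively produces:
display_name = first_name + " " + last_name

# The last name does not begin until position 202 (1-based), so a narrow
# result grid appears to show only the first name.
print(display_name.index("Miller") + 1)  # 202
print(len(display_name))                 # 401
```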
In addition, I would suggest that you use a computed column:
alter table person add display_name as (concat(first_name, ' ', last_name));
This ensures that the display_name is always up-to-date -- even when the names change.
Here is a db<>fiddle.
As a note: char() and nchar() are almost never appropriate. The one exception is when you have fixed length strings, such as state or country abbreviations or account codes. In almost all cases, you want varchar() or nvarchar().
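The computed-column suggestion can be sketched with SQLite (3.31 or later) through Python's sqlite3; the syntax is GENERATED ALWAYS AS rather than SQL Server's bare AS, but the effect is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE person (
        first_name   TEXT,
        last_name    TEXT,
        display_name TEXT GENERATED ALWAYS AS (first_name || ' ' || last_name)
    )
""")
conn.execute("INSERT INTO person (first_name, last_name) VALUES ('Sven', 'Miller')")

# The computed column tracks the base columns automatically...
before = conn.execute("SELECT display_name FROM person").fetchone()
print(before)  # ('Sven Miller',)

# ...even after an update, with no extra bookkeeping.
conn.execute("UPDATE person SET last_name = 'Mueller'")
after = conn.execute("SELECT display_name FROM person").fetchone()
print(after)  # ('Sven Mueller',)
```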

How to write a date concisely in SQL?

I have the employee table:
CREATE TABLE employee
(
id integer PRIMARY KEY,
surname character(15),
employed_date date
);
to which I insert the following row:
INSERT INTO employee
VALUES (4, 'Smith', to_date('2015-11-28','YYYY-MM-DD'));
Can I instead simply write this?
INSERT INTO employee
VALUES (4, 'Smith', '2015-11-28');
It works in my PostgreSQL installation. However, I would like to write portable code that works also in other databases, including Oracle, SQL Server, MySQL and SQLite; and with any locales.
Does it also work in these databases? If not, is there perhaps another format that works in these databases and/or is ANSI standard?
Most DBMSes (but not SQL Server & SQLite) support ANSI/Standard SQL date literals with a fixed YYYY-MM-DD format:
date '2020-09-20'
Similar for time and timestamp:
time '12:34:56'
timestamp '2020-09-20 12:34:56.02'
First, you are using a char() type for the surname. This type pads the value with trailing spaces, which is generally inadvisable for names. In general, you want varchar() for names:
CREATE TABLE employee (
id integer PRIMARY KEY,
surname varchar(15),
employed_date date
);
I should add that 15 characters is probably not long enough for last names.
Or perhaps nvarchar(). The above works in all your mentioned databases, although Oracle recommends varchar2().
As for date constants, there are basically two ways to insert them:
date '2020-11-03'
'2020-11-03'
The first is standard SQL, but not all databases support it. Of the databases you mention, only Oracle requires the date keyword. SQL Server and SQLite don't allow it.
The databases that do not allow date will convert the string to a date, if a date is expected.
So, there is no single way to pass in a date constant that works across all the databases you have mentioned; Oracle is the major exception. You can change Oracle's default date format to accept YYYY-MM-DD rather than DD-MMM-RR, but that is rarely done.
You can use the SQL statement below to add a new employee to the employee table, with the date format 'YYYY-MM-DD':
INSERT INTO employee (id, surname, employed_date) VALUES (4, 'Smith', '2015-11-28');
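For SQLite specifically, the plain-string form can be verified from Python's sqlite3 module: the value is stored as text (SQLite has no native DATE storage class), and the fixed YYYY-MM-DD format still compares, sorts, and feeds the date functions correctly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE employee (id INTEGER PRIMARY KEY, surname TEXT, employed_date DATE)"
)
# The plain 'YYYY-MM-DD' string is accepted as-is.
conn.execute("INSERT INTO employee VALUES (4, 'Smith', '2015-11-28')")

# ISO-8601 strings compare correctly as text...
row = conn.execute(
    "SELECT surname FROM employee WHERE employed_date < '2016-01-01'"
).fetchone()
print(row)  # ('Smith',)

# ...and SQLite's date functions understand them.
year = conn.execute("SELECT strftime('%Y', employed_date) FROM employee").fetchone()
print(year)  # ('2015',)
```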

Differentiate Exponents in T-SQL

In SQL Server 2017 (14.0.2)
Consider the following table:
CREATE TABLE expTest
(
someNumbers [NVARCHAR](10) NULL
)
And let's say you populate the table with some values:
INSERT INTO expTest VALUES ('²'), ('2')
Why does the following SELECT return both rows?
SELECT *
FROM expTest
WHERE someNumbers = '2'
Shouldn't nvarchar realize that '²' is unicode, while '2' is a separate value? How (without using the UNICODE() function) could I identify this data as being nonequivalent?
Here is a db<>fiddle. This shows the following:
Your observation is true even when the values are entered as national character set constants.
The "ASCII" versions of the characters are actually different.
The problem goes away with a case-sensitive collation.
I think the exponent is just being treated as a different "case" of the number, so they are considered the same in a case-insensitive collation.
The comparison is what you expect with a case-sensitive collation.
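A rough analogue can be reproduced in Python: byte-for-byte the two characters are different, but Unicode compatibility normalization (NFKC) folds the superscript down to the plain digit, much as a width- and case-insensitive collation does. This illustrates the folding idea only; it is not SQL Server's actual collation algorithm:

```python
import unicodedata

superscript_two = "\u00b2"  # '²' (SUPERSCRIPT TWO)
plain_two = "2"

# A binary comparison sees two distinct characters.
print(superscript_two == plain_two)          # False
print(ord(superscript_two), ord(plain_two))  # 178 50

# Compatibility normalization folds the superscript to the plain digit,
# which is roughly what the case-insensitive collation is doing here.
folded = unicodedata.normalize("NFKC", superscript_two)
print(folded == plain_two)                   # True
```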

SQL Server: why can a field value of [almost] any type be treated as a quoted string in a query/DML?

I discovered, to my surprise, that in SQL Server (2000) a value of almost any field type may be written as a quoted string (in a query or DML statement).
Question:
Is this normal behavior, or an accidentally successful result?
Example:
CREATE TABLE [Test_table] (
[int_field] [int] NULL ,
[float_field] [float] NULL ,
[date_field] [datetime] NULL ,
[id] [int] NOT NULL
) ON [PRIMARY]
GO
update test_table set int_field = 100, float_field = 10.01, date_field = CAST('2013-11-10' AS DATETIME) where id = 1
update test_table set int_field = '200', float_field = '20.02', date_field = '2014-12-10' where id = '2'
select * from test_table where id in ('1', 2) -- WHY '1' DOES WORK!???
Why do I need this?
The idea is to send over 270 parameters to one stored procedure as a single text value (XML, or a custom serialization with delimiters, or like len1+value1+len2+value2+...), then parse out all the desired values and use them in an UPDATE statement. See this SO question.
Q2: Are there any limitations for some types?
Q3: Is this a reliable approach, or is an explicit CAST recommended anyway?
If you check the CAST and CONVERT topic, you'll find a handy table:
You'll note that conversion from char and varchar is supported for every other type, and only a few of them require explicit casts. For some types, there's no obvious way to type a literal of that type, so allowing implicit conversions from a string makes sense.
(But oh, how I wish conversion to datetime required an explicit cast with a format code...)
SQL Server, like most (all?) brands of SQL, automatically attempts to cast things to the correct type. This is pretty standard behavior.
It should be reliable in the above cases. In both the update and select statement, the type that must be converted to is known (from the column definitions of the tables).
However, automatic casting can introduce subtle issues when it is part of a more complex query. Some SQL dialects will have problems with statements like this:
select case when foo=1 then 0 else 'a' end from table
In this case, the result type won't necessarily be something that can accept all types of results, so it could fail when it tries to assign 'a'. Be careful when relying on automatic conversion in complex statements. It is probably better to make it explicit in such cases.
Another potential issue with passing everything in as strings is that there will be an error if you accidentally pass in a non-numeric value.
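SQLite exhibits the same implicit conversion through its type-affinity rules, so the behavior can be sketched with Python's sqlite3 (with the caveat that, unlike SQL Server, a non-STRICT SQLite table silently stores a non-numeric string rather than raising an error):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE test_table (
        int_field   INTEGER,
        float_field REAL,
        id          INTEGER NOT NULL
    )
""")
conn.execute("INSERT INTO test_table (id) VALUES (1), (2)")

# String literals assigned or compared to numeric columns are
# implicitly converted to the column's type.
conn.execute(
    "UPDATE test_table SET int_field = '200', float_field = '20.02' WHERE id = '2'"
)
rows = conn.execute(
    "SELECT int_field, float_field FROM test_table WHERE id IN ('1', 2) ORDER BY id"
).fetchall()
print(rows)  # [(None, None), (200, 20.02)]
```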

Achieving properties of binary and collation at the same time

I have a varchar field in my database which I use for two significantly different things. In one scenario I use it for case-sensitive evaluation, to ensure no duplicates are inserted; to achieve this I've set the comparison to binary. However, I also want to be able to search case-insensitively on the same column's values. Is there any way I can do this without creating a redundant column with a case-insensitive collation instead of binary?
CREATE TABLE t_search (value VARCHAR(50) NOT NULL COLLATE UTF8_BIN PRIMARY KEY);
INSERT
INTO t_search
VALUES ('test');
INSERT
INTO t_search
VALUES ('TEST');
SELECT *
FROM t_search
WHERE value = 'test' COLLATE UTF8_GENERAL_CI;
The second query will return both rows.
Note, however, that anything with COLLATE applied to it has the lowest coercibility.
This means that it is value that will be converted to UTF8_GENERAL_CI for comparison purposes, not the other way round, which means that the index on value will not be used for searching, and the condition in the query will not be sargable.
If you need good performance on case-insensitive searching, you should create an additional column with case-insensitive collation, index it and use in the searches.
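The same trade-off can be sketched in SQLite via Python's sqlite3, where the default BINARY collation plays the role of utf8_bin and NOCASE stands in for utf8_general_ci (NOCASE only folds ASCII, so this is an approximation). The sargability caveat applies there as well:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The default BINARY collation keeps 'test' and 'TEST' distinct,
# so both rows are allowed under the PRIMARY KEY.
conn.execute("CREATE TABLE t_search (value TEXT NOT NULL PRIMARY KEY)")
conn.execute("INSERT INTO t_search VALUES ('test'), ('TEST')")

# A binary comparison finds only the exact-case match...
exact = conn.execute("SELECT value FROM t_search WHERE value = 'test'").fetchall()
print(exact)  # [('test',)]

# ...while overriding the collation in the query finds both rows.
both = conn.execute(
    "SELECT value FROM t_search WHERE value = 'test' COLLATE NOCASE ORDER BY value"
).fetchall()
print(both)  # [('TEST',), ('test',)]
```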
You can use the COLLATE clause to change the collation of a column in a query; see this manual page for extensive examples.