How do I make a column default to NULL explicitly?
I would like to declare a column in Oracle SQL Developer to default to NULL. I'm aware that NULL will be the default value if I don't define any default value at all. But how do I define NULL as the default if I want to do it explicitly?
-- 1: Does not work.
ALTER TABLE MY_TABLE ADD (
MY_COLUMN TIMESTAMP(6) DEFAULT null
);
-- 2: Does not work.
ALTER TABLE MY_TABLE ADD (
MY_COLUMN TIMESTAMP(6) DEFAULT NULL
);
-- 3: Does not work.
ALTER TABLE MY_TABLE ADD (
MY_COLUMN TIMESTAMP(6) DEFAULT (null)
);
-- 4: This works.
ALTER TABLE MY_TABLE ADD (
MY_COLUMN TIMESTAMP(6)
);
In cases 1-3 the default value ends up being a string ("NULL", "null" or "(null)"), but not an actual NULL value. So, what am I missing here?
Edit:
Cases (a) and (b) correspond to cases 1 and 2: a text value of null or NULL is displayed in SQL Developer. Case (c) corresponds to case 4, where a real (null) value is shown. The screenshots were taken on a table's Columns tab in SQL Developer.
SQL Developer http://s1.postimg.org/fclraa0dp/SQL_Developer.png
Since null, NULL and (null) are all the same thing, I don't understand what the problem is.
It is also not a SQL Developer "problem": Oracle stores the default expression in the system catalog exactly as you wrote it, and SQL Developer merely displays that stored text.
Assume the following statements:
create table my_table (id integer);
alter table my_table add my_column_1 timestamp(6) default NULL;
alter table my_table add my_column_2 timestamp(6) default null;
alter table my_table add my_column_3 timestamp(6) default (null);
Then
select column_id, column_name, data_type, data_default
from user_tab_columns
where table_name = 'MY_TABLE'
order by column_id;
will return the following:
COLUMN_ID | COLUMN_NAME | DATA_TYPE | DATA_DEFAULT
----------+-------------+--------------+-------------
1 | ID | NUMBER |
2 | MY_COLUMN_1 | TIMESTAMP(6) | NULL
3 | MY_COLUMN_2 | TIMESTAMP(6) | null
4 | MY_COLUMN_3 | TIMESTAMP(6) | (null)
When you extract the DDL from the system, you again get exactly what you have written:
select dbms_metadata.get_ddl('TABLE', 'MY_TABLE', user)
from dual;
returns:
CREATE TABLE "TK_HIRAC"."MY_TABLE"
( "ID" NUMBER(*,0),
"MY_COLUMN_1" TIMESTAMP (6) DEFAULT NULL,
"MY_COLUMN_2" TIMESTAMP (6) DEFAULT null,
"MY_COLUMN_3" TIMESTAMP (6) DEFAULT (null)
) SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS NOLOGGING
TABLESPACE "USERS"
Try this:
HUSQVIK#panel_management> CREATE TABLE MY_TABLE (C1 NUMBER NULL);
Table created.
HUSQVIK#panel_management> ALTER TABLE MY_TABLE ADD (
2 MY_COLUMN TIMESTAMP(6) DEFAULT NULL
3 );
Table altered.
HUSQVIK#panel_management>
Works for me.
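To convince yourself that the stored default really evaluates to NULL rather than the string 'NULL', here is a quick sketch against the table above (NVL2 returns its third argument for NULL inputs):
-- Insert a row without MY_COLUMN so the default fires.
INSERT INTO MY_TABLE (C1) VALUES (1);
-- Expect 'is null' here, proving the default is a real NULL.
SELECT C1, NVL2(MY_COLUMN, 'has value', 'is null') AS default_check
FROM MY_TABLE;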
Related
So I am trying to convert a varchar to an int. I started without the numeric type and got an error, probably because of the . in the varchar. I searched online and found that I should add the numeric type. Now I have another error, probably because of the , that is used as the thousands separator. Any suggestions?
I would like to use the ALTER TABLE command if possible, not CAST or anything else, because we have not learned that yet and it's for a school assignment. I have also added a screenshot of the query.
ALTER TABLE table_name
ALTER COLUMN column_name TYPE type USING column_name::type::type,
ALTER COLUMN column_name TYPE type USING column_name::type::type;
You can use a number of ways to convert your text value to an integer (assuming the number in the text field is actually an integer). For example:
REPLACE(price, ',', '')::numeric::int
TO_NUMBER(price, translate(price, '1234567890', '9999999999'))::int
Your alter table statement should look like this:
ALTER TABLE calendar
ALTER COLUMN price TYPE integer USING REPLACE(price , ',', '')::numeric::integer,
ALTER COLUMN adjusted_price TYPE integer USING REPLACE(adjusted_price, ',', '')::numeric::integer;
I've chosen the shorter way to cast, but the TO_NUMBER variant would work as well.
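For completeness, a sketch of the TO_NUMBER variant assembled into the same statement (same column names as above; the translate call builds a '9,999.99'-style format mask from the value itself):
ALTER TABLE calendar
ALTER COLUMN price TYPE integer USING TO_NUMBER(price, translate(price, '1234567890', '9999999999'))::int,
ALTER COLUMN adjusted_price TYPE integer USING TO_NUMBER(adjusted_price, translate(adjusted_price, '1234567890', '9999999999'))::int;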
Use to_number, which can understand group separators:
ALTER TABLE calendar
ALTER price TYPE integer
USING to_number(price, '999,999,999,999.99')::integer,
ALTER adjusted_price TYPE integer
USING to_number(adjusted_price, '999,999,999,999.99')::integer;
My example/test script:
-- █ Dropping and creating the table for test purposes. Don't do this with a table holding production data.
DROP TABLE IF EXISTS calendar;
CREATE TABLE calendar
(
id bigint NOT NULL GENERATED BY DEFAULT AS IDENTITY ( INCREMENT 1 START 100 MINVALUE 1 MAXVALUE 9223372036854775807 CACHE 1 ),
price character varying(10) COLLATE pg_catalog."default" NOT NULL,
adjusted_price character varying(10) COLLATE pg_catalog."default" NOT NULL,
CONSTRAINT pk_calendar_id PRIMARY KEY (id)
);
-- █ For test purposes, creating example data if the table exists.
DO $$ -- DO executes an anonymous code block
BEGIN
IF EXISTS(SELECT * FROM information_schema.tables WHERE table_schema = 'public' AND table_name = 'calendar') THEN
INSERT INTO calendar (price, adjusted_price) VALUES('8,000.00', '8,001.00');
INSERT INTO calendar (price, adjusted_price) VALUES('7,000.00', '7,355.00');
END IF;
END;
$$;
-- █ Alter table columns from varchar to int.
ALTER TABLE calendar
ALTER COLUMN price TYPE int USING SPLIT_PART(REPLACE(price, ',', ''), '.', 1)::int,
ALTER COLUMN adjusted_price TYPE int USING SPLIT_PART(REPLACE(adjusted_price, ',', ''), '.', 1)::int;
-- REPLACE(source, old_text, new_text): the comma is replaced by an empty string, '8,000.00' -> '8000.00'
-- SPLIT_PART(string, delimiter, position): '8000.00' is split into 2 parts ['8000', '00']; we need part 1 -> '8000'
-- ::int: the cast operator :: converts the result from varchar to int.
-- █ Select all columns with new types.
select * from calendar;
Example data:
id  | price      | adjusted_price
----+------------+---------------
100 | "8,000.00" | "8,001.00"
101 | "7,000.00" | "7,355.00"
After altering the table:
id  | price | adjusted_price
----+-------+---------------
100 | 8000  | 8001
101 | 7000  | 7355
References
PostgreSQL SPLIT_PART
PostgreSQL REPLACE
PostgreSQL CAST
PostgreSQL DO
Check if a table exists
I am trying to change the size of a CHAR data type.
create table test1(name char(7));
select * from test1; -- table is empty
then:
alter table test1 modify name char(4);
This changes the CHAR size successfully.
but:
create table test2(name char(7));
insert into test2 values('aaa');
Then I try to change the size of the CHAR data type:
alter table test2 modify name char(4);
But it returns the error:
Error starting at line : 4 in command -
alter table test2 modify name char(4)
Error report -
ORA-01441: cannot decrease column length because some value is too big
01441. 00000 - "cannot decrease column length because some value is too big"
*Cause:
*Action:
How to change the size of a CHAR data type?
A CHAR is a fixed-size string: if you put a shorter string into the column, it is right-padded with spaces. So 'aaa' in a CHAR(7) column is actually stored as 'aaa    ' (7 characters), which is why ORA-01441 prevents shrinking it to 4.
Either use VARCHAR2, or create a second, shorter CHAR column, populate it with a SUBSTR of the larger column, and swap them.
Option 1
CREATE TABLE test1( name CHAR(7) );
INSERT INTO test1 ( name ) VALUES ( '123' );
SELECT name, LENGTH( name ) FROM test1;
Outputs:
NAME | LENGTH(NAME)
-----+-------------
123  | 7
Then, if you do:
ALTER TABLE test1 ADD ( name2 CHAR(4) );
UPDATE test1
SET name2 = SUBSTR( name, 1, 4 );
ALTER TABLE test1 DROP COLUMN name;
ALTER TABLE test1 RENAME COLUMN name2 TO name;
SELECT name, LENGTH( name ) FROM test1;
Outputs:
NAME | LENGTH(NAME)
-----+-------------
123  | 4
Option 2
Or, you can just use VARCHAR2:
CREATE TABLE test2( name VARCHAR2(7) );
INSERT INTO test2 ( name ) VALUES ( '123' );
ALTER TABLE test2 MODIFY ( name VARCHAR2(4) );
Which just works.
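A quick check (sketch) shows why: VARCHAR2 stores only the characters actually inserted, so no value is too long for the new size:
SELECT name, LENGTH( name ) FROM test2;
Outputs:
NAME | LENGTH(NAME)
-----+-------------
123  | 3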
db<>fiddle here
I'm new to postgres (on 9.5) and I can't find this in the docs anywhere.
Basically create a table like this:
CREATE TABLE test (
id serial primary key,
field1 CHARACTER VARYING(50)
);
Then copy it:
create table test_copy (like test);
The table test has these columns:
COLUMN_NAME id field1
DATA_TYPE 4 12
TYPE_NAME serial varchar
COLUMN_SIZE 10 50
IS_NULLABLE NO YES
IS_AUTOINCREMENT YES NO
But test_copy has these:
COLUMN_NAME id field1
DATA_TYPE 4 12
TYPE_NAME int4 varchar
COLUMN_SIZE 10 50
IS_NULLABLE NO YES
IS_AUTOINCREMENT NO NO
Why am I losing serial and autoincrement? How can I make a copy of a table that preserves these?
This is because serial isn't really a data type. It gets "expanded" into an integer column plus a sequence plus a default value.
See the manual for details.
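Roughly speaking, the id serial primary key column from the question expands to something like this sketch:
CREATE SEQUENCE test_id_seq;
CREATE TABLE test (
    id integer NOT NULL DEFAULT nextval('test_id_seq'::regclass) PRIMARY KEY,
    field1 CHARACTER VARYING(50)
);
ALTER SEQUENCE test_id_seq OWNED BY test.id;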
To get the default definition you need to use create table test_copy (like test INCLUDING DEFAULTS).
However, that will then use the same sequence as the original table.
You can see the difference when you display the table definition in psql:
psql (9.5.3)
Type "help" for help.
postgres=> CREATE TABLE test (
postgres(> id serial primary key,
postgres(> field1 CHARACTER VARYING(50)
postgres(> );
CREATE TABLE
postgres=> create table test_copy_no_defaults (like test);
CREATE TABLE
postgres=> create table test_copy (like test including defaults);
CREATE TABLE
postgres=> \d test
Table "public.test"
Column | Type | Modifiers
--------+-----------------------+---------------------------------------------------
id | integer | not null default nextval('test_id_seq'::regclass)
field1 | character varying(50) |
Indexes:
"test_pkey" PRIMARY KEY, btree (id)
postgres=> \d test_copy
Table "public.test_copy"
Column | Type | Modifiers
--------+-----------------------+---------------------------------------------------
id | integer | not null default nextval('test_id_seq'::regclass)
field1 | character varying(50) |
postgres=> \d test_copy_no_defaults
Table "public.test_copy_no_defaults"
Column | Type | Modifiers
--------+-----------------------+-----------
id | integer | not null
field1 | character varying(50) |
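If you want test_copy to stop sharing test_id_seq, one option (a sketch; the new sequence name is my own choice) is to rebind its default to a fresh sequence:
CREATE SEQUENCE test_copy_id_seq OWNED BY test_copy.id;
ALTER TABLE test_copy
    ALTER COLUMN id SET DEFAULT nextval('test_copy_id_seq'::regclass);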
You can try:
create table test_inh () inherits (test);
and then:
alter table test_inh no inherit test;
This should leave the same sequence default in place for you.
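You can verify what the detached table kept by querying the catalog (a sketch):
SELECT column_name, column_default
FROM information_schema.columns
WHERE table_name = 'test_inh' AND column_name = 'id';
-- column_default should read nextval('test_id_seq'::regclass),
-- i.e. it still points at the parent's sequence.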
I've often seen the following syntax, which defines a column in a CREATE/ALTER DDL statement:
ALTER TABLE tbl ADD COLUMN col VARCHAR(20) NOT NULL DEFAULT 'MyDefault'
The question is: since a default value is specified, is it necessary to also specify that the column should not accept NULLs? In other words, doesn't DEFAULT render NOT NULL redundant?
DEFAULT is the value that will be inserted in the absence of an explicit value in an INSERT or UPDATE statement. Let's assume your DDL did not have the NOT NULL constraint:
ALTER TABLE tbl ADD COLUMN col VARCHAR(20) DEFAULT 'MyDefault'
Then you could issue these statements
-- 1. This will insert 'MyDefault' into tbl.col
INSERT INTO tbl (A, B) VALUES (NULL, NULL);
-- 2. This will insert 'MyDefault' into tbl.col
INSERT INTO tbl (A, B, col) VALUES (NULL, NULL, DEFAULT);
-- 3. This will insert 'MyDefault' into tbl.col
INSERT INTO tbl DEFAULT VALUES;
-- 4. This will insert NULL into tbl.col
INSERT INTO tbl (A, B, col) VALUES (NULL, NULL, NULL);
Alternatively, you can also use DEFAULT in UPDATE statements, according to the SQL-1992 standard:
-- 5. This will update 'MyDefault' into tbl.col
UPDATE tbl SET col = DEFAULT;
-- 6. This will update NULL into tbl.col
UPDATE tbl SET col = NULL;
Note that not all databases support all of these SQL-standard syntaxes. Adding the NOT NULL constraint will make statements 4 and 6 fail, while 1-3 and 5 remain valid. So to answer your question: no, they're not redundant.
Even with a default value, you can always override the column data with NULL.
The NOT NULL restriction won't let you update that row to a NULL value after it was created.
My SQL teacher said that if you specify both a DEFAULT value and NOT NULL or NULL, DEFAULT should always be expressed before NOT NULL or NULL.
Like this:
ALTER TABLE tbl ADD COLUMN col VARCHAR(20) DEFAULT 'MyDefault' NOT NULL
ALTER TABLE tbl ADD COLUMN col VARCHAR(20) DEFAULT 'MyDefault' NULL
I would say not.
If the column accepts null values, then there's nothing to stop you from inserting a null value into the field. As far as I'm aware, the default value only applies on creation of a new row.
With NOT NULL set, you can't insert a null value into the field, as it'll throw an error.
Think of it as a fail-safe mechanism to prevent nulls.
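A minimal sketch of that fail-safe (the table name is made up; DEFAULT VALUES is the standard syntax shown in the accepted answer and is not supported everywhere):
CREATE TABLE t_demo (col VARCHAR(20) DEFAULT 'MyDefault' NOT NULL);
-- The default fires when the column is omitted:
INSERT INTO t_demo DEFAULT VALUES;      -- col = 'MyDefault'
-- An explicit NULL is rejected by the constraint:
INSERT INTO t_demo (col) VALUES (NULL); -- error: NOT NULL violation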
In other words, doesn't DEFAULT render NOT NULL redundant?
No, it is not redundant. To extend the accepted answer: for a nullable column col, we can insert NULL even when a DEFAULT is defined:
CREATE TABLE t(id INT PRIMARY KEY, col INT DEFAULT 10);
-- we just inserted NULL into column with DEFAULT
INSERT INTO t(id, col) VALUES(1, NULL);
+-----+------+
| ID | COL |
+-----+------+
| 1 | null |
+-----+------+
Oracle introduced additional syntax for this scenario, DEFAULT ON NULL, which overrides an explicit NULL with the default:
CREATE TABLE t2(id INT PRIMARY KEY, col INT DEFAULT ON NULL 10);
-- same as
--CREATE TABLE t2(id INT PRIMARY KEY, col INT DEFAULT ON NULL 10 NOT NULL);
INSERT INTO t2(id, col) VALUES(1, NULL);
+-----+-----+
| ID | COL |
+-----+-----+
| 1 | 10 |
+-----+-----+
Here we tried to insert NULL but got the default instead.
db<>fiddle demo
ON NULL
If you specify the ON NULL clause, then Oracle Database assigns the DEFAULT column value when a subsequent INSERT statement attempts to assign a value that evaluates to NULL.
When you specify ON NULL, the NOT NULL constraint and NOT DEFERRABLE constraint state are implicitly specified.
In the case of Oracle, since 12c you have DEFAULT ON NULL, which implies a NOT NULL constraint.
ALTER TABLE tbl ADD (col VARCHAR(20) DEFAULT ON NULL 'MyDefault');
ALTER TABLE
ON NULL
If you specify the ON NULL clause, then Oracle Database assigns the DEFAULT column value when a subsequent INSERT statement attempts to assign a value that evaluates to NULL.
When you specify ON NULL, the NOT NULL constraint and NOT DEFERRABLE constraint state are implicitly specified. If you specify an inline constraint that conflicts with NOT NULL and NOT DEFERRABLE, then an error is raised.
Is there any way to get an AUTO_INCREMENT field of an InnoDB table to start counting from 0 instead of 1?
CREATE TABLE `df_mainevent` (
`idDf_MainEvent` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`idDf_MainEvent`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
MySQL documentation:
If a user specifies NULL or 0 for the AUTO_INCREMENT column in an INSERT, InnoDB treats the row as if the value had not been specified and generates a new value for it.
So 0 is a 'special' value which is treated like NULL. Even when you use AUTO_INCREMENT = 0, the initial value will be set to 1.
Beginning with MySQL 5.0.3, InnoDB supports the AUTO_INCREMENT = N table option in CREATE TABLE and ALTER TABLE statements, to set the initial counter value or alter the current counter value. The effect of this option is canceled by a server restart, for reasons discussed earlier in this section.
CREATE TABLE `df_mainevent` (
`idDf_MainEvent` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`idDf_MainEvent`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=latin1;
works with MySQL >= 5.0.3.
EDIT:
Just noticed that MySQL in general does not like auto-increment values equal to 0; that is independent of the storage engine used. MySQL simply uses 1 as the first auto-increment value. So to answer the question: no, that's not possible, but it does not depend on the storage engine.
This works in both InnoDB and MyISAM, and the second insert is a 1 not a 2:
CREATE TABLE ex1 (id INT AUTO_INCREMENT PRIMARY KEY) ENGINE=MyISAM;
SET sql_mode='NO_AUTO_VALUE_ON_ZERO';
INSERT INTO ex1 SET id=0;
INSERT INTO ex1 SET id=NULL;
SELECT * FROM ex1;
+----+
| id |
+----+
| 0 |
| 1 |
+----+
2 rows in set (0.00 sec)
CREATE TABLE ex2 (id INT AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
SET sql_mode='NO_AUTO_VALUE_ON_ZERO';
INSERT INTO ex2 SET id=0;
INSERT INTO ex2 SET id=NULL;
SELECT * FROM ex2;
+----+
| id |
+----+
| 0 |
| 1 |
+----+
2 rows in set (0.00 sec)
Daren Schwenke's technique works. Too bad that the next record inserted will be 2.
For example:
CREATE TABLE IF NOT EXISTS `table_name` (
`ID` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`Name` VARCHAR(100) NOT NULL,
PRIMARY KEY( `ID` )
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=latin1;
INSERT INTO `table_name` (`Name`) VALUES ('Record0?');
UPDATE `table_name` SET `ID`=0 WHERE `ID`=1;
INSERT INTO `table_name` (`Name`) VALUES ('Record1?');
SELECT * FROM `table_name`;
ID Name
0 Record0?
2 Record1?
This isn't a big deal, it's just annoying.
Tim
I have not been able to make auto-increment start at 0, but starting at 1 and then setting the row to 0 via an UPDATE works fine.
I commonly use this trick to detect deletes in a table.
On update of any row, I set that row's last update time.
On deletes, I set the last update time of row 0.
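A sketch of that trick (the table and column names are my own):
-- Row 0 never holds real data; it records the time of the last delete.
UPDATE my_table SET last_update = NOW() WHERE id = 0;
-- A client can then detect deletes since its last sync:
SELECT last_update FROM my_table WHERE id = 0;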