SQL: replace by default

Is it possible to define a constraint on a column that replaces a certain input value with a default value?
I have a table with a type column that should always hold a default value unless the user defines the type of the feature.
When the type is NULL, it is replaced by the default value. Unfortunately, the form handler does not fill in NULL when the type is not defined; it fills in an empty string instead.
How can I program SQL to replace these empty strings with the default value?
Code snip:
CREATE TABLE [dbo].[ACC_Plannen](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [PLANID] [nchar](50) NULL,
    [plantype] [nvarchar](500) NULL CONSTRAINT [DF_ACC_Plannen_plantype] DEFAULT ('Algemeen'),
    -- ... remaining columns ...
)

If you can only handle this in SQL, you can try a few things:
Create a stored procedure that cleans up the input and inserts into/updates your table;
Create an insert/update trigger that fixes the values for you.
Of the two, I'd go with the stored procedure, because you have to call it explicitly, while a trigger does its work implicitly. You might even add a trigger that raises an error on invalid input instead of fixing it.
Ideally you would clean the input before sending it to SQL, though. Then you could just have a NOT NULL DEFAULT constraint on your table.
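If you do handle it in SQL, here is a minimal T-SQL sketch of the stored-procedure route, assuming the ACC_Plannen table above (the procedure name and the trimming logic are illustrative):
CREATE PROCEDURE dbo.InsertPlan
    @PLANID   nchar(50),
    @plantype nvarchar(500)
AS
BEGIN
    SET NOCOUNT ON;
    -- NULLIF maps '' to NULL; COALESCE then substitutes the default value,
    -- because explicitly inserting NULL would bypass the DEFAULT constraint.
    INSERT INTO dbo.ACC_Plannen (PLANID, plantype)
    VALUES (@PLANID,
            COALESCE(NULLIF(LTRIM(RTRIM(@plantype)), N''), N'Algemeen'));
END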

The main question is the following: "Do you want to reject the insert and fix the data at the front end (user/application program)? Or just update the bad data on the back end (database)?"
A check constraint can be used to validate the data and make sure there is no empty string. A violation results in an error.
An AFTER INSERT trigger can be used to rework (update) the empty string into whatever value you want.
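Hedged sketches of both options against the table above (the constraint and trigger names are made up):
-- Option 1: reject empty strings outright.
ALTER TABLE dbo.ACC_Plannen
    ADD CONSTRAINT CK_ACC_Plannen_plantype CHECK (plantype <> N'');

-- Option 2: silently rewrite them after the insert.
CREATE TRIGGER trg_ACC_Plannen_plantype
ON dbo.ACC_Plannen
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE p
    SET    plantype = N'Algemeen'
    FROM   dbo.ACC_Plannen AS p
    JOIN   inserted AS i ON i.ID = p.ID
    WHERE  p.plantype = N'';
END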
Just be cognizant of the use case.
If you are bulk inserting and a check constraint fails, the whole batch is aborted.
If you are inserting a large amount of data, the trigger adds overhead to every insert statement. Maybe an UPDATE statement after a large batch load is warranted in that case.
As always, it depends on your situation.
Happy coding!

Related

Cannot create stored procedure to insert data: type mismatch for serial column

CREATE TABLE test ( id serial primary key, name text );
CREATE OR REPLACE PROCEDURE test_insert_data( "name" text)
LANGUAGE SQL
AS $$
INSERT INTO public.test values("name")
$$;
Error & Hint:
column "id" is of type integer but expression is of type character varying
LINE 4: INSERT INTO public.test values("name")
^
HINT: You will need to rewrite or cast the expression.
I followed this tutorial: https://www.enterprisedb.com/postgres-tutorials/10-examples-postgresql-stored-procedures.
Obviously, I don't need to supply the column id when inserting.
There is no quoting issue, contrary to what the comments suggest.
And the linked tutorial is not incorrect. (But still bad advice.)
The missing target column list is the problem.
This would work:
CREATE OR REPLACE PROCEDURE test_insert_data("name" text)
LANGUAGE sql AS
$proc$
INSERT INTO public.test(name) -- !! target column list !!
VALUES ("name");
$proc$;
(For the record, since "name" is a valid identifier, all double quotes are just noise and can (should) be omitted.)
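A quick usage check (the value is illustrative):
CALL test_insert_data('widget');
SELECT * FROM test;  -- id is filled in from the serial sequence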
If you don't specify the target column(s), Postgres starts to fill in columns from left to right, starting with id in your case - which triggers the reported error message.
(The linked tutorial also provides an ID value, so it does not raise the same exception.)
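A toy example of that left-to-right filling (the table and values are made up):
CREATE TABLE demo (a int, b text);
INSERT INTO demo VALUES (1, 'x');     -- positional: a = 1, b = 'x'
INSERT INTO demo VALUES ('x'::text);  -- fails like the question:
                                      -- column "a" is of type integer
                                      -- but expression is of type text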
Even if it would work without an explicit target column list, it is typically still advisable to add one to persisted INSERT commands. Otherwise, later modifications to the table structure can break your code silently, and with any bad luck in a way you'll only notice much later, like filling in the wrong columns without raising an error.
See:
SQL INSERT without specifying columns. What happens?
Inserting into Postgres within a trigger function
Aside: I would never use "name" as a column name. Not even in a generic tutorial. That's not helpful. Any column name is a "name". Use meaningful identifiers instead.

Sybase ASE: Add NOT NULL column without a DEFAULT fails. Why?

Consider the following empty (as in without rows) table:
CREATE TABLE my_table(
my_column CHAR(10) NOT NULL
);
Trying to add a NOT NULL column without a DEFAULT will fail:
ALTER TABLE my_table ADD my_new_column CHAR(10) NOT NULL;
Error:
*[Code: 4997, SQL State: S1000]
ALTER TABLE my_table failed.
Default clause is required in order to add non-NULL column 'my_new_column'.
But adding the column as NULL and then changing it to NOT NULL works:
ALTER TABLE my_table ADD my_new_column CHAR(10) NULL;
ALTER TABLE my_table MODIFY my_new_column CHAR(10) NOT NULL;
Setting a default and then removing the default will work too:
ALTER TABLE my_table ADD my_new_column CHAR(10) DEFAULT '' NOT NULL;
ALTER TABLE my_table REPLACE my_new_column DEFAULT NULL;
What's the justification for this behavior? What is the database trying to do internally such that adding the column directly fails? I have a feeling it might have something to do with internal versioning, but I can't find anything in this regard.
This is speculation. I am guessing that Sybase is being overly conservative. In general, you cannot add a new NOT NULL column with no default value to a table that has rows. This is true in all databases, because there is no way to populate the existing rows for the new column.
I am guessing that Sybase simply doesn't check whether the table has rows, only whether it exists. Clearly it is not doing that check for the ALTER.
This is only speculation, but I suspect it has to do with the combination of needing both to acquire a lock on the whole table to guarantee continued compliance with the schema, and to re-allocate space for the records.
Allowing a direct add of a NOT NULL column would compromise any existing records if there's no default value. Yes, we know the table is empty. And the database can (eventually) know the table is empty at execution time... but it can't really know the table is empty when the execution plan is compiled, because a row could be added while the plan is being determined.
This means the database would need to generate the worst-possible execution plan, involving a lock on the entire table, for the query to run in a transactionally safe way. Additionally, adding (or removing) a column causes extra work for the database, because it needs to re-allocate pages and rebuild indexes to account for the changed size of individual records.
Put the two together, and it becomes difficult to simply roll back a failed query, because you may have actual pages in different states. For whatever reason, the developers chose not to allow this.
The other options allow the query to simply fail if a bad row gets in the way and would violate the schema, because you're not re-sizing records within pages. It might even be possible to get away with some page and row locks, rather than full table locks.

How can I alter a UDT in HSQLDB?

In HSQLDB v2.3.1 there is a CREATE TYPE clause for defining UDTs. But there appears to be no ALTER TYPE clause, as far as the docs are concerned (and the DB returns an unexpected token error if I try one).
Is it possible to amend/drop a UDT in HSQLDB? What would be the best practice if, for example, I originally created
create type CURRENCY_ID as char(3)
because I decided I was going to use ISO codes, but then I actually decide to store the codes as integers instead? What is the best way to modify the schema in my DB? (This is a synthetic example; obviously I wouldn't use integers in this case.)
I guess I might do
alter table inventory alter column ccy set data type int
drop type CURRENCY_ID
create type CURRENCY_ID as int
alter table inventory alter column ccy set data type CURRENCY_ID
but is there a better way to do this?
After trying various methods, I ended up writing a script to edit the *.script file of the database directly. It's a plain text file with SQL commands that recreates the DB programmatically. In detail:
Open the DB, then SHUTDOWN COMPACT.
Edit the script file: replace the type definition, e.g. create type XXX as int with create type XXX as char(4).
For each table, replace insert into XXX values(i,...) with insert into XXX values('str',...). This was done with a script that had the mappings from the old (int) values to the new (char) values.
In my particular case, I was changing a primary key, so I had to remove the identity directive from the create table statement, and I also had to remove a line of the form alter table XXX alter column YYY restart sequence 123.
Save and close the script file, open the DB, then SHUTDOWN COMPACT again.
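For illustration, the edits might look like this inside the *.script file (the exact layout is assumed, as is the 840 -> 'USD' mapping):
-- before
CREATE TYPE PUBLIC.CURRENCY_ID AS INTEGER
INSERT INTO INVENTORY VALUES(1,840)
-- after
CREATE TYPE PUBLIC.CURRENCY_ID AS CHAR(4)
INSERT INTO INVENTORY VALUES(1,'USD')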
This isn't great, but it worked. Advantages:
Ability to re-define the UDT.
Ability to map the table values programmatically.
The method is generic and can be used for other schema changes besides UDTs.
Disadvantages:
No checking that the schema is consistent (although it does throw errors if it can't read the script).
Dangerous when treating the file as plain text, e.g. what if I have a VARCHAR column with newlines in it? When I parse the script file and write it back, this would need to be escaped.
Not sure if this works with non-memory DBs, i.e. those that don't have only a *.script file when shut down.
Probably not efficient for large DBs. My DB was small, ~1 MB.

Cannot insert duplicate key row in object 'dbo.TitleClient' with unique index 'XAK1TitleClient'

Ever since I cleaned the data in the SQL database I've been getting this issue, whereas on the unclean database the issue does not happen. When I run my (huge) stored procedure, it returns:
General SQL error. Cannot insert duplicate key row in object 'dbo.TitleClient' with unique index 'XAK1TitleClient'. Cannot insert the value NULL into column 'id_title', table 'Database.dbo.TitleCom'; column does not allow null, insert fails.
Is it possible that data I deleted from a table is causing this? Or is that impossible?
Does dbo.TitleClient have an identity column? You might need to run
DBCC CHECKIDENT('dbo.TitleClient')
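For instance, report first and reseed only if the values disagree (standard DBCC CHECKIDENT options):
DBCC CHECKIDENT ('dbo.TitleClient', NORESEED)  -- report current identity value vs. max key
DBCC CHECKIDENT ('dbo.TitleClient', RESEED)    -- reset the seed to the max key value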
I'm guessing that the first message
Cannot insert duplicate key row in object 'dbo.TitleClient' with unique index 'XAK1TitleClient'
is because the seed value is out of sync with the existing table values, and that the second error message
Cannot insert the value NULL into column 'id_title', table 'Database.dbo.TitleCom'; column does not allow null, insert fails.
comes from a failed attempt at inserting the result of SCOPE_IDENTITY() from the first statement.
How cleanly did you "clean" the data?
If some tables still have data, that might be causing a problem.
Especially if you have triggers resulting in further inserts.
For you to investigate further.
Take the body of your stored proc and run it bit by bit.
Eventually, you'll get to the actual statement producing the error.
Of course, if you aren't inserting into dbo.TitleClient at that point, then it's certainly a trigger causing the problem.
Either way, now you can easily check the data inserted earlier in your proc to figure out the root cause.

Modifying a function used as a column default in SQL Server

We're having an issue with a poorly coded SQL function (which works fine in live, but not in our test environment). This function is used to provide a default value in many tables, and trying to ALTER the function returns a "Cannot ALTER ### because it is being referenced by object" error.
Is there any way around this error message? The only way I can think of is to write a script that removes it from every table that has it as a default, alters the function, and re-adds it afterwards.
Since the object is referenced, you cannot modify it. This is what you do (sketched below):
Remove the default constraint from the table/column
Modify the function
Add the default constraint back
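A hedged T-SQL sketch of those three steps, with placeholder table, column, function, and constraint names:
ALTER TABLE dbo.SomeTable DROP CONSTRAINT DF_SomeTable_SomeColumn;
GO
ALTER FUNCTION dbo.fnSomeDefault()
RETURNS int
AS
BEGIN
    RETURN 42;  -- corrected function logic goes here
END;
GO
ALTER TABLE dbo.SomeTable
    ADD CONSTRAINT DF_SomeTable_SomeColumn
    DEFAULT (dbo.fnSomeDefault()) FOR SomeColumn;
GO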
SQL Server will not allow you to modify a function that is bound to the DEFAULT constraint of a column.
Your only option is to remove the constraint before altering the function.