Calculate non-standard auto-increment automatically - sql

The table I am working with does not have a standard auto-increment field to use as a primary key, so I need to come up with a way to automatically calculate the value that should be used in the field.
My first thought was to create a trigger to happen AFTER INSERT, however, as far as I can tell, there's no easy way to reference the row that was just inserted. I could do something like
UPDATE `table` SET `reference_number` = (SELECT ....) WHERE `reference_number` IS NULL
but because reference_number is a PRIMARY KEY, it cannot be null. (Does that mean it would be an empty string ''?)
Is there a better way to do this?

Use a BEFORE INSERT trigger instead of AFTER INSERT: inside it, the NEW alias refers to the row about to be inserted, so you can set the key before the row is written (MySQL syntax; the DELIMITER lines are only needed in the mysql command-line client):
DELIMITER $$
CREATE TRIGGER mkuuid BEFORE INSERT ON SomeTable
FOR EACH ROW BEGIN
SET NEW.primary_key = UUID_SHORT();
END$$
DELIMITER ;

Related

Primary Key conflicts on insertion of new records

In a database application, I want to insert, update and delete records in a database table.
Table is as below:
In this table, Ga1_ID is Primary Key.
Suppose I insert 5 records, as shown currently.
In a second attempt, if I want to insert 5 other records and any of these new records contains a primary key value which is already present in the table, it shows an error. That's fine.
But when I insert the new 5 records... how can I verify that these new records' primary key values are not already present? I mean, how do I check the primary key values that are already there and then insert the new records?
What is the best approach to manage this sort of situation?
Use the following query with a SqlDataAdapter:
da = new SqlDataAdapter("SELECT Ga1_ID FROM table WHERE Ga1_ID = @pkVal", conn);
// pass the value to check as the @pkVal parameter before filling
da.SelectCommand.Parameters.AddWithValue("@pkVal", pkValue);
DataSet ds = new DataSet();
da.Fill(ds);
if (ds.Tables[0].Rows.Count > 0) // if the row count is > 0 the record already exists
{
    MessageBox.Show("Primary key present");
}
Hope it's helpful.
Do not check existing records in advance, i.e. do not SELECT and then INSERT. A better (and pretty common) approach is to try to INSERT and handle exceptions, in particular, catch a primary key violation if any and handle it.
Do the insert in a try/catch block, with different handling in case of a primary key violation exception and other sql exception types.
If there was no exception, then the job's done; the record was inserted.
If you caught a primary key violation exception, then handle it appropriately (your post does not specify what you want to do in this case, and it's completely up to you)
If you want to perform 5 inserts at once and want to make sure they all succeed or else roll back if any of them failed, then do the inserts within a transaction.
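The same idea can live either in application code or on the server; here is a minimal T-SQL sketch of it (assuming SQL Server and hypothetical parameter/column names like the ones used elsewhere in this thread; 2627 and 2601 are the duplicate key error numbers):
BEGIN TRY
    INSERT INTO tableName (GA1_ID, Ga1_docid, GA1_fieldNAme, Ga1_fieldValue)
    VALUES (@newId, @newdocID, @newName, @newVal);
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() IN (2627, 2601)
        PRINT 'Primary key already present';  -- handle the duplicate however you need
    ELSE
        RAISERROR('Insert failed for another reason', 16, 1);  -- other SQL errors
END CATCH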
You can do a lookup first before inserting.
IF EXISTS (SELECT * FROM tableName WHERE GA1_id = @newId)
BEGIN
UPDATE tableName SET Ga1_docid = @newdocID, GA1_fieldNAme = @newName, Ga1_fieldValue = @newVal WHERE GA1_id = @newId
END
ELSE
BEGIN
INSERT INTO tableName (GA1_ID, Ga1_docid, GA1_fieldNAme, Ga1_fieldValue) VALUES (@newId, @newdocID, @newName, @newVal)
END
If you're using SQL Server 2012, use a sequence object - CREATE SEQUENCE.
This way you can get the next value using NEXT VALUE FOR.
With an older SQL Server version, you need to create the primary key field as an IDENTITY field so it increments automatically; the SCOPE_IDENTITY function then returns the last identity value generated in your scope if you need it after the insert.
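A sketch of the sequence approach (SQL Server 2012 or newer; the sequence name and columns here are assumptions, not from the question):
CREATE SEQUENCE Ga1Seq AS int START WITH 1 INCREMENT BY 1;
INSERT INTO tableName (GA1_ID, Ga1_docid)
VALUES (NEXT VALUE FOR Ga1Seq, @newdocID);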
Normally, you would like to have a surrogate key, which is generally an identity column that will automatically increment when you are inserting rows, so that you don't have to care about knowing which id already exists.
However, if you have to manually insert the id, there are a few alternatives for that, and knowing which SQL database you are using would help, but in most SQL implementations you should be able to do something like:
IF NOT EXISTS
IF NOT EXISTS(
SELECT 1
FROM your_table
WHERE Ga1_ID = 1
)
INSERT INTO ...
SELECT WHERE NOT EXISTS
INSERT INTO your_table (col_1, col_2)
SELECT col_1, col_2
FROM (
SELECT 1 AS col_1, 2 AS col_2
UNION ALL
SELECT 3, 4
) q
WHERE NOT EXISTS (
SELECT 1
FROM your_table
WHERE col_1 = q.col_1
)
For MS SQL Server, you can also look at the MERGE statement and for MySQL, you can use the INSERT IGNORE statement.
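For instance, a rough sketch of both (the values are placeholders): in MySQL, INSERT IGNORE silently skips rows whose key already exists:
INSERT IGNORE INTO your_table (col_1, col_2)
VALUES (1, 2), (3, 4);
and in SQL Server, MERGE can insert only the keys that are not there yet:
MERGE your_table AS t
USING (VALUES (1, 2)) AS s (col_1, col_2)
ON t.col_1 = s.col_1
WHEN NOT MATCHED THEN
INSERT (col_1, col_2) VALUES (s.col_1, s.col_2);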

SQL simplest way to update an INT identity column

I have a column called fixedID, and its current value is 2. I want to update this value to be 42. What is the best way to do this? I would like to do this modification as simply as possible. However, if I could do this while doing a select-into insert, that would be fantastic also.
update tblFixedId
set FixedId = (FixedId + 400)
Is it possible to change the value here?
Select * INTO mynewTable from myOldertable
Identity columns are not updatable in SQL Server.
The only way of doing this as an actual UPDATE rather than a DELETE ... INSERT would be to use ALTER TABLE ... SWITCH to mark the column as no longer an IDENTITY column, do the UPDATE, then use ALTER TABLE ... SWITCH again to re-mark the column as IDENTITY (for example code for the first step, see the workarounds on this Connect item; for the second, see here).
Note that in the common scenario that the identity column is the clustered index key the Update will likely be implemented as an INSERT ... DELETE anyway.
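A rough sketch of that SWITCH workaround (SQL Server; the staging table must match the original's schema and indexes, and the names here are assumptions based on the question):
-- staging table: same schema, but the column is not an IDENTITY
CREATE TABLE dbo.tblFixedId_stage (FixedId int NOT NULL PRIMARY KEY);
ALTER TABLE dbo.tblFixedId SWITCH TO dbo.tblFixedId_stage;  -- metadata-only move of all rows
UPDATE dbo.tblFixedId_stage SET FixedId = FixedId + 400;    -- the column is now updatable
ALTER TABLE dbo.tblFixedId_stage SWITCH TO dbo.tblFixedId;  -- move the rows back
DROP TABLE dbo.tblFixedId_stage;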

Intervals: How can I make sure there is just one row with a null value in a timestamp column in a table?

I have a table with a column which contains a 'valid until' Date and I want to make sure that this can only be set to null in a single row within the table. Is there an easy way to do this?
My table looks like this (postgres):
CREATE TABLE 123.myTable(
some_id integer NOT NULL,
valid_from timestamp without time zone NOT NULL DEFAULT now(),
valid_until timestamp without time zone,
someString character varying)
some_id and valid_from are my PK. I want to prevent anyone from entering a row with a null value in the valid_until column if there is already a row with null for this key.
Thank you
In PostgreSQL, you have two basic approaches.
Use 'infinity' instead of null. Then your unique constraint works as expected. Or if you cannot do that:
CREATE UNIQUE INDEX null_valid_until ON myTable(some_id) WHERE valid_until IS NULL
I have used both approaches. I find usually the first approach is cleaner, and it allows you to use range types and exclusion constraints in newer versions of PostgreSQL better (to ensure no two time ranges overlap for a given some_id), but the second approach is often useful where the first cannot be done.
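A sketch of the first approach against the question's table (any existing NULLs have to be converted first):
UPDATE myTable SET valid_until = 'infinity' WHERE valid_until IS NULL;
ALTER TABLE myTable
    ALTER COLUMN valid_until SET DEFAULT 'infinity',
    ALTER COLUMN valid_until SET NOT NULL;
-- with no NULLs left, a plain unique constraint rejects a second "open" row per some_id
ALTER TABLE myTable ADD CONSTRAINT one_open_row UNIQUE (some_id, valid_until);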
You can't have null in a primary key column (that's certainly true in SQL Server, and standard SQL more generally). The easiest way around this I can think of is to set the date time to the minimum value, and then add a unique constraint on it, or set it to be the primary key.
I suppose another way would be to set up a trigger to check the other values in the table to see if another entry is null, and if there is one, don't allow the insert.
As Kevin said in his answer, you can set up a database trigger to stop someone from inserting more than one row where the valid until date is NULL.
The SQL statement that checks for this condition is:
SELECT COUNT(*)
FROM myTable
WHERE valid_until IS NULL;
If the count is greater than 1, then your table has a problem.
The process that adds a row to this table has to perform the following steps (a sketch follows the list):
Find the row where the valid until value is NULL
Update the valid until value to the current date, or some other meaningful date
Insert the new row with the valid until value set to NULL
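A sketch of those steps (PostgreSQL, using the question's table; the literal values are placeholders), ideally wrapped in one transaction:
BEGIN;
UPDATE myTable SET valid_until = now()
WHERE some_id = 1 AND valid_until IS NULL;  -- close the currently open row
INSERT INTO myTable (some_id, valid_from, valid_until, someString)
VALUES (1, now(), NULL, 'new version');     -- the new open row
COMMIT;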
I'm assuming you are storing effective-dated records and are also using a valid-from date.
If so, you could use CRUD stored procedures to enforce this compliance, e.g. the insert procedure closes off any null valid-until dates before inserting a new record with a null valid-until date.
You probably need other stored procedure validation to avoid overlapping records and to allow deleting and editing records. It may be more efficient (in terms of where clauses / faster queries) to use a date far in the future rather than using null.
I know only Oracle in sufficient detail, but the same might work in other databases:
Create another column which always contains a fixed value (say '0') and include this column in your unique key.
Don't use NULL but a specific very high or low value. In many cases this is actually easier to use than a NULL value.
Make a function-based unique key on a function converting the date, including the null value, to some other value (e.g. a string representation for dates and 'x' for null).
Make a materialized view which gets updated on every change of your main table and put a constraint on that view.
select count(*) cnt from table where valid_until is NULL
might work as the select statement, with a check constraint limiting the cnt value to 0 and 1.
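For the function-based index idea, one workable Oracle variant is to index only the rows where valid_until is null (every other row maps to NULL and stays out of the index), which allows at most one open row per some_id:
CREATE UNIQUE INDEX one_open_per_id
ON myTable (CASE WHEN valid_until IS NULL THEN some_id END);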
I would suggest inserting to that table through an SP and putting your constraint in there, as triggers are quite hidden and will likely be forgotten about. If that's not an option, the following trigger will work:
CREATE TABLE dbo.TESTTRIGGER
(
YourDate Date NULL
)
CREATE TRIGGER DupNullDates
ON dbo.TESTTRIGGER
FOR INSERT, UPDATE
AS
DECLARE @nullCount int
SELECT @nullCount = (SELECT COUNT(*) FROM TESTTRIGGER WHERE YourDate IS NULL)
IF (@nullCount > 1)
BEGIN
RAISERROR('Cannot have Multiple Nulls', 16, 1)
ROLLBACK TRAN
END
GO
Well, if you use MS SQL Server you can just add a unique index on that column; SQL Server treats NULLs as equal for uniqueness, so the index will allow only one NULL. Note that most other RDBMSs (PostgreSQL, Oracle, MySQL) allow multiple NULLs in a unique index, so there you would need one of the other approaches.

Update a primary key without triggering unique key violation

I just ran into this very simple situation where I needed to shift a primary key up by a certain value. Suppose the following table:
CREATE TABLE Test (
Id INTEGER PRIMARY KEY,
Desc TEXT);
Loaded with the following values:
INSERT INTO Test VALUES (0,'one');
INSERT INTO Test VALUES (1,'two');
If there's an attempt at updating the primary key, it will, of course, fail:
UPDATE Test SET Id = Id+1;
Error: column id is not unique
Is there some way to suspend the uniqueness check until after the update query has run?
Find a nice pivot point, and move the data around that pivot. For example, if all your IDs are positive, a good pivot is 0.
When you would normally do
UPDATE Test SET Id = Id+1;
Do this sequence instead
UPDATE Test SET Id = -Id;
UPDATE Test SET Id = -Id +1;
For times, you can find a similar pivot point, but the formula is just a tad harder.
Without understanding the fundamental problem (and yeah, you seem like a victim of code-and-run on this one!), multiplying the ID by one more than the largest value in the table should work.
update test
set id = id * (select max(id) + 1 from test)
However, it's dirty, and really, databases make it hard to change primary keys for a reason...
OK. Second attempt. Try this:
Get the MAX of the key column.
UPDATE table SET key = key + max + 1
UPDATE table SET key = key - max
This will avoid duplicate keys at any time during the update process by moving the window far enough.
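With the question's two rows (so the original MAX(Id) is 1), that sequence would be:
UPDATE Test SET Id = Id + 2;  -- old max + 1: the rows become 2 and 3
UPDATE Test SET Id = Id - 1;  -- subtract the old max: the rows end up as 1 and 2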

How to set a default value for one column in SQL based on another column

I'm working with an old SQL 2000 database and I don't have a whole lot of SQL experience under my belt. When a new row is added to one of my tables I need to assign a default time value based off of a column for work category.
For example, work category A would assign a time value of 1 hour, category B would be 2 hours, etc...
It should only set the value if the user does not manually enter the time it took them to do the work. I thought about doing this with a default constraint but I don't think that will work if the default value has a dependency.
What would be the best way to do this?
I would use a trigger on Insert.
Just check to see if a value has been assigned, and if not, go grab the correct one and use it.
Use a trigger as suggested by Stephen Wrighton:
CREATE TRIGGER [myTable_TriggerName] ON dbo.myTable FOR INSERT
AS
SET NOCOUNT ON
UPDATE myTable
SET
timeValue = '2 hours' -- assuming string values
where ID in (
select ID
from INSERTED
where
timeValue = ''
AND workCategory = 'A'
)
Be sure to write the trigger so it will handle multi-row inserts. Do not process one row at a time in a trigger or assume only one row will be in the inserted table.
If what you are looking for is to define a column definition based on another column you can do something like this:
create table testable
(
c1 int,
c2 datetime default getdate(),
c3 as year(c2)
);
insert into testable (c1) select 1
select * from testable;
Your result set should look like this :
c1 | c2 | c3
1 | 2013-04-03 17:18:43.897 | 2013
As you can see AS (in the column definition) does the trick ;) Hope it helped.
Yeah, trigger.
Naturally, instead of hard-coding the defaults, you'll look them up from a table.
Expanding on this, your new table then becomes the work_category table (id, name, default_hours), and your original table maintains a foreign key to it, transforming from
(id, work_category, hours) to (id, work_category_id, hours).
So, for example, in a TAG table (where tags are applied to posts) if you want to count one tag as another...but default to counting new tags as themselves, you would have a trigger like this:
CREATE TRIGGER [dbo].[TR_Tag_Insert]
ON [dbo].[Tag]
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
UPDATE dbo.Tag
SET [CountAs] = I.[ID]
FROM INSERTED AS I
WHERE I.[CountAs] IS NULL
AND dbo.Tag.ID = I.ID
END
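Applied back to the original question, the lookup version might look roughly like this (the timeValue, work_category_id and ID columns and the work_category table are assumptions, not the poster's actual schema):
CREATE TRIGGER [myTable_DefaultTime] ON dbo.myTable FOR INSERT
AS
SET NOCOUNT ON
UPDATE t
SET t.timeValue = wc.default_hours
FROM dbo.myTable t
JOIN INSERTED i ON i.ID = t.ID
JOIN dbo.work_category wc ON wc.id = i.work_category_id
WHERE i.timeValue = ''  -- only fill in rows where the user left the time blank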
I can think of two ways:
triggers
default value or binding (this should work with a dependency)
Triggers seem well explained here, so I won't elaborate. But generally I try to stay away from triggers for this sort of stuff, as they are more appropriate for other tasks.
"default value or binding" can be achieved by creating a function e.g.
CREATE FUNCTION [dbo].[ComponentContractor_SortOrder] ()
RETURNS float
AS
BEGIN
RETURN (SELECT MAX(SortOrder) + 5 FROM [dbo].[tblTender_ComponentContractor])
END
And then setting the "default value or binding" for that column to ([dbo].ComponentContractor_SortOrder)
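In T-SQL that binding could be created like this (a sketch, assuming the column is called SortOrder):
ALTER TABLE [dbo].[tblTender_ComponentContractor]
ADD CONSTRAINT DF_ComponentContractor_SortOrder
DEFAULT ([dbo].[ComponentContractor_SortOrder]()) FOR SortOrder;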
Generally I steer away from triggers. Almost all DBMSs have some sort of support for constraints.
I find them easier to understand, debug and maintain.