Custom sort in SQL Server

I have a table where the results are sorted using an "ORDER" column, e.g.:
Doc_Id  Doc_Value  Doc_Order
1       aaa        1
12      xxx        5
2       bbb        12
3       ccc        24
My issue is how to set up this order column initially, as efficiently and reusably as possible.
My initial take was to set up a scalar function that could be used as a default value when a new entry is added to the table:
ALTER FUNCTION [dbo].[Documents_Initial_Order]
()
RETURNS int
AS
BEGIN
    -- next order value: the current maximum DOC_ORDER plus one (or 1 for an empty table)
    RETURN (SELECT ISNULL(MAX(DOC_ORDER), 0) + 1 FROM dbo.Documents)
END
When a user wants to swap two documents, I can then easily switch the two order values.
It works nicely, but I now have a second table that I need to set up the same way, and I am quite sure there is a nicer way to do it. Any ideas?

Based on your comment, I think you have a very workable solution. You could make it a little more user-friendly by specifying it as a default:
alter table documents
add constraint constraint_name
default (dbo.documents_initial_order()) for doc_order
As an alternative, you could create an insert trigger that copies the identity value to the doc_order column after an insert:
create trigger Doc_Trigger
on Documents
for insert
as
    -- copy each new row's identity value into its doc_order column
    update d
    set d.doc_order = d.doc_id
    from Documents d
    inner join inserted i on i.doc_id = d.doc_id
Example defining doc_id as an identity column:
create table Documents (
    doc_id int identity primary key,
    doc_order int,
    doc_value ntext
)

It sounds like you want an identity column that you can then override once it gets its initial value. One solution would be to have two columns: one called "InitialOrder", which is an auto-increment identity column, and a second column called doc_order that is initially set to the same value as InitialOrder (perhaps as part of an insert trigger, or a stored procedure if you are doing inserts that way), but which the user is then allowed to edit.
It does require an extra few bytes per record, but it solves your problem, and if it's of any value at all, you would have both the initial document order and the user-reset order available.
Also, I am not sure whether your doc_order needs to be unique, but if not, you can sort return values by doc_order and then InitialOrder to ensure a consistent return sequence.
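A hedged sketch of that two-column setup (the table name Documents2 and the trigger name are made up for illustration; the generated identity is simply copied into the editable column on insert):
create table Documents2 (
    InitialOrder int identity primary key,
    doc_order int null,
    doc_value nvarchar(max)
)
go
create trigger Documents2_SetInitialOrder
on Documents2
after insert
as
begin
    set nocount on;
    -- copy the generated identity into the user-editable order column
    update d
    set d.doc_order = d.InitialOrder
    from Documents2 d
    inner join inserted i on i.InitialOrder = d.InitialOrder
    where d.doc_order is null;
end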

If there is no need to have any control over what that DOC_ORDER value might be, try using an identity column.

Related

How to create an insert trigger based on primary key?

I am stuck on a problem creating a trigger in SQL Server. I have MyTable with the following columns (simplified):
Sn - Identity, not nullable;
SomeId - int, not nullable;
miscellaneous other columns
The Sn column is nothing special, values 1,2...n.
SomeId needs to be 1000 + Sn, i.e. this is what I want the trigger to do on insert.
The problem I am having is that a standard trigger fails if I don't include something for SomeId (the error is that null is not allowed) when the trigger uses AFTER INSERT. Maybe I am meant to use INSTEAD OF INSERT, but I am having trouble getting that to work correctly or finding details about it.
The other factor here: I am not even sure it is possible for what I am trying to do to work. I.e. when a new row is being created and SQL Server generates an Sn (identity column), can a trigger be part of that process and also compute the SomeId value (which needs the Sn value) before inserting?
If not, as a fallback I could either make the SomeId column nullable (not desirable), or always insert 0 into it (and let the trigger fire afterwards to update it), but that would be a bit grim also.
No need for a trigger. Just use a SEQUENCE to generate the ID values instead of an IDENTITY column. Within a single statement, every NEXT VALUE FOR reference to the same sequence returns the same value for a given row, so SomeID stays in step with Sn, e.g.:
create sequence seq_MyTable
    start with 1
    increment by 1
go
CREATE TABLE MyTable
(
    -- both defaults draw from the same sequence, so each row gets SomeID = Sn + 1000
    Sn int not null primary key default (next value for seq_MyTable),
    SomeID int not null unique default (next value for seq_MyTable) + 1000,
    Name VARCHAR(50) NOT NULL
)
insert into MyTable(Name)
values ('A'),('B'),('C')
select * from MyTable

Inserting new rows and generating a new ID based on the current last row

The primary key of my table is an identity ID column. I want to be able to insert a new row and have it know what the last ID currently in the table is and add one to it. I know I can use SCOPE_IDENTITY to get the last inserted ID from my code, but I am worried about people manually adding entries in the database, because they do this quite often. Is there a way I can look at the last ID in the table and not just the last ID my code inserted?
With a SQL Identity column, you don't need to do anything special. This is the default behavior. SQL Server will handle making sure you don't have collisions regardless of where the inserts come from.
@@IDENTITY will pull the latest identity value generated in your session, and SCOPE_IDENTITY() will grab the identity from the current scope.
A scope is a module: a stored procedure, trigger, function, or batch. Therefore, if two statements are in the same stored procedure, function, or batch, they are in the same scope.
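A minimal illustration of the difference (the table and column names here are stand-ins, not from the question):
INSERT INTO dbo.SomeTable (SomeColumn) VALUES ('value')
SELECT SCOPE_IDENTITY() AS LastIdInThisScope,  -- identity generated by this insert, in this scope
       @@IDENTITY       AS LastIdInThisSession -- can differ if, say, a trigger inserted into another identity table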
If you don't want to allow manually entered values in the primary key column, then you can add the IDENTITY property to it along with the primary key constraint.
Example, while creating a table:
CREATE Table t_Temp(RowID Int Primary Key Identity(1,1), Name Varchar(50))
INSERT Into t_Temp values ('UserName')
INSERT Into t_Temp values ('UserName1')
SELECT * from t_Temp
You can query the table and get the next available code in one SQL query:
SELECT COALESCE(MAX(CAST("RowID" AS INT)),0) +1 as 'NextRowID' from <tableName>
The "0" here is a default, meaning if there are no rows found, the first code returned would be (0+1) =1
Generally I have 999 instead of the 0 as I like my RowID/primary key etc. to start at 1000.

Adding a row to an existing table (SQL Server 2005)

I want to add another row in my existing table and I'm a bit hesitant if I'm doing the right thing because it might skew the database. I have my script below and would like to hear your thoughts about it.
I want to add another row for 'Jane' in the table, which will have 'SKATING' in the ACT column.
Table: [Emp_table].[ACT].[LIST_EMP]
My script is:
INSERT INTO [Emp_table].[ACT].[LIST_EMP]
([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO],[ACT],[NAME])
VALUES
('REG','EMP','45233','2016-06-20 00:00:00:00','2','SKATING','JANE')
Will this do the trick?
Your statement looks ok. If the database has a problem with it (for example, due to a foreign key constraint violation), it will reject the statement.
If any of the fields in your table are numeric (and not varchar or char), just remove the quotes around the corresponding values. For example, if emp_cod and line_no are int, insert the following values instead:
('REG','EMP',45233,'2016-06-20 00:00:00:00',2,'SKATING','JANE')
Inserting records into a database has always been the most common reason why I've lost a lot of the hair on my head!
SQL is great when it comes to SELECTs or even UPDATEs, but when it comes to INSERTs it's like someone from another planet came onto the SQL standards committee and managed to get their way of doing it into the final SQL standard!
If your table does not have an automatic primary key that gets generated on every insert, then you have to handle avoiding duplicates yourself.
Start by writing a normal SELECT to see if the record(s) you're going to add don't already exist. But as Robert implied, your table may not have a primary key, because it looks like a LOG table to me. So insert away!
If it does need a unique record every time, then I strongly suggest you create a primary key for the table, either an auto-generated one or a combination of your existing columns.
Assuming the first five columns combined make a unique key, this select will determine whether the data you're inserting already exists...
SELECT COUNT(*) AS FoundRec FROM [Emp_table].[ACT].[LIST_EMP]
WHERE [ENTITY] = wsEntity AND [TYPE] = wsType AND [EMP_COD] = wsEmpCod AND [DATE] = wsDate AND [LINE_NO] = wsLineno
You will have to replace the wsXXX placeholders with direct values or have them DECLAREd earlier in your script.
If you run this alone and receive a value of 1 or more, then the data already exists in your table, at least for those first 5 columns. A true duplicate test would require you to check EVERY column in your table, but this should give you an idea.
To do it all as one INSERT statement, you cannot attach a WHERE clause to a VALUES list; use INSERT ... SELECT instead:
INSERT INTO [Emp_table].[ACT].[LIST_EMP]
       ([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO],[ACT],[NAME])
SELECT 'REG','EMP','45233','2016-06-20 00:00:00:00','2','SKATING','JANE'
WHERE (SELECT COUNT(*) FROM [Emp_table].[ACT].[LIST_EMP]
       WHERE [ENTITY] = wsEntity AND [TYPE] = wsType AND
             [EMP_COD] = wsEmpCod AND [DATE] = wsDate AND
             [LINE_NO] = wsLineno) = 0
Just replace the wsXXX variables with the values you want to insert.
I hope that made sense.
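If you prefer to DECLARE them, here is a hedged example in SQL Server 2005 syntax (the data types are guesses based on the sample values):
DECLARE @wsEntity varchar(10), @wsType varchar(10), @wsEmpCod varchar(10),
        @wsDate datetime, @wsLineno varchar(5)
SELECT @wsEntity = 'REG', @wsType = 'EMP', @wsEmpCod = '45233',
       @wsDate = '2016-06-20', @wsLineno = '2'
-- then run the INSERT ... SELECT above with the wsXXX placeholders replaced by these @ws variables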

Intervals: How can I make sure there is just one row with a null value in a timestamp column in a table?

I have a table with a column which contains a 'valid until' Date and I want to make sure that this can only be set to null in a single row within the table. Is there an easy way to do this?
My table looks like this (postgres):
CREATE TABLE 123.myTable(
some_id integer NOT NULL,
valid_from timestamp without time zone NOT NULL DEFAULT now(),
valid_until timestamp without time zone,
someString character varying)
some_id and valid_from are my PK. I want nobody to be able to enter a row with a null value in the valid_until column if there is already a row with null for this key.
Thank you
In PostgreSQL, you have two basic approaches.
Use 'infinity' instead of null. Then your unique constraint works as expected. Or if you cannot do that:
CREATE UNIQUE INDEX one_null_valid_until ON mytable (some_id) WHERE valid_until IS NULL
I have used both approaches. I usually find the first approach cleaner, and it works better with range types and exclusion constraints in newer versions of PostgreSQL (to ensure no two time ranges overlap for a given some_id), but the second approach is often useful where the first cannot be done.
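For reference, a hedged sketch of the 'infinity' plus exclusion-constraint variant (PostgreSQL; it assumes the btree_gist extension and reuses the column names from the question):
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE mytable (
    some_id     integer   NOT NULL,
    valid_from  timestamp NOT NULL DEFAULT now(),
    valid_until timestamp NOT NULL DEFAULT 'infinity',  -- 'infinity' instead of NULL
    somestring  varchar,
    PRIMARY KEY (some_id, valid_from),
    -- no two validity ranges for the same some_id may overlap,
    -- which also means at most one open-ended ('infinity') row per some_id
    EXCLUDE USING gist (some_id WITH =, tsrange(valid_from, valid_until) WITH &&)
);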
Depending on the database, you can't have null in a primary key (I don't know about all databases, but in sql server you can't). The easiest way around this I can think of is to set the date time to the minimum value, and then add a unique constraint on it, or set it to be the primary key.
I suppose another way would be to set up a trigger to check the other values in the table to see if another entry is null, and if there is one, don't allow the insert.
As Kevin said in his answer, you can set up a database trigger to stop someone from inserting more than one row where the valid until date is NULL.
The SQL statement that checks for this condition is:
SELECT COUNT(*)
FROM mytable
WHERE valid_until IS NULL;
If the count is not equal to 1, then your table has a problem.
The process that adds a row to this table has to perform the following (a sketch follows the steps):
Find the row where the valid until value is NULL
Update the valid until value to the current date, or some other meaningful date
Insert the new row with the valid until value set to NULL
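A minimal sketch of those steps (PostgreSQL syntax; the table name, the some_id value and using now() as the close-off date are assumptions):
BEGIN;

-- 1. close off the currently open row for this key
UPDATE mytable
SET valid_until = now()
WHERE some_id = 1
  AND valid_until IS NULL;

-- 2. insert the new row with valid_until set to NULL
INSERT INTO mytable (some_id, valid_from, valid_until, somestring)
VALUES (1, now(), NULL, 'new value');

COMMIT;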
I'm assuming you are storing effective-dated records and are also using a valid-from date.
If so, you could use CRUD stored procedures to enforce this. E.g. the insert closes off any null valid-until dates before inserting a new record with a null valid-until date.
You probably need other stored procedure validation to avoid overlapping records and to allow deleting and editing records. It may be more efficient (in terms of where clauses / faster queries) to use a date far in the future rather than using null.
I know only Oracle in sufficient detail, but the same might work in other databases:
Create another column which always contains a fixed value (say '0') and include this column in your unique key.
Don't use NULL but a specific very high or low value. In many cases this is actually easier to use than a NULL value.
Make a function-based unique index on a function converting the date, including the null value, to some other value (e.g. a string representation for dates and 'x' for null); a sketch follows below.
Make a materialized view which gets updated on every change to your main table and put a constraint on that view.
select count(*) cnt from mytable where valid_until is NULL
might work as the select statement for the view, with a check constraint limiting the cnt value to 0 and 1.
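Going back to the function-based index option, a sketch in Oracle syntax (the index name is made up and the date format is arbitrary):
CREATE UNIQUE INDEX uq_one_open_row_per_id
    ON mytable (some_id,
                NVL(TO_CHAR(valid_until, 'YYYY-MM-DD HH24:MI:SS'), 'x'));
Because NULL is converted to the fixed string 'x', a second open-ended row for the same some_id violates the unique index.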
I would suggest inserting into that table through an SP and putting your constraint check in there, as triggers are quite hidden and will likely be forgotten about. If that's not an option, the following trigger will work:
CREATE TABLE dbo.TESTTRIGGER
(
    YourDate Date NULL
)
GO
CREATE TRIGGER DupNullDates
ON dbo.TESTTRIGGER
FOR INSERT, UPDATE
AS
    DECLARE @nullCount int
    SELECT @nullCount = (SELECT COUNT(*) FROM TESTTRIGGER WHERE YourDate IS NULL)
    IF (@nullCount > 1)
    BEGIN
        RAISERROR('Cannot have Multiple Nulls', 16, 1)
        ROLLBACK TRAN
    END
GO
Well, if you use MS SQL Server you can just add a unique index on that column; SQL Server allows only one NULL in a unique index. Be aware that most other RDBMSs treat NULLs as distinct in unique indexes and will happily allow several, so this behaviour is SQL Server-specific.
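If you only want to constrain the NULLs (and not force the non-NULL dates to be unique as well), a filtered unique index is a possible alternative; a sketch against the table from the trigger example above (SQL Server 2008+, index name made up):
CREATE UNIQUE NONCLUSTERED INDEX UX_TestTrigger_OneNull
    ON dbo.TESTTRIGGER (YourDate)
    WHERE YourDate IS NULL;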

How to set a default value for one column in SQL based on another column

I'm working with an old SQL 2000 database and I don't have a whole lot of SQL experience under my belt. When a new row is added to one of my tables I need to assign a default time value based off of a column for work category.
For example, work category A would assign a time value of 1 hour, category B would be 2 hours, etc...
It should only set the value if the user does not manually enter the time it took them to do the work. I thought about doing this with a default constraint but I don't think that will work if the default value has a dependency.
What would be the best way to do this?
I would use a trigger on Insert.
Just check to see if a value has been assigned, and if not, go grab the correct one and use it.
Use a trigger as suggested by Stephen Wrighton:
CREATE TRIGGER [myTable_TriggerName] ON dbo.myTable FOR INSERT
AS
SET NOCOUNT ON
UPDATE myTable
SET timeValue = '2 hours' -- assuming string values
WHERE ID IN (
    SELECT ID
    FROM INSERTED
    WHERE timeValue = ''
      AND workCategory = 'A'
)
Be sure to write the trigger so it will handle multi-row inserts. Do not process one row at a time in a trigger or assume only one row will be in the inserted table.
If what you are looking for is to define a column based on another column, you can do something like this:
create table testable
(
    c1 int,
    c2 datetime default getdate(),
    c3 as year(c2)
);
insert into testable (c1) select 1
select * from testable;
Your result set should look like this:
c1 | c2                      | c3
1  | 2013-04-03 17:18:43.897 | 2013
As you can see AS (in the column definition) does the trick ;) Hope it helped.
Yeah, trigger.
Naturally, instead of hard-coding the defaults, you'll look them up from a table.
Expanding on this, your new table then becomes the work_category table (id, name, default_hours), and your original table maintains a foreign key to it, transforming from
(id, work_category, hours) to (id, work_category_id, hours).
So, for example, in a TAG table (where tags are applied to posts) if you want to count one tag as another...but default to counting new tags as themselves, you would have a trigger like this:
CREATE TRIGGER [dbo].[TR_Tag_Insert]
ON [dbo].[Tag]
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE dbo.Tag
    SET [CountAs] = I.[ID]
    FROM INSERTED AS I
    WHERE I.[CountAs] IS NULL
      AND dbo.Tag.ID = I.ID
END
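For the work-category case described above, a hedged sketch of the lookup-based version (the table name work and the id/hours/work_category_id/default_hours names are assumptions about your schema):
CREATE TRIGGER TR_Work_DefaultHours
ON dbo.work
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- fill in the category's default time only where the user left it blank
    UPDATE w
    SET w.hours = wc.default_hours
    FROM dbo.work AS w
    INNER JOIN inserted AS i ON i.id = w.id
    INNER JOIN dbo.work_category AS wc ON wc.id = i.work_category_id
    WHERE i.hours IS NULL;
END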
I can think of two ways:
triggers
default value or binding (this should work with a dependency)
Triggers seem well explained here, so I won't elaborate. But generally I try to stay away from triggers for this sort of thing, as they are more appropriate for other tasks.
"default value or binding" can be achieved by creating a function e.g.
CREATE FUNCTION [dbo].[ComponentContractor_SortOrder] ()
RETURNS float
AS
BEGIN
RETURN (SELECT MAX(SortOrder) + 5 FROM [dbo].[tblTender_ComponentContractor])
END
And then setting the "default value or binding" for that column to ([dbo].ComponentContractor_SortOrder)
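If you prefer to attach it in plain T-SQL rather than through the designer, a sketch (the constraint name and the SortOrder column name are assumptions):
ALTER TABLE [dbo].[tblTender_ComponentContractor]
    ADD CONSTRAINT DF_ComponentContractor_SortOrder
    DEFAULT ([dbo].[ComponentContractor_SortOrder]()) FOR SortOrder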
Generally I steer away from triggers. Almost all DBMSs have some sort of support for constraints, and I find constraints easier to understand, debug and maintain.