Equivalent of C# 'readonly' for an MS SQL column? - sql

Imagine there is a Price column in the Products table, and the price may change.
I'm fine with it changing but I want to store the original Price value in another column.
Is there any automatic way MS SQL server may do this?
Can I do this using Default Value field?
Do I have to declare a trigger?
Update
I tried to use Price to simplify the question but it looks like this provoked "use separate table" type of answers.
I'm sorry for the confusion I caused.
In the real world, I need to store a foreign key ID, and I'm 100% sure I only need the current and original values.
Update 2
I got a little confused by the different approaches suggested so please let me explain the situation again.
Imaginary Products table has three fields: ID, Price and OriginalPrice.
I want to set OriginalPrice to the Price value on any insert.
Sometimes it is a single product that gets created from code. Sometimes there are thousands of products created by a single insert from a stored procedure so I want to handle these properly as well.
Once OriginalPrice has been set, I never intend to update it.
Hope my question is clearer now.
Thanks for your effort.
Final Update
I want to thank everyone, particularly @gbn, for their help.
Although I posted my own answer, it is largely based on @gbn's answer and his further suggestions. His answer is also more complete, therefore I mark it as correct.

After your update, let's assume you have only old and new values.
Let's ignore the case where the same update happens twice in quick succession because of a client-code bug, and assume you aren't interested in history (see the other answers for that).
You can use a trigger or a stored procedure.
Personally, I'd use a stored proc to provide a basic bit of control. Then no direct UPDATE permissions are needed, which means the column is effectively read-only except via your code.
CREATE PROC etc
...
UPDATE
MyTable
SET
OldPrice = Price,
Price = @NewPrice,
UpdatedBy = (variable or default),
UpdatedWhen = DEFAULT --you have a DEFAULT, right?
WHERE
PKCol = @SomeID
AND --provide some modicum of logic to trap useless updates
Price <> @NewPrice;
A trigger would be similar, but you need a JOIN against the INSERTED and DELETED tables.
What if someone updates OldPrice directly?
UPDATE
T
SET
OldPrice = D.Price
FROM
MyTable T
JOIN
INSERTED I ON T.PKCol = I.PKCol
JOIN
DELETED D ON T.PKCol = D.PKCol
WHERE
T.Price <> I.Price;
Now do you see why you got jumped on...?
After question edit, for INSERT only
UPDATE
T
SET
OriginalPrice = I.Price
FROM
MyTable T
JOIN
INSERTED I ON T.PKCol = I.PKCol
But if all INSERTs happen via a stored procedure, I'd just set it there instead.
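For example, a minimal sketch of such an insert procedure (the procedure name and the assumption that Products has only these columns are mine, not from the question):
CREATE PROC dbo.Products_Insert --hypothetical name
@Price decimal(10, 2)
AS
INSERT INTO dbo.Products (Price, OriginalPrice)
VALUES (@Price, @Price); --OriginalPrice is fixed at insert time and never touched again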

There is no readonly attribute for a SQL Server table column, BUT you could implement the functionality you describe using a trigger (and restricting permissions).
Except that it is not the best way to solve the problem. Instead, treat the price as a Type 2 'slowly changing dimension'. This involves having a 'ValidTo' column (or 'StartDate' and 'EndDate' columns) and closing off a record:
Supplier_Key  Supplier_Code  Supplier_Name   Supplier_State  Start_Date   End_Date
123           ABC            Acme Supply Co  CA              01-Jan-2000  21-Dec-2004
124           ABC            Acme Supply Co  IL              22-Dec-2004
If you do go the trigger route (though I suggest SCD type 2 instead), make sure it can handle multiple rows: Multirow Considerations for DML Triggers
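As a rough sketch of closing off a record under SCD type 2 (the ProductPrice table and the @ProductID / @NewPrice inputs here are assumptions, not from the question):
-- Close off the current record for the product
UPDATE ProductPrice
SET End_Date = GETDATE()
WHERE Product_ID = @ProductID
AND End_Date IS NULL;
-- Open a new record carrying the new price
INSERT INTO ProductPrice (Product_ID, Price, Start_Date, End_Date)
VALUES (@ProductID, @NewPrice, GETDATE(), NULL);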

I would recommend storing your price in a separate table called Prices, with the columns Price and Date.
Then whenever the price is updated, INSERT a new record into the Prices table. Then when you need to know the current price, you can pull from there.
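A rough sketch of that layout, with the ProductID column and the data types assumed (the answer only names Price and Date):
CREATE TABLE Prices
(
ProductID int NOT NULL,
Price decimal(10, 2) NOT NULL,
[Date] datetime NOT NULL DEFAULT GETDATE()
);
-- The current price is simply the most recent row for the product
SELECT TOP (1) Price
FROM Prices
WHERE ProductID = @ProductID --placeholder for the product you are looking up
ORDER BY [Date] DESC;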
However, if you wish to update an OriginalPrice column automatically, you could add a TRIGGER to the table to do this:
http://msdn.microsoft.com/en-us/library/aa258254%28v=sql.80%29.aspx

This is what I ended up with, with heavy help from @gbn, @Mitch and @Curt:
create trigger TRG_Products_Price_I --common-ish naming style
on dbo.Products after insert as
begin
set nocount on
update p
set OriginalPrice = i.Price
from Products p
join inserted i on p.ID = i.ID
end
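A quick multi-row sanity check, assuming Products has an identity ID column and nothing else mandatory besides Price:
INSERT INTO dbo.Products (Price) VALUES (9.99), (19.99), (29.99);
SELECT ID, Price, OriginalPrice FROM dbo.Products; --OriginalPrice should match Price on every new row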
I tried to follow this article as well.
Thanks to everyone!

Related

How to keep the updated rows for a period of time

I am working on a small project using SQL Server, and I need some help to clarify a problem I am having with a task.
What I am trying to do is update a row in a table and keep that updated value for a period of time.
For example I am using this code to update:
ALTER PROCEDURE [dbo].[updtprice]
@bc int,
@price float
AS
UPDATE tblproduct
SET Price = @price
WHERE Barcode = @bc
What I am trying to do is, for example, update the price of the product with barcode 2233 from 0.99 to 0.70 (today). And I want the price to revert to the old one after one or two weeks (a certain duration set by the user).
How can I accomplish this task?
Thanks to everyone
There may be other ways to do it, but it sounds like you need another table that records the previous value and the date it expires. Then you would have a "job" that runs daily, takes note of anything expired, and updates the price back to the previous value.
OR you add additional columns to this table for the special price and its expiration date, and any application that reads the row looks at the special price: if it is still valid, use it; if not, use the regular price.
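A sketch of what that read-time check might look like (SpecialPrice and SpecialPriceExpires are assumed column names, not from the question):
SELECT Barcode,
CASE WHEN SpecialPrice IS NOT NULL AND SpecialPriceExpires >= GETDATE()
THEN SpecialPrice
ELSE Price
END AS EffectivePrice
FROM tblproduct;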
To be honest, the only way to properly do this is either:
Write a script that does this.
Write a scheduled task in MS SQL.
Neither is a great way to do it, in all honesty. I would personally go with option 1 if it's a web application and option 2 if it's just a database.
You can have 3 columns in your table:
actual_price
current_price
price_update_date
The actual_price column will always hold the exact price of the product.
You can update current_price with the discounted price, and also update the price_update_date column when updating current_price,
like this:
ALTER procedure [dbo].[updtprice]
@bc int,
@price float
as
update tblproduct set current_price = @price,
price_update_date = SYSDATETIME()
where
Barcode = @bc
Then you can create another stored proc which runs every day at 12 AM and checks whether the time since price_update_date is greater than or equal to your threshold; if so, it updates the product's current_price back to actual_price.
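A minimal sketch of that nightly procedure, assuming a 14-day threshold and the column names above:
CREATE PROCEDURE dbo.RevertExpiredPrices
AS
UPDATE tblproduct
SET current_price = actual_price
WHERE price_update_date IS NOT NULL
AND DATEDIFF(DAY, price_update_date, SYSDATETIME()) >= 14;
You could then schedule it with a SQL Server Agent job that runs at midnight.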

Sql trigger for update in specific column after insert

I made a trigger that is supposed to update another value in the same table after I make an insert into the table. I do get the result I am looking for, but when I asked my teacher if it was correct, he responded that this trigger updates "all" tables(?) and is thus incorrect. He would not explain more than that (he is that kind of teacher...). Can anyone understand what he means? I am not looking for the right code, just an explanation of what I might have misunderstood.
CREATE TRIGGER setDate
ON Loans
AFTER INSERT
AS
BEGIN
UPDATE Loans
set date = GETDATE()
END;
Your teacher intends to say that the query updates all rows in the table -- perhaps you misunderstood her or him.
The best way to do what you want is to use a default value:
alter table loans add constraint df_loans_date default getdate() for date;
That is, a trigger is not needed. If you did use a trigger, I'll give you two hints:
An instead of trigger.
inserted should be somewhere in the trigger.
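For what it's worth, a rough sketch of an AFTER INSERT variant that joins on inserted (assuming Loans has an Id key column); it only touches the newly inserted rows instead of the whole table:
CREATE TRIGGER setDate ON Loans
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
UPDATE l
SET [date] = GETDATE()
FROM Loans AS l
JOIN inserted AS i ON l.Id = i.Id;
END;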
Hello there. I upvoted your question because it might be the question of many beginners in SQL Server.
As I see it, you defined the trigger the right way; it is a valid approach, although it's not the best.
Since we are not discussing all the approaches you could choose, I'm going to correct your code and explain what your teacher meant.
Look at this UPDATE you wrote:
UPDATE Loans
SET date = GETDATE()
If you write a SELECT without a WHERE clause, what does it do? It returns all the rows in the table being selected, right?
SELECT * FROM dbo.loans
So without a WHERE clause, your UPDATE will update all of the rows in the table.
OK, now what do you do to update only the row (or rows) that were recently inserted?
When you are writing a trigger, you are allowed to use this table: Inserted,
which holds the rows that were recently inserted. Those rows effectively come to this table first and then go to the final destination table, so you can use it to identify exactly which rows in dbo.loans were just added.
It would be like this:
Update dbo.loans
SET date = GETDATE()
WHERE Id IN (SELECT Id FROM Inserted)

Is storing calculated values in database a bad idea? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
Let's say I'm working on an online store project. So, I'm gonna have to create a table called 'Product' in my database. Say I also want users to be able to 'like' my products. That requires me to create another table called 'ProductLike' to store users' IDs alongside the ID of the product they like (a junction table).
The main scenario: every time a user sends a request to my website to get a product page, I'm gonna have to recalculate the number of likes that product has.
My question is: So, I know the standard approach is not to store 'Calculated values' in the database (normalization). But what about cases like this? (I mean cases in which it might be expensive to calculate something). For instance in the example above, isn't it better to have a column named 'NumberOfLikes' in the 'Product' table to store the calculated number of the product likes for fast retrieval?
Update
isn't it better to have a column named 'NumberOfLikes' in the 'Product' table to store the calculated number of the product likes?
IMHO, the direct answer to this question is "No, unless you have a real performance problem due to the counting of likes".
If you do have a performance problem, and you've identified its source as the count of likes, then you might want to consider adding a LikesCount column to the products table. If you do add such a column, please note you are going to have to update it on every change to the ProductLike table - delete, update and insert.
This means you are going to have to write a trigger for this table to handle all these cases, but it shouldn't be too hard since you can do everything in a single trigger - something like this:
create trigger ProductLikeChanged on ProductLike
for insert, update, delete
as
update p
set LikesCount = (select count(*) from ProductLike as pl where pl.productId = p.Id)
from product as p
where exists
(
select 1 from inserted as i where p.id = i.productId
)
or exists
(
select 1 from deleted as d where p.id = d.productId
)
Original version
Based on your description, "calculating" the number of likes for a product is simply a count of rows in the ProductLike table where the product id is the id of the product you are currently displaying to the user.
This can be done very fast, especially if the ProductLike table clustered index is ProductId and then UserId, thus allowing SQL Server to use clustered index seek and not a table scan.
Basically, your ProductLike table should look like this:
Create table ProductLike
(
ProductId int,
UserId int,
Constraint PK_ProductLike PRIMARY KEY (ProductId, UserId)
)
Note that by default, SQL Server will use the primary key as the clustered index of the table.
Then your select statement for the product page can be something like this:
select Name, Description, -- Other product related details
(select count(*)
from productLike as pl
where pl.ProductId = p.Id) as likeCount
from product as p
By "calculated" value, I suspect you mean an accumulation of the number of requests.
The simplest approach in terms of database design and maintenance is to store each request as a row in a table and to summarize when needed. This has certain nice features:
A user can "unrequest" or "unlike" quite easily.
Inserts are (typically) at the "end" of the table, minimizing fragmentation and speeding inserts. Note: This can result in contention for the last page if multiple threads are writing at the same time.
Counts can be flexible, limited to a particular date range or type of user for instance.
The data is drill-downable. That is, for a given count you know exactly what produced it.
Summarization is often very reasonable, if you have the right indexes and partitions on the data.
That said, such summarization does not meet all needs. A traditional approach is to use a trigger to maintain summary tables -- adding lots of complexity for maintenance (you need insert, delete, and update triggers). I think @daniherrera's answer gives guidance on the best approach.
For real life, these are real solutions. You should materialize this field and denormalize the database to keep performance up. You have several options to keep this field up to date (a sketch of the first follows):
Materialized views.
Triggers.
Stored procedures.
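In SQL Server the "materialized view" option would be an indexed view; a rough sketch, assuming the ProductLike table defined earlier lives in the dbo schema:
CREATE VIEW dbo.ProductLikeCount
WITH SCHEMABINDING
AS
SELECT ProductId, COUNT_BIG(*) AS LikeCount
FROM dbo.ProductLike
GROUP BY ProductId;
GO
CREATE UNIQUE CLUSTERED INDEX IX_ProductLikeCount ON dbo.ProductLikeCount (ProductId);
SQL Server then maintains LikeCount automatically as rows are inserted into or deleted from ProductLike.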
Disclaimer: Your question is primarily opinion-based; I guess it will be closed in a while.
The number of products liked by a user can be fetched from the UserProductLike table, where userid is the ID of your user.

Why Is SQL Trigger Not Inserting Rows In Sequential Order?

Recently I inherited a new ASP web application that merely allows customers to pay their outstanding invoices online. The application was poorly designed and did not have a payment history table. The entire payments table was deleted by the web service that transports the payment records to the accounting system of record.
I just created a simple trigger on the Payments table that simply copies the data from the Payments table into a Payment_Log table. Initially, the trigger just did a select * on inserted to copy the data. However, I just modified the trigger to insert the date of the payment into the Payment_Log table since one of our customers is having some issues that I need to debug. The new trigger is below. My question is that I have noticed that with this new version of the trigger, the rows are being inserted into the middle of the table (i.e. not at the end). Can someone explain why this is happening?
ALTER trigger [dbo].[PaymentHistory] on [dbo].[Payments]
for insert as
Declare @InvoiceNo nvarchar(255),
@CustomerName nvarchar(255),
@PaymentAmount float,
@PaymentRefNumber nvarchar(255),
@BulkPaid bit,
@PaymentType nvarchar(255),
@PaymentDate datetime
Select @InvoiceNo = InvoiceNo,
@CustomerName = CustomerName,
@PaymentAmount = PaymentAmount,
@PaymentRefNumber = PaymentRefNumber,
@BulkPaid = BulkPaid,
@PaymentType = PaymentType
from inserted
Set @PaymentDate = GETDATE()
Insert into Payment_Log
values (@InvoiceNo, @CustomerName, @PaymentAmount, @PaymentRefNumber, @BulkPaid, @PaymentType, @PaymentDate)
Below is a screenshot of SQL Server Management Studio that shows the rows being inserted into the middle of the table data. Thanks in advance for the help guys.
Datasets don't have an order. This means that SELECT * FROM x can return the results in a different order every time.
The only time that data is guaranteed to come back in the same order is when you specify an ORDER BY clause.
That said, there are circumstances that make the data normally come back in a certain order. The most visible one is with a clustered index.
This makes me wonder if the two tables have a Primary Key or not. Check all the indexes on each table and, at the very least, enforce a Primary Key.
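A rough sketch of both suggestions applied to the log table (the surrogate key name here is an assumption):
-- Give Payment_Log a stable key so rows can be identified and ordered
ALTER TABLE Payment_Log ADD PaymentLogID int IDENTITY(1, 1) NOT NULL;
ALTER TABLE Payment_Log ADD CONSTRAINT PK_Payment_Log PRIMARY KEY CLUSTERED (PaymentLogID);
-- And always sort explicitly when reading the log
SELECT * FROM Payment_Log ORDER BY PaymentLogID;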
As an aside, triggers in SQL Server are not fired for each row, but once per statement. This means that the inserted table can contain more than just one row. (For example, when bulk inserting test data, or re-loading a large batch of transactions.)
For this reason, copying the data into variables is not standard practice. Instead, you could just do the following...
ALTER trigger [dbo].[PaymentHistory] on [dbo].[Payments]
for insert as
INSERT INTO
Payment_Log
SELECT
InvoiceNo, CustomerName, PaymentAmount, PaymentRefNumber,
BulkPaid, PaymentType, GetDate()
FROM
inserted
OK, this question has the same answer as "why do rows come back in different orders when I do not use an ORDER BY clause in my SQL query?". If you ask SQL Server to process rows in any way, it will process them in the fastest way it can: cached rows first, those nearest the read head on the hard drive next, and finally the rest.
Put it another way: you would be annoyed if queries took 10 times longer with rows in order than just on a first-come, first-served basis. SQL Server does what you ask as quickly as it can.
Hope this helps

SQL Server triggers "For Each Row" equivalent

I need to update multiple rows in a Parts table when a field in another table changes, and I want to use a trigger. The reason for the trigger is that many existing applications use and modify the data and I don't have access to all of them. I know some databases support FOR EACH ROW in the trigger statement, but I don't think SQL Server does.
Specifically, I have two tables, Parts and Categories.
Parts has Part#, Category_ID, Part_Name and Original and lots of other stuff
Category has Category_ID and Category_name.
Original is a concatenation of Category_Name and Part_Name separated by a ':'
For example Bracelets:BB129090
If someone changes the Category_Name (for example from Bracelets to Bracelets), the Original field must be updated in every row of the Parts table. While this is an infrequent event, it happens enough to cause trouble.
No web or desktop applications use Original.
All accounting applications use only Original.
It is my task to keep Accounting and the other applications in sync.
I did not design the database and the company that wrote the accounting program will not change it.
Or another option: why don't you just create a view over those two tables, for your Accounting department, which contains this concatenated column:
CREATE VIEW dbo.AccountingView
AS
SELECT
p.PartNo, p.Part_Name, p.Category_ID,
c.Category_Name + ':' + p.Part_Name as 'Original'
FROM
Parts p
INNER JOIN
Category c ON p.Category_ID = c.Category_ID
Now your Accounting people can use this view for their reporting, it's always fresh, always up to date, and you don't have to worry about update and insert triggers and all those tricky things.....
Marc
The Original column violates 1NF, which is a very bad idea. You can either
Skip the column completely and concatenate it in each query (probably not the best solution, but I argue that it's probably better than the trigger).
Create a view over the table and have the Original column in the view (probably what I would do), or
Make Original a computed column, which is the best way if you want to create an index on it.
I guess in your case there is no need for a row-level trigger.
You can do something like
IF UPDATE(Category_Name)
UPDATE Parts
SET Original = inserted.Category_Name + ':' + Part_Name
FROM Parts
INNER JOIN inserted ON Parts.Category_ID = inserted.Category_ID
as an UPDATE trigger on the Category table.
If you really need per-row processing (say, calling a stored procedure per row), you need a CURSOR or a WHILE loop over inserted.
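If you did need that, a sketch of a cursor over inserted inside the Category update trigger might look like this (dbo.RecalculateParts is a hypothetical procedure, not from the question):
DECLARE @Category_ID int, @Category_Name nvarchar(255);
DECLARE category_cursor CURSOR LOCAL FAST_FORWARD FOR
SELECT Category_ID, Category_Name FROM inserted;
OPEN category_cursor;
FETCH NEXT FROM category_cursor INTO @Category_ID, @Category_Name;
WHILE @@FETCH_STATUS = 0
BEGIN
EXEC dbo.RecalculateParts @Category_ID, @Category_Name; --hypothetical per-row work
FETCH NEXT FROM category_cursor INTO @Category_ID, @Category_Name;
END;
CLOSE category_cursor;
DEALLOCATE category_cursor;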
If you can alter the table schemas, one option that would ensure the Original column is always up to date, no matter what, is to make Original a computed column - a column that's computed from the Category_Name plus the Part_Name as needed.
For this, you need to create a stored function that will do that computation for you - something like this:
CREATE FUNCTION dbo.CreateOriginal(@Category_ID INT, @Part_Name VARCHAR(50))
RETURNS VARCHAR(50)
WITH SCHEMABINDING
AS BEGIN
DECLARE @Category_Name VARCHAR(50)
SELECT @Category_Name = Category_Name FROM dbo.Category
WHERE Category_ID = @Category_ID
RETURN @Category_Name + ': ' + @Part_Name
END
and then you need to add a column to your Parts table which will show the result of this function for each row in the table:
ALTER TABLE Parts
ADD Original AS dbo.CreateOriginal(Category_ID, Part_Name)
The main drawback is the fact that to display the column value, the function has to be called each time, for each row.
On the other hand, your data is always up to date and always guaranteed to be correct, no matter what. No triggers needed, either.
See if that works for you - depending on your needs and the amount of data you have, it might well perform just fine for you.
Marc