SQL rule to change a table when a specific time occurs - sql

I am using PostgreSQL. Is it possible for me to change a table when a specific time happens? I would like to modify values when a specific date arrives, where that date is specified in the table being modified. For example:
a piece of artwork is located in a museum; after its exhibition ends it is automatically placed back into storage, changing its location attribute. This occurs on a specified date.

It is not possible. See cron jobs, as suggested by Muleinik... but then, to expand on your example:
a piece of artwork is located in a museum; after its exhibition ends it is automatically placed back into storage, changing its location attribute. This occurs on a specified date.
What happens if the piece of art is stolen (happens), or the museum it got sent to as part of a temporary exhibition decides to keep it (happens) or return it to its "rightful owner" (happens), or it's shelved in the wrong location (happens), etc.?
Don't just assume that things will go well -- they won't.

Postgres does not have triggers on system-wide events (such as time).
What you can do, however, is have the OS's cron or at services do it for you by scheduling a statement like this:
echo "UPDATE artwork SET location='storage' WHERE name='Mona Lisa' | psql -u some_user -d some_database

Maybe you need a different data model, one that would allow you to store "historical" locations (in quotes because you'll keep future records there, too).
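For instance, something along these lines (just a sketch - the table and column names are made up, not taken from your schema):
-- One row per location assignment, with a validity range; the "current" location
-- is derived at query time, so no scheduled UPDATE is needed.
CREATE TABLE artwork_location (
    artwork_id  integer NOT NULL,
    location    text    NOT NULL,
    valid_from  date    NOT NULL,
    valid_to    date,                 -- NULL = open-ended
    PRIMARY KEY (artwork_id, valid_from)
);
-- Where is artwork 42 today?
SELECT location
FROM   artwork_location
WHERE  artwork_id = 42
  AND  valid_from <= CURRENT_DATE
  AND  (valid_to IS NULL OR valid_to >= CURRENT_DATE)
ORDER BY valid_from DESC
LIMIT  1;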

Related

Multiple users accessing a linked table occasionally see a message "Cannot update. Database or object is read-only"

We have a split MS Access database. When users log on, they are connected/linked to two separate Access databases (one for the specific project they are working on, and one for record locking and other global settings). The "locking" database is the one I need to find a solution for.
One of the tables, "tblTSS_RecordLocking", simply stores a list of user names and the recordID of the record they are editing. This never has more than 50 records - usually closer to 5-10. But before a user can do anything on a record, it opens "tblTSS_RecordLocking" to see if the record is in use (so it happens a lot):
Set recIOC = CurrentDb.OpenRecordset("SELECT tblTSS_RecordLocking.* FROM tblTSS_RecordLocking WHERE (((tblTSS_RecordLocking.ProjectID_Lock)=1111) AND ((tblTSS_RecordLocking.RecordID_Lock)=123456));", , dbReadOnly)
If it's in use, the user simply gets a message and the form/record remains locked. If not in use, it will close the recordset and re-open it so that the user name is updated with the Record ID:
Set recIOC = CurrentDb.OpenRecordset("SELECT tblTSS_RecordLocking.* FROM tblTSS_RecordLocking WHERE (((tblTSS_RecordLocking.UserName_Lock)='John Smith'));")
If recIOC.EOF = True Then
    recIOC.AddNew
    recIOC.Fields![UserName_Lock] = "John Smith"
Else
    recIOC.Edit
End If
recIOC.Fields![RecordID_Lock] = 123456
recIOC.Fields![ProjectID_Lock] = 111
recIOC.Update
recIOC.Close: Set recIOC = Nothing
As soon as this finishes, everything relating to the second database is closed down (and the .laccdb file disappears).
So here's the problem. On very rare occasions, a user can get a message:
3027 - Cannot update. Database or object is read-only.
On even rarer occasions, it can flag the db as corrupt, needing to be compressed and re-indexed.
I really want to know the most reliable way to do the check/update. The process may run several hundred times in a day, and whilst I only see the issue once every few weeks, and for the most part handle it cleanly (on the front-end), I'm sure there is a better, more reliable way.
I agree with mamadsp that moving to SQL is the best option and am already in the process of doing this. However, whilst I was not able to create a fix for this issue, I was able to find a work-around that has yet to fail.
Instead of having a single lock table in the global database, I found that creating a lock table in the project database solved the problem. The main benefit of this is that there is much less activity on the table. So, not perfect - but it is stable.

SSRS Data-Driven Subscription [based on static Subscription table] Not Picking Up Changes Made to Subscription Table

I have a .RDL report which I designed in BIDS and have deployed to my report server. The report asks for three parameters before viewing report: Year, Month and Customer ID. The report works great and does exactly what it is supposed to.
While I used to run each report individually because there were 2-3 customers, now there are 30+ customers who receive the report, so I wanted to switch to a more automated fulfillment method to get the reports generated. After doing some research, it appears that using Report Manager to create a "Data Driven Subscription" (DDS) using the "Windows File Share" option gives me the capabilities I need.
As part of creating the DDS, I created a table called [Subscription] which is a table containing one row for each customer receiving the report and has the following columns:
Year
Month
CustomerID
FileName
FileLocation
Overwrite
Format
...so through using the DDS Wizard in Report Manager, I was able to successfully set up a Data Driven Subscription (which is linked to various columns in the [Subscription] table) which creates a new report for each customer in the [Subscription] table, saves [and overwrites, if necessary] it in a location of my choosing as a PDF (specified in [Subscription].[FileLocation], or the FileLocation column of my table for each row), and runs every minute (I plan on changing frequency to once a week, eventually).
This works flawlessly, giving me a new set of 30 reports in the directory of my choosing, with each report having a name I assigned in the FileName column of my table. Exactly what I was looking for.
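For reference, the [Subscription] table is essentially the following (the column types shown are approximations for illustration, not my exact DDL):
CREATE TABLE dbo.[Subscription] (
    [Year]       int           NOT NULL,
    [Month]      int           NOT NULL,
    CustomerID   int           NOT NULL,
    [FileName]   nvarchar(255) NOT NULL,
    FileLocation nvarchar(255) NOT NULL,
    Overwrite    bit           NOT NULL,
    [Format]     nvarchar(20)  NOT NULL   -- e.g. 'PDF'
);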
HERE'S THE PROBLEM: When I update the FileLocation or FileName (or anything, really) in the [Subscription] table, it doesn't pick up the changes right away. Sometimes it doesn't even pick them up at all. For example, I updated the [ReportName] column for one customer from Report_711622 to SpecialReport_711622, so that the output file for that customer should be named SpecialReport_711622 while all of the other reports should be called Report_XXXXX (no Special prefix) - but the file name of the report for Customer 711622 remains the same!
It's almost like the job only sees what it needs to do once a day, and then does not go back and reference the [Subscription] table until I leave for the night; when I come back in the morning, it picks up the change.
Since I am about to scale this process out to a large customer-base using a different report, I need to be able to make edits to the [Subscription] table and have them get picked up by the Data Driven Subscription immediately (and if not immediately, at least a fixed interval of time that I can adjust, so that I can know 100% when the change will get picked up).
Does anyone know what's causing my lag? How do I change it so that updates to the Subscription table get picked up regularly? I'm also having issues with creating new DDS on other reports (following the exact process outlined above) - I've created the subscriptions, for every minute, and it says they are running and the number of outputs match the number of customers with 0 errors, but there are no files in the drive I specified (or anywhere else I've looked, for that matter).
Any help would be greatly appreciated!
I think the answer lies in the mechanism SSRS uses. There are a few places "lag" can occur.
The subscription is in fact an SQL Agent job which creates a record in the Event table. This table is a queue that SSRS checks to do scheduled tasks.
There is a small amount of time between the moment the subscription creates the Event record and the moment SQL Server reads it and starts creating the dataset for your DDS. The creation of the DDS dataset takes some time, too. During this time, the subscription will be in the Pending state. If you change anything in the data during this window, the subscription will still use the old data as report parameters, so you will not notice your change until the next scheduled run.
Which brings me to the following: if a subscription is still running and the next schedule kicks in (which is likely, because yours runs every minute), the engine will not execute it, but will wait for the next subscription schedule, and so on. So that's another possible source of lag - and a cause of missing reports for a given scheduled minute. The subscription processes reports sequentially, one row from your DDS recordset at a time. Again, this takes some time. You can also see that in the subscription window when it says: # of # processed.
I suggest you look at the Event table in the database ReportServer during an execution. Also the ExecutionHistory views (there are 3) may be interesting. A scheduled run shows up as a RequestType = 1 and generates one record for each report. You can see the exact timing and parameters of each report that is run in the subscription. You may be able to extract the data you need to resolve your other issues.
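For example, something along these lines against the ReportServer catalog (just a sketch - adjust names to your instance, and note that the ExecutionLog column set varies a bit between SSRS versions):
-- Queued/pending subscription events
SELECT * FROM ReportServer.dbo.[Event];
-- Recent subscription-driven executions with their parameters and timings
SELECT TimeStart, TimeEnd, [Parameters], [Status]
FROM   ReportServer.dbo.ExecutionLog
WHERE  RequestType = 1
ORDER BY TimeStart DESC;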
EDIT: Here is a more elaborate guide to DDS data and events
http://blogs.msdn.com/b/deanka/archive/2009/01/13/diagnosing-and-troubleshooting-subscriptions.aspx
http://blogs.msdn.com/b/deanka/archive/2010/02/16/troubleshooting-subscriptions-part-ii-using-the-report-services-trace-log-file.aspx
Could this "Double-Hop" problem be the source of my issues? I'm so stuck on this one!
The Double-Hop Problem - MSDN Knowledgecast

Delete record 24 hours after insert

Is there a way to automatically delete a row 24 hours after its creation in Transact-SQL?
I'm making a site (learning experience) where the user needs to click a validation link sent by e-mail once they register. I want the users to validate themselves within 24 hours.
I suppose what I'd need is a trigger, but I'm really not sure on the syntax, nor if it is even possible.
I'm not sure of your schema but I would do it a different way. I would have a date/time against the database record that corresponds to the validation link. When they click the link, verify that the date and time of the database record is within 24 hours of the current time. If so, allow it, otherwise reject it.
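In T-SQL that check could look something like this (a sketch - the table, column and variable names are made up for illustration):
-- Mark the account validated only if the link is less than 24 hours old
DECLARE @Token nvarchar(64) = N'token-from-the-email-link';

UPDATE dbo.UserValidation
SET    Validated = 1
WHERE  ValidationToken = @Token
  AND  CreatedOn > DATEADD(HOUR, -24, GETDATE());

IF @@ROWCOUNT = 0
    PRINT 'Link expired or invalid';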
Q: Is there a way to automatically delete a row 24 hours after its creation in Transact-SQL?
A: Sure. Write a "sqlcmd" script, wrap it in a .bat file, and invoke it from Windows Scheduled Tasks:
http://windows.microsoft.com/en-US/windows7/schedule-a-task
Alternatively, depending on your version, you could schedule the same SQL script from SQL Server Agent:
http://msdn.microsoft.com/en-us/library/ms189237.aspx
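Either way, the statement the scheduled job runs would just be a time-based delete, something like this (table and column names assumed):
DELETE FROM dbo.PendingRegistration
WHERE  Validated = 0
  AND  CreatedOn < DATEADD(HOUR, -24, GETDATE());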
Putting a different spin on things:
When the user clicks your link, you can check whether the current time (with respect to MSSQL) is more than 24 hours after the row was created. If so, you'll reply with a "Too late" message (rather than validating the entry).
In any case - you absolutely, completely, totally, do NOT want to use a trigger!

How to backup tcsh history periodically to a single file in chronological manner?

I use tcsh at work - one of the features I use extensively is command-line history completion at the shell prompt. Currently, I've limited the size of my history file to 2000 lines (as I don't want to slow down the shell too much). However, at times I need a command I know I used a month or two back, but which by now has been erased. So I want a system wherein:
My history buffer stores 2000 lines only
Instead of older commands getting erased, they should be saved into a "master" history file, ordered chronologically, i.e. if two shells were opened, the commands in the history should be sorted by timestamp (not by the order in which the shells were closed)!
It would be perfect if this master history file could be auto-backed up, say on a per-week basis.
I'm sure many avid shell users have faced a situation like this - I'm hoping to get the answer from one of those users!
2000 is pretty low. You could raise that a fair amount without suffering too much.
Next you probably want to store the history on logout, since this is when new commands are added to the .history file.
Create a file called .logout in your $HOME (for bash users, this file is .bash_logout). In this, copy the contents of the history to a permanent store. For example:
cat $HOME/.history >> $HOME/.ancient_history
This will append the history to a file ".ancient_history". For bash users, the file to copy is called .bash_history.
Then create a cron job that creates a backup of this every now and again. For starters, here is one that moves the file to a filename with a date stamp at 5 minutes past midnight every day.
5 0 * * * mv $HOME/.ancient_history $HOME/.ancient_history_`date +%s`
There are probably more things you could do with this, but this is enough to get started. It's a pretty good idea that I hadn't thought of doing before either :-)
I never quite thought of doing this, but the simplest way would be to write a cron job that appends the history file to another file. The problem with this is that you would get duplicates unless you wrote the cron job to clear the history file after it did the dump.
History is stored (as far as I am aware) by line number only, so the numbers would repeat for each dump, but you could add a marker line with the date of the dump.

What do I gain by adding a timestamp column called recordversion to a table in ms-sql?

What do I gain by adding a timestamp column called recordversion to a table in ms-sql?
You can use that column to make sure your users don't overwrite data from another user.
Let's say user A pulls up record 1 and at the same time user B pulls up record 1. User A edits the record and saves it. Five minutes later, user B edits the record - but doesn't know about user A's changes. When he saves his changes, you use the recordversion column in your UPDATE's WHERE clause, which will prevent user B from overwriting what user A did. You can detect this invalid condition and throw some kind of "data out of date" error.
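For example, assuming a rowversion/timestamp column named recordversion (the other table, column and variable names here are illustrative):
-- @OriginalVersion holds the recordversion value read when the record was loaded
DECLARE @CustomerID int = 1,
        @NewName nvarchar(100) = N'New name',
        @OriginalVersion varbinary(8);   -- captured earlier, along with the row

UPDATE dbo.Customer
SET    CustomerName = @NewName
WHERE  CustomerID = @CustomerID
  AND  recordversion = @OriginalVersion;

IF @@ROWCOUNT = 0
    RAISERROR('The record was changed by another user since you loaded it.', 16, 1);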
Nothing that I'm aware of, or that Google seems to find quickly.
You don't get anything inherent by using that name for a column. Sure, you can create a column and do record versioning as described in another answer, but there's nothing special about the column name. You could call the column anything you want and do versioning, and you could call any column RecordVersion and nothing special would happen.
Timestamp is mainly used for replication. I have also used it successfully to determine whether the data has been updated since the last feed to the client (when I needed to send a delta feed) and thus pick out only the records which have changed since then. This does require having another table that stores the value of the timestamp (in a varbinary field) at the time you run the report, so you can compare against it on the next run.
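A sketch of that delta approach (table and column names are assumed):
-- Value saved from the previous run (kept in a varbinary(8) column elsewhere)
DECLARE @LastVersion varbinary(8) = 0x0000000000000000;

-- Only the rows that changed since the last feed
SELECT *
FROM   dbo.Orders
WHERE  RowVersionCol > @LastVersion;

-- Afterwards, store the current high-water mark (e.g. @@DBTS) for the next run.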
If you think that timestamp records the date or time of the last update, it does not do that; you would need datetime fields with default constraints (to get the original datetime) and triggers (to maintain the update time) to store that information.
Also, keep in mind if you want to keep track of your data, it's a good idea to add these four columns to every table:
CreatedBy(varchar) | CreatedOn(date) | ModifiedBy(varchar) | ModifiedOn(date)
While it doesn't give you full history, it lets you know who created an entry and when, and who last modified it and when. Those 4 columns provide pretty powerful tracking abilities without any serious overhead to your DB.
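A sketch of adding them (the table name, data types and defaults are just one way to do it):
ALTER TABLE dbo.SomeTable ADD
    CreatedBy  varchar(50) NOT NULL DEFAULT SUSER_SNAME(),
    CreatedOn  datetime    NOT NULL DEFAULT GETDATE(),
    ModifiedBy varchar(50) NULL,   -- set by the application (or a trigger) on update
    ModifiedOn datetime    NULL;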
Obviously, you could create a full-blown logging system that tracks every change and gives you complete history, but that's not the solution for the issue I think you are describing.