Is there a way to set a nullable timestamp back to null? - sql

I have a nullable timestamp column in a table that tracks when the entry was fetched by a client. Sometimes something goes wrong on the client side and I need to set the timestamp back to null. I tried to execute this query directly in SQL Server Management Studio:
USE [MyDB]
GO
UPDATE [dbo].[MyTable]
SET [MyTimestamp]=null
WHERE ID=SomeInt;
I get a message that one row was affected, but when I refresh my SELECT * on the table, there is no change to the timestamp.
PS: The whole DB runs on an Azure server, but I also cannot get it to work on my test DB on localhost in SQL Server 2014.
I would be grateful for any input.

The answer is that you cannot set a timestamp column to NULL. It acts like a row version number.
Also:
The timestamp data type is just an incrementing number and does not preserve a date or a time.
There are some workarounds you can use, such as the one in the related thread below, but the timestamp data type is rarely used nowadays.
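For example, if you control the schema, one workaround is to keep a separate nullable datetime2 column that you maintain yourself and can reset at will. A minimal sketch, where FetchedAt is a hypothetical column name and SomeInt is the placeholder ID from the question:
ALTER TABLE [dbo].[MyTable] ADD [FetchedAt] datetime2 NULL;
-- set it when the client fetches the row
UPDATE [dbo].[MyTable] SET [FetchedAt] = SYSDATETIME() WHERE ID = SomeInt;
-- clear it again when something goes wrong on the client side
UPDATE [dbo].[MyTable] SET [FetchedAt] = NULL WHERE ID = SomeInt;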

Related

colon(:) and dot(.) as millisecond separator in datetime2

I have migrated a Sybase database to SQL Server 2008.
The main application that uses the database tries to set some datetime2 columns with data like 1986-12-24 16:56:57:81000, which gives this error:
Conversion failed when converting date and/or time from character string.
Running the same query using a dot (.) instead of a colon (:) as the millisecond separator, like 1986-12-24 16:56:57.81000, or limiting the milliseconds to 3 digits, like 1986-12-24 16:56:57:810, solves the problem.
NOTE:
1. I don't have access to the application's source code to fix this issue, and there are lots of tables with the same problem.
2. The application connects to the database using an ODBC connection.
Is there any quick solution, or should I write lots of triggers on all the tables to fix it using the above approaches?
Thanks in advance
As Gordon Linoff said:
A trigger on the current table is not going to help because the type conversion happens before the trigger is called. Think of how the trigger works: the data is available in a "protorow".
But there is a simple answer!
Using a SQL Server Native Client connection instead of the basic SQL Server ODBC driver handles everything.
Note:
1. As I am using SQL Server 2008, version 10 of the SQL Server Native Client works fine, but not version 11 (that one is for SQL Server 2012).
2. "Use Regional Settings" causes some other conversion problems, so don't enable it if you don't need it.
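For illustration only, an ODBC connection string along these lines selects the Native Client driver (server and database names are placeholders):
Driver={SQL Server Native Client 10.0};Server=myServer;Database=myDatabase;Trusted_Connection=yes;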
SELECT REPLACE(GETDATE(), ':', '.')
But this returns the datetime as a string, and it is not converted back into a datetime.
Why would you need triggers? You can use update to change the last ':' to '.':
update t
set col = stuff(col, 20, 1, '.');
You also mistakenly describe the column as datetime2. That uses an internal date/time format. Your column is clearly a string.
EDIT:
I think I misinterpreted the question (assuming the data is already in a table). Bring the data into staging tables and do the conversion in another step.
A trigger on the current table is not going to help because the type conversion happens before the trigger is called. Think of how the trigger works: the data is available in a "protorow".
You could get a trigger to work by creating views and building a trigger on a view, but that is even worse. Perhaps the simplest solution would be:
1. Change the name and data type of the column so it contains a string.
2. Add a computed column that converts the value to datetime2, as sketched below.
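A minimal sketch of those two steps on a hypothetical staging table, with assumed names (EventTimeRaw for the renamed string column, EventTime for the computed column):
-- 1. rename the (string) column the data lands in
EXEC sp_rename 'dbo.StagingTable.EventTime', 'EventTimeRaw', 'COLUMN';
-- 2. add a computed column that swaps the last ':' for '.' and converts
ALTER TABLE dbo.StagingTable
    ADD EventTime AS CONVERT(datetime2, STUFF(EventTimeRaw, 20, 1, '.'));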

How to determine when a SQL Subscription was marked for re-initialization?

MS SQL Server 2012
I am trying to determine when a subscription was marked for reinitialization. I can see when the subscription started to reinitialize, but I want to see at what time the command was issued to reinitialize the subscriptions.
I have looked in the syssubscriptions table; there is a timestamp column, but it is not actually a time. Is there any way to determine this from the SQL logs or from a modified datetime somewhere else?
Timestamp refers to the date and time that the subscription was created.
I just tested and you can get it from subscription_time value in your distribution database metadata like so:
select publisher_db, subscriber_db, subscription_time, *
from distribution.dbo.MSsubscriptions
where subscriber_id >=0
-Chuck

SSIS - fill unmapped columns in table in OLE DB Destination

As you can see in the image below, I have a table in SQL Server that I am filling via a flat file source. There are two columns in the destination table that I want to update based on the logic listed below:
SessionID - all rows from the first CSV import will have a value of 1; the second import will have a value of 2, and so on.
TimeCreated - datetime value of when the CSV imports happened.
I don't need help with how to write the TSQL code to get this done. Instead, I would like someone to suggest a method to implement this as a Data Flow task within SSIS.
Thank you in advance for your thoughts.
Edit 11/29/2012
Since all answers so far suggested taking care of this on the SQL Server side, I wanted to show you what I had initially tried doing (see image below), but it did not work. The trigger did not fire in SQL Server after SSIS inserted the data into the destination table.
If any of you can explain why the trigger did not fire, that would be great.
If you are able to modify the destination table, you could make the default values for SessionID and TimeCreated do all the work for you. SessionID would be an auto-incrementing integer, while the default value for TimeCreated would be GETDATE() or SYSDATETIME(), depending on the data type.
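For example, the TimeCreated default could look like this (dbo.ImportTable is an assumed name for the destination table):
ALTER TABLE dbo.ImportTable
    ADD CONSTRAINT DF_ImportTable_TimeCreated DEFAULT (GETDATE()) FOR TimeCreated;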
Now, if you truly need the values to be created as part of your workflow, you can use variables for each.
SessionID would be a package variable which is set by an Execute SQL Task. Just reference the variable in your result set and have your SQL determine the next number to use. There are potential concurrency issues with this, though.
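For illustration, the Execute SQL Task could run something along these lines (dbo.ImportTable is an assumed table name) and map the single-row result set to the SessionID package variable:
SELECT ISNULL(MAX(SessionID), 0) + 1 AS NextSessionID
FROM dbo.ImportTable;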
TimeCreated is easily done by creating a Derived Column in your data flow based on the system variable StartTime.
You can use a Derived Column to fill the TimeCreated column; if you want the time the data flow ran, just use the date and time functions to get the current datetime. If you want a common timestamp for the whole package (all files), you can use the system variable @[System::StartTime].
For the CSV looping (I guess), you use a Foreach Loop container and map an iterative value to a user variable, which you then map in the Derived Column for SessionID as mentioned above.
First, it would be better to do it on the SQL Server side :)
But if you don't want to, or cannot, do it on the server side, you can use this approach:
Obviously, you need to store the SessionID somewhere. You can create a text file for that or, better, a settings table in SQL Server; there are other approaches as well.
To add the SessionID and TimeCreated columns to the OLE DB Destination, you can use Derived Columns.

MS SQL 2005: Any way to temporarily set the system date for a T-SQL script to a different date?

We have a bunch of T-SQL scripts that depend on today's date and when they run. If one doesn't run in the week it should, we end up temporarily setting the system time back a day, running the script, then setting it back.
Is there any way to temporarily set the system date for a script without changing the original script, for example when you execute it, or only for that session?
You could store the actual date in a table / temp table.
Then retrieve or update that date rather than making a call to GETDATE().
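A minimal sketch of that approach, with assumed names (a one-row dbo.RunDate table and a dbo.Today() function that the scripts would call instead of GETDATE()):
CREATE TABLE dbo.RunDate (RunDate datetime NOT NULL);
INSERT INTO dbo.RunDate (RunDate) VALUES (GETDATE());
GO
CREATE FUNCTION dbo.Today() RETURNS datetime
AS
BEGIN
    RETURN (SELECT TOP (1) RunDate FROM dbo.RunDate);
END
GO
-- before a backdated run, shift the stored date instead of the system clock
UPDATE dbo.RunDate SET RunDate = DATEADD(day, -1, GETDATE());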
I've found an answer by someone else, so I share it here: "The date is tied to the OS date and time. See here: http://msdn.microsoft.com/en-us/library/ms188383.aspx".
You could refer to this other question Simulate current date on a SQL Server instance?

How to remove a tuple from an SQL table after a timeout?

I am faced with a peculiar requirement which is as follows:
A network-intensive operation is triggered to a server by multiple clients, through a web-interface. However, only one operation is allowed at a time, and hence an entry(tuple) is made in an SQL table to indicate that the operation is in progress. Once the operation is complete (irrespective of success or failure), the appropriate result is displayed back to the client(s), and the corresponding tuple is removed from the SQL table.
Since the operation is network-intensive, a scenario has to be handled where the operation needs to be considered cancelled after some timeout (10 minutes).
Is there ANY way the lifetime of a row in SQL can be associated with a timeout value, so that it is deleted after a certain time? My application is primarily written in Java 1.5 and EJB 3.0, using JPA/Hibernate to access an Oracle 10g DB engine.
Thanks in advance.
Regards,
Nagendra U M
I would suggest that you try using a timestamp column containing the start time of the task.
A before trigger can then be made to delete the old rows, if their tasks have timed out, before a new one is inserted.
If you want to have multiple tasks with different timeouts, you can even add a column with the timeout in seconds. Just code your trigger accordingly.
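A rough sketch of such a trigger in Oracle syntax, with assumed names (an ops_in_progress table with a start_time column). Note that it is a statement-level trigger; a row-level trigger deleting from the same table would hit the mutating-table restriction:
CREATE OR REPLACE TRIGGER trg_purge_timed_out_ops
BEFORE INSERT ON ops_in_progress
BEGIN
  -- treat anything started more than 10 minutes ago as timed out
  DELETE FROM ops_in_progress
   WHERE start_time < SYSTIMESTAMP - INTERVAL '10' MINUTE;
END;
/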
I don't know whether Oracle has this kind of facility, but I don't think any DB engine has one.
If you want to do it at the DB level:
1. You must have a datetime column, e.g. 'CreatedDate', in the table. This column will hold the datetime when the record was created.
2. Write a procedure and put it in a scheduled job. This job will run every 10 minutes and remove records that are more than 10 minutes old. The query will look like this:
T-SQL: Please convert it according to your db engine.
DELETE FROM yourtable WHERE CreatedDate < DATEADD(mi, -10, GETDATE())
This will delete all records older than 10 minutes from the table.
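If you prefer to schedule a stored procedure rather than the raw statement, a minimal wrapper (using the same table and column names as above) could be:
CREATE PROCEDURE dbo.PurgeOldRecords
AS
BEGIN
    DELETE FROM yourtable
    WHERE CreatedDate < DATEADD(mi, -10, GETDATE());
END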
This is just to give you an idea of a scheduled job. It is in SQL Server; I don't know about Oracle.
step_by_step_guide_to_add_a_sql_job_in_sql_server_2005
It sounds like you're implementing a mutex using the database; take a look at this question and see if it helps. Sounds like transactional access to a flag table will solve this for you, as long as you catch both success and failure states in your server code.