SQL 2016 - Memory Optimized Table - Row updated since SELECT - sql-server-2016

In my current application (using SQL 2008) we use a TimeStamp column in each table. When the application reads a row of data we send the timestamp value with the data. When they try to save any changes, we compare the timestamp columns to see if the row was modified by anyone else since it was read. If there has been a change we reject the update and tell them to refresh the data and try again so they can see what was changed and be sure they are not overwriting anything important without knowing. If the timestamps match then we allow the update and send them the new timestamp (in case they want to make more changes).
In SQL 2016 memory optimized tables, they no longer support this column type. They do have row versioning which is great, but is there a way to extract the “timestamp” when the record was created so we can use it in the same way? Is there a new way of doing this we can use instead?
I appreciate any help you can offer.

You could just add a ROWVERSION column to your table? Or, if you only need it to satisfy a need for a column in the correct format and the actual versioning is not required, you could just generate a column on the fly. Since TIMESTAMP (now replaced by ROWVERSION) is VARBINARY(8), you could just convert your Timestamp column to this: SELECT [Timestamp] = CONVERT(VARBINARY(8), Timestamp_S) – GarethD Jul 12 '16 at 13:09
This was taken from the question SQL Server 2016 timestamp data type; maybe that can help?
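As a sketch of the read/compare/update pattern described in the question, with hypothetical table, column, and variable names, and assuming a disk-based table (ROWVERSION is not supported on memory-optimized tables):

```sql
-- Hypothetical names; a sketch of the optimistic concurrency check
-- described in the question, using a ROWVERSION column.
CREATE TABLE dbo.Orders (
    OrderId INT IDENTITY PRIMARY KEY,
    Status  VARCHAR(20) NOT NULL,
    RowVer  ROWVERSION
);

-- Read: return RowVer to the client along with the data.
SELECT OrderId, Status, RowVer
FROM   dbo.Orders
WHERE  OrderId = @OrderId;

-- Save: the UPDATE only sticks if the row is unchanged since the read.
UPDATE dbo.Orders
SET    Status = @NewStatus
WHERE  OrderId = @OrderId
  AND  RowVer  = @OriginalRowVer;

IF @@ROWCOUNT = 0
    RAISERROR('Row was modified by someone else; refresh and retry.', 16, 1);
```

On success, re-reading the row gives the client the new RowVer value, mirroring the "send them the new timestamp" step in the original workflow.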

Related

Is it possible in SSMS to pull a DATETIME value as DATE?

I want to start by saying my SQL knowledge is limited (the sololearn SQL basics course is it), and I have fallen into a position where I am regularly asked to pull data from the SQL database for our ERP software. I have been pretty successful so far, but my current problem is stumping me.
I need to filter my results by having the date match from 2 separate tables.
My issue is that one of the tables outputs DATETIME with full time data. e.g. "2022-08-18 11:13:09.000"
While the other table zeros the time data. e.g. "2022-08-18 00:00:00.000"
Is there a way I can on the fly convert these to just a DATE e.g. "2022-08-18" so I can set them equal and get the results I need?
A simple CAST statement should work if I understand correctly.
CAST(dateToConvert AS DATE)
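For example, with hypothetical table and column names, the join in the question might look like this (note that wrapping a column in CAST can prevent index seeks on that predicate):

```sql
-- Hypothetical names; match rows on the calendar date only,
-- ignoring the time-of-day portion of each DATETIME value.
SELECT a.OrderNo, a.CreatedAt, b.ShipDate
FROM   OrdersA a
JOIN   OrdersB b
  ON   CAST(a.CreatedAt AS DATE) = CAST(b.ShipDate AS DATE);
```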

A date time field precise enough to differentiate between rows on bulk/mass insert

I am using SSIS to insert 500 to 3+ million rows into various tables. The data source is anything from a flat CSV file to another DB (Oracle, MySQL, SQL Server).
I am trying to create an "inserted_on" column that shows the date/time stamp of when the row was added, and I need it to be precise enough to differentiate between the previous and next row. In other words, every row should have a different date/time value, even if it's really close.
I tried a DATETIME2(7) field with a default value of GETDATE(), but that doesn't seem precise enough.
As described in this answer, you should use timestamp (rowversion).
See the linked documentation for additional details.
Hope this helps.
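As a sketch, with hypothetical names: a ROWVERSION column is guaranteed unique within the database and increases with each insert, unlike a DATETIME2(7) default, which can repeat for rows inserted in the same batch.

```sql
-- Hypothetical staging table: inserted_on records when the row arrived,
-- while row_seq distinguishes rows even when inserted_on values tie.
CREATE TABLE dbo.StagingRows (
    Id          INT IDENTITY PRIMARY KEY,
    Payload     VARCHAR(100) NOT NULL,
    inserted_on DATETIME2(7) NOT NULL DEFAULT SYSDATETIME(), -- may repeat in a bulk insert
    row_seq     ROWVERSION                                   -- unique per row
);

-- row_seq orders rows by insert order even when inserted_on ties.
SELECT Id, inserted_on, row_seq
FROM   dbo.StagingRows
ORDER  BY row_seq;
```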

SQL - Inserting a time value into column?

I am using Oracle SQL and I would like to insert a time value (e.g. 15:45 or 15:45:00) into a column which has a data type of TIMESTAMP. I have tried the following, but it gives an error about it not being a valid month.
INSERT trainTbl(Dest, trainTime)
VALUES
('Waterloo', '15:00:00');
I would appreciate it if someone could point me in the right direction.
Thanks
Oracle's TIMESTAMP data type holds a complete time and date. You cannot use it to store a time only; either store a complete time and date, or use a different data type for your column.
Some options for ways to store the time only are discussed in How to store only time; not date and time?. However, if you're already storing the date in another column, you should probably just store this information together. There is a reason Oracle provides the data types that it does.
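As a sketch of both options, assuming the trainTbl from the question (the date value and the interval column definition are hypothetical):

```sql
-- Option 1: store a complete date and time, supplying the date explicitly.
INSERT INTO trainTbl (Dest, trainTime)
VALUES ('Waterloo', TO_TIMESTAMP('2016-07-12 15:00:00', 'YYYY-MM-DD HH24:MI:SS'));

-- Option 2: change the column to hold only a time of day, e.g.
-- trainTime INTERVAL DAY(0) TO SECOND(0), then insert an interval literal.
INSERT INTO trainTbl (Dest, trainTime)
VALUES ('Waterloo', INTERVAL '0 15:00:00' DAY TO SECOND);
```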

Check date value in Netezza

In Netezza, I am trying to check whether a date value is valid or not, something like the ISDATE function in SQL Server.
I am getting dates like 11/31/2013, which is not valid. How can I check in Netezza whether a date is valid, so I can exclude invalid ones from my process?
Thanks
I don't believe there is a built-in Netezza function to check if a date is valid. You may be able to write a LUA function to do this, or you could try joining to a "Date" lookup table, like so:
Create a table with two columns:
DATE_VALUE date
DATE_STRING varchar(10)
Load data into this table for valid dates (generate a file in your favorite tool: Excel, Unix, whatever). There can even be more than one row per DATE_VALUE (different "valid" formats) if all you use this for is this check. If you fill in from, say, 1900 to 2100, as long as your data is within that range, you'll be fine. And it's a small table, too: ~200 years is only ~73,000 rows. Add more if needed. Heck, since the NZ date data type goes from AD 1 to AD 9999, you could fill it completely with only about 3.4 million rows (small for NZ).
Then, to isolate rows that have invalid dates, just use a JOIN or an EXISTS / NOT EXISTS against this table, on DATE_STRING. Since the table is so small, Netezza will likely broadcast it to all SPUs, making the performance impact trivial.
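A sketch of the lookup approach above, with hypothetical table and column names:

```sql
-- Small lookup table of valid dates and their string renderings.
CREATE TABLE valid_dates (
    date_value  DATE,
    date_string VARCHAR(10)
);

-- Rows in staging whose date string has no match are invalid.
SELECT s.*
FROM   staging s
LEFT JOIN valid_dates v
       ON s.raw_date = v.date_string
WHERE  v.date_string IS NULL;
```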
Netezza Analytics Package 3.0 (free download) comes with a couple LUA functions that verify date values: isdate() and todate(). Very simple to install / compile.

Splitting time and date in two separate date columns, in SQL server, is best practice?

Is it best practice to split a datetime into two datetime SQL columns?
For example, 2010-12-17 01:55:00.000 is put in two colums,
one column containing a datetime for
the date portion: 2010-12-17 00:00:00.000
one column containing a datetime
for the time portion: 1900-01-01 01:55:00.000
I'm being told this is best practice because at some point SQL 2000 didn't allow storing a time in a date, that there are even data storage standards that enforce this, and that some companies have ensured all their data is stored in that manner to comply with those standards.
If this is the case, I'm sure someone here has heard about it. Does any of this sound familiar?
In SQL Server 2008 you have DATE and TIME data types, so this becomes a non-issue. DATETIME always allowed for time, even back in SQL Server 6 and 7.
The reason people split it up is that with everything in one column, a query that returns all orders placed between 3 and 4 PM on any day requires a scan; with a separate time column this can be accomplished with a seek (much, much faster).
Starting in SQL 2005 I would do only one column.
If you wanted this information to be Sargable I would use computed columns instead. This way you can query on date or time or both and your application code is only responsible for maintaining the one column.
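A sketch of the computed-column approach, with hypothetical names: the application maintains only OrderedAt, and the persisted computed columns can be indexed so date-only or time-only predicates become seeks.

```sql
CREATE TABLE dbo.Orders (
    OrderId   INT IDENTITY PRIMARY KEY,
    OrderedAt DATETIME2(0) NOT NULL,               -- the one column the app writes
    OrderDate AS CAST(OrderedAt AS DATE)    PERSISTED,
    OrderTime AS CAST(OrderedAt AS TIME(0)) PERSISTED
);

-- An index on the time portion makes "between 3 and 4 PM on any day" a seek.
CREATE INDEX IX_Orders_OrderTime ON dbo.Orders (OrderTime);
```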
I know this is old, but another reason you might want to keep them separate is user input (and GenEric said in a comment that this is for time management). If you allow users to enter date/time as separate fields, and you want to be able to save the data with either field empty, it is nice to have two separate nullable fields in your database. Otherwise you have to resort to kludges where certain date values mean "empty", or add extra bit fields as "no time / no date" flags.