I am using Microsoft SQL Server 2014 - 12.0.4100.1 (X64).
I have a string column that is populated with a date.
The problem is that some of the dates come in the format Oct 17 2017 9:21AM. I need to enforce this format on the column: 2018-01-15 11:22:11.999.
I did some research and I know I can set constraints with CHECK () at table creation, or I can use ISDATE('2014-05-01'), but I don't have an end-to-end solution.
I can clean the current data, so this constraint is just for future inserts.
I am looking for a simple solution.
Any idea? Any suggestion would be much appreciated.
EDIT:
This is a legacy solution. I know it is a bad approach, but I cannot change the format of the field! Only solutions relevant to the problem, please.
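For what it's worth, the CHECK()/ISDATE route mentioned in the question can be turned into a working constraint on SQL Server 2014 using TRY_CONVERT. A minimal sketch, assuming a table dbo.MyTable with the string column DateText (both names are made up):

```sql
-- Sketch: reject future inserts whose text is not in the
-- yyyy-MM-dd hh:mm:ss.mmm shape or does not parse as a real datetime.
-- WITH NOCHECK leaves the existing (not yet cleaned) rows alone.
ALTER TABLE dbo.MyTable WITH NOCHECK
ADD CONSTRAINT CK_MyTable_DateText CHECK
(
    TRY_CONVERT(DATETIME, DateText, 121) IS NOT NULL
    AND DateText LIKE
        '[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] [0-9][0-9]:[0-9][0-9]:[0-9][0-9].[0-9][0-9][0-9]'
);
```

The LIKE pattern pins the exact shape (so something like 'Oct 17 2017 9:21AM' is rejected on shape alone), while TRY_CONVERT makes sure the digits actually form a valid date.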
If you have to stick with this, you might add a computed column like this:
CREATE TABLE test(SomeSillyDate VARCHAR(100));
INSERT INTO test VALUES ('2018-01-15 11:22:11.99')
,('Oct 17 2017 9:2')
,('invalid date');
GO
ALTER TABLE test ADD CheckedDate AS TRY_CAST(SomeSillyDate AS DATETIME);
GO
SELECT * FROM test;
GO
DROP TABLE test;
But - if there's any chance - you should store this as a DATETIME-typed value.
You might use a VIEW with the table's name on top of a new table. This would not change the approach from the outside...
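A sketch of that view idea, assuming you are free to rename the underlying table (all names are illustrative):

```sql
-- The data lives in a properly typed base table...
CREATE TABLE dbo.test_base (SomeSillyDate DATETIME);
GO
-- ...while existing code keeps selecting the old name and still sees a string.
CREATE VIEW dbo.test
AS
SELECT CONVERT(VARCHAR(100), SomeSillyDate, 121) AS SomeSillyDate
FROM dbo.test_base;
GO
```

Note that writes through the view would need an INSTEAD OF INSERT trigger, since the exposed column is an expression.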
UPDATE: Trigger approach
CREATE TABLE test(ID INT IDENTITY, SomeSillyDate VARCHAR(100));
GO
CREATE TRIGGER TestTrigger ON test
AFTER INSERT,UPDATE
AS
BEGIN
SET NOCOUNT ON;
WITH UpdCTE AS
(
SELECT t.SomeSillyDate AS oldValue
,ISNULL(FORMAT(TRY_CAST(i.SomeSillyDate AS DATETIME),'yyyy-MM-dd HH:mm:ss'),t.SomeSillyDate) AS newValue
FROM test AS t
INNER JOIN inserted AS i ON t.ID=i.ID
)
UPDATE UpdCTE
SET oldValue=newValue;
END
GO
INSERT INTO test VALUES ('2018-01-15 11:22:11.99')
,('Oct 17 2017 9:2')
,('invalid date');
GO
UPDATE test SET SomeSillyDate = GETDATE() WHERE ID=2;
GO
SELECT * FROM test;
GO
DROP TABLE test;
Related
I'm attempting to create a 'history' table that gets updated every time a row on the source table is updated.
Here's the (SQL Server) code I'm using to create the history table:
DROP TABLE eventGroup_History
SELECT
CAST(NULL AS UNIQUEIDENTIFIER) AS NewId,
CAST(NULL AS varchar(255)) AS DoneBy,
CAST(NULL AS varchar(255)) AS Operation,
CAST(NULL AS datetime) AS DoneAt,
*
INTO
eventGroup_History
FROM
eventGroup
WHERE
1 = 0
GO
ALTER TABLE eventGroup_History
ALTER COLUMN NewId UNIQUEIDENTIFIER NOT NULL
go
ALTER TABLE eventGroup_History
ADD PRIMARY KEY (NewId)
GO
ALTER TABLE eventGroup_History
ADD CONSTRAINT DF_eventGroup_History_NewId DEFAULT NewSequentialId() FOR NewId
GO
The trigger is created like this:
drop trigger eventGroup_LogUpdate
go
create trigger eventGroup_LogUpdate
on dbo.eventGroup
for update
as
declare @Now as DateTime = GetDate()
set nocount on
insert into eventGroup_History
select @Now, SUser_SName(), 'update-deleted', *
from deleted
insert into eventGroup_History
select SUser_SName(), 'update-inserted', @Now, *
from inserted
go
exec sp_settriggerorder @triggername = 'eventGroup_LogUpdate', @order = 'last', @stmttype = 'update'
But when I update a row in SQL Server Management Studio, I get a message:
The data in row 2 was not committed.
Error Source: .Net SqlClient Data Provider.
Error Message: Conversion failed when converting from a character string to uniqueidentifier.
I think that the trigger is attempting to insert the SUser_SName() as the first column of the row, but that is the PK NewId:
There are no other uniqueidentifier columns in the table.
If I add a row from the SQL Server Management Studio edit grid, the row gets added without me having to specify the NewId value.
So, why is the SQL Server trigger attempting to populate NewId with first item in the INSERT INTO clause rather than skipping it to let the normal IDENTITY operation provide a value?
(And how do I stop this happening so that the trigger works?)
Because the automatic skipping only applies to IDENTITY columns - a GUID column with a NewSequentialId() default constraint behaves similarly to IDENTITY in many ways, but not this one.
You can achieve what you are looking for by specifying the columns for the INSERT explicitly.
If you're going to use a default value on your NewId column, you need to explicitly list the column names in the INSERT statements. By default, SQL Server will map the columns in the order they're listed in the SELECT, unless you give it enough information to do otherwise. Listing out the columns explicitly is a best practice either way, in order to avoid just this sort of unanticipated result.
So your statements will end up looking like this:
INSERT INTO eventGroup_History
(
DoneBy,
Operation,
DoneAt,
<All the other columns that are masked by the *>
)
SELECT....
I have a big database and I need to normalize it. One of the tables contains a field of type integer, but it holds only 1 and 0 values, so that is a good reason to convert this field to the bit type. But when I try to save the change in SQL Server Management Studio, it tells me that I can't do it. I also have many fields with nvarchar values that should be converted to int, and float fields that should be converted to int too.
Moreover, I need to create migration scripts for all changes so I can update the real database without losing data. Maybe somebody knows a useful utility for this?
EDIT: It tells me that I can't update the table without dropping it. And I want to update the table without losing any data.
SQL version 2014
--Create one temp column
Alter Table [dbo].[Demo2]
Add tempId int
GO
--Copy data into the temp column
Update [dbo].[Demo2] set tempId=Id
--Drop the column which you want to modify
Alter Table [dbo].[Demo2]
Drop Column Id
Go
--Create that column again with the bit type
Alter Table [dbo].[Demo2]
Add Id bit
GO
--Copy the data back
Update [dbo].[Demo2] set Id=tempId
--Drop the temp column
Alter Table [dbo].[Demo2]
Drop Column tempId
Go
Here's how to add a new column to a table, set that column to the old column's value, and then remove the old column:
CREATE TABLE #test
(inttest int)

Insert [#test] ( [inttest] ) Values ( 0 )
Insert [#test] ( [inttest] ) Values ( 1 )

Alter Table [#test] Add bittest bit
Update [#test] Set bittest=inttest
Alter Table [#test] Drop Column [inttest]
SELECT * FROM [#test] [T]
To generate a migration script you don't need a special utility; SSMS does it pretty well.
Right-click the table in the SSMS Object Explorer. Choose the Design item in the context menu. Change the type of the column. In the main menu, under Table Designer, choose the item Generate Change Script. Save the generated script to a file, review it, and make sure you understand each line before you run it on a production system. Adjust the script if needed.
On the other hand, to change the column type from int to bit you can use the ALTER TABLE statement:
ALTER TABLE dbo.TableName
ALTER COLUMN ColumnName bit NOT NULL
Before running this, you should check that the actual int values are indeed only 0 and 1.
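That pre-check might look like this (table and column names are placeholders):

```sql
-- Any rows returned here would make ALTER COLUMN ... bit NOT NULL fail
SELECT ColumnName, COUNT(*) AS occurrences
FROM dbo.TableName
WHERE ColumnName NOT IN (0, 1) OR ColumnName IS NULL
GROUP BY ColumnName;
```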
After reading all of the comments and posts, I found a solution: building a procedure that converts the passed table and column as required. So I wrote this procedure:
IF EXISTS (select * from dbo.sysobjects where id = object_id(N'IntToBit') and OBJECTPROPERTY(id, N'IsProcedure') = 1)
DROP PROCEDURE IntToBit
GO
IF OBJECT_ID('convertion_table', 'U') IS NOT NULL
DROP TABLE dbo.convertion_table;
go
CREATE TABLE dbo.convertion_table
(
bitTypeColumn bit NOT NULL DEFAULT 0,
intTypeColumn integer
)
go
CREATE procedure IntToBit
@table nvarchar(150),
@column nvarchar(150)
AS
begin
DECLARE @sql nvarchar(4000)
SELECT @sql ='
--copy data to the temp table
INSERT INTO convertion_table (bitTypeColumn)
SELECT '+@column +'
FROM ' +@table+'
--Drop the column which you want to modify
Alter Table ' +@table+'
Drop Column '+@column +'
--Create that column again with the bit type
Alter Table ' +@table+'
Add '+@column +' bit NOT NULL DEFAULT(0)
--copy the data back
INSERT INTO '+@table+'('+@column+')
SELECT bitTypeColumn
FROM convertion_table
--clear the temp table
--DELETE bitTypeColumn FROM convertion_table
'
exec sp_executesql @sql
end
GO
and then call it, passing the table and column name:
exec dbo.IntToBit @table = 'tbl_SystemUsers', @column='intUseLogin';
Special thanks to Chris K and Hitesh Thakor
Simply use a T-SQL script to modify the table rather than using the designer:
ALTER TABLE YourTableNameHere
ALTER COLUMN YourColumnNameHere INT
If you are using SQL Server, then you might want to generate a script for the table before altering it, so that you don't lose any data and can simply restore everything using the script.
I am confused about the logic of this question; my prof did not teach us anything regarding this... can someone explain it to me? This is my horrible example of what I did; I definitely need to fix the inserted information and the stored values.
Create A trigger called TR_5 to log changes made to the CostPerHour of the services. When change is made to the CostPerHour add a record to the CostPerHourLog table (shown below). However, DO NOT record a change if the value of the CostPerHour did not change! Show the code to create the trigger and to create the table. All the table attributes are required.
Create Table CostPerHourLog
(
LogID [int] Identity(1,1) NOT NULL,
ChangeDateTime smalldatetime,
ServiceCode varchar(15),
Description varchar(100),
OldCostPerHour smallmoney,
NewCostPerHour smallmoney
)
Drop trigger TR_5
go
Create trigger TR_5
on CostPerHour
for update
as
if @@rowcount<0
begin
if not exists (select * from costperhourlog)
insert into CostperHourLog
(LogID,ChangeDateTime,ServiceCode,Description,OldCostPerHour,NewCostPerHour)
Values
(LogID,ChangeDateTime,ServiceCode,Description,OldCostPerHour,NewCostPerHour)
end
return
ALTER TRIGGER TR_5
ON CostPerHour
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON;
if exists(select * from inserted except select * from deleted)
begin
insert into CostPerHourLog
end
END
GO
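A filled-in version of that skeleton might look like the following, treating ServiceCode as the join key and assuming the source table carries ServiceCode, Description and CostPerHour columns (the assignment doesn't show its definition):

```sql
ALTER TRIGGER TR_5
ON CostPerHour
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Log only the rows whose CostPerHour actually changed
    INSERT INTO CostPerHourLog
        (ChangeDateTime, ServiceCode, Description, OldCostPerHour, NewCostPerHour)
    SELECT GETDATE(), i.ServiceCode, i.Description, d.CostPerHour, i.CostPerHour
    FROM inserted AS i
    INNER JOIN deleted AS d ON d.ServiceCode = i.ServiceCode
    WHERE i.CostPerHour <> d.CostPerHour;
END
```

LogID is an IDENTITY column, so it is omitted from the column list and fills itself in.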
We consume a web service that decided to alter the max length of a field from 255. We have a legacy vendor table on our end that is still capped at 255. We are hoping to use a trigger to address this issue temporarily until we can implement a more business-friendly solution in our next iteration.
Here's what I started with:
CREATE TRIGGER [mySchema].[TruncDescription]
ON [mySchema].[myTable]
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO [mySchema].[myTable]
SELECT SubType, type, substring(description, 1, 255)
FROM inserted
END
However, when I try to insert into myTable, I get the error:
String or binary data would be
truncated. The statement has been
terminated.
I tried experimenting with SET ANSI_WARNINGS OFF, which allowed the query to work, but then it simply didn't insert any data into the description column.
Is there any way to use a trigger to truncate the too-long data, or is there another alternative I can use until a more elegant solution can be designed? We are fairly limited in table modifications (i.e. we can't make any) because it's a vendor table, and we don't control the web service we're consuming, so we can't ask them to fix it either. Any help would be appreciated.
The error cannot be avoided because the error is happening when the inserted table is populated.
From the documentation:
http://msdn.microsoft.com/en-us/library/ms191300.aspx
"The format of the inserted and deleted tables is the same as the format of the table on which the INSTEAD OF trigger is defined. Each column in the inserted and deleted tables maps directly to a column in the base table."
The only really "clever" idea I can think of is to take advantage of schemas and the default schema used by a login. If you can get the login that the web service is using to reference another table, you can increase the column size on that table and use the INSTEAD OF INSERT trigger to perform the INSERT into the vendor table. A variation of this is to create the table in a different database and set the default database for the web service login.
CREATE TRIGGER [mySchema].[TruncDescription]
ON [mySchema].[myTable]
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO [VendorDB].[VendorSchema].[VendorTable]
SELECT SubType, type, substring(description, 1, 255)
FROM inserted
END
With this setup everything works OK for me.
Not to state the obvious, but are you sure there is data in the description field when you are testing? It is possible they changed one of the other fields you are inserting as well, and maybe one of those is throwing the error? I set up a table like this:
CREATE TABLE [dbo].[DataPlay](
[Data] [nvarchar](255) NULL
) ON [PRIMARY]
GO
and a trigger like this
Create TRIGGER updT ON DataPlay
Instead of Insert
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO [tempdb].[dbo].[DataPlay]
([Data])
(Select substring(Data, 1, 255) from inserted)
END
GO
then inserting with
Declare @d as nvarchar(max)
Select @d = REPLICATE('a', 500)
SET ANSI_WARNINGS OFF
INSERT INTO [tempdb].[dbo].[DataPlay]
([Data])
VALUES
(@d)
GO
I am unable to reproduce this issue on SQL 2008 R2 using:
Declare @table table ( fielda varchar(10) )
Insert Into @table ( fielda )
Values ( Substring('12345678901234567890', 1, 10) )
Please make sure that your field is really defined as varchar(255).
I also strongly suggest you use an INSERT statement with an explicit field list. While your INSERT is syntactically correct, you really should be using an explicit field list (like in my sample). The problem is that when you don't specify a field list, you are at the mercy of SQL Server and the table definition for the field order. When you do use a field list, you can change the order of the fields in the table (or add new fields in the middle) and not care about your insert statements.
Will a datetime datatype in a table column (last_modified_timestamp) update to the current time automatically?
I have a table column as shown below; I need to know whether it will insert the current time in the column automatically.
How do I know whether I currently have default settings in my table?
I changed it to insert... not for updating!
No, it will not. How would you expect SQL Server to guess which datetime columns should be automatically updated like this, while others are meant to record, for example, historic dates?
For INSERT purposes, you can have a DEFAULT constraint on the column that inserts the current date (Getdate()/CURRENT_TIMESTAMP).
But for UPDATEs to work, you'd have to implement a trigger.
For INSERT purposes, and using the table designer, you can look at the "Default Value or Binding" property - you'd set this to (CURRENT_TIMESTAMP) or (GetDate()) (they mean the same thing). Or, in the Object Explorer, you can look at the constraints on the table - if there's a default set, it will appear there.
Also, it's worth pointing out that a default is exactly as it sounds - there's nothing to prevent someone providing their own value for this column. If you want to prevent this, then triggers are probably the answer (although a lot of people dislike triggers).
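One sketch of such a trigger-based guard, with made-up table and column names:

```sql
-- Sketch: stamp CreatedAt with the server time no matter what the
-- INSERT supplied. dbo.MyTable(ID, CreatedAt) is illustrative.
CREATE TRIGGER dbo.trg_MyTable_ForceCreatedAt
ON dbo.MyTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET CreatedAt = GETDATE()
    FROM dbo.MyTable AS t
    INNER JOIN inserted AS i ON i.ID = t.ID;
END
```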
No, it won't.
You either need to specify a default like
DECLARE @Table TABLE(
ID INT,
LastDate DATETIME DEFAULT GETDATE()
)

INSERT INTO @Table (ID) SELECT 1

SELECT *
FROM @Table
Or make use of triggers, or update the values manually using GETDATE() in your INSERTs/UPDATEs.
Definitely NOT.
You can set the column's DEFAULT to getdate(), which will add the current datetime for your column.
If default is set on a column, you can see that in your table constraints. Check in the table properties, or type sp_help <tablename>
If you use DEFAULT getdate(), note that the default is applied only when the column is omitted from the INSERT (or when the DEFAULT keyword is used); sending an explicit NULL stores NULL rather than the default, unless the column is declared NOT NULL, in which case the insert fails.
Sounds like you might need a trigger to update the date when an update is made:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: David Forck
-- Create date: 12Nov10
-- Description: Update last modified date
-- =============================================
CREATE TRIGGER dbo.UpdateLastModified
ON dbo.table1
AFTER update
AS
BEGIN
SET NOCOUNT ON;
update dbo.table1
set last_modified_timestamp=getdate()
where ID in (select ID from inserted)
END
GO