Set DATEFIRST permanently - SQL

I'd like to permanently change the value of DATEFIRST (which is used by functions like DATEPART).
If I run SET DATEFIRST 1, the value holds for the duration of the session, but it reverts to the default value (7 here) afterwards.
I've had this problem before; I know it's related to the language of the login, but I've forgotten which table and which property I had to change.

I must say this took some research on my part. Take a look at the following query; you will notice the datefirst column. I would imagine there are all kinds of permission implications that go along with changing language settings at this level.
SELECT * FROM sys.syslanguages;
I don't have a server I can test this on at the moment, but I would imagine that through a SET statement you could set the datefirst column to whatever you wanted it to be.
TEST TEST TEST, as this will have huge implications across more than the problem you are trying to solve.

You can alter the default language of a SQL login or Windows-authenticated user on SQL Server using Microsoft SQL Server Management Studio, or by executing the sample T-SQL script below:
USE [master]
GO
ALTER LOGIN [login] WITH DEFAULT_LANGUAGE = [your_language]
GO
The script was taken from this site
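To confirm the change took effect, one option (a sketch, assuming you have permission to view server-level metadata) is to query the sys.server_principals catalog view, which exposes each login's default language:

```sql
-- Check the default language currently assigned to each login.
-- Column names are from the documented sys.server_principals catalog view.
SELECT name, default_language_name
FROM sys.server_principals
WHERE type IN ('S', 'U')  -- SQL logins and Windows logins
ORDER BY name;
```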

Perhaps a bit late to the party, but here's a way to "hack" it, regardless of the DATEFIRST value set at the server, database, login, or session level, and parameterizable (WOW, never used this word before) by the first day you actually want it to be:
declare
    /*
    Let's say for a regular, boring day of Monday,
    regardless of the DATEFIRST value, I want it to return Monday as 1 (first day of week)
    */
    @ForDate datetime = '20170320',
    -- you can play with any value here
    @SetDateFirstValue int = 5
-- this is only for testing
set datefirst @SetDateFirstValue;
select
    datepart(dw, @ForDate) as ActualReturnedDayOfWeek
    , datepart(dw, dateadd(dd, @@DATEFIRST - 1, @ForDate)) as IWantItThisWay
P.S.: Perhaps the only situations where you'll ever need this are in a function or constraint (computed columns, maybe) where SET operations are not allowed. If SET is allowed, SET it as you need it and forget about it.

Related

Select Statement yields different result when executed via SQL Server Agent

I am observing a strange behavior on our SQL Server 2014, Standard Edition (64-bit) which I cannot explain:
A simple select statement behaves differently when executed manually or via an SQL Job:
The SQL statement is as follows:
USE [DB2]
GO
Select * from DB1.dbo.price p
where
p.sec_id = 10 and
p.dt = CONVERT(date,getdate() - (case when datename(dw,getdate()) = 'Monday' then 3 else 1 end))
The statement pulls the price record from table dbo.price for a certain security (sec_id = 10) for the previous business day, which is normally 1 day prior; on Mondays, however, it is 3 days prior, as price records exist only for business days (1 price record per security per business day).
This sql-statement is embedded in a stored procedure which itself is executed via an SQL Server Agent Job.
The strange thing happening is:
If the above sql statement is executed "manually", i.e. via a query editor, it yields the correct result, i.e. one price record is returned when executed Monday to Friday.
The same is true when the above sql statement is executed "manually" via a stored procedure.
However, when the stored procedure containing the above statement is executed via a SQL Server Agent Job, the statement only returns a price record Tuesday to Friday. On Mondays, the statement returns no record (even though the stored procedure and the SQL statement, respectively, return a record when executed manually).
Since the job works Tuesday to Friday, it should not be an issue of privileges etc. And since the statement works when executed manually, there shouldn't be an issue with the statement per se either.
But why would it not work on a Monday when executed via an SQL job?
Would anybody have an idea what the reason could be? I have none unfortunately ...
Thanks a lot for any help.
Cheers
It's due to the default language of the identity that the Agent job runs under.
In your agent job add this to the script :
SET DATEFIRST 7
[or whatever day of week you expect to be deemed first day of week]
(it's connection specific, so won't affect other connections.)
Or you could change the default language of the login used by SQL Agent (or proxy if you are using one):
USE [master]
GO
ALTER LOGIN [LoginName] WITH DEFAULT_LANGUAGE = [SomeLanguage]
GO
Ref: SET DATEFIRST
As Mitch says, it's likely because of different language/date settings used by the Agent job.
My preferred fix, though, is not to fiddle with settings, but instead to pick a "known good" day with the correct property:
datename(dw,getdate()) = datename(dw,'20150720')
It so happens that 20 July 2015 (the selection was entirely arbitrary; I just happen to have a 2015 desk calendar in eyesight) was a Monday, and I'm using an unambiguous date format as my literal. So whatever datename(dw,getdate()) happens to return on Mondays should always be what datename(dw,'20150720') produces.
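As a quick sketch of why this is robust (assuming the Deutsch language pack that ships with SQL Server): DATENAME follows the session language, so both sides of the comparison shift together, and the yyyymmdd literal is interpreted the same way under any language setting:

```sql
-- Under us_english both expressions return 'Monday';
-- under Deutsch both return 'Montag' -- so the equality still holds on Mondays.
SET LANGUAGE us_english;
SELECT DATENAME(dw, '20150720') AS KnownMonday;

SET LANGUAGE Deutsch;
SELECT DATENAME(dw, '20150720') AS KnownMonday;
```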

Query against a view under master database is much slower than query directly under specific database

I am not sure whether there exists a general answer before I give more details.
For example, I have a view named vw_View.
I tried the following two queries to get the result:
Under master database select * From [test].[dbo].[vw_View]
Under test database select * From [dbo].[vw_View]
Could anyone tell me why the same query is much slower when run from the master database than from other databases? I even tried the others with:
Use [db] --any other databases not master database
select * From [test].[dbo].[vw_View]
I have checked the actual execution plan; the join order differs, but why would it change, since I have already specified [test].[dbo].[vw_View] when under master?
Just out of curiosity, thanks in advance.
Note: this might not be the answer, but it was too much text for a comment anyway...
One thing that we hear about a lot is developers complaining about a slow-running procedure which only runs slowly when called from the application but runs fine when executed from SSMS.
More often than not, it is due to different execution settings depending on where the procedure is being called from. To check if there is a difference in those settings, I usually use SQL Profiler.
In your case, you can open two different windows in SSMS, one in the context of the master database and the other in the context of the user database, and run SQL Profiler. The very first event the profiler will capture will be Event Class = Existing Connections and Text Data = -- network protocol: LPC......
This record will show you all the default settings for each session where you are executing the commands. The settings would look something like:
-- network protocol: LPC
set quoted_identifier on
set arithabort off
set numeric_roundabort off
set ansi_warnings on
set ansi_padding on
set ansi_nulls on
set concat_null_yields_null on
set cursor_close_on_commit off
set implicit_transactions off
set language us_english
set dateformat mdy
set datefirst 7
set transaction isolation level read committed
Now compare the settings of both sessions and see what are the differences.
The profiler also has a column SPID which will help you identify which window is which. I am pretty sure the answer is somewhere around there.
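A lighter-weight alternative to Profiler (a sketch, using columns from the documented sys.dm_exec_sessions DMV) is to query each session's settings directly from the two SSMS windows and compare the rows:

```sql
-- Run this from each of the two SSMS windows and compare the output,
-- or remove the WHERE clause and filter on the two SPIDs of interest.
SELECT session_id,
       [language],
       date_format,
       date_first,
       quoted_identifier,
       arithabort,
       ansi_nulls,
       ansi_warnings,
       transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
```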
Have the same issue: executing a view from master runs seemingly forever, but executing the same view under any other user database on the server takes only 8 seconds.
I have an environment where we just migrated to SQL Server 2017, and all other databases have Compatibility Level = 2008 (or 2012).
So I did a few tests:
If I create a new DB with the default Compatibility Level = 2017 and run the query, it runs indefinitely.
If I change the Compatibility Level to 2008 and reconnect: 8 seconds.
If I change the Compatibility Level back to 2017: long run again.
And the final thing we noticed about the query itself: it uses the CHARINDEX function, and if I comment that out, the query executes in the same 8 seconds under both compatibility levels.
So... it looks like we have a mixed issue with CHARINDEX execution on a legacy database under the Compatibility Level = 2017 context.
The solution (if you can call it that) is to execute legacy queries under the same legacy execution context.

Current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction

I know that there are other questions with the exact title as the one I posted but each of them are very specific to the query or procedure they are referencing.
I manage a Blackboard Learn system here for a college and have direct database access. In short, there is a stored procedure that is causing system headaches. Sometimes when changes to the system get committed, errors are thrown into logs on the back end, identifying a stored procedure known as bbgs_cc_setStmtStatus and erroring out with: The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction.
Here is the code for the SP, however, I did not write it, as it is a stock piece of "equipment" installed by Blackboard when it populates and creates the tables for the application.
USE [BBLEARN]
GO
/****** Object: StoredProcedure [dbo].[bbgs_cc_setStmtStatus] Script Date: 09/27/2013 09:19:48 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[bbgs_cc_setStmtStatus](
    @registryKey nvarchar(255),
    @registryVal nvarchar(255),
    @registryDesc varchar(255),
    @overwrite BIT
)
AS
BEGIN
    DECLARE @message varchar(200);
    IF (0 < (SELECT count(*) FROM bbgs_cc_stmt_status WHERE registry_key = @registryKey)) BEGIN
        IF (@overwrite = 1) BEGIN
            UPDATE bbgs_cc_stmt_status SET
                registry_value = @registryVal,
                description = @registryDesc,
                dtmodified = getDate()
            WHERE registry_key = @registryKey;
        END
    END
    ELSE BEGIN
        INSERT INTO bbgs_cc_stmt_status
            (registry_key, registry_value, description) VALUES
            (@registryKey, @registryVal, @registryDesc);
    END
    SET @message = 'bbgs_cc_setStmtStatus: Saved registry key [' + @registryKey + '] as status [' + @registryVal + '].';
    EXEC dbo.bbgs_cc_log @message, 'INFORMATIONAL';
END
I'm not expecting Blackboard specific support, but I want to know if there is anything I can check as far as SQL Server 2008 is concerned to see if there is a system setting causing this. I do have a ticket open with Blackboard but have not heard anything yet.
Here are some things I have checked:
tempdb system database:
I made the templog have an initial size of 100 MB and auto-grow by 100 MB, unrestricted, to see if this was causing the issue. It didn't seem to help. Our actual tempdb starts at 4 GB and auto-grows by a gigabyte each time it needs it. Is it normal for the space available in tempdb to be 95-98% of its actual size? For example, right now tempdb has a size of 12388.00 MB and the space available is 12286.37 MB.
Also, the log file for the main BBLEARN database had stopped growing because it reached its maximum auto-growth. I set its initial size to 3 GB to increase its size.
I see a couple of potential errors that could be preventing the commit but without knowing more about the structure these are just guesses:
The update clause in the nested IF is trying to update a column (or set of columns) that must be unique, because the check only verifies that at least one matching row exists but does not ensure that only one exists:
IF (0 < (SELECT ...) ) BEGIN
vs.
IF (1 = (SELECT ...) ) BEGIN
you could be updating non-unique values into rows that must be unique. Check to make sure there are no constraints on the attributes the update runs on (specifically look for primary key, identity, and unique constraints). Likelihood of this being the issue: low but non-zero.
The application is not passing values for all of the parameters, causing the @message string to null out and thus causing the logging procedure to error as it tries to log a null string. Remember that in SQL, anything + NULL = NULL; so while you're fine inserting and updating NULL values, you can't log NULLs in the manner the code you provided does. Rather, to account for NULLs, you should change the setter for the message variable to the following:
SET @message = 'bbgs_cc_setStmtStatus: Saved registry key [' + COALESCE(@registryKey, '') + '] as status [' + COALESCE(@registryVal, '') + '].';
This is far more likely to be your problem based on the reported error but again, without the app code (which might be preventing null parameters from being passed) there isn't any way to know.
Also, I would note that instead of doing a
IF (0 < (SELECT count(*) ...) ) BEGIN
I would use
IF (EXISTS (SELECT 1 ...) ) BEGIN
because it is more efficient. The execution plan can stop as soon as it finds one matching row, rather than having to count every matching row and then compare that count with 0.
Start with those suggestions and, if you can come back with more information, I can help you troubleshoot more.
Maybe you could use a MERGE statement :
http://msdn.microsoft.com/fr-fr/library/bb510625%28v=sql.100%29.aspx
I think it will be more efficient.
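As a rough sketch of that suggestion (untested, keeping the original procedure's parameter names and assuming registry_key identifies a row), the IF/UPDATE/INSERT logic could be collapsed into a single MERGE:

```sql
-- Hypothetical rewrite of the body of bbgs_cc_setStmtStatus as a MERGE.
MERGE bbgs_cc_stmt_status AS target
USING (SELECT @registryKey  AS registry_key,
              @registryVal  AS registry_value,
              @registryDesc AS description) AS source
ON target.registry_key = source.registry_key
WHEN MATCHED AND @overwrite = 1 THEN
    UPDATE SET registry_value = source.registry_value,
               description    = source.description,
               dtmodified     = getDate()
WHEN NOT MATCHED THEN
    INSERT (registry_key, registry_value, description)
    VALUES (source.registry_key, source.registry_value, source.description);
```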
Launch SSMS
Navigate to the database name
Right click on the database name and choose Properties > Options
Change "Delayed Durability" to Allowed
Click OK

Setting A Date to End the Use of a Section of a SQL Script

I am using SQL 2005. For one of our customers, we run a script every time we set up a new database. The script defines what information remains and what information is deleted from the database; we use a master database to set 'typical' default information. I have been asked to add a delete statement to the script, with a 'test' so that the delete statement quits running automatically after 1 January 2011. We don't want any other information in the script affected; just the one statement. Does anyone know how to structure the syntax for this kind of request?
Thank you.
IF GETDATE() < '20110101'
BEGIN
--Deletez
END
You may need to cast, but I don't think so: CAST('20110101' AS DATETIME)

SQL poor stored procedure execution plan performance - parameter sniffing

I have a stored procedure that accepts a date input that is later set to the current date if no value is passed in:
CREATE PROCEDURE MyProc
    @MyDate DATETIME = NULL
AS
IF @MyDate IS NULL SET @MyDate = CURRENT_TIMESTAMP
-- Do something using @MyDate
I'm having problems whereby if @MyDate is passed in as NULL when the stored procedure is first compiled, performance is always terrible for all input values (NULL or otherwise); whereas if a date / the current date is passed in when the stored procedure is compiled, performance is fine for all input values (NULL or otherwise).
What is also confusing is that the poor execution plan that is generated is terrible even when the value of @MyDate used is actually NULL (and not set to CURRENT_TIMESTAMP by the IF statement).
I've discovered that disabling parameter sniffing (by spoofing the parameter) fixes my issue:
CREATE PROCEDURE MyProc
    @MyDate DATETIME = NULL
AS
DECLARE @MyDate_Copy DATETIME
SET @MyDate_Copy = @MyDate
IF @MyDate_Copy IS NULL SET @MyDate_Copy = CURRENT_TIMESTAMP
-- Do something using @MyDate_Copy
I know this has something to do with parameter sniffing, but all of the examples I've seen of "parameter sniffing gone bad" have involved the stored procedure being compiled with a non-representative parameter passed in. Here, however, the execution plan is terrible for all conceivable values SQL Server might think the parameter could take at the point the statement is executed: NULL, CURRENT_TIMESTAMP, or otherwise.
Has anyone got any insight into why this is happening?
Basically yes: parameter sniffing in (some patch levels of) SQL Server 2005 is badly broken. I have seen plans that effectively never complete (within hours on a small data set) even for small (few thousand rows) sets of data which complete in seconds once the parameters are masked, and this in cases where the parameter had always been the same value.
I would add that at the same time I was dealing with this, I found a lot of problems with LEFT JOIN/NULL queries not completing; I replaced them with NOT IN or NOT EXISTS, and this resolved the plan to something which would complete. Again, a (very poor) execution plan issue.
At the time, the DBAs would not give me SHOWPLAN access, and since I started masking every SP parameter, I've not had any further execution plan issues where I would have to dig into non-completion.
In SQL Server 2008 you can use OPTIMIZE FOR UNKNOWN.
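A minimal sketch of that hint applied to the procedure from the question, using a hypothetical dbo.Orders table and OrderDate column (not from the original post):

```sql
-- OPTIMIZE FOR UNKNOWN asks the optimizer to build the plan from
-- average density statistics instead of sniffing the runtime value.
CREATE PROCEDURE MyProc
    @MyDate DATETIME = NULL
AS
BEGIN
    IF @MyDate IS NULL SET @MyDate = CURRENT_TIMESTAMP;

    SELECT *
    FROM dbo.Orders          -- hypothetical table
    WHERE OrderDate = @MyDate
    OPTION (OPTIMIZE FOR (@MyDate UNKNOWN));
END
```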
One way I was able to get around this problem in (SQL Server 2005) instead of just masking the parameters by redeclaring local parameters was to add query optimizer hints.
Here is a good blog post that talks more about it:
Parameter Sniffing in SqlServer 2005
I used: OPTION (OPTIMIZE FOR (@p = '-1'))
Declare a local variable inside the procedure, assign the external parameter to it, and use the local variable in the query; the statement is then compiled without sniffing the caller's value.