I'm having the following problem:
I need to configure the TIMEZONE of my PostgreSQL installation, because from different terminals I'm obtaining different results when converting timestamps to dates.
I have read that the command to change the time zone is: SET TIMEZONE = 'xxx'.
However, from one terminal I can set the parameter without problems, but on the production server, whenever I set the timezone and then query with SELECT current_setting('TIMEZONE'); I get UTC (which is not the time zone I am setting).
It seems to ignore the command and keep the value that was already configured.
Any reason why such a behaviour could be occurring? Am I operating under some false assumption?
You must be doing something wrong, like querying from different connections. The SET command changes the setting only for the current database session. Perhaps you are using a connection pool; in that case you will have to set the parameter every time you get a connection from the pool.
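To illustrate the scoping (this is only a sketch; mydb and myuser are placeholder names):
-- Session-level: lost when the connection is closed or returned to the pool
SET TIMEZONE = 'Europe/Madrid';
SELECT current_setting('TIMEZONE');   -- 'Europe/Madrid', but only in this session

-- Persistent defaults for every new session
ALTER DATABASE mydb SET timezone = 'Europe/Madrid';
ALTER ROLE myuser SET timezone = 'Europe/Madrid';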
Related
I know there is a system function to get the host name - which returns the Server Name (Application server name in my case).
SELECT HOST_NAME()
Is there a similar function like HOST_TIME() to get the time of the application server?
Or is there any workaround on the database side to get the time of the application server when a procedure is called?
This doesn't sound like a database question to me. You should have the application pass a timestamp along with whatever query it sends to the database server.
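A sketch of that approach (the procedure and table names here are hypothetical, purely for illustration):
-- Hypothetical table holding both clocks
CREATE TABLE dbo.ActionLog
(
    AppServerTime DATETIMEOFFSET NOT NULL,   -- time reported by the application server
    DbServerTime  DATETIMEOFFSET NOT NULL    -- time reported by the database server
);
GO
-- The application reads its own clock and passes it in as a parameter
CREATE PROCEDURE dbo.LogClientAction
    @AppServerTime DATETIMEOFFSET
AS
BEGIN
    INSERT INTO dbo.ActionLog (AppServerTime, DbServerTime)
    VALUES (@AppServerTime, SYSDATETIMEOFFSET());
END;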
If you must handle this at the database level, why don't you simply add the time zone difference? I'm not sure where you're located, though here in the US, if I had that issue I would simply take GETDATE() and then use DATEADD to adjust to the other time zone. The only thing you would need to spend some time on is what to do when the time changes (daylight saving time etc.). In my case, none of our users use our system at that hour, though that may be different in your case.
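A minimal sketch of that idea (the three-hour offset is just an example, and it does not account for daylight saving changes):
-- Example only: assume the application server runs 3 hours ahead of the database server
DECLARE @AppServerOffsetHours INT = 3;
SELECT DATEADD(HOUR, @AppServerOffsetHours, GETDATE()) AS ApproxAppServerTime;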
Good luck, and let us know what happens.
I'm using SQL Server 2016 on Amazon AWS. My emails are being sent with incorrect times when sent from my server at Amazon. When I try to recreate this bug locally, the times are correct. Here is an example of how I use AT TIME ZONE:
GETDATE() AT TIME ZONE 'UTC' AT TIME ZONE u.timezone
where u.timezone is the user's timezone and u refers to an aliased table users.
The times being output are in UTC, so I see 7:36pm instead of 2:36pm (they are formatted with MomentJS).
I don't really know where to start with this one, sorry guys and gals.
UPDATE
My server is sending the correct time (with the correct timezone offset) to the email factory. When the server creates the emails, times are formatted using MomentJS. The barebones moment() function will take a time with a timezone offset (-05:00) and adjust it to the local machine's local time. Local time on my machine is EST, but on the Amazon machine (where the email is being created) it is not. Thus I must use moment.parseZone().
From the MomentJS docs:
If your date format has a fixed timezone offset, use moment.parseZone:
moment.parseZone("2013-01-01T00:00:00-13:00");
This results in a date with a fixed offset:
"2013-01-01T00:00:00-13:00"
Since I can't see this change until it is pushed onto our dev environment, I won't be able to know if this fixed it, but I think this was the problem.
As explained in the update above: the barebones moment() function was adjusting the offset-qualified times to the local time of the Amazon machine where the emails are created, which is not EST like my own. Changing to moment.parseZone() fixed this issue. Problem solved.
I have in a table a nullable timestamp that tracks when the entry got called from a client. Sometimes something goes wrong on the client side and I need to set the timestamp back to NULL. I tried to execute the query directly in SQL Server Management Studio:
USE [MyDB]
GO
UPDATE [dbo].[MyTable]
SET [MyTimestamp]=null
WHERE ID=SomeInt;
I get the message that one row was altered, but when I refresh my SELECT * on the table there is no change to the timestamp.
PS: The whole DB runs on an Azure server, but I also cannot get it to work on my test DB on localhost in SQL Server 2014.
I would be grateful for input.
The answer is that you cannot set a timestamp column to NULL. It is essentially a row version number.
Also:
The timestamp data type is just an incrementing number and does not preserve a date or a time.
There are some workarounds you can use, like the one described in the related thread, but nowadays the timestamp data type is rarely used.
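If what is really needed is a date/time value that can be reset, a common workaround (sketched here with a made-up column name) is a separate nullable datetime2 column:
-- A rowversion/timestamp column cannot be set to NULL, but an ordinary nullable
-- datetime2 column can track when the entry got called and be reset at will.
ALTER TABLE [dbo].[MyTable] ADD [CalledAt] DATETIME2 NULL;

UPDATE [dbo].[MyTable]
SET [CalledAt] = NULL        -- works, unlike SET [MyTimestamp] = NULL
WHERE ID = 42;               -- example key value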
We have an application whose session data is stored in a table; from that table, a SQL job places the data into another table, segregating it more meaningfully.
When we created the job, it passed in the DEV and TEST environments, but when we implemented it in production and stage, the job fails with the error below.
Conversion failed when converting date and/or time from character string
We tried restoring the DB to an instance other than the one where the application DB resides, and there the SQL job completes successfully. The job fails only on the instance where the application DB resides.
Steps we tried:
We compared the SQL configuration of the instances where the job completes successfully to the instance where it is failing: no differences.
We executed the stored proc manually, adding some print statements to see if it really is a code issue; this didn't help us, since the job is not failing for a particular session GUID and the same step passes in the DEV environment.
We are not able to figure out why this is happening only on the instance where the application DB resides.
"Conversion failed when converting date and/or time from character string". This is error is based on data. It has a string which is not in required format to be converted to a data. The issue is not with code, its with data. Add a preprocessing step to convert data to requires format.
Check the default language of the server account the job is being run under - my guess would be something similar to DEV/TEST's account being set to British English while LIVE is set to English.
However, even if that is the case, this still only indicates why the issue appears on LIVE. The underlying thing you should do to correct this is make sure that your job makes no assumptions about date formats, does not do any implicit date conversions, and holds dates in date variables/columns, not character ones.
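A quick way to check that theory, and to see why unambiguous formats are safer (the date literal is just an example; TRY_CONVERT returns NULL instead of failing):
-- Default language of each SQL/Windows login (compare DEV/TEST against LIVE)
SELECT name, default_language_name
FROM sys.server_principals
WHERE type IN ('S', 'U');

-- The same literal parses differently depending on the session language
SET LANGUAGE British;
SELECT TRY_CONVERT(datetime, '25/12/2020');   -- succeeds: read as dd/mm/yyyy
SET LANGUAGE us_english;
SELECT TRY_CONVERT(datetime, '25/12/2020');   -- NULL: read as mm/dd/yyyy

-- Language-independent alternatives
SELECT CONVERT(datetime, '20201225');         -- ISO unseparated format
SELECT CONVERT(datetime, '25/12/2020', 103);  -- explicit style 103 = dd/mm/yyyy
If the languages do differ, either align them or, better, follow the advice above and remove the implicit string-to-date conversions altogether.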
This is more of a question than a problem as our production system is working as intended.
I am relatively new to the SQL environment. I've been poking through various configurations on the server just to get myself familiarized with the system. One thing that I noticed is the mail queues seem to use the UTC time instead of the local time. For example, if I run
exec sysmail_help_queue_sp
The last_empty_rowset_time column shows a time that is exactly 12 hours behind the value getdate() returns (I am in New Zealand) and happens to coincide with the value of getutcdate(). I was more than a little surprised to say the least. The server is configured with the correct time zone (Auckland/Wellington).
I have made sure that the value in last_empty_rowset_time is indeed updated every time I run sp_send_dbmail.
Does anyone know why this is the case? I am just curious to know. I do apologize for my newbiness if this sounds obvious to some of you.
Thanks.
James
This isn't something that is affected by local configuration. Based on documentation here:
http://msdn.microsoft.com/en-us/library/ms187400.aspx
Microsoft explicitly states "military time format and GMT time zone". If you want to see it in your local time zone, you'll have to convert the value in your query accordingly.
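For example, on SQL Server 2016 and later (the New Zealand zone name below is just an example):
DECLARE @utc_value DATETIME = GETUTCDATE();   -- stand-in for last_empty_rowset_time

-- 2016+: interpret the value as UTC, then render it in local time
SELECT @utc_value AT TIME ZONE 'UTC'
       AT TIME ZONE 'New Zealand Standard Time' AS local_time;

-- Older versions: shift by the server's current UTC offset (ignores DST transitions around the value)
SELECT DATEADD(MINUTE, DATEDIFF(MINUTE, GETUTCDATE(), GETDATE()), @utc_value) AS local_time;
The same conversion can be applied to last_empty_rowset_time after capturing the procedure's output, for example with INSERT ... EXEC into a temporary table.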