I have a Splunk search string. If I add earliest=10/05/2020:23:59:58, the search still works. However, if I change that to earliest=10/05/2020:23:59:58:01, I get an error message saying invalid value "10/05/2020:23:59:58:01" for time term 'earliest'. Does that mean the precision of Splunk's earliest parameter is limited to seconds? I cannot find the answer in their documentation.
Thanks!
Yes, earliest's precision is limited to "standard" Unix epoch time, i.e. the number of whole seconds elapsed since the Unix epoch (00:00:00 UTC on 1 January 1970), because the _time field holds whole-number seconds.
Splunk knows how to parse timestamps with more precision than mere seconds, but that does not mean _time natively stores them.
_time, and therefore anything that references it (like earliest), does not understand subsecond precision. For that, you will need another field in your event that contains it.
For millisecond search time, include timeformat=%m/%d/%Y:%H:%M:%S:%3N together with your earliest=10/05/2020:23:59:58:01.
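For the underlying point about whole seconds, here is a small Java sketch; this is a conceptual illustration of epoch seconds, not Splunk internals:
import java.time.Instant;

public class EpochSecondsDemo {
    public static void main(String[] args) {
        // Two instants that differ only in their sub-second part...
        Instant a = Instant.parse("2020-10-05T23:59:58.000Z");
        Instant b = Instant.parse("2020-10-05T23:59:58.999Z");
        // ...collapse to the same whole-second epoch value.
        System.out.println(a.getEpochSecond()); // 1601942398
        System.out.println(b.getEpochSecond()); // 1601942398
    }
}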
I am using the OptaPlanner VRP example with time windows, and I get feasible solutions whenever I define time windows within a 24-hour range (00:00 to 23:59). But I need to:
Manage long trips, where I know that the travel time from the depot to the first visit, or between visits, will be more than 24 hours. Currently the solver does not give me workable solutions, because the time-window format is a 24-hour format. When the scoring rule "arrivalAfterDueTime" is applied, "arrivalTime" is always greater than "dueTime", because "dueTime" falls in the range (00:00 to 23:59) while "arrivalTime" is on the next day.
I have thought that I should take each TW of each Customer and add more TWs to it, one for each day being planned.
For example, if I am planning a 3-day trip, then each Customer would have 3 time windows. Something like this: if Customer 1 is available from [08:00-10:00], then say it is also available from [32:00-34:00] and [56:00-58:00], which are the equivalents of the same TW on the following days.
Likewise, I handle the times as long values converted to milliseconds.
I don't know if this is the right way; my question is more about ideas for approaching this constraint. Maybe you have faced a similar problem, and any idea would be much appreciated.
Sorry for the wording, I am a Spanish speaker. Thank you.
Without having checked the example, handling multiple days shouldn't be complicated. It all depends on how you model your time variable.
For example, you could:
model the timestamps as a long value denoting seconds since the epoch. This is how most of the examples are modeled, if I remember correctly. Note that this is not very human-readable, but it is the fastest to compute with;
use a time data type, e.g. LocalTime. This is a human-readable time format, but it only works within a 24-hour range and will be slower than a primitive data type;
use a date-time data type, e.g. LocalDateTime. This is also human-readable, works across any time range, and will likewise be slower than a primitive data type.
I would strongly encourage you not to simply map the current day or current hour to a zero value and start counting from there. In your example you denote the times as [32:00-34:00], which makes it appear that you are using the current day's midnight as hour zero and counting upward from there. While you can do this, it will hurt the debuggability and maintainability of your code. That is just my general advice; you don't have to follow it.
What I would advise is to have your own domain model and map it to OptaPlanner models, where you use a long value, denoted as seconds since the epoch, for any timestamp.
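As an illustration, here is a minimal Java sketch of that mapping; TimeWindow is a hypothetical domain class, not one of the example's actual classes:
import java.time.LocalDateTime;
import java.time.ZoneOffset;

// Hypothetical domain class: a human-readable time window that may span days.
class TimeWindow {
    private final LocalDateTime readyTime; // e.g. day 1 at 08:00
    private final LocalDateTime dueTime;   // e.g. day 3 at 10:00, more than 24h later

    TimeWindow(LocalDateTime readyTime, LocalDateTime dueTime) {
        this.readyTime = readyTime;
        this.dueTime = dueTime;
    }

    // Mapping to the planning model: plain seconds since the epoch.
    // Multi-day trips need no special casing, because the long values
    // keep growing monotonically across midnight boundaries.
    long readyTimeSeconds() { return readyTime.toEpochSecond(ZoneOffset.UTC); }
    long dueTimeSeconds()   { return dueTime.toEpochSecond(ZoneOffset.UTC); }
}
A constraint like arrivalAfterDueTime then becomes a simple comparison of two longs (arrivalSeconds > dueTimeSeconds()), regardless of how many days the trip spans.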
I want to be able to convert the current UTC time to a user's timezone, and get the value as a timestamp. Simple.
I seem to be having some difficulty though... this is what I've tried:
Carbon::now()->setTimezone('America/Los_Angeles'); // this works, but returns the time in the format "Y-m-d H:i:s"
So then I tried this:
Carbon::now()->setTimezone('America/Los_Angeles')->timestamp; // for some reason doing this seems to revert the timezone conversion and instead returns the timestamp for the current UTC time? :/
So, I'm using this workaround:
$converted_time = Carbon::now()->setTimezone('America/Los_Angeles');
return Carbon::createFromFormat('Y-m-d H:i:s', $converted_time)->timestamp;
...which gives me what I need, but seems unnecessary. Is there a better way to do this (in one line)?
Thanks.
The Unix timestamp is, by definition, the number of seconds since 1 January 1970, UTC.
If you're trying to calculate the number of seconds since some other starting point (like 1 January 1970 in Los Angeles local time), you're calculating something else that is not a Unix timestamp.
To make sense, it's very important that the timestamp, as a number, be an unambiguous representation of a single second that can be used safely worldwide and still represent the same moment. If you have a number that can mean different moments depending on which timezone you're looking at it from, you're likely not using the right tool, and you would get more safety by storing date-time + timezone.
So the very first question is: why would you need the number of seconds since 1 January 1970 in Los Angeles local time?
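To illustrate why ->timestamp looks like it "reverts" the conversion, here is the same concept sketched in Java (not Carbon itself): the same instant rendered in two zones still has a single epoch-second value.
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class TimestampDemo {
    public static void main(String[] args) {
        Instant now = Instant.now();
        ZonedDateTime utc = now.atZone(ZoneId.of("UTC"));
        ZonedDateTime la  = now.atZone(ZoneId.of("America/Los_Angeles"));
        // The wall-clock renderings differ...
        System.out.println(utc); // e.g. 2020-10-05T23:59:58Z
        System.out.println(la);  // e.g. 2020-10-05T16:59:58-07:00
        // ...but the timestamp is identical: it names one worldwide moment.
        System.out.println(utc.toEpochSecond() == la.toEpochSecond()); // true
    }
}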
You can read more here about how to properly work with UTC and why you should avoid strong dependencies on other timezones in your back-end calculations:
https://kylekatarnls.medium.com/always-use-utc-dates-and-times-8a8200ca3164
I can't seem to find a question/answer that works for what I'm trying to achieve. Currently, this is how my DB outputs a timestamp:
2015-08-18T19:43:04.738-06:00
However, I would like it to appear as such in the column:
2015-08-18T19:43:04.738 America/Denver
Google has recently changed their formatting options, and instead of downloading the output and performing a find/replace, I want output that doesn't require additional work. I looked on SO and have tried using trim and replace, but have had no luck.
Thanks for the help in advance!
For whatever reason, the one we've used since February (third from the bottom) no longer works.
2015-08-18T19:43:04.738-06:00 is not quite the right format. Google does not accept milliseconds (which is annoying, if they don't just ignore them). You need to send 2015-08-18T19:43:04-06:00. They may have become stricter about what they accept.
Try date_trunc('second', yourtime).
It's not possible to accurately translate an offset like -0600 to a time zone like America/Denver. They say two different things.
-0600 says, with absolute certainty, that this time is 6 hours behind UTC. 12:00:00-06:00 and 18:00:00Z (Z represents UTC) are the same time.
America/Denver means to interpret this timestamp under the rules applicable to the city of Denver, Colorado, USA at that time. To figure out what time it is in UTC, you need to look up the offset rules for Denver. The offset will change depending on the time of year, usually because of daylight saving time. Because the rules change, it's important to apply the rules as they were at that time.
For example, 2006-03-15 12:00 America/Denver is -0700. But the next year, 2007-03-15 12:00 America/Denver is -0600. Between 2006 and 2007, the daylight saving time rules in the US changed.
Whereas -06:00 avoids all that and simply says the time is offset from UTC by six hours.
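You can see that rule change directly with java.time, whose zone database carries the historical rules (a quick sketch, unrelated to Google's API):
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class DenverOffsets {
    public static void main(String[] args) {
        ZoneId denver = ZoneId.of("America/Denver");
        // Same wall-clock date and time, one year apart.
        ZonedDateTime in2006 = ZonedDateTime.of(2006, 3, 15, 12, 0, 0, 0, denver);
        ZonedDateTime in2007 = ZonedDateTime.of(2007, 3, 15, 12, 0, 0, 0, denver);
        // The 2007 US rule change moved the start of daylight saving time earlier.
        System.out.println(in2006.getOffset()); // -07:00
        System.out.println(in2007.getOffset()); // -06:00
    }
}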
You could fake it by simply replacing the offset with America/Denver. As long as you're only sending recent times, that should mostly work; you would be off by at most an hour. But don't do that.
Unless Google Ads specifically needs a time zone, there's no point in sending them one. Internally, Postgres stores your times in UTC anyway and translates them to your server's time zone, America/Denver. Send Google UTC. And, as noted above, chop off the milliseconds.
select date_trunc('second', '2015-08-18T19:43:04.738-06:00'::timestamp with time zone at time zone 'UTC') as datetime;
datetime
---------------------
2015-08-19 01:43:04
I have two time values like
Time1=23:59:59:999
Time2=23:59:59:999
When I add up these two times (Time1 + Time2), I want the result to be 47:59:58 rather than 23:59:58.
How can I do this? Please suggest!
Let's look at times and durations:
A datetime is a moment in time, say June 28, 1978 at 15:23.
A time without a date part is a recurring moment in time, e.g. "I get up every day at 7:00". (This is what you are using. But "I get up at 8:00 and go to sleep at 23:00" doesn't make 8 + 23 = 31. It makes no sense to add times.)
Then there is a timespan (e.g. from 2016-01-01 3:00 to 2016-01-02 13:00).
And then there is duration (e.g. six minutes). This is what you want to deal with.
You are storing a duration in a time, which is not really the appropriate data type. As SQL Server does not provide a special data type for durations, you can use a numeric type and store seconds or microseconds or whatever you think appropriate. These you can easily add (provided both values use the same unit, e.g. microseconds).
As to displaying the duration you can write a function providing you with the format you like best for a duration (e.g. '913 days, 9 hours, 5 minutes, and 55.123 seconds').
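As a sketch of the idea in Java (just to demonstrate the duration arithmetic; in SQL Server the storage would be a numeric column):
import java.time.Duration;

public class DurationSum {
    public static void main(String[] args) {
        // Two durations of 23:59:59.999 each, stored as plain milliseconds.
        long time1 = Duration.ofHours(23).plusMinutes(59).plusSeconds(59).plusMillis(999).toMillis();
        long time2 = time1;

        // Adding durations is just numeric addition, with no 24-hour wraparound.
        Duration sum = Duration.ofMillis(time1 + time2);

        // Format the total for display; hours are allowed to exceed 24.
        long hours   = sum.toHours();
        long minutes = sum.toMinutes() % 60;
        long seconds = sum.getSeconds() % 60;
        System.out.printf("%02d:%02d:%02d%n", hours, minutes, seconds); // 47:59:59
    }
}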
Actually, that is really hard in SQL Server. How about '1 23:59:58'? If so:
select cast(time1 as datetime) + cast(time2 as datetime)
If you actually want it in HH:MM:SS format, then you will need to do a lot of string manipulation.
My table has a column 'timestamp', where the timestamps are formatted 2015-06-22 18:59:59
However, using DBVisualizer Free 9.2.8 and Vertica, when I try to pull up rows by timestamp with a
SELECT * FROM table WHERE timestamp = '2015-06-22 18:59:59';
(directly copy-pasting the stamp), nothing comes up. Why is this happening and is there a way around it?
FYI, saying "the timestamps are formatted 2015-06-22 18:59:59" is incorrect if you are indeed using a TIMESTAMP type. Such types have their own internal representation of a date-time value, almost always a count since an epoch; in your case with Vertica, 8 bytes are used for such storage. The formatting of the date-time value happens only when a string representation is generated. Never confuse the string representation with the date-time value; conflating the two may well be related to your problem/confusion.
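A small Java sketch of that distinction (illustrative only, not Vertica's implementation): one date-time value, many string representations.
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class ValueVersusString {
    public static void main(String[] args) {
        // The value: a single point on the timeline, internally a count since an epoch.
        Instant moment = Instant.parse("2015-06-22T18:59:59Z");

        // Strings are generated on demand, in whatever format you ask for.
        DateTimeFormatter f = DateTimeFormatter.ofPattern("MM/dd/uuuu HH:mm:ss").withZone(ZoneOffset.UTC);
        System.out.println(moment.toString()); // 2015-06-22T18:59:59Z
        System.out.println(f.format(moment));  // 06/22/2015 18:59:59
        // Two different strings, one underlying value.
    }
}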
A few different thoughts about possible problems…
String Literals
Are you sure Vertica takes strings as timestamp literals? That format you used is common SQL format. But given that Vertica seems to be a specialized database, I would double-check that.
If strings are not allowed, you may need to call some kind of function to transform the string into a date-time values.
Fractional Second
As the comment by Martin Smith points out, the doc for Timestamp-related data types in Vertica 7.1 says those types can have a fractional second to resolution of microseconds. That means up to 6 decimal places of a fraction.
So if you are searching for "2015-06-22 18:59:59" but the stored value is "2015-06-22 18:59:59.012345", no match on the query.
Half-Open
The fractional seconds issue described above is often the cause of problems people have when handling a span of time. If you naïvely try to pinpoint the ending time, you are likely to have problems. Seeing the "59:59" in your example string makes me think this applies to you.
The better approach to spans of time is "Half-Open" (or Half-Closed, whatever) where the beginning is inclusive while the ending is exclusive. Common notation for this is [). In comparison logic this means: value >= start AND value < stop. Notice the lack of EQUALS SIGN in the stop comparison. In English we would say "look for an hour's worth of invoices starting at 2:00 PM and going up to, but not including, 3:00 PM".
Half-Open for a week means Monday-Monday, for a month the first of one month to the first of the next month, and for a year the January 1 of one year to January 1 of the following year.
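For example, a half-open month test sketched in Java (illustrative only; the SQL version follows below):
import java.time.LocalDate;

public class HalfOpenMonth {
    public static void main(String[] args) {
        LocalDate start = LocalDate.of(2015, 6, 1); // first of the month, inclusive
        LocalDate stop  = start.plusMonths(1);      // first of the next month, exclusive

        LocalDate value = LocalDate.of(2015, 6, 30);
        // value >= start AND value < stop
        boolean inJune = !value.isBefore(start) && value.isBefore(stop);
        System.out.println(inJune); // true
    }
}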
Half-Open means not using BETWEEN in SQL. SQL's BETWEEN has often been criticized. Instead do something like the following to look for an hour's worth of invoices. Notice the Z on the end of the string literal, which means "UTC time zone" ("Z" for "Zulu"). (But verify, as my SQL syntax may need fixing.)
SELECT *
FROM some_table_
WHERE invoice_received_ >= '2015-06-22 18:00:00Z'
AND invoice_received_ < '2015-06-22 19:00:00Z'
;
This query will catch any values such as '2015-06-22 18:59:59.654321', which seem to be eluding you.
Reserved Word
I hope you have not really named your table 'table' and your column 'timestamp'. Such use of keywords and reserved words can cause explicit errors or more subtle weird problems.
Tip: The easy way to avoid any of the over a thousand reserved words in various databases is to append a trailing underscore. The SQL standard explicitly promises to never use a trailing underscore in its reserved words. So use "timestamp_" rather than "timestamp". Another example: "invoice_" table and "received_" column. I recommend doing that as a habit for everything you name in SQL: columns, tables, constraints, indexes, and so on.
Time Zone
You are using TIMESTAMP, which is short for TIMESTAMP WITHOUT TIME ZONE. Or so I presume; the Vertica doc is vague, but that is the common usage as seen in the Postgres doc, and may even be standard SQL.
Anyway, TIMESTAMP WITHOUT TIME ZONE is usually the wrong type for most business purposes. The WITH time zone type is misnamed and often misunderstood as a consequence: it means "with respect for time zone", where data inputs that include an offset or other time zone information are adjusted to UTC during INSERT/UPDATE operations. The WITHOUT type simply ignores any such offset or time zone information.
The WITHOUT type should only be used for the concept of a date-time in general, without being tied to any one locality. For example: "Christmas this year starts at the beginning of December 25, 2015." That means in any time zone rather than a specific one. Obviously Christmas starts earlier in Paris, for example, than in Montréal.
If you are timestamping legal documents such as invoices, booking appointments with people across time zones, or scheduling shipments in various localities, you should be using the WITH time zone type.
So back to your possible problem: test how Vertica, your client app, or your database driver is handling your input string. It may be adjusting time zones as part of parsing the string, using your client machine's current default time zone. When sent to the database, that value will not match the stored value if no adjustment to UTC was made during storage.
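Here is that failure mode sketched in Java (conceptual only; your driver's behavior may differ): the same input string yields different moments depending on whether a zone adjustment happens during parsing.
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class ParsingAdjustment {
    public static void main(String[] args) {
        LocalDateTime wallClock = LocalDateTime.parse("2015-06-22T18:59:59"); // no zone info

        // Interpreted as UTC (no adjustment, like TIMESTAMP WITHOUT TIME ZONE storage):
        Instant asUtc = wallClock.toInstant(ZoneOffset.UTC);
        // Interpreted in a client's default zone (an adjustment sneaks in):
        Instant asDenver = wallClock.atZone(ZoneId.of("America/Denver")).toInstant();

        System.out.println(asUtc);    // 2015-06-22T18:59:59Z
        System.out.println(asDenver); // 2015-06-23T00:59:59Z, six hours later: no match
    }
}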
Tip: Generally, best practice is to do all your storage and business logic in UTC, adjusting to local time zones only where expected by the user.