I have an API where I have created a token, and it replied with:
expires_in=7776000
created_at=1463347242
expires_in appears to be in seconds (which is also what the spec calls for): 7776000 / 60 / 60 / 24 = 90 days
However, I have no idea how to even begin to decode the created_at response, and the endpoint has no documentation for it.
The created_at value is just the Unix timestamp, in seconds, at which the token was created. In your case it corresponds to Sunday, 15 May 2016 21:20:42 UTC. I couldn't find a specification for this field, but it is common to many implementations. With it you can calculate the absolute timestamp of expiration: expires_at = created_at + expires_in
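If it helps, both fields can be decoded with a few lines of Python (a minimal sketch using the values from the question):

```python
from datetime import datetime, timezone

created_at = 1463347242   # seconds since the Unix epoch
expires_in = 7776000      # token lifetime in seconds (90 days)

# Decode created_at and compute the absolute expiration time.
created = datetime.fromtimestamp(created_at, tz=timezone.utc)
expires = datetime.fromtimestamp(created_at + expires_in, tz=timezone.utc)

print(created.isoformat())  # 2016-05-15T21:20:42+00:00
print(expires.isoformat())  # 2016-08-13T21:20:42+00:00
```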
I am trying to get a handle on the interaction between DateTime.Now and ISO-8601 timezone designation for an API I am working on. An endpoint incorporates a timestamp as criteria for the interrogation of our data historian. The EndPoint returns the nearest previous 1/2 hour average for the datapoint requested and incorporates the following code:
If criteria.EndTime = "" Then
timeStamp = DateTime.Now
Else
timeStamp = CType(criteria.EndTime, DateTime)
End If
The body of the POST accepts the following JSON:
{
"Interval": "30m",
"StartTime": "",
"EndTime": "2022-04-21T00:45:00Z",
"TagList": "PointName",
"Delim":",",
"PointSource": "P",
"ServerName": "PIServer"
}
Where EndTime is the parameter picked up by the endpoint. We are currently in British Summer Time, i.e. DST is UTC+01:00. With the above criteria, the data I get back is timestamped as follows:
[
{
"PointName": "PointName",
"TimeStamp": "21/04/2022 01:30:00",
"Value": "-2.3607"
},
{
"PointName": "PointName",
"TimeStamp": "21/04/2022 01:30:00",
"Value": "-2.6333"
}
]
As you can see, the data returned is for 01:30 rather than the expected 00:30. If I leave the 'Z' designator out of the criteria, I get the correct content returned. Can someone explain what is happening here, please?
Let's assume the client sent a UTC-based time (yours did) and the server responds in UTC (it does not). As described, ...T00:30:00Z (00:30:00 UTC) would be the correct answer to your input of ...T00:45:00Z, based on what you said the endpoint does.
Now let's say the server responds in BST (which it seems to) to the same UTC-based client request. Then ...T01:30:00 (no time zone designation, assumed to be BST) is the correct answer for your input, because 00:30:00 UTC is equivalent to 01:30:00 BST, which is the correct result given the server's rules.
I think you've missed the fact that the server appears to always return BST (perhaps that's local to the server) without a time zone designation (i.e. you could argue it should be returning T01:30:00+01:00).
It appears that if you send UTC (or any other time with time zone info), you get a BST response with no time zone designation; and if you send a time with no time zone info, the server assumes you're sending BST and still returns BST.
So, when you remove the Z from your request, you get what you think is the wrong answer, but it isn't: for T00:45:00 (i.e. 00:45:00, assumed BST) the server will respond with T00:30:00 (i.e. 00:30:00, assumed BST).
I suspect that if you sent T00:45:00+01:00 you'd get back T00:30:00+01:00; if you don't (i.e. the response carries no time zone info), then it might be a server bug. You could test this by sending T00:45:00+02:00 and seeing whether you get back T00:30:00+02:00 or not.
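The shift itself is easy to reproduce. A minimal Python sketch, using the 2022-04-21 values from the question, shows that the server's "01:30" is the same instant as the expected 00:30 UTC:

```python
from datetime import datetime, timedelta, timezone

# The half-hour boundary the endpoint should return for an
# EndTime of 2022-04-21T00:45:00Z, per the question's rules.
expected_utc = datetime(2022, 4, 21, 0, 30, tzinfo=timezone.utc)

# British Summer Time is UTC+01:00.
bst = timezone(timedelta(hours=1))

# The same instant in BST is 01:30 local time, matching the zone-less
# "21/04/2022 01:30:00" the server returned.
print(expected_utc.astimezone(bst).isoformat())  # 2022-04-21T01:30:00+01:00
```

Whether the server should localize at all is a separate question; the point is that 00:30 UTC and 01:30 BST are the same moment.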
I am using spring-session and wondering how to know when a specific session expires.
Querying Redis persistence, I find a couple of the following lines
[ root#redis-master-6fbbc44567-cc28m:/data ]$ redis-cli
127.0.0.1:6379> keys *
1) "spring:session:expirations:1581796140000"
2) "spring:session:sessions:23d6aff1-cb43-44f6-920d-cc3536ab6d46"
127.0.0.1:6379>
Converting the expirations value to a date, it is equivalent to Mon 14 Feb 52095 16:40:00 GMT, which looks weird given the year.
We might extract the expiry time from HttpSession:
HttpSession httpSession = request.getSession();
long currTime = System.currentTimeMillis();
// getMaxInactiveInterval() returns seconds, while currentTimeMillis() is milliseconds
long expiryTime = currTime + httpSession.getMaxInactiveInterval() * 1000L;
But the snippet doesn't show exactly what we'd anticipate.
I reckon we need to retrieve expirations from Redis server instead.
What am I doing wrong here?
Does anyone know how to retrieve sessions and their expirations from Redis?
The expiration timestamp 1581796140000 is a Unix epoch time in milliseconds and translates to Saturday, February 15, 2020 7:49:00 PM GMT.
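A quick sketch of the conversion in Python, using the timestamp from the spring:session:expirations key above:

```python
from datetime import datetime, timezone

key_suffix = 1581796140000  # from spring:session:expirations:1581796140000

# The key stores the expiration as epoch *milliseconds*, so divide by
# 1000 before handing it to a seconds-based converter.
expires = datetime.fromtimestamp(key_suffix / 1000, tz=timezone.utc)
print(expires.isoformat())  # 2020-02-15T19:49:00+00:00

# Treating the raw value as seconds is what yields the implausible year 52095.
```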
When attempting to send an event via POST to your API in version 4, I am sending:
"data"=>
{"id"=>"bfc50100-02eb-11e9-b178-db8890d0b369",
"name"=>"Name of Event",
"type"=>nil,
"description"=>nil,
"start_epoch"=>1343815200,
"end_epoch"=>1343869200,
"archived"=>0,
"deleted"=>0,
"is_public"=>0,
"status"=>"ACTIVE",
"has_time"=>1,
"timezone"=>nil,
"legacy_id"=>nil,
"created_at"=>"2018-12-18T17:38:36.000Z",
"updated_at"=>"2018-12-18T17:38:36.000Z",
"industry"=>nil}}
And receiving success from your API, but when going to the URL for this event, I see the date formatted as 1/18/70, though in Unix time this should show as 8/1/2012.
This occurs with all dates. Am I missing something? Is there another date format you would like? The term "epoch" led me to believe that you wanted a standard Unix timestamp.
You need to send a Unix timestamp in milliseconds, e.g. 1545326867000 rather than 1545326867 (your values are in seconds).
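A quick illustration of the mismatch, using the start_epoch from the payload above (Python for demonstration):

```python
from datetime import datetime, timezone

start_epoch = 1343815200  # seconds: 2012-08-01 10:00:00 UTC

# If the API reads this value as milliseconds, it covers only about
# 15.5 days past the epoch, which is why the event renders in January 1970.
as_ms = datetime.fromtimestamp(start_epoch / 1000, tz=timezone.utc)
print(as_ms.date())  # 1970-01-16

# Multiplying by 1000 produces the millisecond timestamp the API expects.
print(start_epoch * 1000)  # 1343815200000
```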
I am making the following request to the Snapchat API:
GET https://adsapi.snapchat.com/v1/ads/7e4ebe9a-f903-4849-bd46-c590dbb4345e/stats?
granularity=DAY
&fields=android_installs,attachment_avg_view_time_millis,attachment_impressions,attachment_quartile_1,attachment_quartile_2,attachment_quartile_3,attachment_total_view_time_millis,attachment_view_completion,avg_screen_time_millis,avg_view_time_millis,impressions,ios_installs,quartile_1,quartile_2,quartile_3,screen_time_millis,spend,swipe_up_percent,swipes,total_installs,video_views,view_completion,view_time_millis,conversion_purchases,conversion_purchases_value,conversion_save,conversion_start_checkout,conversion_add_cart,conversion_view_content,conversion_add_billing,conversion_searches,conversion_level_completes,conversion_app_opens,conversion_page_views,attachment_frequency,attachment_uniques,frequency,uniques,story_opens,story_completes,conversion_sign_ups,total_installs_swipe_up,android_installs_swipe_up,ios_installs_swipe_up,conversion_purchases_swipe_up,conversion_purchases_value_swipe_up,conversion_save_swipe_up,conversion_start_checkout_swipe_up,conversion_add_cart_swipe_up,conversion_view_content_swipe_up,conversion_add_billing_swipe_up,conversion_sign_ups_swipe_up,conversion_searches_swipe_up,conversion_level_completes_swipe_up,conversion_app_opens_swipe_up,conversion_page_views_swipe_up,total_installs_view,android_installs_view,ios_installs_view,conversion_purchases_view,conversion_purchases_value_view,conversion_save_view,conversion_start_checkout_view,conversion_add_cart_view,conversion_view_content_view,conversion_add_billing_view,conversion_sign_ups_view,conversion_searches_view,conversion_level_completes_view,conversion_app_opens_view,conversion_page_views_view
&swipe_up_attribution_window=28_DAY
&view_attribution_window=1_DAY
&start_time=2018-10-05T00:00:00.000-08:00
&end_time=2018-10-19T00:00:00.000-08:00
I am getting the following error:
{
"request_status": "ERROR",
"request_id": "5bf3f47e00ff060ab0faf7f4330001737e616473617069736300016275696c642d30666635373463642d312d3232302d350001010c",
"debug_message": "The start time should be start of a Local Time Zone day for DAY query.",
"display_message": "We're sorry, but the data provided in the request is incomplete or incorrect",
"error_code": "E1008"
}
Certain date ranges will work and others won't. It also doesn't matter what timezone offset (Europe/London +00:00, Los Angeles -08:00) I use, or how I format the request dates (2018-10-01T00:00:00Z, 2018-10-01T00:00:00.000, 2018-10-01T00:00:00.000-08:00, etc.) for the ad stats request date range; the error comes back the same. The error has a code, but it's not detailed in Snapchat's documentation. All they say is that it's a bad request.
For example, one ad would let me query 29/10/2018 to date or even 29/10/2018 to 30/10/2018 but as soon as I change it to 28/10/2018, it fails with the same error.
There's no apparent start/end times on ads as I thought it might be related to that. It's also not related to the campaign start/end times in this one case we tested.
API DOC: https://developers.snapchat.com/api/docs/?shell#overview
Solved the issue with the above error: I forgot to account for daylight saving time when passing the timezone offset.
For example, you need to check whether daylight saving time is in effect for the start_time or end_time and adjust the offset for that timezone accordingly.
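One way to avoid hand-adjusting the offset is to let a time zone database do it. A minimal Python sketch (the day_start helper and the example dates are hypothetical, not part of the Snapchat API):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# Hypothetical helper: build a local-midnight start_time string whose
# offset reflects whatever DST rule is in force on that date.
def day_start(year: int, month: int, day: int, tz_name: str) -> str:
    return datetime(year, month, day, tzinfo=ZoneInfo(tz_name)).isoformat(
        timespec="milliseconds")

# In October, Los Angeles observes PDT, so the correct offset is -07:00,
# not the -08:00 used in the question's request.
print(day_start(2018, 10, 5, "America/Los_Angeles"))  # 2018-10-05T00:00:00.000-07:00

# After the clocks change, the same helper emits -08:00 automatically.
print(day_start(2018, 12, 5, "America/Los_Angeles"))  # 2018-12-05T00:00:00.000-08:00
```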
When a server gives Cache-Control: max-age=4320000,
Is the freshness considered 4320000 seconds after the time of request, or after the last modified date?
RFC 2616 section 14.9.3:
When the max-age
cache-control directive is present in a cached response, the response
is stale if its current age is greater than the age value given (in
seconds) at the time of a new request for that resource. The max-age
directive on a response implies that the response is cacheable (i.e.,
"public") unless some other, more restrictive cache directive is also
present.
It is always based on the time of request, not the last modified date. You can confirm this behavior by testing on the major browsers.
tl;dr: the age of a cached object is either the time it was stored by any cache or now() - "Date" response header, whichever is bigger.
Full response:
The accepted answer is incorrect. The mentioned RFC 2616 states in section 13.2.4 that:
In order to decide whether a response is fresh or stale, we need to compare its freshness lifetime to its age. The age is calculated as described in section 13.2.3.
And in section 13.2.3 it is stated that:
corrected_received_age = max(now - date_value, age_value)
date_value is the response header Date:
HTTP/1.1 requires origin servers to send a Date header, if possible, with every response, giving the time at which the response was generated [...] We use the term "date_value" to denote the value of the Date header.
age_value is for how long the item is stored on any cache:
In essence, the Age value is the sum of the time that the response has been resident in each of the caches along the path from the origin server, plus the amount of time it has been in transit along network paths.
This is why good cache providers will include a header called Age every time they cache an item, to tell any upstream caches for how long they cached the item. If an upstream cache decides to store that item, its age must start with that value.
A practical example: an item is stored in a cache. It was stored 5 days ago, and when it was fetched, the response headers included:
Date: Sat, 1 Jan 2022 11:05:05 GMT
Cache-Control: max-age={30 days in seconds}
Age: {10 days in seconds}
Assuming now() is Feb 3 2022, the age of the item must be calculated like (rounding up a bit for clarity):
age_value=10 days + 5 days (age when received + age on this cache)
now - date_value = Feb 3 2022 - 1 Jan 2022 = 34 days
The corrected age is the bigger of the two values, that is, 34 days. That means the item is expired and can't be used, since max-age is 30 days.
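The calculation above can be sketched in a few lines of Python (a simplified illustration of the section 13.2.3 formulas, using the day counts from the example):

```python
DAY = 86400  # seconds

def current_age(now: int, date_value: int, age_value: int,
                resident_time: int) -> int:
    """Age of a cached response (simplified from RFC 2616 section 13.2.3):
    the larger of (time since the Date header) and (the Age received
    from upstream caches plus the time resident in this cache)."""
    return max(now - date_value, age_value + resident_time)

# Worked example: Date = day 0, Age = 10 days on receipt,
# resident in this cache for 5 days, now = 34 days after Date.
age = current_age(now=34 * DAY, date_value=0,
                  age_value=10 * DAY, resident_time=5 * DAY)

print(age // DAY)      # 34
print(age > 30 * DAY)  # True: older than max-age, so the item is stale
```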
The RFC presents a tiny additional correction that compensates for the request latency (see section 13.2.3, "corrected_initial_age").
Unfortunately, not all cache servers will include the "Age" response header, so it is very important to make sure all responses that use max-age also include the "Date" header, allowing the age to always be calculated.