I went to GitHub issues to raise a support ticket, but thought of asking the question here first to avoid noise.
This is what the docs say:
Omit the version completely or use "latest" to load the latest one (not recommended for production usage):
/npm/jquery@latest/dist/jquery.min.js
/npm/jquery/dist/jquery.min.js
According to the docs, we can either use latest or omit the version completely to load the latest release. But I'm seeing a difference:
With latest added (URL 1 - U1)
Example: https://cdn.jsdelivr.net/npm/@letscooee/web-sdk@latest/dist/sdk.min.js
It loads the last released version, which is cached for 24 hours; that means if we release v2 & v3 within 24 hours, the above URL will still serve v1. The browser caching period is 1 week.
Without latest (URL 2 - U2)
Example: https://cdn.jsdelivr.net/npm/@letscooee/web-sdk/dist/sdk.min.js
When we omit latest completely, this loads the latest release immediately (i.e. v3), and the caching period is also 1 week.
I have requested access to the purge API as per their docs, but I believe this behaviour does not align with the docs. I have tried to Google the cause and read their docs three times. Am I missing something?
Edit 1
After reading Martin's answer, I did the following:
| Step Taken | Time | U1 | U2 |
| --- | --- | --- | --- |
| Purge Cache | 12:39:00 UTC | Purged | Purged |
| See Age Header | 12:40 UTC | 0 | 0 |
| See Date Header | 12:40 UTC | Sun, 12 Sep 2021 12:40:25 GMT | Sun, 12 Sep 2021 12:40:31 GMT |
| Headers | 12:41:00 UTC | (screenshot) | (screenshot) |
| Result | 12:41:00 UTC | Points to latest release 0.0.3 | Points to latest release 0.0.3 |
| Publish new NPM release 0.0.4 | 12:48:00 UTC | - | - |
| Refresh both the URLs | 12:49:00 UTC | Shows old release 0.0.3 | Shows latest release 0.0.4 |
The last step shows that I was wrong here. This is working as expected (i.e. U1 showing 0.0.3 only) as per the docs.
The caching time is the same in both cases, 12 hours at the CDN level and 7 days in the browser: `cache-control: public, max-age=604800, s-maxage=43200`
That doesn't necessarily mean both URLs will always return the same content because both the CDN and your browser calculate the expiration for each URL independently, based on when it was first retrieved, so the CDN may serve different versions for up to 12 hours after the release.
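To see this in practice, here is a minimal sketch (my own, not from the docs) that compares the relevant response headers for the two URLs; any HTTP client that exposes headers would do:

```js
// Compare caching headers for U1 (@latest) and U2 (version omitted).
const urls = [
  "https://cdn.jsdelivr.net/npm/@letscooee/web-sdk@latest/dist/sdk.min.js", // U1
  "https://cdn.jsdelivr.net/npm/@letscooee/web-sdk/dist/sdk.min.js",        // U2
];
for (const url of urls) {
  fetch(url).then((res) => {
    // "age" is how many seconds the object has sat in the CDN cache; each
    // URL's expiration clock starts independently when it is first retrieved.
    console.log(url, {
      age: res.headers.get("age"),
      date: res.headers.get("date"),
      "cache-control": res.headers.get("cache-control"),
    });
  });
}
```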
It seems to me that both links point to the same SDK URL.
The usual way to use a CDN is to pin the version of the SDK, for example:
<script src="https://unpkg.com/three@0.126.0/examples/js/loaders/GLTFLoader.js"></script>
or, as below, to use a URL that always points to the latest version of the SDK:
<script src="https://cdn.rawgit.com/mrdoob/three.js/master/examples/js/loaders/GLTFLoader.js"></script>
I am using our enterprise's Splunk forwarder, which seems to be logging events in Splunk like this, making the Splunk logs a bit difficult to read.
{"log":"[https-jsse-nio-8443-exec-5] 19 Jan 2021 15:30:57,237+0000 UTC INFO rdt.damien.services.CaseServiceImpl CaseServiceImpl :: showCase :: Case Created \n","stream":"stdout","time":"2021-01-19T15:30:57.24005568Z"}
However, there are different orgs in our sibling enterprise whose Splunk logs look like the following, which is far more readable. (There is no relation between us and them tech-wise, so we are not able to leverage their tech support to triage this.)
[http-nio-8443-exec-7] 15 Jan 2021 21:08:49,511+0000 INFO DaoOImpl [{applicationSystemCode=dao-app, userId=ANONYMOUS, webAnalyticsCorrelationId=|}]: This is a sample log
Please note the difference in the logs (mine vs. theirs):
{"log":"[https-jsse-nio-8443-exec-5]..
vs
[http-nio-8443-exec-7]...
Our enterprise team is struggling to determine what causes this. I checked my app.log, which looks OK (logged using Log4j) and doesn't have the aforementioned {"log": ...} wrapper.
[https-jsse-nio-8443-exec-5] 19 Jan 2021 15:30:57,237+0000 UTC INFO
rdt.damien.services.CaseServiceImpl CaseServiceImpl:: showCase :: Case
Created
Could someone guide me as to where the problem/configuration could lie that causes the Splunk forwarder to send logs in the {"log": ...} format to Splunk? I thought it had something to do with JSON vs. RAW input types, which I also don't fully understand; if that is the cause, what configs are driving it?
Over the course of the investigation, I found that it is not Splunk doing this but rather the Docker container. Docker defaults to the json-file logging driver, which writes output to the /var/lib/docker/containers folder in files with a *-json.log suffix, containing the logs in the `{"log": <EVENT NAME>}` format.
I need to figure out how to configure the Docker logging driver to write in a non-JSON format.
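One way to do that (a hedged sketch, untested in our environment) is to change the daemon's default logging driver in /etc/docker/daemon.json, for example to journald, which writes plain log lines to the system journal instead of *-json.log files:

```json
{
  "log-driver": "journald"
}
```

The per-container equivalent is `docker run --log-driver=journald ...`; the daemon must be restarted for daemon.json changes to take effect. Docker also ships a native splunk driver that sends events directly to a Splunk HTTP Event Collector rather than writing files at all.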
During a transition, our S3 costs jumped sharply due to ListBucket and HeadObject calls, and we are trying to figure out how to debug the sudden increase. We made some changes that should NOT have affected costs, but the major changes seem to be:
10-20X increase in HeadObject calls
Sudden appearance of ListBucket calls
I have attached a chart showing the jump between April 10, 2018 and April 14, 2018. On the dates in between, we made the following changes:
Changed from (Debian 8) S3FS v1.61 (super old, from 2012, not even on GitHub) to v1.84 (latest)
https://github.com/s3fs-fuse/s3fs-fuse
Moved from N. Virginia to N. California AZ (10% higher cost)
The giant yellow bars show the files being moved using the AWS CLI (April 11 to 13).
In order to try to calm this down, we added the following to the mount command in /etc/fstab:
noatime,stat_cache_expire=3600,enable_noobj_cache
The bars that look uneven starting April 14 are now stable around $25/day.
The options below have been there since the start (no change):
_netdev,allow_other,use_cache=/tmp,umask=0000,use_path_request_style,ensure_diskfree=10240
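Putting the pre-existing and newly added options together, the /etc/fstab entry now looks roughly like this (the bucket name is a placeholder, and the fuse.s3fs fstype assumes a reasonably recent s3fs):

```
# illustrative /etc/fstab entry; "mybucket" is a placeholder
mybucket  /mnt/s3fs  fuse.s3fs  _netdev,allow_other,use_cache=/tmp,umask=0000,use_path_request_style,ensure_diskfree=10240,noatime,stat_cache_expire=3600,enable_noobj_cache  0 0
```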
We have done the following to try to debug this:
Enabled S3 Logging
Dumped the logs into Athena and then CSV export into MySQL
These logs are just one day's worth.
Screenshot "query 1" shows that there is 4.8m hits into a path ... basically, we think it is traversing the entire directory tree (with most like about 100k files) looking for a file if it exists
Screenshot "query 2" shows the same thing (kind of) where it is also doing down a path
We're not really sure what else to do. Our normal bill of about $5/day (including other services) is now about $25/day (a 5x increase). With the /etc/fstab changes it is down to $13/day, but we are still trying to get it back to $5/day, which means getting back to zero ListBucket calls and about 20% of the current HeadObject calls.
Any ideas on what to try are greatly appreciated.
The ListBucket and HeadObject API calls were being made by updatedb (and locate).
Solution: add your mount point (in my case /mnt/s3fs) to PRUNEPATHS in /etc/updatedb.conf so that updatedb does not include it when it scans; see the example below the links.
https://linux.die.net/man/5/updatedb.conf
https://github.com/s3fs-fuse/s3fs-fuse/issues/193#issuecomment-109617253
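For illustration, the relevant part of /etc/updatedb.conf would look like this (the surrounding values are typical distro defaults shown only for context; adding the fuse.s3fs filesystem type to PRUNEFS should achieve the same thing):

```
# /etc/updatedb.conf: make updatedb skip the s3fs mount
PRUNE_BIND_MOUNTS = "yes"
PRUNEFS = "NFS nfs nfs4 fuse.s3fs"
PRUNEPATHS = "/tmp /var/spool /media /mnt/s3fs"
```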
Is it some sort of overflow?
phantomjs> new Date("1400-03-01T00:00:00.000Z")
"1400-03-01T00:00:00.000Z"
phantomjs> new Date("1400-02-28T20:59:59.000Z")
"1400-02-27T20:59:59.000Z"
What you would expect:
>>(new Date("1400-03-01T00:00:00.000Z")).toISOString()
"1400-03-01T00:00:00.000Z"
>>(new Date("1400-02-28T20:59:59.000Z")).toISOString()
"1400-02-28T20:59:59.000Z"
Apparently there is a gap of 24 hours when parsing dates between February 28, 1400 and March 1, 1400.
Any ideas?
PhantomJS is obsolete anyway, but still... our legacy tests are failing when we try to upgrade to headless Chrome...
PhantomJS uses a version of Qt WebKit which is maintained independently of Qt.
The date format you are using is part of the ISO-8601 date and time format. [related]
The version of Qt WebKit that PhantomJS uses has a function that parses dates of the form defined in ECMA-262-5, section 15.9.1.15 (similar to RFC 3339 / ISO 8601: YYYY-MM-DDTHH:mm:ss[.sss]Z).
In the source code, we can see that the function used to parse these types of dates is called:
double parseES5DateFromNullTerminatedCharacters(const char* dateString)
The file that contains this function in the PhantomJS repository has not been updated since July 27, 2014, while the official file was updated as recently as October 13, 2017.
It appears that there is a problem in the logic for handling leap years.
Here is a comparison of DateMath.cpp between the most recent versions from the official qtwebkit repository (left) and the PhantomJS qtwebkit repository (right).
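If upgrading is not an option, a possible workaround (an untested sketch, assuming the leap-year bug lives in the string-parsing path rather than in Date.UTC itself) is to bypass the parser and build the date from its components:

```js
// Untested sketch: parse a strict ISO-8601 UTC string manually and construct
// the date via Date.UTC, avoiding parseES5DateFromNullTerminatedCharacters.
function parseISOUTC(s) {
  var m = /^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})(?:\.(\d{1,3}))?Z$/.exec(s);
  if (!m) throw new Error("Unsupported date string: " + s);
  return new Date(Date.UTC(+m[1], +m[2] - 1, +m[3], +m[4], +m[5], +m[6], +(m[7] || 0)));
}
console.log(parseISOUTC("1400-02-28T20:59:59.000Z").toISOString());
// expected: "1400-02-28T20:59:59.000Z"
```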
After upgrading to TF 1.8, Pretty Tensor stopped working with the following error. It seems _VARSCOPE_KEY was removed from variable_scope.
.../lib/python3.6/site-packages/prettytensor/scopes.py in
var_and_name_scope(names)
53 full_name = var_scope.name
54
---> 55 vs_key = tf.get_collection_ref(variable_scope._VARSCOPE_KEY)
56 try:
57 # TODO(eiderman): Remove this hack or fix the full file.
AttributeError: module 'tensorflow.python.ops.variable_scope' has no attribute '_VARSCOPE_KEY'
On the package's PyPI page, it is mentioned:
Last released: Feb 20, 2017
Similarly, from Github, we see that the last commit was on Feb 1, 2017, regarding "a few more transformations in anticipation of TF1.0".
There is also an open issue on the exact problem you describe.
The last reply to an issue from the package maintainers dates back to March 2017.
All the above are signs of a rather abandoned project, with its present status frozen before the release of TensorFlow 1.0. So, I seriously advise you to move on; if your codebase still has dependencies on this package, you can downgrade to version 1.7 of TensorFlow, which seems to work fine with Pretty Tensor...
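For example, with a pip-based setup the downgrade would be something like (the exact package name may differ if you use the GPU build):

```
pip install "tensorflow==1.7.*"
```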
When a server gives Cache-Control: max-age=4320000, is the freshness period 4320000 seconds after the time of the request, or after the Last-Modified date?
RFC 2616 section 14.9.3:
When the max-age
cache-control directive is present in a cached response, the response
is stale if its current age is greater than the age value given (in
seconds) at the time of a new request for that resource. The max-age
directive on a response implies that the response is cacheable (i.e.,
"public") unless some other, more restrictive cache directive is also
present.
It is always based on the time of request, not the last modified date. You can confirm this behavior by testing on the major browsers.
tl;dr: the age of a cached object is either the time it was stored by any cache or now() - "Date" response header, whichever is bigger.
Full response:
The accepted response is incorrect. The mentioned RFC 2616 states in section 13.2.4 that:
In order to decide whether a response is fresh or stale, we need to compare its freshness lifetime to its age. The age is calculated as described in section 13.2.3.
And in section 13.2.3 it is stated that:
corrected_received_age = max(now - date_value, age_value)
date_value is the response header Date:
HTTP/1.1 requires origin servers to send a Date header, if possible, with every response, giving the time at which the response was generated [...] We use the term "date_value" to denote the value of the Date header.
age_value is how long the item has been stored in any cache:
In essence, the Age value is the sum of the time that the response has been resident in each of the caches along the path from the origin server, plus the amount of time it has been in transit along network paths.
This is why good cache providers will include a header called Age every time they cache an item, to tell downstream caches for how long they have already cached it. If a downstream cache decides to store that item, its age must start from that value.
A practical example: an item is stored in the cache. It was stored 5 days ago, and when this item was fetched, the response headers included:
Date: Sat, 1 Jan 2022 11:05:05 GMT
Cache-Control: max-age={30 days in seconds}
Age: {10 days in seconds}
Assuming now() is Feb 3, 2022, the age of the item must be calculated as follows (rounding a bit for clarity):
age_value=10 days + 5 days (age when received + age on this cache)
now - date_value = Feb 3 2022 - 1 Jan 2022 = 34 days
The corrected age is the bigger value, that is, 34 days. That means the item is expired and can't be used, since max-age is 30 days.
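As a small sketch of that arithmetic (plain numbers in seconds, not a real cache implementation):

```js
// Freshness check for the example above, per RFC 2616 section 13.2.3.
const DAY = 86400;
const ageValue     = 10 * DAY; // Age header when the item was received
const residentTime =  5 * DAY; // time it has sat in this cache since then
const nowMinusDate = 34 * DAY; // now() - Date response header
// The corrected age is whichever is bigger.
const correctedAge = Math.max(nowMinusDate, ageValue + residentTime);
const maxAge = 30 * DAY;
console.log(correctedAge > maxAge); // true -> stale, must not be served
```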
The RFC presents a tiny additional correction that compensates for request latency (see section 13.2.3, "corrected_initial_age").
Unfortunately, not all cache servers include the "Age" response header, so it is very important to make sure all responses that use max-age also include the "Date" header, allowing the age to always be calculated.