We are trying to build custom stock levels based on the Warehouse a user has selected. As long as the user has not selected a Warehouse, the stock level should be 0.
Now we are experiencing weird behaviour where it still shows a stock level of 0, even though the user has selected a store with a valid stock level for that product.
After some research, we found several places in the Java code where calls such as getProduct are cached for 120 seconds or more. I guess this is done for performance reasons?
The cache control and max age for these calls even seem to persist across fresh incognito windows in e.g. Google Chrome. Sometimes the getProduct call is not even executed (it is not listed in the network tab) and only gets executed after several hard refreshes (but it still returns the cached and therefore wrong response).
Sometimes the cache seems to persist even longer than these 120 seconds; we haven't figured out why yet.
This page explains how the caching can be implemented, but it does not say how the server-side caching works: https://help.sap.com/viewer/9d346683b0084da2938be8a285c0c27a/1905/en-US/8b711228866910149500b73575cb386e.html
My questions around it:
How does the server-side caching work, and how can it be invalidated (besides turning off the whole feature)?
How should dynamic product information be handled? Should the whole cache be disabled, or should a custom OCC endpoint be used to get the stock level instead?
In the meantime we have partially figured out how we think it works:
Server-side caching is done via EHCache; the time to live is defined in ehcache.xml.
We needed to enhance the cache key for productCode to also contain the selected store.
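Roughly what we mean by enhancing the key, as a sketch (assuming Spring's caching abstraction sits in front of EHCache; the class and header names here are placeholders, not actual SAP Commerce classes):

    import java.lang.reflect.Method;
    import java.util.Arrays;
    import org.springframework.cache.interceptor.KeyGenerator;
    import org.springframework.web.context.request.RequestContextHolder;
    import org.springframework.web.context.request.ServletRequestAttributes;

    // Sketch only: append the selected store to the cache key, so a getProduct
    // response cached for store A is never served for store B.
    public class StoreAwareKeyGenerator implements KeyGenerator {

        @Override
        public Object generate(final Object target, final Method method, final Object... params) {
            String store = "none";
            final ServletRequestAttributes attributes =
                    (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
            if (attributes != null) {
                final String header = attributes.getRequest().getHeader("X-Selected-Store"); // assumed header name
                if (header != null) {
                    store = header;
                }
            }
            // e.g. "getProduct|[someProductCode]|storeA"
            return method.getName() + "|" + Arrays.toString(params) + "|" + store;
        }
    }

The time to live itself still comes from ehcache.xml; only the key changes.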
Besides "fixing" the server side caching, we still have the problem, the the data is getting cached in the browser (we assume via Etag header, which is based on the response body). As the response body does not know anything about the selected store (sent via http header), it doesn't change, and therefore the data is not requested again after selecting the store..
Any idea on how include the information from the http header into the Etag value?
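One direction we are considering is replacing the body-only ETag with one that also hashes the store header. A minimal sketch of such a filter, assuming Spring's ContentCachingResponseWrapper is available and using "X-Selected-Store" as a placeholder for the actual header name:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import javax.servlet.FilterChain;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.springframework.web.filter.OncePerRequestFilter;
    import org.springframework.web.util.ContentCachingResponseWrapper;

    // Sketch only: derive the ETag from the response body AND the store header,
    // so that selecting a different store produces a different ETag.
    public class StoreAwareEtagFilter extends OncePerRequestFilter {

        @Override
        protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                FilterChain chain) throws ServletException, IOException {
            ContentCachingResponseWrapper wrapper = new ContentCachingResponseWrapper(response);
            chain.doFilter(request, wrapper);

            String store = request.getHeader("X-Selected-Store"); // assumed header name
            String etag = "\"" + md5Hex(wrapper.getContentAsByteArray(), store) + "\"";
            response.setHeader("ETag", etag);

            if (etag.equals(request.getHeader("If-None-Match"))) {
                response.setStatus(HttpServletResponse.SC_NOT_MODIFIED); // 304, no body
            } else {
                wrapper.copyBodyToResponse(); // normal 200 with the current body
            }
        }

        private String md5Hex(byte[] body, String store) {
            try {
                MessageDigest md = MessageDigest.getInstance("MD5");
                md.update(body);
                if (store != null) {
                    md.update(store.getBytes(StandardCharsets.UTF_8));
                }
                StringBuilder hex = new StringBuilder();
                for (byte b : md.digest()) {
                    hex.append(String.format("%02x", b));
                }
                return hex.toString();
            } catch (NoSuchAlgorithmException e) {
                throw new IllegalStateException(e);
            }
        }
    }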
My company uses a systemd service that makes REST API calls to the Podio API through a Python wrapper on a services server. The process takes in Bugsnag bugs, checks whether a Podio bug item with that external ID already exists, then creates a new Podio bug if no associated item exists and updates the existing item if one does. Recently, this check has been behaving unpredictably: it would encounter a request for a given Bugsnag ID and, for no discernible reason, fail to find a Podio bug with a matching external ID on one call, then successfully find one on the next call. The specific REST call is https://developers.podio.com/doc/items/filter-items-4496747, filtering by external ID.
We haven't been able to recreate the issue in local testing, and reverting the code running on our services server to before a major refactor (the issue started around the same time) didn't stop it from happening. Was there a recent change in how the filter request works? Even so, that wouldn't explain why we are getting different responses for the same call.
I don't know if I'm handing you a red herring, but I had to rewrite my Podio.Net code because all the references to integer Podio ItemIds were acting erratically, like you described. The issue was that Podio's ID numbering had exceeded the size of an Integer, and I had to switch all the calls to Long ItemId values.
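To illustrate the effect in plain Java (rather than .NET, and with a made-up id, but the arithmetic is the same):

    // Podio item ids can exceed Integer.MAX_VALUE (2,147,483,647).
    long itemId = 3_000_000_000L;   // fits comfortably in a 64-bit long
    int truncated = (int) itemId;   // silently wraps around to -1294967296
    System.out.println(truncated);  // prints a negative, seemingly random id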
In Shopify:
When my for loop code runs, this issue occurs:
"There was a problem loading this website
Try refreshing the page.
If the site still doesn't load, please try again in a few minutes."
The console error is:
"Failed to load resource: the server responded with a status of 502 ()"
How can I fix this?
It seems like your Shopify site has too many collections and products within them, so it simply fails to load all of them because it exceeds memory limits.
I'm assuming that you're trying to replicate the page from the reference URL you provided in your comment. Consider one of the options below to implement the required functionality:
Create different automated collections for each price range using the "Product price is less than" condition. This approach is good because it uses Shopify's engine to generate the collections, but it might still be quite tricky to implement grouping like on the reference site you provided in the comments.
Load collections and their products using AJAX requests, i.e. request the data only when a customer scrolls down the page. This will improve page load speed and slightly reduce the load on the Shopify site, but it is still not ideal, since the data will be requested on every page load and scroll event. You can improve the situation somewhat by caching results on the client side, but again, it is still not ideal.
Create a custom Shopify application that syncs products with your own database. Then you can create a URL on your server that serves as a data provider for your page. It can be requested via AJAX and return JSON with all the products, grouped by collection and matching the request parameters, e.g. price less than X (see the sketch after this list).
You can go further and add a proxy extension to your app. A Shopify proxy would allow you to load a custom page directly from your server, with the data from your database, and render it within the Shopify site as if it were part of it.
In general, this approach gives you more flexibility over the data you output, which can also be cached on your side to drastically improve page load speed.
Personally, I would prefer the last option.
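A rough sketch of what such a data-provider endpoint could look like (a plain Java servlet here purely for illustration; the URL, field names and in-memory data are made up and would normally come from the database your app syncs into):

    import java.io.IOException;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Sketch: one URL on your own server that returns products grouped by
    // collection, filtered by a "price_lt" request parameter.
    @WebServlet("/collections.json")
    public class CollectionsDataProvider extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String param = req.getParameter("price_lt");
            double maxPrice = param != null ? Double.parseDouble(param) : Double.MAX_VALUE;

            // collection handle -> (product title -> price); stands in for a real DB query
            Map<String, Map<String, Double>> synced = new LinkedHashMap<>();
            synced.put("shirts", Map.of("Basic Tee", 19.99, "Premium Tee", 59.00));
            synced.put("shoes", Map.of("Runner", 89.00, "Slip-on", 39.50));

            // Build the JSON by hand to keep the sketch dependency-free;
            // in a real app you would use a JSON library.
            StringBuilder json = new StringBuilder("{");
            boolean firstCollection = true;
            for (Map.Entry<String, Map<String, Double>> collection : synced.entrySet()) {
                if (!firstCollection) json.append(',');
                firstCollection = false;
                json.append('"').append(collection.getKey()).append("\":[");
                boolean firstProduct = true;
                for (Map.Entry<String, Double> product : collection.getValue().entrySet()) {
                    if (product.getValue() >= maxPrice) continue; // apply the price filter
                    if (!firstProduct) json.append(',');
                    firstProduct = false;
                    json.append("{\"title\":\"").append(product.getKey())
                        .append("\",\"price\":").append(product.getValue()).append('}');
                }
                json.append(']');
            }
            json.append('}');

            resp.setContentType("application/json");
            resp.setHeader("Cache-Control", "public, max-age=300"); // cacheable on your side
            resp.getWriter().write(json.toString());
        }
    }

The page then requests /collections.json?price_lt=50 via AJAX and renders the groups client-side.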
For a platform using a mostly-RESTful HTTP API to moderate many types of content, I am wondering if having clients call DELETE on the same endpoint they used to create the content makes sense.
The API would identify the client as either the content's creator, a platform moderator, or a regular user.
In the case of the first two, the content would be immediately deleted, but in the case of the regular user, the content would be flagged for review and essentially be deleted only for that user.
This is as opposed to POSTing to /flag and /remove endpoints for each type of content, which would require additional routes and other overhead.
Update: The real question here is:
Does it make sense to use HTTP DELETE to moderate content in the way described? Will that lead to future complications?
I'm assuming clients created the content by a PUT request to an endpoint of their choice.
From the client viewpoint, I don't see any obvious problems with the approach. In fact, this is exactly how DELETE is intended to be used in remote authoring applications, but there are some minor issues that depend on how much information you want the clients to have.
Do you want the regular user to know his resource is flagged for deletion, or do you want that to be completely transparent? If the former, the DELETE request should return 202 Accepted and some description of the status, and a further GET request might inform the client of the pending deletion in some way. If you don't care about that, you can simply return 404 Not Found or 410 Gone, but then you might have to deal with the possibility of the client creating new content for the same endpoint while the deletion is still pending. That might be a problem or not, depending on your implementation of the PUT semantics.
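A sketch of what that could look like (Spring-style annotations purely for illustration; the path, role resolution and persistence calls are placeholders):

    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.DeleteMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RequestHeader;
    import org.springframework.web.bind.annotation.RestController;

    // Sketch: one DELETE endpoint, different behaviour per caller role.
    // Creator/moderator -> hard delete (204). Regular user -> flag for review (202).
    @RestController
    public class ContentModerationController {

        private enum Role { CREATOR, MODERATOR, REGULAR }

        @DeleteMapping("/posts/{id}")
        public ResponseEntity<?> delete(@PathVariable String id,
                                        @RequestHeader("Authorization") String auth) {
            Role role = resolveRole(auth, id); // placeholder: creator, moderator or regular user

            if (role == Role.CREATOR || role == Role.MODERATOR) {
                hardDelete(id);                             // placeholder persistence call
                return ResponseEntity.noContent().build();  // 204 No Content
            }

            flagForReview(id, auth);                        // placeholder persistence call
            // 202 Accepted: the deletion is pending moderation, not yet performed
            return ResponseEntity.accepted().body("{\"status\":\"pending review\"}");
        }

        private Role resolveRole(String auth, String id) { return Role.REGULAR; } // stub
        private void hardDelete(String id) { }                                    // stub
        private void flagForReview(String id, String auth) { }                    // stub
    }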
Background
In our Apache configuration we use mod-auth-external (previously on Google Code) to invoke PAM authentication.
Now there is a request for proper handling of shadow-based password expiration:
If the password is before the warning period, Apache should respond with HTTP status code 200. Nothing new here.
If the password is in the warning period (the end of its validity is near), Apache should respond with HTTP status code 200 but somehow include information about the warning period.
If the password is in the expiration period (it is no longer valid, but the user can still change it on their own), Apache should respond with HTTP status code 401 and somehow include information about the expiration period.
If the password is beyond the expiration period (it is no longer valid and the account was locked; an administrator must unlock it), Apache should respond with HTTP status code 401 and somehow include information about the locked state.
(There are also corner cases such as a missing page or other errors. It is not clear what to do then, but it seems that solving the above points would solve those corner cases as well.)
Our PAM authenticator (used through mod-auth-external) is able to differentiate these cases by adjusting its return values. That part we already have.
The problem, however, is how to get the information from the authenticator to the associated action serving the page (either the actual page with a 200 status code or the 401 error document).
Current investigations
It should be noted that there is a significant difference between requirement 2 and requirements 3 and 4.
Requirements 3 and 4 alone are somewhat easier, because they both involve our mod-auth-external authenticator returning an error (access denied). So we only need to know how to get that error code into the 401 error page. I even raised an issue about that on the mod-auth-external page.
Requirement 2 is much more difficult. In that case our authenticator must return 0 (access granted) and still somehow propagate information about the warning to whatever gets served in the end.
Log parsing
An obvious (and ugly) idea is to parse the logs. The mod-auth-external description on the Google Code wiki mentions that the authenticator's return value gets written to the Apache syslog. Whatever the authenticator prints to its standard error stream gets logged as well.
This could be used to pass information from the authenticator to some other entity.
The difficulty is that it is not clear how to do this safely: what to print so that "the other entity" can reliably match the current request with the log entry. The URL alone doesn't seem to be enough, since there can be multiple requests for the same URL at the same time, and I don't see anything more useful in what the authenticator receives.
Another issue is that parsing the logs seems to require some non-trivial code running as "the other entity", which complicates things further: how should we implement that?
Another idea
If we could make the authenticator somehow modify the "request session" (or whatever, maybe just the environment? I don't know, I'm new to Apache) to add arbitrary data to it, we would be (almost) home.
Our authenticator would somehow store the "password status" and possibly also the days remaining until the end of the warning/expiration period (if applicable). Then, upon serving the 401 error page, we would retrieve that data and use it to dynamically generate the content of the page.
Or, even better, we would have it stored in the session so that the other end could read the data directly (for cases where it is not simply a browser showing the page).
But so far I fail to see how to do that.
Do you have any idea how to meet those requirements?
For over a month I got no answer here, nor on the GitHub issue that I opened for mod-auth-external.
So I ended up making a custom modification to our mod-auth-external. I don't like modifying third-party software, but this one seems dead anyway. It also turned out we were using a pretty old version (2.2.9, which I upgraded to 2.2.11, the last in the 2.2.x line) that already had some customizations anyway.
I explained the details of the solution in a comment on my GitHub issue, so I will not repeat them here.
I will, however, comment on the shadow details, as they were not mentioned there.
I had two choices: either use the getspnam function to retrieve the shadow data, or parse the messages generated by PAM. My first attempts were based on the getspnam function, but in the end I used the PAM messages. I didn't have strong reasons for either choice; however, I decided to propagate in the HTTP response not only the shadow status but any PAM message that was generated, and it seemed easier to follow that way.
I was poking around a random site on the internet and noticed that the images have numbers appended to them: icons-16.png?1292032550
I've heard of people optimising websites with far-future Expires headers. If content that rarely changes does change, the cache won't get refreshed, so the new image won't get re-downloaded into someone's cache unless the filename changes. Is that what this number is for?
Yes, the intent is probably to force a refresh of the browser cache. However, I do not recommend this approach:
Many proxies (and possibly some browsers) simply will not cache anything with a query string, regardless of Cache-Control headers. You're shooting yourself in the foot if you include a superfluous query string – you'll needlessly consume your own bandwidth sending images that should be cached, but aren't.
Depending on how you configure your server, user agents will periodically make a request for cached resources, with a If-Modified-Since and/or If-None-Match header. If the client's cache is up to date, the server responds with 304 Not Modified and stops; otherwise it responds with a normal 200 OK and sends the new content. You do not have to change a resource's file name in order for client caches to be updated when the resource changes. Trying to get clever with a query string only serves to defeat caching mechanisms.
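For illustration, here is roughly what that conditional-request handling looks like on the server side (a simplified Java servlet sketch; real servers such as Apache already do this for static files, and the file path is made up):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Sketch: serve a single image with Last-Modified / If-Modified-Since handling,
    // showing why the file name does not need to change for caches to update.
    @WebServlet("/img/image.png")
    public class ConditionalImageServlet extends HttpServlet {

        private static final Path IMAGE = Paths.get("/var/www/img/image.png"); // assumed location

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            long lastModified = Files.getLastModifiedTime(IMAGE).toMillis();
            long ifModifiedSince = req.getDateHeader("If-Modified-Since"); // -1 if absent

            // HTTP date headers have one-second resolution, so compare whole seconds.
            if (ifModifiedSince != -1 && lastModified / 1000 <= ifModifiedSince / 1000) {
                resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED); // 304, no body sent
                return;
            }

            resp.setDateHeader("Last-Modified", lastModified);
            resp.setContentType("image/png");
            Files.copy(IMAGE, resp.getOutputStream()); // 200 OK with the current content
        }
    }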
That said, if you do optimize caching by setting an Expires date a year out (and if the Last Modified date of the resource is long ago), user agents may check for updates infrequently. If this is unacceptable to you, you have two options: either reduce the amount of time before the resource expires (so that the browser will issue a GET request and you can respond with 304 or 200 as appropriate), or use "URL fingerprinting," where a random token is included in the path, instead of in the query string. For example:
/img/a03f/image.png
instead of
/img/image.png?a03f
This way, your resources are still cached by proxies. You'll probably want to look into using mod_rewrite to allow you to include a token in the path. Of course, you need to be able to change all references to this URL whenever you change the resource.
For further reading, I highly recommend Google's page speed best practices, specifically the section on optimizing caching.
Yes. One way to get around caches is to append an otherwise inconsequential query parameter, such as a time stamp, to a URL.
This is a way to compensate for your web server not generating correct ETag headers when your content changes, for example. Using an ETag whose value is something like the file's hash is a better approach for telling the browser that content has changed and must be reloaded.