Issue with multisite SSR transfer state in Spartacus 3.4 - spartacus-storefront

I am using Spartacus 3.4, upgraded from 2.0, with multiple sites (AU and NZ).
The problem: when I access either site first (say AU), it returns the correct data and everything runs fine. But when I then open the NZ site in the same tab or another tab (the sites have different domains but share the same nginx server), it returns the AU data. When I check localStorage and sessionStorage, the site ID and currency show the last website's values (i.e. AU). If I then refresh the page, the correct NZ data is shown.
After debugging, I found there is an issue with the transfer state; the server seems to be returning just the last rendered state.
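One thing worth checking, as a hedged sketch rather than a confirmed fix: if the SSR rendering cache / transferred state is keyed only by the request path, two domains serving the same paths can collide. Spartacus's SSR setup (@spartacus/setup/ssr) exposes a renderKeyResolver option that can include the host; verify the option against your 3.4 setup:

```typescript
// server.ts — a sketch, not a confirmed fix: key SSR renders by host + URL so
// AU and NZ renders are kept separate. renderKeyResolver is part of the
// SsrOptimizationOptions accepted by NgExpressEngineDecorator.get().
import { ngExpressEngine as engine } from '@nguniversal/express-engine';
import { NgExpressEngineDecorator } from '@spartacus/setup/ssr';

export const ngExpressEngine = NgExpressEngineDecorator.get(engine, {
  renderKeyResolver: (req) => `${req.hostname}${req.originalUrl}`,
});
```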

Related

How to force a Nuxt app to re-render its data in production

I'm using Nuxt 2.15 with target: "static", hosted on shared hosting (Hostinger). I fetch the data from an external API using axios and Vuex state management, and here comes the problem: the app doesn't load the new data it gets from the API.
How can I make the app re-render its data and output the newly updated data it gets from fetching the API?
I assume you are using nuxtServerInit or asyncData to get the data from the API. Used with static mode, this means the data is fetched only during generation. That's great for content that isn't updated too often, because the app doesn't have to contact the server every time, so it's faster.
Depending on your needs, you can:
Fetch the data from the API in the mounted() hook. This fetches from the API every time the page is loaded, BUT it also means the data could arrive with some delay and it probably wouldn't be indexed by search engines (see the sketch after this list).
Go with universal mode (https://nuxtjs.org/docs/configuration-glossary/configuration-mode/) and run your site on a Node.js server, which will use up-to-date data from the API each time a user opens your site.
EDIT: as @kissu corrected me in a comment, mode is deprecated; please use target: 'server' instead of target: 'static'. Since 'server' is the default, you can just remove that line (https://nuxtjs.org/docs/configuration-glossary/configuration-target/).
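A minimal sketch of the mounted() approach, assuming a hypothetical Vuex action fetchPosts that wraps the axios call:

```typescript
// pages/posts.vue (<script> block) — sketch of option 1.
// `fetchPosts` is a hypothetical Vuex action that calls the API with axios.
import Vue from 'vue';

export default Vue.extend({
  async mounted() {
    // Runs in the browser on every page load, so the data is always fresh,
    // but it arrives after the initial render and isn't in the generated HTML.
    await this.$store.dispatch('fetchPosts');
  },
  computed: {
    posts(): unknown[] {
      return this.$store.state.posts;
    },
  },
});
```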

Spartacus: Wrong currency code in cart API parameters

We implemented a multisite configuration recently, but have been facing an issue since.
With multisite configured (in both modes - static and dynamic configuration), we get an error from the cart API when refreshing the page. After analyzing it, we found it is caused by a wrong site context parameter (the currency code), and it happens only with a non-SSR build; with SSR it works fine. In a normal build with dynamic site binding, the cart API call uses "USD" as the currency code; with static binding it picks the first currency code in the list, even when we are on the secondary website. The Spartacus framework picks it from the default list of currency codes when hitting the cart API on page refresh.
Spartacus version: 2.0
[Screenshot of the error message not included]
Please share a solution for this.
You can add the currency in the static configuration (e.g. adding currency: ['JPY'] inside context makes the currency JPY).
If your sites use different currencies (e.g. one site uses "USD" and the other uses "JPY"), then you cannot use the static configuration. The dynamic configuration loads the data of all base sites and gets the default currency from each site (the value is the defaultCurrency of the first store in the site). So you need to make sure the site data is set correctly in the backend.
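For the single-currency case, a minimal sketch of the static context configuration in a Spartacus 2.0 app module (the base site ID is a placeholder):

```typescript
// app.module.ts — static site-context configuration (Spartacus 2.x style).
// 'my-site-jp' is a placeholder base site; adjust to your own setup.
import { NgModule } from '@angular/core';
import { B2cStorefrontModule } from '@spartacus/storefront';

@NgModule({
  imports: [
    B2cStorefrontModule.withConfig({
      context: {
        baseSite: ['my-site-jp'],
        currency: ['JPY'], // pins the site currency statically
        language: ['en'],
      },
    }),
  ],
})
export class AppModule {}
```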

Odoo V9 Website Cache Issue

I have certain custom sales-order pages on an Odoo e-commerce website, which are loaded based on the designed templates. A page loads dynamic sales-order data stored in the session, but it either loads old data or doesn't send a request to the server at all and just serves the page from the cache. So I have to manually refresh the page to load the current, correct data. The problem only occurs when I access the website through its assigned domain name; if I access the same website using the IP address, it works correctly. So I am not sure what is going wrong here.

Magento: adding product images using the API fails

I'm using the 'catalogProductAttributeMediaCreate' V2 SOAP service call to create images for products. This worked fine for months but has been failing for the past couple of days.
The API call itself returns a successful action. No exception is thrown by Magento and the call returns an image id in the form of a string like '/f/e/ferrari-f12-berlinetta-wallpapers-pictures-backgrounds.jpg_4.jpg'.
But when I check the Magento admin backend, no images are shown. When I check the table in MySQL, I notice that Magento hasn't created a record for it, even though the API call returns a success.
Adding images manually through the admin backend works fine. I've re-installed the entire Magento site, but the fault persists.
When I use the API to remove the image, it returns an exception telling me that the image '/f/e/ferrari-f12-berlinetta-wallpapers-pictures-backgrounds.jpg_4.jpg' does not exist, which makes sense, as it's not present in the database.
So why does the API return a successful action and an image ID when it obviously fails? And where do I start troubleshooting?
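For reference, a sketch of the call as described, using the node soap client (the WSDL URL, credentials, SKU, and file are placeholders, and the parameter names follow the Magento V2 WSDL as I understand it; verify against your own WSDL). Checking var/log/exception.log and the media-gallery tables after such a "successful" call is one place to start troubleshooting:

```typescript
// upload-image.ts — sketch of the described catalogProductAttributeMediaCreate
// call via the `soap` npm package. All identifiers below are placeholders.
import * as soap from 'soap';
import { readFileSync } from 'fs';

async function uploadImage(): Promise<void> {
  const client = await soap.createClientAsync(
    'https://example-shop.com/api/v2_soap?wsdl' // placeholder store URL
  );
  const [loginResult] = await client.loginAsync({
    username: 'apiuser', // placeholder credentials
    apiKey: 'apikey',
  });
  const sessionId = loginResult.loginReturn;

  const [result] = await client.catalogProductAttributeMediaCreateAsync({
    sessionId,
    product: 'SKU-123', // placeholder SKU
    data: {
      file: {
        content: readFileSync('image.jpg').toString('base64'),
        mime: 'image/jpeg',
      },
      label: 'Example image',
      position: 1,
      types: ['image', 'small_image', 'thumbnail'],
      exclude: 0,
    },
    identifierType: 'sku',
  });

  // Magento answers with a path-like id such as '/e/x/example.jpg' even in the
  // failing case described above, so verify the gallery tables as well.
  console.log('returned image id:', result);
}

uploadImage().catch(console.error);
```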

Script to download Google web history

How does one write a script to download one's Google web history?
I know about
https://www.google.com/history/
https://www.google.com/history/lookup?hl=en&authuser=0&max=1326122791634447
feed:https://www.google.com/history/lookup?month=1&day=9&yr=2011&output=rss
but they fail when called programmatically rather than through a browser.
I wrote up a blog post on how to download your entire Google Web History using a script I put together.
It all works directly within your web browser on the client side (i.e. no data is transmitted to a third party), and you can download it to a CSV file. You can view the source code here:
http://geeklad.com/tools/google-history/google-history.js
My blog post has a bookmarklet you can use to easily launch the script. It works by accessing the same feed, but it iterates through the entire history 1,000 records at a time, converts it into a CSV string, and makes the data downloadable at the touch of a button.
I ran it against my own history, and successfully downloaded over 130K records, which came out to around 30MB when exported to CSV.
EDIT: It seems that a number of folks who have used my script have run into problems, likely due to some oddities in their history data. Unfortunately, since the script does everything within the browser, I cannot debug it when it encounters histories that break it. If you're a JavaScript developer, you use my script, and it appears your history has caused it to break, please feel free to help me fix it and send me any updates to the code.
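A condensed sketch of the approach described above (the feed parameters and XML field names are assumptions for illustration, not taken from the actual script):

```typescript
// Sketch: page through the history RSS feed, build a CSV, and offer it as a
// download. Parameter names (num, start) and item fields are assumptions.
async function downloadHistory(): Promise<void> {
  const rows: string[] = ['date,query'];
  let start = 0;

  for (;;) {
    // Read the history 1000 records at a time, reusing the logged-in session.
    const res = await fetch(
      `https://www.google.com/history/lookup?output=rss&num=1000&start=${start}`,
      { credentials: 'include' }
    );
    const xml = new DOMParser().parseFromString(await res.text(), 'text/xml');
    const items = Array.from(xml.querySelectorAll('item'));
    if (items.length === 0) break;

    for (const item of items) {
      const title = item.querySelector('title')?.textContent ?? '';
      const date = item.querySelector('pubDate')?.textContent ?? '';
      rows.push(`"${date}","${title.replace(/"/g, '""')}"`);
    }
    start += items.length;
  }

  // Make the CSV downloadable at the touch of a button.
  const blob = new Blob([rows.join('\n')], { type: 'text/csv' });
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = 'google-history.csv';
  link.click();
}
```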
I tried GeekLad's system; unfortunately, two breaking changes have occurred: #1, the URL has changed (I modified and hosted my own copy), which led to #2: the type=rss argument no longer works.
I only needed the timestamps... so began the best/worst hack I've written in a while.
Step 1 - https://stackoverflow.com/a/3177718/9908 - Using Chrome, disable ALL security safeguards.
Step 2 - https://gist.github.com/devdave/22b578d562a0dc1a8303
Using contentscript.js and manifest.json, make a Chrome extension, and host ransack.js locally behind whatever service you want (PHP, Ruby, Python, etc.). Go to https://history.google.com/history/ after installing your content-script extension in developer mode (unpacked). It will automatically inject ransack.js + jQuery into the DOM, harvest the data, and then move on to the next "Later" link.
Every 60 seconds or so, Google will randomly force you to log in again, so this is not a start-and-walk-away process, BUT it does work. And if they up the obfuscation ante, you can always resort to chaining Ajax calls and sending the page back to the backend for post-processing. At full tilt, my abomination of a script collected one page of data per second.
On moral grounds, I will not help anyone modify this script to get search terms and results, as this process is not sanctioned by Google (though apparently not blocked), and I recommend it only to individuals sufficiently motivated to make it work for themselves. By my estimate, it took me 3-4 hours to get all 9 years of data (90K records) at 1 page every 900 ms or faster.
While this thing is running, DO NOT browse the rest of the web, because Chrome is running with no safeguards in place; most of them exist for a reason.
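A sketch of the kind of content script described above (the selectors and the local collector endpoint are assumptions, not the actual ransack.js from the gist):

```typescript
// contentscript.ts — sketch of the harvest-and-advance loop.
// '.history-entry' and the localhost endpoint are placeholders.
function harvestPage(): void {
  // Collect the visible history entries on the current page.
  const entries = Array.from(document.querySelectorAll('.history-entry')).map(
    (el) => el.textContent?.trim() ?? ''
  );

  // Ship the batch to a locally hosted collector for post-processing.
  void fetch('http://localhost:8080/collect', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(entries),
  }).then(() => {
    // Advance to the next page via the "Later" link, if present.
    const later = Array.from(document.querySelectorAll('a')).find(
      (a) => a.textContent?.trim() === 'Later'
    );
    later?.click();
  });
}

harvestPage();
```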
You can also download your search logs directly from Google (in case downloading them using a script is not the primary purpose).
Steps:
1) Log in and go to https://history.google.com/history/
2) Just below your profile picture, towards the right side, you will find a settings icon. The second option is called "Download"; click on that.
3) Then click "Create Archive", and Google will mail you the log within minutes.
Maybe, before issuing the request to get the feed, the script should add a User-Agent HTTP header of a well-known browser, so that Google decides the request came from that browser.
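A minimal sketch of that idea from Node.js (whether Google accepts it is untested; the User-Agent string is just an example of a common desktop Chrome value):

```typescript
// fetch-feed.ts — Node.js (18+) sketch: request the history feed while
// presenting a well-known browser User-Agent. Untested against Google.
const FEED_URL =
  'https://www.google.com/history/lookup?month=1&day=9&yr=2011&output=rss';

async function fetchFeed(): Promise<string> {
  const res = await fetch(FEED_URL, {
    headers: {
      'User-Agent':
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 ' +
        '(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
    },
  });
  return res.text();
}

fetchFeed().then((xml) => console.log(xml.slice(0, 500)));
```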