We are using Spartacus version 3.0.0 and have set up cloud deployment via SAP CCV2.
We followed the steps to enable SSR described in https://sap.github.io/spartacus-docs/server-side-rendering-in-spartacus/#adding-ssr-support-using-schematics-recommended. We also followed the guide for the workaround needed for the file structure in CCV2: https://sap.github.io/spartacus-docs/ssr-ccv2-issue-spartacus-version-2/#page-title
Everything works locally when starting the server in both dev and production mode, but as soon as we deploy to CCV2, we get no server-side rendering at all.
In the Kibana logs, we occasionally see the error message "SSR Rendering exceeded timeout, fallbacking to CSR", but only for some requests. For most requests there is no SSR, yet also no error logs.
Any idea?
The problem was caused by the IP restriction on the DEV environment of CCV2. This IP restriction is currently also applied to requests coming from the storefront service during an SSR request. Because the IP of the storefront service was not whitelisted, the call always returned a 403, which surfaced as an SSR timeout.
The Spartacus documentation has been updated regarding this problem: https://sap.github.io/spartacus-docs/server-side-rendering-optimization/#troubleshooting-a-storefront-that-is-not-running-in-ssr-mode
We have created an SAP Bug ticket to fix that problem.
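For reference, the timeout that produces the "fallbacking to CSR" message comes from the SSR optimization engine and can be tuned in server.ts when decorating the Universal engine. A minimal sketch, assuming the Spartacus 3.0 setup generated by the schematics (the 3000 ms value is just an example; see the docs linked above for the full option set):

import { ngExpressEngine as engine } from '@nguniversal/express-engine';
import { NgExpressEngineDecorator } from '@spartacus/setup/ssr';
import { AppServerModule } from './src/main.server';

// Allow more time for rendering before falling back to CSR.
const ngExpressEngine = NgExpressEngineDecorator.get(engine, { timeout: 3000 });

// later, inside app():
// server.engine('html', ngExpressEngine({ bootstrap: AppServerModule }));

Note that raising the timeout only masks the symptom in this case; the actual fix was whitelisting the storefront service IP.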
I am here to describe an issue I faced recently. My friends and I have a pet project called Wibrant (earlier named Winbook). It is a social media website, hosted here. It has a Django + React stack (both repos can be found here) and runs on a free-tier EC2 instance on AWS, which is associated with an Elastic IP.
The backend runs in a Docker container on the server itself; the frontend, however, we decided to host on Vercel, where it was initially hosted here.
But I decided to proxy it using nginx. The nginx configs for both React and Django can be found here.
This configuration was working perfectly until one night I suddenly started getting a 502 error on https://winbook.d3m0n1k.engineer/. Upon inspecting the nginx logs, I found an error like:
no live upstreams while connecting to upstream
which I was unable to understand. So I tried to curl the site from both my local machine and the server. I was able to curl it from my local system, but not from the EC2 server. I got the error:
curl: (35) error:0A000126:SSL routines::unexpected eof while reading
Upon researching, I found that this error can occur due to an OpenSSL version mismatch, so I tried to update OpenSSL, but couldn't. I decided to spin up a new EC2 instance, and I was able to curl the site from there. Thinking that fixed the issue, I migrated the whole setup to the new instance and re-associated my Elastic IP with it. When I tested it, though, it had stopped working. Confused, I ran the curl command again, and it was failing too. Using a Python script with the requests module to fetch the site, I got this error from the latest setup:
Caused by SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed
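For what it's worth, the check is trivial to reproduce with any HTTPS client. A minimal Node/TypeScript equivalent of that Python script (the URL is the deployment mentioned above):

import https from 'https';

https.get('https://winbook.d3m0n1k.engineer/', (res) => {
  console.log('status:', res.statusCode); // a 200 here means TLS is fine
  res.resume();
}).on('error', (err) => {
  // On the affected instance, this surfaces as a TLS failure such as
  // "unexpected eof while reading" or "TLS/SSL connection has been closed".
  console.error('request failed:', err.message);
});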
However, the previous setup now started to work perfectly fine.
So I could curl the Vercel deployment when the Elastic IP was not associated with my instance, but not when it was.
I figured it was some issue with the Elastic IP and suspected that Vercel had perhaps blacklisted it. So I reset the whole DNS config of my domain, created and associated a new Elastic IP with the instance, and it worked perfectly.
So, my questions are:
Has anyone faced such an issue before? If yes, what was the fix in your case?
Is it really possible that Vercel has the IP in a blacklist of sorts?
This issue is probably not reproducible, but if someone finds this thread while dealing with the same problem, I hope the post and/or the comments/answers lead you to your solution. Cheers.
I'm experiencing an issue with HTTPS requests sent from a Jenkins job: they are blocked and never reach the destination server.
The scenario is as follows:
I have a server running a backend that exposes APIs over both HTTP and HTTPS (it's a Django + Django REST Framework backend).
I implemented a Rhinoceros plugin in C# which needs to perform some REST API requests to the above-mentioned server.
On another server, I'm running a Jenkins job that is responsible for compiling the plugin, installing it in Rhinoceros, and running some operations in Rhinoceros for test purposes.
All calls over HTTP work like a charm from the plugin running in the Jenkins job, but all HTTPS calls fail.
NOTES:
When the plugin is run manually outside Jenkins, the HTTPS remote requests succeed without any issue.
Moreover, HTTPS requests sent from any other location work.
When the plugin runs from the Jenkins job, no trace of any incoming connection is logged on the backend server, which makes me believe the request never leaves the Jenkins server.
WHAT I TRIED:
I installed the domain certificate locally on the Jenkins server, using keytool, into the JAVA_HOME/lib/security/cacerts keystore.
I tried installing some Jenkins plugins to skip the certificate check or to trust certain domains.
I checked the firewall on the Jenkins machine (adding allow rules for both the Java and Rhinoceros applications).
So far, nothing worked.
Any idea?
Thanks
I want to know if we can change a remote Express Gateway config from some other service (which might or might not be behind the gateway). Is there an API exposed for admins that enables changing the config without having to change the Docker image of EG?
Our use case: we have a tenant-based infrastructure and want to change the config at run time without container restarts or image changes. The documentation says config changes will be hot-reloaded.
If the above is not possible, can you suggest the best alternative for changing files in a remote Docker container from another service?
Thanks in advance.
Yes, the Express Gateway Admin API has endpoints to add, remove, list, or change the following entities:
Policies
Service Endpoints
API Endpoints
Pipelines
I have not used them, but the documentation suggests that they update the gateway.config.yaml configuration file.
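Building on that, updating the config from another service would amount to a plain REST call against the Admin API. A hedged sketch in Node/TypeScript: the default admin port is 9876, but the exact resource path and payload shape here are assumptions to verify against the Admin API reference:

// Upsert an API endpoint on a remote gateway via the Admin API.
// Requires Node 18+ (global fetch); path and payload are illustrative.
const ADMIN_BASE = 'http://gateway-host:9876';

async function upsertApiEndpoint(name: string, host: string, paths: string[]): Promise<void> {
  const res = await fetch(`${ADMIN_BASE}/api-endpoints/${name}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ host, paths }),
  });
  if (!res.ok) {
    throw new Error(`Admin API returned ${res.status}`);
  }
}

// Example: add a per-tenant route at run time, relying on hot reload.
upsertApiEndpoint('tenant-a', '*', ['/tenant-a/*']).catch(console.error);

If the Admin API does rewrite gateway.config.yaml as the docs suggest, the hot-reload mechanism should pick the change up without a container restart.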
I have a multi-master OpenShift Origin setup in AWS, with an ELB in front that uses an SSL certificate configuration.
I'm having difficulty configuring access to the web console, as it seems that the WebSocket connections are being interrupted. I can tell because of the message below and my inability to access the logs or terminal for a pod in the web console:
Server connection interrupted
What is the proper configuration in AWS to allow the web console to function correctly?
I resolved my issue. I figured out the ELB configuration by following the CloudFormation template in the reference architecture here:
https://github.com/openshift/openshift-ansible-contrib/reference-architecture/aws-ansible/playbooks/roles/cloudformation-infra/files/greenfield.json
(For context: a classic ELB forwards WebSocket connections only through TCP/SSL listeners, not HTTP/HTTPS listeners, which is why the listener configuration matters for the console's logs and terminal views.)
I also had an issue with the version of Chrome (50) and had to upgrade to version 55. Basically, I was getting 'ERR_DISALLOWED_URL_SCHEME'. This post pointed me towards upgrading Chrome:
https://productforums.google.com/forum/#!topic/chrome/leVmLPNVISI
I am writing a project with an Angular 2 frontend and a REST Web API backend written in PHP.
I have been running/debugging the frontend using npm's lite-server (aka npm start). Until now, I have been using the in-memory-web-api to serve data, but I am ready to start consuming real data from the backend.
In production, both the frontend and the backend would be served from the same Apache server, but in development I have been using npm to run the Angular 2 app and a separate Apache server to run the API.
My problem is that npm runs on localhost:3000 and Apache on localhost:80. This creates cross-origin security issues, and the only way I can have my Angular 2 app get data is to enable CORS on my REST API. I don't want to enable CORS on the backend if I can avoid it, because I am worried that it may somehow make its way into production.
So far, npm's server has been really nice because it compiles my .ts files automatically and refreshes the browser whenever it detects a change to the files. I would really rather not move my Angular 2 development into Apache unless there is a way to keep these nice features.
Is there any way to keep these two things separate without having to enable CORS?
If not, is there a way I can merge the two while keeping npm's nice features?
Configure your development application server to proxy requests to the development REST server, then make same-origin requests.
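With lite-server this can be done in bs-config.js, since lite-server is BrowserSync under the hood and accepts connect-style middleware. A sketch, assuming the API lives under /api and http-proxy-middleware is installed as a dev dependency (any connect-compatible proxy middleware works the same way):

// bs-config.js -- picked up automatically by lite-server
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = {
  server: {
    baseDir: './',
    middleware: [
      // Forward /api/* to Apache so the browser only ever talks to
      // localhost:3000 -- same origin, no CORS needed.
      createProxyMiddleware('/api', { target: 'http://localhost:80' }),
    ],
  },
};

The Angular app then requests relative URLs such as /api/items, and the dev server forwards them to Apache.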
Alternatively, use .htaccess to turn the CORS headers on, but add it to .gitignore (or your version control system's equivalent) to ensure it stays out of production.
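For example, a minimal development-only .htaccess, assuming mod_headers is enabled and the frontend runs on localhost:3000:

# Development only -- keep this file out of version control.
Header set Access-Control-Allow-Origin "http://localhost:3000"
Header set Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS"
Header set Access-Control-Allow-Headers "Content-Type, Authorization"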
Alternatively, if your REST server has a configuration system, use that to turn on CORS in development (and again, ensure that the config file is kept out of version control).