Magento 2 504 Gateway Timeout - e-commerce

I'm developing my e-commerce site with Magento 2, hosted on a SiteGround shared server.
Since my DB is getting bigger, I suppose the server takes too much time to respond, so I get a 504 Gateway Timeout (if I try to add/edit products). I read that I should edit some PHP timeout params, but I can't on a shared service. So now, is there any solution?
SOLUTION
I just had to migrate to a dedicated cloud server; Magento is a really huge monster!

You should make sure you have updated the PHP version on your SiteGround hosting.
By default it will be 5.6, so try using 7.0, then refresh your browser, clear the cache, etc., and it should work fine.
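If you have SSH access, a quick way to confirm which PHP version the command line is using (note that the web server's PHP version may be configured separately in the hosting panel) is:

php -v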

Related

Is there any way to increase the Cloudflare proxy request timeout limit (524)? [duplicate]

Is it possible to increase CloudFlare's time-out? If yes, how?
My code takes a while to execute and I wasn't planning on Ajaxifying it in the coming days.
No, CloudFlare only offers that kind of customisation on Enterprise plans.
CloudFlare will time out if it fails to establish an HTTP handshake after 15 seconds.
CloudFlare will also wait 100 seconds for an HTTP response from your server before you see a 524 timeout error.
Other than this, there can be timeouts on your origin web server.
It sounds like you need inter-process communication. HTTP should not be used as a mechanism for performing blocking tasks without sending responses; these kinds of activities should instead be abstracted away to a non-HTTP service on the server. By using RabbitMQ (or any other MQ) you can then pass messages from the HTTP element of your server over to the processing service on your webserver.
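A minimal sketch of that pattern in Python, assuming the pika client and a local RabbitMQ broker (the queue name and payload are made up for illustration):

# The HTTP handler only enqueues the job and returns immediately;
# a separate worker consumes "report_jobs" and does the slow work,
# so no HTTP connection is held open long enough to hit the timeout.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="report_jobs", durable=True)

channel.basic_publish(
    exchange="",
    routing_key="report_jobs",
    body='{"report_id": 42}',  # hypothetical job payload
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()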
I was in communication with Cloudflare about the same issue, and also with RabbitMQ's technical support.
RabbitMQ suggested using Web STOMP, which relies on WebSockets. However, Cloudflare suggested:
WebSockets would create a persistent connection through Cloudflare and there's no timeout as such, but the best way of resolving this would be just to process the request in the background and respond asynchronously, serving a 'Loading...' page or similar rather than having the user wait for 100 seconds. That would also give a better user experience.
UPDATE:
For completeness, I will also record here that I asked CloudFlare about running the report via a subdomain and "grey-clouding" it, and they replied as follows:
I would suggest verifying why it takes more than 100 seconds to generate the reports. Disabling Cloudflare on the sub-domain allows attackers to learn your origin IP, and they can then attack it directly, bypassing Cloudflare.
FURTHER UPDATE
I finally solved this problem by running the report in a thread and using AJAX to "poll" whether the report had been created. See Bypassing CloudFlare's time-out of 100 seconds
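A rough sketch of that thread-plus-polling approach in Python/Flask (the framework, route names, and in-memory store are all assumptions for illustration, not the original code):

# Kick off the report in a background thread, return immediately,
# and let the browser poll /status until the result is ready.
import threading
from flask import Flask, jsonify

app = Flask(__name__)
results = {}  # report_id -> finished report (in-memory, demo only)

def build_report(report_id):
    # ... long-running report generation goes here ...
    results[report_id] = "report data"

@app.route("/start/<report_id>")
def start(report_id):
    threading.Thread(target=build_report, args=(report_id,), daemon=True).start()
    return jsonify(status="started")  # responds well within any timeout

@app.route("/status/<report_id>")
def status(report_id):
    if report_id in results:
        return jsonify(status="done", report=results[report_id])
    return jsonify(status="pending")  # the AJAX poller retries until "done"

if __name__ == "__main__":
    app.run()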
Cloudflare doesn't trigger 504 errors on timeout
504 is a timeout triggered by your server - nothing to do with Cloudflare.
524 is a timeout triggered by Cloudflare.
See: https://support.cloudflare.com/hc/en-us/articles/115003011431-Troubleshooting-Cloudflare-5XX-errors#502504error
524 error? There is a workaround:
As @mjsa mentioned, Cloudflare only offers timeout settings to Enterprise clients, which is not an option for most people.
However, you can disable Cloudflare proxying for that specific (sub)domain by toggling the orange cloud icon to grey in the DNS settings.
Note: this will disable extra functionality for that specific (sub)domain, including IP masking and SSL certificates.
As Cloudflare states in their documentation:
If you regularly run HTTP requests that take over 100 seconds to complete (for example large data exports), consider moving those long-running processes to a subdomain that is not proxied by Cloudflare. That subdomain would have the orange cloud icon toggled to grey in the Cloudflare DNS Settings. Note that you cannot use a Page Rule to circumvent Error 524.
I know this cannot be treated as a real solution, but there are two ways of avoiding it:
1) Since this timeout is usually caused by something that takes a long time to generate, this type of work can be done through crontab, or, if you have SSH access, you can run the PHP script directly from the command line. In that case the connection is not served through Cloudflare, so it can run for as long as your configuration allows. Look up how to run scripts from the command line, or how to schedule them in crontab, using /usr/bin/php /direct/path/to/file.php (a sample crontab entry is sketched after this list).
2) You can create a subdomain that is not added to Cloudflare, move your script there, and run it directly through a URL, an Ajax call, or whatever.
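For option 1, a crontab entry could look like the line below (the schedule and log file are placeholders; the script path follows the answer above):

0 2 * * * /usr/bin/php /direct/path/to/file.php >> /var/log/report.log 2>&1

This runs the script every day at 02:00 and appends its output to a log file, entirely outside any HTTP (and therefore Cloudflare) timeout.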
There is a good answer on Cloudflare community forums about this:
If you need to have scripts that run for longer than around 100 seconds without returning any data to the browser, you can’t run these through Cloudflare. There are a couple of options: Run the scripts via a grey-clouded subdomain or change the script so that it kicks off a long-running background process and quickly returns a status which the browser can poll until the background process has completed, at which point the full response can be returned. This is the way most people do this type of action as keeping HTTP connections open for a long time is unreliable and can be very taxing also.
This topic on Stack Overflow ranks high in search results, so I decided to write down this answer for those who will find it useful.
https://support.cloudflare.com/hc/en-us/articles/115003011431-Troubleshooting-Cloudflare-5XX-errors#502504error
A Cloudflare 524 error results from a web page taking more than 100 seconds to completely respond.
This can be raised to up to 600 seconds if you change to an "Enterprise" Cloudflare account. The cost of Enterprise is roughly $40k per year (annual contract required).
If you are getting your results with curl, you could use the --resolve option to connect directly to your origin IP instead of the Cloudflare proxy IP (the 127.0.0.1 below stands in for your origin server's real address):
For example:
curl --max-time 120 -s -k --resolve lifeboat.com:443:127.0.0.1 -L https://lifeboat.com/blog/feed
The simplest way to do this is to increase your proxy's waiting timeout.
If you are using Nginx, for instance, you can simply add this line in your /etc/nginx/sites-available/your_domain:
location / {
...
proxy_read_timeout 600s; # sets the read timeout to 600 seconds (10 minutes); adjust as you see fit for your needs.
...
}
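The new value only takes effect once nginx reloads its configuration; a common invocation (assuming a systemd-based server) is:

sudo nginx -t && sudo systemctl reload nginx

The nginx -t check validates the edited file first, so a typo won't take the server down.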
If the issue persists, make sure you use Let's Encrypt to secure your server alongside Nginx, and then disable the orange cloud on that specific subdomain on Cloudflare.
Here are some resources you can check to help you do that:
installing-nginx-on-ubuntu-server
secure-nginx-with-let's-encrypt

.Net Core 5 API keeps returning the same IP

In my .NET Core 5 API I need to get the remote IP from the request. It works fine on my test server.
I checked it several times from different networks.
BUT
on my client's server, I'm getting the same IP address regardless of where I'm calling the API from.
So, my problem is that on my server everything works fine, but on the client's server I'm getting the wrong IP address.
I've installed the .Net 5 SDK
I've compared it to my server and IIS (different versions of IIS, but configuration seems fine)
Any ideas?
I will assume that you have a load-balancer (LB) environment, so most probably you are getting the LB's IP.
If this is the case, you should configure the LB to pass HTTP headers through to the internal requests, especially if you use internal HTTP redirection across the DMZ.
Mainly you can rely on the "X-Forwarded-For" header: the proxy server (LB) will fill it in and pass it to your internal servers.
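To illustrate the mechanism (a generic sketch in Python, not the .NET code): each proxy appends the address it received the request from, so the left-most entry is the original client. In ASP.NET Core specifically, the ForwardedHeaders middleware (app.UseForwardedHeaders) can apply this header to HttpContext.Connection.RemoteIpAddress for you.

# Header format: "client, proxy1, proxy2" - each hop appends its peer.
def client_ip(headers: dict, remote_addr: str) -> str:
    """Return the original client IP, falling back to the socket peer."""
    xff = headers.get("X-Forwarded-For")
    if not xff:
        return remote_addr  # no proxy involved, or the header was stripped
    # Trust the left-most entry only if every proxy in the chain is yours.
    return xff.split(",")[0].strip()

print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.0.2"}, "10.0.0.2"))
# -> 203.0.113.7 (203.0.113.x is a documentation-only address range)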
After talking to my client, it turned out he knew that the server is in fact behind NAT (I asked him about that before and he denied it 🤷‍♂️).

Server Side Rendering is not working in CCV2 Cloud

We are using Spartacus Version 3.0.0 and have setup Cloud Deployment via SAP CCV2 Cloud.
We followed the steps to enable SSR described in https://sap.github.io/spartacus-docs/server-side-rendering-in-spartacus/#adding-ssr-support-using-schematics-recommended. Additionally, we followed the guide for the workaround needed regarding the file structure in the CCV2 cloud: https://sap.github.io/spartacus-docs/ssr-ccv2-issue-spartacus-version-2/#page-title
So far, everything works locally when starting the server in both dev and production mode. But as soon as we deploy to the CCV2 cloud, we don't have server-side rendering at all anymore.
In the Kibana log, we sometimes see the error message "SSR Rendering exceeded timeout, fallbacking to CSR", but only occasionally, for some requests. This means that for most requests there is no SSR, but also no error logs.
Any idea?
The problem was caused by the IP restriction on the DEV environment of CCV2. This IP restriction is currently also applied to requests coming from the storefront service during an SSR request. As the IP of the storefront service was not whitelisted, the call always returned a 403, which surfaced as an SSR timeout.
The Spartacus documentation has been updated regarding that problem: https://sap.github.io/spartacus-docs/server-side-rendering-optimization/#troubleshooting-a-storefront-that-is-not-running-in-ssr-mode
We have created an SAP Bug ticket to fix that problem.

What would happen if I made a HTTP request to a server without Apache installed?

It doesn't have to be Apache; that's just the only HTTP server I know of. (Actually, could you guys recommend alternatives that I could look into as well?)
Anyway, I have been messing around with Amazon Web Services and I created an EC2 server instance with an Amazon Linux image. On that (following guides and examples) I installed Apache, and now when I make a GET request to my public IP, it returns the HTML files I created on my server.
My question is: what if I had never installed Apache and then made an HTTP request to my public IP? For no reason really; the question just came up in my head and I'm curious. I'd rather not figure out how to uninstall Apache or create a new instance to find out, so I was wondering if somebody could weigh in, and also tell me a little more about what exactly Apache does on a server. My understanding is that it is a layer you can install on your server OS that creates a socket listener on port 80 (HTTP), and when a request is made on that port, Apache returns web pages? Also, I think I read somewhere you can configure Apache to forward a port to something like a Python server script?
Thanks in advance for your time!
could you guys recommend alternatives that I could look into as well?)
nginx is a popular alternative to Apache. It's generally more efficient, especially at handling many concurrent connections.
what if I never installed Apache, and then made an HTTP request to my public IP?
Your browser would get a "connection refused" error, because nothing is listening on port 80. Your browser would display a message (Chrome says "This webpage is not available"). You would NOT get a 404, because that requires an HTTP server to send HTTP status codes.
If your server were firewalled instead, you'd get a long wait, then a message about the server not responding.
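You can see both behaviours with a few lines of Python (a sketch; substitute your server's address for the placeholder documentation IP):

import socket

s = socket.socket()
s.settimeout(5)
try:
    s.connect(("192.0.2.1", 80))  # placeholder IP; use your server's
except ConnectionRefusedError:
    print("closed port: connection refused (TCP RST, no HTTP involved)")
except socket.timeout:
    print("filtered port: no response, the connection attempt timed out")
finally:
    s.close()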
Also I think I read somewhere you could configure Apache to forward a port to something like a python server script?
Yes, that is called "reverse proxy" mode. It's essential to any application website if you want to scale: the web server(s) can distribute traffic to one or more backends running the application. The web server is also useful for filtering bad requests, since your backend in Ruby/Python will be orders of magnitude slower than the reverse proxy.
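In nginx, a minimal reverse-proxy stanza looks something like this (the backend address and port are assumptions):

location / {
    proxy_pass http://127.0.0.1:8000;  # your application backend
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # preserve the client IP
}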
Well, if you want to test what will happen if Apache isn't installed, you can always just stop the Apache service by typing:
sudo service apache2 stop
or
sudo service httpd stop
depending on your version. Then if you visit your site's webpage you'll get a connection error rather than a 404; as noted in the other answer, a 404 would require a running HTTP server to send it.
There are ways to use Python scripts to run simple servers, but in general it's easier to just let Apache handle that and use a framework like Ruby on Rails or Django to control the display and creation of content for your server.
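For instance, Python's standard library can serve the current directory with a few lines (equivalent to running python3 -m http.server 8000):

from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
print("Listening on http://0.0.0.0:8000 - Ctrl+C to stop")
server.serve_forever()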

How to secure data transfer between Rails App on Heroku and a remote IIS Server?

I'm running a Rails app on Heroku's Cedar stack.
I need to access a remote Windows Server (over the internet, not in a LAN) to query a SQL Server database.
First, I used TinyTDS to access the DB, but configuring it on Heroku is really painful.
Second, I made a dynamic web page on the remote IIS server and was making HTTP GET requests to retrieve data. But it's not secure.
I need a good and secure solution (SSH tunnel?).
Any help appreciated.
There's nothing wrong with using HTTP requests to interface between the two. If you want encryption, just use HTTPS. If it's just for internal use, you don't need to buy an SSL certificate; you can generate one yourself.
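For example, a self-signed certificate and key can be generated with OpenSSL (the file names and validity period are placeholders):

openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes

Point IIS at the resulting certificate, and trust or pin it on the Heroku side, since HTTP clients won't trust a self-signed certificate by default.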