Is it possible to disable client side routing in Gatsby?
I'm using Gatsby to generate a static site which has only one page and will be served from AWS S3. I'm running into an issue where Gatsby removes the object suffix from the URL (https://s3.amazonaws.com/top-bucket/sub-bucket/index.html becomes https://s3.amazonaws.com/top-bucket/sub-bucket/) after the page and the Gatsby runtime load. This does not happen if I disable JavaScript, so I'm fairly certain it's caused by Gatsby's use of React/Reach Router.
Is there any way to disable this behavior? I know I can probably set up a redirect on S3 to handle the request to the bucket, but I'd prefer to handle this at the application level, if possible.
This is a hack and may not work in anyone else's application, or may break with future releases of Gatsby, but I was able to prevent this redirect by setting window.page.path = window.location.pathname; in gatsby-browser.js. This short-circuits a conditional check in production-app.js, which attempts to "make the canonical path match the actual path" and causes the (IMO) unexpected behavior referenced above.
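For reference, a minimal sketch of the hack (the defensive window.page initialization is my addition; this leans on Gatsby internals and may break in future releases):

// gatsby-browser.js
// Pretend the canonical path already matches the browser location so that
// the check in production-app.js skips its URL rewrite.
window.page = window.page || {};
window.page.path = window.location.pathname;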
This issue is pretty old, but I hope this helps someone. I used this plugin: https://github.com/wardpeet/gatsby-plugin-static-site
npm install @wardpeet/gatsby-plugin-static-site --save
and just added it to gatsby-config.js:
plugins: [
  `@wardpeet/gatsby-plugin-static-site`,
]
Client side routing was then disabled!
Related
I have some React code (written by someone else) that needs to be served. The preferred method is via a Google Storage Bucket fronted by their Cloud CDN, and this works. However, due to some quirks in the code, there is a requirement to override 404s with 200s and serve content from the homepage instead (i.e. if a request would produce a 404, don't serve a 404; serve the content of the homepage with a 200 status instead).
(If anyone is interested, this override is currently implemented in CloudFront on AWS. Google CDN does not provide this functionality yet.)
So, if the code is served at "www.mysite.com/app/" and someone hits "www.mysite.com/app/not-here" (which would return a 404), what should happen is that the response should NOT be 404, but a 200 with the content being served from index.html instead.
I was able to get this working by bundling all the code inside a docker container and then using the solution here. However, this setup means if we have a code change, all the running containers need to be restarted, and the customer expects zero downtime, hence the bucket solution.
So I now need to do the same thing but with the files being proxied in (with the upstream being the CDN).
I cannot use the original solution since the files are no longer local, and httpd can't check for the existence of a file that is not local.
I've tried things like ProxyErrorOverride and ErrorDocument, and managed to get it to redirect, but that is not what is needed.
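For context, what I tried looks roughly like this (the CDN hostname is illustrative); with ErrorDocument pointing at a full URL, httpd sends the client a 302 rather than serving index.html with a 200:

ProxyPass        /app/ https://cdn.example.com/app/
ProxyPassReverse /app/ https://cdn.example.com/app/
ProxyErrorOverride On
ErrorDocument 404 https://www.mysite.com/app/index.html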
Does anyone know how/if this can be done?
If the question is: how do you catch, with httpd/Apache, the 404 that Cloud Storage returns when a file is missing? I don't know.
However, I don't think that's the best solution. Serving files directly from Cloud Storage is convenient, but it isn't robust for production use.
Imagine you deploy several broken files in succession: how do you roll back to a stable state?
The best approach is to package each code release as an atomic unit, a container for instance. Each version lives in its own container, which makes rollbacks easier and more consistent.
Now your "container restart" issue. I don't know on which platform you are running your container. If your run it on a Compute Engine (a VM) it's maybe the worse solution. Today, there is container orchestration system that allows you to deploy, scale up and down the containers, and to perform progressive rollout, to replace, without downtime, the existing running containers by a newer version.
Cloud Run is a wonderful serverless solution for that; you also have Kubernetes (GKE on Google Cloud), which you can use with Knative for a better developer experience.
I have a problem using Cypress when running tests on our staging domain. For some reason the Cypress browser opens the correct website but then immediately changes the URL to the absolute domain and appends __/ at the end:
https://stagingdomain.com/administrators/login becomes https://stagingdomain.com/__/
On production this does not happen and the test passes correctly. Side note: our staging environment is only accessible behind our corporate VPN, but besides that everything else is the same.
it('Gets, types and asserts', function () {
  cy.visit('https://stagingdomain.com/administrators/login');
  cy.contains('ADMIN LOGIN');
  cy.url().should('include', 'administrators');
});
I have followed all the security measures in Cypress' documentation, but none seem to resolve this issue. I'm wondering if anyone else has faced the same challenge and has been able to overcome it.
Turns out this was a known issue with Cypress which has since been addressed in version 3.4.1.
This is still an issue in 5.1.0.
My page clearly redirects to index.php, which is in http://url.com/site/. Instead of redirecting to http://url.com/site/index.php like it should, the page is redirected to http://url.com/__/index.php, which does not exist. It seems to be an issue with the doc root rewrite.
I also tried adding these to my cypress.json with no luck:
{
  "baseUrl": "http://url.com/site/index.php",
  "experimentalSourceRewriting": true
}
As a workaround, I simply redirect the user again after login, and the test then checks that the session is valid by landing on my secure page.
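In Cypress terms the workaround looks something like this (the secure page name is hypothetical):

// Log in, then verify the session by visiting the secure page directly
// instead of relying on the broken redirect.
cy.visit('http://url.com/site/index.php');
// ...login steps...
cy.visit('http://url.com/site/secure.php'); // hypothetical secure page
cy.url().should('include', 'secure');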
So our Subversion server changed, and with it came a necessary URL change: from the https://hostname of the previous machine to a more apt https://svn.
Problem is, a lot of the externals use the absolute https://hostname/blah/blah/blah rather than ^/blah/blah/blah. And this has obviously led to a lot of failures.
To prevent the headache of changing possibly hundreds of externals one checkout at a time, I've been asked to figure out a way to use HTTP redirects so the externals can stay as they are for now.
I've got this simple rule in the httpd.conf of the old server, which is still being used for other HTTP services.
Redirect /repo/ https://svn/repo/
And that works fine for web browsing of our repositories. But it doesn't work for TortoiseSVN; I just get "Repository moved temporarily to 'https://svn/repo'; please relocate". And on Linux I just get "Unable to connect to a repository at URL 'https://old hostname/repo/blah/blah'".
Is this possible at all? I hope it is and I just need a different form of redirect.
Never mind. I'm too new to this. I had to change 'Redirect' to 'Redirect 301'.
Probably should have been obvious, but it works now.
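For anyone who lands here, the working rule was simply the permanent variant of the same directive:

Redirect 301 /repo/ https://svn/repo/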
After installing these plugins:
Concatenate Js/Css and
SimpleCache
I get this error in the backend:
Error: <!DOCTYPE html>
<html lang="de">
I have this behaviour on two sites. Other sites without these plugins are working.
How can I reach the backend, or what do I have to do to reach it?
Regards
Uwe
Turn on safe mode: http://www.impresspages.org/docs/safe-mode/
It turns off all plugins.
You can also make plugins inactive by going to the database and altering the ip_plugin table records.
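A sketch of that query, assuming the column is named isActive (verify the actual column and plugin names in your ip_plugin table before running it):

UPDATE ip_plugin SET isActive = 0 WHERE name IN ('ConcatenateJsCss', 'SimpleCache');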
Thanks for the support. The problem I've got is that I can't reach the backend.
In your docs it is written:
"Safe mode can be used only by the administrator. Make sure you are logged in to the admin before trying to enter safe mode." (See http://www.impresspages.org/docs/safe-mode/)
Well, I can't enter the backend. When I type http://www.domain.tld/admin I get the error above. Is the fault on my side?
Is it possible to rename or delete both plugin folders via FTP to regain access to the backend?
"You can also make plugins inactive by going to the database and altering the ip_plugin table records." Okay, thank you.
Plugin has been updated. Should work now.
I have an SSL page that also downloads an avatar from a non-SSL site. Is there anything I can do to isolate that content so that the browser does not warn the user about mixed content?
Just an idea - either:
try to use an SSL URL on the avatar website, if necessary by editing whatever JS/PHP/... script they provide, or:
use your scripting language of choice to grab a copy of the avatar, store it on your server, and serve it from there.
There are a number of good security reasons for the browser to warn about this situation, and attempting to directly bypass it is only likely to set off more red flags.
Ninefingers' suggestions are good, and I would suggest a third option: you can proxy the content directly through your own server using a simple binary retrieve/transmit script, if it changes frequently and is unsuitable for caching.
If all the content you want to include from foreign sites comes from a specific server and path (i.e. http://other.guy/avatar/*), you could use mod_proxy to create a reverse proxy which makes https://your.site/avatar_proxy/{xyz} mirror http://other.guy/avatar/{xyz}. This will increase your bandwidth usage and probably slow things down.
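A sketch of that httpd configuration, assuming mod_proxy and mod_proxy_http are loaded:

# Mirror http://other.guy/avatar/* under https://your.site/avatar_proxy/*
ProxyPass        /avatar_proxy/ http://other.guy/avatar/
ProxyPassReverse /avatar_proxy/ http://other.guy/avatar/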