frontend cloud run app cannot access my backend cloud run app due to a Mixed Content problem - vue.js

I have two cloud services up and running.
frontend (URL: https://frontend-abc-ez.a.run.app/)
backend (URL: http://backend-abc-ez.a.run.app/)
The frontend calls the backend through a nuxt.js server middleware proxy to get around the CORS problem.
The call does get through - I can see it in the backend log files. However, the response never makes it back to the browser; I see this error in the console:
Mixed Content: The page at 'https://frontend-abc-ez.a.run.app/' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://backend-abc-ez.a.run.app/login'. This request has been blocked; the content must be served over HTTPS.
What I find weird is that I configured the backend URL with https, yet the request apparently goes out over http - at least that is what the error tells me. I also see a /login path segment in the insecure URL. Why is that? I never explicitly defined that endpoint. Is it added by the Cloud Run service's own security/proxy layer?
Anyway - I need to solve this properly, and I'm having a hard time understanding the source of the problem.
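One thing worth double-checking is the proxy target in the Nuxt config: if the middleware proxies to the http:// form of the backend URL (note that the backend URL listed above starts with http://), the insecure endpoint in the error follows directly from that. A minimal sketch of an https target using the @nuxtjs/proxy module - the /api prefix and the module choice are assumptions, not taken from the question, so adjust to the actual server middleware setup:

// nuxt.config.js - hypothetical sketch, not the asker's actual config
export default {
  modules: ['@nuxtjs/proxy'],
  proxy: {
    // Forward /api/* to the backend over HTTPS so the browser never
    // requests an http:// URL (which would trigger Mixed Content).
    '/api/': {
      target: 'https://backend-abc-ez.a.run.app', // https, not http
      pathRewrite: { '^/api/': '/' },
      changeOrigin: true,
    },
  },
}

With a setup like this the browser only ever talks to the https frontend origin, and the proxy talks to the backend server-side.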

For some reason, when I rechecked the applications this morning, everything worked fine. I really have no idea why it is working now. I did not change a thing - I was waiting for answers here before continuing.
Very weird. But the solution so far seems to have been to wait. Maybe Cloud Run had some temporary trouble.

Related

Login to manager of piranhacms hosted in docker behind nginx on ubuntu (aspnet core 6)

Does anyone have experience hosting Piranha on Ubuntu in a Docker container behind nginx?
The frontend is fine, everything works smoothly. But the manager is not working for me. It has something to do with the login. At first I saw a 502 error after login (a failed login produced the correct error message, so the POST itself works). I changed the login to return the page instead of doing the local redirect. There was no error message, so I guess the login data was fine, but somehow I am still not logged in.
The only cookie I see is the antiforgery one. Does anyone have an idea? There is no error message in the logs.
The problem was that ASP.NET Core Identity created a cookie larger than nginx accepts in proxy communication.
I increased the buffers, but that did not work for me. There are a lot of articles about it, but nothing worked.
So I decided to reduce the headers being sent: the application configuration was changed to manage the identity in a different way - more information kept in memory, but a smaller cookie.

body parser makes Heroku time out

I am developing a stack based on NodeJS + Express + SocketIO and I deployed it to Heroku.
The stack uses concurrency in the dyno (I followed Heroku's guide on how to do it) and a Redis adapter for Socket.IO.
In addition to Socket.IO, an SPA is served with express.static() on a dynamic path (/app/:name) and an API on /api/, and everything works fine, even on Heroku.
The problem arises when this web app (or just an HTTP client like the one in WebStorm) makes a POST request to the /api/:app/login endpoint, at exactly the point of the express.json middleware. Locally it works fine, but on Heroku it randomly takes more than 30 seconds without giving any error on the server; the Heroku router then logs an H12 error and returns a 503 to my client.
It doesn't happen every time, but roughly 3 out of 4 times; after I refresh the page or retry the request enough times, it works.
Any advice on what could be blocking the middleware?
Thanks
I tried placing logging middleware all over to find the middleware causing the timeout.
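A sketch of that kind of instrumentation, assuming a plain Express app - the route path matches the one mentioned above, but the handler body and the 100kb limit are placeholders:

const express = require('express');
const app = express();

// Log when each request starts and how long it took to finish,
// so a stalled request stands out in the Heroku logs.
app.use((req, res, next) => {
  const startedAt = Date.now();
  console.log(`--> ${req.method} ${req.originalUrl}`);
  res.on('finish', () => {
    console.log(`<-- ${req.method} ${req.originalUrl} ${res.statusCode} ${Date.now() - startedAt}ms`);
  });
  next();
});

// Mount the JSON body parser only on the route that needs it, so a
// body stream that never finishes cannot stall unrelated routes.
app.post('/api/:app/login', express.json({ limit: '100kb' }), (req, res) => {
  res.json({ ok: true }); // placeholder handler
});

app.listen(process.env.PORT || 3000);

If the "-->" line appears but the request never finishes within 30 seconds, the time is being spent before the handler runs - typically while waiting for the request body to arrive rather than inside Express itself.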

404 error with custom domain for Google Cloud Run service

I created a custom domain mapping for my Cloud Run service following this guide https://cloud.google.com/run/docs/mapping-custom-domains.
I can access my service via the https run.app URL and the custom domain via HTTP, but when I go to the custom domain via HTTPS, I get back a Google 404 error page.
The weird thing is, this seems to be an issue on my local laptop (both browser and curl on the terminal), but curl-ing it from a remote server seems to work ok.
As #LundinCast pointed out, there seems to be an outage on the Google server side. I'll monitor the situation and mark this as resolved for now.
Edit: I'm guessing this is related to https://status.cloud.google.com/incident/cloud-networking/19016

How to fix "Cross-Origin Read Blocking (CORB) blocked cross-origin response with MIME type application/json." issue?

I'm currently developing the frontend (VueJS) of a project, and to test my login and register logic I'm using Laravel as the backend, though we'll actually be working with Spring Boot for the backend. I was coding on a desktop and everything was normal. Then I started working on my laptop: same project, everything identical. When I use Postman to make the requests it works normally, but when I make them from the form on my website, I get this error.
I've looked everywhere but couldn't fix it. Nothing I tried worked, and it seems that no one else has had a similar problem.
Cross-Origin Read Blocking (CORB) blocked cross-origin response http://127.0.0.1:8000/api/login with MIME type application/json. See https://www.chromestatus.com/feature/5629709824032768 for more details.
Add a proxy configuration in the vue.config.js file:
module.exports = {
  devServer: {
    proxy: 'http://localhost:4000'
  }
}
This will tell the dev server to proxy any unknown requests (requests that did not match a static file) to http://localhost:4000.
here is a link to the doc for more detail
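If only some paths should be proxied (for example /api), the proxy option also accepts an object form; a sketch, assuming the Laravel dev server runs on http://127.0.0.1:8000 as in the error above:

// vue.config.js - object form of the dev-server proxy; the /api prefix
// is an assumption, adjust it to match your routes.
module.exports = {
  devServer: {
    proxy: {
      '/api': {
        target: 'http://127.0.0.1:8000', // Laravel dev server
        changeOrigin: true               // rewrite the Host header for the backend
      }
    }
  }
}

The frontend then calls /api/login with a relative URL, so during development the browser only ever sees same-origin requests and neither CORS nor CORB applies.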

Google compute load balancer throws 400 Bad Request on DELETE

I created an instance group from an instance template and assigned this instance group to a backend service which is used by an HTTP load balancer.
When I open a URL to an instance VM from that instance group directly, I can do GET, POST and DELETE requests; all of the requests are fast and everything works as expected.
When I go through the static IP of the load balancer, GET and POST requests work, but DELETE requests throw a 400 BAD REQUEST with a response page saying:
That’s an error.
Your client has issued a malformed or illegal request. That’s all we know.
Other load balancer issues:
The site is quite slow through the load balancer. Perhaps there is a setting I'm missing; I'm pretty sure I set everything to us-central1-b.
Sometimes the site doesn't even show up. It will work over http but then not over https, and vice versa. The load balancer behaves very strangely.
My VM's API access is set to "This instance has full API access to all Google Cloud services".
I'm using Django as my API layer. I turned on debugging on the host and saw that the DELETE requests weren't even arriving when made through the load balancer's static IP. Is there a firewall setting I'm missing?
Please help me make this fast again and allow the DELETE requests to happen.
Thanks!
Are you sending anything in the body of the request?
The Google load balancer will respond with 400 BAD REQUEST if you try to send anything in the body. An easy way to check whether this is the problem is to fire up Chrome Developer Tools and confirm that the Request Payload section is empty or doesn't exist.
The HTTP spec doesn't explicitly say whether you can pass anything in the body of a DELETE, so this isn't wrong, just undefined.
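As a quick client-side check, a DELETE sent without any payload should pass through the load balancer; a sketch using fetch, with a placeholder URL:

// Placeholder URL - substitute the load balancer's address and path.
fetch('https://LOAD_BALANCER_IP/api/items/42', {
  method: 'DELETE'
  // deliberately no `body` here; put identifiers in the URL instead
})
  .then((res) => console.log('status:', res.status))
  .catch((err) => console.error('request failed:', err));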
Is the load balancer slow for all requests, or just for pages with lots of elements on them?