Body parser makes Heroku time out (H12) - Express

I am developing a stack based on NodeJS + Express + SocketIO and I deployed it to Heroku.
The stack uses concurrency in the dyno (I followed Heroku's guide on how to do it) and a Redis adapter for SocketIO.
In addition to SocketIO, a SPA is served with express.static(), responding to a dynamic path (/app/:name), and an API is served under /api/. Everything works fine, even on Heroku.
The problem arises when this web app (or just an HTTP client like the one in WebStorm) makes a POST request to the /api/:app/login endpoint. The request stalls exactly at the express.json middleware, which works fine locally; on Heroku it randomly takes more than 30 seconds without giving any error on the server, the Heroku router logs an H12 error, and my client receives a 503.
I noticed it doesn't happen every time, but roughly three quarters of the time; after I refresh the page or retry the request enough times, it goes through.
Any advice on what could be blocking the middleware?
Thanks
I tried placing logging middleware throughout the chain to find the middleware causing the timeout.
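One way to make that logging systematic is a timing wrapper around each middleware; a minimal sketch (the `timed` name and the stand-in middleware are illustrative, not from the question):

```javascript
// Minimal sketch: wrap any Express-style middleware and log how long it
// takes, to pinpoint the one that stalls. "timed" is an illustrative name.
function timed(label, middleware) {
  return (req, res, next) => {
    const start = Date.now();
    middleware(req, res, (err) => {
      console.log(`${label} finished in ${Date.now() - start} ms`);
      next(err);
    });
  };
}

// In the real app this would be: app.use(timed('json', express.json()));
// Here a stand-in middleware demonstrates the wrapper on its own:
const fakeJson = (req, res, next) => { req.body = { parsed: true }; next(); };
const req = {}, res = {};
timed('json', fakeJson)(req, res, () => {});
```

If the "finished" line for express.json never appears on Heroku, the request body itself is likely never fully arriving at the dyno, which would match an H12: the router cuts the connection after 30 seconds.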

Related

Login to manager of piranhacms hosted in docker behind nginx on ubuntu (aspnet core 6)

Does anyone have experience hosting Piranha on Ubuntu in a Docker container behind nginx?
The frontend is fine; everything works smoothly. But the manager is not working for me. It has something to do with the login. First I saw a 502 error after login (a failed login worked with the correct error message, so the POST itself is working). I changed the login not to do the local redirect but to return the page instead. No error message, so I guess the login data was fine, but somehow I am still not logged in.
The only cookie I see is the antiforgery one. Does anyone have an idea? There is no error message in the logs.
The problem was that ASP.NET Core Identity created a cookie larger than nginx accepts in proxy communication.
I increased the buffers, but this did not work for me. There are a lot of articles about it, but nothing worked.
So I decided to reduce the headers sent. The application configuration has been changed to manage the identity in a different way: more information is kept in memory, resulting in a smaller cookie.
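For reference, the buffer changes attempted in this situation typically look like the sketch below (the sizes and upstream address are illustrative only, not from the answer):

```nginx
# Sketch of the nginx settings commonly raised for oversized Set-Cookie /
# Cookie headers; sizes and the upstream address are illustrative only.
server {
    # Allow larger request headers (Cookie) from the browser.
    large_client_header_buffers 4 32k;

    location / {
        proxy_pass http://127.0.0.1:5000;
        # Allow larger response headers (Set-Cookie) from the app.
        proxy_buffer_size       32k;
        proxy_buffers           8 32k;
        proxy_busy_buffers_size 64k;
    }
}
```

When raising buffers is not enough, shrinking the cookie itself (as described above) is the more robust fix.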

Frontend Cloud Run app cannot access my backend Cloud Run app due to a MixedContent problem

I have two cloud services up and running.
frontend (URL: https://frontend-abc-ez.a.run.app/)
backend (URL: http://backend-abc-ez.a.run.app/)
The frontend is calling the backend through a nuxt.js server middleware proxy to avoid CORS issues.
The call is coming through - I can see that in the backend log files. However the response is not really coming back through because of CORS. I see this error in the console:
Mixed Content: The page at 'https://frontend-abc-ez.a.run.app/' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://backend-abc-ez.a.run.app/login'. This request has been blocked; the content must be served over HTTPS.
What I find weird is that I configured the backend URL with https, but it is enforced as http; at least that is what the error is telling me. Also, I see a /login path segment in the insecure URL. Why is that? I never explicitly defined that endpoint. Is it the security layer proxy of the Cloud Run service itself?
Anyway, I need to get through this properly and am having a hard time understanding the source of the problem.
For some reason, as I rechecked the applications this morning, everything went fine. I have really no idea why it is working now. I did not change a thing; I waited for the answers here before continuing.
Very weird. But the solution so far seems to be waiting. Maybe Cloud Run had some troubles.
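Since Cloud Run endpoints are served over HTTPS, one defensive measure against this class of error is to normalize the proxy's backend base URL before any request is made; a tiny sketch (the function name is illustrative):

```javascript
// Sketch: Cloud Run endpoints are HTTPS-only, so the backend base URL can
// be normalized defensively before the proxy uses it. Name is illustrative.
function forceHttps(url) {
  return url.replace(/^http:\/\//i, 'https://');
}

console.log(forceHttps('http://backend-abc-ez.a.run.app/login'));
// → 'https://backend-abc-ez.a.run.app/login'
```

This guards against a stray `http://` sneaking in through an environment variable or a redirect.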

Ktor: first call.receiveText() gets timeout

I've recently developed a simple Ktor app and set up hosting for it on Apache Web Server + Tomcat.
The app has post { … } route used for processing HTTP POST requests. The route works fine for all requests except the first one.
Some additional tracing shows that request processing gets stuck on the line
call.receiveText()
where I read the POST JSON body for further parsing. The request is stuck until the network timeout, and I couldn't determine the actual processing time (it takes minutes).
Every following POST request with exactly the same content is processed fine. In the Tomcat settings I set "load-on-startup", but it didn't affect the result.
What could be the reason for this behavior? I assume some lazy-loading issue. Does Ktor provide a mechanism to force initialization of the library components?

Bad Request (400) response from Cowboy server when hosting on Heroku

I have an app running on Heroku and noticed today that a particular request leads to a 400 response in Firefox but works in Chrome. I also found out that if I remove an unnecessary cookie, the request succeeds again.
While investigating, I also found an issue with Chrome, where it wasn't able to fetch the CSS file while loading the website:
However, opening that link in a new tab, I managed to load the CSS without any problems. Removing that one extra cookie also sorted it out.
All these 400 responses have one thing in common: they are served by the server "Cowboy".
The application I'm running is ASP.NET Core, so it should return "Kestrel" as the server instead. It seems that the request never makes it past the Heroku router to the dyno, because I can't find it in the dyno logs.
I've searched online for an explanation, and it seems this is the response I should expect if I were running into some limits. But this is just a staging application, and it works in Chrome but not in Firefox, so it's hard for me to imagine which limits I could be hitting.
Update:
We've removed those unnecessary cookies, and now Chrome loads the CSS fine, but Firefox is still getting a 400 Bad Request from the Cowboy server. Any ideas? I've only found "Why do I get a '400 Bad Request' response when I have large cookies?", which doesn't seem to apply to me: the cookies are less than 4 KB, all browsers should have the same cookies, and there is nothing on the server to differentiate between browsers.
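One thing worth measuring is the exact on-the-wire header size each browser sends, since routers reject requests whose combined headers exceed their limit. A quick sketch (the 8 KB limit constant is an assumption for illustration, not a verified Heroku figure):

```javascript
// Sketch: compute the on-the-wire size of a request's headers, to compare
// what Firefox and Chrome actually send. The 8 KB limit is an assumption.
const LIMIT = 8 * 1024;

function headerBytes(headers) {
  return Object.entries(headers).reduce(
    (sum, [name, value]) => sum + Buffer.byteLength(`${name}: ${value}\r\n`),
    0
  );
}

const example = {
  Cookie: 'session=' + 'x'.repeat(3000),
  'User-Agent': 'Mozilla/5.0 (Firefox)',
};
console.log(headerBytes(example), headerBytes(example) > LIMIT);
```

Copying each browser's headers out of its dev tools into a map like `example` would show whether Firefox is genuinely sending more bytes than Chrome.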

Yii Flash Messages not showing - possible HTTP Proxy browsing?

I'm investigating a problem a user is having with a web application that is built using Yii.
The user is not seeing the Yii 'flash' session-based user-feedback messages. These messages are shown once to a user and then destroyed (so they're not shown on subsequent page loads).
I took a look at the server access logs and I noticed something weird.
When this user requests a page, there is a second, identical request from a different IP and with a different User-Agent string. The second request often arrives at the same time, or sometimes (at most) a couple of minutes later. A bit of googling leads me to the conclusion that the user is browsing the web through an HTTP proxy.
So, is this likely to be an HTTP proxy? Or could it be something more suspicious? And if it is an HTTP proxy, does this explain why they're not seeing the flash session messages? Could it be that the messages are being 'shown' to the proxy and then destroyed?
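To make that hypothesis concrete: flash messages are read-once, so whichever request reads them first destroys them. A framework-agnostic sketch of that semantics (Yii's actual implementation differs; the class and names here are illustrative):

```javascript
// Sketch of read-once "flash" semantics: the first read of a key destroys
// it, so a proxy's duplicate request would consume the user's message
// before the user's own request arrives.
class FlashBag {
  constructor() { this.store = new Map(); }
  set(key, message) { this.store.set(key, message); }
  get(key) {
    const message = this.store.get(key); // read once...
    this.store.delete(key);              // ...then destroy
    return message;
  }
}

const session = new FlashBag();
session.set('success', 'Profile saved!');
console.log(session.get('success')); // → 'Profile saved!' (proxy's request)
console.log(session.get('success')); // → undefined (user's own request)
```

Note this only explains the symptom if the second request shares the user's session, i.e. the proxy forwards the user's cookies.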