Facebook oauth/access_token call works perfectly on local machine, but fails on dev server - api

Fighting a weird situation. We are developing a Facebook app and I have run into a strange issue. When our application sends out the oauth/access_token call, it works perfectly on my local environment, but returns a 400 error when called from our dev server. I've checked the calls from the two environments against each other and they are identical, yet one works and the other fails completely.
Help is appreciated.
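For context, this is roughly what the token-exchange call looks like (a sketch rather than our exact code; the app credentials, redirect URI, and callback path are placeholders, and Node 18+ global fetch is assumed):

```ts
// token-exchange.ts - minimal sketch of a Graph API oauth/access_token call.
// The app ID/secret, redirect URI, and code value are placeholders.
async function exchangeCodeForToken(code: string): Promise<void> {
  const params = new URLSearchParams({
    client_id: process.env.FB_APP_ID ?? '',
    client_secret: process.env.FB_APP_SECRET ?? '',
    // Must match the redirect_uri used in the original auth dialog exactly;
    // a mismatch is a common reason the same call 400s from a different host.
    redirect_uri: 'https://dev.example.com/fb/callback',
    code,
  });

  const res = await fetch(`https://graph.facebook.com/oauth/access_token?${params}`);

  if (!res.ok) {
    // Facebook returns an error body on 400 responses; logging it on the
    // dev server usually pinpoints which parameter it rejected.
    console.error(res.status, await res.text());
    return;
  }

  console.log(await res.json());
}
```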

Related

Fetch() async/await method in a local NextJS 13 app getting object "undefined" when trying to build for prod but works in dev

I'm learning NextJS 13 and trying out fetch()/async/await in a simple demo app running in a local Docker container. I'm using fetch() with async/await to hit a GraphQL API on another local container. Everything works great when running "npm run dev". I now want to try out the workflow of Next's static output and pushing to production, but when I generate the app via "npm run build", none of the API calls work: my code tries to display, for example, "data.item.name", can't select "item.name" on object "undefined", and the build fails. If I go back to "npm run dev", everything works perfectly fine again.
I've double-checked that my API is responding with Postman and all seems to be good. I did notice that when running the NextJS 13 app inside Docker, I had to use the API container's name, i.e. "http://nginx/api" instead of "http://localhost:8888", when making the fetch() call, which had me hung up for a few days. But once I got that working, the app and the fetch() calls worked great.
I’m brand new to Next and React so this one has really had me stuck.
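For reference, this is a minimal sketch of the pattern I described, assuming the App Router; the API URL, the GraphQL query, and the item shape are placeholders from my description, not the real code:

```tsx
// app/items/page.tsx - minimal sketch (App Router assumed); API_URL, the
// query, and the "item.name" shape are placeholders from the description.
type Item = { name: string };

export default async function ItemsPage() {
  const res = await fetch(process.env.API_URL ?? 'http://nginx/api', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: '{ item { name } }' }),
    cache: 'no-store', // opt out of build-time caching while debugging
  });

  if (!res.ok) {
    // Fail loudly: during "npm run build" this page is pre-rendered, so the
    // API has to be reachable from wherever the build runs.
    throw new Error(`API responded ${res.status} during render/build`);
  }

  const { data } = (await res.json()) as { data?: { item?: Item } };

  // Guard before dereferencing so a missing response doesn't crash the build
  // with "cannot read name of undefined".
  if (!data?.item) {
    return <p>No item data returned from the API.</p>;
  }

  return <h1>{data.item.name}</h1>;
}
```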

Quarkus app runs perfectly on local but does not respond to any request on Windows server

I have a simple Quarkus API app. I've tested the app on my local machine and it's running fine. But when I try to run the same app on a Windows server, it starts perfectly but does not respond to any of my API requests. It doesn't show any errors or problems. I've tried multiple ways to run the app, including the packaged build and the jar file, but all led to the same result. Here is a screenshot of the place where the app gets stuck.

Testing individual microservices in JHipster

So I have started working on an app built with JHipster. I have made changes to one microservice and want to run only that one on my local machine, but I am not able to, as it gives me a 401 error. Frustrated, I simply deleted the securityConfig.java file in app/config, and now it keeps hitting me with a login page.
Please help.

Program showing fine in browser but fails to load resources in Electron window

I'm currently working on a project and everything was working fine. Then, after rebooting from a power cut whilst working on said project, the Electron application has failed to load any online resources. It works perfectly fine in a native browser on localhost, but the same online resources fail to load in the Electron version of the application.
Initially, I thought it was a firewall issue, but changing settings has not changed anything. I've reset the router and PC multiple times to no avail.
The error received for any online resource is GET [URL] net::ERR_EMPTY_RESPONSE
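In case it helps narrow things down, here is a minimal sketch of how the failing URLs and their Chromium error strings could be logged from the main process; the window options and loaded URL are placeholders, not my actual setup:

```ts
// main.ts - minimal diagnostic sketch; window options and URL are placeholders.
import { app, BrowserWindow, session } from 'electron';

app.whenReady().then(() => {
  // Log every request that errors out, with the Chromium error string,
  // so each net::ERR_EMPTY_RESPONSE can be tied to a specific URL.
  session.defaultSession.webRequest.onErrorOccurred((details) => {
    console.error(`${details.method} ${details.url} -> ${details.error}`);
  });

  const win = new BrowserWindow({ width: 1024, height: 768 });
  win.loadURL('http://localhost:3000'); // placeholder for the local app URL
});
```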
Edit: Here is a screenshot of the error displayed in Electron:

clientaccesspolicy.xml suddenly stopped working (WCF/Silverlight)

Very frustrated with all of this, hoping someone can assist.
I had a Silverlight application and WCF working together without issue for a year. In order to get them working, I had some pain initially but finally worked through it with help. All of the pain came from configuration/security, 401's, cross-domain hell, etc.
The way I have everything set up is that I have a WCF service that resides in its own application/directory and runs in its own application pool.
On the same web server (IIS7), I have another application/directory with the Silverlight application that points to the aforementioned service.
The server name (for this exercise) is WEBSERVER1. We've created a CNAME for it that is WEB1. In the past, if the user went to http://WEB1/MyApp/ or http://WEBSERVER1/MyApp/ it would work. Suddenly yesterday it started behaving badly. Normal users started getting the Windows challenge/response prompt (and even if they entered the info they would get a 401 error).
My WCF service runs in a site that enables anonymous access (and this has always worked).
My Silverlight application runs in a site that uses Windows Integrated authentication (and this has always worked), since I capture the Windows username when users connect.
For the record, I did create a NEW application pool yesterday with an ASP.NET application that runs in it. This seems to work fine, but there is a chance creating this new application pool and application/directory has caused something to change.
I have a clientaccesspolicy.xml in my wwwroot folder, as well as in the folder for each of the two applications above (just in case). I have tried to promote NTLM over Negotiate as a provider (as that worked for another issue I was having on another server).
After trying some changes, I can't even get the thing to behave the same each time I call it. Sometimes it will prompt me for credentials. Other times it will work, but then say it failed to connect with the WCF service with a "not found". Other times it will actually work fine, but only if I am using the actual server name and not the CNAME. When using the CNAME I always get the cross-domain error, even though I have the cross-domain XML files in every directory root.
This is a nightmare, and makes advanced algorithm analysis seem fun and easy by comparison. Did Microsoft realize how difficult they made this combination of (IIS7/WCF/Silverlight/providers/permissions/cryptic or missing error messages) to get to work??
I found a solution that appears to be working.
In this case, I had to change the authentication mode for the default web site (which hosted the clientaccesspolicy.xml file) from anonymous access to Windows Integrated. I don't understand why this worked for a year or so and then stopped, but it seems to have resolved it.
The new application that I had deployed yesterday was a standard ASP.NET web application, which I put in its own application directory and its own application pool to ensure that it would not cause this sort of issue. I'm still not even sure if it did.
The way I resolved it was by trying to navigate from my PC to the actual http://servername/clientaccesspolicy.xml file, and that was giving me a 401 error. I switched from anonymous to Windows Integrated on that default website (which has nothing in it except that XML file) and that resolved the permission issue. I then had to grant the actual AD groups read access to that folder (otherwise they got the username/password prompt and could not get through).
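For anyone hitting the same thing, the manual check above can be scripted; here is a rough sketch (Node 18+ global fetch assumed, host names as in my setup) that requests clientaccesspolicy.xml via both the server name and the CNAME and prints the status codes. Note it sends no Windows credentials, so under Integrated auth a 401 here is expected; a domain-joined browser is the real test.

```ts
// probe.ts - rough sketch of the manual check described above: request
// clientaccesspolicy.xml via both the server name and the CNAME and print
// the HTTP status. Host names are the ones from my setup.
const hosts = ['WEBSERVER1', 'WEB1'];

async function probe(host: string): Promise<string> {
  const url = `http://${host}/clientaccesspolicy.xml`;
  try {
    const res = await fetch(url); // Node 18+ global fetch assumed
    return `${url} -> HTTP ${res.status}`;
  } catch (err) {
    return `${url} -> request failed: ${(err as Error).message}`;
  }
}

Promise.all(hosts.map(probe)).then((lines) => lines.forEach((line) => console.log(line)));
```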