I have been having some issues connecting to the Developer WAB application from the Enterprise instance I am currently using. I have followed all of the steps outlined in the guide provided by Esri here and seem to be running into an authentication loop in my browser.
There is an error in the browser console stating: 'No 'Access-Control-Allow-Origin' header is present on the requested resource'.
The error shown in Web AppBuilder for ArcGIS says that no token was found, so it redirects back to the 'setportalurl' page. Any ideas on how this can be resolved?
No token is found, redirect /webappbuilder/ to /webappbuilder/?action=setportalurl
Cheers
As the Developer WAB uses the machine name and port for its domain, ArcGIS is rejecting the request for security reasons.
If you paste the full origin into the "Allow Origins" section of your portal's security settings and save, this should then work properly. The origin is the protocol, machine name and port that WAB Developer Edition runs on, typically something like https://yourmachine.example.com:3344.
Might not be applicable to your scenario, but for people getting the same error when working from localhost, this can be an issue with WAB Dev Edition's self-signed certificate.
The solution for me was to enable the Chrome flag:
chrome://flags/#allow-insecure-localhost
I created a custom domain mapping for my Cloud Run service following this guide https://cloud.google.com/run/docs/mapping-custom-domains.
I can access my service via the https run.app URL and the custom domain via HTTP, but when I go to the custom domain via HTTPS, I get back a Google 404 error page.
The weird thing is, this seems to be an issue only on my local laptop (both in the browser and with curl in the terminal); curl-ing it from a remote server seems to work OK.
As #LundinCast pointed out, there seems to be an outage on the Google server side. I'll monitor the situation and mark this as resolved for now.
Edit: I'm guessing this is related to https://status.cloud.google.com/incident/cloud-networking/19016
I know there are about a hundred questions about this on SO, but none of them seem to be up to date with what is happening on the Facebook platform right now. It seems the switch that turns off SSL enforcement is disabled:
It may be hard to see, but the "Enforce HTTPS" toggle is greyed out and can't be toggled. I'm all for enforcing HTTPS in production, but is everyone who is building against the Facebook API really setting up an SSL certificate on their local server just for this?
You will still be able to use HTTP with “localhost” addresses, but only while your app is still in development mode.
You can change the app mode to Development Mode from the App Dashboard:
In this mode you can only test your application with Facebook test user accounts. You can obtain the test accounts' login credentials from your app dashboard.
Please note, http://localhost redirects are automatically allowed while in development mode only and do NOT need to be added to the Valid OAuth Redirect URIs section.
Read more about it in this Facebook blog post.
2021 update: Facebook no longer allows localhost over HTTP. You will need to get your site working locally over HTTPS for testing. This is despite their blog post and the Facebook developer console itself assuring you that they allow localhost over HTTP by default.
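If you are not using Create React App, one option is to serve your local test page over HTTPS yourself. Here is a minimal sketch using Node's built-in https module, assuming you have generated key.pem and cert.pem locally (for example with mkcert or openssl); the file names and port are placeholders:

const https = require('https');
const fs = require('fs');

const options = {
  // Self-signed key/certificate generated for local testing (placeholders)
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem'),
};

https.createServer(options, (req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<p>Served over https://localhost:3000</p>');
}).listen(3000, () => {
  console.log('Listening on https://localhost:3000');
});

The browser will still warn about the self-signed certificate unless you trust it on your machine (or use the Chrome flag mentioned below).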
Paste this into the scripts section of your client's package.json (this is the Windows form; on macOS/Linux use "HTTPS=true react-scripts start"):
"start": "set HTTPS=true&&react-scripts start",
Next, copy and paste this into your URL bar:
chrome://flags/#allow-insecure-localhost
and set "Allow invalid certificates for resources loaded from localhost" to Enabled.
The simplest way to test your Facebook login, since you can no longer disable the "Enforce HTTPS" option, is to use ngrok:
ngrok.com
I'm a Linux user. After installing it, just type in your terminal:
ngrok http 80
and a new HTTPS domain will automatically be created just for your local project. You will see a UI in your terminal, and your secure domain is the one that starts with https://
Copy that domain and use it for your app on developers.facebook.com to see whether your code works or not.
If it does, you're OK; keep going like this until you host your project on a secure domain.
For more info and docs about ngrok, see:
ngrok docs
This setting requires HTTPS for OAuth redirects, and it requires that Facebook JavaScript SDK calls that return or require an access token come only from HTTPS pages. All new apps created as of March 2018 have this setting on by default, and you should plan to migrate any existing apps to use only HTTPS URLs by October 6, 2018.
Most major cloud application hosts provide free and automatic configuration of TLS certificates for your applications. If you self-host your app or your hosting service doesn't offer HTTPS by default, you can obtain a free certificate for your domain(s) from Let's Encrypt.
https://developers.facebook.com/docs/facebook-login/security
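For context, this is the kind of call the requirement applies to. A rough sketch of a standard JS SDK login, where YOUR_APP_ID and the API version are placeholders, and the page itself has to be served over HTTPS (or from localhost while the app is in development mode):

// Assumes the SDK script (https://connect.facebook.net/en_US/sdk.js) is loaded on the page
window.fbAsyncInit = function () {
  FB.init({
    appId: 'YOUR_APP_ID', // placeholder
    cookie: true,
    xfbml: true,
    version: 'v3.2',
  });
};

// Call this from a button click so the login popup is not blocked,
// e.g. <button onclick="logInWithFacebook()">Log in</button>
function logInWithFacebook() {
  // FB.login returns an access token, so it falls under the HTTPS requirement
  FB.login(function (response) {
    if (response.authResponse) {
      console.log('Access token:', response.authResponse.accessToken);
    }
  });
}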
I have a react-native application which I'm hooking up to an existing API that already has two clients (a web app and a Chrome extension). For some reason I just get the generic [TypeError: Network request failed] with nothing more.
The requests that are failing are just basic GET requests such as
fetch('https://api.mydomain.com/pages/') or axios.get('https://api.mydomain.com/pages/'), and they work fine from the web app and Chrome extension that use this API as well - the requests only fail from the react-native application.
Everything I'm finding on Google about this relates to localhost or SSL certificate problems, but those don't seem to be my issues: I'm pulling from a deployed server, that server has SSL correctly enabled, and I'm using the HTTPS endpoint.
Some more notes:
When I do go against localhost (using the IP address, not the localhost address), I get this same error.
I'm getting this error on Android; I haven't touched iOS yet and am not concerned with it for now.
I get this same error with both fetch and axios.
The request goes through OK with this endpoint https://jsonplaceholder.typicode.com/posts/1 and I get back a response.
This last note is the most interesting, because I believe it means there is some issue with my server; however, my server is not receiving any request... I have opened up CORS for testing purposes but have the same issue, and if CORS were the issue the server would have received the request and responded with a 403.
This has to do with Android not trusting my SSL certificate. Apparently Android has some additional trust requirements on top of what web browsers require.
I found this through error.request._response via the axios catch block, which showed me the error java.security.cert.CertPathValidatorException: Trust anchor for certification path not found.
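Something like this is enough to surface the native message (a sketch; the endpoint is the placeholder URL from the question):

import axios from 'axios';

axios.get('https://api.mydomain.com/pages/')
  .then(response => console.log(response.data))
  .catch(error => {
    console.log(error.message); // only the generic network failure message
    // On Android, the underlying Java exception text (e.g. the CertPathValidatorException)
    // shows up on the native response object
    console.log(error.request && error.request._response);
  });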
After figuring that out, the root cause ended up being that the SSL certificate I uploaded to AWS didn't have the correct intermediate cert, which was fine in Chrome but not on Android.
I'm trying to build a hotspot with MikroTik to provide internet access to my clients! The problem starts when I try to access sites with HTTPS security, like Facebook, before the user authenticates.
With a normal HTTP connection the hotspot works fine, but when I use HTTPS I get an error.
Can someone please help me? I have read all the docs on the MikroTik forum; nothing worked!
It's good news that nothing worked, because that is the whole purpose of HTTPS: to ensure that the site you ask for is the site you get. A hotspot does exactly the reverse: you ask for one website and get another (the hotspot landing page), hence the error.
There is no workaround short of installing your certificate on each client, which is not doable in a hotspot environment.
Fortunately, the problem has been addressed by CNAs (Captive Network Assistants), which detect the presence of a hotspot and launch an automatic HTTP request before the user has time to open their own browser and navigate to Facebook. The latest iOS/Android/Windows versions do this automatically.
I'm working on a web application that uses the Google Earth plugin. Recently, a new requirement to have non-public users log on was added, which meant that some users were now using the site over HTTPS. Among the things that broke in testing were the custom placemark icons (they were working over HTTP).
The icons are hosted on the same server that serves the page.
Here are the URLs for each protocol.
http - http://localhost/Images/yellow.png
https - https://localhost/Images/yellow.png
I can follow that link and the image will appear as you would expect.
The images' hrefs are declared in icon styles in dynamically generated KML.
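To illustrate, the generated KML presumably declares something along these lines; the helper below is hypothetical, and the href is the same-server image URL from above:

// Hypothetical sketch of how the icon style might be emitted in the generated KML;
// the href must be reachable over https when the page itself is served over https.
function buildIconStyle(styleId, imageUrl) {
  return `
    <Style id="${styleId}">
      <IconStyle>
        <Icon>
          <href>${imageUrl}</href>
        </Icon>
      </IconStyle>
    </Style>`;
}

// e.g. buildIconStyle('yellowPin', 'https://localhost/Images/yellow.png')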
I want to avoid loading the images over HTTP because I think that will cause Internet Explorer to present the user with a mixed-content warning.
How do I get the images to load properly while using https?
I have been wrestling with this myself -- the short answer is that this won't work. If the content is served off of an HTTPS site that generates any kind of error/prompt (authentication, invalid certificate, etc.) the plugin will simply not load the content.
Interestingly, the desktop client works fine and prompts the user for credentials if necessary. However, neither client will allow content to be served off of a site with an untrusted certificate.
The only workaround that I have found is:
Use a trusted HTTPS certificate on the server hosting the content (either trust the certificate on the client systems or just use a real certificate).
Do not use HTTPS basic auth, as that will always generate 401/challenge responses, which the web browser client will simply ignore.
If authentication is a requirement, use NTLM authentication and common (e.g., domain) logins. If you load the plugin in Internet Explorer (or in a .NET WebBrowserControl), the authentication will be handled properly and the images will show up.
I was at a Google Earth administrator's training last week and the trainer confirmed this "bug". It is supposed to be fixed in the next version of the plugin (it may actually be fixed already -- what version of the plugin are you using?)