I am using quintolabs qlproxy for web filtering. How can I whitelist OneDrive so it stays synchronized? What are the URLs and IPs to whitelist?
It seems the issue is that the OneDrive application uses SSL pinning and thus does not accept the mimicked SSL certificate from your Squid proxy. A similar issue with Dropbox is explained at http://docs.diladele.com/faq/squid/dropbox.html.
The same error will be present in all SSL-inspecting web filters. For example, judging from a message on the Sophos (Astaro) UTM support forum, the list of domain names to exclude is quite large (see https://www.astaro.org/gateway-products/network-protection-firewall-nat-qos-ips/56579-microsoft-onedrive.html):
skyapi.live.net
storage.live.com
skydrive.live.com
shared.live.com
onedrive.live.com
Please note the list may not be complete. The best approach is to fire up Wireshark or (better) Microsoft Message Analyzer on the machine where OneDrive is installed and see which domain names are sent to the proxy when the OneDrive application starts. Then exclude these from SSL bump, as sketched below.
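For illustration, here is a minimal squid.conf sketch of such an exclusion, assuming a plain Squid 3.5+ peek-and-splice setup (qlproxy may generate these directives for you, and the acl names are just examples):

    # Never bump (mimic certificates for) OneDrive domains; splice them through untouched.
    acl onedrive_nobump ssl::server_name .live.com .live.net
    acl step1 at_step SslBump1
    ssl_bump peek step1
    ssl_bump splice onedrive_nobump
    ssl_bump bump all

On older Squid versions without peek-and-splice you would use a dstdomain acl together with ssl_bump none instead.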
I'm trying to set up a web service that needs the user's Google Latitude info, so I'm using Google OAuth to handle the user authorization.
However, when trying to set the redirect URI in the Google APIs Console for a web application client ID, I get an error message if I try to set it to 'http://PUBLIC_IP/'.
I need to test it with non-local users (so localhost can't be used), so I would like to know whether having a web domain is mandatory in order to use Google's OAuth. If not, how can I solve this issue?
This is not currently supported. I filed a feature request and will update on progress.
Update: Essential app verification activities have continued to make support of IP address-based apps unlikely. These verification activities are necessary to provide protections against abuse of user accounts. In addition, the cost of setting up dedicated domains has been reduced significantly since this feature was requested. Please read other responses here about possible options.
You can use xip.io to work around it.
For example: '192.168.0.50.xip.io:3000' will resolve to '192.168.0.50:3000'
I ran into this issue too, so I entered a URL with a .com extension and also added it to my /etc/hosts file. Works like a charm.
It totally sucks that my entire app now has to be developed on an apparently 'live' domain though.
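For reference, the hosts-file half of this trick is just a one-line mapping (the domain and IP below are made up):

    # /etc/hosts on the development machine
    127.0.0.1    myapp.example.com

The same made-up domain is what you enter in the Google APIs Console; only machines with this hosts entry will resolve it to your dev box.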
I used my public hostname. It helps if you have a static IP address. I used http://www.displaymyhostname.com/ to get my hostname. I plugged it straight into the Authorized JavaScript origins field when I created a new Web Application Client ID.
P.S. My hostname looked something like this: 111.111.111.111.static.exetel.com.au
You can use a dynamic DNS. I used ddns.net, which offers a free solution. Basically, you enter your FQDN, e.g. yourcompany.ddns.net, as your domain. When the name is resolved, ddns.net looks up your hostname in its database and returns your IP. So mine looks like this: https://wigwam.ddns.net and everything works fine. You don't need to buy a domain, you can substitute your known IP, and Google is happy with that.
Your IP must be static, of course.
Yes, as of now you still need to have a domain name to use Google OAuth in your application. If you have a static public IP and don't want to buy a domain name, you could use a free subdomain from FreeDNS to link to your public IP. Seemed to work well enough for me with a Django app.
Echoing what Breno said in response to his earlier comment:
Apologies for the lack of updates here. Essential app verification activities have continued to make support of IP address-based apps unlikely. These verification activities are necessary to provide protections against abuse of user accounts. In addition, the cost of setting up dedicated domains has been reduced significantly since this feature was requested. Please read other responses here about possible options.
You can read more about Google's app verification requirements [1] and Google's policies requiring secure handling of data [2].
[1] https://support.google.com/cloud/answer/9110914?hl=en
[2] https://developers.google.com/identity/protocols/oauth2/policies#secure-response-handling.
xip.io is not working anymore. As an alternative, you can use nip.io the same way. For example:
10.0.0.1.nip.io:8000 will resolve to 10.0.0.1:8000
It seems like xip.io is down, but there are alternatives such as sslip.io and nip.io. However, I couldn't get either of these to work.
I ended up hosting the main file server on the main machine, running it on a 192.168.1.xx IP address. I then ran a small server on each of the test machines (including a second one on the main machine), all listening on the localhost address. Any request that a localhost server received was passed off to the 192.168.1.xx server, which allowed testing on all of the devices; a rough sketch of such a forwarder follows below.
This should also work with public-facing IP addresses.
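Roughly, each localhost server did nothing more than forward requests to the main server. A minimal Java sketch of that idea (the backend address and ports are placeholders, and only GET requests are handled here):

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class LocalForwarder {
        // Placeholder address of the main file server on the LAN.
        private static final String BACKEND = "http://192.168.1.50:8000";

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // Listen on localhost only; this is the address the browser talks to.
            HttpServer server = HttpServer.create(new InetSocketAddress("127.0.0.1", 3000), 0);
            server.createContext("/", exchange -> {
                try {
                    // Re-issue the incoming GET against the backend and relay the response body.
                    HttpRequest proxied = HttpRequest.newBuilder(
                            URI.create(BACKEND + exchange.getRequestURI())).GET().build();
                    HttpResponse<byte[]> resp =
                            client.send(proxied, HttpResponse.BodyHandlers.ofByteArray());
                    byte[] body = resp.body();
                    exchange.sendResponseHeaders(resp.statusCode(), body.length == 0 ? -1 : body.length);
                    try (OutputStream out = exchange.getResponseBody()) {
                        out.write(body);
                    }
                } catch (InterruptedException e) {
                    // Backend call was interrupted; report a gateway error.
                    exchange.sendResponseHeaders(502, -1);
                }
            });
            server.start();
        }
    }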
I installed my SSL certificate yesterday. However, I still get the SSL warning (triangle) icon. The reason given is that "the page includes other resources which are not secure".
I am not sure what that means but my assumption is that it has something to do with some text inputs which are not secure.
Any information or resources to make me understand more and figure out how to secure everything will be helpful. I don't like the warning there (especially on the signup page) and need to figure out what's the issue. Thanks.
You need to make sure not to embed any resources via http:// - use only https://.
If you embed external resources which are available via both HTTP and HTTPS, you can use protocol-relative URLs such as //domain.tld/whatever - they'll be loaded over the protocol that's currently used.
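For example, an image that is currently embedded like this (the host is just an example):

    <img src="http://images.example.com/logo.png">

would become either of the following, so it is fetched over HTTPS (or at least over the page's own protocol):

    <img src="https://images.example.com/logo.png">
    <img src="//images.example.com/logo.png">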
In a customized Login Module I've developed for my application server (GlassFish 3.1.2.2), I'm using the following syntax to obtain the HttpServletRequest:
PolicyContext.getContext(HttpServletRequest.class.getName())
And it works fine.
But now I'm configuring the server to use only HTTPS, and the same instruction returns null.
I guess this is a security restriction, but I'm not sure what needs to be changed in order to solve this issue (server.policy?).
To put this in context: I need to record the IP address of all login attempts, valid and invalid, and getting the request in the module seemed the most obvious solution.
Can someone help me to figure out a solution?
I can't help you directly with your question, but you may want to note that PolicyContext is a JACC class. It's spec'ed to work inside JACC policy providers. You may want to look at an article I wrote that explains this more in depth.
There is thus no specific guarantee that obtaining the HttpServletRequest works from inside a GlassFish proprietary login module, although I have indeed seen people use this approach and it typically works. The fact that it stops working when you switch to HTTPS sounds more like a bug or oversight to me than any specific security restriction.
A workaround for you could be to rewrite your login module as a Java EE standard auth module using JASPIC. I've also written an article about that subject which you could use for reference. In JASPIC you explicitly have access to the HttpServletRequest.
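To give an idea of what that looks like, here is a minimal JASPIC ServerAuthModule sketch (the class name and the println logging are only illustrative, and the actual credential check is omitted); the request is handed to the module directly, so you could record the client IP of every login attempt:

    import java.util.Map;
    import javax.security.auth.Subject;
    import javax.security.auth.callback.CallbackHandler;
    import javax.security.auth.message.AuthException;
    import javax.security.auth.message.AuthStatus;
    import javax.security.auth.message.MessageInfo;
    import javax.security.auth.message.MessagePolicy;
    import javax.security.auth.message.module.ServerAuthModule;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class IpLoggingAuthModule implements ServerAuthModule {

        private CallbackHandler handler;

        @Override
        public void initialize(MessagePolicy requestPolicy, MessagePolicy responsePolicy,
                               CallbackHandler handler, Map options) throws AuthException {
            this.handler = handler;
        }

        @Override
        public Class[] getSupportedMessageTypes() {
            return new Class[] { HttpServletRequest.class, HttpServletResponse.class };
        }

        @Override
        public AuthStatus validateRequest(MessageInfo messageInfo, Subject clientSubject,
                                          Subject serviceSubject) throws AuthException {
            // No PolicyContext lookup needed: the request is part of the message being validated.
            HttpServletRequest request = (HttpServletRequest) messageInfo.getRequestMessage();

            // Record the client IP for every authentication attempt (works over HTTP and HTTPS).
            System.out.println("Login attempt from " + request.getRemoteAddr());

            // ... perform the actual credential check here and populate clientSubject
            //     via the CallbackHandler (CallerPrincipalCallback / GroupPrincipalCallback) ...

            return AuthStatus.SUCCESS;
        }

        @Override
        public AuthStatus secureResponse(MessageInfo messageInfo, Subject serviceSubject) {
            return AuthStatus.SEND_SUCCESS;
        }

        @Override
        public void cleanSubject(MessageInfo messageInfo, Subject subject) {
        }
    }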
I have a problem with my site after implementing SSL: images do not appear. The images come from images.domain.com (hosted on Amazon S3), and my certificate is for www.domain.com.
This problem only seems to happen in IE and not in any other browsers.
The issue is related to "mixed content" - HTTPS pages which have HTTP resources (images, scripts, etc) embedded.
The point of using HTTPS is to ensure that only the originating server and the client have access to the secured page. However, in theory this security could be compromised if HTTP resources are embedded: an attacker could intercept an unsecured JavaScript file and inject code that alters the secured page when it loads.
Most browsers will indicate that a secure page has mixed content by altering the "secure lock" icon, either by showing the lock as open or broken, or by making the icon red (Chrome displayed a skull and crossbones for a short time, but they realised that this was a bit too dramatic for the potential threat level).
Internet Explorer (depending on the version) will display a message either asking whether the insecure content should be shown (IE <= 7) or whether only the secure content should be shown (IE >= 8). It sounds like you have somehow configured this prompt to always hide the insecure content; however, that's not the default behaviour.
I think the best solution for you is to replace your S3 links with HTTPS versions.
I am not a web developer, but someone who often deals with the crap experience that is IE. I am not sure what version you are using, but you do not have a wildcard SSL cert (i.e. *.domain.com), so does it have something to do with an old-school limitation in 3rd party images?
See here for what I allude to above and a very good explanation of how IE caches cross-domain HTTPS content, specifically images. I am not sure what the solution is, but I was curious so I researched a little myself and this might help.