The webapp I'm building is medium-sized: a single-page static JS+HTML app (made with Backbone and served by nginx) that accesses an API hosted on a proper web server.
Should the API live under a different hostname, or under the same hostname but a different path? What are the possible pros and cons of each option? Both are feasible thanks to nginx.
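For reference, here is a minimal nginx sketch of the two layouts, assuming the API backend listens on localhost:8080 (hostnames, the port, and paths are placeholders):

# Option 1: same hostname, API under /api
server {
    listen 80;
    server_name example.com;
    root /var/www/app;                        # the static Backbone app

    location /api/ {
        proxy_pass http://127.0.0.1:8080/;    # trailing slash strips /api before forwarding
        proxy_set_header Host $host;
    }
}

# Option 2: separate hostname for the API
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}

With the second layout the static app on example.com calls a different origin, so you would typically also need CORS headers or a cookie scoped to the parent domain.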
I would suggest an intuitive, separated setup. Splitting the access locations into example.com and api.example.com allows each hostname to describe the purpose of its environment. Separating them keeps things organised and clear, while using the same hostname for both could cause confusion about what kind of request is being made.
Using example.com/api is possible too, but it could lead to issues later if paths are used for other things as well. E.g., would example.com/newfeature then get its own example.com/newfeature/api?
In the end, though, it's largely a matter of personal preference. Pick whatever works most clearly for your environment.
I think the question is somewhat moot, as long as your code is flexible about the base URL of the API. Make sure you can configure your code (both the JavaScript and the back end) so that all API URLs are relative to a single configuration parameter, and you will have the flexibility to put your API service anywhere you want or need to put it.
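As a minimal sketch of that idea on the JavaScript side (the APP_CONFIG name and /users path are just placeholders):

// config.js - the only place that knows where the API lives
window.APP_CONFIG = {
    apiBase: 'https://api.example.com'   // or '/api' if served from the same hostname
};

// everywhere else, URLs are built from that single parameter
var Users = Backbone.Collection.extend({
    url: function () {
        return window.APP_CONFIG.apiBase + '/users';
    }
});

Moving the API later then only requires changing apiBase (and the matching setting on the back end).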
I tend to think it might be a good idea to have everything on the same hostname, because the user might have disabled third-party cookies, in which case the API server won't be able to recognize you after you close your browser. Before anyone tells me the main website should serve the cookies instead: I'd like the main website to be completely static HTML/JS files, so it has no way to set httpOnly cookies, which are the kind of cookies I like.
I am using Apache httpd and proxying requests to my Tomcat server where my application WARs are deployed.
Let's say I have an application App, a servlet URL pattern /servlet1, and the domain name abc.com. When I forward a request from my ROOT.war servlet to /App/servlet1, the URL changes to abc.com/App/servlet1, but I would prefer it to stay abc.com for a better user experience.
I know I could do this by renaming App to ROOT.war, but that is not an option for me, as we already have a ROOT.war for another application.
Is it possible to rewrite abc.com/App/servlet1 to abc.com other than by renaming App to ROOT.war? If so, how do I do it?
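For context, the httpd-to-Tomcat proxying described above presumably looks something like this (the port and paths are assumptions):

# httpd.conf, with mod_proxy and mod_proxy_http enabled
<VirtualHost *:80>
    ServerName abc.com

    # Everything is forwarded to Tomcat; the /App context path
    # still shows up in the browser's URL.
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>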
The way to do this is to merge your ROOT and App applications together into a single application.
No servlet container is going to be able to detect that some URLs should go to one application and others to another without some obvious mapping strategy. The servlet specification mandates the use of URL prefixes (context paths) to differentiate between deployed web applications: you cannot mix them together unless they are in fact the same application.
There are very, very ugly ways to do this, but those techniques end up opening a whole lot of headaches and continued hacks just to get around what sounds like a pointless requirement: making URLs pretty. Nobody cares how pretty a URL is. Make sure that abc.com takes the user to the right place and don't worry about the rest of it.
I'm trying to set up a web service that needs the user's Google Latitude info, so I'm using Google OAuth to handle the user authorization.
However, when trying to set the redirect URI in the Google APIs Console for a web application client ID, I get an error message if I try to set it to 'http://PUBLIC_IP/'.
I need to test it with non-local users (so localhost can't be used), so I would like to know whether having a web domain is mandatory in order to use Google's OAuth. If not, how can I solve this issue?
This is not currently supported. I filed a feature request and will update on progress.
Update: Essential app verification activities have continued to make support of IP address-based apps unlikely. These verification activities are necessary to provide protections against abuse of user accounts. In addition, the cost of setting up dedicated domains has been reduced significantly since this feature was requested. Please read other responses here about possible options.
You can use xip.io to work around it.
For example: '192.168.0.50.xip.io:3000' will resolve to '192.168.0.50:3000'
I ran into this issue too and so I entered a URL with a .com extension and also entered it into my /etc/hosts file. Works like a charm.
It totally sucks that my entire app now has to be developed on an apparently 'live' domain though.
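A sketch of the /etc/hosts trick, assuming a made-up domain such as myapp-dev.com (the name and IP are placeholders):

# /etc/hosts on the development machine
127.0.0.1    myapp-dev.com

You then register http://myapp-dev.com (plus the port, if any) as the redirect URI / JavaScript origin in the APIs Console; the name resolves locally even though it is never registered publicly.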
I used my public hostname. It helps if you have a static IP address. I used http://www.displaymyhostname.com/ to get my hostname. I plugged it straight into the Authorized JavaScript origins field when I created a new Web Application Client ID.
P.S. My hostname looked something like this: 111.111.111.111.static.exetel.com.au
You can use a dynamic DNS provider. I used ddns.net, which offers a free solution. Basically, you register an FQDN such as yourcompany.ddns.net and enter that as your domain; when it is looked up, ddns.net finds your entry in its database and returns your IP. Mine looks like https://wigwam.ddns.net and everything works fine. You don't need to buy a domain, you just map your known IP to the free hostname, and Google is happy with that.
Your IP must be static, of course.
Yes, as of now you still need to have a domain name to use Google OAuth in your application. If you have a static public IP and don't want to buy a domain name, you could use a free subdomain from FreeDNS to link to your public IP. Seemed to work well enough for me with a Django app.
Echoing what Breno said in response to his earlier comment:
Apologies for the lack of updates here. Essential app verification activities have continued to make support of IP address-based apps unlikely. These verification activities are necessary to provide protections against abuse of user accounts. In addition, the cost of setting up dedicated domains has been reduced significantly since this feature was requested. Please read other responses here about possible options.
You can read more about Google's app verification requirements [1] and Google's policies requiring secure handling of data [2].
[1] https://support.google.com/cloud/answer/9110914?hl=en
[2] https://developers.google.com/identity/protocols/oauth2/policies#secure-response-handling.
xip.io is not working anymore; as an alternative you can use nip.io in the same way. For example:
10.0.0.1.nip.io:8000 will resolve to 10.0.0.1:8000
It seems like xip.io is down, but there are alternatives such as sslip.io and nip.io. However, I couldn't get either of these to work.
I ended up hosting the main file server on the main machine, bound to its 192.168.1.xx LAN address. I then ran a server on each of the test machines (plus a second server on the main machine), all bound to the localhost address. Any request the localhost servers received was passed through to the 192.168.1.xx server, which allowed testing on all of the devices.
This should also work with public facing IP addresses.
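A minimal sketch of that pass-through arrangement; the poster doesn't say which server was used, so this assumes nginx, and the addresses and ports are placeholders:

# On each test machine: a localhost server that forwards everything
# to the main file server on the LAN.
server {
    listen 127.0.0.1:8080;

    location / {
        proxy_pass http://192.168.1.10:8080;   # the main machine's LAN address
        proxy_set_header Host $host;
    }
}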
Consider a website that requires user authentication: every page has a "welcome username" string that is different for each user (like most websites these days).
The application caches different page components and sets correct Last-Modified headers; static content is served by a different machine running nginx.
I think a caching reverse proxy would slow the website down in this case. Is this assumption wrong? Am I missing something? Could a reverse proxy still improve performance?
You could serve the custom content ("Hello [user]") via JSON/JavaScript. That way the dynamic part of your server only has to answer "What's my username?" rather than "Customise the homepage for user x", and the rest of the page stays cacheable.
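A minimal sketch of that approach, assuming a hypothetical /api/me endpoint that returns the current user's name as JSON:

// Runs in the static, cacheable page; only this small request hits the dynamic backend.
// Assumes an element with id="welcome" in the markup.
fetch('/api/me', { credentials: 'same-origin' })
    .then(function (response) { return response.json(); })
    .then(function (user) {
        document.getElementById('welcome').textContent = 'Welcome ' + user.name;
    });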
This has been covered in other threads, for example: https://serverfault.com/questions/87953/what-are-the-pros-and-cons-of-using-http-for-web-server-communication-with-app-se
One of YSlow's rules is to use cookie-free domains to serve static files.
"When the browser requests a static
image and sends cookies with the
request, the server ignores the
cookies. These cookies are unnecessary
network traffic. To workaround this
problem, make sure that static
components are requested with
cookie-free requests by creating a
subdomain and hosting them there." --
Yahoo YSlow
I interpret this to mean that I could experience performance gains if I move www.example.com/images to static.example.com/images.
Although this is easy to do, I would lose the handy ability within my content management system (Joomla/WordPress) to easily reference and link to these images.
Is it possible to use .htaccess to redirect all requests for a particular folder on www.example.com to a folder on static.example.com instead? Would this method also fool the CMS into thinking the images were located in the default locations on its own domain?
Is it possible to use .htaccess to redirect all requests for a particular folder on www.example.com to a folder on static.example.com instead?
Possible, but counterproductive: the client would have to make an HTTP request, get the redirect response, then make another HTTP request.
This costs a lot more than the single line of cookie data saved!
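For reference, the kind of rule being asked about would look roughly like this (the folder and target are placeholders); as explained above, it only adds an extra round trip:

# .htaccess on www.example.com (mod_alias)
RedirectMatch 301 ^/images/(.*)$ http://static.example.com/images/$1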
Would this method also fool the CMS into thinking the images were located in the default locations on its own domain?
No.
Although this is easy to do, I would lose the handy ability within my content management system (Joomla/WordPress) to easily reference and link to these images.
What you could try is to create a plugin in Joomla that dynamically creates these references.
For example, a plugin that, whenever you enter {dynamic_path path} in an article, prepends 'static.example.com/images' to the path provided. Then, every time you need to change the server path, you just change it in the plugin. For the links that are already in the database, you can try using phpMyAdmin to convert them to this structure.
You still lose the WYSIWYG ability in TinyMCE, but it is an alternative.
In theory you could create a virtual domain that points directly to the images folder, such as images.example.com. Then in your CMS (hopefully at the theme layer) you could replace any paths that point to the images folder with an absolute path to the subdomain.
The redirects would cause far more network traffic, and far more latency, than simply leaving things as they are.
It would redirect the request but the client would still be sending its cookies to the server, so really you accomplished nothing. You would have to directly access the files from a domain that isn't storing cookies for it to work.
What you really want to do is use staticexample.com/images instead of static.example.com/images, so that you don't pick up any cookies that may have been set for the example.com domain (cookies set for example.com are also sent to its subdomains). If all you do is serve images from that domain with a simple Apache server or something, then you can configure that server not to return even a session cookie.
The redirects are a very bad idea. Cookies cause some performance hits but round trips to the server such as a redirect would cause are a much more serious performance issue.
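A minimal sketch of such a cookie-free static host, assuming Apache and a placeholder document root:

# Virtual host that only serves files; no application code runs here,
# so no session cookie is ever set for this domain.
<VirtualHost *:80>
    ServerName staticexample.com
    DocumentRoot /var/www/static

    # Belt and braces: strip any Set-Cookie header that slips through (needs mod_headers)
    Header unset Set-Cookie
</VirtualHost>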
I did the following and it worked:
# Apply only to files that are NOT images; FilesMatch does not support a leading "!"
# to negate the pattern, so a negative lookahead is used instead.
<FilesMatch "^(?!.*\.(gif|jpe?g|png)$)">
    php_value session.cookie_domain example.com
</FilesMatch>
This means the session cookie domain is only set for non-image files, so the images are served cookie-free.