I'm developing an automation tool using JavaScript/jQuery.
To manipulate the DOM I've tunneled all iframe/browser access through a proxy server so that everything is on the same domain.
All this is working fine! But my endpoint app is a transactional JSP/servlet database application, and I want multiple concurrent sessions to it.
I guess that because everything is tunneled through the proxy, all access shares the same session, which is not desirable: I need multiple independent sessions to the app.
I'm trying to figure out how to get a unique session ID for each iframe/browser pointing to the same web app through the same (?) proxy server, roughly:
iframe ---\
iframe -----> browser ---> apache proxy ---> jsp transactional app
iframe ---/
I was sniffing the traffic in Firefox (Firebug) and all iframes have the same session ID. That's not exclusive to iframes: even if I start another browser and use the link passing through the proxy, I keep the same session ID.
Using Apache HTTP Server 2.2.20 (win32).
Proxy config (if useful):
ProxyPass /bbb http://xxx/bbb/
ProxyPassReverse /bbb/ http://xxx/bbb/
Do the iframes' src attributes point to the same domain or subdomain?
Remember that the session is implemented through cookies, and that cookies are shared across the domain and subdomains they belong to, e.g.:
If the cookie belongs to yourdomain.com, then subdomain.yourdomain.com has access to it,
but
if the cookie belongs to subdomain.yourdomain.com, then yourdomain.com or subdomain1.subdomain.yourdomain.com DOES NOT have access to it.
And it doesn't matter whether it's an iframe or another browser window or tab...
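As a sketch of the two cases (the session id value is hypothetical), the difference is whether the Set-Cookie header carries a Domain attribute:

```
# Host-only cookie (no Domain attribute): sent back only to the exact
# host that set it
Set-Cookie: JSESSIONID=abc123; Path=/

# Domain cookie: sent to yourdomain.com AND all of its subdomains
Set-Cookie: JSESSIONID=abc123; Domain=yourdomain.com; Path=/
```

Since all your iframes go through the same proxy hostname, they all match the same cookie and therefore share one session.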
Related
I have an apache web server (frontend).
If someone enters https://myurl/ticket/123, for instance, I want Apache to serve an HTML page and the value "123". If someone enters https://myurl/ticket/789, I want Apache to serve the same HTML page and the value "789".
This value is then used by the browser to make a request to a backend server (node.js) along with a token. The backend server then serves the ticket data from ticket #123 or ticket #789 respectively.
My questions: how do I configure Apache to accept the dynamic ticket id in the URL? And how do I pass the value from the URL to the browser?
I know it's possible to use apache as a reverse proxy, that's not what I want (because of the token).
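One possible sketch, assuming the static page lives at /ticket.html under the document root (names are illustrative): use mod_rewrite to serve the same page for every ticket URL.

```apache
# Serve the same static page for any /ticket/<id> URL
RewriteEngine On
RewriteRule ^/ticket/[0-9]+$ /ticket.html [PT]
```

The page's own script can then recover the id from the address bar, e.g. `window.location.pathname.split("/").pop()`, and send it together with the token to the node.js backend.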
We have a Java/Jetty server. The servlets on this server are called by some of our internal applications over http.
I have been asked to create a webapp /website which will use many of these servlets / api.
However, this is an external, customer-facing website and needs to be served over https/SSL. The servlet URLs look like
http://internalServer:9999?parameters.
Now my webapp is ready and has been deployed on Apache on Debian. Everything works fine, but as soon as I enable
https/SSL the backend calls do not go through. In Chrome I get "Mixed content. Page was loaded over https but is requesting resource over http...". In Safari I get "could not load resource due to access control checks".
I understand the reasons for these errors but I would like to know ways to solve this.
I have full control over apache server and website code.
I have very limited control over the internal Jetty server and no control over the servlet code (I don't want to mess with existing apps).
Is there something I can do with Apache configuration alone? Can I use it as a reverse proxy for the Jetty (http) server?
Thanks for your help.
"Mixed content. Page was loaded on https but is requesting resource over http..."
That error message means your HTML has resources that are being requested over http://... specifically.
You'll need to fix your HTML (and any references in JavaScript and CSS) so that those resources are requested over https://... as well.
If you try to call an http service from an https site you will get a mixed content error.
You can avoid that error using Apache proxy settings inside your example.org.conf,
which you can find in the folder /etc/apache2/sites-enabled.
Add something like:
<VirtualHost *:443>
...
ProxyPass /service1 http://internalServer:9999
ProxyPassReverse /service1 http://internalServer:9999
</VirtualHost>
From your https site you then fetch the URL
https://example.org/service1
to reach the service.
That way you can call your http services from an https site.
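As a small browser-side sketch (host names are the illustrative ones from the config above), the internal http servlet URL can be rewritten into the proxied https path before fetching:

```javascript
// Sketch: rewrite an internal http servlet URL into the proxied https
// path exposed by Apache (example.org and /service1 are assumptions).
function toProxiedUrl(internalUrl) {
  const u = new URL(internalUrl); // e.g. http://internalServer:9999/?id=42
  // Keep path and query string, but route them through the proxy prefix.
  return "https://example.org/service1" + u.pathname.replace(/\/$/, "") + u.search;
}

console.log(toProxiedUrl("http://internalServer:9999/?id=42"));
// → https://example.org/service1?id=42
```

The page then calls `fetch(toProxiedUrl(...))` instead of the http URL, so the browser only ever sees https requests.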
I am using a static site generator for my site, that means my entire site is static. All my resources and HTML files are referenced with the domain name prefixed, so that the CDN could be used.
But due to SEO concerns I disabled non-www access and redirect it to the www.domain.com variant. But now I apparently cannot use a CDN, because the origin server needs to be different from the public hostname.
Can a CDN be used for HTML files?
How can I deliver content through www.domain.com and use a CDN?
Can I give the CDN access to static.domain.com as an origin server, but deny access to other clients? Seems clumsy!
Any ideas?
Using Apache 2.2, trying to use the Level 3 CDN through my hosting company's site.
Depending on what you are able to configure on the CDN via your hosting company, the best way would be to override the Host header in the CDN settings.
So, first let's look at your DNS settings:
www should point to the CDN
origin should point to your web server.
Now, on the CDN you set your origin to origin.yourdomain.com and add (I can't tell you if this is possible in your setup) an "HTTP Host header override" of www.yourdomain.com. In some cases it's implemented the other way around, so you would "force IP-Host" to origin.yourdomain.com.
In both cases, what you want to achieve is this:
When an end user requests www.yourdomain.com, it is resolved to the CDN.
The CDN needs to fetch the content from your server, so it establishes a session on port 80 (assuming HTTP) to origin.yourdomain.com.
Once the port is open, the CDN sends (amongst others) an HTTP Host header with www.yourdomain.com (this is the name-based virtual host Apache sees and evaluates).
That way you can set up your web server in exactly the same way as you would without a CDN.
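On the origin server that simply means the name-based virtual host keeps matching on the public name (illustrative names, sketched in the same style as a normal non-CDN setup):

```apache
# The CDN connects to origin.yourdomain.com, but forwards
# Host: www.yourdomain.com, which this vhost matches on.
<VirtualHost *:80>
    ServerName www.yourdomain.com
    DocumentRoot /var/www/site
</VirtualHost>
```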
Our client has a set of (5-6) intranet/internet applications, either custom-developed or third-party, located on various web servers, which we cannot modify or control.
We have developed a web portal application (A), and the client wants all its other applications (B) to be accessed only via A, meaning that if a user enters application B's URL directly, they get an error page saying access is allowed only via A. So the user has to log in to application A and then click a link to application B to access it. This requirement was made for security reasons and so that A acts as an access gateway to the other applications (B).
Is this possible, and how can we implement it? Should we use another web server on top acting as a proxy to all the other applications (B), or is there a better solution? And if we use another web server as a proxy, should we implement the referrer logic with a user id/token approach combined with appropriate session cookies, so that application B's URL cannot be hacked and is unique for each user and session?
Sorry if I stated my questions unclearly or in the wrong way; I'm unfamiliar with network/system administration and web servers. I can provide more details where needed.
There are different approaches here:
1. Using a firewall, allow access to B's http(s) port only from A's IP address.
2. Set a Directory restriction in httpd.conf for application B's directory, like:
<Directory "/var/www/B">
AllowOverride None
Order allow,deny
Allow from <IP of A>
</Directory>
In application A, create a link (http://ip_A/accesstoB/somepath/script.php) that will be proxied to B using an .htaccess rule like:
RewriteRule ^accesstoB/(.*)$ http://<ip_B>/$1 [P]
In this example, a customer accessing the link http://ip_A/accesstoB/somepath/script.php will be proxied to http://ip_B/somepath/script.php.
You begin by restricting access to the B applications using web server config files or IP-based firewall restrictions.
Then you redirect all these requests to a new wrapper app you will develop.
With this wrapper app you do whatever authentication you like; then your wrapper app makes the http/https request (via libcurl etc.) and echoes the response.
I am developing a Rails application that uses an SSL connection. I am currently using third-party resources, js and css files, to implement a map (OpenStreetMap). I have already tried to import these resources (js and css) into my application, but the JavaScript code tries to access an external WMS via HTTP.
The problem is that Google Chrome is blocking access to third-party resources from HTTP when the application is in HTTPS.
So I disabled SSL on certain pages of the application and tried to force HTTP or HTTPS as desired.
I followed this blog: http://www.simonecarletti.com/blog/2011/05/configuring-rails-3-https-ssl/ and it works.
But when I force the HTTP protocol on the page where these resources are used, Google Chrome forces an HTTPS connection, causing an infinite loop.
If I clear the Chrome cache (having already accessed the same page over HTTPS), accessing it via HTTP works. But once I have accessed an HTTPS page and try to access it via HTTP, Chrome forces the HTTPS connection, resulting in an infinite loop.
The question is: Is there something I can set in the request that causes Chrome to accept the connection?
Regards
I've been doing some research on this, and it turns out that turning on config.force_ssl = true in Rails 3 causes the app to send an HSTS header. There's a bit of information about it here: How to disable HTTP Strict Transport Security?
Essentially, the HSTS header tells Chrome (and Firefox) to access your site only through HTTPS for a specific amount of time.
So... the answer I have for you now is that you can clear your own HSTS setting by going to chrome://net-internals/#hsts in your Chrome browser and removing the HSTS state.
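That only fixes your own browser. For visitors who already cached the HSTS policy, one option (a sketch, assuming you control the Apache in front of the app and mod_headers is enabled) is to serve an expired HSTS header over HTTPS, which tells browsers to forget the stored policy:

```apache
# mod_headers: max-age=0 clears a previously stored HSTS entry
Header always set Strict-Transport-Security "max-age=0"
```

Each browser still has to visit the site once over HTTPS to pick up the max-age=0 header before plain HTTP access works again.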
I think the answers here can help you: Rails: activating SSL support gets Chrome confused