How do I mix ssl with non-ssl content? - ssl

I have an ssl page that also downloads an avatar from a non-ssl site. Is there anything I can do to isolate that content so that the browser does not warn the user of mixed content?

Just an idea - either:
try to use an ssl url on the avatar website, if necessary by editing whatever JS/PHP/... script they provide, or:
use your scripting language of choice to grab a copy of the avatar and store it on your server, then serve it from there.

There are a number of good security reasons for the browser to warn about this situation, and attempting to directly bypass it is only likely to set off more red flags.
Ninefingers' suggestions are good, and I would suggest a third option: you can proxy the content directly through your own server using a simple binary retrieve/transmit script, if it changes frequently and is unsuitable for caching.

If all the content you want to include from foreign sites comes from a specific server and path (i.e. http://other.guy/avatar/*) you could use mod_proxy to create a reverse proxy which makes https://your.site/avatar_proxy/{xyz} mirror http://other.guy/avatar/{xyz}. This will increase your bandwidth usage and probably slow things down.
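The "retrieve/transmit script" suggested above, as an added Python example that is not code from any of the answers. It pins the upstream to http://other.guy/avatar/, rejects paths that would leave that directory, and relays only small image responses; the allowed types, the size cap and the function names are assumptions of this example.

```python
import urllib.request
from urllib.parse import quote, unquote

UPSTREAM_BASE = "http://other.guy/avatar/"  # upstream path from the answer above
PROXY_PREFIX = "/avatar_proxy/"             # path served over HTTPS by your.site
ALLOWED_TYPES = {"image/png", "image/jpeg", "image/gif"}  # assumption: avatar formats
MAX_BYTES = 256 * 1024                      # assumption: upper bound for one avatar


def upstream_url(request_path):
    """Map /avatar_proxy/{xyz} to http://other.guy/avatar/{xyz}, else None."""
    if not request_path.startswith(PROXY_PREFIX):
        return None
    name = unquote(request_path[len(PROXY_PREFIX):])
    # Refuse names that could leave the avatar directory or point at another host.
    if name.startswith("/") or "\\" in name or "\x00" in name:
        return None
    if any(part in ("", ".", "..") for part in name.split("/")):
        return None
    return UPSTREAM_BASE + quote(name)


def relay_headers(upstream_headers, body):
    """Headers to send to the browser, or None if the reply is not a small image."""
    ctype = upstream_headers.get("Content-Type", "").split(";")[0].strip().lower()
    if ctype not in ALLOWED_TYPES or len(body) > MAX_BYTES:
        return None
    # Build fresh headers: upstream cookies and other headers are not forwarded.
    return {"Content-Type": ctype, "Content-Length": str(len(body)),
            "X-Content-Type-Options": "nosniff"}


def fetch_avatar(request_path, opener=urllib.request.urlopen):
    """Return (status, headers, body) for one proxied avatar request."""
    url = upstream_url(request_path)
    if url is None:
        return 404, {}, b""
    try:
        with opener(url, timeout=5) as resp:
            body = resp.read(MAX_BYTES + 1)  # one byte past the cap reveals excess
            headers = relay_headers(resp.headers, body)
    except OSError:  # URLError, HTTPError and socket timeouts are OSError subclasses
        return 502, {}, b""
    if headers is None:
        return 502, {}, b""
    return 200, headers, body
```

`urlopen` follows redirects by default, so a deployed version should also refuse redirects that leave the upstream host.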

Related

Domain URL masking

I am currently hosting the contents of a site with ProviderA. I have a domain registered with ProviderB. I want users to access the contents (www.providerA.com/sub/content) by visiting www.providerB.com. A domain forward is easy enough and works as intended, however, unless I embed the site in a frame (which is a big no-no), the actual URL reads www.providerA.com/sub/content despite the user inputting www.providerB.com.
I really need a solution for this. A domain masking without the use of a frame. I'm sure this has been done before. An .htaccess domain rewrite?
Your help would be hugely appreciated! I'm going nuts trying to find a solution.
For Apache
Usual way: set up mod_proxy. The Apache on providerB becomes a client to providerA's Apache. It gets the content and sends it back to the client.
But looks like you only have .htaccess. So no proxy, you need full configuration access for that.
So you cannot, see: How to set up proxy in .htaccess
If you have PHP on providerB
Set up a proxy written in PHP. All requests to providerB are intercepted by that PHP proxy. It gets the content from providerA and sends it back. So it does the same thing as the Apache module. However, depending on the quality of the implementation, it might fail on some requests, types, sizes, timeouts, ...
Search for "php proxy" on the web, you will see a couple available on GitHub and others. YMMV as to how difficult it is to set up, and the reliability.
No PHP but some other server side language
Obviously that could be done in another language, I checked PHP because that is what I use the most.
The best solution would be to transfer the content to providerB :-)

How to create a friendly url in Tomcat?

I want to modify my application URL from //localhost:8080/monitor/index.html to just monitor, so that on putting monitor in the browser, my application should open. Is there a way to achieve this? Can someone suggest the configuration changes which will be required for this?
Can I map my short URL to the existing one, maybe somewhere in web.xml? I am not sure about the approach; any suggestions will be great.
Thanks and regards
Deb
You're mixing up several different protocol layers in your question.
If you just enter nothing but "monitor" in the browser URL bar, the browser is going to first look up "monitor" in DNS and, finding nothing, it will then probably send a query to Google or your configured search engine. In the past browsers have taken other steps, such as appending ".com" and prepending "www." but I don't think modern browsers do that any more.
So far, your server is not even remotely involved.
If you're a large ISP user (TimeWarner, Comcast) and use their DNS it's also possible the ISP will intercept your failed DNS lookup and route the request to a "helpful" search page (i.e. SPAM) of their own.
At this point the request is still nowhere near your server.
I suppose you could mess with the /etc/hosts file on your local system to resolve "monitor" to the proper hostname, but that's an extremely brittle solution that has to be hard coded on each machine you want to have this "shortcut" link (and which breaks when the hostname changes).
You're much better off just setting up a web shortcut in your browser that points to the right place.

Can we detect if a site is on CDN?

Is there a way to detect if a site is on a Content Delivery Network and if yes, can we tell which service are they using?
A method that is achievable from the command line is using the 'host' command, with the -a flag set to see the DNS record e.g.
host -a www.visitbritain.com
Returns:
www.visitbritain.com. 0 IN CNAME d18sjq5nyxcof4.cloudfront.net.
Here you can see that the CNAME entry tells us that the site is using cloudfront as the CDN.
Just take a look at the urls of the images (and other media) of the site.
Reverse lookup the IPs of the hostnames you see there and you will see who owns them.
I built this little tool to identify the CDN used by a site or a domain, feel free to try it.
The URL: http://www.whatsmycdn.com/
You might also be able to tell from the HTTP headers of the media if the URL doesn't give it away. For example, media served by SimpleCDN has Server: SimpleCDN 5.6a4 in its headers.
CDN Planet now have their CDN finder tool on GitHub
http://www.cdnplanet.com/blog/better-cdn-finder/ The tool installs on the command line and allows you to feed in host names and check if they use a CDN.
If the website is using GCP CDN, you can simply check it using curl
curl -I <https://site url>
In the response you can find the following headers:
x-goog-metageneration: 2
x-goog-stored-content-encoding: identity
x-goog-stored-content-length: 17393
x-goog-meta-object-id: 11602
x-goog-meta-source-id: 013dea516b21eedfd422a05b96e2c3e4
x-goog-meta-file-hash: cf3690283997e18819b224c6c094f26c
Yes you can find by
host -a www.website.com
Apart from some excellent answers already posted here, which include some direct methods which may or may not work for all the websites out there, there is also an indirect way to see if a CDN is there. And especially if it's your own website and you want to know if you are getting what you are paying for!
The promise of a CDN is that connections from your users are terminated closer to them so that they get less TCP / TLS connection establishment overhead, and static content is cached closer to them so that it loads faster and puts less strain on your origin servers.
To verify this, you can take measurements of site load times across the globe and see if all the users get similar load times. No, you don't have to get a machine everywhere in the world to do that! Someone has already done that for you.
Head to https://prober.tech/ and enter the URL you wish to test for load times.
Because this site itself is in Cloudflare's CDN, you can put that link itself in the test box and use it as a baseline!
More information on using the tool can be found here
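The direct checks from the answers above (CNAME target, Server header, x-goog-* headers) combined into one classifier, as an added Python example that is not code from any answer. The signal tables hold only what this page mentions, the provider labels and function names are this example's own, and the sample inputs are copied from the output quoted above.

```python
# Signals mentioned in the answers above; extend the tables for your own estate.
CNAME_SUFFIXES = {".cloudfront.net.": "CloudFront"}
SERVER_PREFIXES = {"simplecdn": "SimpleCDN"}
HEADER_PREFIXES = {"x-goog-": "Google (x-goog-* headers)"}

# Sample inputs taken from the output quoted on this page.
SAMPLE_HOST = "www.visitbritain.com. 0 IN CNAME d18sjq5nyxcof4.cloudfront.net."
SAMPLE_HEADERS = "HTTP/1.1 200 OK\nx-goog-metageneration: 2\nx-goog-stored-content-length: 17393\n"


def cname_targets(host_output):
    """Extract lower-cased CNAME targets from `host -a` answer lines."""
    targets = []
    for line in host_output.splitlines():
        fields = line.split()
        if "CNAME" in fields and fields.index("CNAME") + 1 < len(fields):
            targets.append(fields[fields.index("CNAME") + 1].lower())
    return targets


def parse_headers(curl_output):
    """Parse `curl -I` output into a {lower-cased name: value} dict."""
    headers = {}
    for line in curl_output.splitlines():
        name, sep, value = line.partition(":")
        if sep and " " not in name.strip():  # skips the status line
            headers[name.strip().lower()] = value.strip()
    return headers


def detect_cdn(host_output="", curl_output=""):
    """Return a sorted list of (provider, evidence) pairs found in the outputs."""
    hits = set()
    for target in cname_targets(host_output):
        fqdn = target if target.endswith(".") else target + "."
        for suffix, provider in CNAME_SUFFIXES.items():
            if fqdn.endswith(suffix):
                hits.add((provider, "CNAME " + fqdn))
    headers = parse_headers(curl_output)
    server = headers.get("server", "")
    for prefix, provider in SERVER_PREFIXES.items():
        if server.lower().startswith(prefix):
            hits.add((provider, "Server: " + server))
    for prefix, provider in HEADER_PREFIXES.items():
        if any(name.startswith(prefix) for name in headers):
            hits.add((provider, prefix + "* response header"))
    return sorted(hits)
```

An empty result only means none of the listed signals matched, not that the site has no CDN.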

Active Reverse Proxy

Does anyone know of any reverse proxy solutions that allow the content/data of an HTTP response to be directly modified before being relayed to the requesting client?
As an example:
Proxy relays client request for pdf document to another server, response received by proxy, watermark added to pages of pdf, watermarked pdf is returned to client.
Regards,
Mike
Apache has mod_proxy and mod_proxy_html, which is used to rewrite links, headers, etc. I've only ever seen HTML or XML filters, but you should be able to write your own binary one for your PDF needs. The possible difficulty I could see is that Apache treats webpages as a stream, rather than a file. I'm not sure how to watermark a PDF doc, but if you need access to the entire file to do it, it might get complicated quickly.
Note that it would seem far easier to me to do the watermarking on the server, where you have access to the file, rather than a proxy. If server load is a concern, either a batch process, or a separate server could be an alternative solution.
I found a description of Deliverance over on the python tags, and it may be useful for what you're looking for. I have no experience with it myself, so grain of salt and all that.
http://www.openplans.org/projects/deliverance/introduction
I've had success with Pound.
I think I might go down the Squid/ICAP route.
This is for an enterprise level system, does anyone have any experience with either of these in this context?
http://wiki.squid-cache.org/Features/ICAP

Mask redirect to temporary domain with mod_rewrite

We are putting up a company blog at companyname.com/blog but for now the blog is a Wordpress installation that lives on a different server (blog.companyname.com).
The intention is to have the blog and web site both on the same server in a month or two, but that leaves a problem in the interim.
At the moment I am using mod_rewrite to do the following:
http://companyname.com/blog/article-name redirects to http://blog.companyname.com/article-name
Can I somehow keep the address bar displaying companyname.com/blog even though the content is coming from the latter blog.companyname.com?
I can see how to do this if it is on the same server and vhost, but not across a different server?
Thanks
Rather than using mod_rewrite, you could use mod_proxy to set up a reverse proxy on companyname.com, so that requests to http://companyname.com/blog/article-name are proxied (rather than redirected) to http://blog.companyname.com/article-name.
Here are more instructions and examples.
There is functionality with ZoneEdit called webforwards which could probably do this and hide what you are actually doing (unless someone looked into it).
The only thing that mod_rewrite can do is send HTTP header redirects, and those redirects (across servers) always result in the browser address bar reflecting the reality.
You should instead consider writing a 404 script that 'reflects' the blog. This would essentially be a transparent proxy, and many are already written.
The script would find if the requested page (that was 404'd) started with http://mycompany.com/blog/ . If it did, it would download and then send onto the client the blog page and associated files (probably caching them as well).
So requesting http://mycompany.com/blog/article_xyz would cause the 404 script to download and send http://blog.companyname.com/article_xyz.
It's probably more work than it's worth, but you might be able to design a simple enough 404 script that it's worthwhile.
-Adam