Is it possible to fake referer information sent by web browsers?

My service might use referer information to tell which web site a request comes from, and I would like to make sure there is no way to fake that information.

The referer can be easily spoofed, so using it as a method of verification is very unreliable.
There is a Firefox plugin called refspoof that makes it trivial.
Even command-line tools like wget have an option for it: --referer=url
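To make this concrete, here is a minimal sketch, assuming Python's requests library and made-up URLs, of a client claiming whatever referer it likes:

# Minimal sketch: any HTTP client can claim any Referer it wants.
# The URLs below are made up for illustration.
import requests

response = requests.get(
    "https://example.com/protected-page",
    headers={"Referer": "https://trusted-partner.example/landing"},  # entirely under the client's control
)
print(response.status_code)

The server has no way to distinguish this request from one sent by a browser that genuinely followed a link.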

It is possible. There's nothing you can do to prevent browsers from faking that data.

There are many ways to fake any information sent by the client. The most basic rule of accepting information from a client is: don't trust the client.
Ever.
Browsers can fake, among many other things, their User-Agent string and referrer (the proper spelling; the misspelled HTTP Referer header, which PHP faithfully reproduces, is about the most prolifically perpetuated typo going).

It's easily spoofed, so I wouldn't rely on it for anything important.

The client is free to send you whatever data it wants. You should never trust what the browser sends.

Detect untrusted SSL access on the server-side?

The Question
Is there a way to detect whether a visitor trusts the SSL connection/certificate? I really could not find anything on the web or on Stack Overflow. I think it's a pretty uncommon question.
A Use-Case
I'm using a certificate from StartSSL. It works fine for most common and modern browsers. But on my Windows Phone using IE I get a warning. That's because the root certificate is not known to IE on Windows Phone by default.
The solution is easy: just download the certificate - two clicks/taps. I would like to provide a tiny guide to the common visitor on how to do this. However, only visitors with problems should get the message.
Visitors who connect to your site via HTTPS simply won't get to your site if they don't trust your certificate. And once an exception has been added, there's no way for you to tell whether the certificate is generally trusted or only accepted via an exception.
Perhaps you could try to build a list of user agents and guess what their default CAs should be, so as to display an additional message in those cases. It's not a perfect rule (you can never fully control what the client trusts; that's the user's or admin's responsibility), and it has the disadvantages of user-agent-specific content: it's not necessarily reliable, you won't have a complete database, and users who have already added the exception or imported the certificate permanently would still see the additional message (unless you use something like a cookie to remember them).
If your initial page is over plain HTTP, you might be able to try an XHR request to your HTTPS site and report whether it worked at all. (You might need to take into account the Same Origin Policy.)
I am not sure whether there is a foolproof way to auto-detect this condition. You may have to rely on a workaround.
Detect whether the request comes from a phone by inspecting the User-Agent header, check whether it's the first time they are accessing your site (absence of your site's cookie, etc.), and if it is a first-time user, redirect them to a plain-HTTP page with instructions for installing the certificate. You can provide a checkbox on that page so users can suppress the redirect in future. If they want it suppressed, set a cookie, or store their preference on the server (if there is authentication).
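A rough sketch of that workaround, assuming a Flask app; the User-Agent check, cookie name, and /cert-help page are illustrative assumptions, not anything prescribed above:

# Hypothetical sketch: first-time Windows Phone visitors get redirected once
# to a plain-HTTP page explaining how to install the root certificate.
from flask import Flask, request, redirect, make_response

app = Flask(__name__)

@app.before_request
def maybe_show_cert_help():
    ua = request.headers.get("User-Agent", "")
    already_seen = request.cookies.get("cert_help_seen")
    if "Windows Phone" in ua and not already_seen and request.path != "/cert-help":
        resp = make_response(redirect("http://example.com/cert-help"))  # plain-HTTP instructions page
        resp.set_cookie("cert_help_seen", "1", max_age=60 * 60 * 24 * 365)  # suppress future redirects
        return resp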

Security of https-login?

I'm writing an Apple iOS app that logs in to an account and fetches some balance. It uses a plain HTML link for the login:
https://www.myaccount.com/login.jsp?username=myusername&password=mypassword
The username and password are inserted into the login link dynamically at runtime.
I've sniffed the traffic using Wireshark and I couldn't find the username or password in any of the packets being sent. I guess the SSL(?) part of "https" has encrypted the query.
Am I right? Is this a safe way? Any other thoughts? How should I handle the password in the app to avoid security issues? Is it cached? Do I need to encrypt it if I want the app to remember my password?
Although technically safe as Sjoerd pointed out, this is just great for social engineering. In the internet cafe: "Dammit, that's a cool article. Can you email me the URL real quick?"
You won't believe how many people fall for this kind of stuff.
Another drawback is that browsers tend to remember URLs (history, autocomplete), so again this is very unsafe in situations where multiple people have access to the same machine.
A much better way is to use HTTP Basic Authentication, or at least to send the data via an HTTP POST.
This is secure in the sense that it can not be sniffed, since the request is sent over an encrypted HTTPS channel. However, it does show up in the address bar of the browser and possibly in the log files of the server.
The safer way is to POST the username and password to the JSP page, so that they do not show up in the URL.
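As a rough illustration of the alternatives mentioned above, here is a sketch using Python's requests library and made-up endpoints (not the actual myaccount.com API):

import requests

# Credentials in the URL: encrypted on the wire, but they end up in browser
# history, server logs and anything else that records the address.
requests.get("https://www.example.com/login.jsp?username=myusername&password=mypassword")

# Better: send them in the request body via POST (still over HTTPS).
requests.post("https://www.example.com/login.jsp",
              data={"username": "myusername", "password": "mypassword"})

# Or use HTTP Basic Authentication, which puts them in a header instead of the URL.
requests.get("https://www.example.com/balance", auth=("myusername", "mypassword"))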

Can you modify http request headers in a Safari extension?

I can do this in FF and IE, and I know it doesn't exist in Chrome yet. Anybody know if you can do this in a Safari plugin? I can't find anything that says one way or another in the documentation.
Edit (November 2021): as pointed out in the comments, ParosProxy seems to no longer exist (and was last released ~2006 from what I can see). There are more modern options for debugging on Mac (outside of browser plugins on non-Safari browsers) like Proxyman. Rather than adding another list of links that might expire, I'll instead advise people to search for "debugging proxy" on their platform of choice instead.
Original Answer (2012):
The Safari "Develop" menu in advanced preferences allows you to partially customize headers (like the user agent), but it is quite limited.
However, if a particular browser or app does not allow you to alter the headers, just take it out of the equation. You can use things like Fiddler or ParosProxy (and many others) to alter the requests regardless of the application sending the request.
They also have the advantage of allowing you to make sure that you are sending the same headers regardless of the application in question and (depending on your requirements) potentially work across multiple browsers and apps without modification.
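To give a flavour of the proxy approach, here is a rough sketch using mitmproxy (not mentioned above, but the same idea as Fiddler/ParosProxy), which lets you rewrite headers with a small Python addon; the header names and values are made up:

# Sketch of a mitmproxy addon that rewrites request headers regardless of
# which browser or app sent the request.
# Run with: mitmproxy -s rewrite_headers.py and point the application at the proxy.
from mitmproxy import http

class RewriteHeaders:
    def request(self, flow: http.HTTPFlow) -> None:
        flow.request.headers["X-Debug-Flag"] = "1"              # add a custom header
        flow.request.headers["User-Agent"] = "MyTestAgent/1.0"  # override an existing one

addons = [RewriteHeaders()]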
Safari has added extension support, but its APIs don't give you granular control over requests and responses compared to Chrome/Firefox/Edge.
To get that level of control, you need to set up a system-wide proxy instead.
The Requestly Desktop App sets this up for you automatically and, on top of that, lets you make various types of modifications, like:
Modify Request/Response Headers
Redirect URLs
Modify Response
Delay Network request
Insert Custom Scripts
Change User-Agent
Here's an article about header modification using Requestly:
https://requestly.io/feature/modify-request-response-headers/
Disclaimer: I work at Requestly

Sending sensitive data as a query string parameter

We are reviewing the design of a system and need to verify what we think may be a security issue.
In this system some sensitive information is sent in the query string. Question is:
Can the query string parameters be read as the request goes over the internet, even if the request is sent over https?
Can the query string parameters be read from the browsing history on the client machines?
When you use HTTPS, the SSL/TLS connection is established before any HTTP traffic is sent, so the whole request (including the URL and its parameters) is encrypted and won't be readable. The only thing that's possibly visible to a third party is the server certificate (so they could see the host name, but that's it).
The browser's history isn't protected in any way by HTTPS as such, although some browsers may have privacy or "safe browsing" options that avoid recording or automatically delete some HTTPS URLs. This ultimately depends on the browser and its configuration.
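As a small illustration of which parts of a request an eavesdropper can and cannot see (the URL below is made up):

# Illustrative only: split a made-up URL into the part an observer can learn
# over HTTPS versus the parts that stay inside the encrypted tunnel.
from urllib.parse import urlsplit

parts = urlsplit("https://shop.example.com/account?user=alice&token=s3cr3t")

print(parts.hostname)           # visible on the wire (DNS lookup, server certificate)
print(parts.path, parts.query)  # encrypted in transit, but still stored in browser
                                # history and usually in the server's access logs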
This is certainly a security issue if sensitive details are being passed in a GET request.
The sensitive data will not only end up in the user's browser history, but also in any proxy along the way, plus in the web server's logs.
Yes for the first. Not sure about the second - depends on the browser, I guess - but I suspect, Yes, here as well.

Should I get rid of bots visiting my site?

I have been noticing on my trackers that bots are visiting my site A LOT. Should I change or edit my robots.txt, or change something else? I'm not sure if that's good, because they are indexing, or what?
Should I change or edit my robots.txt, or change something else?
Depends on the bot. Some bots will simply ignore robots.txt.
We had a similar problem 18 months ago with the Google Ads bot, because our customer was purchasing so many ads.
Google Ads bots will (as documented) ignore wildcard (*) exclusions, but they do obey rules that name them explicitly.
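For example, a robots.txt along these lines (paths made up; AdsBot-Google is one of Google's documented ad crawlers) only blocks the ad bot because it is named explicitly; the wildcard section alone would not:

# Hypothetical robots.txt: the ad crawler ignores the * section,
# but obeys the section that names it explicitly.
User-agent: *
Disallow: /private/

User-agent: AdsBot-Google
Disallow: /private/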
Remember, bots that honor robots.txt will just not crawl your site. This is undesirable if you want them to get access to your data for indexing.
A better solution is to throttle or supply static content to the bots.
Not sure if that's good, because they are indexing, or what?
They could be indexing, scraping, or stealing; all the same, really. What I think you want is to throttle their HTTP request processing based on User-Agents. How to do this depends on your web server and app container.
As suggested in other answers, if the bot is malicious, you'll need to find its User-Agent pattern and send it 403 Forbidden responses (see the sketch after this list). If the malicious bots dynamically change their User-Agent strings, you have two further options:
White-list User-Agents, e.g. create a filter that only accepts certain user agents. This is very imperfect.
IP banning: each request comes from a source IP you can ban. (If you're getting DoS'd, i.e. hit by a denial-of-service attack, you have bigger problems.)
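A minimal sketch of the 403-by-User-Agent option, assuming a Flask app; the blocked patterns are purely illustrative:

# Return 403 Forbidden for requests whose User-Agent matches known-bad patterns.
import re
from flask import Flask, request, abort

app = Flask(__name__)
BAD_BOT_PATTERN = re.compile(r"^Java/1\.|scrapybot|badcrawler", re.IGNORECASE)

@app.before_request
def block_bad_bots():
    ua = request.headers.get("User-Agent", "")
    if BAD_BOT_PATTERN.search(ua):
        abort(403)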
I really don't think changing the robots.txt is going to help, because only GOOD bots abide by it. All the others ignore it and parse your content as they please. Personally, I use http://www.codeplex.com/urlrewriter to get rid of the undesirable robots by responding with a forbidden message if they are found.
The spam bots don't care about robots.txt. You can block them with something like mod_security (which is a pretty cool Apache plugin in its own right). Or you could just ignore them.
You might have to use .htaccess to deny some bots that would otherwise screw with your logs.
See here: http://spamhuntress.com/2006/02/13/another-hungry-java-bot/
I had lots of Java bots crawling my site, adding
SetEnvIfNoCase User-Agent ^Java/1. javabot=yes
SetEnvIfNoCase User-Agent ^Java1. javabot=yes
Deny from env=javabot
made them stop. Now they only get 403 one time and that's it :)
I once worked for a customer who had a number of "price comparison" bots hitting the site all of the time. The problem was that our backend resources were scarce and cost money per transaction.
We tried to fight some of these off for a while, but the bots just kept changing their recognizable characteristics. We ended up with the following strategy:
For each session on the server we determined whether the user was at any point clicking too fast. After a given number of repeats, we'd set an "isRobot" flag to true and simply throttle down the response speed within that session by adding sleeps. We did not tell the user in any way, since they'd just start a new session in that case.
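A rough sketch of that idea, assuming Flask's cookie-based sessions; the thresholds and key names are made up for illustration:

# Flag sessions that request pages too quickly and quietly slow them down.
import time
from flask import Flask, session

app = Flask(__name__)
app.secret_key = "replace-me"   # required for Flask sessions

MIN_INTERVAL = 1.0  # seconds between requests we still consider "human"
MAX_STRIKES = 5     # fast requests tolerated before flagging the session

@app.before_request
def throttle_suspected_bots():
    now = time.time()
    last = session.get("last_request_ts", 0.0)
    session["last_request_ts"] = now

    if now - last < MIN_INTERVAL:
        session["strikes"] = session.get("strikes", 0) + 1
    if session.get("strikes", 0) >= MAX_STRIKES:
        session["is_robot"] = True

    if session.get("is_robot"):
        time.sleep(2)   # silently degrade the response time; don't tell the client why

As in the answer above, a bot can reset this simply by starting a new session, but it makes naive scraping noticeably more expensive.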