What 'exactly' happens when I type a password in a web-browser before the password is sent out of the machine onto the network?

What exactly happens when I type a password into a password input field in a form on a web page (e.g. the password field of Gmail) and then hit enter? I want to understand the procedure before the browser sends it out of the local machine onto the network. For example: on the local machine, the web browser first makes a system call, then the local machine does some processing on the password, and then sends it over the network. I searched on Google a lot, but could not find anything useful.

Nothing special happens, in general.
A password field is just an HTML <input> element. It's just a text box.

In most instances, websites requiring form-submitted passwords use HTTPS to encrypt the posted data (including the username and password) in transit back to the server; if the site doesn't use SSL/TLS, the password is sent up in cleartext.
Edit:
According to Wikipedia, SSL/TLS works at the presentation layer of the OSI model, although this SSL article indicates that SSL belongs to the TCP/IP application layer. AFAIK, HTTPS is just HTTP running on top of an SSL/TLS connection.
As per John's post, the traditional way of submitting passwords is to provide the user with a form containing a username and a password field.
<form name="input" action="https://somesite/login.asp" method="post">
Username: <input type="text" name="user" />
Password: <input type="password" name="password" />
<input type="submit" value="Submit" />
</form>
This form data is posted back to the server, which should validate the input before checking the username and password (usually by passing the submitted password through a one-way hash and comparing it with the stored hash, so that passwords aren't stored in cleartext in the database).
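As a rough sketch of that server-side check in PHP (the fetchHashForUser() lookup is hypothetical, not part of the original answer):

<?php
// Registration: store only a one-way hash of the password.
$hash = password_hash($plainPassword, PASSWORD_DEFAULT);
// ...save $hash in the users table alongside the username...

// Login: verify the submitted password against the stored hash.
$storedHash = fetchHashForUser($username); // hypothetical database lookup
if ($storedHash !== false && password_verify($plainPassword, $storedHash)) {
    // credentials are valid; establish the session
} else {
    // reject the login attempt
}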
From a security point of view
Although the browser will 'show' asterisks on the password control, the browser still has knowledge of the cleartext password.
The form method should be POST (with method=GET, the credentials would appear in the query string).
The form page itself should be served via https (in addition to encryption, the browser verifies the server's certificate against the domain name, checks its expiry, and validates the certification chain itself),
and the form action itself should use an https:// URL, to ensure that the username and password are encrypted in transit on the way back to the server.
One point worth making: the reason the browser doesn't 'hash' the password or otherwise do part of the authentication checking on the client side of the connection is that the server must never trust the browser / remote computer; the original full password must be sent by the remote client computer and verified on the server.
Re : Can I hack the password?
Common attacks are
Keylogging - software which logs the user's keystrokes, including usernames and passwords.
Phishing - the attacker presents the user with a login page which looks like the target login page, and the user can be fooled into typing their username and password. If the user's configured DNS or LMHosts file is compromised, the login URL can be made to appear bona fide, even though it is served by the attacker's phishing server.
XSS attack - if, for example, the login page displays unvalidated user comment posts on the same page, an attacker can inject JavaScript that hooks into any number of events on the page to obtain usernames and passwords.
SQL injection - with weak input validation and some information about the design of the server, SQL injection attacks can be used to bypass the login entirely, read data from the server database (e.g. obtain all users and passwords if they are stored in cleartext), or corrupt or delete data on the server (this attacks the server, not the browser); a parameterized query, sketched after this list, is the standard defense.
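As a rough illustration of that defense in PHP (the table and column names are invented for this sketch):

<?php
// Vulnerable pattern: user input concatenated straight into the SQL text.
// $sql = "SELECT id, pwhash FROM users WHERE name = '$user'";

// Safer pattern: a parameterized query keeps input out of the SQL text.
$stmt = $pdo->prepare('SELECT id, pwhash FROM users WHERE name = ?');
$stmt->execute([$user]);
$row = $stmt->fetch();
// $row['pwhash'] can then be checked with password_verify(), as above.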


CSRF Double Submit Cookie is basically "not Secure"

From the OWASP page: A CSRF attack works because browser requests automatically include all cookies, including session cookies.
To prevent it, we can use a double-submit cookie hash.
In some sample code I found, this is basically the algorithm:
Victim accesses the app:
Backend: generates a login cookie AND a hash string related to the login cookie
Frontend: stores the hash string in a second cookie (say, a CSRF-token cookie)
Frontend (secured): sends requests with the login cookie and a CSRF HTTP header, where the header value is extracted from the CSRF-token cookie
Attacker:
Uses some kind of social engineering to make the user click a malicious link, where this malicious link makes use of the session cookie.
The attacker then steals this session cookie to log in as the victim.
The double-submit cookie should prevent this attack, since the attacker also needs to provide a valid CSRF token in the HTTP header.
I still don't get this: if browser requests automatically include all cookies, that means that on clicking the malicious link, both the login cookie AND the CSRF-token cookie are included, and the attacker steals both of them.
So the attacker just needs to extract the value from the CSRF-token cookie and craft his own API access, using the stolen login cookie and a CSRF HTTP header with the extracted value?
Am I missing something?
Thanks
A few things appear to be mixed up here.
So in the original synchronizer token pattern, you would generate a random token, store it server-side for the user session, and also send that to the client. The client would then send the token back as a form value or request header, but not as a cookie, so it doesn't get sent automatically - that's the whole point. (And the server would of course compare the token from the request to the one in the session.)
In double posting, the token doesn't even need to be generated server-side; the client can also do it (or the server can send it, it doesn't matter that much if we accept that crypto in JavaScript is good enough).
The token will be sent as a cookie, and also as something else (a form value, a request header, anything not sent automatically). If the server sent it as a cookie (obviously without httpOnly), the client can read it from there and include it as a non-cookie in the request too. The server will again just compare the two.
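A minimal sketch of that server-side comparison in PHP, assuming the cookie is called csrf_token and the header X-CSRF-Token (both names are made up here):

<?php
// Double-submit check: the token must arrive both as a cookie and as a
// custom request header, and the two copies must match.
$cookieToken = $_COOKIE['csrf_token'] ?? '';
$headerToken = $_SERVER['HTTP_X_CSRF_TOKEN'] ?? '';

if ($cookieToken === '' || !hash_equals($cookieToken, $headerToken)) {
    http_response_code(403);
    exit('CSRF check failed');
}
// ...otherwise handle the state-changing request...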
The point is that an attacker on attacker.com will not be able to access the cookie (neither read nor write) for the application domain. So if the client can read the cookie and include it as something else in the request, that client must be running on the application origin (if we are talking about unmodified browsers only), so no CSRF is being performed. The client can also create the whole token itself, because attacker.com will still not be able to create a cookie for the application domain.
So based on the above, if everything is just in cookies, the implementation is wrong, and does not protect against CSRF.
While Gabor has basically answered the question, I just want to add some emphasis on some of the important parts, since I was once also confused by this double-submit cookie technique.
The main misconception here is to assume that a CSRF attack happens because the attacker is able to steal the cookie from "targetweb.com", while in fact the attacker doesn't need to know the value of the cookie at all!
For a CSRF attack to happen, the attacker only needs four conditions:
The session on the target site has already been established (user has logged in to the "targetweb.com")
The attacker knows the request format of some operation (e.g. transferring funds).
The session token is stored in a cookie.
The user triggers something (e.g. a button or link) that, unbeknownst to them, sends a request to "targetweb.com".
All the attacker needs to do is make the user trigger the request that the attacker has forged, without the user knowing (and the important part is that the forged request doesn't need to contain the session cookie, since the browser adds it automatically when the request is sent -- thus the attacker doesn't need its value).
Now, with the double-submit cookie technique, the server sends an additional value in a cookie. The attacker doesn't know its value. When a request is made, this value also needs to be appended to, say, a request header (which is not added automatically). The server then compares this header value with the cookie value and only proceeds when the values match.
What's different from the attacker's point of view is that he now also needs to append the value to the header to make the request valid. And he doesn't know the value, thus the CSRF attack is prevented.
CSRF protection with a double-submit cookie is not secure.
That is why, in the OWASP documentation, the double-submit cookie is classified as a defense-in-depth measure.
The reason is that cookies can be set by a third party via a MITM attack.
HTTPS requests and responses cannot be eavesdropped on or modified; however, a MITM attacker can modify a plain-text HTTP response.
An attacker could direct the victim to http://example.com/ (the target site) so that the browser sends a plaintext HTTP request.
Then the attacker can use a MITM position to return a Set-Cookie header in the response:
HTTP/1.1 200 OK
Set-Cookie: CSRFTOKEN=XXXXXXXXXXXXXXXXXXXXXXX;
This CSRFTOKEN is now set in the victim's browser.
Next, the attacker sets up a CSRF trap page like the one below.
<form action="https://example.com/target" method="POST">
  <input type="hidden" name="mail" value="evil@example.net">
  <input type="hidden" name="token" value="XXXXXXXXXXXXXXXXXXXXXXX">
  <input type="submit">
</form>
The destination of the above form is an https page, but the cookie set by the http response is also sent on https requests.
So the cookie and the hidden parameter will carry the same value, bypassing the CSRF protection.

Is this website SSO authentication scheme safe?

I have designed an SSO scheme for the small collection of websites I have (some of them subdomains of the main website).
I want all of the websites to use the registration/login of the main site (while maintaining full decoupling at the same time).
I want to see whether it is safe and whether it can be made even simpler.
Scenario:
Person clicks on login link on origin website (GET)
Person's browser is brought to login URL along with origin website’s id ( https://authwebsiite.com/login/site/client-website-id )
On authentication website we look up "encryption shared secret" and "return URL" in the database using "website id"
Authentication is done on the authentication website
User info (the username and a few other items) is encrypted with the key the origin website's owner has previously provided to the authentication website; the encrypted data is placed in a form and submitted back (POST) to the origin website's return URL (https) using the person's browser (a sketch of this step follows the scenario)
A session is created on origin website for the browser
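A rough sketch of step 5 in PHP, assuming the shared secret is used as a 32-byte key for libsodium's authenticated encryption; the lookupSharedSecret() helper and the issued_at timestamp are my own additions for illustration, not part of the scheme as described:

<?php
// Step 5 sketch: encrypt the user info with the per-site shared secret.
$sharedSecret = lookupSharedSecret($clientWebsiteId); // hypothetical DB lookup, 32-byte key
$payload = json_encode([
    'username'  => $username,
    'issued_at' => time(),  // lets the origin site reject stale or replayed posts
]);
$nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
$blob  = base64_encode($nonce . sodium_crypto_secretbox($payload, $nonce, $sharedSecret));
// $blob goes into a hidden field of an auto-submitting form that POSTs
// to the origin website's return URL over https.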
Concern:
Someone might simulate a return call… (we used a shared secret, i.e. an encryption key, to solve that)
Improvements?
Can this process be made even simpler?

Host Header Injection

I am a beginner in security and have been reading about Host header injection. I tested an application for this vulnerability; it is possible there for some requests, but the developer implemented no-cache, no-store flags, and the vulnerability is not present in the password reset request.
So the first thing is that there will be no cache poisoning, and the second is that it does not happen in the password reset request.
As I understand it, to exploit this vulnerability I changed the Host header. So I want to know: why is it a vulnerability, why would a user change the Host header of the application, and how can an attacker exploit it?
As in all cases, client input to the application should never be trusted (in security terms). The Host header is also something that can be changed by the client.
A typical attack scenario would be for example:
Let's suppose you have an application that blindly trusts the HOST header value and uses it in the application without validating it.
So you may have the following code in your application, where you load a JS file dynamically (by host name):
<script src="http://<?php echo $_SERVER['HTTP_HOST'] ?>/script.js"></script>
In this scenario, whatever the attacker sets as the Host header is reflected in this JS script load. So the attacker can tamper with the request so that the resulting page loads a JS script from another (potentially malicious) host. If the application uses any caching mechanism or CDN and this request is repeated multiple times, the poisoned response can be cached by the caching proxy and then served to other users as well.
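One common mitigation is to check the Host header against an explicit allow-list before using it anywhere; a rough sketch (the hostnames are placeholders):

<?php
// Reject requests whose Host header isn't one we explicitly serve.
$allowedHosts = ['www.example.com', 'example.com']; // placeholder hostnames
$host = strtolower($_SERVER['HTTP_HOST'] ?? '');

if (!in_array($host, $allowedHosts, true)) {
    http_response_code(400);
    exit('Invalid Host header');
}
// Better still: build absolute URLs from a configured canonical base URL,
// never from the client-supplied Host header.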
Another way of exploiting this is:
Let's suppose that the application has a user password reset feature, and that the application will send an email to whoever asks for a password reset with a unique token to reset it, like the email below:
Hi user,
Here is your reset link
http://<?php echo $_SERVER['HTTP_HOST'] ?>/reset-password?token=<?php echo $token ?>
Now an attacker can trigger a password reset for a known victim email while tampering with the Host header, setting it to a value of his choosing. The victim will then receive the legitimate password reset email, yet the URL will point to the domain set by the attacker. If the victim opens that link, the password reset token is leaked to the attacker, leading to account takeover.
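The usual fix for this case is to build such links from a server-side configured base URL rather than from the Host header; a quick sketch ($token is assumed to be the reset token generated above):

<?php
// Build the reset link from configuration, not from the client-controlled Host header.
$baseUrl   = 'https://www.example.com';  // canonical URL from server-side config
$resetLink = $baseUrl . '/reset-password?token=' . urlencode($token);
// $resetLink is what goes into the password reset email.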

Auto login user to third party site without showing a password to him

Background
We are integrating a third party email solution into our site. When a user goes to the Mail page, they must be automatically authenticated at the Mail site.
For now, the Mail link points to our page which automatically submits a form with the user's login and password. After clicking submit the user is redirected to the Mail site with authentication cookie.
The problem with this approach is that we do not want the user to see his Mail password, because we generate it automatically for him and there are some sane reasons not to show it.
Question
Is there any way to receive mail authentication cookies without sending the login information to the client and performing form.submit operation from the client's browser? Is there a better way to do what I'm trying to do?
Edit
Of course "I am trying to do it programatically". Looks like that there are no sane solution except pass these login/password to the client. Looks like we must accept that user can see his mail password and somehow make sure he cannot use this information to change password to some other value we will not know.
Edit: I didn't read the post correctly, I thought he was trying to login to a remote mail application, not one hosted on his own server. Ignore this answer.
When you log in to the remote third party mail website, it will create a cookie (since HTTP is stateless, that's the only way it knows the user is authenticated, unless it stores some kind of session ID in the URL). When you send the user to that site, the site needs to know how to authenticate the user. Even if you logged in from your application and grabbed the cookie, you cannot set a cookie on the user's browser for another website. The only way for this to work is if there is some kind of development API on the third party's website you can hook into, or if they allow you to use session IDs in the URL.
Possible solution but has a security risk
If they allow you to set a session ID in the URL (for instance, PHPSESSID in PHP), then you could grab the session ID and append it to the URL when sending it to the user. I don't really like this idea, since if the user then clicks on a link (e.g. in an e-mail), the new page will be able to check the referrer and see the session ID in the URL. This can become a huge security risk.
Look up topics related to your mail vendor and "Pass-through Authentication." You did not mention what vendor/software you are using for your web mail solution, so I can't help you very much there, other than forwarding the user's information (in a POST request) to the login handler.
Generate unique IDs before sending the user to the Mail site and put them into the form as hidden fields instead of the username/password. Make them disposable (usable only once, or usable only until the user has successfully entered the site). A sketch of the idea follows.
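A rough sketch of that one-time-ID idea in PHP, assuming the Mail side can redeem such tokens (the table and column names are invented for illustration, and the SQL is MySQL-style):

<?php
// Issue a short-lived, single-use login token instead of exposing the password.
$token = bin2hex(random_bytes(32));  // unguessable one-time ID
$stmt = $pdo->prepare(
    'INSERT INTO mail_login_tokens (user_id, token_hash, expires_at)
     VALUES (?, ?, DATE_ADD(NOW(), INTERVAL 5 MINUTE))'
);
$stmt->execute([$userId, hash('sha256', $token)]);
// The hidden form field carries $token; the Mail site looks up its hash,
// checks the expiry, and deletes the row so the token cannot be reused.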

Integrated Authentication on Webserver - Security?

We have our own web server hosting our website that is open to the public outside of our network.
I have a request to make our "Internal Postings" link on our Careers page to authenticate the user against our network's Active Directory list.
I currently have it setup so the link hits a page inside the directory structure of the website, and this page's folder is set to "Integrated Windows Authentication". Anonymous access is turned off for this page. If the user is authenticated (ie: logged into our network or supplies proper credentials) it passes them on to an external careers website which hosts our job postings. If they fail to authenticate, it displays a custom 401 error page.
This works fine, but there is a problem with it. Using IE, people cannot just enter their username. They (of course) are required to enter the domain name as well. Unfortunately the default 'domain' is set to the URL of our website (www.xyz.com/username). I would like it to automatically choose the name of our internal domain (aaa/username) but am unsure of how to do this.
Another option would be to use LDAP and a little ASP scripting to authenticate the user. I have this code already, but am unsure of the security consequences of doing so. Basically, the page will be set up for anonymous authentication, and if the user isn't logged into our network, they will be prompted for a username/password using standard textboxes. This is then passed to an ASP script that does an LDAP lookup against our Active Directory. Are there any security issues with this method?
Which method would you choose?
Thanks.
EDIT: It seems I cannot authenticate to ActiveD via LDAP using a username/password combo. So forget about that option.
My question now is, how can I change the default 'domain' that IWA uses? Is that at all possible? IE seems to default to 'www.xyz.com\username' (my website) rather than 'aaa\username' (my domain name). Of course, www.xyz.com\username fails because that is not where our ActiveD resides... Is this possible? I want to make it as simple as possible for our employees.
You cannot authenticate a user with a script that merely looks the user up in LDAP. You need to know that the user is who they claim to be, and the only way to do that is to let NTLM/Kerberos authenticate the user (i.e. establish proof that the user knows a secret stored in the AD, the password).
Add the URL of the web site to the set of sites considered to be in the Local Intranet zone for IE browsers running on the internal network. By default, sites considered to be in the Local Intranet zone will be sent the currently logged-on user's credentials when challenged with NTLM/Kerberos. Hence your internal users shouldn't even see a network logon box.
I hate to dredge up an old thread, but the answers are a bit misleading, if I understand the question. The thread Remus refers to is about authenticating via LDAP with a username only. As he points out, that isn't possible. But it looks like what Kolten has in mind is authenticating via LDAP with a username and password both. That's a standard practice called binding.
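As a rough sketch of such a bind check in PHP (the domain controller address and domain prefix are placeholders; note the empty-password guard, since an empty password performs an anonymous bind that would otherwise look like success):

<?php
// LDAP "bind" authentication sketch against Active Directory.
if ($password === '') {
    exit('Invalid credentials'); // an empty password would trigger an anonymous bind
}

$conn = ldap_connect('ldaps://dc.aaa.local'); // placeholder domain controller address
ldap_set_option($conn, LDAP_OPT_PROTOCOL_VERSION, 3);
ldap_set_option($conn, LDAP_OPT_REFERRALS, 0);

// The bind only succeeds if AD accepts the supplied credentials.
$ok = @ldap_bind($conn, 'aaa\\' . $username, $password);
echo $ok ? 'Authenticated' : 'Invalid credentials';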