We're having an authentication problem with our home-grown collaboration website.
We have Server A with the Collab platform. We log on to the website with our domain account. The website talks to an API on Server B, but Server B should also be able to get data from our intranet (SharePoint) on Server C. Is this the double-hop issue? NTLM is used, not Kerberos.
Do we need to change everything to Kerberos? I found a lot about the double hop, but nothing with a decent explanation of how to configure or fix it.
Server A & B are 2012R2, server C is 2016.
Yes. When you need to authenticate a user on a web server and then re-use and pass those credentials from the web server to another server, that is the "double-hop" scenario. The only way to allow this is to set up Kerberos delegation between those machines. This is done in Active Directory.
You don't need to "convert all of your stuff to Kerberos", because Kerberos is already built into any server that is joined to an Active Directory domain.
In IIS, under "Authentication", you will need to enable "Windows Authentication" with the Negotiate provider listed first, so that Kerberos rather than NTLM is negotiated (and, of course, Anonymous is disabled).
There are dozens of great articles that walk you through anything I didn't mention. It is not difficult, but it takes some time to learn the first time, so there is no need to worry.
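Once the SPNs and delegation are configured in Active Directory, very little changes in code. Below is a minimal sketch of what the API on Server B might do to call Server C as the original user; the server names, URL, and class are placeholders, not taken from the question, and it assumes ASP.NET impersonation with Windows Authentication (Negotiate) enabled on Servers A and B.

    // Runs on Server B. AD prerequisites (not code): an SPN such as
    // HTTP/serverB registered to the app-pool account, and that account
    // trusted for (constrained) delegation to HTTP/serverC.
    using System;
    using System.IO;
    using System.Net;
    using System.Security.Principal;

    class SharePointProxy
    {
        public static string FetchFromIntranet(WindowsIdentity caller)
        {
            // Impersonate the authenticated caller so the Kerberos ticket
            // forwarded to Server C carries the user's identity.
            using (caller.Impersonate())
            {
                var request = (HttpWebRequest)WebRequest.Create(
                    "http://serverC/intranet/data");   // placeholder URL
                // DefaultCredentials picks up the impersonated token; without
                // delegation in AD, this second hop fails with a 401.
                request.Credentials = CredentialCache.DefaultCredentials;
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd();
                }
            }
        }
    }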
My previous question was closed so I will be more specific.
I need to create a desktop application, written in C#, that asks for user credentials and, after verification, opens a GUI for working with a database (a black box for the users).
It should be usable from anywhere, not just from the LAN or the SQL domain. I assume I would need to do the following:
Create a client application and a server application that handle the authentication. That would mean a lot of socket work...
Once the user is verified, the client's queries are sent to the database (client -> server -> DB).
The server then sends the resulting data sets back to the client.
As you can see, this is just my guess, but I have no idea whether it's too complicated or completely wrong. The main thing is that it must be a desktop app (not a web-based one) and accessible from everywhere.
I am interested in the main points of how to design such a system and would be extremely grateful for any guidance.
You can use a certificate server for authentication, like Apache's mod_ssl.
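To make that concrete, here is a minimal sketch of the client side presenting a client certificate over HTTPS; the host name, file path, and password are hypothetical. The server (Apache with mod_ssl, for example) verifies the certificate against the issuing CA, so no password ever crosses the wire.

    // Hypothetical client: authenticate to the middle-tier server with a
    // client certificate instead of a username/password exchange.
    using System;
    using System.Net;
    using System.Security.Cryptography.X509Certificates;

    class CertClient
    {
        static void Main()
        {
            var request = (HttpWebRequest)WebRequest.Create(
                "https://auth.example.com/login");      // placeholder host
            request.ClientCertificates.Add(
                new X509Certificate2(@"C:\certs\client.pfx", "pfx-password"));
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine(response.StatusCode); // OK if the cert was accepted
            }
        }
    }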
So I'm setting up a dedicated server using Debian 5 Lenny. I will be using some Atlassian Tools (JIRA, Confluence, Bamboo, and Fisheye). I want to use a local LDAP server to store information for the users that will be accessing these software titles, so that they can use one set of credentials to log in.
I also want webmail users to be configured using LDAP.
However, this is a small operation. Three people. That's why all of the software, including the LDAP server, will be on the same machine.
That said, is it safe to store user credentials (including passwords) in LDAP without using Kerberos? I'm confused as to when Kerberos should be used.
Hypothetically, let's say I had two servers on a subnet. Server A receives requests from the outside world for the Atlassian tools, and communicates internally with an LDAP server on Server B. In that case, would I use Kerberos?
When do I use Kerberos? When do I not?
I am not setting up anything like Active Directory. No Samba either. Users do not need to log in to a domain (with access to files on the domain); they just need to log in to web apps. But if I were running LDAP on its own dedicated machine, then I might want Kerberos?
:confuzzled: :(
-Sam
The simplest possible answer is yes, it is possible to store user names, user ids, and passwords without using Kerberos, and in fact directory services accessed via LDAP are an excellent tool for storing this sort of authentication and authorization information.
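For example, a web app can verify a password simply by attempting a bind as that user; the sketch below uses System.DirectoryServices.Protocols, and the host name and DN layout are assumptions.

    // Minimal sketch: validate a user's password with a simple LDAP bind.
    using System.DirectoryServices.Protocols;
    using System.Net;

    class LdapCheck
    {
        // True if the directory accepts the bind, i.e. the password is valid.
        static bool ValidateCredentials(string uid, string password)
        {
            // Assumed directory layout; adjust to your schema.
            string dn = "uid=" + uid + ",ou=people,dc=example,dc=com";
            using (var conn = new LdapConnection("ldap.example.com"))
            {
                conn.AuthType = AuthType.Basic;
                try
                {
                    conn.Bind(new NetworkCredential(dn, password));
                    return true;
                }
                catch (LdapException)
                {
                    return false;   // bad credentials (or server unreachable)
                }
            }
        }
    }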
Update:
In my opinion, if you do choose an open-source server, you will find OpenDS to be superior to OpenLDAP or Apache Directory.
Basically, if you have Kerberos, you do not need any directory server for authentication. If you aren't in a corporate environment and are just looking for an identity management store, you should definitely go for a directory server like OpenLDAP or Apache Directory. Kerberos requires a correctly set-up DNS and NTP server, which might be way too much for your needs. And even if you set all that up, those lazy morons from Atlassian still have not implemented Kerberos support in their products, so you couldn't go that route anyway.
I just noticed that there are only three of you, maybe a simple database setup with MySQL would suffice instead of running a full-blown directory server?
For those in the know, what recommendations do you have for storing passwords in the Windows Azure configuration file (which is accessed via RoleManager)? It's important that:
1) Developers should be able to connect to all production databases while testing on their own local box, which means using the same configuration file, and
2) since developers need the same configuration file (or a very similar one) as the deployed one, passwords should not be legible.
I understand that even if the passwords in the configuration were not legible, developers could still debug/watch to grab the connection strings, and while this is not desirable, it is at least acceptable. What is not acceptable is people being able to read these files and grab connection strings (or the passwords for other resources).
Best recommendations?
Thanks,
Aaron
Hmm, devs are not supposed to have access to production databases in the first place. That's inherently insecure, no matter whether it's on Azure or somewhere else. Performing live debugging against a production database is a risky business, as a simple mistake is likely to trash your whole production. Instead, I would suggest duplicating the production data (possibly as an overnight process) and letting the devs work against a non-prod copy.
I think it could be partially solved by a kind of credential storage service.
By that I mean a service that does not need passwords itself, but allows access only to machines and SSPI-authenticated users that are white-listed.
This service could be a simple Web API hosted on an SSL-secured server, working on principles like these:
0) secured items carry a kind of ACL per named resource: an IP whitelist, a machine-name-based whitelist, a certificate-based whitelist, or a mix of these.
1) all changes to the stored data are made only via RDP or SSH access to the server hosting the service.
2) the secured pieces of information are accessed only via SSL, and the API is read-only.
3) the client must first confirm its own permissions and obtain a temporary token with a call to an API like https://s.product.com/
4) the client must provide a certificate, and the machine identity must match the whitelist data for the resource on every call.
5) requesting the data then looks like this:
Url: https://s.product.com/resource-name
Header: X-Ticket: the value obtained at step 3, until it expires
Certificate: the same certificate as used in step 3.
So, instead of a username and password, it is possible to store an alias for such a secured resource in the connection string; in code, the alias is replaced by the real username and password, obtained via step 5, in a SQL connection factory (sketched below). The alias can be specified as a username in a special format like obscured#s.product.com/product1/dev/resource-name
Dev and prod instances can then have different credential aliases, like product1/dev/resource1 and product1/staging/resource1 and so on.
That way, production credentials for a secured resource could only be learned by debugging the prod server, sniffing its traffic, or embedding logging/e-mailing code at compilation time.
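To make the alias idea concrete, here is a hypothetical sketch of such a connection factory; the service URL, the X-Ticket header, the alias format, and the "user\npassword" response format are all assumptions from the scheme above, not an existing API.

    // Hypothetical factory: swaps a credential alias for the real
    // username/password fetched from the secured service at run time.
    using System.Data.SqlClient;
    using System.Net;

    static class SecureConnectionFactory
    {
        // 'ticket' is the temporary token obtained at step 3;
        // client-certificate handling from step 4 is omitted for brevity.
        public static SqlConnection Create(
            string server, string database, string alias, string ticket)
        {
            using (var client = new WebClient())
            {
                client.Headers["X-Ticket"] = ticket;
                // Assumed response format: first line user, second line password.
                string[] parts = client.DownloadString(
                    "https://s.product.com/" + alias).Split('\n');
                var cs = new SqlConnectionStringBuilder
                {
                    DataSource = server,
                    InitialCatalog = database,
                    UserID = parts[0],
                    Password = parts[1]
                };
                return new SqlConnection(cs.ConnectionString);
            }
        }
    }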
An internal team, separate from my own, has stated that they prefer to do incoming authentication based on client certificates. That sounds good to me, except that I haven't worked with them before and am not quite sure where to start researching (Wikipedia went straight into a lot of detail that I'm not sure is pertinent to what I need to find out). If I have an IIS6 server with a web app that runs under an AD user account, what steps should I take to eventually fire off a request from that web app to a remote server, via .NET (I'm guessing HttpWebRequest)? I do see that we have an internal trusted certificate authority. The remote server is running Apache on Linux boxes.
I'm essentially in learning mode, not necessarily looking for a blow-by-blow list of what needs to happen (though if I could learn how it works while learning how to do it, that'd be nice too :) ... do you have any resources I could start looking into in order to figure out how to successfully authenticate securely via SSL with this remote server and communicate with it via client certs? Probably from creation of the client cert on up, though I'd like to more fully understand how it all works in the first place.
Here is a link - http://support.microsoft.com/kb/315588. Hope this helps with what you are looking for.
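In practical terms, attaching a client certificate to an HttpWebRequest is just a couple of lines. A rough sketch, assuming the certificate issued by your internal CA is installed in the store of the AD account the web app runs under; the subject name and URL are placeholders.

    // Sketch: call the remote Apache server with a client certificate
    // taken from the app-pool user's personal certificate store.
    using System.IO;
    using System.Net;
    using System.Security.Cryptography.X509Certificates;

    class ClientCertCall
    {
        static string CallRemote()
        {
            var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
            store.Open(OpenFlags.ReadOnly);
            try
            {
                var certs = store.Certificates.Find(
                    X509FindType.FindBySubjectName, "myapp-client", false); // placeholder
                var request = (HttpWebRequest)WebRequest.Create(
                    "https://remote.example.com/api");                      // placeholder
                request.ClientCertificates.Add(certs[0]);
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd();
                }
            }
            finally
            {
                store.Close();
            }
        }
    }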
I have the following scenario and can't seem to find anything on the net; maybe I am searching for the wrong thing:
I am working on a web-based data storage system. There are different users and different locations, and only certain users are allowed to access certain parts of the system. Now, we do not want them to connect to those parts from home, or from a different computer than the one at their workplace (there are various reasons for that).
So my question is: if there is a way to have the workplace PC identify itself to the server somehow through the browser, how can I do that?
Oh, and yes, it is supposed to be web-based.
I hope I explained it so everyone understands.
Thanks in advance for your replies.
... dg
I agree with Lenni... IP address is a possible solution if the addresses are static, or if the DHCP server consistently assigns the same IP address to the same machine.
Alternatively, you might also consider authentication via "personal certificates"... that's what they are called in Firefox; I don't know if that's the standard name or not. (Obviously I haven't worked with these before.)
Basically they are SSL/PKI certificates installed on the client (user's) machine that identify that machine as being the machine it claims to be: if the user tries to connect from a machine that doesn't have a certificate, or doesn't have a certificate that you allow, you would deny them.
I don't know the issues around this... it might be relatively easy for the same user to take the certificate off one computer and install it on another with the correct password (i.e., it authenticates the user), or it might be keyed specifically to that machine somehow (i.e., it authenticates the machine). A quick Google search didn't turn up any obvious "how to" instructions for how it all works, but it might be worth looking into.
---Lawrence
Since you're going web-based, you can:
Examine the remote host's IP address (compare it against known internal subnets, etc.).
During the authentication process, ping the remote IP and look at the TTL on the returned packets; if it's too low, the computer can't be on the local network (of course this can be fooled, but it's one more hurdle). See the sketch after this list.
If you're running on IIS, you can integrate with SSO (probably the best option if you can do it).
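A rough sketch of the TTL heuristic from the second point, using System.Net.NetworkInformation; the threshold is an assumption (Windows machines typically start at a TTL of 128, many Unix systems at 64):

    // Ping the client's IP; if the reply TTL is far below a typical initial
    // value, the packet crossed several routers, so the host is not local.
    using System.Net.NetworkInformation;

    class TtlCheck
    {
        static bool LooksLocal(string clientIp)
        {
            const int minTtl = 120;   // assumed threshold for a 128-TTL LAN
            using (var ping = new Ping())
            {
                PingReply reply = ping.Send(clientIp, 1000);  // 1 s timeout
                return reply.Status == IPStatus.Success
                    && reply.Options != null
                    && reply.Options.Ttl >= minTtl;
            }
        }
    }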
If it's supposed to be web-based (and by that I mean that the web server should be able to uniquely identify the user's machine), then your choices are limited: per se, there's nothing you can obtain from the browser's headers or request body that identifies the machine. I suppose this is by design, due to the obvious privacy implications.
There are options, though, none of them pain-free: you could use an ActiveX control, which however only runs on Windows (and not in all browsers, I think) and requires elevated privileges. You could think of a Firefox plug-in (obviously Firefox only). At any rate, a plain-vanilla browser will otherwise escape identification.
There are only a few REAL solutions to this. Here are a couple:
Use domain authentication, and disallow users who are connecting over a VPN.
Use known IP ranges to allow or disallow access (see the sketch below).
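For the IP-range approach, the check is a simple mask comparison; the subnet and mask below are examples, not taken from the question:

    // Allow/deny based on whether the client IPv4 address falls inside
    // a known internal range, e.g. IsInRange("10.1.2.3", "10.1.0.0", "255.255.0.0").
    using System.Net;

    class SubnetCheck
    {
        static bool IsInRange(string clientIp, string network, string mask)
        {
            uint ip = ToUInt(IPAddress.Parse(clientIp));
            uint net = ToUInt(IPAddress.Parse(network));
            uint m = ToUInt(IPAddress.Parse(mask));
            return (ip & m) == (net & m);   // same subnet after masking
        }

        static uint ToUInt(IPAddress address)
        {
            byte[] b = address.GetAddressBytes();   // IPv4: big-endian order
            return (uint)((b[0] << 24) | (b[1] << 16) | (b[2] << 8) | b[3]);
        }
    }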
IP address. Not bombproof security but a start.