How to set up Kerberos in a load-balanced message broker environment? - load-balancing

We have IIB 10.0.0.12 running on Windows Server 2012 R2. We are looking to set up Kerberos token-based authentication for SOAP services that are exposed to internal/external consumers.
We have around 4 system test servers running on the same domain. The test servers are not load balanced; can we create a single user account (say "IIBTestPrincipal") in Active Directory, map multiple SPNs to this user account, and set up the test environments like below?
setspn -A HTTP/server3.somedomain.co.uk@SOMEDOMAIN.CO.UK IIBADPrincipal
setspn -A HTTP/server5.somedomain.co.uk@SOMEDOMAIN.CO.UK IIBADPrincipal
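(For reference, the mappings created above can be sanity-checked with setspn itself; IIBADPrincipal is just the account name used in the commands above:)
setspn -L IIBADPrincipal
setspn -X
The first command lists every SPN registered on the account, and the second searches the domain for duplicate SPNs, which would break Kerberos authentication.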
Can somebody please advise or guide on the process for setting up the same in a load-balanced environment?
We have 4 broker servers load balanced via NetScaler. Can the load balancer perform a Kerberos passthrough, with the broker performing all the Kerberos authentication work? If so, should we be creating an SPN on the load balancer's host name and mapping all the prod servers as aliases to that SPN?
Couldn't find much info in the Info Center. Any thoughts on the above are much appreciated.

NetScaler supports Kerberos impersonation and Kerberos constrained delegation. I'm not that familiar with Kerberos; take a look at their documentation:
https://support.citrix.com/article/CTX222453

Related

HAProxy with HTTPS and Kerberos

I'm trying to implement a reverse proxy in our system for a microservices architecture.
The proxy server is HAProxy, which does SSL termination and needs to proxy requests to a backend server with HTTPS and Kerberos authentication.
I succeeded in terminating SSL on the proxy server and passing the request to the HTTPS server (I need the termination in order to route requests by their body to specific backend services), but I am failing to authenticate with Kerberos on the backend server.
Is it possible to implement Kerberos auth on the proxy server and then pass the TGT to the different backend services?
I have successfully done this, and it took some work.
At the time I was using HDP, so I used Ambari to set up a Hive server on the HAProxy node. (This was done solely for the purpose of having Ambari manage the Kerberos principal; the Hive server itself never ran.)
Then I merged the keytab for my Hive server (on the proxy) with my Hive server keytabs so that the principal could be used on the Hive servers. I think I also allowed it as a principal to work with Hive. I'm sure there is another path that would allow you to use delegation, but this was the path of least resistance and meant that the keytab was mostly managed for me. I did have to re-merge the keytabs when they were regenerated, but it wasn't as bad as manually managing keytabs.
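For anyone trying to reproduce this, the merge itself can be done with MIT Kerberos ktutil along these lines; the keytab paths are only placeholders for wherever your distribution writes them:
ktutil
  rkt /etc/security/keytabs/hive.service.keytab
  rkt /etc/security/keytabs/proxy-hive.service.keytab
  wkt /etc/security/keytabs/hive-merged.keytab
  quit
klist -kt /etc/security/keytabs/hive-merged.keytab
rkt reads a keytab's entries into ktutil's buffer, wkt writes the combined buffer out as a new keytab, and klist -kt lists the merged entries so you can confirm both principals are present.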

Kerberos & Load Balancer

We are currently running a PHP application on Apache httpd with mod_auth_kerb for SSO. We'd like to scale it to multiple hosts and make it highly available while we're at it.
Generally, HAProxy seems to be the recommended tool for this task, so I'll refer to it for the rest of the post, though I am open to alternatives here. I haven't been able to find a way to combine HAProxy with Kerberos-based SSO; this seems to only be available on commercial load balancers (F5, for example).
We do not need the actual Kerberos ticket on the web servers; it's literally just for authentication. Is there a way to have HAProxy authenticate users via Kerberos and just pass the sAMAccountName as a header to the web servers? Alternatively, full passthrough would of course work as well.
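(Whichever way this ends up being solved, SPNEGO through the balancer can be tested end to end from a client that already holds a ticket, using curl; the user and hostname below are placeholders for the balanced front-end name:)
kinit someuser@EXAMPLE.COM
curl --negotiate -u : -v https://app.example.com/
With a valid ticket, the verbose output shows the Negotiate exchange in the request headers and a 200 response rather than a 401.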

Can you create Kerberos principals where the hostname is flexible? (Docker)

I'm specifically trying to do this with Apache Storm (1.0.2), but it's relevant to any service that is secured with Kerberos. I'm trying to run a secured Storm cluster in Docker. There are a number of out-of-the-box Docker images out there for Storm, and they work great unsecured. I'm using https://github.com/Baqend/docker-storm. I also have Storm running securely on RHEL VMs.
However, my understanding is that Kerberos ties hostnames to principals, so if I'm making service foobar available to clients, I need to create a principal of foobar/hostname@REALM. Then a client service might connect to hostname with principal foobar, Kerberos will look up foobar/hostname@REALM in its database, find that it's there (because we created a principal with exactly that name), and everything will work.
In my case, it's described here: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/configure_kerberos_for_storm.html. The nimbus authenticates as storm/<nimbus host>@REALM, and the supervisors and outside clients authenticate as storm@REALM. Everything works.
But here in 2017, we have containers and hostnames are no longer static. So how would I Kerberize a service that runs in Docker Data Center (or Kubernetes, etc.)? I would have to tie a hostname that isn't known in advance to the service's authentication. I imagine I could create a principal for every possible hostname and dynamically pick the right one at startup based on where the container lives, but that's kludgy.
Am I misunderstanding how Kerberos works? Is there a solution here that I don't see? I see multiple examples online of people running Storm in Docker, but I can't imagine that nobody's clusters are secure.
I don't know Apache Storm or Docker, but based on previous work with JBoss in a cluster, where an inbound client could be connecting to any one of a number of different hosts, you would simply assign a virtual name to the entire pool at the load balancer and Kerberize the service according to the virtual name instead of the individual host names. So if you're making service foobar available to clients, you need to create a service principal name (SPN) of foobar/virtualhostname@REALM in your directory to Kerberize the service with. You assign that SPN to a user account (not a computer account) to give it the flexibility to work with any Kerberized service instance which uses that SPN. If you are using Active Directory, you must create a keytab containing that SPN and place the keytab on each host running the Kerberized service instance foobar/virtualhostname@REALM.
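A rough sketch of that approach on the Active Directory side might look like the following; the account name svc-foobar, the virtual host name, and DOMAIN/REALM are placeholders rather than values from the question:
setspn -A foobar/virtualhostname svc-foobar
ktpass /princ foobar/virtualhostname@REALM /mapuser DOMAIN\svc-foobar /crypto AES256-SHA1 /ptype KRB5_NT_PRINCIPAL /pass * /out foobar.keytab
setspn registers the SPN on the user account, and ktpass writes a keytab for that SPN which can then be copied to every host behind the virtual name; running setspn -X afterwards is a good way to confirm the SPN is not duplicated anywhere in the domain.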

Configuring LDAP Authentication in Odoo

I have two servers:
1st server: Odoo 9 application hosted in Amazon EC2
2nd server: LDAP server hosted in my Synology NAS which is in a local area network
Right now, I would like to authenticate all the Odoo users against the LDAP server.
Things that I have done:
I have installed the Authentication via LDAP (auth_ldap) module in Odoo.
Configured the LDAP parameters in Odoo. Note: the actual IP address and domain were altered for security reasons. I need someone to check whether the configuration values are entered correctly.
Opened port 389 in my office network to the public and forwarded it to the LDAP server.
Tested with the ldapsearch command line from Amazon EC2 to ensure that both servers can communicate.
Somehow I am still not able to log in using an LDAP user in Odoo. What did I do wrong? Is there any other way to find out whether Odoo is communicating with the LDAP server?
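(For reference, a bind-and-search test from the EC2 host along these lines roughly mirrors what auth_ldap does at login time; the IP, bind DN, and base DN are placeholders, since the real values were altered in the question:)
ldapsearch -x -H ldap://203.0.113.10:389 -D "uid=admin,cn=users,dc=example,dc=com" -W -b "cn=users,dc=example,dc=com" "(uid=testuser)"
If that returns the user's entry, network connectivity and the bind credentials are fine, and the problem is more likely in the Odoo LDAP parameters themselves (base DN, LDAP filter, or the bind DN/password).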

Bamboo cloud agent's user account security questionable

When using a Bamboo cloud agent on Windows, you're instructed to create a Bamboo Windows user with a default, known password: Atlassian1.
It clearly says that this user should be configured so that remote login is denied.
But still, it's an active Windows user with a fair bit of permissions. Bamboo's (cloud) server interacts with the machine on a known port, 26224. Through this channel it sends all build commands, gets build status from the remote agent, and so on.
What prevents a hacker from scanning the Internet, finding a host with port 26224 open, and starting to talk to the Bamboo agent? How does the agent know for sure that it is talking to a legitimate Bamboo CI server?
I'm asking that in order to be fully confident that there is no possible attack vector.
The Security documentation for Bamboo states:
Please note the following security implications when enabling remote agents for Bamboo:
No encryption of data passed between server and agent — this includes data such as:
login credentials for version control repositories
build logs
build artifacts
No authentication of the agent or server — this could result in unauthorised actions being taken on your system, such as:
Unauthorised parties installing new remote agents — version control repository login credentials could be stolen.
Unauthorised parties masquerading as a Bamboo server — the unauthorised server could pass malicious code to the agent to run.
See Agent authentication for more information.
We strongly recommend that you do not enable remote agent installation on any Bamboo instance accessible from a public or untrusted network. Creating remote agents is disabled by default; see Disabling and enabling remote agents support.
For public-facing agents, Atlassian strongly recommends securing them, which is done using SSL. See Securing your remote agents, which contains this note:
This page applies to remote agents and not elastic agents. Elastic agents are secured automatically by the Bamboo server and no additional steps are required.
Furthermore, on the elastic piece, their documentation on Elastic Bamboo Security states:
All traffic sent between the agents located in EC2 and the Bamboo server is tunnelled through an SSL-encrypted tunnel. The tunnel will be initiated from the Bamboo Server to the EC2 instance, which means that you don't need to allow any inbound connections to your server. You will need to permit outbound traffic from the server on the tunnel port, however - the default port number is 26224. On the EC2 instance, only the tunnel port needs to be open for inbound traffic.
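(As an illustration of that last point, the inbound rule on the EC2 instance's security group only needs the tunnel port, ideally restricted to the Bamboo server's address; the security group ID and CIDR below are placeholders:)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 26224 --cidr 203.0.113.10/32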