Say I have an HTTP server with instances running on machines virt01 through virt09, which have CNAMEs svc01 through svc09. I want to add Kerberos auth to it.
Assume:
I'm on AD domain example.com
My host DNS entries are host.example.com
My Kerberos realm is EXAMPLE.COM.
From answers such as this one, I figured that the keytab has to contain entries such as:
HTTP/virt01.example.com@EXAMPLE.COM
...
HTTP/virt09.example.com@EXAMPLE.COM
HTTP/svc01.example.com@EXAMPLE.COM
...
HTTP/svc09.example.com@EXAMPLE.COM
in order for browsers and other clients (such as other non-interactive services) to be able to authenticate against the servers. Is the above correct?
If it is, a follow-up question: is there a way to make a "service alias", so to speak, so I can put just one entry in the keytab:
HTTP/svc-alias.example.com@EXAMPLE.COM
somehow? This is so I can, for example, move the service to other hosts without having to regenerate the keytab with a new host and CNAME added. It is especially important for local testing: if this is tested on workstation583, a new keytab entry for that workstation would have to be made, which is really inconvenient.
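If the service runs under a single AD service account, one way to approximate such an alias is to register the alias SPN on that account and export one keytab for it. A sketch using the standard Windows AD tools (the account name svc-http and output file are assumptions, not from the original question):

```shell
:: Register the alias SPN on the shared service account (run as a domain admin)
setspn -S HTTP/svc-alias.example.com EXAMPLE\svc-http

:: Export a keytab containing only the alias principal
ktpass -princ HTTP/svc-alias.example.com@EXAMPLE.COM ^
    -mapuser svc-http@EXAMPLE.COM -crypto AES256-SHA1 ^
    -ptype KRB5_NT_PRINCIPAL -pass * -out svc-alias.keytab
```

Browsers typically build the SPN from the hostname in the URL, so as long as clients reach the service via svc-alias.example.com (wherever that DNS name currently points), the single keytab entry can suffice.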
If not possible, what is the easiest way to manage adding / removing hostnames? How is this done in practice with multi-server deployments to make it manageable?
Any resources answering any of the above are appreciated as well.
I guess this is a very simple task, but I can't manage to get SSL working on GitLab Pages. The GitLab Pages documentation is too vague for me.
For example, when they say "Make sure your domain doesn't have an AAAA DNS record.", does that mean the subdomain (say gitlab.mysite.com) shouldn't have an AAAA record, or that my whole DNS configuration shouldn't have such a record?
Also, if it's the latter, how can I make this work?
Maybe someone has a link to a good tutorial for this, because I really struggle to find simple information (not assuming any prior knowledge about SSL/GitLab).
I just went through the whole process beginning to end and set up a GitLab Pages website on a custom domain with a Let's Encrypt certificate -- it worked like a charm.
I had to:
a) set up a TXT record to verify domain ownership, and
b) add an A record to point at the GitLab Pages IP address (since my domain DNS management provider didn't allow me to set up a domain-level CNAME)
After this, GitLab went and fetched a Let's Encrypt certificate for my Pages web site.
I didn't have a AAAA record, so that didn't come into the picture.
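For anyone checking their own setup, the relevant records can be inspected with dig (the domain below is illustrative, and the exact verification TXT record name should be taken from what GitLab shows in the Pages domain settings):

```shell
dig +short A gitlab.mysite.com
dig +short AAAA gitlab.mysite.com   # ideally returns nothing, per the docs
dig +short TXT _gitlab-pages-verification-code.gitlab.mysite.com
```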
As per the GitLab Pages documentation, section "GitLab Pages integration with Let's Encrypt":
Caution: This feature covers only certificates for custom domains
Issue 3342 is open to add support for sub-domains.
If you are still having trouble, let me know, I'd be happy to help with this.
I have been working on linking my AD to G Suite with an automatic sync established. The reason I put this here is that I have had a hard time figuring everything out. I am still not at the end of this procedure, and I would appreciate it if skilled people would contribute to help me, and I guess many others as well, on this topic.
I downloaded the GCDS tool (4.5.7) and installed it on a member server. I tried to go through the steps and failed at all of them except the first one, authenticating to Google.
Learnt: It is a Java (Sun) based product, and when it comes to authentication or SSL it will throw errors that need to be sorted.
Step 1, Google Auth - done and very simple as long as you can log on to your GAE account
Step 2, LDAP config... this was tricky
I created a service account to use
Learnt:
You need to have the sAMAccountName matching the displayName and name as well; only this way could I authenticate.
In most cases you don't need any admin rights, a domain user should be able to read the DN structure from LDAP.
I have the OU structure, but I need LDAP working on the DC (this works somehow)
Learnt:
Simple connection through port 389;
SSL would use port 636;
in most cases GCDS only uses Simple authentication!
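For reference, both connection styles can be exercised with the standard OpenLDAP client tools before pointing GCDS at the DC (the hostname and bind DN below are made-up placeholders):

```shell
# Simple bind over plain LDAP on 389
ldapsearch -H ldap://dc01.example.com:389 \
    -D "CN=gcds-svc,OU=Service Accounts,DC=example,DC=com" -W \
    -b "DC=example,DC=com" "(objectClass=user)" sAMAccountName

# The same search over LDAPS on 636 (the DC certificate must be trusted)
ldapsearch -H ldaps://dc01.example.com:636 \
    -D "CN=gcds-svc,OU=Service Accounts,DC=example,DC=com" -W \
    -b "DC=example,DC=com" "(objectClass=user)" sAMAccountName
```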
Learnt:
With port 389, the domain group policy needed to be changed so LDAP signing is not required (Domain controller: LDAP server signing requirements set to None!) to be able to log on - this one is working and good for DEVSERV
Question: Should I use it for PRODSERV, or do I need to aim for SSL?
Learnt:
With port 636 (SSL) you need a certificate
Question: I tried to add a self-signed cert based on the following article and added it to the trusted root certificates, but Google cannot see it?
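One likely cause, assuming GCDS behaves like other Java apps: it reads the bundled JRE's cacerts truststore, not the Windows certificate store, so importing the cert into Windows' trusted roots is not enough. A sketch with keytool (install path and alias are assumptions; changeit is the default cacerts password):

```shell
keytool -importcert -alias dc01 -file dc01.cer ^
    -keystore "C:\Program Files\Google Cloud Directory Sync\jre\lib\security\cacerts" ^
    -storepass changeit
```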
The BASE DN can be read out through LDP.EXE (the built-in LDAP browser from MS)
Learnt:
You can add just the OU you want; it doesn't have to be the root of the tree
Question: does that mean you have implemented extra security?
Step 3, defining what data to collect. I picked OU and person.
Learnt:
Profile sync will send extra information to Google, such as job title, phone, etc. I only wanted them for the company signature... Well, it is still not clear whether this can be done. If it is not possible, I can't see why I should disclose unwanted information to be stored on another server.
Question: Can the job description be included in the Google Mail signature?
I keep adding my findings as I work through this, but I would appreciate any input from people who have managed to set it up.
Step 4, searching in the Organisation Unit - confusing again, but it is done. (More to follow.)
I have a Java-based application running in Tomcat. It is an online app; requests first go to Apache, which then forwards them to Tomcat.
Today I was not able to log into my application, and I noticed warnings in the catalina.out file. They said "An attempt was made to authenticate the locked user "root"" and "An attempt was made to authenticate the locked user "manager"".
In my localhost_access_log.2015-07-07.txt I found the below IP addresses trying to access the system.
83.110.99.198
117.21.173.36
I need to block these 2 IPs from accessing my system. The first IP is well known and blacklisted according to the anti-hacker-alliance. How can I do this?
FYI, I am using Apache 2, so the main configuration file is apache2.conf.
(Please don't remove the IP addresses I listed above, as I need other developers to be aware of the threat as well.)
If you're using VPC:
The best way to block traffic from particular IPs to your resources is using NACLs (Network Access Control Lists).
Do a DENY for all protocols INGRESS for these IPs. This is better than doing it on the server itself, as it means traffic from these IPs will never even get as far as your instances; they will be blocked at the VPC level.
NACLs are on the subnet level, so you'll need to identify the subnet your instance is in and then find the correct NACL. You can do all of this using the VPC Dashboard on the AWS console.
This section of the documentation will help you:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
Note that you will need to give these 2 blocking rules rule numbers less than the default rule number (100) - use 50 and 51, for example.
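The same two rules can also be created with the AWS CLI instead of the console (the NACL id below is a placeholder for your subnet's actual NACL):

```shell
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
    --rule-number 50 --protocol -1 --rule-action deny --ingress \
    --cidr-block 83.110.99.198/32
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
    --rule-number 51 --protocol -1 --rule-action deny --ingress \
    --cidr-block 117.21.173.36/32
```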
You can use an .htaccess file:
Order Deny,Allow
Deny from 83.110.99.198
Deny from 117.21.173.36
It's probably better to add this as a firewall rule, though. Are you using any firewall service now?
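Note that Order/Deny is the old Apache 2.2 syntax; on Apache 2.4 the equivalent is mod_authz_core's Require directives. A sketch of the same two blocks (same IPs as above):

```apache
<RequireAll>
    # Allow everyone except the two offending addresses
    Require all granted
    Require not ip 83.110.99.198
    Require not ip 117.21.173.36
</RequireAll>
```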
My client owns "domain.com". We need to give various applications friendly names for internal and external access. The applications are WCF web services and MVC web applications with varying levels of authentication (Windows auth within and across AD domains and plain text authentication). It looks a little like this:
UAT Environment
service1.uat.services.domain.com
service2.uat.services.domain.com
service3.uat.services.domain.com
service4.uat.services.domain.com
application1.uat.apps.domain.com
application2.uat.apps.domain.com
Production Environment
service1.services.domain.com
service2.services.domain.com
service3.services.domain.com
service4.services.domain.com
application1.apps.domain.com
application2.apps.domain.com
We're likely to have a LOT more sub domains, and everything needs to be secured with SSL.
We've changed our minds on how to configure this a number of times, but now we've hit a possible restriction. We thought a wildcard SSL certificate might work, but apparently a wildcard only covers a single level of subdomain, i.e. *.services.domain.com.
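The single-label restriction can be illustrated with a small sketch of the RFC 6125 matching rule (simplified; real certificate validation also handles IDNA, multiple SAN entries, etc.):

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Simplified RFC 6125-style wildcard check: '*' may stand in for
    exactly one DNS label, and only the leftmost one."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    # '*' cannot absorb more than one label, so label counts must match
    if len(p_labels) != len(h_labels):
        return False
    head, tail = p_labels[0], p_labels[1:]
    if head != "*" and head != h_labels[0]:
        return False
    return tail == h_labels[1:]

# A wildcard one level up does not cover two levels of subdomain:
print(wildcard_matches("*.services.domain.com", "service1.services.domain.com"))  # True
print(wildcard_matches("*.domain.com", "service1.services.domain.com"))           # False
# ...but a dashed single-label scheme fits under *.domain.com:
print(wildcard_matches("*.domain.com", "service1-services.domain.com"))           # True
```

This is why the dashed naming scheme below works with a single *.domain.com certificate: every name stays one label deep.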
Because of budget, we'd like to register a single wildcard SSL certificate and apply it to multiple servers (belonging to multiple AD Domains, and also a few servers in our DMZ).
This morning I had an idea, but I don't know enough about this stuff to make a definite decision. Do any of you foresee any restrictions on using the following naming convention instead of the above?
service1-uat-services.domain.com
service2-uat-services.domain.com
service3-uat-services.domain.com
service4-uat-services.domain.com
application1-uat-apps.domain.com
application2-uat-apps.domain.com
service1-services.domain.com
service2-services.domain.com
service3-services.domain.com
service4-services.domain.com
application1-apps.domain.com
application2-apps.domain.com
That way, we can register a wildcard for *.domain.com and use a single level subdomain for each application / service, but still allow us to keep everything logically separate. Are there any technical issues anyone can identify using this set up?
There shouldn't be any problem with that.
I am trying to block access to our OpenLDAP server's namingContexts. The server hosts directories for several DNs, and we do not want anyone to be able to identify which DNs are hosted by the server.
I understand that namingContexts is an operational attribute and part of the rootDSE. Obviously, LDAP clients need access to some entries of the rootDSE in order to operate properly.
On the other hand, it looks like rootDSE entries are also subject to ACLs.
The question is whether the namingContexts attribute is required to be publicly readable in order for a client to connect to the server, or whether it can be restricted. If the latter, what would be a suitable ACL for this? We use OpenLDAP.
The following access control:
access to attrs="namingContexts" by * none
denies access to namingContexts.
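If internal clients still need to discover the naming contexts, the rule can instead be scoped by source address rather than a blanket deny; a sketch in slapd.conf style (the internal network below is an assumption):

```
access to dn.base="" attrs=namingContexts
    by peername.ip=192.168.0.0%255.255.0.0 read
    by * none
```

As usual with OpenLDAP ACLs, this must appear before any broader "access to *" rule, since the first matching rule wins.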