API Connect On Premise OVA Properties - apiconnect

I noticed while deploying API Connect that the OVA has a variety of useful properties I could set to get my on-premises deployment up and running much more rapidly and securely. I couldn't find any documentation on them, though. Can someone explain what all of these properties do?
I've bolded the ones I really care about, but bonus points if you explain them all and show how to craft an ovftool command to set those properties.
hostname
domain
IP Address
Netmask
Gateway
Primary DNS
Secondary DNS
NTP
Domain Search
Cloud IP address
Username
Password
Email

These properties are referenced in https://www.ibm.com/support/knowledgecenter/en/SSMNED_5.0.0/com.ibm.apic.install.doc/overview_installing_mgmtvm_apimgmt.html
When configuring multiple (a cluster of) management nodes into a management service, use the Cloud IP address to tell the 2nd..nth nodes where to find the 1st (leave blank for the 1st node).
Email is the email address of the Cloud Manager "admin" user.
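For the ovftool part: OVF properties are passed with --prop:key=value. Below is a minimal sketch of a deployment command. Note that the --prop: key names shown are assumptions based on the labels above (the real keys are defined in the OVA's OVF descriptor), and the vCenter path, datastore, network, addresses, and password are placeholders. Running ovftool against the OVA with no target (ovftool apiconnect-management.ova) prints the actual property keys to use.

    ovftool \
      --acceptAllEulas \
      --name=apimgmt-01 \
      --datastore=datastore1 \
      --network="VM Network" \
      --diskMode=thin \
      --prop:hostname=apimgmt-01 \
      --prop:domain=example.com \
      --prop:ipAddress=192.0.2.10 \
      --prop:netmask=255.255.255.0 \
      --prop:gateway=192.0.2.1 \
      --prop:dnsPrimary=192.0.2.53 \
      --prop:dnsSecondary=192.0.2.54 \
      --prop:ntp=0.pool.ntp.org \
      --prop:searchDomain=example.com \
      --prop:user=admin \
      --prop:password='ChangeMe123!' \
      --prop:email=admin@example.com \
      apiconnect-management.ova \
      'vi://administrator@vcenter.example.com/Datacenter/host/Cluster'

Omit the Cloud IP property (or leave it blank) for the first management node; for the 2nd..nth nodes, set it to the first node's IP address, per the note above.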

Related

Does Azure networking use anti-spoofing and not route packets with unrecognised source IP addresses?

I have a non-Azure, non-Windows, non-Microsoft site-to-site tunnel set up between an Azure cloud environment and an on-premises LAN; at the Azure end, the proprietary (non-Microsoft) S2S host sits behind an Azure load balancer.
The proprietary tunnel is route-based, and as such I'd like to route connections all the way from our on-premises network to various resources in Azure.
e.g.
OnPrem Server -> OnPremFw -> (tunnel) -> CloudFW -> LB -> vNET1 -> vNET2 -> VMtarget
When packets hit the CloudFW, they are being "Hidden NAT'd", so the source IP address is translated from its On-premise IP address to an IP address recognised by Azure as directly associated with an Azure subnet range. In this case, things work as expected.
However, if I turn off the H-NAT so that packets carry their original on-premises source IP address into Azure, then no matter what security or routing rules I apply, nothing works.
Is it plausible that Azure is silently dropping these packets, screening them out as a kind of anti-spoofing measure?
I can't find any Azure documentation confirming this, but the behaviour I am seeing strongly implies this must be the case. Could anyone confirm?
Essentially, I would like to know whether it simply isn't possible to use "non-Azure" IP addresses in Azure routing and security configurations.
thanks
The answer to this question is no: it is possible to use non-Azure-defined IP addresses in Azure route table rules and in Azure NSG rules.
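For what it's worth, here is a minimal sketch of what that looks like with the Azure CLI; the resource group, route table and NSG names, the 10.50.0.0/16 on-premises prefix, and the 10.0.1.4 firewall address are all placeholders:

    # User-defined route sending the on-premises prefix to the firewall NVA
    az network route-table route create \
      --resource-group myRG --route-table-name myRouteTable \
      --name to-onprem \
      --address-prefix 10.50.0.0/16 \
      --next-hop-type VirtualAppliance \
      --next-hop-ip-address 10.0.1.4

    # NSG rule allowing inbound traffic from the non-Azure source range
    az network nsg rule create \
      --resource-group myRG --nsg-name myNSG \
      --name allow-onprem --priority 200 \
      --direction Inbound --access Allow --protocol '*' \
      --source-address-prefixes 10.50.0.0/16 \
      --destination-address-prefixes '*' --destination-port-ranges '*'

If traffic still disappears with rules like these in place, the usual culprits are a missing return route (the Azure subnet's route table needs a route back to the on-premises prefix via the firewall) or IP forwarding being left disabled on the firewall VM's NIC.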

Can you create Kerberos principals where the hostname is flexible? (Docker)

I'm specifically trying to do this with Apache Storm (1.0.2), but it's relevant to any service that is secured with Kerberos. I'm trying to run a secured Storm cluster in Docker. There are a number of out-of-the-box Docker images out there for Storm, and they work great unsecured; I'm using https://github.com/Baqend/docker-storm. I also have Storm running securely on RHEL VMs.
However, my understanding is that Kerberos ties hostnames to principals, so if I'm making service foobar available to clients, I need to create a principal of foobar/hostname@REALM. A client service then connects to hostname with service principal foobar; Kerberos looks up foobar/hostname@REALM in its database, finds that it's there (because we created a principal with exactly that name), and everything works.
In my case, it's the setup described here: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/configure_kerberos_for_storm.html. The nimbus authenticates as storm/<nimbus host>@REALM, and the supervisors and outside clients authenticate as storm@REALM. Everything works.
But here in 2017 we have containers, and hostnames are no longer static. So how would I Kerberize a service that runs in Docker Data Center (or Kubernetes, etc.)? I would have to attach a hostname I don't know in advance to the service's authentication. I imagine I could create a principal for every possible hostname and dynamically pick the right one at startup based on where the container lands, but that's kludgy.
Am I misunderstanding how Kerberos works? Is there a solution here that I don't see? I see multiple examples online of people running Storm in Docker, but I can't imagine that nobody's clusters are secure.
I don't know Apache Storm or Docker, but based on previous work with JBoss in a cluster, where an inbound client could be connecting to any one of a number of different hosts, you would simply assign a virtual name to the entire pool at the load balancer and Kerberize the service against that virtual name instead of the individual host names. So if you're making service foobar available to clients, you need to create a service principal (SPN) of foobar/virtualhostname@REALM in your directory and Kerberize the service with it. You assign that SPN to a user account (not a computer account) to give it the flexibility to work with any Kerberized service that uses that SPN. If you are using Active Directory, you must create a keytab containing the SPN and place that keytab on each host running the Kerberized service instance foobar/virtualhostname@REALM.
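A sketch of what that looks like on the KDC side; the realm EXAMPLE.COM, the virtual name, the svc-foobar account, and the keytab paths are placeholders:

    REM Active Directory: map the virtual-host SPN to a service account and export a keytab
    ktpass /princ foobar/virtualhostname@EXAMPLE.COM ^
           /mapuser svc-foobar@EXAMPLE.COM ^
           /crypto AES256-SHA1 /ptype KRB5_NT_PRINCIPAL ^
           /pass * /out foobar.service.keytab

    # MIT Kerberos equivalent: create the principal and export its keys to a keytab
    kadmin -q "addprinc -randkey foobar/virtualhostname@EXAMPLE.COM"
    kadmin -q "ktadd -k /etc/security/keytabs/foobar.service.keytab foobar/virtualhostname@EXAMPLE.COM"

Copy the resulting keytab to every container or host that can end up serving the virtual name; since they all present the same principal, it no longer matters which physical host (or container) the client lands on.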

TFS 2017, HTTPS Binding loses console permissions?

I've been trying all day to set up my instance of TFS2017 to work with HTTPS.
I've read the official setup guide, but it didn't help much.
My instance is attached to a domain, and the configuration was done with a user from the Administrators group. The domain account is properly referenced as an administration console user.
The setup was done with the default port 8080, and the domain account user can access the website as expected (hosted at http://machine-name:8080/tfs).
Now, when I change the IIS website binding to use HTTPS on port 443 with a valid wildcard certificate, set the hostname to tfs.mydomain.com, and require SSL, I can no longer get my user to authenticate.
I make the TFS Public URL point to https://tfs.mydomain.com/tfs.
I get prompted with the authentication box, but after many attempts the site just fails with a 401.
The tests are done from within the server environment to avoid firewall confusion.
My instance has two network cards on two separate networks. The first resolves to a public IP, the second to a private IP. I noticed the configuration works with the machine name, while it fails when the DNS name resolves to the public IP. Could this be the reason?
Thanks for your help
To perform these procedures, you must first meet some prerequisites, such as having the required permissions. Please double-check this first. Also make sure you have set up the corresponding ports, as noted below.
Important: The default port number for SSL connections is 443, but you must assign a unique port number for each of the following sites: Default Website, Team Foundation Server, Microsoft Team Foundation Server Proxy (if your deployment uses it), and SharePoint Central Administration (if your deployment uses SharePoint). You should record the SSL port number for each website that you configure. You will need to specify these numbers in the administration console for Team Foundation.
There is a very detailed tutorial about configuring HTTPS with SSL; please refer to Setting up HTTPS with Secure Sockets Layer (SSL) for Team Foundation Server.
To narrow down the IP issue, you could disable one of your two network cards and test with only one network card enabled at a time.
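To make the binding checks concrete, here is a sketch to run on the application tier; "Team Foundation Server" is the default IIS site name TFS creates, and tfs.mydomain.com is the host header from your setup (adjust both if yours differ):

    REM See which certificate, if any, is bound to port 443
    netsh http show sslcert ipport=0.0.0.0:443

    REM Add the HTTPS binding with the expected host header to the TFS site
    %windir%\system32\inetsrv\appcmd.exe set site /site.name:"Team Foundation Server" ^
        /+bindings.[protocol='https',bindingInformation='*:443:tfs.mydomain.com']

After the binding is in place, remember to update the Public URL in the TFS Administration Console to https://tfs.mydomain.com/tfs so the server and notification URLs match the new binding.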

Binding specific interface IP address to Azure Storage container connections

Our product has software-managed virtual networks and multiple local IP addresses from which network communications can be routed. One of our requirements is to ensure that outgoing traffic is routed from a specific, desired local IP when communicating with the Azure blob storage endpoint.
The Azure SDK does not seem to expose any means of specifying which local IP address to use for communications to the Azure blob endpoint. Please let us know if you think the SDK does expose such a facility and, if so, how we can utilize it.
If not, we are evaluating making changes to the azure-storage-java SDK source in order to support the local IP binding requirement.
Has this kind of situation been brought to your attention before? Do you have any suggestions as to how this might be accomplished?
Thanks,
Sowmya.
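Not an answer on the SDK side, but if the client library turns out to offer no hook, one OS-level workaround is to pin the preferred source address for the storage endpoint's prefix in the routing table, so the JVM never has to choose. A sketch, assuming a Linux host; the 52.239.0.0/16 prefix, the 10.0.0.1 gateway, and the 203.0.113.10 source address are placeholders to substitute with your blob endpoint's prefix and your desired local IP:

    # Prefer a specific local source address for traffic to the storage endpoint's prefix
    sudo ip route add 52.239.0.0/16 via 10.0.0.1 src 203.0.113.10

    # Confirm which source address the kernel will now pick for that destination
    ip route get 52.239.152.10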

How can I make Apache on an Amazon EC2 Linux box use the elastic IP instead of the private IP?

I've migrated a website to Amazon EC2 that hooks into a service we are using that is installed on another server (not on Amazon). Access to the API for that service is IP-restricted and done by sending XML data using http_build_query and stream_context_create in PHP.
If I want to connect to the service from a new server, I need to ask the vendor to add the new IP first. I did that by sending them the Elastic IP, but it doesn't work.
While trying to debug, I noticed that the output for $_SERVER['SERVER_ADDR'] is the private IP of the ec2 instance.
I assume that the server on the other side is receiving the same data, so it tries to authenticate the private IP.
I've asked the vendor to allow access from the private IP as well. It's not implemented yet, so I'm not sure if that solves the problem, but as far as I understand the way their API works, it will then try to pass data back to the IP it was contacted from, which shouldn't be possible because their server is outside the Amazon cloud.
I might be missing something really obvious here. I added a command to rc.local (running CentOS on my EC2 instance) that associates the elastic IP with the server upon startup using ec2-associate-address, and this seemed to help get a MySQL connection to another outside server working, but no luck with the above-mentioned API.
To rule out one thing: the API is accessed through HTTPS, with ports 80 and 443 (and a MySQL port) enabled in the security groups and tested. The domain and SSL are running fine.
Any hint highly appreciated - I searched a lot already, but couldn't find anything useful so far.
It sounds like both IPs (private and elastic) are active in your VM. Check by running ifconfig -a. If that's what's happening then the IP that gets used for external traffic will depend on the remote address and your VM's routing table. It could even vary from one connection to the next.
If that's what's going on, the quickest fix would be to ifconfig down the interface that has the private address. That should leave only the elastic address for all external connections. If that resolves the problem, you can script something that downs the private IP automatically after the elastic IP has been made active; or, if the elastic IP will be permanently assigned to this VM and you really don't need the private IP, you can permanently disassociate the private IP from this VM.
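A short sketch of those checks, assuming (as above) that a second interface carries the unwanted private address; eth1 and the 198.51.100.25 API host IP are placeholders to adjust to what your instance actually shows:

    # List all interfaces and the addresses active on them
    ip addr show

    # See which source address the routing table picks for the API host
    ip route get 198.51.100.25

    # Check which public IP the outside world sees for your outbound traffic
    curl https://checkip.amazonaws.com

    # If the unwanted private address is on a second interface, take it down
    sudo ifconfig eth1 down

The checkip output (or the vendor's access logs) is the most reliable way to confirm which public address your API calls actually leave with.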