Spring Cloud Config serves configuration based on app name, profile and label.
I'm wondering how to differentiate the configuration of two instances of the same service in the same environment. They should basically have the same configuration, but, for example, in a testing environment I would like to allow running them on the same host, so I need different ports.
How do you solve this? Using fake profiles (dev-1, dev-2)? Or is there some better way?
There is no facility for individual instance configuration, as you noted. You could do the fake-profile thing. But for the port, why not just set server.port?
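For example, each instance can be given its port on the command line at startup, while everything else still comes from the shared config served by Spring Cloud Config (the jar name and port values below are illustrative):

```shell
# first instance
java -jar myservice.jar --server.port=7001
# second instance on the same host
java -jar myservice.jar --server.port=7002
# or let each instance pick a random free port:
java -jar myservice.jar --server.port=0
```

Command-line arguments take precedence over the config server's properties, so the two instances stay identical except for the port.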
I want to know if we can change a remote Express Gateway's config from some other service (which might or might not be behind the gateway). Is there an API exposed for admins to enable changing the config without having to change the Docker image of Express Gateway?
Our use case: we have a tenant-based infrastructure and want to change the config at runtime without container restarts or image changes. The documentation says config changes will be hot reloaded.
If the above is not possible, can you suggest the best alternative for changing files in a remote Docker container from another service?
Thanks in advance.
Yes, the Express Gateway Admin API has endpoints to add, remove, list, or change the following entities:
Policies
Service Endpoints
API Endpoints
Pipelines
I have not used them, but the documentation suggests that they update the gateway.config.yaml configuration file.
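As a sketch, assuming the Admin API is enabled on its default port 9876 (the host, the `backend` endpoint name, and the payload below are made up, so check the Admin API reference for the exact resource names and schema):

```shell
# list the current service endpoints
curl http://localhost:9876/service-endpoints

# create or update a service endpoint without touching the image
curl -X PUT http://localhost:9876/service-endpoints/backend \
     -H 'Content-Type: application/json' \
     -d '{"url": "http://tenant-a.internal:8080"}'
```

If these calls do write through to gateway.config.yaml as the documentation suggests, the hot reload should pick them up with no container restart or image change.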
I'm specifically trying to do this with Apache Storm (1.0.2), but it's relevant to any service that is secured with Kerberos. I'm trying to run a secured Storm cluster in Docker. There are a number of out-of-the-box Docker images out there for Storm, and they work great unsecured; I'm using https://github.com/Baqend/docker-storm. I also have Storm running securely on RHEL VMs.
However, my understanding is that Kerberos ties hostnames to principals, so if I'm making service foobar available to clients, I need to create a principal of foobar/hostname@REALM. Then a client service might connect to hostname with principal foobar; Kerberos will look up foobar/hostname@REALM in its database, find that it's there (because we created a principal with exactly that name), and everything will work.
In my case, it's described here: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/configure_kerberos_for_storm.html. The nimbus authenticates as storm/<nimbus host>@REALM, and the supervisors and outside clients authenticate as storm@REALM. Everything works.
But here in 2017, we have containers, and hostnames are no longer static. So how would I Kerberize a service that runs in Docker Datacenter (or Kubernetes, etc.)? I have to attach an as-yet-unknown hostname to the server authentication. I imagine I could create a principal for every possible hostname and dynamically pick the right one at startup based on where the container lives, but that's kludgy.
Am I misunderstanding how Kerberos works? Is there a solution here that I don't see? I see multiple examples online of people running Storm in Docker, but I can't imagine that nobody's clusters are secure.
I don't know Apache Storm or Docker, but based on previous work with JBoss in a cluster, where an inbound client could be connecting to any one of a number of different hosts, you would simply assign a virtual name to the entire pool at the load balancer and Kerberize the service according to the virtual name instead of the individual host names.

So if you're making service foobar available to clients, you need to create a service principal (SPN) of foobar/virtualhostname@REALM in your directory to Kerberize the service with. You assign that SPN to a user account (not a computer account) to give it the flexibility to work with any Kerberized service instance which uses that SPN. If you are using Active Directory, you must create a keytab with the SPN inside of it, and place the keytab on each host running a Kerberized service instance as foobar/virtualhostname@REALM.
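With MIT Kerberos, that setup looks roughly like this (the realm, virtual hostname, and keytab path are illustrative; on Active Directory you would instead use setspn/ktpass against the user account):

```shell
# on the KDC: create the SPN for the virtual name, not an individual host
kadmin -q "addprinc -randkey foobar/virtualhostname@EXAMPLE.COM"

# export it to a keytab
kadmin -q "ktadd -k foobar.keytab foobar/virtualhostname@EXAMPLE.COM"

# distribute foobar.keytab to every host (or container) running an
# instance of the service behind the load balancer
```

Because every instance holds the same keytab for the virtual name, it no longer matters which physical host or container a client lands on.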
I have a suite of automated tests that run on a Selenium Grid. I have a need now to run these tests in multiple environments (QA, Stage, Production). The environments will be set up using a different DNS server for each one. So a test targeting the QA environment should use the QA DNS, Stage tests should use the Stage DNS, etc.
Ideally, I would like my test suite (which runs in Jenkins and accepts a parameter for which environment to target) to be able to tell the grid to allocate a node, set its DNS servers to (whatever), run the test, then put the DNS servers back the way it found them.
I don't see anything in Selenium's documentation about changing DNS settings on the individual nodes. I also tried looking for browser capabilities that could handle this, but no luck there either. What's the cleanest way to make this happen?
EDIT: The requirement to switch DNS servers is a new one, so there's currently no method in place (manual or automatic) for doing it. Before using this DNS-based method of differentiating environments, we were using environment-specific hostfiles, and switching between them with a custom service that listened on each node for a hostfile-switch request. We might have to create a similar service for switching DNS settings, but I was hoping there was something more "official" than that.
We worked around this issue by setting up a proxy server for each environment and configuring each proxy server to use the environment-specific DNS settings. Selenium permits setting a proxy on the individual nodes, so this was a way to modify those settings programmatically.
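If you go the proxy route, the per-environment proxy can be injected through the capabilities you already pass to the grid; a minimal sketch in Python (the proxy hostnames and ports are made up):

```python
def proxy_capabilities(environment: str) -> dict:
    """Build Selenium capabilities that route a node's browser traffic
    through an environment-specific proxy (hostnames are hypothetical)."""
    proxies = {
        "qa": "proxy-qa.example.com:3128",
        "stage": "proxy-stage.example.com:3128",
        "prod": "proxy-prod.example.com:3128",
    }
    proxy = proxies[environment]
    return {
        "browserName": "chrome",
        "proxy": {
            "proxyType": "manual",
            "httpProxy": proxy,
            "sslProxy": proxy,
        },
    }

# hand the result to the grid when requesting a session, e.g.:
# driver = webdriver.Remote("http://grid:4444/wd/hub",
#                           desired_capabilities=proxy_capabilities("qa"))
caps = proxy_capabilities("qa")
```

This keeps the nodes themselves untouched: the Jenkins environment parameter selects the capabilities, and the grid does the rest.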
I have created a cluster with 2 servers and I have developed a sample application. I can access this application from the IP addresses of these servers (10.0.0.3:7002/sample/ and 10.0.0.4:7002/sample/), but I don't know whether the cluster is working or not. Can I access this web application from a single address, like myclusteraddress:7002/sample/?
You can accomplish this task in two ways...
First Way
You need to put a load balancer (such as an F5) in front of both servers; it automatically manages the traffic and serves the user requests.
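Any reverse proxy can stand in for an F5 here; a minimal nginx sketch using the two server addresses from the question (myclusteraddress is whatever DNS name you point at the balancer):

```nginx
upstream sample_cluster {
    server 10.0.0.3:7002;
    server 10.0.0.4:7002;
}

server {
    listen 7002;
    server_name myclusteraddress;

    location / {
        proxy_pass http://sample_cluster;
    }
}
```

This also answers the "is the cluster working" question: stop one server and requests to myclusteraddress:7002/sample/ should keep succeeding via the other.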
Second Way
You can do a DNS cutover for that website; it's almost the same as the above approach.
I have multiple web applications running in CloudBees RUN@cloud. I need to communicate between these applications using HTTP. I have the SSL router and custom domain names configured for all these apps too. Should I use the custom domain names or the default xxx.cloudbees.net address for the communication?
I understand that using SSL gives me better security, but I'm thinking more in the lines of performance, flexibility and data transfer costs.
Using a custom domain name will only "consume" a DNS lookup to resolve to the appropriate cloudbees.net node. I don't think one name or the other will have any significant impact on performance or network costs, as the IP address is resolved to internal routes.