OpenShift Origin single master multiple nodes - openshift-origin

With OpenShift Origin in a single-master, two-node installation, do you need to have Docker running on the master and configured with Docker storage (a Docker storage pool) as you do on the nodes? This is not clear to me, though the installation docs indicate that all OpenShift servers require Docker. I'm not sure why the master would need it. Thank you.

Related

Unable to connect redis cluster outside docker container

I have set up a Dockerized Redis cluster on Windows 10. I can connect to the cluster from inside a container, but when I try to fetch cluster node info from my web application, which is not running in Docker, it gives me this error:
org.springframework.data.redis.ClusterStateFailureException: Could not retrieve cluster information. CLUSTER NODES returned with error.
I have created the Redis cluster in different ways, but none of them worked for me:
Created the Redis cluster in bridge network mode and published the respective ports, but got the same error.
Tried different port mappings:
Only mapped the port and skipped the IP. docker inspect shows the port mapping as (0.0.0.0:7000->6379/tcp).
Provided both IP and port, trying the machine's host IP, 127.0.0.1, and the Docker NAT IP. docker inspect shows the port mappings as (127.0.0.1:7000->6379/tcp), (hostip:7000->6379/tcp), (dockerNatIP:7000->6379/tcp).
Created the Redis cluster in --network=host mode.
a. Did not publish ports, since doing so is not ideal in host mode.
b. Also published ports in the same manner as mentioned above.
But I was getting the same error.
Finally, I set up Redis without Docker on Windows, and I was able to connect to the cluster and fetch node info.
Is it not possible to connect to a Dockerized Redis cluster from an application running outside Docker?
I tried all the steps in:
Unable to connect to dockerized redis instance from outside docker
https://simplydistributed.wordpress.com/2018/08/31/redis-5-bootstrapping-a-redis-cluster-with-docker/
https://get-reddie.com/blog/redis4-cluster-docker-compose/
https://www.ionos.com/community/hosting/redis/using-redis-in-docker-containers/
https://medium.com/commencis/creating-redis-cluster-using-docker-67f65545796d
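A common cause of this error is that each Redis node registers its internal container IP in the cluster, so CLUSTER NODES returns addresses that are unreachable from outside Docker, and clients fail when they follow those addresses. Redis 4+ provides cluster-announce settings to advertise a reachable address instead. A minimal redis.conf sketch for one node, assuming a host IP of 192.168.1.10 and published ports 7000/17000 (all values are placeholders for your environment):

```
# redis.conf fragment for one cluster node (values are placeholders)
port 6379
cluster-enabled yes
cluster-config-file nodes.conf

# Advertise the Docker host's address and published ports instead of the
# container's internal IP, so clients outside Docker can reach this node
# after a MOVED/ASK redirect:
cluster-announce-ip 192.168.1.10
cluster-announce-port 7000
cluster-announce-bus-port 17000
```

The bus port (client port + 10000 by convention) must also be published on the host, since cluster nodes gossip over it.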

Can Opereto be installed on any cloud-native environment?

I saw that Opereto can be installed on a single node using docker-compose. However, I would like to scale by installing Opereto on Kubernetes. Is it supported as well?
Thanks
Opereto is now released with two delivery methods: docker-compose for a small-footprint single-node installation, and a Kubernetes cluster.
https://docs.opereto.com/installation-get-started/
You can install Opereto on any environment that supports vanilla Kubernetes. There might be some differences in the deployment commands if you use the oc command instead of kubectl, but it should be straightforward to work out.
Please note, however, that Opereto requires an HTTPS ingress to be configured. Ingress configuration may differ from one K8s provider to another.
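As a starting point, an HTTPS ingress for such a deployment might look like the sketch below. The host name, service name, port, and TLS secret are all placeholders, not actual Opereto values; check the Opereto installation docs for the real service name and port.

```
# Hypothetical Ingress sketch -- all names below are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opereto-ingress
spec:
  tls:
  - hosts:
    - opereto.example.com
    secretName: opereto-tls      # TLS cert/key secret for HTTPS
  rules:
  - host: opereto.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: opereto        # assumed service name
            port:
              number: 443
```

On OpenShift, the equivalent would typically be a Route rather than an Ingress, which is one of the places where oc-based deployments differ from plain kubectl.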

GlassFish 3.1 remote instance can't connect to database

In GlassFish 3.1 I have two instances on two SSH nodes, and they work fine in a cluster. I created a third SSH node and added its instance to the cluster, so the cluster now has three instances on three remote SSH nodes.
The web service runs on the third node, but it can't connect to the database. I believe the new instance has the same connectors, configuration, and resources as the other two, since it was added to the cluster and all instances share the same cluster config.
I am new to GlassFish, please help me out.
Thanks
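One thing worth checking in a setup like this: the shared cluster config distributes the JDBC resource definitions, but the JDBC driver JAR itself must exist on each node's filesystem, and the database must be reachable from the new machine. A few hedged diagnostic steps (the pool name, cluster name, and paths below are placeholders, not values from this question):

```
# Ping the JDBC connection pool from the DAS (pool name is a placeholder):
asadmin ping-connection-pool my-db-pool

# Confirm the JDBC resource is targeted at the cluster, not just server:
asadmin list-jdbc-resources myCluster

# On the third node, verify the JDBC driver JAR was copied there too
# (path is illustrative; use your actual domain/node directory):
ls /path/to/glassfish3/glassfish/domains/domain1/lib/
```

A missing driver JAR on the new node, or a firewall rule blocking the database port from that machine, would produce exactly this symptom while the cluster config still looks identical.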

Connect OpsCenter and DataStax agent running in two Docker containers

There are two containers running on two physical machines: one container for OpsCenter, and the other for DataStax Cassandra plus the OpsCenter agent. I have manually installed the OpsCenter agent on each Cassandra container. This setup is working fine.
But OpsCenter cannot upgrade the nodes because the SSH connections to them fail. Is there any way to create an SSH connection between those two containers?
In Docker you should NOT run SSH; read HERE why. If, after reading that, you still want to run SSH, you can, but it is not the same as running it on Linux/Unix. This article has several options.
If you still want to SSH into your container, read THIS and follow the instructions. It will install OpenSSH. You then configure it and generate an SSH key that you will copy/paste into the DataStax OpsCenter agent upgrade dialog box when prompted for security credentials.
Lastly, upgrading the agent is as simple as moving the latest agent JAR (or whichever version of the agent JAR you want to run) into the DataStax agent's bin directory. You can do that manually and redeploy your container, which is much simpler than using SSH.
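That manual JAR swap can also be baked into the image instead of being done over SSH. A hedged Dockerfile sketch -- the base image name, JAR file name, and agent install path below are assumptions for illustration, not actual DataStax paths:

```
# Hypothetical sketch: bake the desired agent JAR into the image
# (base image, JAR name, and target path are placeholders)
FROM my-cassandra-with-agent:latest
# Replace the agent JAR at build time instead of patching a running
# container over SSH:
COPY datastax-agent.jar /opt/datastax-agent/bin/datastax-agent.jar
```

Rebuild and redeploy the container (docker build, then docker run) to pick up the new agent version, which keeps the container image the single source of truth.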
Hope that helps,
Pat

How can I manage puppet agent nodes in the internal network from a puppet master in the internet?

All the puppet agent nodes are within an internal network, and thus do not each have a unique public IP, while the puppet master node is on the internet. How can I accomplish this? Any help will be greatly appreciated.
A node's IP address doesn't participate in the authentication process with the puppet master. Also, the master does not need to connect directly to the nodes; it is each node's responsibility to contact the master, which then provides the compiled catalog for that node.
What's important is that each node's FQDN is unique among all the nodes managed by the master. As long as:
the puppet master can be reached via a known FQDN (by default configured as puppet on the agents; in your case this might be changed to a full domain name mapped to a public IP),
the puppet agents have access to the internet (or, more specifically, at least to the IP address of the puppet master),
the puppet agents each have a unique fully qualified domain name configured on them (which you can see with hostname -f),
your master can manage your agents without any special configuration.
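In agent terms, that boils down to two settings in puppet.conf on each internal node. The master's DNS name and the certname below are placeholders for illustration (and the config path varies between Puppet versions):

```
# /etc/puppetlabs/puppet/puppet.conf on each agent (placeholder values)
[main]
# Public DNS name of the master, resolvable from inside the network:
server = puppet.example.com

[agent]
# Must be unique across all agents; defaults to the node's FQDN:
certname = web01.internal.example
```

As long as each agent can resolve and reach the server address, outbound HTTPS from behind NAT is all the agents need; the master never initiates connections to them.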