GlassFish 3.1 remote instance can't connect to database

In GlassFish 3.1 I have two instances on two SSH nodes, and they work fine in a cluster. I created a third SSH node and added its instance to the cluster, so the cluster now has three instances on three remote SSH nodes.
The web service runs on the third node, but it can't connect to the database. I believe the new instance has the same connectors, configuration, and resources as the other two, since it was added to the cluster and all instances share the same cluster config.
I am new to GlassFish, please help me out.
Thanks
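For anyone debugging the same setup: since all three instances share the cluster config, a common culprit is the database host not accepting connections from the new node's IP rather than a missing resource. A minimal way to check from the DAS, with hypothetical resource names (myCluster, myPool):

    # Confirm the JDBC resource is actually targeted at the cluster,
    # then ping the connection pool (both are standard asadmin commands):
    asadmin list-jdbc-resources myCluster
    asadmin ping-connection-pool myPool
    # If the ping succeeds here but the third instance still fails,
    # check the database's allowed-hosts / firewall rules for that node.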

Related

Unable to connect to redis cluster from outside docker container

I have set up a dockerized Redis cluster on Windows 10. I can connect to the cluster from inside a container, but when I try to fetch cluster node info from my web application, which is not running in Docker, it gives me this error:
org.springframework.data.redis.ClusterStateFailureException: Could not retrieve cluster information. CLUSTER NODES returned with error.
I have created the Redis cluster in different ways, but none of them worked for me.
Created the Redis cluster in bridge network mode and published the respective ports, but got the same error.
Tried different port mappings:
only mapped the port and skipped the IP; docker inspect shows the port mapping as (0.0.0.0:7000->6379/tcp)
provided both IP and port (machine host IP, 127.0.0.1, and the Docker NAT IP); docker inspect shows the port mappings as (127.0.0.1:7000->6379/tcp), (hostip:7000->6379/tcp), (dockerNATIP:7000->6379/tcp)
Created the Redis cluster in --network=host mode:
a. did not publish ports, as it is not ideal to do that;
b. also published ports in the same manner as mentioned above.
But I was getting the same error.
Finally, I set up Redis without Docker on Windows and was able to connect to the cluster and fetch node info.
Is it not possible to connect to a dockerized Redis cluster from an application running outside Docker?
I tried all the steps from:
Unable to connect to dockerized redis instance from outside docker
https://simplydistributed.wordpress.com/2018/08/31/redis-5-bootstrapping-a-redis-cluster-with-docker/
https://get-reddie.com/blog/redis4-cluster-docker-compose/
https://www.ionos.com/community/hosting/redis/using-redis-in-docker-containers/
https://medium.com/commencis/creating-redis-cluster-using-docker-67f65545796d
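For context: Redis Cluster is known to be NAT-unfriendly. Each node advertises its own IP plus two ports (the data port and the cluster bus port, data port + 10000) through CLUSTER NODES, so a client outside Docker is redirected to container-internal addresses it cannot reach, which matches the ClusterStateFailureException above. Since Redis 4.0 a node can be told what to advertise instead; a minimal sketch for one node, with illustrative IP and ports:

    # Config for one node; repeat with different ports for the others.
    cat > redis.conf <<'EOF'
    port 7000
    cluster-enabled yes
    # IP of the Docker host as seen by the outside client (illustrative):
    cluster-announce-ip 192.168.1.10
    cluster-announce-port 7000
    cluster-announce-bus-port 17000
    EOF
    # Publish both the data port and the cluster bus port:
    docker run -d --name redis-7000 \
      -p 7000:7000 -p 17000:17000 \
      -v "$PWD/redis.conf:/usr/local/etc/redis/redis.conf" \
      redis:5 redis-server /usr/local/etc/redis/redis.conf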

Azure Container Services Port Load Balancer

While trying to port my application, which runs locally on Docker Swarm, to Azure Container Service, I am stuck on the Azure load balancer part.
Locally I have an HAProxy container instance running on the Swarm master and multiple web containers.
The web containers just expose their ports; they are not mapped to the machines on which they run.
The HAProxy container has its port mapped to the master and internally talks to my web containers for load balancing.
This gives me the leverage to run any number of containers with a limited number of workers in Docker Swarm.
In Azure Container Service I see that the Azure load balancer will only talk to ports that are mapped, which means I can either run only one container per agent or keep an internal load balancer in my containers, which implies that users go through two load balancers before hitting my application.
Not an ideal scenario when my application uses sticky sessions.
So apparently Microsoft's statement that "everything works the same in Azure containers" goes for a toss?
What solutions are available, or am I doing something wrong here?
Regards,
Harneet
The solution in ACS is almost identical: use HAProxy and have the Azure LB talk to that. The only difference is that you will not be running the proxy on the master; you will have Swarm deploy it to an agent for you.
You shouldn't really be running workloads on your masters. What would you do if you had a DDoS attack and couldn't reach your masters, for example? Having Swarm deploy the proxy for you also means that Swarm can monitor the health of the proxy.
You could, if you really wanted to, run the proxy on the master as you do now. The solution would be the same: have the Azure LB provide a public connection to the proxy, just as you currently do.
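A hedged sketch of that deployment, assuming the classic (pre swarm-mode) Swarm that ACS shipped, where scheduling constraints are passed as environment variables; the node and host names are illustrative:

    # Ask Swarm to schedule HAProxy on any node except the master; the
    # Azure LB then fronts the agent port that -p maps:
    docker -H tcp://swarm-master:2375 run -d --name proxy \
      -p 80:80 \
      -e constraint:node!=swarm-master \
      haproxy:1.7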

node agent in glassfish v4.1.1

I want to migrate an application from GlassFish 2.1 to GlassFish 4.1.1, but I am not able to create a node agent in GlassFish 4.1.1.
I already checked in the admin console and also tried from the command prompt with
command: create-node-agent na
OUTPUT: CLI194: Previously supported command: create-node-agent is not supported for this release. Command create-node-agent failed.
Does anyone have any idea how to create a node agent in GlassFish 4.1.1, or is there a replacement provided in GF v4.1.1?
GlassFish 3.x and higher no longer has a node agent. Administration works slightly differently; nodes are simply representations of the hosts where server instances reside. You can create a new SSH, DCOM or CONFIG node which governs how the DAS communicates with server instances on that node. The rest of the node configuration just identifies the IP address or hostname of the node.
If you create an SSH node (or DCOM node in Windows only), then you will be able to communicate with the server instances on the remote machine directly and start and stop them from the DAS.
If you create a CONFIG node, then the DAS has no way of communicating with server instances which are not running. When a server instance on a CONFIG node starts, it contacts the DAS to register itself as running, and then the DAS will be able to administer the instance over HTTP.
There is more information on how to do this in this Payara Server blog post. Payara Server is derived from GlassFish, so all these instructions are valid on GlassFish 4.x as well.
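As a concrete illustration of the SSH-node flow described above (host names, paths, and instance names are illustrative):

    # Run against the DAS: register the remote host as an SSH node,
    # create an instance on it, and start it remotely:
    asadmin create-node-ssh --nodehost remote.example.com --installdir /opt/glassfish4 node1
    asadmin create-instance --node node1 instance1
    asadmin start-instance instance1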

Remote Docker Host Authentication

Hi, I'm currently working on a side project in which I'll have a central server that needs to connect to several remote Docker daemons. My problem is with authentication.
Given that the project will be hosted on DigitalOcean, my first thought was to accept connections only from the private networking interface. The problem is that that interface is accessible by all other servers in the same datacenter.
My second thought was to allow only requests from the central server using the DOCKER_HOST config; the problem is that, if I understand correctly, the central server's private IP can be spoofed if it becomes known.
My third thought was to enable TLS ( https://docs.docker.com/articles/https/ ), but I've never dealt with these things before, and the tutorial is unclear to me; I lack the knowledge of the terminology it uses so heavily.
So basically the problem is that I have a central client and multiple remote Docker hosts; what is the best way to connect to them? Thank you.
EDIT: I managed to solve the problem using HTTP authentication, by running nginx as a proxy in front of the Docker daemon.
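A hypothetical reconstruction of that nginx setup, with illustrative ports and paths; basic auth terminates at nginx, which forwards to the local Docker daemon socket:

    cat > /etc/nginx/conf.d/docker-proxy.conf <<'EOF'
    server {
        listen 2376 ssl;
        ssl_certificate     /etc/nginx/ssl/server.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;
        location / {
            auth_basic           "Docker API";
            auth_basic_user_file /etc/nginx/.htpasswd;
            proxy_pass           http://unix:/var/run/docker.sock:/;
        }
    }
    EOF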
My understanding is that you are trying to build a Docker cluster which can manage all nodes from one single central server.
This is very much like Docker's Swarm project; from their docs, they give a simple idea of how this works:
open a TCP port on each node for communication with the swarm manager
install Docker on each node
create and manage TLS certificates to secure your swarm
Sorry, this should be posted as a comment, but I do not have enough rep to do that.
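For completeness, the TLS route from the linked Docker docs boils down to two commands once the certificates exist (ca.pem, server-cert.pem, server-key.pem on each host; cert.pem, key.pem on the central server); shown here with the current dockerd binary:

    # On each remote host, expose the daemon only over verified TLS:
    dockerd --tlsverify \
      --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \
      -H tcp://0.0.0.0:2376
    # From the central server, connect with the client certificate:
    docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
      -H tcp://remote-host:2376 info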

Pod to Pod connection using multiple ports

I have a Google Cloud Container Engine cluster with two Pods, master and slave. Each of them runs a RabbitMQ instance, and the two are supposed to be joined into one cluster.
Ports exposed from the containers aren't reachable from other machines; they can be accessed only through a Service. That's not a problem: I can establish a Service for each instance (one-to-one, service-to-pod) and point each Pod at the opposite Service IP.
The problem is that RabbitMQ uses more than one port for communication, which means the Service IP should expose all of these ports from the underlying Pod. But I cannot specify a list of shared ports for a Service, and if I create a new Service for each port, each of them will have its own IP.
Is there any way to expose a list of ports from the same container/Pod on the same internal IP address using a Container Engine cluster? Maybe some special routing configuration?
Your question is similar to this question, and unfortunately it has the same response: Kubernetes / Google Container Engine does not currently have a way to expose a range of ports for a Service. There is an open issue on GitHub to address this use case.
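For reference, current Kubernetes does let a single Service enumerate several named ports (still not a range), which is enough for RabbitMQ's fixed set of ports; a minimal sketch with illustrative names:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: rabbitmq-master          # illustrative
    spec:
      selector:
        app: rabbitmq-master         # must match the Pod's labels
      ports:
      - { name: amqp,       port: 5672,  targetPort: 5672 }
      - { name: epmd,       port: 4369,  targetPort: 4369 }
      - { name: clustering, port: 25672, targetPort: 25672 }
    EOF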