Node agent in GlassFish v4.1.1

I want to migrate an application from GlassFish 2.1 to GlassFish 4.1.1, but I am not able to create a node agent in GlassFish 4.1.1.
I have already checked the admin console and also tried from the command prompt with the
command: create-node-agent na
OUTPUT: CLI194: Previously supported command: create-node-agent is not supported for this release. Command create-node-agent failed.
Does anyone have any idea how to create a node agent in GlassFish 4.1.1, or is there a replacement provided in GF v4.1.1?

GlassFish 3.x and higher no longer has a node agent. Administration works slightly differently; nodes are simply representations of the hosts where server instances reside. You can create a new SSH, DCOM or CONFIG node which governs how the DAS communicates with server instances on that node. The rest of the node configuration just identifies the IP address or hostname of the node.
If you create an SSH node (or DCOM node in Windows only), then you will be able to communicate with the server instances on the remote machine directly and start and stop them from the DAS.
If you create a CONFIG node, then the DAS has no way of communicating with server instances which are not running. When a server instance on a CONFIG node starts, it contacts the DAS to register itself as running, and then the DAS will be able to administer the instance over HTTP.
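As a sketch of the replacement workflow, the asadmin subcommands look roughly like this (the host name, install directory, node and instance names below are illustrative):

```shell
# On the DAS: create an SSH node for the remote host
asadmin create-node-ssh --nodehost remote1.example.com --installdir /opt/glassfish4 remote-node

# ...or a CONFIG node if SSH is not available
asadmin create-node-config --nodehost remote1.example.com remote-node

# Create an instance on that node; for an SSH node it can
# also be started from the DAS
asadmin create-instance --node remote-node instance1
asadmin start-instance instance1

# On a CONFIG node, run this on the remote host instead
asadmin start-local-instance instance1
```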
There is more information on how to do this in this Payara Server blog post. Payara Server is derived from GlassFish, so all these instructions are valid on GlassFish 4.x as well.

Related

Script to start Weblogic servers and Managed servers

Can someone help me write a script to perform the steps below in WebLogic?
1. Stop Managed Servers
2. Stop Node Manager
3. Stop Admin Server
4. Delete the tmp and cache folders.
The steps you mentioned can be done with WLST and Node Manager. However, you need to make the following adjustments:
Configure the Node Manager and the WebLogic domain to stop using the demo SSL certificate when accessing/starting Node Manager.
Configure Node Manager
Edit nodemanager.properties and set the following:
SecureListener to false
QuitEnabled to true
Restart Node Manager
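The nodemanager.properties edits above can also be scripted; a minimal sketch using sed on a stand-in file (the real path varies with your domain layout):

```shell
# Stand-in nodemanager.properties for illustration; point NM_PROPS
# at the real file under your domain.
NM_PROPS=/tmp/nodemanager.properties
printf 'SecureListener=true\nQuitEnabled=false\n' > "$NM_PROPS"

# Disable the SSL listener and allow stopNodeManager() to quit it.
sed -i.bak -e 's/^SecureListener=.*/SecureListener=false/' \
           -e 's/^QuitEnabled=.*/QuitEnabled=true/' "$NM_PROPS"

cat "$NM_PROPS"
```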
Configure WebLogic Domain
Login to WebLogic Domain
Under Environment, Machines: click the Machine name configured
Under Configuration, Node Manager: set Type to Plain and save
Restart WebLogic Domain (Admin Server + Managed Servers)
Configure WebLogic Domain's Node Manager Credentials. The default is usually the username/password you entered when creating the WebLogic Domain. However, it is also a good idea to set different credentials for the Node Manager. This is totally optional, especially when working in a development environment.
Login to WebLogic Domain
Under Domain Structure, click the Weblogic Domain name
Under Security, General: click Advanced
Set the NodeManager Username and NodeManager Password/Confirm NodeManager Password and click Save
For this answer, I will use nodemanager/nodemanager_pwd as sample values.
Assuming you have one Admin Server and one Managed Server, both on the same machine, put the following commands in a Python script (startup.py):
# Connect to the Node Manager running on localhost with port 5556.
# Change the DOMAIN_NAME and the DOMAIN_HOME as appropriate
nmConnect('nodemanager','nodemanager_pwd','localhost','5556','DOMAIN_NAME','DOMAIN_HOME','PLAIN')
# Start the Admin Server.
# The following command assumes that the
# name of the Admin Server is AdminServer
nmServerStart('AdminServer')
# Start the Managed Server. Again, change the Managed Server name as appropriate
nmServerStart('Managed_Server_01')
To stop the Managed Server and the Admin Server, reverse the sequence and use the nmKill command instead. stopNodeManager() is only possible if the QuitEnabled property was set to true in the nodemanager.properties file. Put these commands in a second script (shutdown.py):
nmConnect('nodemanager','nodemanager_pwd','localhost','5556','DOMAIN_NAME','DOMAIN_HOME','PLAIN')
nmKill('Managed_Server_01')
nmKill('AdminServer')
stopNodeManager()
To run the Python scripts containing the commands above, invoke WLST with each of them:
$MW_HOME/oracle_common/common/bin/wlst.sh startup.py
$MW_HOME/oracle_common/common/bin/wlst.sh shutdown.py
As for the clearing of the tmp/cache folders, this can all be done via a shell script (assuming you're running on Linux).
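A minimal sketch of that cleanup, using a stand-in domain layout (point DOMAIN_HOME at your real domain and run it only after the servers are down):

```shell
# Stand-in domain layout for illustration; set DOMAIN_HOME to your real domain.
DOMAIN_HOME=/tmp/demo_domain
mkdir -p "$DOMAIN_HOME/servers/AdminServer/tmp" \
         "$DOMAIN_HOME/servers/Managed_Server_01/cache"

# Remove the tmp and cache folders of every server in the domain.
for server in "$DOMAIN_HOME"/servers/*/; do
    rm -rf "${server}tmp" "${server}cache"
done
```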

GlassFish 3.1 remote instance can't connect to database

In GlassFish 3.1 I have two instances on two SSH nodes, and they work fine in a cluster. I created a third SSH node and added its instance to the cluster, so the cluster now has three instances on three remote SSH nodes.
The web service runs on the third node, but it can't connect to the database. I believe the new instance has the same connectors, configuration and resources as the other two, since it was added to the cluster and they all share the same cluster config.
I am new to GlassFish, please help me out.
Thanks
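One way to check the assumption that the new instance really sees the same JDBC resources is to list and ping them from the DAS; a diagnostic sketch (the cluster, instance and pool names below are illustrative):

```shell
# Compare the JDBC resources visible to the cluster and to the new instance.
asadmin list-jdbc-resources --target myCluster
asadmin list-jdbc-resources --target instance3

# Verify database connectivity through the pool from the DAS.
asadmin ping-connection-pool myPool
```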

Connect OpsCenter and DataStax agent running in two Docker containers

There are two containers running on two physical machines: one container for OpsCenter and the other for DataStax Cassandra plus the OpsCenter agent. I have manually installed the OpsCenter agent on each Cassandra container. This setup is working fine.
But OpsCenter cannot upgrade the nodes because the SSH connections to them fail. Is there any way to create an SSH connection between those two containers?
In Docker you should NOT run SSH; read HERE why. If, after reading that, you still want to run SSH, you can, but it is not the same as running it on Linux/Unix. This article has several options.
If you still want to SSH into your container, read THIS and follow the instructions. It will install OpenSSH. You then configure it and generate an SSH key that you will copy/paste into the DataStax OpsCenter agent upgrade dialog box when prompted for security credentials.
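Once OpenSSH is running in the agent container, the key setup might look like this (user and container host names are illustrative):

```shell
# In the OpsCenter container: generate a key pair without a passphrase.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ''

# Copy the public key to the agent container's SSH server,
# then paste the private key into the OpsCenter upgrade dialog.
ssh-copy-id -i ~/.ssh/id_rsa.pub root@agent-container
```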
Lastly, upgrading the agent is as simple as moving the latest agent JAR (or whichever version of the agent JAR you want to run) into the DataStax agent bin directory. You can do that manually and redeploy your container, which is much simpler than using SSH.
Hope that helps,
Pat

ATG Commerce WebLogic clustering

I am trying to set up a WebLogic cluster running ATG Commerce. I have installed WebLogic on machine A, with the following configuration.
WebLogic Admin Server
Managed servers: Production_A, Production_B, Publishing_A and Publishing_B, which
shall run on machines A and B accordingly.
Do I have to install weblogic on machine B also?
(During the installation of ATG Commerce it asks which application server is ours. I mean to say that the production and publishing servers shall run inside the WebLogic server of machine A.)
Secondly, do I actually have to specify the managed servers in WebLogic while installing it on machine A? I mean to say that while installing ATG Commerce on machine B, during CIM configuration, I specify the WebLogic of machine A, and I create Production_B and Publishing_B to run inside A's WebLogic.
I am confused whether I'm doing it right.
The managed servers Production_B and Publishing_B appear in A's WebLogic after I do the CIM configuration on B. Then I set up the machines (A and B in A's WebLogic), add servers to machines, and add servers to the cluster. Everything happens as expected. But when I try to run B's servers from A's WebLogic, the servers do not start, saying the Node Manager is unreachable. In machine A, I have set the Node Manager for machine B to machine A. (I need to ask: will there be a separate Node Manager for machine B's servers, or will they run under A's WebLogic Node Manager, which Production_A and Publishing_A run with?) Machine A is reachable from machine B (I can open the WebLogic console of machine A on machine B). Am I missing anything?
Can anyone direct me to a reference/blog for WebLogic clustering in ATG Commerce?
Firstly, you will need to install WebLogic on every physical machine on which you intend to deploy your EAR. It is the servlet container you'll be using, so without it (and its obvious dependencies, like Java) you won't be able to run your deployments on that machine.
As far as your ATG instances are concerned, I would do it as follow:
Create Commerce A and Publishing A on Server A (using CIM). Something you are missing, though, is that you have no LockManager configured. You'll need at least one of these to maintain your locks across your commerce instances, and possibly another to do the same across your publishing instances (I've never deployed a clustered publishing environment, only ever one BCC per environment).
Having done the configuration on Server A, manually copy the ATG-Data/servers folder (or /servers folder) from Server A to Server B. Because you don't install ATG on every machine (in fact ATG doesn't need to be installed at all if you create your EAR in standalone mode), I would recommend you set up an ATG-Data folder on both Server A and Server B and deploy your configs in there.
Now, having copied the servers folder you will need to manually edit the following files:
Configuration.properties
This probably contains references to Server A that you want to update.
The ports on Server B can be the same (per instance) as they are on Server A
Update the otherLockServer property in the LockManager instance (if you created it) for both Server A and Server B to reference the 'other lock server'
Update the ClientLockManager to point at both LockManagers
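The lock-manager wiring above might look like this in each instance's local config layer; I'm assuming the standard ATG lock manager component paths, and the hostnames and ports are illustrative:

```
# <ATG-Data>/localconfig/atg/dynamo/service/ServerLockManager.properties on Server A
port=9010
otherLockServerAddress=serverB.example.com
otherLockServerPort=9010

# <ATG-Data>/localconfig/atg/dynamo/service/ClientLockManager.properties on both servers
useLockServer=true
lockServerAddress=serverA.example.com,serverB.example.com
lockServerPort=9010,9010
```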
From a WebLogic point of view you need to create the servers on each instance as well. Even within a WebLogic cluster (which is licensed separately by Oracle and not included in your ATG license) you need an AdminServer per WebLogic installation. I believe the NodeManager configuration would be different, but I've not set this up in a WebLogic cluster yet.
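For the "Node Manager is unreachable" part of the question, each physical machine normally runs its own Node Manager, enrolled with the domain; a hedged sketch of doing that on machine B with WLST (the credentials, URL and paths are illustrative):

```shell
# On machine B: start the local Node Manager, then enroll it
# with the domain's Admin Server running on machine A.
$DOMAIN_HOME/bin/startNodeManager.sh &

$MW_HOME/oracle_common/common/bin/wlst.sh <<'EOF'
connect('weblogic', 'welcome1', 't3://machineA:7001')
nmEnroll('/u01/domains/atg_domain', '/u01/domains/atg_domain/nodemanager')
EOF
```

With Node Manager enrolled and running on machine B, starting Production_B and Publishing_B from A's admin console should no longer fail with "Node Manager is unreachable".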

IntelliJ IDEA 11 fails to connect to JBoss 7.1.1 remotely

I am attempting to deploy a simple exploded WAR application from IDEA 11.1.3 to a remote (same machine) JBoss 7.1.1. The WAR builds fine and starts to deploy, but I keep getting the following message on the attempted deploy:
Error running JBoss 7.1.1.Final: Unable to connect to the
localhost:8080
I am using IntelliJ's default JBoss 7 Remote configuration, which I've used successfully in the past. I can hit localhost:8080 directly with any browser, so I know it's responding. Ideas?
I've encountered the same problem and found a solution. Maybe this problem appeared because I had just switched from Eclipse, so I think it can be helpful to someone.
For correct remote debugging of JBoss we need to specify 3 ports:
1) The HTTP port (the 'Port' field of the 'Remote Connection Settings' section) is used to ping the JBoss server (periodically checking whether the server is alive) and to produce URLs addressing resources on the server - FIRST TAB
2) The native management port (the 'Management port' field of the 'JBoss Server Settings' section) is used to connect to the JBoss management interface, to check whether server startup has finished, and to deploy artifacts - FIRST TAB
3) The remote socket (debug) port - 8787 by default on JBoss - LAST TAB
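For port 3) above, JBoss itself must also be started with the JDWP agent listening; in JBoss AS 7 this is the debug line (commented out by default) in bin/standalone.conf:

```shell
# JBOSS_HOME/bin/standalone.conf - enable remote debugging on port 8787
JAVA_OPTS="$JAVA_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n"
```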
After this, everything will work fine.
It was a little confusing for me, because when I ran JBoss under sudo, remote debugging worked just fine.