Overriding the way JMX works in a Docker WLS container - weblogic

I have a WebLogic Docker container. The WLS admin port is configured at 7001. When I run the container, I use --hostname=[host's hostname] and expose the 7001 port at a different host port, using -p 8001:7001 for example. The reason I do the port mapping is that I want to run multiple WLS containers on the same host.
I have some applications that I deploy on this WebLogic. These applications use an external SDK (which I don't control) to get the application url using JMX (getURL operation of RuntimeServiceMBean).
This is where it goes wrong. The URL comes out as http://[container's IP]:7001. I would want it to retrieve http://[host's hostname]:8001, i.e. the hostname I used to start the container and the host port to which 7001 is mapped, 8001.
Is there a way this could be done?

When the container is started, you should start WebLogic only after adjusting the External Listen Address of your AdminServer. You can use WLST Offline for that from within a shell script, passing parameters with docker run -e KEY=VALUE and then reading them from inside the WLST script. Modify your AdminServer External Listen Address, exit(), and then you can start the AdminServer.
Here's an example of how to create an extra Network Channel with the proper External Listen Address.
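Since the original example isn't included here, below is a minimal WLST Offline sketch of the idea. EXTERNAL_HOST and EXTERNAL_PORT are hypothetical names you would pass in with docker run -e, and the domain path and channel name are placeholders; adjust them to your image.
# configure_channel.py - run with wlst.sh before starting the AdminServer
import os
# hypothetical env vars from: docker run -e EXTERNAL_HOST=myhost -e EXTERNAL_PORT=8001
external_host = os.environ['EXTERNAL_HOST']
external_port = int(os.environ['EXTERNAL_PORT'])
# open the domain offline (adjust the path to your domain home)
readDomain('/u01/oracle/user_projects/domains/base_domain')
# create a network channel on the AdminServer and set its public (external) address
cd('/Servers/AdminServer')
create('PublicChannel', 'NetworkAccessPoint')
cd('/Servers/AdminServer/NetworkAccessPoints/PublicChannel')
set('ListenPort', 7001)              # port inside the container
set('PublicAddress', external_host)  # what clients resolving the URL should see
set('PublicPort', external_port)     # host port that -p 8001:7001 maps to
updateDomain()
closeDomain()
exit()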

Related

Running an apache container on a port > 1024

I've built a docker image based on httpd:2.4. In my k8s deployment I've defined the following securityContext:
securityContext:
  privileged: false
  runAsNonRoot: true
  runAsUser: 431
  allowPrivilegeEscalation: false
In order to get this container to run properly as non-root, Apache needs to be configured to bind to a port > 1024, as opposed to the default 80. As far as I can tell this means editing Listen 80 in httpd.conf to Listen {some port > 1024}.
When I want to run the docker image I've built normally (i.e. on the default port 80), I have the following port settings:
deployment
spec.template.spec.containers[0].ports[0].containerPort: 80
service
spec.ports[0].targetPort: 80
spec.ports[0].port: 8080
ingress
spec.rules[0].http.paths[0].backend.servicePort: 8080
Given these settings, the service becomes accessible at the host URL provided in the ingress manifest. Again, this is without the changes to httpd.conf. When I make those changes (using Listen 8000) and add in the securityContext section to the deployment, I change the various manifests accordingly:
deployment
spec.template.spec.containers[0].ports[0].containerPort: 8000
service
spec.ports[0].targetPort: 8000
spec.ports[0].port: 8080
ingress
spec.rules[0].http.paths[0].backend.servicePort: 8080
Yet for some reason, when I try to access a URL that should be working, I get a 502 Bad Gateway error. Have I set the ports correctly? Is there something else I need to do?
Check if the pod is running
kubectl get pods
kubectl logs <pod_name>
Check if the URL is accessible within the pod
kubectl exec -it <pod_name> -- bash
$ curl http://localhost:8000
If the above didn't work, check your httpd.conf.
Check with the service name
kubectl exec -it <ingress_pod_name> -- bash
$ curl http://<service_name>:8080
You can check the ingress logs too.
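If it still returns 502, it's also worth checking that the Service actually selects the pod and exposes the right port; <service_name> here is a placeholder:
kubectl get endpoints <service_name>
kubectl describe service <service_name>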
In order to get this container to run properly as non-root apache needs to be configured to bind to a port > 1024, as opposed to the default 80
You got it, that's the hard requirement for making the Apache container run as non-root, and therefore this change needs to be made at the container level, not in Kubernetes abstractions like the Deployment's Pod spec or the Service/Ingress resource definitions. So the only thing left in your case is to build a custom httpd image with a listening port > 1024, as in the sketch below. The same approach applies to the NGINX Docker containers.
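As a rough sketch (not the only way to do it), a custom image based on the official httpd:2.4 image can patch the listen port at build time. Port 8000 here just mirrors the question, and depending on the base image you may need extra permission fixes (e.g. for the PID file):
FROM httpd:2.4
# rewrite the default Listen directive so httpd can bind without root privileges
RUN sed -i 's/^Listen 80$/Listen 8000/' /usr/local/apache2/conf/httpd.conf
EXPOSE 8000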
One key piece of information about the containerPort field in the Pod spec, which you are trying to adjust manually, is not so apparent: it is there primarily for informational purposes and does not cause a port to be opened at the container level. According to the Kubernetes API reference:
Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
I hope this helps you to move on.

browse postgres in a docker container

I am using docker-compose to work across multiple Docker containers; these are mostly individual Django REST framework applications. I have downloaded all the containers and am able to build the whole application using them.
Each container has a Postgres DB running. I now want to browse the DB using a UI tool. I know pgAdmin can do the job here, but how can I configure my pgAdmin to show the Postgres database from any of these containers?
It should be possible to expose your database port to your local network as well.
Normally you connect your application containers to the database container internally. In that case there's no need to declare a ports section for the database in your compose file, but if you add that entry you bind your database to your local host in addition.
Once you have exposed the Postgres port to a host port, it should be no problem to connect with the GUI tool of your choice.
version: '3.2'
services:
  httpd:
    image: "oth/d_apache2.4:0.2"
    ports:
      # container port 80 of the webserver to localhost 80
      - "80:80"
  keycloak:
    # keycloak uses keycloak_db
    image: "jboss/keycloak-postgres:3.2.1.Final"
    environment:
      # internal network reference to the db container
      - POSTGRES_PORT_5432_TCP_ADDR=keycloak_db
      - POSTGRES_PORT_5432_TCP_PORT=5432
  keycloak_db:
    image: "postgres:alpine"
    ports:
      # container port 5432 to localhost 5432
      # (the port also remains available inside the stack)
      - "5432:5432"
Make sure that the port of the Postgres container is mapped to the host system. The default Postgres port is 5432. You can do that with the ports directive in your docker-compose.yml. Each host port can only be mapped once, so your config file would look like:
services:
  postgres_1:
    ports:
      - "49000:5432"
    [...]
  postgres_2:
    ports:
      - "49001:5432"
    [...]
After that you should be able to access the desired database using the IP of your Docker host and the port specified above.
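For example, with the first mapping above, you could test the connection from the host with psql; the user and database names here are just the common defaults and depend on how your container was started:
psql -h localhost -p 49000 -U postgres -d postgres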
If you still encounter problems connecting with a client like pgAdmin, check the following configuration files inside your container. Is there anything blocking your connection attempt? Is your Docker host behind a firewall?
postgresql.conf, under the section "Connections and Authentication":
listen_addresses
port
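For reference, those two settings typically look like the following; the official Postgres image already ships with listen_addresses = '*', so usually nothing needs to change here:
listen_addresses = '*'    # interfaces Postgres binds to; '*' means all
port = 5432               # port Postgres listens on inside the container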
Check your pg_hba.conf, which controls client authentication.
For debugging purposes you can set it to the following (but don't do this in production):
host all all all trust

Weblogic NodeManager Start Failure

I've set up a WebLogic cluster with 2 managed servers. In order to configure Node Manager on both nodes, I've followed the related article:
http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/wls/12c/12_2_1/01-12-001-ConfiguringandUsingNodeManager/Configuring_and_Using_NM.html
with the following configuration:
Machine-0 :
DomainsFile=/u01/app/oracle/config/domains/base_domain/Machine-0/nodemanager.domains
LogLimit=0
PropertiesVersion=12.1.3
AuthenticationEnabled=true
NodeManagerHome=/u01/app/oracle/config/domains/base_domain/Machine-0
JavaHome=/opt/jdk1.8.0_131
LogLevel=INFO
DomainsFileEnabled=true
StartScriptName=startWebLogic.sh
ListenAddress=localhost
NativeVersionEnabled=true
ListenPort=5558
LogToStderr=true
SecureListener=false
LogCount=1
StopScriptEnabled=false
QuitEnabled=false
LogAppend=true
StateCheckInterval=500
CrashRecoveryEnabled=false
StartScriptEnabled=true
LogFile=/u01/app/oracle/config/domains/base_domain/Machine-0/nodemanager.log
LogFormatter=weblogic.nodemanager.server.LogFormatter
ListenBacklog=50
Machine-1 (the second managed server) has the same configuration, with the exception of the port (5557) and the name.
Although Node Manager is successfully started on both machines (startNodeManager.sh on Machine-0 and Machine-1), from the admin console on Machine-0 the following error occurs and Node Manager doesn't start:
weblogic.nodemanager.NMConnectException
The nodemanager.log on Machine-0 has no indication of errors or anything else helpful.
Any help would be appreciated.
Thanks in advance.
These are the things that I usually check when I am setting up a new WebLogic domain:
It is possible that the Listen Address of Machine-1 is not correct. Check the Listen Address of the machine in the WebLogic domain configuration; it should match the host's machine name. Using localhost might not work, because the Admin Server is trying to connect to Machine-1, which may be on another server.
Make sure to check if the port is reachable from the Admin Server's machine (see the quick test after this list).
Check that the Node Manager connection type is Plain rather than SSL, matching SecureListener=false in your nodemanager.properties file. Under Environments > Machines, click the machine and go to the Configuration tab, Node Manager. Check that the Type is Plain and not SSL. Changing this requires a restart of the Admin Server.
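For the port reachability check above, a quick test from the Admin Server's machine could look like this, assuming nc is installed and using a placeholder hostname:
nc -vz machine-1.example.com 5557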
Please verify the items below before you start nodemanager.
Check if the nodemanager.domains has your domain name listed.
Try to see if the ports are listening using the commands below.
netstat -an|grep 5557
netstat -an|grep 5558
Also, check whether the Node Manager shows as reachable in the WebLogic console.

How to run glassfish 4 on port 80 instead of 8080? root access is not an issue

I did some googling and the suggested solution was to redirect using iptables or a mod in Apache. Since my application uses WebSockets, that solution breaks my WebSocket connectivity and I again have to connect to my WebSockets using port 8080. Is there any way I can run GlassFish itself on port 80, so that my WebSockets also run on port 80, making it easier for users behind corporate firewalls to access the app, since corporates may block 8080?
I have root access as well.
To run GlassFish on port 80 you need to :
Connect to the administration interface (by default on port :4848)
In the left menu go to Configurations
Then select the appropriate configuration you need to change, e.g. server-config
Then go to Network Config
Then go to Network Listeners
Select the appropriate listener, probably http-listener-1
Change the Port value to 80
Save and reboot your GlassFish server/instance/cluster according to your needs
Using the command line utility
asadmin set configs.config.server-config.network-config.network-listeners.network-listener.http-listener-1.port=80
You may need to replace server-config and/or http-listener-1; see below for a way to list them.
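If you're unsure of the listener names, asadmin get accepts wildcard patterns, so you can list all listener ports first:
asadmin get 'configs.config.server-config.network-config.network-listeners.network-listener.*.port'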
Go to the glassfish4\glassfish\domains\domain1\config folder, open the domain.xml file there,
and find the tag
<network-listeners>
  <network-listener port="9999" protocol="http-listener-1" transport="tcp" name="http-listener-1" thread-pool="http-thread-pool"></network-listener>
</network-listeners>
In the port attribute of the <network-listener> element you can specify whichever port you want.
Here's another approach.
You can go to the admin console under port 4848 (I am using Glassfish 4.1.2) and navigate to "Configuration" > "server-config" > "HTTP Service" > "Http Listeners" > "http-listener-1" in the left hand navigation.
Click on the "http-listener-1" link in the main content window.
Change the port to desired number and save.
Restart Glassfish and run your application.
In some cases you have to change the port before GlassFish is started (in my case port 8080 was already in use by another instance), so answer 4 worked for me.
Following are simple steps to change the port number of Glassfish server (GlassFish runs by default on port number 8080):
Go to the folder where Glassfish is installed.
Locate config folder which is as follows (Windows):
C:\Program Files\glassfish-3.0.1\glassfish\domains\domain1\config
Open domain.xml using any text editor.
Look for 8080 and change it to some other port number that doesn’t conflict with other port numbers (e.g. 8081).
Save domain.xml.
Additional step if necessary:
Now remove GlassFish from IDE and add it again so that IDE understands the new port number.
Restart GlassFish, if it was already running.

JMeter with remote servers

I'm trying to set up JMeter in distributed mode.
I have a server running on an EC2 instance, and I want the master to run on my local computer.
I had to jump through some hoops to get RMI working correctly on the server, but that was solved by setting java.rmi.server.hostname to the IP of the EC2 instance.
The next (and hopefully last) problem is the server communicating back to the master.
The problem is that, because I am doing this from an internal network, the master is sending its local/internal IP address (192.168.1.XXX) when it should be sending back the IP of my external connection (92.XXX.XXX.XXX).
I can see this in the jmeter-server.log:
ERROR - jmeter.samplers.RemoteListenerWrapper: testStarted(host) java.rmi.ConnectException: Connection refused to host: 192.168.1.50; nested exception is:
That host IP is wrong. It should be the 92.XXX.XXX.XX address. I assume this is because in the master logs I see the following:
2012/07/29 20:45:25 INFO - jmeter.JMeter: IP: 192.168.1.50 Name: XXXXXX.local FullName: 192.168.1.50
And this IP is sent to the server during RMI setup.
So I think I have two options:
Tell the master to send the external IP
Tell the server to connect on the external IP of the master.
But I can't see where to set these commands.
Any help would be useful.
For the benefit of future readers, don't take no for an answer. It is possible! Plus you can keep your firewall in place.
In this case, I did everything over port 4000.
How to connect a JMeter client and server for distributed testing with Amazon EC2 instance and local dev machine across different networks.
Setup:
JMeter 2.13 Client: local dev computer (different network)
JMeter 2.13 Server: Amazon EC2 instance
I configured distributed client / server JMeter connectivity as follows:
1. Added a port forwarding rule on my firewall/router:
Port: 4000
Destination: JMeter client private IP address on the LAN.
2. Configured the "Security Group" settings on the EC2 instance:
Type: Allow: Inbound
Port: 4000
Source: JMeter client public IP address (my dev computer/network public IP)
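For reference, the equivalent inbound rule via the AWS CLI would look roughly like this; the security group ID and client IP are placeholders:
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 4000 \
  --cidr <your-public-ip>/32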
Update: If you already have SSH connectivity, you could use an SSH tunnel for the connection instead, which avoids needing to add the firewall rules:
$ ssh -i ~/.ssh/54-179-XXX-XXX.pem -o ServerAliveInterval=60 -R 4000:localhost:4000 jmeter@54.179.XXX.XXX
3. Configured client $JMETER_HOME/bin/jmeter.properties file RMI section:
Note: only the non-default values that I changed are included here.
#---------------------------------------------------------------------------
# Remote hosts and RMI configuration
#---------------------------------------------------------------------------
# Remote Hosts - comma delimited
# Add EC2 JMeter server public IP address:Port combo
remote_hosts=127.0.0.1,54.179.XXX.XXX:4000
# RMI port to be used by the server (must start rmiregistry with same port)
server_port=4000
# Parameter that controls the RMI port used by the RemoteSampleListenerImpl (The Controller)
# Default value is 0 which means port is randomly assigned
# You may need to open Firewall port on the Controller machine
client.rmi.localport=4000
# To change the default port (1099) used to access the server:
server.rmi.port=4000
# To use a specific port for the JMeter server engine, define
# the following property before starting the server:
server.rmi.localport=4000
4. Configured remote server $JMETER_HOME/bin/jmeter.properties file RMI section as follows:
#---------------------------------------------------------------------------
# Remote hosts and RMI configuration
#---------------------------------------------------------------------------
# RMI port to be used by the server (must start rmiregistry with same port)
server_port=4000
# Parameter that controls the RMI port used by the RemoteSampleListenerImpl (The Controller)
# Default value is 0 which means port is randomly assigned
# You may need to open Firewall port on the Controller machine
client.rmi.localport=4000
# To use a specific port for the JMeter server engine, define
# the following property before starting the server:
server.rmi.localport=4000
5. Started the JMeter server/slave with:
jmeter-server -Djava.rmi.server.hostname=54.179.XXX.XXX
where 54.179.XXX.XXX is the public IP address of the EC2 server
6. Started the JMeter client/master with:
jmeter -Djava.rmi.server.hostname=121.73.XXX.XXX
where 121.73.XXX.XXX is the public IP address of my client computer.
7. Ran a JMeter test suite.
[Screenshot: JMeter GUI log output]
Success!
I had a similar problem: the JMeter server tried to connect to the wrong address for sending the results of the test (it tried to connect to localhost).
I solved this by setting the following parameter when starting the JMeter master:
-Djava.rmi.server.hostname=xx.xx.xx.xx
It looks as though this won't work. Distributed JMeter Testing explains the requirements for load testing in a distributed environment; numbers 2 and 3 below are particular to your use case, I believe.
The firewalls on the systems are turned off.
All the clients are on the same subnet.
The server is in the same subnet, if 192.x.x.x or 10.x.x.x ip addresses are used.
Make sure JMeter can access the server.
Make sure you use the same version of JMeter on all the systems. Mixing versions may not work correctly.
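To confirm the versions match, each machine can print its installed JMeter version (the --version flag works on recent JMeter releases):
jmeter --version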
I might be very late to the game, but still: I'm running this with JMeter 5.3.
Here is how to get it working with the slaves set up in AWS and the controller on your local machine.
Make sure your slave has the proper local ports and hostname. The hostname on the slave should be the EC2 instance's public DNS.
Make sure AWS has the proper security policies.
For the controller (which is your local machine), make sure you run with the parameter -Djava.rmi.server.hostname=<your public IP>. You can get the IP by googling "my public ip address". It's definitely not one of those 192.xxx.xxx.x or 172.xx.xxx addresses.
Then you have to configure your modem/router to port-forward to the machine being used as your controller. The port can be obtained from the slave log (the lines with FINE: RMI RenewClean...; yes, you have to set the log level to verbose). Or set up a DMZ and put your controller machine in it. Dangerous, but convenient just for the duration of testing; don't forget to turn it off afterwards.
Then it should work.