I have two WLST queries that I execute through the WebLogic Scripting Tool console. The queries are:
1) List of deployed applications and their status:
connect('weblogic','password','t3://localhost:7001')
cd('AppDeployments')
deploymentsList = cmo.getAppDeployments()
for app in deploymentsList:
    domainConfig()
    cd('/AppDeployments/' + app.getName() + '/Targets')
    mytargets = ls(returnMap='true')
    domainRuntime()
    cd('AppRuntimeStateRuntime')
    cd('AppRuntimeStateRuntime')
    for targetinst in mytargets:
        curstate4 = cmo.getCurrentState(app.getName(), targetinst)
        print app.getApplicationName(), targetinst, curstate4
Output example:
WeblogicApp Cluster1 STATE_ACTIVE
DMS Application AdminServer STATE_ACTIVE
Benefits Cluster2 STATE_ACTIVE
2) List of hosts/machines:
connect('weblogic','password','t3://localhost:7001')
svrs = cmo.getServers()
domainRuntime()
for host in svrs:
    machine = host.getMachine()
    print "Host: " + machine.getName()
Output example:
Host: 192.168.200.1
Host: 192.168.200.2
Host: 192.168.200.3
Host: Machine-0
Host: Machine-1
Host: Machine-2
I need to combine both pieces of information: each application together with its host, or hosts if it is targeted to more than one. I don't know how to merge the two queries into a single one, or at least how to relate the deployed applications to their hosts in the second query.
The output I need looks something like this:
WeblogicApp Cluster1 STATE_ACTIVE 192.168.200.2
WeblogicApp Cluster1 STATE_ACTIVE 192.168.200.3
DMS Application AdminServer STATE_ACTIVE 192.168.200.1
DMS Application AdminServer STATE_ACTIVE Machine-1
DMS Application AdminServer STATE_ACTIVE Machine-2
Benefits Cluster1 STATE_ACTIVE Machine-0
..............
Thanks in advance.
A little late to the party, but if anybody else comes by searching for answers, here is an extension of the first script that gives the desired result:
connect('weblogic','password','t3://localhost:7001')
setShowLSResult(false)
cd('AppDeployments')
deploymentsList = cmo.getAppDeployments()
domainConfig()
for app in deploymentsList:
    cd('/AppDeployments/' + app.getName() + '/Targets')
    mytargets = ls(returnMap='true')
    for targetinst in mytargets:
        domainRuntime()
        cd('AppRuntimeStateRuntime')
        cd('AppRuntimeStateRuntime')
        curstate4 = cmo.getCurrentState(app.getName(), targetinst)
        domainConfig()
        cd('/AppDeployments/' + app.getName() + '/Targets/' + targetinst)
        myType = cmo.getType()
        if myType == 'Cluster':
            cd('/AppDeployments/' + app.getName() + '/Targets/' + targetinst + '/Servers')
            myServers = ls(returnMap='true')
            for server in myServers:
                cd('/AppDeployments/' + app.getName() + '/Targets/' + targetinst + '/Servers/' + server)
                machineName = cmo.getMachine().getName()
                print app.getApplicationName(), targetinst, curstate4, machineName
        elif myType == 'Server':
            cd('/AppDeployments/' + app.getName() + '/Targets/' + targetinst)
            machineName = cmo.getMachine().getName()
            print app.getApplicationName(), targetinst, curstate4, machineName
The output will be similar to the output stated in the original question.
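One caveat, in case someone copies this into a domain where some servers are not assigned to a machine (quite common for the AdminServer): ServerMBean.getMachine() then returns null, and the script above fails on getName(). A minimal sketch of a guard for that case -- the 'no-machine' label is just an illustrative placeholder:
machine = cmo.getMachine()
if machine is not None:
    machineName = machine.getName()
else:
    # server has no machine assigned; print a placeholder instead of failing
    machineName = 'no-machine'
print app.getApplicationName(), targetinst, curstate4, machineName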
I have a problem with my ECS logs (awslogs driver): they are not working as expected. In CloudWatch I only see the server startup logs, not the useful logs from Apache (/var/log/apache2/error.log and /var/log/apache2/access.log).
I have a Docker multi-container setup with one container running the Apache server and the other container running PHP-FPM. My container logs in CloudWatch look like this:
Apache-Container:
23:35:39 *** Running /etc/my_init.d/02_init.sh...
23:35:39 Starting Apache
23:35:39 * Starting Apache httpd web server apache2
23:35:39 /usr/sbin/apache2ctl: 87: ulimit: error setting limit (Operation not permitted)
23:35:39 Setting ulimit failed. See README.Debian for more information.
23:35:40 *** Running /etc/rc.local...
23:35:40 *** Booting runit daemon...
23:35:40 *** Runit started as PID 225
23:35:40 Oct 25 22:35:40 apache-container syslog-ng[231]: syslog-ng starting up; version='3.5.6'
2019-10-26 00:17:01
Oct 25 23:17:01 apache-container CRON[947]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
...
07:35:16 tail: '/var/log/syslog' has been replaced; following new file
...
FPM-Container:
...
10:25:23 172.x.x.x - 27/Okt/2019:09:25:23 +0000 "GET /app.php" 200
10:25:25 172.x.x.x - 27/Okt/2019:09:25:24 +0000 "GET /app.php" 200
...
I've checked various forums and online resources. As I understand it, I just need to symlink my logs to STDOUT/STDERR, or even better to /proc/self/fd/1 and /proc/self/fd/2, like this:
ln -sf /dev/stdout /var/log/apache2/access.log
ln -sf /dev/stderr /var/log/apache2/error.log
I've tried to link the logs in my Dockerfile via the RUN command and also at runtime, but with no success. I can see that my logs show up correctly in the log files before linking them. I've also tried things like echo "test stderr logs" >> /dev/stderr or echo "test stdout logs" >> /dev/stdout inside and outside the containers, but nothing shows up in the CloudWatch logs. When I try docker logs MY_DOCKER_CONTAINER_ID I get: Error response from daemon: configured logging driver does not support reading.
Maybe I'm missing some basic knowledge here. I see that syslog is in my environment/base image (maybe I need to merge the syslog and Apache logs?) and that the PHP-FPM container is logging 200s, but only for app.php, even though I would like to know the exact path of the accessed URL.
You need to specify, in the docker-compose file used by ECS, that it should use the CloudWatch logging driver, like so:
version: '2'
services:
  myapp:
    build:
      context: .
    logging:
      driver: awslogs
      options:
        awslogs-group: "/my/log/group"
        awslogs-region: "us-west-2"
        awslogs-stream-prefix: some-prefix
This should cause /dev/stdout and /dev/stderr to appear in CloudWatch. You can find more information on the logging driver options on the Docker page.
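If you register the task definition yourself instead of generating it from docker-compose, the equivalent setting lives in each container definition's logConfiguration block -- a sketch, reusing the same example log group, region and prefix as above:
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/my/log/group",
        "awslogs-region": "us-west-2",
        "awslogs-stream-prefix": "some-prefix"
    }
}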
Hey, thanks for the responses. If I remember it right, the problem was that all of my output was redirected to syslog and there was a misconfiguration in my syslog config.
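For anyone else who lands here: the detail that made the symlink trick click for me is that only the stdout/stderr of the container's main process (PID 1) reaches the logging driver, which is also why echoing to /dev/stdout from docker exec never shows up in CloudWatch. A rough sketch of what I mean (Debian apache2 paths, adjust as needed):
# inside the container: see where PID 1's stdout/stderr actually point
ls -l /proc/1/fd/1 /proc/1/fd/2

# link the Apache logs to PID 1's descriptors instead of /dev/stdout, so they
# still reach the awslogs driver even when an init system (runit) wraps Apache
ln -sf /proc/1/fd/1 /var/log/apache2/access.log
ln -sf /proc/1/fd/2 /var/log/apache2/error.log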
I'm trying to configure Celery on my FreeBSD server, and I'm getting some issues according to the log files.
My configuration:
FreeBSD server
2 Django applications : app1 and app2
Celery daemonized, with Redis as the broker
Each application has its own Celery tasks
My Celery config file:
In /etc/default/celeryd_app1 I have:
# Names of nodes to start
CELERYD_NODES="worker"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/www/app1/venv/bin/celery"
# App instance to use
CELERY_APP="main"
# Where to chdir at start.
CELERYD_CHDIR="/usr/local/www/app1/src/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Set logging level to DEBUG
#CELERYD_LOG_LEVEL="DEBUG"
# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/app1/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/app1/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
I have exactly the same file for celeryd_app2
Django settings file with Celery settings:
CELERY_BROKER_URL = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_IGNORE_RESULT = False
CELERY_TASK_TRACK_STARTED = True
# Add a one-minute timeout to all Celery tasks.
CELERYD_TASK_SOFT_TIME_LIMIT = 60
Both settings files use the same Redis port.
My issue:
When I execute a Celery task for app1, I find the logs from this task in the app2 log file, with an error like this:
Received unregistered task of type 'app1.task.my_task_for_app1'
...
KeyError: 'app1.task.my_task_for_app1'
Is there an issue in my Celery config files? Do I have to set a different Redis port for each app? If yes, how can I do that?
Thank you very much
I guess the problem lies in the fact that you are using the same Redis database for both applications:
CELERY_BROKER_URL = 'redis://localhost:6379'
Take a look at the guide for using Redis as a broker. Just use a different database for each application, e.g.
CELERY_BROKER_URL = 'redis://localhost:6379/0'
and
CELERY_BROKER_URL = 'redis://localhost:6379/1'
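If results matter to you as well, it's worth pointing the result backend at the same per-application database; a sketch using the settings names from the question:
# app1 Django settings
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'

# app2 Django settings
CELERY_BROKER_URL = 'redis://localhost:6379/1'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/1'
Giving each worker a distinct node name (e.g. CELERYD_NODES="worker_app1" and "worker_app2") also makes the two daemons easier to tell apart in the logs, but the separate broker databases are what actually stops the tasks from crossing over.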
I am a total newbie when it comes to Kubernetes/Atomic Host, so my question may be really trivial or already well discussed, but unfortunately I couldn't find any clues on how to achieve my goal, which is why I am here.
I have set up a Kubernetes cluster on Atomic hosts (right now I have just one master and one node). I am working in a cloud network, on virtual machines.
[root@master ~]# kubectl get node
NAME STATUS AGE
192.168.2.3 Ready 9d
After a lot of fuss I managed to set up the Kubernetes dashboard UI on my master.
[root@master ~]# kubectl describe pod --namespace=kube-system
Name: kubernetes-dashboard-3791223240-8jvs8
Namespace: kube-system
Node: 192.168.2.3/192.168.2.3
Start Time: Thu, 07 Sep 2017 10:37:31 +0200
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=3791223240
Status: Running
IP: 172.16.43.2
Controllers: ReplicaSet/kubernetes-dashboard-3791223240
Containers:
kubernetes-dashboard:
Container ID: docker://8fddde282e41d25c59f51a5a4687c73e79e37828c4f7e960c1bf4a612966420b
Image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3
Image ID: docker-pullable://gcr.io/google_containers/kubernetes-dashboard-amd64@sha256:2c4421ed80358a0ee97b44357b6cd6dc09be6ccc27dfe9d50c9bfc39a760e5fe
Port: 9090/TCP
Args:
--apiserver-host=http://192.168.2.2:8080
Limits:
cpu: 100m
memory: 300Mi
Requests:
cpu: 100m
memory: 100Mi
State: Running
Started: Fri, 08 Sep 2017 10:54:46 +0200
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Thu, 07 Sep 2017 10:37:32 +0200
Finished: Fri, 08 Sep 2017 10:54:44 +0200
Ready: True
Restart Count: 1
Liveness: http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Volume Mounts: <none>
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
No volumes.
QoS Class: Burstable
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1d 32m 3 {kubelet 192.168.2.3} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
1d 32m 2 {kubelet 192.168.2.3} spec.containers{kubernetes-dashboard} Normal Pulled Container image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3" already present on machine
32m 32m 1 {kubelet 192.168.2.3} spec.containers{kubernetes-dashboard} Normal Created Created container with docker id 8fddde282e41; Security:[seccomp=unconfined]
32m 32m 1 {kubelet 192.168.2.3} spec.containers{kubernetes-dashboard} Normal Started Started container with docker id 8fddde282e41
Also:
[root@master ~]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
kubernetes-dashboard is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Now, when I tried connecting to the dashboard (I tried accessing it via the browser on a Windows virtual machine in the same cloud network) using the address:
https://192.168.218.2:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
I get "Unauthorized". I believe this proves that the dashboard is indeed running at this address, but I need to set up some way of accessing it?
What I want to achieve in the long term:
I want to enable connecting to the dashboard with a login/password (later, when I learn a bit more, I will think about authenticating with certificates or something safer than a password) from outside the cloud network. For now, connecting to the dashboard at all would do.
I know there are threads about authenticating, but most of them mention something like:
Basic authentication is enabled by passing the
--basic-auth-file=SOMEFILE option to API server
And this is the part I cannot cope with: I have no idea how to pass options to the API server.
On the Atomic host the api-server, kube-controller-manager and kube-scheduler run in containers, so I get into the api-server container with this command:
docker exec -it kube-apiserver.service bash
I have seen a few times that I should edit a .json file in the /etc/kubernetes/manifest directory, but unfortunately there is no such file (or even such a directory).
I apologize if my problem is too trivial or not described well enough, but I'm new to both the IT world and Stack Overflow.
I would love to provide more info, but I am afraid I would end up including lots of useless information, so I decided to wait for your instructions in that regard.
Check out the wiki pages of the Kubernetes dashboard; they describe how to get access to the dashboard and how to authenticate to it. For quick access you can run:
kubectl proxy
And then go to following address:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
You'll see two options: one of them is uploading your ~/.kube/config file, and the other is using a token. You can get a token by running the following command:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep service-account-token | head -n 1 | awk '{print $1}')
Now just copy and paste the long token string into dashboard prompt and you're done.
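The token from a service-account secret found this way may or may not have enough permissions, depending on whether RBAC is enabled on your cluster. If it doesn't, a common (if coarse) approach for a lab setup is to create a dedicated service account, bind it to the cluster-admin role and use its token -- the dashboard-admin name below is just an example, and cluster-admin is far too broad for production use:
kubectl -n kube-system create serviceaccount dashboard-admin
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:dashboard-admin
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep dashboard-admin-token | awk '{print $1}')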
I have a Jenkins setup that builds on SSH slaves.
The Jenkins server connects to a Mac slave through SSH and builds iOS apps there. Two remote nodes are configured in Jenkins, both connected to the Mac.
The Mac gets its address via DHCP.
Every time my Mac starts, I want to run a script that tells the Jenkins server to reconfigure the node's IP address to point at the DHCP address the Mac has received. Since it's DHCP, the address changes all the time.
Is it possible to configure something like that, using a shell script or Perl?
E.g. http://jenkins-server:8080/computer/mac-slave-enterprise/configure
is the node config URL. Is it possible to set this up by sending host=10.1.2.100 & Submit=Save or something like that?
I found that it is possible to run a Groovy script at
http://jenkins/script
or, from the Mac command line or a shell script, via:
$ curl -d "script=<your_script_here>" http://jenkins/script
I tried to get some info with the code below, but no luck. It seems I have to create an SSHLauncher, but I'm lost on how to grab the launcher: there is no direct setHost or setLauncher method that I can find.
I followed the tutorial at
https://wiki.jenkins-ci.org/display/JENKINS/Display+Information+About+Nodes
but cannot set the host address.
println("node desc launcher = " + aSlave.getComputer().getLauncher());
//println("node desc launcher = " + aSlave.getComputer().getLauncher().setHost("10.11.51.70"));
println("node launcher host = " + aSlave.getComputer().getLauncher().getHost());
hudson.plugins.sshslaves.SSHLauncher ssl = aSlave.getComputer().getLauncher();
int port = ssl.getPort();
String userName, password, privateKey;
userName = ssl.getUsername();
password = ssl.getPassword();
privateKey = ssl.getPrivatekey();
println("user: "+userName + ", pwd: "+password + ", key: "+privateKey);
// all these values return null.
Another way would be to just delete the node and recreate it.
Here is some Groovy showing how to delete it, from here:
for (aSlave in hudson.model.Hudson.instance.slaves) {
    if (aSlave.name == "MySlaveToDelete") {
        println('====================');
        println('Name: ' + aSlave.name);
        println('Shutting down node!!!!');
        aSlave.getComputer().setTemporarilyOffline(true,null);
        aSlave.getComputer().doDoDelete();
    }
}
And here is how to create one (source):
import jenkins.model.*
import hudson.model.*
import hudson.slaves.*
Jenkins.instance.addNode(new DumbSlave("test-script","test slave description","C:\\Jenkins","1",Node.Mode.NORMAL,"test-slave-label",new JNLPLauncher(),new RetentionStrategy.Always(),new LinkedList()))
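To tie this back to the original question, the Mac's startup script could detect its current DHCP address and push a small Groovy snippet to the /script endpoint. The sketch below makes several assumptions: en0 is the active interface, the node is named mac-slave-enterprise, admin:API_TOKEN are valid Jenkins credentials, and the six-argument SSHLauncher constructor matches the older ssh-slaves plugin versions (newer versions take a credentialsId instead), so check your plugin's Javadoc before relying on it:
#!/bin/sh
# Hypothetical startup script on the Mac slave.
NEW_IP=$(ipconfig getifaddr en0)   # current DHCP address of the Mac

# Groovy to run on the Jenkins master: re-point the node's SSH launcher at the new address.
GROOVY="
import hudson.plugins.sshslaves.SSHLauncher
def node = jenkins.model.Jenkins.instance.getNode('mac-slave-enterprise')
def old = node.getLauncher()
node.setLauncher(new SSHLauncher('${NEW_IP}', old.getPort(), old.getUsername(), old.getPassword(), old.getPrivatekey(), null))
jenkins.model.Jenkins.instance.save()
"

curl -u admin:API_TOKEN --data-urlencode "script=${GROOVY}" http://jenkins-server:8080/script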
I'm very new to RabbitMQ. I installed rabbitmq-server on one EC2 instance, and I want to create a consumer on another EC2 instance.
But I'm getting this error:
socket.gaierror: [Errno -2] Name or service not known
That's the node status:
ubuntu#ip-10-147-xxx-xxx:~$ sudo rabbitmq-server restart
ERROR: node with name "rabbit" already running on "ip-10-147-xxx-xxx"
DIAGNOSTICS
===========
nodes in question: ['rabbit#ip-10-147-xxx-xxx']
hosts, their running nodes and ports:
- ip-10-147-xxx-xxx: [{rabbit,46074},{rabbitmqprelaunch4603,51638}]
current node details:
- node name: 'rabbitmqprelaunch4603#ip-10-147-xxx-xxx'
- home dir: /var/lib/rabbitmq
- cookie hash: Gsnt2qHd7wWDEOAOFby=
And that's the consumer code:
import pika

cred = pika.PlainCredentials('guest', 'guest')
conn_params = pika.ConnectionParameters('10-147-xxx-xxx', credentials=cred)
conn_broker = pika.BlockingConnection(conn_params)
channel = conn_broker.channel()

channel.exchange_declare(exchange='hello-exchange', type='direct', passive=False, durable=True, auto_delete=False)
channel.queue_declare(queue='hello-queue')
channel.queue_bind(queue='hello-queue', exchange='hello-exchange', routing_key='hola')

def msg_consumer(channel, method, header, body):
    channel.basic_ack(delivery_tag=method.delivery_tag)
    if body == 'quit':
        channel.basic_cancel(consumer_tag='hello-consumer')
        channel.stop_consuming()
    else:
        print body
    return

channel.basic_consume(msg_consumer, queue='hello-queue', consumer_tag='hello-consumer')
channel.start_consuming()
You should check that the security group allows access to the RabbitMQ port. Also, it seems that you are not using RabbitMQ's default port (5672), so it should be specified in your connection parameters.
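For completeness, a hedged sketch of what the consumer's connection parameters could look like once the host is a resolvable name or dotted IP and the port is explicit (the address below is a placeholder; use the broker instance's private IP or DNS name as seen from the consumer instance):
import pika

cred = pika.PlainCredentials('guest', 'guest')
# 5672 is the default AMQP port; it must be open in the broker's security group.
# Note that recent RabbitMQ versions only allow the 'guest' user from localhost,
# so a dedicated user may be needed for remote consumers.
conn_params = pika.ConnectionParameters(host='10.147.xxx.xxx', port=5672, credentials=cred)
conn_broker = pika.BlockingConnection(conn_params)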