How to target WorkManager to multiple clusters without hardcoding cluster name or specifying cluster names in property file - weblogic

I have a WLST script to set up a WorkManager and target it to all the clusters in a WebLogic domain. The issue is that I have hundreds of domains across multiple environments, so I cannot maintain multiple property files. Is there a way to script WLST to pull the cluster details from the domain and set them as targets of the created WorkManager?
I tried to get the cluster details using the code below, but it lists only one cluster even though 3 clusters are configured on the domain.
#
cd ('/Clusters')
Clusters = cmo.getClusters()
for clusters in Clusters:
    svr = clusters.getName()
    print svr
#
Here is the WLST script I am using:
#
import java.lang.Exception
# Connect to ADMIN
##################
userName=sys.argv[1]
userPW=sys.argv[2]
adminURL=sys.argv[3]
WL_DOMAIN=sys.argv[4]
exists = 'no';
connect(userName,userPW,adminURL)
edit()
startEdit()
cd ('/Clusters')
Clusters = cmo.getClusters()
for clusters in Clusters:
    svr = clusters.getName()
    print svr
cd('/SelfTuning/' + domainName)
cmo.createWorkManager('workManager')
cd('/SelfTuning/' + domainName + '/WorkManagers/workManager')
set('Targets',jarray.array([ObjectName('com.bea:Name='+svr+',Type=Cluster')], ObjectName))
activate()
#
The above script targets the created WorkManager to only one cluster, even though the domain itself has 3 clusters. I would like to target the WorkManager to all the clusters in the domain without hardcoding any cluster names. Please help!

It is an algorithmic issue: your code is not correct. After the loop finishes, svr holds only the name of the last cluster, so the WorkManager is targeted to that single cluster. getClusters() already returns all of the cluster MBeans, and you can pass that array straight to set('Targets', ...):
connect(userName,userPW,adminURL)
edit()
startEdit()
# create the WorkManager first
cd('/SelfTuning/' + domainName)
cmo.createWorkManager('workManager')
# grab every ClusterMBean in the domain
cd('/Clusters')
Clusters = cmo.getClusters()
# target the WorkManager to all clusters at once
cd('/SelfTuning/' + domainName + '/WorkManagers/workManager')
set('Targets',Clusters)
save()
activate()
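If you would rather keep the ObjectName style from the original script, the fix is to collect one ObjectName per cluster instead of keeping only the last one. A minimal sketch (assuming domainName is set as in the script above, and that the jarray/ObjectName imports are available in the WLST session):

import jarray
from javax.management import ObjectName

cd('/Clusters')
targets = []
for cluster in cmo.getClusters():
    # build one target entry per cluster instead of overwriting a single variable
    targets.append(ObjectName('com.bea:Name=' + cluster.getName() + ',Type=Cluster'))

cd('/SelfTuning/' + domainName + '/WorkManagers/workManager')
set('Targets', jarray.array(targets, ObjectName))
save()
activate()

Either way, the key point is that getClusters() already returns every cluster in the domain, so no cluster names need to be hardcoded.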

Here you go (just get rid of that svr variable):
cd ('/Clusters')
Clusters = cmo.getClusters()
for clusters in Clusters:
    print clusters.getName()

Related

Web Hue installation and setup is done, but the dashboard is not working

I have recently set up Hue on my Hadoop cluster and everything seems fine. I was able to open the Hue web UI, i.e. localhost:8888, and I can see HDFS, HBase and MySQL. But I am still facing some issues. Could anyone please help me out in this regard?
The problems I am facing are:
Hive connection: I am able to connect to the Hive databases using beeline on the shell, but in web Hue it shows an error loading databases. The configuration I have used in the hue.ini file is:
hive_server_host=localhost
# Port where HiveServer2 Thrift server runs on.
hive_server_port=10000
The second issue: even though I was able to connect to the MySQL database, I am facing a problem in the dashboard tab. I can see all the widgets and charting options like pie, bar etc., but when I drag and drop them onto the page, it loads forever. I am not able to see any chart of the table data.
Please help me out, as I have been trying for 10 days and have not been able to find any pointers yet.
@Ruthikajawar I have a working hue.ini here:
https://github.com/steven-dfheinz/HDP3-Hue-Service/blob/Hue.4.6.0/configuration/live.hue.ini
The specifics for working hive are:
[beeswax]
# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
hive_server_host=hdp.cloudera.com
# Binary thrift port for HiveServer2.
#hive_server_port=10000
# Http thrift port for HiveServer2.
#hive_server_http_port=10001
# Host where LLAP is running
## llap_server_host = localhost
# LLAP binary thrift port
## llap_server_port = 10500
# LLAP HTTP Thrift port
## llap_server_thrift_port = 10501
# Alternatively, use Service Discovery for LLAP (Hive Server Interactive) and/or Hiveserver2, this will override server and thrift port
# Whether to use Service Discovery for LLAP
## hive_discovery_llap = true
# is llap (hive server interactive) running in an HA configuration (more than 1)
# important as the zookeeper structure is different
## hive_discovery_llap_ha = false
# Shortcuts to finding LLAP znode Key
# Non-HA - hiveserver-interactive-site - hive.server2.zookeeper.namespace ex hive2 = /hive2
# HA-NonKerberized - (llap_app_name)_llap ex app name llap0 = /llap0_llap
# HA-Kerberized - (llap_app_name)_llap-sasl ex app name llap0 = /llap0_llap-sasl
## hive_discovery_llap_znode = /hiveserver2-hive2
# Whether to use Service Discovery for HiveServer2
hive_discovery_hs2 = true
# Hiveserver2 is hive-site hive.server2.zookeeper.namespace ex hiveserver2 = /hiverserver2
hive_discovery_hiveserver2_znode = /hiveserver2
# Applicable only for LLAP HA
# To keep the load on zookeeper to a minimum
# ---- we cache the LLAP activeEndpoint for the cache_timeout period
# ---- we cache the hiveserver2 endpoint for the length of session
# configurations to set the time between zookeeper checks
## cache_timeout = 60
# Host where Hive Metastore Server (HMS) is running.
# If Kerberos security is enabled, the fully-qualified domain name (FQDN) is required.
#hive_metastore_host=hdp.cloudera.com
# Configure the port the Hive Metastore Server runs on.
#hive_metastore_port=9083
# Hive configuration directory, where hive-site.xml is located
hive_conf_dir=/etc/hive/conf
# Timeout in seconds for thrift calls to Hive service
## server_conn_timeout=120
# Choose whether to use the old GetLog() thrift call from before Hive 0.14 to retrieve the logs.
# If false, use the FetchResults() thrift call from Hive 1.0 or more instead.
## use_get_log_api=false
# Limit the number of partitions that can be listed.
## list_partitions_limit=10000
# The maximum number of partitions that will be included in the SELECT * LIMIT sample query for partitioned tables.
## query_partitions_limit=10
# A limit to the number of rows that can be downloaded from a query before it is truncated.
# A value of -1 means there will be no limit.
## download_row_limit=100000
# A limit to the number of bytes that can be downloaded from a query before it is truncated.
# A value of -1 means there will be no limit.
## download_bytes_limit=-1
# Hue will try to close the Hive query when the user leaves the editor page.
# This will free all the query resources in HiveServer2, but also make its results inaccessible.
## close_queries=false
# Hue will use at most this many HiveServer2 sessions per user at a time.
# For Tez, increase the number to more if you need more than one query at the time, e.g. 2 or 3 (Tez has a maximum of 1 query by session).
## max_number_of_sessions=1
# Thrift version to use when communicating with HiveServer2.
# Version 11 comes with Hive 3.0. If issues, try 7.
thrift_version=11
# A comma-separated list of white-listed Hive configuration properties that users are authorized to set.
## config_whitelist=hive.map.aggr,hive.exec.compress.output,hive.exec.parallel,hive.execution.engine,mapreduce.job.queuename
# Override the default desktop username and password of the hue user used for authentications with other services.
# e.g. Used for LDAP/PAM pass-through authentication.
## auth_username=hive
## auth_password=hive
# Use SASL framework to establish connection to host.
use_sasl=true
For the second part of your question: you should monitor /var/log/hue/error.log while using the UI to capture and resolve any errors.
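As an extra sanity check (not part of the original answer), it is worth confirming that the host running Hue can actually reach the HiveServer2 Thrift port configured in hue.ini. A minimal sketch, using the host and port values from the question:

# Quick reachability test for the HiveServer2 Thrift port (values taken from the question's hue.ini).
import socket

host, port = "localhost", 10000  # hive_server_host / hive_server_port
try:
    sock = socket.create_connection((host, port), timeout=5)
    sock.close()
    print("HiveServer2 Thrift port is reachable")
except socket.error as exc:
    print("Cannot reach HiveServer2: %s" % exc)

If this fails, fix the host/port (or the HiveServer2 service) before debugging Hue itself.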

Celery tasks from different applications in different log files

I'm trying to configure Celery on my FreeBSD server and I'm running into some issues with the log files.
My configuration:
FreeBSD server
2 Django applications: app1 and app2
Celery is daemonized, with Redis as the broker
Each application has its own Celery tasks
My Celery config file:
I have this in /etc/default/celeryd_app1:
# Names of nodes to start
CELERYD_NODES="worker"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/www/app1/venv/bin/celery"
# App instance to use
CELERY_APP="main"
# Where to chdir at start.
CELERYD_CHDIR="/usr/local/www/app1/src/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Set logging level to DEBUG
#CELERYD_LOG_LEVEL="DEBUG"
# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/app1/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/app1/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
I have exactly the same file for celeryd_app2
Django settings file with Celery settings:
CELERY_BROKER_URL = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_IGNORE_RESULT = False
CELERY_TASK_TRACK_STARTED = True
# Add a one-minute timeout to all Celery tasks.
CELERYD_TASK_SOFT_TIME_LIMIT = 60
Both applications' settings use the same Redis port.
My issue:
When I execute a Celery task for app1, I find logs from this task in the app2 log file, with an error like this:
Received unregistered task of type 'app1.task.my_task_for_app1'
...
KeyError: 'app1.task.my_task_for_app1'
Is there an issue in my Celery config file? Do I have to set a different Redis port? If yes, how can I do that?
Thank you very much
I guess the problem lies in the fact that you are using the same Redis database for both applications:
CELERY_BROKER_URL = 'redis://localhost:6379'
Take a look at the guide for using Redis as a broker. Just change the database for each application, e.g.
CELERY_BROKER_URL = 'redis://localhost:6379/0'
and
CELERY_BROKER_URL = 'redis://localhost:6379/1'
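In other words, keep the same port but give each project its own Redis database number. A minimal sketch of the two Django settings files (the database numbers 0 and 1 are arbitrary examples; if both apps also store results, split CELERY_RESULT_BACKEND the same way):

# app1/settings.py (sketch)
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'

# app2/settings.py (sketch)
CELERY_BROKER_URL = 'redis://localhost:6379/1'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/1'

With separate databases, each worker only sees the tasks registered for its own application, so the "Received unregistered task" errors should stop appearing in the other app's log.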

puppet master didn't pass agent hostname/fqdn to enc script

Puppet version: 3.6.2
In order to simplify the management of SSL certificates, our puppet agents all use the same certname: certname=agent.puppet.com
When the puppet master gets a request from an agent (hostname: web00.xxx.com), it executes the ENC script with the certname as the parameter.
node_terminus = exec
external_nodes = /home/ocean/puppet/conf/bce_puppet_bns
puppet.log:
2015-05-06 09:55:34 +0800 Puppet (debug): Executing '/home/ocean/puppet/conf/bce_puppet_bns agent.puppet.com'
How do I configure the puppet master to pass the agent's real hostname/FQDN to the ENC script, like this:
/home/ocean/puppet/conf/bce_puppet_bns web00.xxx.com
Or how can I get the agent's hostname/FQDN in the ENC script?
Don't.
Don't use any info other than $clientcert passed from the agent.
Don't share certificates among different agents.
There are deeply rooted assumptions in Puppet that each agent node has an individual certificate. You will wreak havoc in your infrastructure by trying such stunts.
For example, PuppetDB data is usually grouped by owning agents' certnames. This data will become inconsistent quickly with all agents calling themselves the same, but being quite different of course.
Ensure the puppet master config says this:
[master]
node_name = facter
Alter auth.conf so that all the relevant sections also allow the "agent.puppet.com" cert, like this:
# allow nodes to retrieve their own catalog
path ~ ^/catalog/([^/]+)$
method find
allow $1
allow agent.puppet.com
# allow nodes to retrieve their own node definition
path ~ ^/node/([^/]+)$
method find
allow $1
allow agent.puppet.com
# allow all nodes to access the certificates services
path /certificate_revocation_list/ca
method find
allow *
# allow all nodes to store their own reports
path ~ ^/report/([^/]+)$
method save
allow $1
allow agent.puppet.com
That covers just the puppetmaster <=> client side; Felix is right that if you are using PuppetDB, that would have to be altered too.
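For reference, an exec ENC just receives the node name as its first argument and prints a YAML node definition on stdout, so once node_name = facter makes the master pass the real hostname, the script can branch on it. A minimal sketch of what such a script could look like (the web_server class and the hostname-based lookup are purely hypothetical; the real bce_puppet_bns logic is not shown in the question):

#!/usr/bin/env python
# Minimal exec ENC sketch: argv[1] is the node name the master passes in
# (the agent's real hostname once node_name = facter is in effect).
import sys

node = sys.argv[1]  # e.g. web00.xxx.com

# Hypothetical mapping from hostname to classes; replace with your own lookup.
classes = ["web_server"] if node.startswith("web") else []

print("---")
print("classes:")
for cls in classes:
    print("  - %s" % cls)
print("parameters:")
print("  enc_hostname: %s" % node)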

Jenkins: configure slave node address dynamically using command or groovy script

I have a Jenkins setup that builds on an SSH slave.
The Jenkins server connects to a Mac slave through SSH and builds iOS apps there. Two remote nodes are configured in Jenkins that connect to the Mac.
The Mac gets its address via DHCP.
Every time my Mac starts, I want to run a script that tells the Jenkins server to point the node's address at the DHCP address the Mac has received; since it is DHCP, it changes all the time.
Is it possible to configure this with a shell script, Perl, or similar?
e.g. http://jenkins-server:8080/computer/mac-slave-enterprise/configure
is the node config URL. Is it possible to set this up by sending host=10.1.2.100 & Submit=Save or something like that?
I found it is possible to run a Groovy script at
http://jenkins/script
or from the Mac command line or a shell script:
$ curl -d "script=<your_script_here>" http://jenkins/script
I tried to get some info with this code but no luck. It seems I have to create an SSHLauncher, but I am lost on how to grab the launcher; there is no direct setHost or setLauncher method that I can find.
I am following the tutorial at
https://wiki.jenkins-ci.org/display/JENKINS/Display+Information+About+Nodes
but I cannot set the host address.
println("node desc launcher = " + aSlave.getComputer().getLauncher());
//println("node desc launcher = " + aSlave.getComputer().getLauncher().setHost("10.11.51.70"));
println("node launcher host = " + aSlave.getComputer().getLauncher().getHost());
hudson.plugins.sshslaves.SSHLauncher ssl = aSlave.getComputer().getLauncher();
int port = ssl.getPort();
String userName, password, privateKey;
userName = ssl.getUsername();
password = ssl.getPassword();
privateKey = ssl.getPrivatekey();
println("user: "+userName + ", pwd: "+password + ", key: "+privateKey);
// all these values return null.
Another way would be to just delete the node and recreate it.
Here is some Groovy on how to delete it, from here:
for (aSlave in hudson.model.Hudson.instance.slaves) {
  if (aSlave.name == "MySlaveToDelete") {
    println('====================');
    println('Name: ' + aSlave.name);
    println('Shutting down node!!!!');
    aSlave.getComputer().setTemporarilyOffline(true,null);
    aSlave.getComputer().doDoDelete();
  }
}
And here is how to create one (source):
import jenkins.model.*
import hudson.model.*
import hudson.slaves.*
Jenkins.instance.addNode(new DumbSlave("test-script","test slave description","C:\\Jenkins","1",Node.Mode.NORMAL,"test-slave-label",new JNLPLauncher(),new RetentionStrategy.Always(),new LinkedList()))
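To connect this back to the question: the Mac's startup script could substitute its current DHCP address into a delete-and-recreate Groovy script like the ones above and post it to Jenkins' script console endpoint (/scriptText). A rough sketch (the node name, credentials, and use of the requests library are assumptions, and if CSRF protection is enabled a crumb must be sent as well):

# Sketch: run on the Mac at boot; pushes a Groovy script to Jenkins' /scriptText endpoint.
import socket
import requests  # assumption: requests is installed on the Mac

JENKINS = "http://jenkins-server:8080"
# The Mac's current address; gethostbyname can return 127.0.0.1 on some setups,
# in which case a more robust lookup is needed.
current_ip = socket.gethostbyname(socket.gethostname())

# The Groovy body would be the delete + recreate snippets from this answer,
# with the new IP substituted in; shown here only as a placeholder.
groovy = "// delete 'mac-slave-enterprise' and re-add it using host %s" % current_ip

resp = requests.post(
    JENKINS + "/scriptText",
    auth=("admin-user", "api-token"),  # hypothetical credentials
    data={"script": groovy},
)
print(resp.status_code)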

Running buildbot behind cherokee reverse proxy

I am attempting to run my buildbot master server behind a Cherokee reverse proxy, with the buildbot instance as Cherokee's information source in a round-robin reverse proxy layout.
This is the buildbot master.cfg configuration file:
# -*- python -*-
# ex: set syntax=python:
# This is a sample buildmaster config file. It must be installed as
# 'master.cfg' in your buildmaster's base directory.
# This is the dictionary that the buildmaster pays attention to. We also use
# a shorter alias to save typing.
c = BuildmasterConfig = {}
####### BUILDSLAVES
# The 'slaves' list defines the set of recognized buildslaves. Each element is
# a BuildSlave object, specifying a unique slave name and password. The same
# slave name and password must be configured on the slave.
from buildbot.buildslave import BuildSlave
c['slaves'] = [BuildSlave("example-slave", "pass")]
# 'slavePortnum' defines the TCP port to listen on for connections from slaves.
# This must match the value configured into the buildslaves (with their
# --master option)
c['slavePortnum'] = 9989
####### CHANGESOURCES
# the 'change_source' setting tells the buildmaster how it should find out
# about source code changes. Here we point to the buildbot clone of pyflakes.
from buildbot.changes.gitpoller import GitPoller
c['change_source'] = []
c['change_source'].append(GitPoller(
        'git://github.com/buildbot/pyflakes.git',
        workdir='gitpoller-workdir', branch='master',
        pollinterval=300))
####### SCHEDULERS
# Configure the Schedulers, which decide how to react to incoming changes. In this
# case, just kick off a 'runtests' build
from buildbot.schedulers.basic import SingleBranchScheduler
from buildbot.schedulers.forcesched import ForceScheduler
from buildbot.changes import filter
c['schedulers'] = []
c['schedulers'].append(SingleBranchScheduler(
        name="all",
        change_filter=filter.ChangeFilter(branch='master'),
        treeStableTimer=None,
        builderNames=["runtests"]))
c['schedulers'].append(ForceScheduler(
        name="force",
        builderNames=["runtests"]))
####### BUILDERS
# The 'builders' list defines the Builders, which tell Buildbot how to perform a build:
# what steps, and which slaves can execute them. Note that any particular build will
# only take place on one slave.
from buildbot.process.factory import BuildFactory
from buildbot.steps.source import Git
from buildbot.steps.shell import ShellCommand
factory = BuildFactory()
# check out the source
factory.addStep(Git(repourl='git://github.com/buildbot/pyflakes.git', mode='copy'))
# run the tests (note that this will require that 'trial' is installed)
factory.addStep(ShellCommand(command=["trial", "pyflakes"]))
from buildbot.config import BuilderConfig
c['builders'] = []
c['builders'].append(
    BuilderConfig(name="runtests",
      slavenames=["example-slave"],
      factory=factory))
####### STATUS TARGETS
# 'status' is a list of Status Targets. The results of each build will be
# pushed to these targets. buildbot/status/*.py has a variety to choose from,
# including web pages, email senders, and IRC bots.
c['status'] = []
from buildbot.status import html
from buildbot.status.web import authz, auth
authz_cfg=authz.Authz(
    # change any of these to True to enable; see the manual for more
    # options
    auth=auth.BasicAuth([("pyflakes","pyflakes")]),
    gracefulShutdown = False,
    forceBuild = 'auth', # use this to test your slave once it is set up
    forceAllBuilds = False,
    pingBuilder = False,
    stopBuild = False,
    stopAllBuilds = False,
    cancelPendingBuild = False,
)
c['status'].append(html.WebStatus(http_port=8010, authz=authz_cfg))
####### PROJECT IDENTITY
# the 'title' string will appear at the top of this buildbot
# installation's html.WebStatus home page (linked to the
# 'titleURL') and is embedded in the title of the waterfall HTML page.
c['title'] = "Pyflakes"
c['titleURL'] = "http://divmod.org/trac/wiki/DivmodPyflakes"
# the 'buildbotURL' string should point to the location where the buildbot's
# internal web server (usually the html.WebStatus page) is visible. This
# typically uses the port number set in the Waterfall 'status' entry, but
# with an externally-visible host name which the buildbot cannot figure out
# without some help.
c['buildbotURL'] = "http://localhost:8010/"
####### DB URL
c['db'] = {
    # This specifies what database buildbot uses to store its state. You can leave
    # this at its default for all but the largest installations.
    'db_url' : "sqlite:///state.sqlite",
}
And this is the Cherokee configuration:
Unfortunately, I get 502 Bad Gateway when I go to my web URL. On the other hand, I know that my buildbot master server instance is working correctly, because going to the same URL with :8010 appended gives me the "Welcome to the Buildbot ..." page.
Is your proxy on the same machine as the buildbot? If not, you will need to adjust the URL in Cherokee to point to the machine running buildbot (localhost points to the machine Cherokee is running on).
In any case, c['buildbotURL'] should be changed to point to the public URL that the buildbot is available under (i.e. what cherokee exposes, rather than the URL being proxied).
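A minimal sketch of that last point in master.cfg (build.example.com is a placeholder for whatever public URL Cherokee actually serves the site under):

# The web status can keep listening on 8010 behind the proxy ...
c['status'].append(html.WebStatus(http_port=8010, authz=authz_cfg))
# ... but buildbotURL should be the externally visible address that Cherokee exposes,
# not the proxied localhost:8010 URL.
c['buildbotURL'] = "http://build.example.com/"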