Best way to write Condition to systemd unit file

I'm trying to figure out the best way to determine whether a Linux instance is Amazon Linux 2 or Red Hat Enterprise Linux 7.
I was looking at the ConditionArchitecture test; however, it does not seem to be granular enough. The other route would be to use ConditionPathExists and try to find a unique path between AL2 and RHEL7.
[Unit]
Description=CloudPassage Halo Agent Configuration
After=network-online.target network.service
Before=cphalod.service
ConditionFileNotEmpty=!/opt/cloudpassage/data/store.db.vector
[Service]
Type=oneshot
ExecStart=/opt/cloudpassage/bin/configure --agent-key=XXXXXXXXXXXXXXXXXXXXXX --tag=XXX-XXX-XXX --proxy=proxy:3128 --dns=false
[Install]
WantedBy=multi-user.target
I basically want to add a conditional statement in the Service section of the unit file, saying if it's AL2 use one agent-key and tag, and if it's RHEL7 use a different agent-key and tag. Has anyone done anything similar? I've tried searching around SO, but I didn't see anything for a scenario similar to mine. If there is a better way to go about it instead of in the unit file, I'm open to suggestions.
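One option (a sketch, not something from the original thread): skip the Condition* directives entirely and let a small wrapper script pick the settings at runtime by reading /etc/os-release. On AL2 that file has ID="amzn" with VERSION_ID="2", and on RHEL7 it has ID="rhel". The keys, tags, and script name below are placeholders.
#!/bin/bash
# select-agent-config.sh: choose CloudPassage settings based on the distro
. /etc/os-release
case "$ID" in
    amzn) KEY=AL2-AGENT-KEY;   TAG=AL2-TAG   ;;
    rhel) KEY=RHEL7-AGENT-KEY; TAG=RHEL7-TAG ;;
    *)    echo "unsupported distro: $ID" >&2; exit 1 ;;
esac
exec /opt/cloudpassage/bin/configure --agent-key="$KEY" --tag="$TAG" --proxy=proxy:3128 --dns=false
The unit's ExecStart would then point at this script instead of calling configure directly, so a single unit file works on both distros.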

Related

How to make a systemctl service not close other services

Below is the BHd.service file; this is the first time I've done this, and everything is finally working perfectly except that when I stop the service, it always, without fail, stops httpd (and likely other services listed here). I have sincerely looked everywhere for the answer. I don't think it should be doing this.
Long story short, I need smb, nfs, httpd and mariadb to be running before this unit starts. I do not want them to be stopped after the unit is stopped or reloaded; for now, firewalld must be off. I honestly can't tell which line is affecting the systemctl stop command; everything I read indicates that Requires and After only affect systemctl start.
[Unit]
Description=BH
Documentation=somewhere.com
Wants=smb.service nfs.service
After=httpd.service mariadb.service
Requires=httpd.service mariadb.service
Conflicts=firewalld.service
[Service]
Type=forking
PIDFile=/var/run/BHd.pid
ExecStart=/usr/bin/python /var/www/html/pythonscripts/BHd.py start
ExecStop=/usr/bin/python /var/www/html/pythonscripts/BHd.py stop
[Install]
Alias=BHd
WantedBy=smb.service nfs.service
RequiredBy=httpd.service mariadb.service
EDIT: More info
[root@BHDEMO ~]# systemctl -l | grep BH
BHd.service loaded failed failed Description
[root@BHDEMO ~]#
Short Answer: systemctl disable BHd
Long Answer:
Well, I feel stupid... I'm sure some of you will think "this guy is writing a 40k line service but didn't know that!"... yes, that's how us computer engineers roll, lol.
The Solution:
After editing this file over and over in an attempt to get it working, I never tried systemctl disable BHd... that did it... I had a few lines in the Install section earlier, the WantedBy and RequiredBy seen above. I realized about 5 hours ago that this was not correct and deleted them. Unfortunately, just doing another enable is not sufficient: those lines created symlinks (in .wants/.requires directories) that needed to be uninstalled with disable, then reinstalled with enable without the added files and folders. Almost sounds like a Linux bug that should be fixed; imagine how many people like me over the years have wasted so much time.
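For reference, the stale RequiredBy=httpd.service symlink made httpd.service require BHd, and Requires= propagates stop, which would explain why stopping BHd took httpd down with it. The recovery sequence looks like this:
systemctl disable BHd
# remove the stray WantedBy=/RequiredBy= lines from the [Install] section
systemctl daemon-reload
systemctl enable BHd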

How to submit code to a remote Spark cluster from IntelliJ IDEA

I have two clusters, one in a local virtual machine and another in a remote cloud. Both clusters are in standalone mode.
My Environment:
Scala: 2.10.4
Spark: 1.5.1
JDK: 1.8.40
OS: CentOS Linux release 7.1.1503 (Core)
The local cluster:
Spark Master: spark://local1:7077
The remote cluster:
Spark Master: spark://remote1:7077
I want to do the following:
Write code (just a simple word count) in IntelliJ IDEA locally (on my laptop), set the Spark Master URL to spark://local1:7077 or spark://remote1:7077, and then run my code from IntelliJ IDEA. That is, I don't want to use spark-submit to submit a job.
But I ran into a problem:
When I use the local cluster, everything goes well. Running the code in IntelliJ IDEA or using spark-submit both submit the job to the cluster and finish it.
But when I use the remote cluster, I get a warning log:
TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
(It says sufficient resources, not sufficient memory!)
This log keeps printing with no further action. Both spark-submit and running the code in IntelliJ IDEA give the same result.
I want to know:
Is it possible to submit code from IntelliJ IDEA to the remote cluster?
If so, what configuration does it need?
What are the possible causes of my problem?
How can I handle this problem?
Thanks a lot!
Update
There is a similar question here, but I think my scenario is different. When I run my code in IntelliJ IDEA and set the Spark Master to the local virtual machine cluster, it works. Against the remote cluster I get the Initial job has not accepted any resources;... warning instead.
I want to know whether a security policy or a firewall could cause this?
Submitting code programmatically (e.g. via SparkSubmit) is quite tricky. At the least, there is a variety of environment settings and considerations, handled by the spark-submit script, that are quite difficult to replicate within a Scala program. I am still uncertain how to achieve it, and there have been a number of long-running threads in the Spark developer community on the topic.
My answer here is about a portion of your post: specifically the
TaskSchedulerImpl: Initial job has not accepted any resources; check
your cluster UI to ensure that workers are registered and have
sufficient resources
The reason is typically a mismatch between the memory and/or number of cores requested by your job and what is available on the cluster. Possibly, when submitting from IntelliJ, the settings in
$SPARK_HOME/conf/spark-defaults.conf
did not properly match the parameters required for your task on the existing cluster. You may need to update:
spark.driver.memory 4g
spark.executor.memory 8g
spark.executor.cores 8
You can check the Spark UI on port 8080 to verify that the parameters you requested are actually available on the cluster.
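As a quick sanity check (a sketch; the class name and jar path are placeholders), you can request deliberately modest resources on the command line so the job cannot ask for more than the remote workers advertise:
spark-submit \
  --master spark://remote1:7077 \
  --driver-memory 1g \
  --executor-memory 1g \
  --total-executor-cores 2 \
  --class com.example.WordCount \
  target/wordcount.jar
If that runs, the original failure was a resource mismatch rather than connectivity. If it still hangs, look at firewall rules: in standalone mode the executors must be able to connect back to the driver on your laptop, so a firewall between the cluster and the driver can also produce this symptom.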

Running a service in the background without getting stuck

Currently I am having an issue trying to run a process/script in the background (the master starts it on the minion).
The script is something like this:
#!/bin/bash
nohup ping 8.8.8.8 >/dev/null &
And I call it from the master with:
Process-Name:
service.running:
- name: Script-Name
- enable: True
For some reason it gets stuck on the master. I've read a little about this issue (it has apparently happened before) and tried the suggested solutions, but nothing involving the service state seems to work.
Is there any way to work around this?
In short, you should configure your script as a system daemon first (a SysV init.d script, a systemd unit, or similar, depending on the OS).
Details
The service.running function requires a properly configured system service (daemon).
For example, on RHEL-based Linux, if you don't see your script name in the output of one of these commands, you should configure it as a proper service first (which is a separate topic):
# systemd
systemctl list-units | grep your_service_name
# SysV init.d
chkconfig --list | grep your_service_name
And because you want to start it in the background, the cmd.run function is not the right tool either:
It will only report a successful start of the script, without waiting for its completion or results.
It will also start a new instance of your script every time.
However, if all you want is to simply "fire and forget", use cmd.run.
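For the systemd case, a minimal unit could look like this (a sketch; the unit name and path are placeholders, and since systemd handles the backgrounding itself, nohup and & are no longer needed):
[Unit]
Description=Background ping example

[Service]
ExecStart=/usr/bin/ping 8.8.8.8
Restart=always

[Install]
WantedBy=multi-user.target
With that installed as /etc/systemd/system/script-name.service (followed by systemctl daemon-reload), the original service.running state on the master should work as intended.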

Weblogic 12c setting PermSize when using NodeManager

We have a Windows Server 2012 64-bit + WebLogic 12c setup. The AdminServer requires a higher PermSize when being used with a 64-bit OS, thus we need to modify "setDomainEnv.cmd" (as described in other questions here on Stack Overflow).
When starting the AdminServer through the usual "startWeblogic.cmd" script, it uses the settings in "setDomainEnv.cmd" that set the PermSize etc. successfully, but when using the NodeManager "startServer()" command, it does not.
I read something in the documentation about being able to control the parameters that are loaded on startup of a managed server (with NodeManager), but I did not find the right way to do it.
I would hope that we can achieve a consistent behaviour when starting a managed server (and the AdminServer) through NodeManager or manually.
Any ideas?
UPDATE:
I checked what's going on when starting a managed server and, in comparison, what's going on when starting the AdminServer. Result: the AdminServer process (it starts a "javaw.exe" instance, in contrast to a "java.exe" instance for a managed server) never gets passed ANY parameters set in the setDomainEnv.cmd script; it's basically full of Oracle-internal parameters.
To me all this looks completely messed up and inconsistent. In addition to this, I found an issue reported by Oracle that mystically talks about setting environment variables when running on a 64-bit OS (see the headline "Developer ZIP Distribution Fails on Windows 64-bit and Linux 64-bit"):
https://docs.oracle.com/cd/E24329_01/doc.1211/e26593/issues.htm#WLSRN238
I have no idea if this applies to my version or not, since the version I downloaded does not say "developer" version; it basically was the primary WebLogic download for the latest release.
The question that comes to my mind is this: what is the expected way of starting the AdminServer, if not using "startServer"? Is there a bug that nobody cares about, since it is usually done differently? I am really disappointed by how confusing this rather simple topic becomes when you start reading the Oracle documentation: it simply does not say anything about it at all.
Command line that is triggered when starting the AdminServer through the "startServer()" command:
C:\PROGRA~1\Java\JDK17~1.0_6\jre\bin\javaw.exe -classpath "C:\PROGRA~1\Java\JDK17~1.0_6\jre\lib\rt.jar;C:\PROGRA~1\Java\JDK17~1.0_6\jre\lib\i18n.jar;C:\PROGRA~1\Java\JDK17~1.0_6\lib\tools.jar;D:\Oracle\Middleware\wlserver\server\lib\weblogic_sp.jar;D:\Oracle\Middleware\wlserver\server\lib\weblogic.jar;D:\Oracle\Middleware\oracle_common\modules\net.sf.antcontrib_1.1.0.0_1-0b3\lib\ant-contrib.jar;D:\Oracle\Middleware\wlserver\modules\features\oracle.wls.common.nodemanager_2.0.0.0.jar;D:\Oracle\Middleware\oracle_common\modules\com.oracle.cie.config-wls-online_8.1.0.0.jar;D:\Oracle\Middleware\wlserver\common\derby\lib\derbyclient.jar;D:\Oracle\Middleware\wlserver\common\derby\lib\derby.jar;D:\Oracle\Middleware\wlserver\server\lib\xqrl.jar" "-Djava.runtime.name=Java(TM) SE Runtime Environment" -Dpython.cachedir=C:\Users\ADMINI~1\AppData\Local\Temp\2\wlstTempAdministrator -Djava.protocol.handler.pkgs=weblogic.utils|weblogic.utils|weblogic.utils -Djava.vm.version=24.65-b04 "-Djava.vm.vendor=Oracle Corporation" -Djava.vendor.url=http://java.oracle.com/ -Dpath.separator=; "-Djava.vm.name=Java HotSpot(TM) 64-Bit Server VM" -Dweblogic.RootDirectory=D:\Oracle\Middleware\user_projects\domains\test1234\. "-Djava.vm.specification.name=Java Virtual Machine Specification" -Djava.runtime.version=1.7.0_67-b01 -Djavax.rmi.CORBA.UtilClass=weblogic.iiop.UtilDelegateImpl -Djava.awt.graphicsenv=sun.awt.Win32GraphicsEnvironment -Djava.endorsed.dirs=C:\PROGRA~1\Java\JDK17~1.0_6\jre\lib\endorsed -Dos.arch=amd64 -Djava.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\2\ -Dline.separator=
"-Djava.vm.specification.vendor=Oracle Corporation" -Djava.naming.factory.url.pkgs=weblogic.jndi.factories:weblogic.corba.j2ee.naming.url "-Dos.name=Windows Server 2012 R2" -Dprod.props.file=D:\Oracle\Middleware\wlserver\.product.properties -Dorg.omg.CORBA.ORBSingletonClass=weblogic.corba.orb.ORB -Djava.library.path=C:\PROGRA~1\Java\JDK17~1.0_6\jre\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;;D:\Oracle\Middleware\wlserver\server\native\win\x64;D:\Oracle\Middleware\wlserver\server\bin;D:\Oracle\Middleware\oracle_common\modules\org.apache.ant_1.9.2\bin;C:\PROGRA~1\Java\JDK17~1.0_6\jre\bin;C:\PROGRA~1\Java\JDK17~1.0_6\bin;D:\Oracle\product\12.1.0\dbhome_1\BIN;C:\Windows\System32;C:\Windows;C:\Windows\System32\wbem;C:\Windows\System32\WINDOW~1\v1.0\;C:\PROGRA~2\VISUAL~1\bin;C:\PROGRA~1\doxygen\bin;C:\PROGRA~1\TORTOI~1\bin;C:\PROGRA~2\WINDOW~4\8.0\WINDOW~1\;C:\PROGRA~1\MICROS~1\110\Tools\Binn\;D:\Oracle\Middleware\wlserver\server\native\win\x64\oci920_8;. "-Djava.specification.name=Java Platform API Specification" -Djava.class.version=51.0 -Dorg.omg.CORBA.ORBClass=weblogic.corba.orb.ORB -Dos.version=6.3 -Djavax.rmi.CORBA.PortableRemoteObjectClass=weblogic.iiop.PortableRemoteObjectDelegateImpl -Djava.awt.printerjob=sun.awt.windows.WPrinterJob -Djava.specification.version=1.7 -Djava.class.path=C:\PROGRA~1\Java\JDK17~1.0_6\lib\tools.jar;D:\Oracle\Middleware\wlserver\server\lib\weblogic_sp.jar;D:\Oracle\Middleware\wlserver\server\lib\weblogic.jar;D:\Oracle\Middleware\oracle_common\modules\net.sf.antcontrib_1.1.0.0_1-0b3\lib\ant-contrib.jar;D:\Oracle\Middleware\wlserver\modules\features\oracle.wls.common.nodemanager_2.0.0.0.jar;D:\Oracle\Middleware\oracle_common\modules\com.oracle.cie.config-wls-online_8.1.0.0.jar;D:\Oracle\Middleware\wlserver\common\derby\lib\derbyclient.jar;D:\Oracle\Middleware\wlserver\common\derby\lib\derby.jar;D:\Oracle\Middleware\wlserver\server\lib\xqrl.jar -Djava.vm.specification.version=1.7 -Dweblogic.management.GenerateDefaultConfig=false -Djava.home=C:\PROGRA~1\Java\JDK17~1.0_6\jre "-Djava.specification.vendor=Oracle Corporation" -Dawt.toolkit=sun.awt.windows.WToolkit "-Djava.vm.info=mixed mode" -Djava.version=1.7.0_67 -Djava.ext.dirs=C:\PROGRA~1\Java\JDK17~1.0_6\jre\lib\ext;C:\Windows\Sun\Java\lib\ext "-Djava.vendor=Oracle Corporation" -Djava.vendor.url.bug=http://bugreport.sun.com/bugreport/ -Dweblogic.store.DisableDiskScheduler=true -Dpython.verbose=warning weblogic.Server
UPDATE 2:
Starting the AdminServer through the node manager (nmStart('AdminServer')) creates a usual "java.exe" process and starts up the AdminServer with the correct memory settings. But this is even more confusing: why does "startServer()" create a separate process (javaw.exe) with entirely different settings? Why are my settings now totally different for the AdminServer? What is the "correct" way of starting the AdminServer (development/production?). Two thumbs down on this environment.
UPDATE 3:
After further tests, the solution to getting "startServer()" to work is basically as follows: do not worry about the node manager settings at all; edit the "startWeblogic" script directly by adding additional Java options inside it (as usual, via -D start parameters). The reason for all this is basically that the global settings (as used by the node manager) are ignored completely; see my pasted command line output.
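For example (a sketch; the sizes are placeholders), near the top of startWeblogic.cmd:
set JAVA_OPTIONS=-XX:PermSize=256m -XX:MaxPermSize=1024m %JAVA_OPTIONS%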
Check the nodemanager.properties file in your Oracle install (e.g. /opt/ora/mw/wlserver_10.3/common/nodemanager/nodemanager.properties) and verify that these options are set:
StartScriptName=startManagedWebLogic.sh
StartScriptEnabled=true
so the node manager is starting your servers with the appropriate scripts. You also have the option of setting server-specific start attributes via the admin console; go to:
Servers -> Server Name -> Server Start tab -> Arguments
You can fill in server-specific JVM args, like -XX:MaxPermSize=4096m, in this field, and they will be used by the node manager. This may be a better/easier idea than hard-coding it in the setDomainEnv script.
UPDATE
Attempt issuing an nmStart() command rather than a startServer() command for the AdminServer.
startServer allows you to start a server WITHOUT the node manager. It uses javaw.exe to effectively background the process.
nmStart allows you to start the server WITH the node manager, which is why you get the correct memory settings. Because the process is started via a service, it is more or less automatically backgrounded, which is why you see the normal java.exe.
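In WLST, that flow looks roughly like this (a sketch; the credentials and node manager port are placeholders, and the domain path is taken from the output pasted above):
nmConnect('weblogic', 'password', 'localhost', '5556', 'test1234', 'D:\Oracle\Middleware\user_projects\domains\test1234')
nmStart('AdminServer')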

knife status to list node where chef is failing

I was looking for a way to list chef-client nodes where chef is failing (only failing, NOT stopped).
I checked the knife status documentation, but didn't find anything useful.
Does anyone know a better way to do this?
Thanks
The knife lastrun plugin provides a Chef handler that records the time and status of the most recent chef run, and it also stores the stack trace of any failed run. This is a useful plugin when running large numbers of chef clients.
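Once the handler has reported in, the failing nodes can be found with a knife search (a sketch; the exact attribute name the plugin writes is an assumption here, so check the plugin's README for the real key):
knife search node "lastrun_status:fail*" -a name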