Axis2 generates a skeleton for only one port, but I have 2 ports

I have 2 ports in my WSDL:
<wsdl:portType name="Interface1">
<wsdl:portType name="Interface2">
I call
wsdl2java -g -o result -p "com.foo" -ss -ssi -ap -g -uri MyService.wsdl
Afterwards, I can only find "Interface1SkeletonInterface.java" in my "com/foo" folder.
Why?

wsdl2java has a command-line option -pn or --port-name to specify which port to generate code for. If not specified, the default is to generate code for the first listed port.
The ant task has an equivalent option portName.
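For example, assuming the second interface is exposed through a wsdl:port named Interface2Port in your <wsdl:service> section (the actual port name depends on your WSDL), a run along these lines should generate the other skeleton instead:
wsdl2java -uri MyService.wsdl -o result -p "com.foo" -ss -ssi -pn Interface2Port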


"Startup File" on Azure Docker Web App

Is the "Startup File" option on the docker web app options for docker-compose files? or shell commands? I cannot find any documentation for it...
Basically I'd like my Web App to run a docker-compose.yml instead of executing docker run [options] when I push an image to it.
This is now documented; the relevant part of the documentation says:
What are the expected values for the Startup File section when I configure the runtime stack?
For Node.js, you specify the PM2 configuration file or your script file. For .NET Core, specify your compiled DLL name as dotnet <myapp>.dll. For Ruby, you can specify the Ruby script that you want to initialize your app with.
Not sure if this is still a problem, but I just noticed that whatever you put in there is appended to the default startup command:
2019-09-02 05:03:04.493 INFO - docker run -d -p 55721:80 --name xxxxxx -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITE_SITE_NAME=xxxxx -e WEBSITE_AUTH_ENABLED=False -e PORT=80 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=xxxxxx.azurewebsites.net -e WEBSITE_INSTANCE_ID=xxxxxxxxx -e HTTP_LOGGING_ENABLED=1 xxxxxx.azurecr.io/xxxxxxx:latest -p 80:4000 -p 443:8000
I put -p 80:4000 -p 443:8000 into the textbox in the portal config. Note that in the log above they end up after the image name, so docker treats them as arguments to the container's command rather than as docker run options.
Azure Web Apps for Containers does not support multi-container apps (with docker-compose) at the time of writing.

Find httpd.conf file location after it has been changed by the -f flag

Httpd processes use a non-default configuration file if they are run with the -f flag.
For example
/home/myuser/apache/httpd-2.4.8/bin/httpd -f /confFiles/apache/2.4.8/apache.conf -k start
will use this configuration file: /confFiles/apache/2.4.8/apache.conf
I need to get this location and would rather not have to check for possible -f flags used to start httpd.
An existing answer suggests running /path/to/httpd -V and concatenating
-D SERVER_CONFIG_FILE="conf/httpd.conf"
with
-D HTTPD_ROOT="/etc/httpd"
to get the final path to the config file.
However, this path will not be the correct one if the -f flag is used to start the httpd process.
Is there a command that can get the config file that is actually being used by the process?
The answer you refer to mentions the paths httpd was compiled with, but as you say, those can be overridden with command-line parameters.
The simple way to check is the command line: if the process is called "httpd" (the standard name), a simple ps will reveal the config file being used:
ps auxw | grep httpd
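If you want to script this, a rough sketch along the following lines could pull the -f argument out of the running process's command line and fall back to the compiled-in default otherwise (it assumes GNU ps/grep and a process named httpd; adjust the httpd path to yours):
#!/bin/sh
# Hedged sketch: find the config file actually used by the running httpd.
# 1. Take the -f argument from the process command line, if one was given.
CONF=$(ps -o args= -C httpd | grep -oP -- '-f\s+\K\S+' | head -n 1)
# 2. Otherwise fall back to the compiled-in HTTPD_ROOT/SERVER_CONFIG_FILE from httpd -V.
if [ -z "$CONF" ]; then
  ROOT=$(/path/to/httpd -V | grep -oP 'HTTPD_ROOT="\K[^"]+')
  FILE=$(/path/to/httpd -V | grep -oP 'SERVER_CONFIG_FILE="\K[^"]+')
  CONF="$ROOT/$FILE"
fi
echo "$CONF"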
Or query the server, if it has mod_info loaded, from the command line or with your favourite browser:
curl "http://yourserver.example.com/server-info?server" | grep -i "config file"
Note: mod_info should not be publicly available for everyone to see.
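If you do load mod_info, one way to keep it from being publicly reachable is to restrict the handler to trusted addresses; a minimal sketch in Apache 2.4 syntax (192.0.2.0/24 is just a placeholder network):
<Location "/server-info">
    SetHandler server-info
    Require ip 192.0.2.0/24
</Location>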

Nagios - NRPE: Command '...' not defined

In /usr/local/nagios/etc/nrpe.cfg I added a new command check_this_process to the already pre-defined ones:
command[check_users]=/usr/local/nagios/libexec/check_users -w 5 -c 10
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20
command[check_hda1]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /dev/$
command[check_zombie_procs]=/usr/local/nagios/libexec/check_procs -w 5 -c 10 -s$
command[check_total_procs]=/usr/local/nagios/libexec/check_procs -w 150 -c 200
command[check_this_process]=/usr/local/nagios/libexec/check_procs -w 15 -c 20 -C name
This works:
define service{
    use generic-service
    host_name my_host
    service_description CPU Load
    check_command check_nrpe!check_load
}
This doesn't:
define service{
    use local-service
    host_name my_host
    service_description cron
    check_command check_nrpe!check_this_process
}
and returns: NRPE: Command 'check_this_process' not defined
The terminology used in the supplied docs is a little confusing, but I'll put it like this:
As described on page 10 of https://assets.nagios.com/downloads/nagioscore/docs/nrpe/NRPE.pdf, you need to modify /usr/local/nagios/etc/commands.cfg on your Nagios server and add the following to define the check_nrpe command:
define command{
    command_name check_nrpe
    command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}
On your Nagios server, define your service definition as you've already done:
define service{
    use local-service
    host_name my_host
    service_description cron
    check_command check_nrpe!check_this_process
}
On the remote host to be monitored, the next steps differ depending on how you installed NRPE:
using the tarball and xinetd as in
https://assets.nagios.com/downloads/nagioscore/docs/nrpe/NRPE.pdf
or using a package manager like yum as in
http://sharadchhetri.com/2013/03/02/how-to-install-and-configure-nagios-nrpe-in-centos-and-red-hat/
If you used the tarball / xinetd method, your NRPE configuration file will likely be located at /usr/local/nagios/etc/nrpe.cfg on your remote-host-to-be-monitored. (To avoid typing that all the time, I'll just call it "my_host").
So, on my_host, modify /usr/local/nagios/etc/nrpe.cfg.
Add
command[check_this_process]=/usr/local/nagios/libexec/check_procs -w 15 -c 20 -C name
So that it looks like:
command[check_users]=/usr/local/nagios/libexec/check_users -w 5 -c 10
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20
command[check_hda1]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /dev/$
command[check_zombie_procs]=/usr/local/nagios/libexec/check_procs -w 5 -c 10 -s$
command[check_total_procs]=/usr/local/nagios/libexec/check_procs -w 150 -c 200
command[check_this_process]=/usr/local/nagios/libexec/check_procs -w 15 -c 20 -C name
(Note: the above assumes you have a process called name. If not, replace name with your real process name, e.g. crond.)
Restart xinetd:
service xinetd restart
(NOTE: restarting xinetd might not be necessary, but I don't use it so I'm a little fuzzy on this one.)
However, if you installed NRPE on my_host using a package manager like yum, your NRPE configuration file will likely be located at /etc/nagios/nrpe.cfg.
So, on my_host, modify /etc/nagios/nrpe.cfg.
Add
command[check_this_process]=/usr/local/nagios/libexec/check_procs -w 15 -c 20 -C name
So that it looks like:
command[check_users]=/usr/local/nagios/libexec/check_users -w 5 -c 10
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20
command[check_hda1]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /dev/$
command[check_zombie_procs]=/usr/local/nagios/libexec/check_procs -w 5 -c 10 -s$
command[check_total_procs]=/usr/local/nagios/libexec/check_procs -w 150 -c 200
command[check_this_process]=/usr/local/nagios/libexec/check_procs -w 15 -c 20 -C name
Restart the nrpe service:
service nrpe restart
Back on your Nagios server, run a verification of your Nagios configuration settings:
nagios -v /usr/local/nagios/etc/nagios.cfg
Check the output for errors.
If there are no errors, restart Nagios:
service nagios restart
On your Nagios server you should have a check_nrpe utility installed somewhere as a result of installing the "check_nrpe plugin" on your Nagios server.
See pages 9 and 10 of: https://assets.nagios.com/downloads/nagioscore/docs/nrpe/NRPE.pdf
This check_nrpe utility will most likely be located at: /usr/local/nagios/libexec/check_nrpe
Using the host information for my_host, manually test your NRPE connection from the Nagios server.
Execute the following:
/usr/local/nagios/libexec/check_nrpe -H <IP Address of my_host> -c check_this_process
If everything is set up correctly, you should get some output on the command line.
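For example, with the check_this_process definition above, the output would typically look something like this (the exact text depends on your check_procs version and on how many matching processes are running):
PROCS OK: 1 process with command name 'name'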
My troubleshooting guide for 'NRPE: Command ... not defined', ordered from most common to least common - in my environment:
Was the NRPE daemon restarted AFTER adding the new command? If it is a new command, then NRPE MUST be restarted.
Typos/spelling errors. Does the configured command name on the Nagios side, match that the one in the NRPE config?
Permissions issues. Does the USER that NRPE runs as have READABLE and EXECUTABLE access to the actual command being run? Did you test-run the command as the NRPE user, on that same system? (See the example after this list.) TIP: Use the dash (-) when changing to the NRPE user on Linux (su - ...) so you import that user's environment as well.
Path issues. Was the FULL PATH to the actual command put into the NRPE config file? Doing this will normally eliminate issues with PATHs, so don't do it any other way.
Bad commands. Does the actual command really execute? Or is it simply throwing an error and exiting? Do you have the correct version of (INSERT SOMETHING HERE) to run the command, installed on the remote system? You should be able to run any command defined in the nrpe.cfg from the command line, and all new commands should be checked BEFORE being added to the nrpe.cfg.
IF ALL THE ABOVE FAILS: Enable DEBUGGING in NRPE and check the log files (on the remote host). This is a bit of a drawn out process - described in the documentation - read it. It is important to disable DEBUGGING as soon as you get output that looks like it would be useful.
This checklist ASSUMES that you've already done the necessary work in the various Nagios and NRPE configs to get things working in the first place. Hopefully others will read this before posting yet another question as to why they are seeing this error.
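As a concrete illustration of the permissions and bad-command points above, a quick manual test on the remote host might look like this (assuming the NRPE daemon runs as the user nagios and using the check_procs command from this question, with crond as an example process name):
# Run the exact command from nrpe.cfg as the NRPE user
su - nagios -c "/usr/local/nagios/libexec/check_procs -w 15 -c 20 -C crond"
# Nagios plugin exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN
echo $?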

ssh client (dropbear on a router) produces no output when put in background

I'm trying to automate some things on remote Linux machines with bash scripting from a Linux machine, and I have a working command (the parentheses are a relic of earlier command concatenations):
(ssh -i /path/to/private_key user@remoteHost 'sh -c "echo 1; echo 2; echo 3; uname -a"')
But if an ampersand is appended to run it in the background, it seems to execute, but no output is printed, neither on stdout nor on stderr, and even a redirection to a file (inside the parentheses) does not work:
(ssh -i /path/to/private_key user@remoteHost 'sh -c "echo 1; echo 2; echo 3; uname -a"') &
By the way, I'm running the ssh client dropbear v0.52 in BusyBox v1.17.4 on Linux 2.4.37.10 (TomatoUSB build on a WRT54G).
Is there a way to get the output anyway? What's the reason for this behaviour?
EDIT:
For convenience, here's the plain ssh help output (on my TomatoUSB):
Dropbear client v0.52
Usage: ssh [options] [user@]host[/port][,[user@]host/port],...] [command]
Options are:
-p <remoteport>
-l <username>
-t Allocate a pty
-T Don't allocate a pty
-N Don't run a remote command
-f Run in background after auth
-y Always accept remote host key if unknown
-s Request a subsystem (use for sftp)
-i <identityfile> (multiple allowed)
-L <listenport:remotehost:remoteport> Local port forwarding
-g Allow remote hosts to connect to forwarded ports
-R <listenport:remotehost:remoteport> Remote port forwarding
-W <receive_window_buffer> (default 12288, larger may be faster, max 1MB)
-K <keepalive> (0 is never, default 0)
-I <idle_timeout> (0 is never, default 0)
-B <endhost:endport> Netcat-alike forwarding
-J <proxy_program> Use program pipe rather than TCP connection
Amendment after 1 day:
The parentheses do not hurt; with and without them it's the same result. I wanted to put the ssh authentication into the background as well, so the -f option is not a solution. Interesting side note: if an unexpected option is specified (like -v), the error message WARNING: Ignoring unknown argument '-v' is displayed - even when put in the background, so getting output from background processes generally works in my environment.
I tried the regular ssh client on x86 Ubuntu: it works. I also tried dbclient on x86 Ubuntu: it works, too. So this problem seems to be specific to the TomatoUSB build - or there was some unknown fix in "dropbear v0.52" between the TomatoUSB build and the one Ubuntu provides (the only difference in the help output is the doubled default receive window buffer on Ubuntu)... How can a process know whether it was put in the background? Is there a solution to the problem?
I had a similar problem on my OpenWRT router. The Dropbear SSH client does not write anything to its output if there is no stdin, e.g. when run by cron. I presume that & has the same effect on the process's stdin (no input).
I found a workaround on the author's bug tracker: try redirecting input from /dev/zero.
Like:
ssh -i yourkey user@remotehost "echo 123" </dev/zero &
It worked for me, as I tried to describe on my blog page.
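Applied to the original command from the question, the workaround would look roughly like this (untested sketch; the redirection goes inside the parentheses so it applies to the ssh invocation itself):
(ssh -i /path/to/private_key user@remoteHost 'sh -c "echo 1; echo 2; echo 3; uname -a"' </dev/zero) &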

relocate apache's configuration files

Is there a way to use a different location for Apache's config files (on Windows), other than having to compile it myself and set the proper #define HTTPD_ROOT value?
Thx rezna
This can be done by specifying the -f option when installing Apache as a service on Windows.
The -f option accepts the location of the configuration file. For example, if your command to install the service was:
httpd.exe -k install -n "MyServiceName"
Add -f "c:\files\my.conf", with your configuration file instead, like so:
httpd.exe -k install -n "MyServiceName" -f "c:\files\my.conf"
See the Apache manual for more information.