Generate HTML report with zap-cli - zap

I would like to get an HTML report from zap-cli. I am able to run these two commands, but is there a way to run both in a single command?
[sb@company.local@sb-test-vm ~]$ zap-cli quick-scan -s xss,sqli --spider -r -e "some_regex_pattern" http://demo.testfire.net/
[INFO] Running a quick scan for http://demo.testfire.net/
[INFO] Issues found: 6
+----------------------------------+--------+----------+------------------------------------------------------------------------------------------------------------------+
| Alert | Risk | CWE ID | URL |
+==================================+========+==========+==================================================================================================================+
| Cross Site Scripting (Reflected) | High | 79 | http://demo.testfire.net/bank/login.aspx |
+----------------------------------+--------+----------+------------------------------------------------------------------------------------------------------------------+
| Cross Site Scripting (Reflected) | High | 79 | http://demo.testfire.net/comment.aspx |
+----------------------------------+--------+----------+------------------------------------------------------------------------------------------------------------------+
| Cross Site Scripting (Reflected) | High | 79 | http://demo.testfire.net/notfound.aspx?aspxerrorpath=%3C%2Fb%3E%3Cscript%3Ealert%281%29%3B%3C%2Fscript%3E%3Cb%3E |
+----------------------------------+--------+----------+------------------------------------------------------------------------------------------------------------------+
| Cross Site Scripting (Reflected) | High | 79 | http://demo.testfire.net/search.aspx?txtSearch=%3C%2Fspan%3E%3Cscript%3Ealert%281%29%3B%3C%2Fscript%3E%3Cspan%3E |
+----------------------------------+--------+----------+------------------------------------------------------------------------------------------------------------------+
| SQL Injection | High | 89 | http://demo.testfire.net/bank/login.aspx |
+----------------------------------+--------+----------+------------------------------------------------------------------------------------------------------------------+
| SQL Injection | High | 89 | http://demo.testfire.net/bank/login.aspx |
+----------------------------------+--------+----------+------------------------------------------------------------------------------------------------------------------+
[sb@company.local@sb-test-vm ~]$ zap-cli report -o abc.html -f html
[INFO] Report saved to "abc.html"
[sb@company.local@sb-test-vm ~]$ ls -l abc.html
-rw-rw-r--. 1 sb@company.local sb@company.local 58659 Sep 25 16:39 abc.html
[sb@company.local@sb-test-vm ~]$ date
Tue Sep 25 16:39:16 EDT 2018
[sb@company.local@sb-test-vm ~]$
I tried the switches provided but was unable to execute the scan and get the report in a "single liner". I am willing to use zap.sh as well; however, I didn't see an option to generate the report in HTML, only XML. Any insight on this is appreciated.
zap-cli --help
Usage: zap-cli [OPTIONS] COMMAND [ARGS]...
ZAP CLI v0.9.0 - A simple commandline tool for OWASP ZAP.
Options:
--boring Remove color from console output.
-v, --verbose Add more verbose debugging output.
--zap-path TEXT      Path to the ZAP daemon. Defaults to /zap or the value of
                     the environment variable ZAP_PATH.
-p, --port INTEGER   Port of the ZAP proxy. Defaults to 8090 or the value of
                     the environment variable ZAP_PORT.
--zap-url TEXT The URL of the ZAP proxy. Defaults to http://127.0.0.1
or the value of the environment variable ZAP_URL.
--api-key TEXT The API key for using the ZAP API if required. Defaults
to the value of the environment variable ZAP_API_KEY.
--help Show this message and exit.
Commands:
active-scan Run an Active Scan.
ajax-spider Run the AJAX Spider against a URL.
alerts Show alerts at the given alert level.
context Manage contexts for the current session.
exclude Exclude a pattern from all scanners.
open-url Open a URL using the ZAP proxy.
policies Enable or list a set of policies.
quick-scan Run a quick scan.
report Generate XML, MD or HTML report.
scanners Enable, disable, or list a set of scanners.
scripts Manage scripts.
session Manage sessions.
shutdown Shutdown the ZAP daemon.
spider Run the spider against a URL.
start Start the ZAP daemon.
status Check if ZAP is running.
EDIT:
I tried this command, but the abc.html file wasn't in the current dir. I did a find but couldn't locate abc.html anywhere:
zap-cli quick-scan -s xss,sqli --spider -r -e "some_regex_pattern" http://demo.testfire.net/ && zap-cli report -o abc.html -f html
So, next I put these two commands in a zap-run.sh script, chmod +x'd the script and ran it, and that DID create the abc.html file. So, thank you.

If the big goal is a "single liner", why not just chain the commands?
zap-cli quick-scan -s xss,sqli --spider -r -e "some_regex_pattern" http://demo.testfire.net/ && zap-cli report -o abc.html -f html
If that doesn't suit you then put both commands in a batch file (or shell script) and call that instead.
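As a sketch of the script variant, the two commands can go in a small wrapper function so the report step only runs when the scan succeeds (run_zap_scan is just an illustrative name, not a zap-cli feature):

```shell
#!/bin/sh
# run_zap_scan <target-url> <report-file>
# Runs the quick scan and, only if it succeeds (&&), writes the HTML report.
run_zap_scan() {
    zap-cli quick-scan -s xss,sqli --spider -r -e "some_regex_pattern" "$1" &&
    zap-cli report -o "$2" -f html
}

# usage:
# run_zap_scan http://demo.testfire.net/ abc.html
```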

Related

Ignite cluster size using control script

I need to get the Ignite cluster size (number of server nodes running), preferably using the control script or the Ignite REST API. I am able to get baseline nodes using the command below, but I don't see any command or REST API that returns the topology snapshot. Is there a way to get this information to an Ignite client rather than digging through logs?
Workaround to get baseline nodes:
baselineNodes=$(kubectl --kubeconfig config.conf exec <ignite-node> -n {client_name} -- /opt/ignite/apache-ignite/bin/./control.sh --baseline | grep "Number of baseline nodes" | cut -d ':' -f2 | sed 's/^ *//g')
It seems the topology REST command could do the trick. Here's the documentation link.
http://host:port/ignite?cmd=top&attr=true&mtr=true&id=c981d2a1-878b-4c67-96f6-70f93a4cd241
Got help from the Ignite community, and the command below worked for me. The basic idea is to use a metric to extract the number of server nodes.
kubectl --kubeconfig config.conf exec <ignite-node> -n {client_name} -- /opt/ignite/apache-ignite/bin/./control.sh --metric cluster.TotalServerNodes | grep -v "metric" | grep cluster.TotalServerNodes | cut -d " " -f5 | sed 's/^ *//g'
Quoting the reply received:
"You can query any metric value or system view content via control script [1], [2]
control.sh --system-view nodes
or [3]
control.sh --metric cluster.TotalBaselineNodes
control.sh --metric cluster.TotalServerNodes
control.sh --metric cluster.TotalClientNodes
[1] https://ignite.apache.org/docs/latest/tools/control-script#metric-command
[2] https://ignite.apache.org/docs/latest/tools/control-script#system-view-command
[3] https://ignite.apache.org/docs/2.11.1/monitoring-metrics/new-metrics#cluster"
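The grep/cut chain above is tied to control.sh's exact column widths. A sketch against hand-made sample output (the two-column layout is an assumption, not real Ignite output) shows that awk's whitespace splitting is a bit more robust than counting spaces:

```shell
# Hypothetical sample of `control.sh --metric cluster.TotalServerNodes` output;
# the two-column layout is assumed, not copied from a real cluster.
sample='metric                      value
cluster.TotalServerNodes    3'

# awk splits on runs of whitespace, so the count is field 2 regardless of padding.
server_nodes=$(printf '%s\n' "$sample" \
  | grep 'cluster.TotalServerNodes' \
  | awk '{print $2}')
echo "$server_nodes"    # prints 3
```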

How to get rabbitmq cluster status in json format

How do I get status and cluster_status from RabbitMQ in JSON format?
sudo rabbitmqctl status
sudo rabbitmqctl cluster_status
help is your friend:
# rabbitmqctl help cluster_status
Error:
Usage
rabbitmqctl [--node <node>] [--longnames] [--quiet] cluster_status
Displays all the nodes in the cluster grouped by node type, together with the currently running nodes.
Relevant Doc Guides
* https://rabbitmq.com/clustering.html
* https://rabbitmq.com/cluster-formation.html
* https://rabbitmq.com/monitoring.html
General Options
The following options are accepted by most or all commands.
short | long | description
-----------------|---------------|--------------------------------
-? | --help | displays command help
-n <node> | --node <node> | connect to node <node>
-l | --longnames | use long host names
-t | --timeout <n> | for commands that support it, operation timeout in seconds
-q | --quiet | suppress informational messages
-s | --silent | suppress informational messages
| and table header row
-p | --vhost | for commands that are scoped to a virtual host,
| | virtual host to use
| --formatter | alternative result formatter to use
| if supported: json, pretty_table, table, csv.
not all commands support all (or any) alternative formatters.
Figured it out myself: you can use the option --formatter json.
I could not find it in the RabbitMQ documentation!
sudo rabbitmqctl cluster_status --formatter json
sudo rabbitmqctl cluster_status --formatter json | jq .running_nodes
To parse this and use it in bash script:
running_nodes=($(egrep -o '[a-z0-9@-]+' <<< $(sudo rabbitmqctl cluster_status --formatter json | jq .running_nodes)))
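If jq is not available, the node names can also be pulled straight out of the JSON with grep. A minimal sketch against a hand-made sample (the running_nodes structure is assumed, based on RabbitMQ's default rabbit@host node naming):

```shell
# Hand-made sample of the cluster_status JSON; real output has many more keys.
status_json='{"running_nodes":["rabbit@node-1","rabbit@node-2"]}'

# Pull each quoted rabbit@... name, then strip the quotes (one name per line).
running_nodes=$(printf '%s\n' "$status_json" \
  | grep -oE '"rabbit@[^"]+"' \
  | tr -d '"')
echo "$running_nodes"
```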

AWS Elastic Beanstalk: Reload configs files without redeploying the whole app?

Is there a way to load what is in .ebextensions without redeploying the whole application with eb deploy? It seems restarting Apache is not enough.
Example of a config file:
container_commands:
  01_remove_old_cron_jobs:
    command: "crontab -r || exit 0"
  02_cronjobs:
    command: "cat .ebextensions/cron_jobs.txt > /etc/cron.d/cron_job && chmod 644 /etc/cron.d/cron_job"
    leader_only: true
  03_setup_apache:
    command: "cp .ebextensions/enable_mod_deflate.conf /etc/httpd/conf.d/enable_mod_deflate.conf"
The plain answer is no. The config files are only executed upon deployment as part of the EB scripting pipeline. If it's only a one-time operation you'd like to perform, simply eb ssh into the instance and run the commands manually. Upon the next deploy they'll be done automatically via your config files.
You could add another container_commands entry, e.g.:
  04_reload_files:
    command: "sudo service httpd reload"
Note this will work on Amazon Linux but not Amazon Linux 2, where you would need:
  04_reload_files:
    command: "sudo systemctl reload httpd.service"
AWS has a newer preferred way of interacting with webserver config during deployment, using the following structure (for Apache) rather than .ebextensions:
~/workspace/my-app/
|-- .ebextensions
|   `-- httpd-proxy.config
|-- .platform
|   `-- httpd
|       `-- conf.d
|           |-- port5000.conf
|           `-- ssl.conf
`-- index.jsp
Some more info available here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/platforms-linux-extend.html

how do I write SSH command output with only values I need

I am creating a VPS with the command-line API that is provided. The output of the command contains a lot of text that I don't need. This is my command.
The variables are predefined and work fine.
echo y | /usr/local/bin/CLICMD vm create --hostname=$VMNAME --domain=$srvdomain --cpu 1 --memory 1024 --image $image --datacenter=$dc --billing=hourly -n 100 > /dev/null 1>> /home/logs/createvps.log
When I run it, it gives me the following output in the createvps.log file:
This action will incur charges on your account. Continue? [y/N]: id 11232312
created 2015-06-13T14:43:27-05:00
guid xxxxxx-r345-4323-8e3f-c8c04e18fad7
From the above output, I just need the id value (11232312) stored in a MySQL table. I know how to grab the value from the log file and save it in MySQL.
My question is: how do I save just that id in the log file instead of all the other values/strings?
Thank you in advance.
I'm not sure exactly what your question is, but I guess this should help you:
echo y | /usr/local/bin/CLICMD vm create --hostname=$VMNAME \
--domain=$srvdomain --cpu 1 --memory 1024 --image $image \
--datacenter=$dc --billing=hourly -n 100 | \
grep -oE "id [0-9]+$" | grep -Eo "[0-9]+" >> /home/logs/createvps.log
A few notes on the differences between your code and mine:
You redirect stdout twice, once to /dev/null and once to your log; the second redirection (1>>) wins, so this is equivalent to a single redirection (writing to /dev/null is practically a no-op).
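To see what the grep chain extracts, it can be dry-run against the exact output shown in the question:

```shell
# The sample reproduces the three output lines from the question verbatim.
vps_output='This action will incur charges on your account. Continue? [y/N]: id 11232312
created 2015-06-13T14:43:27-05:00
guid xxxxxx-r345-4323-8e3f-c8c04e18fad7'

# First grep keeps only the trailing "id <digits>", second keeps just the digits.
vps_id=$(printf '%s\n' "$vps_output" \
  | grep -oE 'id [0-9]+$' \
  | grep -Eo '[0-9]+')
echo "$vps_id"    # prints 11232312
```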

kill all processes spawned by parent with `ssh -x -n` on other hosts

Hi
A software named G09 works in parallel using Linda. It spawns its parallel child processes on other nodes (hosts) as
/usr/bin/ssh -x compute-0-127.local -n /usr/local/g09l/g09/linda-exe/l1002.exel ...other_opts...
However, when the master node kills this process, the corresponding child process on the other node, namely compute-0-127, does not die but keeps running in the background. Right now, I manually go to each node that has these orphaned Linda processes and kill them with kill. Is there any way to kill such child processes automatically?
Look at pastebin 1 for the pstree output before killing the process and pastebin 2 for the pstree output after the parent is killed:
pastebin 1 - http://pastebin.com/yNXFR28V
pastebin 2 - http://pastebin.com/ApwXrueh
(not enough reputation points for hyperlinking the second pastebin, sorry!)
Update to Answer 1
Thanks, Martin, for explaining. I tried the following:
killme() { kill 0 ; } ; #Make calls to prepare for running G09 ;
g09 < "$g09inp" > "$g09out" &
trap killme 'TERM'
wait
but when Torque/Maui (which handles job execution) kills the job (this script) with qdel $jobid, the processes started by G09 as ssh -x $host -n still run in the background. What am I doing wrong here? (Normal termination is not a problem, as G09 itself stops those processes.) The following is the pstree before qdel:
bash
|-461.norma.iitb. /opt/torque/mom_priv/jobs/461.norma.iitb.ac.in.SC
| `-g09
| `-l1002.exe 1048576000Pd-C-C-addn-H-MO6-fwd-opt.chk
| `-cLindaLauncher/tmp/viaExecDataN6
| |-l1002.exel 1048576000Pd-C-C-addn-H-MO6-fwd-opt.ch
| | |-{l1002.exel}
| | |-{l1002.exel}
| | |-{l1002.exel}
| | |-{l1002.exel}
| | |-{l1002.exel}
| | |-{l1002.exel}
| | |-{l1002.exel}
| | `-{l1002.exel}
| |-ssh -x compute-0-149.local -n ...
| |-ssh -x compute-0-147.local -n ...
| |-ssh -x compute-0-146.local -n ...
| |-{cLindaLauncher}
| `-{cLindaLauncher}
`-pbs_demux
and after qdel it still shows
461.norma.iitb. /opt/torque/mom_priv/jobs/461.norma.iitb.ac.in.SC
`-ssh -x -n compute-0-149 rm\040-rf\040/state/partition1/trirag09/461
l1002.exel 1048576000Pd-C-C-addn-H-MO6-fwd-opt.ch
|-{l1002.exel}
|-{l1002.exel}
|-{l1002.exel}
|-{l1002.exel}
|-{l1002.exel}
|-{l1002.exel}
|-{l1002.exel}
`-{l1002.exel}
ssh -x compute-0-149.local -n /usr/local/g09l/g09/linda-exe/l1002.exel
ssh -x compute-0-147.local -n /usr/local/g09l/g09/linda-exe/l1002.exel
ssh -x compute-0-146.local -n /usr/local/g09l/g09/linda-exe/l1002.exel
What am I doing wrong here? Is the trap killme 'TERM' wrong?
I would try the following approach:
create a script/application that wraps this g09 binary that you are starting, and start that wrapper instead
in the script, wait for the HUP signal to arrive (which should be received when the ssh connection is closed)
in processing the HUP signal, send a signal to your own process group (kill with PID 0 signals every process in the current process group).
Sending a KILL signal to the process group is really easy: kill -9 0. Try this:
#!/bin/sh
./b.sh 1 &
./b.sh 2 &
sleep 10
kill -9 0
where b.sh is
#!/bin/sh
while /bin/true
do
echo $1
sleep 1
done
You can have as many child processes as you want (directly or indirectly); they will all get the signal - as long as they don't detach themselves from the process group.
I had a similar problem using ssh -N (similar to ssh -n), and kill -9 0 did not work for me when run inside the script that initiates the ssh call. I found that kill $(jobs -p) does terminate the ssh process, which is not very elegant, but I am using that currently.
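The kill $(jobs -p) pattern from the comment above can be sketched with plain sleep standing in for the ssh children (an assumption purely for illustration):

```shell
#!/bin/sh
# Two stand-in background children (sleep instead of real ssh sessions).
sleep 60 &
sleep 60 &

pids=$(jobs -p)        # one PID per line
kill $pids             # word splitting is intentional: one argument per PID
wait 2>/dev/null       # reap the killed children so none are left behind
echo "children terminated"
```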