Export/Import OWASP ZAP Passive Scan Rules - zap

Is there any way to create a scan policy for passive scans? I know you can create and modify scan policies for active/attack scanning, but I'm wondering if you can do the same for the passive scan rules, or if you have to modify them individually on every machine.

There's an existing ticket open to unify active/passive scan handling in a single policy-type interface: https://github.com/zaproxy/zaproxy/issues/3870.
If you're really interested in that, you could support it on BountySource (https://www.bountysource.com/issues/49047644-improved-active-passive-rules-management) and see if that draws some attention/action.
Another option is to write a quick script that uses ZAP's web API to apply a passive scan rule "policy". Relevant endpoints include pscan/view/scanners/, pscan/action/disableAllScanners/, and pscan/action/enableScanners/. Here's a Python example:
from zapv2 import ZAPv2 as zap
import time
apikey = "apikey12345" #Your apikey
z = zap(apikey=apikey, proxies={"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"})
time.sleep(2) #Might need to be longer depending on your machine and if ZAP is already running or not
print "Disabling all passive scan rules.."
z.pscan.disable_all_scanners()
scanners = z.pscan.scanners
for scanner in scanners:
print scanner.get("id") + " : " + scanner.get("enabled") + " : " + scanner.get("name")
to_enable = "10020,10021,10062" #Customize as you see fit
print "\nEnabling specific passive scan rules..[" + to_enable +"]"
z.pscan.enable_scanners(to_enable)
print "\nListing enabled passive scan rules.."
scanners2 = z.pscan.scanners
for scanner in scanners2:
if (scanner.get("enabled") == "true"):
print scanner.get("id") + " : " + scanner.get("enabled") + " : " + scanner.get("name")
Finally, you could configure ZAP on one system and then copy that config.xml to other systems as needed.
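For example, a minimal sketch of that copy step, assuming ZAP's default home directory (~/.ZAP on Linux), passwordless SSH, and placeholder hostnames:
import os
import subprocess

# Minimal sketch: push a tuned config.xml to other machines with scp.
# Assumes ZAP's default home (~/.ZAP) and passwordless SSH; hosts are placeholders.
src = os.path.expanduser("~/.ZAP/config.xml")
for host in ["scanner1.example.com", "scanner2.example.com"]:
    subprocess.run(["scp", src, host + ":~/.ZAP/config.xml"], check=True)
    print("Copied config to " + host)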

Related

Azure Monitor Alert based on Custom Metric

I've created a custom metric to monitor free disk C: space on my Azure VM.
But when I try to create an alert rule (not classic), I can't find my custom metric in the options list. I think this is because I'm using the new alert rules instead of the classic rules.
Has anyone succeeded in creating a new alert rule based on a custom metric?
Using a query gives me the output, but I don't know where this info comes from (VM extension? Diagnostic log?):
Perf
| where TimeGenerated >ago(1d)
| where CounterName == "% Free Space" and ObjectName == "LogicalDisk" and InstanceName == "C:" and CounterValue > 90
| sort by TimeGenerated desc

Paraview looping with SaveScreenshot in a server is very slow

I mean to get a series of snapshots, at a sequence of time steps, of a layout with two views (one RenderView + one LineChartView).
For this I put together a script, see below.
I do
ssh -X myserver
and there I run
~/ParaView-5.4.1-Qt5-OpenGL2-MPI-Linux-64bit/bin/pvbatch myscript.py
The script is extremely slow to run. I can think of the following possible bottlenecks:
Communication of the graphic part (ssh -X) from the remote server to my computer.
Display of graphics in my computer.
Processing in the server.
Is there a way to assess which is the bottleneck with my current resources?
(For instance, I know a faster connection would let me test item 1, but I cannot do that now.)
Is there a way to accelerate pvbatch?
The answer likely depends on my system, but perhaps there are generic actions I can take.
Creation of the layout with two views
...
ans = GetAnimationScene()
time_steps = ans.TimeKeeper.TimestepValues
for istep in range(len(time_steps)):
    tstep = time_steps[istep]
    ans.AnimationTime = tstep
    fname = "combo" + "-" + "{:08d}".format(istep) + ".png"
    print("Exporting image " + fname + " for time step " + str(tstep))
    SaveScreenshot(fname, viewLayout1, quality=100)
Why do you need the -X?
Just set DISPLAY to :0 and do not forward graphics.
The bottleneck is most likely the rendering on your local display. If your server has an X server, you can perform the rendering on the server by setting the DISPLAY environment variable accordingly, as Mathieu explained.
If your server does not have an X server running, then the best option is to build ParaView on your server using either the OSMesa backend or the EGL backend (if the server has a compatible graphics card).
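As a rough illustration of the DISPLAY suggestion (the pvbatch path is the one from the question; the display number and the assumption that an X server runs on :0 are mine):
import os
import subprocess

# Rough sketch: run pvbatch with rendering on the server's own X display,
# instead of forwarding graphics over ssh -X. Assumes an X server on :0.
env = dict(os.environ, DISPLAY=":0")
pvbatch = os.path.expanduser("~/ParaView-5.4.1-Qt5-OpenGL2-MPI-Linux-64bit/bin/pvbatch")
subprocess.run([pvbatch, "myscript.py"], env=env, check=True)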

HBase-indexer & Solr : NOT found data

I am currently using hbase-indexer to index HBase into Solr.
When I execute the following command to check the indexer,
hbase-indexer$ bin/hbase-indexer list-indexers --zookeeper 127.0.0.1:2181
the result is:
myindexer
+ Lifecycle state: ACTIVE
+ Incremental indexing state: SUBSCRIBE_AND_CONSUME
+ Batch indexing state: INACTIVE
+ SEP subscription ID: Indexer_myindexer
+ SEP subscription timestamp: 2017-01-24T13:15:48.614+09:00
+ Connection type: solr
+ Connection params:
+ solr.zk = localhost:2181/solr
+ solr.collection = tagcollect
+ Indexer config:
222 bytes, use -dump to see content
+ Indexer component factory:
com.ngdata.hbaseindexer.conf.DefaultIndexerComponentFactory
+ Additional batch index CLI arguments:
(none)
+ Default additional batch index CLI arguments:
(none)
+ Processes
+ 1 running processes
+ 0 failed processes
I think hbase-indexer is working, because the output above shows "+ 1 running processes". (Prior to this, I had already started the hbase-indexer daemon with the command: ~$ bin/hbase-indexer server)
For a test, I inserted data into HBase through the put command and verified that the data was there.
But the Solr query returns no records, as shown below.
Any knowledge or experience with this issue would be appreciated. Thank you.
{
  "responseHeader":{
    "zkConnected":true,
    "status":0,
    "QTime":7,
    "params":{
      "q":"*:*",
      "indent":"on",
      "wt":"json",
      "_":"1485246329559"}},
  "response":{"numFound":0,"start":0,"maxScore":0.0,"docs":[]
  }}
We encountered the same issue.
Since your server instance is healthy, here are the likely reasons it still won't work.
First, if the write-ahead log (WAL) is disabled (perhaps for write-performance reasons), your puts won't create Solr documents.
The HBase NRT indexer works off the WAL; if it is disabled, no Solr documents are created.
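For instance, here is a minimal sketch of a put with the WAL explicitly enabled, assuming the happybase Python client (the table, row, and column names are placeholders):
import happybase

# Minimal sketch: write to HBase with the WAL enabled so the NRT indexer
# can pick up the mutation. Names below are placeholders.
connection = happybase.Connection("localhost")
table = connection.table("mytable")
# wal=True (the default) makes the put go through the write-ahead log
table.put(b"row-key-1", {b"cf:tag": b"some-value"}, wal=True)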
A second possible reason is the morphline configuration; if it is not correct, no Solr documents are created either.
However, I'd suggest writing a custom MapReduce program (or Spark job) that indexes Solr documents by reading the HBase data directly. Note this is not real time: data put into HBase won't be reflected immediately, only after the batch indexer runs. A rough sketch of that approach follows.
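In this sketch a plain scan loop stands in for a real MapReduce/Spark job; it assumes the happybase and pysolr clients, and every name except the tagcollect collection from the question is a placeholder:
import happybase
import pysolr

# Rough sketch: batch-index HBase rows into Solr by scanning the table directly.
# Stands in for a MapReduce/Spark job; clients and names are assumptions.
solr = pysolr.Solr("http://localhost:8983/solr/tagcollect")
table = happybase.Connection("localhost").table("mytable")

docs = []
for row_key, data in table.scan():
    docs.append({
        "id": row_key.decode(),
        "tag": data.get(b"cf:tag", b"").decode(),
    })
solr.add(docs)  # push the documents to Solr
solr.commit()   # make them searchable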

Weblogic Exception after deploy: java.rmi.UnexpectedException

I just encountered a similar issue to the one described in the article below:
Question: Article with similar error description
java.rmi.UnmarshalException: cannot unmarshaling return; nested exception is:
java.rmi.UnexpectedException: Failed to parse descriptor file; nested exception is:
java.rmi.server.ExportException: Failed to export class
I found that the issue described is completely unrelated to any Java update; it is rather an issue with the WebLogic bean cache, which seems to use old compiled versions of classes when a deployment is updated. I was hunting a similar issue in a related question (Question: Interface-Implementation-mismatch).
How can I fix this properly to allow automatic deployment (with WLST)?
After some feedback from the Oracle community, it now works like this:
1) Shut down the remote Managed Server
2) Delete the directory "domains/#MyDomain#/servers/#MyManagedServer#/cache/EJBCompilerCache"
3) Redeploy the EAR/application
In WLST (which one would need in order to automate this), this is quite tricky:
import os
import shutil

servers = cmo.getServers()
domainPath = get('RootDirectory')
for thisServer in servers:
    pathToManagedServer = domainPath + "\\servers\\" + thisServer.getName()
    print ">Found managed server: " + pathToManagedServer
    pathToCacheDir = pathToManagedServer + "\\cache\\EJBCompilerCache"
    if os.path.exists(pathToCacheDir) and os.path.isdir(pathToCacheDir):
        print ">Found a cache directory that will be deleted: " + pathToCacheDir
        # shutil.rmtree(pathToCacheDir)
Note: Be careful when testing this; the path returned in "pathToCacheDir" depends on the MBean context that is currently set (see the samples for the WLST command "cd()"). You should first test the path output with "print domainPath" and only later add the "rmtree" Python command. (I commented out the delete command in my sample so that nobody accidentally deletes an entire domain!)
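To round out the automation, steps 1) and 3) could look roughly like this in online WLST; this is only a sketch, and the credentials, URL, server, and application names are placeholders:
# Rough sketch: automate steps 1) and 3) around the cache cleanup above.
# Online WLST; all names, credentials, and URLs are placeholders.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
shutdown('MyManagedServer', 'Server', force='true')  # step 1: stop the managed server

# ... delete the EJBCompilerCache directory as shown above ...

start('MyManagedServer', 'Server')  # bring the server back up
redeploy('MyApplication')           # step 3: redeploy the EAR
disconnect()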

qt mysql query giving different result on different machine

The following code works on my PC but gives an error on other PCs. How can I run this successfully on all machines?
QSqlQuery query;
QString queryString = "SELECT * FROM " + parameter3->toAscii() + " WHERE " + parameter1->toAscii() + " = \"" + parameter2->toAscii() + "\"";
bool retX = query.exec(queryString);
What prerequisites must be fulfilled for this to run on any PC?
In troubleshooting, if you isolate your query and it returns the result you anticipated (as you have done, using Qt Creator to verify the query returns true), the next step is to look closely at your code and verify that you are passing the proper parameters into the query for execution.
I keep a virgin machine for this purpose. I am a software engineer by trade and fully aware that I have a ton of software installed on my PC that the common user may not have; the virgin machine lets me test the code in stand-alone form.
I suggest showing a message box with the query just before executing it. This will verify that the query is correct on the "other machines".
Certain DLLs were needed, in my case qtguid4.dll, qtcored4.dll and qtsqld4.dll. There was a size difference; once matched, it worked on one PC. However, on other PCs I still get the error "The application failed to initialize 0xc000007b ....."
How is it possible to make the application run?
Brgds,
kNish