We have a suite of automated regression tests driven by Selenium for an Angular app with a .NET Core Web API backend.
The intention is to include some automated security testing as part of our overnight build/test run.
From what I've read so far, running ZAP as an intercepting proxy between Selenium and our web application looks like the way to go (see 'Proxy Regression/Unit Tests' in https://www.zaproxy.org/docs/api/#exploring-the-app), but I'm struggling to find clear documentation/examples.
What is the simplest way to achieve this using OWASP ZAP, and are there any definitive articles/examples available?
Start with the packaged full scan: https://www.zaproxy.org/docs/docker/full-scan/
Set the port and then proxy your Selenium tests through ZAP. Use the -D parameter to pause ZAP until your tests have finished. For more ZAP automation options see https://www.zaproxy.org/docs/automate/
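A minimal sketch of what "proxy your Selenium tests through ZAP" can look like in Python, assuming ZAP is already listening on its default address of localhost:8080 and the tests run in Chrome; the target URL is a placeholder:

# Route a Selenium (Python) test through a locally running ZAP instance.
# Assumes ZAP is listening on localhost:8080 and chromedriver is available.
from selenium import webdriver

ZAP_PROXY = "localhost:8080"  # host:port where ZAP is listening

options = webdriver.ChromeOptions()
options.add_argument(f"--proxy-server=http://{ZAP_PROXY}")
# ZAP re-signs TLS traffic with its own CA, so either import the ZAP root CA
# into the browser profile or ignore certificate errors for test runs only.
options.add_argument("--ignore-certificate-errors")

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://your-angular-app.example.com/")  # placeholder URL
    # ... existing regression steps run here; ZAP records and scans the traffic
finally:
    driver.quit()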
Good morning everyone,
I would like to understand whether it is possible, and how, to import macros with a .jar extension produced by Selenium into WebInspect (version 21.2) and then use them to run a scan. Let me try to explain: on our machines we only have WebInspect; the Selenium tests are run by other people on other systems. We wanted to understand whether, if they simply passed us these files, WebInspect would be able to read and execute them, or whether it is necessary to put the WebInspect proxy into the Selenium scripts while this other team records the macros. Can anyone help me? Thank you.
I need to speed up the process by reusing macros that were already recorded for other tests prior to mine, so as to avoid recording them again.
PS: I have already read the documentation, but it does not explain whether what I am asking is actually possible; it covers other procedures.
Unfortunately, WebInspect's integration with Selenium is essentially a real-time replay of the Selenium scripts, used as the Crawl phase of the scan. WebInspect cannot simply consume your JAR file. It requires you to set up a listener/proxy of some sort so that, when the script replays, WebInspect can capture the traffic; it then performs an Audit-Only of what it saw. There are two methods to insert this proxy technology into the process, as detailed in the WebInspect Help. The user must configure some features so that when WebInspect replays the Selenium script, everything connects automatically.
e.g. from WI 22.10:
file:///C:/ProgramData/HP/HP%20WebInspect/Help/WebInspect/index.htm#Selenium_WD_1.htm?TocPath=Using%2520WebInspect%2520Features%257CIntegrating%2520with%2520Selenium%2520WebDriver%257C_____0
Besides Selenium, there are several other alternatives when it comes to Functional Testing driven WebInspect scans. You had asked about requiring the dev staff to record something to provide to you for WebInspect.
Have the QA team capture their Selenium test runs using Burp Proxy. Have them save that captured proxy traffic as an artifact for the security team, e.g. "macro1.burpcap". Use the Workflow-driven Scan wizard options in WebInspect and simply import that Burp capture as a native Workflow Macro. I like this option since Burp is easy to acquire and run, and as a Java app it supports multiple operating systems.
WebInspect's Web Proxy could also be used, just as Burp was used above. However, this complicates things for your dev team, as they do not have access to WebInspect. There are other free options for WebInspect customers which your dev team could install, including the Standalone WebInspect Toolkit, the Web Proxy standalone tool, and the Web Proxy API tool (REST service). One annoyance with all of these today is that they (currently) require Windows, and an authorized WebInspect user (you) has to download and deploy the installers inside your network for the dev staff to get them.
The WebInspect REST API offers several endpoints for Proxy listeners. This means that remote users (i.e. your Dev) could spawn a proxy listener, run their functional test script through that proxy, then have the captured data saved as a Workflow Macro, and kill the listener. By itself, this combination could produce the artifacts your appsec team will want to use later in their Workflow-driven Scan.
To support this with further automation ("developer-driven DAST"), you could have those same proxy API calls add a New Scan endpoint call at the end, to go ahead and trigger a Workflow-driven Scan using the Macro that was just recorded in the prior calls. This is good for putting in a CI/CD pipeline, provided you have a dedicated WebInspect machine sitting on the network with its API available.
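A very rough orchestration sketch, in Python, of the proxy-then-scan flow described above. The endpoint paths, port, and payload fields below are placeholders rather than the actual WebInspect API routes; map them to the endpoints published on your instance's Swagger page, and add whatever authentication your deployment requires:

# Placeholder sketch only: endpoint paths and fields are NOT the real
# WebInspect REST API routes; substitute the ones from your Swagger page.
import requests

API = "http://webinspect-host:8083/webinspect"  # assumed API base URL

# 1. Ask WebInspect to spawn a proxy listener (placeholder endpoint/payload).
proxy = requests.post(f"{API}/proxy", json={"port": 9999}).json()

# 2. Point the functional tests (Selenium, etc.) at webinspect-host:9999
#    and let them run; WebInspect records the traffic in the meantime.

# 3. Save the capture as a workflow macro, stop the listener, and trigger a
#    Workflow-driven Scan with that macro (all placeholder endpoints).
requests.get(f"{API}/proxy/{proxy['instanceId']}/macro")
requests.delete(f"{API}/proxy/{proxy['instanceId']}")
requests.post(f"{API}/scanner/scans", json={"settingsName": "workflow-macro-scan"})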
The challenge with using the WebInspect API as a "poor man's pipeline scanning tool" is that WebInspect is simplistic and has no resource management features in and of itself. This means that your pipelines could trigger lots of scans quickly, and the WebInspect machine would fall over once 4+ scans got started. What a mess! You would have to design API checks into your pipelines to monitor the number of running scans on the WebInspect machine, and then pause/poll the pipeline until the WebInspect machine was free and the pipeline could submit its new scan order.
Our solution for this sort of enterprise automation would be to use Fortify ScanCentral DAST instead of just WebInspect standalone. SCDAST affords a central web GUI for your appsec staff to configure/operate/review scans, with multiple headless "WebInspect" scan machines managed in resource pools. Incoming scan orders (REST API calls) are queued and prioritized automatically, and the remote scan machines are brought online/shut down as needed (think of it as a headless WebInspect API on Docker). So now your CI/CD pipelines can simply trigger the DAST scan and not worry about the scanner machine resources.
This brings me around to another great option for your Selenium needs, the Fortify FAST Proxy. This solution only operates with ScanCentral DAST, which is why I had to go on that side tangent above. With FAST Proxy, your devs only start up the FAST Proxy (which includes authentication details for the ScanCentral API), run their Selenium scripts through that proxy, and then kill the FAST Proxy when done. That completes their Functional Testing with Selenium. Meanwhile, on shutdown, the FAST Proxy automatically delivers the captured traffic to ScanCentral as a new Workflow-driven Scan order. A little while later, the DAST scan of their Selenium traffic is complete. If you configured Notifications, the devs then receive a link to their appsec results.
When we run @QuarkusTest-annotated tests, the first of these tests runs the Quarkus test extension and starts Quarkus in dev mode. Quarkus then remains running for the duration of the test run. This is how we achieve fast debugging.
But my use case is a bit different. I need to verify an application that is running remotely. Is there a way I can tell Quarkus not to start the application while I'm using the @QuarkusTest annotation?
One possible way to achieve this is to simply write JUnit tests and boilerplate code to connect to the API and verify it. However, I want to use the Quarkus framework while stopping Quarkus from running the application.
My goal is to set up an environment where CircleCI would run my e2e tests on BrowserStack in different browsers.
My tests are assuming that there is a mock server running. (E.g. tests are checking whether a certain call to the mock server has been made or not.)
I learned that there is such a thing as Local Testing for BrowserStack, but whenever I try to start the mock server on port 65432 it says the port is already in use: Error: listen EADDRINUSE :::65432
I have an Express mock server running (on port 65432), and the tests are run by Nightwatch against a Selenium server.
So far I have only seen examples that run tests against pages living on the public internet (like google.com), but I would like to run my own mock server locally and run my tests against it.
Is there a way where I could run a mock server and run my tests with Nightwatch and Selenium against that mock server and all done by a CI tool running the tests on BrowserStack?
If you have an internal website (not accessible to the public) hosted on your machine (using a mock server such as Tomcat, Nginx, or an Express mock server) and wish to run Selenium-based scripts to test that application on BrowserStack, then you can use the Local Testing feature.
You simply need to run the binary file that they provide on your local machine (where the internal website is accessible) and set the capability 'browserstack.local' to 'true'. The tests running on BrowserStack will then be able to access your internal website. I would recommend reviewing the documentation here. You can also check out the documentation on NightwatchJS-BrowserStack here.
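For illustration, a minimal Python (Selenium 3-style) sketch of the same idea; the credentials are placeholders, the BrowserStackLocal binary is assumed to already be running on the machine that hosts the mock server, and Selenium 4 code would pass the capability through an options object instead of desired_capabilities:

# Run a browser on BrowserStack that can reach a locally hosted mock server.
from selenium import webdriver

caps = {
    "browserName": "chrome",
    "browserstack.local": "true",  # route traffic back through the local binary
}

driver = webdriver.Remote(
    command_executor="https://YOUR_USER:YOUR_KEY@hub-cloud.browserstack.com/wd/hub",
    desired_capabilities=caps,
)
try:
    driver.get("http://localhost:65432/")  # the Express mock server from the question
    print(driver.title)
finally:
    driver.quit()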
If you wish to trigger the tests using CircleCI, they provide a plug-in for CircleCI as well; read more on that here. In that case the plug-in itself will handle Local Testing for you.
For future readers: my problem was parallelism - I set 2 workers (child processes basically) with the following object:
"test_workers": {
'enabled': true,
'workers': 2
}
I found this setup in one of the examples, which I can't find anymore, but if you are running your Nightwatch tests with your own mock server this can mess up the test suite, since every worker will try to spin up a mock server for its own tests, which will obviously fail.
I couldn't find any question/answer about this (probably I don't know how to search for it...).
Could somebody give me a global idea of how to run 200+ Selenium WebDriver tests (Python) on cloud servers/tools?
Thanks!!
rgzl
Another way is Sauce Labs: using this service you'll be able to just send your Selenium Java/Python tests to their cloud infrastructure for execution. The benefits of such testing are obvious: no need to waste time and resources setting up and maintaining your own VM farm, and additionally you can run your test suite in various browsers in parallel. There is also no need to share any sensitive data, source code, or databases.
As said in this article:
Of course inserting this roundtrip across the Internet is not without cost. The penalty of running Selenium tests this way is that they run quite slowly, typically about 3 times slower in my experience. This means that this is not something that individual developers are going to do from their workstations.
To ease the integration of this service into your projects, you may have to write some kind of Sauce Labs adapter that does the necessary SSH tunnel setup/teardown and Selenium configuration automatically as part of a test.
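As a rough illustration (not a full adapter), pointing an existing Python test at Sauce Labs mostly means swapping the local driver for a remote one. The credentials, browser, and platform below are placeholders, the endpoint shown is the classic one (newer accounts use a region-specific host), and Selenium 4 would use an options object instead of desired_capabilities:

# Send an existing Selenium test to Sauce Labs instead of a local browser.
from selenium import webdriver

capabilities = {
    "browserName": "firefox",
    "platform": "Windows 10",
}

driver = webdriver.Remote(
    command_executor="https://USERNAME:ACCESS_KEY@ondemand.saucelabs.com/wd/hub",
    desired_capabilities=capabilities,
)
try:
    driver.get("https://example.com/")  # placeholder; the test body stays unchanged
finally:
    driver.quit()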
Here's a global idea:
Use Amazon Web Services.
Using AWS, you can have a setup like this:
1 Selenium Grid hub. IP: X.X.X.X
100 Selenium nodes registering to X.X.X.X:4444/grid/register
Each Selenium node has a node config, running 2 maxSessions at once (depending on instance size, of course).
Also have a continuous integration server like Jenkins run your Python tests against the X.X.X.X grid.
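For example, once the grid is up, the Python tests only need to target the hub instead of a local driver (X.X.X.X is the hub address from the setup above; the app URL is a placeholder, and Selenium 4 would pass an options object rather than desired_capabilities):

# Run an existing test against the Selenium Grid hub instead of a local browser.
from selenium import webdriver

driver = webdriver.Remote(
    command_executor="http://X.X.X.X:4444/wd/hub",
    desired_capabilities={"browserName": "chrome"},
)
try:
    driver.get("https://your-app-under-test.example.com/")  # placeholder URL
finally:
    driver.quit()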
Hi everyone,
Do you think Node.js is suitable for automated web UI testing?
I don't think so.
First, Node.js is based on the V8 engine, so how do you test issues on IE6-8, or on other non-WebKit browsers?
Second, what is Node.js actually suited for?
What are you talking about? NodeJS is designed for writing SERVERS, not clients. It has nothing to do with browsers.
IMHO Node.js is the best choice for writing high-traffic web servers. Together with WebSockets it is also a very good choice. And the unification of the language used on the client side and the server side makes it part of the future of web development.
You can use Node.js to drive Selenium and do automated UI tests; Soda (https://github.com/testingbot/soda) supports this.
If you want to use a Node.js-based headless browser to automate UI tests, check out zombie.js. If you want to create a UI test suite that runs against different browsers, I'd highly recommend Selenium.