Import macros with .jar extension produced by Selenium

Good morning everyone,
I would like to understand whether it is possible to import macros with a .jar extension produced by Selenium into WebInspect (version 21.2) and then use them to run a scan, and if so, how. Let me explain: on our machines we only have WebInspect; the Selenium tests are run by other people on other systems. We want to understand whether, if they simply pass us these files, WebInspect would be able to read and execute them, or whether it is necessary to put the WebInspect proxy into the Selenium scripts while that other team records the macros. Can anyone help me? Thank you.
I need to speed up the process by reusing macros that were already recorded for other tests prior to mine, so as to avoid recording new ones.
PS: I have already read the documentation, but it does not explain whether what I am asking is actually possible; it covers other procedures.

Unfortunately, WebInspect's integration with Selenium is essentially a real-time replay of the Selenium scripts, used as the Crawl phase of the scan. WebInspect cannot simply consume your JAR file. It requires you to set up a listener/proxy of some sort so that, when the script replays, WebInspect can capture the traffic; it then performs an Audit-Only scan of what it saw. There are two methods for inserting this proxy technology into the process, as detailed in the WebInspect Help. The user must configure a few settings so that when WebInspect replays the Selenium script, everything connects automatically.
e.g. from WI 22.10:
file:///C:/ProgramData/HP/HP%20WebInspect/Help/WebInspect/index.htm#Selenium_WD_1.htm?TocPath=Using%2520WebInspect%2520Features%257CIntegrating%2520with%2520Selenium%2520WebDriver%257C_____0
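For context, routing a Selenium script through an intercepting proxy generally looks like the sketch below. This is a generic illustration rather than WebInspect's specific mechanism; the listener host/port and target URL are placeholders, and the same shape applies to the BURP option described next.

# Sketch: route a Selenium WebDriver session through an intercepting proxy
# (e.g. a WebInspect listener or BURP). The proxy address and target URL
# are placeholders.
from selenium import webdriver

PROXY = "127.0.0.1:8080"  # placeholder: address of the intercepting proxy

options = webdriver.ChromeOptions()
options.add_argument(f"--proxy-server=http://{PROXY}")

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://app.example.com/login")  # hypothetical target app
    # ...the functional test steps run here; all traffic passes the proxy...
finally:
    driver.quit()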
Besides Selenium, there are several other alternatives when it comes to Functional Testing driven WebInspect scans. You had asked about requiring the dev staff to record something to provide to you for WebInspect.
Have the QA team capture their Selenium test runs using BURP Proxy. Have them save the captured proxy traffic as an artifact for the security team, e.g. "macro1.burpcap". Then use the Workflow-driven Scan wizard options in WebInspect and simply import that BURP capture as a native Workflow Macro. I like this option since BURP is easy to acquire and run, and, being a Java app, it supports multiple operating systems.
WebInspect's Web Proxy could also be used, just as BURP was used above. However, this complicates things for your dev team, as they do not have access to WebInspect. There are other free options for WebInspect customers which your dev team could install, including the Standalone WebInspect Toolkit, the standalone Web Proxy tool, or the Web Proxy API tool (a REST service). One annoyance with all of these today is that they currently require Windows, and an authorized WebInspect user (you) has to download and deploy these installers inside your network for the dev staff.
The WebInspect REST API offers several endpoints for proxy listeners. This means that remote users (i.e. your devs) could spawn a proxy listener, run their functional test script through that proxy, have the captured data saved as a Workflow Macro, and then kill the listener. By itself, this combination could produce the artifacts your appsec team will want to use later in their Workflow-driven Scan.
To support this with further automation ("developer-driven DAST"), you could have those same proxy API calls add a New Scan endpoint call at the end, to go ahead and trigger a Workflow-driven Scan using the macro that was just recorded by the prior calls. This is good for putting in a CI/CD pipeline, provided you have a dedicated WebInspect machine sitting on the network with its API available.
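As a rough illustration, the whole record-then-scan flow might look like the sketch below. This is not a verbatim API reference: the endpoint paths, port, and payload fields are assumptions, so check the Swagger documentation exposed by your WebInspect API instance for the real names.

# Sketch of the proxy-record-then-scan flow against the WebInspect REST API.
# Endpoint paths, the port, and payload fields are assumptions; consult the
# Swagger docs on your WebInspect machine for the actual contract.
import requests

API = "http://webinspect-host:8083/webinspect"  # placeholder host/port

# 1. Spawn a proxy listener for the dev team
proxy = requests.post(f"{API}/proxy", json={"port": 8085}).json()
instance_id = proxy["instanceId"]

# 2. ...the devs run their Selenium/functional tests through host:8085 here...

# 3. Save the captured traffic as a Workflow Macro
macro = requests.get(f"{API}/proxy/{instance_id}.wmac")
with open("macro1.wmac", "wb") as f:
    f.write(macro.content)

# 4. Kill the listener
requests.delete(f"{API}/proxy/{instance_id}")

# 5. Optionally trigger a Workflow-driven Scan that uses the recorded macro
requests.post(f"{API}/scanner/scans",
              json={"settingsName": "Default",
                    "overrides": {"scanName": "selenium-workflow-scan"}})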
The challenge with using the WebInspect API as a "poor man's pipeline scanning tool" is that WebInspect is simplistic and has no resource management features in and of itself. This means that your pipelines could trigger lots of scans quickly, and the WebInspect machine would fall over once 4+ scans got started. What a mess! You would have to design API checks into your pipelines to monitor the number of Running scans on the WebInspect machine, and then pause and poll until the machine was free and the pipeline could submit its new scan order.
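Such a pipeline-side guard could be as simple as this sketch (again, the endpoint path, status field, and concurrency limit are assumptions):

# Sketch: block the pipeline until the WebInspect machine has scan capacity.
# The endpoint path, the "Status" field, and the limit of 3 concurrent scans
# are assumptions to adapt against your API's Swagger docs.
import time
import requests

API = "http://webinspect-host:8083/webinspect"  # placeholder host/port

def wait_for_capacity(max_running=3, poll_seconds=30):
    while True:
        scans = requests.get(f"{API}/scanner/scans").json()
        running = [s for s in scans if s.get("Status") == "Running"]
        if len(running) < max_running:
            return
        time.sleep(poll_seconds)

wait_for_capacity()
# ...now it is safe to submit the new scan order...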
Our solution for this sort of enterprise automation would be to use Fortify ScanCentral DAST instead of standalone WebInspect. SCDAST affords a central web GUI for your appsec staff to configure, operate, and review scans, with multiple headless "WebInspect" scan machines managed in resource pools. Incoming scan orders (REST API calls) are queued and prioritized automatically, and the remote scan machines are brought online and shut down as needed (think headless WebInspect API on Docker). So now your CI/CD pipelines can simply trigger the DAST scan and not worry about scanner machine resources.
This brings me around to another great option for your Selenium needs: the Fortify FAST Proxy. This solution only operates with ScanCentral DAST, which is why I had to go on that tangent above. With FAST Proxy, your devs would only start up the FAST Proxy (configured with authentication details for the ScanCentral API), run their Selenium scripts through that proxy, and then kill the FAST Proxy when done. That completes their functional testing with Selenium. Meanwhile, on shutdown, the FAST Proxy automatically delivers the captured traffic to ScanCentral as a new Workflow-driven Scan order. A little while later, the DAST scan of their Selenium script traffic has completed. If you configured notifications, the devs now receive a link to their appsec results.

Related

API automation execution from CI/CD Platforms

My question is about API automation execution from CI/CD platforms like Jenkins, Bamboo, Azure, etc.
For API automation, it is important to have control over the machine from which the APIs are triggered, as we may need to open the firewall for some APIs or add certificates to that machine's Java installation.
But if I run my API tests from the CI/CD agent machines, I cannot have that control, because those agent machines are maintained by an organization-level team who will not entertain any such modifications, since the agents are used by all the other teams.
How do I overcome this issue? How is it actually done in practice?
If anyone out there has faced the same situation in their own company, I would like to hear from them.
Thanks a lot for your support.

Performance testing tool vs. performance testing tool plugin integration with other tools

What is the difference between:
1. using the performance testing tool directly (JMeter, etc.), and
2. integrating the performance testing tool with Selenium via a plugin (JMeter, etc.)?
Can I achieve all of the functionality both ways? If used as a plugin, will there be any limitations?
Thanks.
A performance testing tool acts at the HTTP protocol level, basically much the same as a browser does; however, JMeter in particular:
JMeter is not a browser, it works at protocol level. As far as web-services and remote services are concerned, JMeter looks like a browser (or rather, multiple browsers); however JMeter does not perform all the actions supported by browsers. In particular, JMeter does not execute the Javascript found in HTML pages. Nor does it render the HTML pages as a browser does (it's possible to view the response as HTML etc., but the timings are not included in any samples, and only one sample in one thread is ever displayed at a time).
Therefore you can only test backend performance using JMeter; you will not get client-side performance metrics.
Protocol-based tests have a much smaller footprint in terms of resources (CPU, RAM, etc.), so you can simulate thousands of virtual users from a mid-range modern laptop.
Selenium is a browser automation framework; it operates real browsers, so:
you have client-side performance metrics (including the ability to query window.performance metrics)
but you don't have HTTP-protocol-level metrics (connect time, latency, concurrency, throughput, etc.)
Browser-based tests have a huge resource footprint, since browsers are very resource-intensive; for example, Firefox 74 requires 1 CPU core and 2 GB of RAM per browser instance, so you can only kick off a handful of browsers on a mid-range modern laptop.
Depending on your requirements, you might want to test the backend using JMeter, test the frontend using Selenium, or create the main load using JMeter while using 1-2 real browsers to measure client-side performance.
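For instance, pulling client-side timings out of a real browser with Selenium can be as small as this sketch (the target URL is a placeholder):

# Sketch: query client-side timings via the browser's window.performance API.
# The target URL is a placeholder.
from selenium import webdriver

driver = webdriver.Firefox()
try:
    driver.get("https://app.example.com/")
    timing = driver.execute_script(
        "return JSON.parse(JSON.stringify(window.performance.timing))")
    page_load_ms = timing["loadEventEnd"] - timing["navigationStart"]
    print(f"Full page load took {page_load_ms} ms")
finally:
    driver.quit()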
If you're looking for a way to integrate JMeter with Selenium, take a look at the WebDriver Sampler (a JMeter plugin which can be installed using the JMeter Plugins Manager).

Visual Studio Team Services Test Running

Apologies if something similar has been asked before; I couldn't seem to find anything. Just point me in the right direction if so.
I'm brand new to test automation. I will be writing Selenium tests against a third-party website hosted on an internal network. Our source control is provided by Visual Studio Team Services, although it is possible I could install TFS on-premises.
Eventually I need to schedule test runs. I believe all this can be done with Team Services; I've seen some demos, all good.
I will be using a URL to access the system under test, which is on our internal network. If Team Services tries to run a Selenium test and connect to that URL, I imagine it will fail, as it runs from wherever Microsoft is hosting the code and builds.
I don't think there is any chance we would allow Team Services access to our internal network, even if that were possible.
So the question is: what are my options? Can the build be moved from VSTS onto a local machine to run the tests with the internal URL? Is that a good idea, if it can? Am I relying too much on the internet for testing on our internal network, and is this a risk?
I have spent a bit of time on "the google" but am struggling to find much information; it's possible I am asking the wrong questions.
Any help is greatly appreciated; links to articles are fine, I don't mind doing the legwork, I just need some pointers.
Many thanks for your help, and apologies if any of that makes no sense.
You have a few options:
Install a VSTS build agent on-premises and connect it to VSTS. The agent connects to VSTS using an outbound connection and will be able to execute Build and Release pipelines, and from there orchestrate the execution of tests. You can either put this agent in a specific Agent Pool or Agent Queue, or you can add a Capability to it (e.g. "onprem"). By setting the Build Definition to use the specified pool/queue, that agent will be selected; alternatively, by adding the Demand "onprem" to your Build Definition, you ensure it always requires that capability of any agent.
Use TFS 2015 Update 3 or TFS 2017 with the same agent, but that would mean you lose all the goodness VSTS brings with regard to licenses, "free upgrades" and so on.
With regard to security:
Adding an agent to your network that executes commands queued on a cloud service adds risk. You can minimize that risk by running the build agent under a limited account, using Active Directory to limit the machines this user can run processes on or log on to, and limiting access to the agent through permissions on the Queue and Pool as well. You can ensure that the users who have access to this pool, and all your VSTS administrators, have configured two-factor authentication on their AAD accounts, and if needed add IP access control to these accounts as well. It's also recommended that users who administer such agent pools/queues do not have alternate credentials configured, and that the Personal Access Token used to register the agent is scoped to just the permissions required to do that.
With these extra measures in place you'll have a pretty secure setup. And it beats the hassle of having to install, back up, and maintain a couple of TFS servers on-premises.

Script for sending many HTTP requests at a time to check server load

I want to check that my AWS autoscaling works correctly when CPU utilization is greater than 80%, so I want a script that sends many HTTP requests to my AWS server for testing. Please help me.
There are several tools to accomplish this task; two of them are:
ab, a command-line tool bundled with the Apache HTTP Server:
ab is a tool for benchmarking your Apache Hypertext Transfer Protocol (HTTP) server. It is designed to give you an impression of how your current Apache installation performs. This especially shows you how many requests per second your Apache installation is capable of serving.
JMeter, a graphical testing tool written in Java:
The Apache JMeter™ application is open source software, a 100% pure Java application designed to load test functional behavior and measure performance. It was originally designed for testing Web Applications but has since expanded to other test functions.
Handle them with care, as they really do what you are requesting. I've managed to brute-force my test server until it collapsed with ab quite easily...
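If you literally just want a quick script rather than a full tool, a minimal Python sketch along these lines would do. The target URL, worker count, and request count are placeholders; point it only at infrastructure you own.

# Sketch: hammer a URL with concurrent HTTP requests to drive up server load.
# The target URL, worker count, and request count are placeholders.
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://your-aws-server.example.com/"  # placeholder target
TOTAL_REQUESTS = 10_000
WORKERS = 100

def hit(_):
    try:
        return requests.get(URL, timeout=10).status_code
    except requests.RequestException:
        return None

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    statuses = list(pool.map(hit, range(TOTAL_REQUESTS)))

print(f"Completed {sum(s is not None for s in statuses)}/{TOTAL_REQUESTS} requests")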

Any ideas for executing Selenium WebDriver + Java/Python tests from the cloud

I can't find any question/answer about this (probably I don't know how to search for it...).
Could somebody give me a general idea of how to execute 200+ Selenium WebDriver tests (Python) from cloud servers/tools?
Thanks!!
rgzl
Another option is Sauce Labs; using this service you'll be able to just send your Selenium Java/Python tests to their cloud infrastructure for execution. The benefits of such testing are obvious: no need to waste time and resources setting up and maintaining your own VM farm, and additionally you can run your test suite in various browsers in parallel. Also, there is no need to share any sensitive data, source code or databases.
As said in this article:
Of course inserting this roundtrip across the Internet is not without cost. The penalty of running Selenium tests this way is that they run quite slowly, typically about 3 times slower in my experience. This means that this is not something that individual developers are going to do from their workstations.
To ease the integration of this service into your projects, you may have to write some kind of Sauce Labs adapter that does the necessary SSH tunnel setup/teardown and Selenium configuration automatically as part of a test.
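In practice, such an adapter mostly amounts to pointing Remote WebDriver at the Sauce Labs endpoint. A minimal sketch, with placeholder credentials (Sauce-specific capabilities and tunnel setup are omitted):

# Sketch: run an existing test in the Sauce Labs cloud by swapping the local
# driver for a Remote WebDriver. USERNAME/ACCESS_KEY are placeholders.
from selenium import webdriver

SAUCE_URL = "https://USERNAME:ACCESS_KEY@ondemand.saucelabs.com/wd/hub"

options = webdriver.ChromeOptions()
driver = webdriver.Remote(command_executor=SAUCE_URL, options=options)
try:
    driver.get("https://app.example.com/")  # hypothetical system under test
finally:
    driver.quit()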
Here's a global idea:
Use Amazon Web Services.
Using AWS, you can have a setup like this:
1 Selenium Grid hub. IP: X.X.X.X
100 Selenium nodes connecting to X.X.X.X:4444/wd/register
Each Selenium node has a node config allowing 2 maxSessions at once (depending on instance size, of course).
Then have a continuous integration server like Jenkins run your Python tests against the X.X.X.X grid.
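From the test side, pointing the Python tests at that hub is mostly a one-line change; a minimal sketch (the hub address is the X.X.X.X placeholder from above):

# Sketch: run a test against the Selenium Grid hub instead of a local browser.
# The hub address below is a placeholder matching the X.X.X.X above.
from selenium import webdriver

options = webdriver.FirefoxOptions()
driver = webdriver.Remote(
    command_executor="http://X.X.X.X:4444/wd/hub",
    options=options,
)
try:
    driver.get("https://app.example.com/")  # hypothetical system under test
finally:
    driver.quit()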