API automation execution from CI/CD platforms

My question is about API automation execution from CI/CD platforms such as Jenkins, Bamboo, Azure DevOps, etc.
For API automation, it is important to have control over the machine from which the APIs are triggered, as we may need to open firewall ports for some APIs or add certificates to that machine's Java keystore.
But if I run my API tests from the CI/CD agent machines, I cannot have that control, as those agent machines are maintained by an organization-level team that will not entertain any such modifications, since the agents are used by all the other teams.
How can I overcome this issue? How is it actually done?
If anyone out there has faced the same situation in their own company, I would like to hear from them.
Thanks a lot for your support.

Related

Import macros with .jar extension produced by Selenium

Good morning, everyone,
I would like to understand whether it is possible, and how, to import macros with a .jar extension produced by Selenium into WebInspect (version 21.2) and then use them to conduct a scan. Let me try to explain: on our machines we only have WebInspect; the tests with Selenium are run by other people on other systems. We wanted to understand whether, by simply passing us these files, WebInspect would be able to read and execute them, or whether it is necessary to put the WebInspect proxy into the Selenium scripts while this other team records the macros. Can anyone help me? Thank you.
I need to speed up the process by taking advantage of macros already recorded for other tests prior to mine, so as to avoid a new recording.
PS: I have already read the documentation, but it does not explain whether what I am asking is actually possible; it explains other procedures.
Unfortunately, WebInspect's integration with Selenium is essentially a real-time replay of the Selenium scripts, used as the Crawl phase of the scan. WebInspect cannot simply consume your JAR file. It requires you to set up a listener/proxy of some sort so that, when the script replays, WebInspect can capture the traffic; it then performs an Audit-Only of what it saw. There are two methods to insert this proxy technology into the process, as detailed in the WebInspect Help. The user must configure some features so that when WebInspect replays the Selenium script everything connects automatically.
e.g. from WI 22.10:
file:///C:/ProgramData/HP/HP%20WebInspect/Help/WebInspect/index.htm#Selenium_WD_1.htm?TocPath=Using%2520WebInspect%2520Features%257CIntegrating%2520with%2520Selenium%2520WebDriver%257C_____0
Besides Selenium, there are several other alternatives when it comes to Functional Testing-driven WebInspect scans. You had asked about having the dev staff record something to provide to you for WebInspect.
Have the QA team capture their Selenium test runs using BURP Proxy and save the captured proxy traffic as an artifact for the security team, e.g. "macro1.burpcap". Then use the Workflow-driven Scan wizard options in WebInspect and simply import that BURP capture as a native Workflow Macro. I like this option since BURP is easy to acquire and run, and, being a Java app, supports multiple OSes.
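For reference, here is a minimal sketch of what that looks like on the QA side, assuming Selenium 4 in Java and a capture proxy (BURP or similar) listening on 127.0.0.1:8080; the listener address and target URL are placeholders:

    import org.openqa.selenium.Proxy;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.chrome.ChromeOptions;

    public class ProxiedSeleniumRun {
        public static void main(String[] args) {
            // Placeholder: wherever the capture proxy actually listens.
            String proxyAddress = "127.0.0.1:8080";

            Proxy proxy = new Proxy();
            proxy.setHttpProxy(proxyAddress);
            proxy.setSslProxy(proxyAddress);

            ChromeOptions options = new ChromeOptions();
            options.setProxy(proxy);
            // The proxy re-signs TLS traffic, so accept its certificate.
            // Only acceptable on a test rig, never in production code.
            options.setAcceptInsecureCerts(true);

            WebDriver driver = new ChromeDriver(options);
            try {
                // Normal functional test steps; every request/response now
                // flows through the proxy and ends up in the capture file.
                driver.get("https://app.example.test/login");
            } finally {
                driver.quit();
            }
        }
    }

Everything the test does then shows up in the proxy's history, ready to be saved as the capture artifact.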
WebInspect's Web Proxy could also be used in the same way BURP was used above. However, this complicates things for your dev team, as they do not have access to WebInspect. There are other free options for WebInspect customers that your dev team could install, including the Standalone WebInspect Toolkit, the standalone Web Proxy tool, or the Web Proxy API tool (a REST service). One annoyance with all of these today is that they (currently) require Windows, and an authorized WebInspect user (you) must download and deploy these installers inside your network for the dev staff to get them.
The WebInspect REST API offers several endpoints for proxy listeners. This means that remote users (i.e. your devs) could spawn a proxy listener, run their functional test script through that proxy, have the captured data saved as a Workflow Macro, and then kill the listener. By itself, this combination could produce the artifacts your appsec team will want to use later in their Workflow-driven Scan.
To support this with further automation ("developer-driven DAST"), you could have those same proxy API calls followed by a New Scan endpoint call at the end, to go ahead and trigger a Workflow-driven Scan using the macro that was just recorded in the prior calls, as sketched below. Good for putting in a CI/CD pipeline, provided you have a dedicated WebInspect machine sitting on the network with its API available.
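To make the sequence concrete, here is a rough Java sketch of those calls. Every endpoint path and JSON body below is an illustrative placeholder rather than the verbatim WebInspect API; check the Swagger UI on your own instance for the real routes:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ProxyCaptureToScan {

        // Placeholder base URL for the WebInspect REST API.
        private static final String BASE = "http://webinspect-host:8083/webinspect";
        private static final HttpClient HTTP = HttpClient.newHttpClient();

        private static String send(String method, String path, String json)
                throws java.io.IOException, InterruptedException {
            HttpRequest.BodyPublisher body = (json == null)
                    ? HttpRequest.BodyPublishers.noBody()
                    : HttpRequest.BodyPublishers.ofString(json);
            HttpRequest request = HttpRequest.newBuilder(URI.create(BASE + path))
                    .header("Content-Type", "application/json")
                    .method(method, body)
                    .build();
            return HTTP.send(request, HttpResponse.BodyHandlers.ofString()).body();
        }

        public static void main(String[] args) throws Exception {
            // 1. Spawn a proxy listener for the functional test to run through.
            send("POST", "/proxy", "{\"port\": 8085}");

            // 2. (Out of band) the dev team points its Selenium suite at
            //    webinspect-host:8085, as in the earlier proxy sketch.

            // 3. Save the captured traffic as a Workflow Macro artifact.
            String macro = send("GET", "/proxy/8085/macro", null);
            System.out.println("captured macro bytes: " + macro.length());

            // 4. Tear the listener down.
            send("DELETE", "/proxy/8085", null);

            // 5. Chain a New Scan call at the end so the pipeline goes
            //    straight from capture to a Workflow-driven scan.
            send("POST", "/scanner/scans", "{\"settingsName\": \"Default\"}");
        }
    }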
The challenge with using the WebInspect API as a "poor man's pipeline scanning tool" is that WebInspect is simplistic and has no resource management features in and of itself. This means that your pipelines could trigger lots of scans quickly, and the WebInspect machine would fall over after 4+ scans got started. What a mess! You would have to design API checks into your pipelines to monitor the number of running scans on the WebInspect machine, and then pause/poll until the machine was free and the pipeline could submit its new scan order.
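A sketch of such a guard, again with a made-up scan-listing endpoint and a deliberately crude JSON count, just to show the pause/poll shape:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ScanSlotGate {

        // Block until the (hypothetical) scanner reports fewer than
        // maxRunning scans in the Running state, polling every 30 seconds.
        public static void waitForFreeSlot(String base, int maxRunning)
                throws java.io.IOException, InterruptedException {
            HttpClient http = HttpClient.newHttpClient();
            while (true) {
                HttpRequest request = HttpRequest.newBuilder(
                        URI.create(base + "/scanner/scans?status=Running")).build();
                String json = http.send(request,
                        HttpResponse.BodyHandlers.ofString()).body();
                // Crude count of entries in the returned JSON list; a real
                // pipeline step would parse the JSON properly.
                int running = json.split("\"id\"", -1).length - 1;
                if (running < maxRunning) {
                    return; // a slot is free, submit the next scan order
                }
                Thread.sleep(30_000);
            }
        }
    }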
Our solution for this sort of enterprise automation would be to use Fortify ScanCentral DAST instead of standalone WebInspect. SCDAST provides a central web GUI for your appsec staff to configure, operate, and review scans, with multiple headless "WebInspect" scan machines managed in resource pools. Incoming scan orders (REST API calls) are queued and prioritized automatically, and the remote scan machines are brought online or shut down as needed (think of a headless WebInspect API on Docker). Your CI/CD pipelines can then simply trigger the DAST scan and not worry about scanner machine resources.
This brings me around to another great option for your Selenium needs, the Fortify FAST Proxy. This solution only operates with ScanCentral DAST, which is why I had to go on that side tangent above. With FAST Proxy, your devs only start up the FAST Proxy (configured with authentication details for the ScanCentral API), run their Selenium scripts through that proxy, and then kill the FAST Proxy when done. That completes their functional testing with Selenium. Meanwhile, on shutdown, the FAST Proxy automatically delivers the captured traffic to ScanCentral as a new Workflow-driven Scan order. A little while later, the DAST scan of their Selenium script traffic has completed, and if you configured Notifications, the devs receive a link to their appsec results.

Visual Studio Team Services Test Running

Apologies if something similar has been asked before; I couldn't seem to find anything. Just point me in the right direction if so.
I'm brand new to test automation. I will be writing Selenium tests against a third-party website hosted on an internal network. Our source control is provided by Visual Studio Team Services, although it is possible I could install TFS on-premise.
Eventually I will need to schedule test runs. I believe all of this can be done with Team Services; I've seen some demos, all good.
I will be using a URL to access the system under test, which is on our internal network. If Team Services tries to run a Selenium test and connect to that URL, I imagine it will fail, as it runs from wherever Microsoft is holding the code and building it.
I don't think there is any chance we would allow Team Services access to our internal network, even if that were possible.
So the question is: what are my options? Can the build be moved from VS Team Services onto a local machine to run the tests with the internal URL? Is this a good idea if it can? Am I relying too much on the internet for testing on our internal network, and is this a risk?
I have spent a bit of time on "the google" but am struggling to find a great deal of information; it's possible I am asking the wrong questions.
Any help is greatly appreciated. Links to articles are fine; I don't mind doing the legwork, I just need some pointers.
Many thanks for your help, and apologies if any of that makes no sense.
You have a few options:
Install a VSTS build agent on-premise and connect it to VSTS. The agent connects to VSTS using an outbound connection, and from there it can execute Build and Release pipelines and orchestrate the execution of tests. You can either put this agent in a specific Agent Pool or Agent Queue, or add a Capability to it (e.g. "onprem"). By setting the Build Definition to use the specified Pool/Queue, the agent will be selected; or, by adding the Demand "onprem" to your Build Definition, you ensure that it always requires that capability of any agent.
Use TFS 2015u3 or TFS 2017 with the same agent, but that would mean you lose all the goodness that VSTS brings with regard to licenses, "free upgrades", and all.
With regard to security:
Adding an agent to your network that executes commands queued on a cloud service adds risk. You can minimize that risk by configuring the build agent with a limited account, using Active Directory to limit the machines this user can run processes on or log on to, and limiting access to the agent through permissions on the Queue and Pool as well. You can ensure that the users who have access to this pool, and all your VSTS administrators, have configured two-factor authentication on their AAD accounts and, if needed, add IP access control to these accounts as well. It's recommended that users who administer such agent pools/queues do not have alternate credentials configured, and that the Personal Access Token used to register the agent is scoped to just the permissions required to do that.
With these extra measures in place you'll have a pretty secure setup, and it beats the hassle of having to install, back up, and maintain a couple of TFS servers on-premise.

How to test Service Contracts implemented as OSGi Bundles?

We are in the process of transitioning towards SOA.
Our current goal is to try and ensure that more of the application is developed as "Services" (mainly to improve visibility of capability, re-use and de-risk change). Some of those services will be exposed as web services, but many (and probably the majority) will not, and be used for "internal" use only to help reap some of the benefits of SOA.
For those "internal" services we are currently intending on implementing them as OSGi bundles; however we are struggling to understand how best to test them. Our goal is to enable the current System Test team to test all types of services and we have been investigating tools like SoapUI and SOA Test; however it's becoming clearer that we may face some challenges in testing our services implemented as OSGi bundles using tools like these; and indeed asking the test team to do so.
So we're looking for some advice on how best to test aspects of our capability designed to act as a "service", but implemented as an OSGi bundle instead of a web service.
What tools would people recommend? Is this a type of testing that's traditionally done by a developer during unit testing, or can it be done by a less technical tester applying the same basic principles of testing interfaces (i.e. inputs, processing, outputs)?
You could theoretically use a Remote Service Admin implementation (like Aries RSA or Eclipse ECF) to expose your internal services to the outside during testing, so that an external system test tool can access them.
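For illustration, a registration using the standard OSGi Remote Services export properties might look like the sketch below; GreetingService is a made-up example, and the "aries.tcp" config type assumes the Aries RSA TCP provider bundles are installed:

    import java.util.Dictionary;
    import java.util.Hashtable;
    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    interface GreetingService {
        String greet(String name);
    }

    class GreetingServiceImpl implements GreetingService {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    public class Activator implements BundleActivator {

        public void start(BundleContext context) {
            Dictionary<String, Object> props = new Hashtable<>();
            // Standard OSGi Remote Services properties: marking the service
            // as exported lets the RSA implementation publish it remotely.
            props.put("service.exported.interfaces", "*");
            // Provider-specific config type; swap in your provider's id.
            props.put("service.exported.configs", "aries.tcp");

            context.registerService(GreetingService.class,
                    new GreetingServiceImpl(), props);
        }

        public void stop(BundleContext context) {
            // Services registered through this context are unregistered
            // automatically when the bundle stops.
        }
    }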
I would not recommend letting an external team test your OSGi services, though. It is much better to test the services in your own build using an integration testing tool like Pax Exam. It lets you define which bundles and other configuration to install; it then boots up an OSGi framework with your setup and runs modified JUnit tests against it. The advantage is that such tests are quite realistic and still quite simple.
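A minimal sketch of such a test, reusing the hypothetical GreetingService from above (the Maven coordinates are placeholders for your own bundles):

    import static org.ops4j.pax.exam.CoreOptions.junitBundles;
    import static org.ops4j.pax.exam.CoreOptions.mavenBundle;
    import static org.ops4j.pax.exam.CoreOptions.options;

    import javax.inject.Inject;
    import org.junit.Assert;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.ops4j.pax.exam.Configuration;
    import org.ops4j.pax.exam.Option;
    import org.ops4j.pax.exam.junit.PaxExam;

    @RunWith(PaxExam.class)
    public class GreetingServiceIT {

        // Pax Exam injects the service from the freshly booted framework.
        @Inject
        GreetingService greetingService;

        @Configuration
        public Option[] config() {
            // Which bundles to install into the test framework.
            return options(
                    mavenBundle("com.example", "greeting-api", "1.0.0"),
                    mavenBundle("com.example", "greeting-impl", "1.0.0"),
                    junitBundles());
        }

        @Test
        public void greetsByName() {
            Assert.assertEquals("Hello, SOA", greetingService.greet("SOA"));
        }
    }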
There are example Pax Exam tests in the Aries RSA and Apache Karaf projects.
The Aries RSA examples use the Pax Exam forked container for very fast tests (under 1 s per test), while the Karaf examples use the Apache Karaf container (around 10 s per test) for tests that are very close to a production system.
So you get much faster feedback than with an external system test team, which will always lag a bit behind current development. It also allows you to establish a policy that each team member runs the tests locally before committing.

Solution for a testing platform

We are looking for automated testing software for our web application. We need to come up with a solution or software with which our non-IT staff, as well as the developers, could write test cases.
For example, I've looked at some of them, such as SmartBear, National Instruments, and IBM. Most of these are MS Windows-based or target commercial Linux distros, which removes them from our list since we are all Debian-based.
Any recommendation or guideline would be much appreciated.
PS: We don't have any budget limit!
You're going to have a hard time finding tooling that lets non-technical testers build test cases if you limit yourselves to Debian for developing and running the tests. There's no reason you couldn't have a few Windows systems to manage your test suites from; those would run against your web site just fine, regardless of what stack it's hosted on. That would open you up to the tools you mentioned (and Telerik's Test Studio, the tool I help promote).
Those Windows systems could easily be run on whatever virtualization host you prefer, so you wouldn't even need physical systems to deal with. You could easily share the same source control repository as your devs, too, since nearly every decent SCM has a Windows client.
If you're unwilling to consider having a few Windows boxes around for your testing, then you'll need to look at getting all your testers proficient in APIs and frameworks like WebDriver and Robot Framework. The Pages gem from Jeff Morgan (@chzy) in Ruby would be another option, as would Adam Goucher's Saunter (in Python).

TFS for machine applications

Hello experts,
I work at a firm as a SW tester/validator.
Our company produces automatic machines. Recently we have been introducing Team Foundation Server for SW development. As a SW tester, my tasks include:
Validating the functionality of the machines on the real machines.
Reporting bugs and submitting reports.
We don't do any UnitTesting.
We don't do any code analysis.
I browsed the internet and read some related material. My impression of my future testing job after introducing Team Foundation Server is:
Working only with Testing Center
(Perhaps) installing TFS
(Perhaps) creating virtual environments through Lab Center
Writing test cases
Carrying out tests manually
Reporting bugs after implementation by the developers
Questions:
Are the virtual environments useful for SW tests that need communication with a PLC?
Are the virtual environments created on the computer of the SW tester or on the server?
Could the SW tester prepare templates for test cases? If yes, how could such work be carried out?
For the preparation of test plans, which events are usually the most important?
What is test impact?
Did you take a course to learn TFS, or did you teach yourself?
Thanks a lot for your insight in advance.
Best regards,
John
If you use TFS, check out Test Manager and Lab Manager. They supposedly integrate perfectly with TFS, and you might find them, at least, interesting to know about.