How to Selenium-test web sites that depend on each other? (OAuth2 IdS, protected sites)

I have an IdS (Thinktecture Identity Server3) and various web sites trusting the IdS.
I have selenium tests for IdS and for each of the sites.
I use TeamCity and Octopus Deploy.
Changes in IdS should trigger the tests of the dependent web sites. Changes in an individual site should trigger only that site's tests (as it already does).
What is the best way of ensuring this? I should think this is a common problem? ;)
BR, Anders

One way to do this is to use the app settings configuration options of .NET itself. You can use config transformations to create a different configuration per site and per change; you will, however, have to map each one. This allows you to keep everything in the project. For example, you can create the transformed config files using the command-line transform execution tool, or, if you prefer to stay inside TeamCity, use XML pokes. I've used the latter with great success on a Selenium, multi-site platform test framework: before each chained test build, we modified the XML files so the execution was dedicated to the Git branch or repo that TeamCity was set to monitor.
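As a rough sketch, such a "poke" step can be as small as one xmlstarlet call run before the chained test build; the Tests/Settings.config path and the BaseUrl key here are made-up examples, and xmlstarlet is assumed to be available on the agent:

# Hypothetical pre-test build step: point the Selenium config at the
# site under test before the chained build runs.
xmlstarlet ed --inplace \
  -u "//appSettings/add[@key='BaseUrl']/@value" \
  -v "https://staging.example.com/myapp" \
  Tests/Settings.config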

I found what I was looking for in the most obvious of places: on the web site builds, I added a Finish Build Trigger pointing to the IdS build. This way all my sites (I have only one :)) get Selenium tested.

Running integration/e2e tests on top of a Kubernetes stack

I've been digging a bit into the way people run integration and e2e tests in the context of Kubernetes and have been quite disappointed by the lack of documentation and feedback. I know there are amazing tools such as kind or minikube that let you run resources locally. But in the context of CI, and with a bunch of services, they do not seem to be a good fit, for obvious resource reasons. I think there are great opportunities in running tests for:
Validating manifests or helm charts
Validating that a component behaves well as part of a bigger whole
Validating the global behaviour of a product
The point here is not really about the testing framework but more about the environment on top of which the tests could be run.
Do you share my thoughts? Have you ever run such tests? Do you have any feedback or insights about it?
Thanks a lot
Interesting question, and something that I have worked on over the last couple of months for my current employer. Essentially we ship a product as Docker images with manifests. When writing e2e tests I want to run the product as close to the customer environment as possible.
To solve this we have built scripts that interact with our standard cloud provider (GCloud) to create a cluster, deploy the product and then run the tests against it.
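A rough sketch of such a script, with made-up cluster, manifest and test-runner names, assuming gcloud and kubectl are installed and authenticated on the build agent:

#!/bin/bash
# Hypothetical CI script: create a throwaway cluster, deploy, test, tear down.
set -e
CLUSTER="e2e-${BUILD_NUMBER}"   # unique per build so concurrent runs don't clash
trap 'gcloud container clusters delete "$CLUSTER" --quiet' EXIT
gcloud container clusters create "$CLUSTER" --num-nodes=3 --quiet
gcloud container clusters get-credentials "$CLUSTER"
kubectl apply -f manifests/
kubectl wait --for=condition=available --timeout=600s deployment --all
./run-e2e-tests.sh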
For the major cloud providers this is not a difficult task, but it can be time consuming. There are a couple of things that we have learnt the hard way to keep in mind while developing the tests.
Concurrency: this may sound obvious, but do think about the number of concurrent builds your CI can run.
Latency from the cloud: don't assume that you will get an instant response to every command that you run in the cloud. Also think about the timeouts. If you bring up a product with lots of pods and services, what is an acceptable start-up time?
Errors causing build failures: this is an interesting one. We have seen builds fail due to network errors when communicating with our test deployment. These are nearly always transient, and it is best to stop them from failing the build.
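One way to absorb those transient failures is a small retry wrapper around the flaky cloud calls; a sketch (the attempt count and delay are arbitrary):

retry() {
  local attempts=5 delay=10 i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0                       # success, stop retrying
    echo "attempt $i/$attempts failed: $*" >&2
    sleep "$delay"
  done
  return 1                                 # give up after the last attempt
}
retry kubectl apply -f manifests/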
One thing to look at: GitLab provides documentation on how to build and test images in its CI pipeline.
On my side I use Travis CI. I build my container image inside it, then run k8s with kind (https://kind.sigs.k8s.io/) inside Travis CI, and then launch my e2e tests.
There is some additional information in this blog post: https://k8s-school.fr/resources/en/blog/k8s-ci/
And here are the scripts to install kind inside Travis CI in 2 lines: https://github.com/k8s-school/kind-travis-ci.git. It allows lots of customization on the k8s side (enable PSP, change the CNI plugin).
Here is an example: https://github.com/lsst/qserv-operator
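The bootstrap those scripts perform boils down to a few shell lines; a sketch (the kind version is illustrative, pin whatever is current):

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
./kind create cluster --wait 5m   # blocks until the node is Ready
kubectl cluster-info              # kind has already written the kubeconfig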
Or I use GitHub Actions CI, which makes it easy to install kind (https://github.com/helm/kind-action), provides plenty of features, and offers free worker nodes for open-source projects.
Here is an example: https://github.com/xrootd/xrootd-k8s-operator
Please note that GitHub Actions workers may not scale for large build/e2e tests; Travis CI scales pretty well.
In my understanding, this workflow could be moved to an on-premise GitLab CI where your application can interact with other services located inside your network.
One interesting thing is that you do not have to maintain a k8s cluster for your CI - kind will do it for you!

How to run Cucumber tests through centrally hosted web pages, so users can pick and run desired test cases

There should be a way for a user to access test cases and select or customize them online before running them. Or, in simple terms: is there an easy way to use feature files online?
I think what you want is Jenkins.
After you set up your Jenkins server you can access it through a web page, create jobs, and run them. There are also reporting plug-ins, like Cucumber Reports, that produce nice, easy-to-read reports.
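For the "pick and run desired test cases" part, one common approach is a parameterized Jenkins job whose shell build step passes the chosen tags or feature file to Cucumber; a sketch, where TAGS and FEATURE_FILE are hypothetical job parameters filled in from the Jenkins UI form:

# Run only the scenarios matching the tags the user picked (e.g. @smoke):
cucumber --tags "$TAGS" features/
# Or run only the feature file the user selected:
cucumber "features/$FEATURE_FILE"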

OWASP ZAP: share a context between environments and change the base URIs

I'm new to the ZAP tool, so sorry in advance if the question is stupid, but I cannot find an answer to it so far...
I have to fix all the vulnerabilities in an application, so I installed the ZAP proxy tool locally, explored the application manually, collected all the requests and ran the 'Active scanner' against it. So far everything is good, but the problem is that the application is quite big, and it's very difficult and time consuming to cover everything manually. Fortunately, we have a dedicated automation environment where I can set up the ZAP proxy and let the tests run and populate the context (the set of URLs to test) for me.
So now my task is to somehow share contexts between different environments, with the ability to change the base addresses.
E.g. I populated a context on somedomain/myapp and want to run ZAP against the same application deployed locally or on a different server (e.g. localhost/myapp).
It would be very helpful if someone could share any info on how to achieve that.
Thank you in advance,
Eugene
It seems that you can create a new context and then add existing links to it:
Create a new context
Add existing links to the selected context (right click)
Check this link.
https://chrisdecairos.ca/intercepting-traffic-with-zaproxy/
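If you drive ZAP through its local REST API instead of the UI, recent versions also expose export/import actions for contexts, which makes the base-address rewrite scriptable; a hedged sketch with curl (the endpoint names, API key, port and paths are assumptions to verify against your ZAP version):

# Export the populated context to a file.
curl "http://localhost:8080/JSON/context/action/exportContext/?apikey=$ZAP_KEY&contextName=myapp&contextFile=/tmp/myapp.context"
# The .context file is XML; rewrite the base address in its include regexes.
sed -i 's|somedomain/myapp|localhost/myapp|g' /tmp/myapp.context
# Import it into the ZAP instance that targets the other environment.
curl "http://localhost:8080/JSON/context/action/importContext/?apikey=$ZAP_KEY&contextFile=/tmp/myapp.context"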
Tiago

Integration testing of functionality in an OSGi container

I'm using FuseESB to run my app; it is essentially an OSGi container (Felix). I'd like to figure out an approach to test my OSGi services in integration mode (including external dependencies like the DB, external services, etc.). My first thought is the ability to run a specific bundle in the container which pulls all the app services into running the tests defined in that bundle. Can somebody help with that kind of issue? Thanks!
There are different ways of testing this.
Since FuseESB is based on Apache Karaf, you might test with the Apache karaf-pax-exam tools to test a complete container setup automatically.
Another way of testing your OSGi bundles in an OSGi container is to use pax-exam directly. Last but not least, if you just want to test your service look-up functionality, you might test with pojosr; it's quite nice for testing but has its limits, especially if you depend on container features.
That said you'll find information at the following pages:
Pax-Exam
Apache Karaf
a sample of how Pax-Web uses pax-exam in its iTests
You may find http://www.javabeat.net/2011/11/how-to-test-osgi-applications/ helpful as an overview of the various OSGi test options. Configuring PAX-Exam to pull in your whole FuseESB container and get all your app services present will involve certain challenges, but once you've got the knack it can be very handy.
Bndtools also has the possibility to run JUnit tests inside the container.

How do I publish PHP source code to a local web server in Rational Team Concert?

I'll be using RTC in the near future here at work. My question is: where does it put the files the team members will be working on? I understand that each programmer will work on the project's files and push the changes to the main repository. We have a local web server where we test our work (PHP). So, do we have to configure RTC to publish the files to the web server, or must the RTC server be installed on the web server so it can save the files there?
We use Rational Team Concert almost exactly as you describe, and it works brilliantly. My small team of web developers collaborates on website source code and delivers it to two different streams depending on its readiness: production-stream and staging-stream. Then we have defined two builds that check out the source code, move some things around, and push the files to the web servers via SCP. So, with a few clicks we kick off a staging build, watch it finish in about two minutes and everyone can see the changes on the staging server. When the code is ready for prime-time, the change sets are delivered to production-stream and the production build is kicked off, which is configured to copy the files to the production web server.
But even before a staging or production build is run, any of us can simply configure a local web server in RTC using the Eclipse PDE and Web Tools add-ons and see the site running in localhost as we develop.
All our work is done within Rational Team Concert, from planning, to bug tracking, to source control, to builds. It's very well-suited for website management.
Your understanding is correct - you work on files locally, and they get uploaded to the server when you check in. Bear in mind that check-in in RTC terms really means backing up your files to the server; it is the Deliver command that shares the files with others (it is worth a quick look at the articles on jazz.net that explain how SCM works).
One way to publish to your PHP server is to make that part of a build, or a build in its own right (which RTC also handles, in conjunction with your favourite build tool). The build would copy the files to the PHP server. The advantage of doing this as a build is that you will know exactly which versions of your files are being copied, and you will be able to reproduce the copy at any point in the future.
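The copy step itself can be a one-line scripted transfer from the build's loaded workspace; a sketch with made-up host and paths:

# Hypothetical post-build step: mirror the loaded workspace to the web server.
rsync -az --delete ./loaded-workspace/myapp/ deploy@webserver:/var/www/myapp/
# or, more simply, with scp:
scp -r ./loaded-workspace/myapp deploy@webserver:/var/www/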
You do not need to install the RTC server on the PHP server.
You can also try posting on the forums on http://jazz.net/ if you have questions on RTC.
Hope that helps.
Another alternative would be to use the command line interface to accept all changes into a workspace and run that with a cron job.
To handle discarded change sets, you'd probably want to use something like:
scm workspace replace-components <workspace-name> stream <uuid-of-stream> --all
after you had initially loaded the workspace on your web server.
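A sketch of what the cron-driven variant could look like; the repository URI, credentials, paths and schedule are placeholders:

# Hypothetical sync script: pull the latest change sets into the workspace
# that is loaded under the web root.
cd /var/www/myapp
scm accept -r https://rtc.example.com/ccm -u builduser -P secret

# crontab entry running the script every five minutes:
*/5 * * * * /usr/local/bin/rtc-sync.sh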