Test harness for lab setup - testing

We are new to test automation, and we have finally automated our setup (black-box testing). A brief overview of our setup:
Each setup consists of two Linux PCs and one Windows PC; the PCs communicate with one another via an embedded board.
Previously we ran test cases manually from the Linux machine; this required typing on the Linux machine as well as some operations on the Windows PC. Now we have written a C wrapper from which you can trigger any number of test cases, and thanks to AutoIt the Windows PC operations are automated as well.
Now we have multiple such setups, and I want a central test controller that:
Given a set of test cases (and the corresponding executable to run on the embedded board), can distribute and trigger them in parallel across setups
During overnight tests it can keep track of which test cases have been executed and which ones are pending
Quarantine test cases
Continuous integration - we use CVS
etc etc
Basically, powerful test harness software running on a PC; this PC is connected to all the setups through a router.
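For illustration, here is roughly the parallel fan-out I have in mind, sketched in PowerShell from the controller PC (the addresses, user name, and wrapper path are made up, and it assumes plink.exe from PuTTY is on the PATH):

$setups = '192.168.1.10', '192.168.1.11'   # one Linux PC per setup (hypothetical addresses)
$jobs = foreach ($addr in $setups) {
    Start-Job -ScriptBlock {
        param($a)
        # trigger the C test wrapper on the remote Linux PC (hypothetical path and arguments)
        & plink.exe -batch "tester@$a" '/opt/tests/run_wrapper --suite nightly'
    } -ArgumentList $addr
}
$jobs | Wait-Job | Receive-Job   # collect the output from every setup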
Any suggestions for open-source (free) projects for such software? Rather than something that has every feature mentioned above, I want something that does most of it; I can code and add the remaining functionality as needed.
I tried browsing online and found a few, but they all seem to be aimed at testing websites, so I am not sure they would suit my use case. I would really appreciate input in this regard.
Thanks

OK, I am going to ignore most of the stuff you described, which it looks like you have already automated. Correct me if I am wrong, but I think what you are looking for is a controller for all these automated jobs.
I would say that Jenkins CI is the ideal solution for you.
In Jenkins everything is controlled by a master machine, which in turn controls slaves (your other Windows or Linux machines) via Java agents. This gives you an overview of the whole system.
You can then create jobs and restrict where they can run; these jobs can do pretty much anything, including taking parameters for what to run. You can also create matrix-configuration jobs, which allow one setup to be run simultaneously on as many slaves as you need.
You can start these jobs on a timer, from a trigger, from an external or internal build request, and so on.
Also, I am pretty sure Jenkins has dedicated plugins for working with CVS, some of which are part of the out-of-the-box setup.
Jenkins is the way to go here.


Should E2E tests be run in production?

Apologies if this is open ended.
Currently my team and I are working on our end-to-end (E2E) testing strategy, and we are unsure whether we should be executing our E2E tests against our staging site or against our production site. We have gathered that there are pros and cons to both.
Pros of staging tests:
Won't corrupt analytics data on production.
Can detect failures before they hit production.
Pros of production tests:
Will use actual components of the system, including the database and other configurations, and may capture issues with prod configs.
I am sometimes not sure whether we are conflating E2E testing with monitoring services (if such a thing exists). Does someone have an opinion on the matter?
In addition, when are E2E tests run? Since every part of the system is being tested, there doesn't seem to be an owner of the test suite, making it hard to determine when the E2E tests should be run. We were hoping we could run E2E in some sort of pipeline before hitting production. Does that mean I should run these tests whenever either the front end or the back end changes? Or would you rather just run the E2E suite on an interval, regardless of any change?
In my team, experience has shown that test automation is best done on a dedicated test server periodically, with new code deployed only after it has been tested successfully several sessions in a row.
Local test runs are for test automation development and debugging.
The test server is for scheduled runs, because no matter how good you are at writing tests, at some point they will take many hours in a row to run, and you need reliable statistics on them over time, with fake data that won't break the production server.
I disagree with #MetaWhirledPeas on the point of pursuing only fast test runs. Your priority should always be better coverage and reduced flakiness. You can always reduce the run time through parallelization.
Running in production: I have seen many situations where a test leaves the official site in a funny state and the company's reputation suffers. Other dangers are:
Breaking your database
Making purchases from non-existent users and losing money
Creating unnecessary strain on the official site's API, which degrades the client experience during the run or even causes the server to stop completely
So, in our team we have a dedicated manual tester for the production site.
You might not have all the best options at your disposal depending on how your department/environment/projects are set up, but ideally you do not want to test in production.
I'd say the general desire is to use fake data as often as possible, and to curate it to cover real-world scenarios. If your prod configs and setup differ from those of your testing environment, do the hard work to ensure your testing environment's configuration matches prod as closely as possible. This is easier to accomplish if you're using CI tools, but discipline is required no matter what your setup may be.
When the tests run will depend on a few things.
If you've made your website and dependencies trivial to spin up, and if you are already using a continuous integration workflow, you might be able to have the code build and launch tests during the pull request evaluation. This is the ideal.
If you have a slow build/deploy process you'll probably want to keep a permanent test environment running. You can then launch the tests after each deployment to your test environment, or run them ad hoc.
You could also schedule the tests to run periodically, but usually this indicates that the tests are taking too long. Strive to create quick tests to leave the door open for integration with your CI tools at some point. Parallelization will help, but your biggest gains will come from using Cypress's cy.request() to fly through repetitive tasks like logging in, and using cy.intercept() to stub responses instead of waiting for a service.

Running integration/e2e tests on top of a Kubernetes stack

I've been digging a bit into the way people run integration and e2e tests in the context of Kubernetes and have been quite disappointed by the lack of documentation and feedback. I know there are amazing tools such as kind or minikube that let you run resources locally. But in the context of CI, and with a bunch of services, they do not seem to be a good fit, for obvious resource reasons. I think there are great opportunities in running tests for:
Validating manifests or helm charts
Validating that a component behaves well as part of a bigger whole
Validating the global behaviour of a product
The point here is not really about the testing framework but more about the environment on top of which the tests could be run.
Do you share my view? Have you ever run these kinds of tests? Do you have any feedback or insights about it?
Thanks a lot
Interesting question, and something I have worked on over the last couple of months for my current employer. Essentially we ship a product as Docker images with manifests. When writing e2e tests, I want to run the product as close to the customer environment as possible.
To solve this, we have built scripts that interact with our standard cloud provider (Google Cloud) to create a cluster, deploy the product, and then run the tests against it.
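A minimal sketch of that create-deploy-test-teardown loop, assuming the gcloud and kubectl CLIs are installed and authenticated (the cluster name, zone, and manifest path are hypothetical):

# create a disposable cluster for this CI run
gcloud container clusters create e2e-ci --num-nodes=3 --zone=europe-west1-b
gcloud container clusters get-credentials e2e-ci --zone=europe-west1-b
# deploy the product from its manifests
kubectl apply -f ./manifests
# ... run the e2e suite against the cluster's endpoints here ...
# tear the cluster down again to stop paying for it
gcloud container clusters delete e2e-ci --zone=europe-west1-b --quiet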
For the major cloud providers this is not a difficult task, but it can be time-consuming. There are a couple of things we have learnt the hard way that are worth keeping in mind while developing the tests.
Concurrency: this may sound obvious, but do think about the number of concurrent builds your CI can run.
Latency from the cloud: don't assume you will get an instant response to every command you run in the cloud. Also think about timeouts: if you bring up a product with lots of pods and services, what is an acceptable start-up time?
Errors causing build failures: this is an interesting one. We have seen build errors due to network errors when communicating with our test deployment. These are nearly always transient. It is best to stop these from failing the build.
One thing to look at: GitLab provides documentation on how to build and test images in their CI pipeline.
On my side I use Travis CI. I build my container image inside it, run Kubernetes with kind (https://kind.sigs.k8s.io/) inside Travis CI, and then launch my e2e tests.
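The core of that flow is only a few commands; a rough sketch, where the image tag, manifest directory, and label are made-up names:

kind create cluster
kind load docker-image myapp:ci        # make the locally built image visible to the kind nodes
kubectl apply -f ./deploy
kubectl wait --for=condition=ready pod -l app=myapp --timeout=300s
# then point the e2e suite at the cluster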
Here is some additional information on this blog post: https://k8s-school.fr/resources/en/blog/k8s-ci/
And here are the scripts to install kind inside Travis CI in two lines: https://github.com/k8s-school/kind-travis-ci.git. It allows lots of customization on the Kubernetes side (enabling PSP, changing the CNI plugin, etc.).
Here is an example: https://github.com/lsst/qserv-operator
Alternatively, I use GitHub Actions, which makes it easy to install kind (https://github.com/helm/kind-action), provides plenty of features, and offers free worker nodes for open-source projects.
Here is an example: https://github.com/xrootd/xrootd-k8s-operator
Please note that GitHub Actions workers may not scale for large builds/e2e tests; Travis CI scales pretty well.
In my understanding, this workflow could be moved to an on-premises GitLab CI, where your application can interact with other services located inside your network.
One interesting thing is that you do not have to maintain a Kubernetes cluster for your CI; kind will do it for you!

Complex system and Vagrant

In production, we have a web infrastructure as follows:
Load Balancer (haproxy)
API Server (PHP + apache)
Frontend Server (Javascript + nginx)
MySql Server
Redis Server
I'd love to start using Vagrant so that development environments are exactly the same as production, and to make it easy for a new developer to jump straight into the job.
The big question is: how should I build the box?
Should I put everything in one box or should I build more boxes? And how many?
It depends on the convention you've reached with your developers. Ask yourself one question: what type of structure do you want to work in, distributed or centralized?
If the answer is "distributed", you can make one box per project. You won't get confused when you need to bring up a project that was last modified some time ago. But this method consumes a lot of memory and storage space, and it sometimes doesn't make sense if most of your projects are based on the same production environment.
If the answer is "centralized", one box for all projects built on the same environment is enough for you. It saves plenty of time, but it's also easy to get confused when you're looking for an old project. You can set up a Docker container per project inside your Vagrant box.
Additionally, I'd suggest using Packer for box building. It is definitely the right instrument for this goal: it can make a "ready to work" Vagrant box for every virtualization environment and execute shell scripts or configuration-management scripts. Just put everything essential for the production environment into the box; later, developers can add package dependencies through Vagrant provisioning and share them via Vagrantfile settings.
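A minimal sketch of that Packer-to-Vagrant hand-off, assuming Packer and Vagrant are installed (the template, box, and output names are hypothetical):

packer build webstack.json                      # bake the base box from a Packer template
vagrant box add myteam/webstack ./builds/webstack.box
vagrant init myteam/webstack                    # generates a Vagrantfile developers can extend
vagrant up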

How would I create a flexible EC2 Windows 2008 boot script?

If you look at the Linux ecosystem (especially the Ubuntu and Alestic EC2 images), there is a common technique where the VMs are pre-configured to look at the EC2 user-data and use it as a boot script. The nice thing about this approach is that you can write a boot script that further provisions your machine, allowing you to avoid making a new image every time the software that runs on the machine changes.
I want to do the same thing for Windows, but given that I'm a Mac and Linux guy, I'm a bit lost on where to start. My requirements are:
This must run on Windows Server 2008
A bootstrap script needs to start when the machine boots up and read the user-data file by pulling down the contents of http://169.254.169.254/1.0/user-data
The bootstrap script then needs to run the contents of that file as if it were a script
The script embedded in the user-data needs to run in such a way that it has access to the desktop environment (i.e., it can launch a browser, etc.)
I'm not quite sure how services work in Windows or if I need to enable auto-login, so any advice here would be appreciated. The ultimate goal is to run a Java program that launches some custom software that in turn launches a web browser (IE, Firefox, etc) and is capable of taking screenshots.
The screenshot part is interesting, because in the past when I've tried this the only way I could get something other than a black screen was to have UltraVNC or RealVNC boot up as a service, though I don't know why that helped.
I'm looking for answers to three specific questions, as well as any general advice:
Should I be focusing on a Windows service, or auto-login plus a BAT file in the "Startup" folder?
If I use a Windows service, is there anything special that I need to do to make sure desktop access and/or screenshots are available?
Do you recommend any tools for common Linux commands, like curl or wget? Last time I used Windows I used Cygwin a lot, but is there something more appropriate to use here?
I have not tried auto-login on Windows instances in EC2, but here's the support document on how to enable it.
We boot-strap our Windows instances using a custom AMI with a custom Windows 'install' service already installed. The boot-strap installer reads a URL from user-data at startup. The URL points to a ZIP file stored in S3. The installer then downloads, un-zips, and executes the actual application installer -- in our case a simple CMD file.
This setup allows us to have one base AMI and easily overlay 15+ different application configurations (without having to rebuild the AMI). If you only have one application configuration, this may be overkill for your situation.
The only trouble we ran into was our installer service starting too early -- changing the service startup mode to "Automatic (Delayed Start)" fixed that issue.
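For reference, that startup-mode change can be scripted when baking the AMI; a one-line sketch, where the service name is hypothetical (note sc.exe, since plain sc is a PowerShell alias for Set-Content):

sc.exe config BootstrapInstaller start= delayed-auto   # switch the service to Automatic (Delayed Start)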
We wrote our boot-strap installer in Java, launched via YAJSW, because we're comfortable with it. If you just want a few simple Unix tools, most are available pre-compiled for Windows, for example wget.
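A rough PowerShell sketch of that read-download-unzip-execute flow (ours is Java, as mentioned; every URL and path here is made up, and Expand-Archive assumes PowerShell 5 or later, which stock Server 2008 does not ship with):

# read the user-data; here we assume it contains just the URL of a ZIP in S3
$wc = New-Object System.Net.WebClient
$zipUrl = $wc.DownloadString('http://169.254.169.254/1.0/user-data').Trim()

# download and unpack the application installer
$zipPath = "$env:TEMP\app.zip"
$wc.DownloadFile($zipUrl, $zipPath)
Expand-Archive -Path $zipPath -DestinationPath 'C:\bootstrap' -Force

# run the actual installer -- in our case a simple CMD file
& 'C:\bootstrap\install.cmd'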
For something completely different, you could try PsExec to configure the instance after it has booted.
You can try using RightScale's free developer account to create plain PowerShell scripts and associate them with your Windows instances to run at boot time. The RightScale dashboard solves exactly the problems you are trying to solve above.
DISCLAIMER: I work for RightScale.
As for screen capture, CutyCapt is a simple tool you can point at a URL to generate an image.
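From its documentation, the invocation is roughly as follows (the URL and output name here are placeholders):

CutyCapt.exe --url=http://example.com --out=screenshot.png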
UnxUtils is a great solution for those looking for Unix tools on Windows. It has the wget.exe you're looking for; however, using PowerShell to download things is not so bad either:
# create a WebClient and save the page to a local file
$wc = New-Object System.Net.WebClient
$wc.DownloadFile("http://stackoverflow.com", "test.html")
If you can write a batch file to do your setup, then you can run it at startup of the VM by doing this:
1. Run REGEDT32.EXE.
2. Modify the following value within HKEY_CURRENT_USER:
Software\Microsoft\Windows NT\CurrentVersion\Winlogon\ParseAutoexec
1 = autoexec.bat is parsed
0 = autoexec.bat is not parsed
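The same tweak can be scripted in PowerShell; a sketch, assuming the Winlogon key exists (ParseAutoexec is a string value):

# enable parsing of autoexec.bat at logon for the current user
Set-ItemProperty -Path 'HKCU:\Software\Microsoft\Windows NT\CurrentVersion\Winlogon' -Name ParseAutoexec -Value '1'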
As an answer to #3: you can do just about anything you need in a batch file, including downloading from an FTP server (but not from an HTTP server). I am really interested in this stuff, so if you have questions, try asking me.
If you use Elastic Beanstalk, you can use this:
Customizing the Software on EC2 Instances Running Windows
It uses YAML formatting standards, e.g.
packages:
  msi:
    mysql: http://dev.mysql.com/get/Downloads/Connector-Net/mysql-connector-net-6.6.5.msi/from/http://cdn.mysql.com/
or
sources:
  "c:/myproject/myapp": http://s3.amazonaws.com/mybucket/myobject.zip
I know this is a little late to help the original poster, but for anyone still reading, one solution is to use the http://cloudinitnet.codeplex.com/ project. The service is easily installed using a PowerShell script and will create a local administrator account to use while running.
The goal for this project was to replace the Cloud-Init project used in Amazon Linux and Ubuntu.

Newbie question about continuous integration and Selenium tests

I'm very new to CI, but I have recently inherited a project where TeamCity has just been implemented, and I'm slowly getting my head around it. One thing we would like to do is run some Selenium tests as part of the build process. I've created the Selenium tests and can run them successfully via nunit-console on my development machine. The build server builds the project and then deploys it (a Web Forms application, as it happens) to a staging server.
Before each Selenium test we set the database to a known state, i.e. only certain records are in place; that way each test is independent of the others. The problem is that the staging server will be used by real "human" testers, so the database continually being reset (records being removed, etc.) would cause them problems. The question is: should I also deploy the application to a virtual directory on the build server and run the Selenium tests against that, deploying to the staging server only if those tests pass?
Or have I got this stuff completely wrong? If so how do you do it in your organisation?
I suggest that you do not mix your automated and manual testing by allowing your testers to access the server that is staged for your automated tests. This can cause false negatives in both your automated and your manual tests. These 'bugs' are non-deterministic and more than likely never reproducible (very bad news). This will cause you a lot of unnecessary 'bug reports' and build failures.
So here is what you can do...
In addition to your current setup, you can create an extra staged server for your manual testers. This is the least you should do. You should probably create several of them, one for each tester.
And here comes the rant...
In my current project we recently found out that our testers (we had about 10 of them) reused one server. They claimed that since our app is going to have multiple concurrent users, it was a good idea that, while testing the individual functionalities, they were also testing how these functionalities behave while multiple users work on the same server. WRONG!
If multiple users are a concern, there should be test cases for the specific concerns. If functionality #1 can interfere with functionality #2, that should be specifically tested, not just 'tested by luck'.
Before this was explained to our manual testers, we had many false bug reports simply because one tester was stepping on another tester's toes (e.g., tester 1 deleted a record that tester 2 had introduced to the system). These bugs were never reproducible.
Sorry about the rant; I hope this still helps :)