We have several parallel development groups working on different things in separate environments. Each group has a setup with one Jenkins server and two Windows slaves executing Selenium NUnit tests.
Is it possible to have all the slave instances in a pool that each of the Jenkins servers can pick from? We are using JNLP because some of the browser tests need to run on an interactive desktop. I thought perhaps I could start a JNLP slave for each server instance on each machine, but that seemed wrong, as each server would have no knowledge of the other servers' use of it. Is there any way to make a slave available to multiple servers?
I don't think you can do what you are looking for.
You can run multiple slaves on one computer, but as you said, there is no way to keep multiple servers from trying to access the same desktop.
A better solution is probably to combine your Jenkins servers. You can use the security settings and views so that regular users are not even aware of the other projects being run in parallel, while allowing one Jenkins server to coordinate all of the builds (which is what you want).
You may want to check out CloudBees Ops Center (http://www.cloudbees.com/joc), in particular the Share Executors (Slaves) Between Masters feature. That would do exactly what you want, but for a bit of a price.
I am new to Appium/Selenium parallel testing, and I was wondering whether one can run different tests concurrently across multiple devices. My team needs to reduce the total runtime of our UI tests and is not concerned with different OS versions affecting the behaviour of the application for these specific tests. I have been reading through many posts and searching for answers, but all I can seem to find are articles, tutorials and forums on how to run the same test in parallel on multiple devices.
Can I run different tests concurrently on multiple devices without kicking off different tests manually, or is that a limitation of Appium? Ideally this would be implemented using an open source solution.
(Right now we are trying to use a JUnit approach for testing due to specific limitations of other tools. All tests are being written in Java.)
Thanks for your time.
Depending on your setup, you can accomplish this. However, a lot of your build automation and device management will need to be custom-built by you or your team, so you will not be able to use an out-of-the-box solution.
I've accomplished the same with both Selenium and Appium -- you will need a test framework that allows for test execution with parameters, and your devices will need to be connected to separate USB hubs that each have their own virtual server attached.
Using NUnit, here's my approach:
Generate .txt files for each different set of tests I want to run -- test_list_1.txt, test_list_2.txt, etc. Each list contains a different group of test cases to run.
Write a build script to clean & build your project from scratch -- for C#, I use Cake.
Set up a job in Jenkins that executes your build script and calls NUnit's console runner, which takes a test list as a parameter (see the command sketch below). This initiates a test execution against a list of test cases.
You should be able to build your Jenkins job against any test list you want, so you now have the ability to run your automation against different tests, as mentioned in your problem description.
Connect your virtual machines (which connect to your Appium devices) to Jenkins and add them as executors on your job. Now you have multiple machines to run your job against.
With this set up, you can run as many jobs as you have machines -- four VMs means four jobs, which means you can run four different sets of test cases concurrently.
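To make step 3 concrete, the Jenkins build step ends up calling something like the following (the assembly path and list name are placeholders, and the exact switches depend on your NUnit console version):

    nunit3-console.exe Build\Output\UiTests.dll --testlist=test_list_1.txt --result=results\run1.xml

Parameterizing the job on the test list file name is what lets a single job definition serve all of the different test sets.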
Setting this up on my end was completely custom -- I used certain tools to accomplish individual steps, but it worked for our needs and we did accomplish concurrent execution against different sets of test cases.
What you are asking for is basically not possible: you can't run different test cases on different devices.
You can, however, run the same test cases on a wide range of devices using Hive, BrowserStack or AWS Device Farm.
Hope this helps.
You can run your tests locally on multiple devices by creating multiple instances of the Appium server. Each Appium server instance should listen on its own address and port, and you should set your capabilities for each instance accordingly.
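As a minimal sketch with the Java client (the device serials, ports and the /wd/hub base path are assumptions that vary with your devices and Appium version): start one server per device, e.g. appium -p 4723 and appium -p 4725, then point one driver at each:

    import java.net.URL;
    import io.appium.java_client.android.AndroidDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;

    public class TwoDevices {
        public static void main(String[] args) throws Exception {
            // Device 1, served by the Appium instance listening on port 4723
            DesiredCapabilities caps1 = new DesiredCapabilities();
            caps1.setCapability("platformName", "Android");
            caps1.setCapability("automationName", "UiAutomator2");
            caps1.setCapability("udid", "SERIAL_1");  // placeholder device serial
            caps1.setCapability("systemPort", 8201);  // must be unique per device
            AndroidDriver driver1 = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps1);

            // Device 2, served by a second Appium instance on port 4725
            DesiredCapabilities caps2 = new DesiredCapabilities();
            caps2.setCapability("platformName", "Android");
            caps2.setCapability("automationName", "UiAutomator2");
            caps2.setCapability("udid", "SERIAL_2");
            caps2.setCapability("systemPort", 8202);
            AndroidDriver driver2 = new AndroidDriver(new URL("http://127.0.0.1:4725/wd/hub"), caps2);

            // Each driver can now execute a different test flow, e.g. from two runner threads.
            driver1.quit();
            driver2.quit();
        }
    }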
There is another solution as well, but it's a bit costly: AWS Device Farm. AWS provides multiple real devices hosted on their servers which you can use to execute your customised test suites. They initially give you 1000 free test minutes. You have to create a Maven project for your test scripts. I prefer using TestNG rather than JUnit.
We want to start working with Liferay, but the server is too heavy and the developers' computers don't have enough RAM. We want to centralize the server instance.
In other words, we want to build a development server where all developers can connect and directly develop in their web browser, compile, view the result and push the code to a Git repository.
I found some good cloud IDEs like Eclipse Che and a good Maven archetype for Liferay projects, so I can build the project with Maven. But now I want to know whether it is possible to configure Liferay so that every developer can work without disturbing the others, and if so, how.
The developers can share the same database and can use different ports. Maybe the server can generate temporary URLs like some online cloud editors do.
I found this post Liferay With Multiple Server Instances, but I don't think that is the best way, because it creates one server per project, which seems too heavy.
If necessary, we have Kubernetes in our infrastructure.
Liferay's Tomcat bundle, by default, is configured to take a maximum of 2.5G for the process, but it can run with far less. The default was only recently bumped up, because many people never change it and then wonder why production systems run out of memory. For one concurrent user (the sole developer) on a machine, I guess the previous default of 1G heap space is enough. Are you saying that's too much for your developers' machines?
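For what it's worth, that limit is just a JVM option. In the Tomcat bundle it usually lives in tomcat-*/bin/setenv.sh; the exact file and defaults vary by bundle version, so treat this as a sketch:

    # setenv.sh -- lower the maximum heap for a single-developer instance
    CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx1024m"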
Having many developers on a shared server poses one problem: yes, you may deploy different code from different machines, but how about setting a breakpoint? Can you connect with multiple debuggers? If something fails, how do you know whose recent deployment caused the failure?
Sharing a server is an integration technique, not a development technique. If your developers don't have enough memory available for running their own Liferay server next to their IDE, it's a lot cheaper to upgrade their machines than to slow them down when everybody is accessing the same server and nobody can properly debug. You pay for the memory once, but you pay your waiting developers by the hour.
Is it possible to share one server? Sure it is.
Is it possible to share one server without troubling each other? I doubt it.
When you say you think it's too heavy: what are you basing that assumption on? What do the actual developer machines look like, and what keeps you from investing in the extra memory?
It's trivial to share some infrastructure - i.e. have all of them connect to the same database server (and give everyone their own schema). But just the extra effort and setup might require you to pay the developers by the hour as much as you'd otherwise pay for a couple of memory chips.
And yet another option is: Run Liferay on a remote server, but keep 1 instance per developer. This way you don't need the local memory, but can have the memory in the cloud. Calculate if you pay more for remote cloud machines than for local memory - that decision is up to you.
I'd like to deploy kubernetes on a large physical server (24 cores) and I'm uncertain as to a number of things.
What are the pros and cons of creating virtual machines for the k8s cluster versus running on bare metal?
I have the following considerations:
Creating VMs will allow for workload isolation. New VMs for experiments can be created and assigned to devs.
On the other hand, with k8s running on bare metal, a new namespace can be created for each developer for experimentation, and they can run their code in it. After all, their code should be running in Docker containers.
Security:
Having VMs would limit the amount of access given to future maintainers, limiting the amount of damage that could be done. On the other hand, the primary task for any future maintainers would be adding/deleting nodes, and they would require bare-metal access to do that.
Authentication:
At the moment devs would only touch the server when their code runs through the CI pipeline and their deployments are rolled out. But what about viewing logs? Could we set up tiered kubectl authentication to allow devs to access only the namespaces that have been assigned to them? (I believe this should be possible with the k8s namespace authorization plugin.)
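I am imagining something along the lines of a per-namespace Kubernetes RBAC role; a minimal sketch (the namespace, user and role names are made up for illustration):

    # Role: read pods and their logs, only inside the "team-a" namespace
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      namespace: team-a
      name: log-reader
    rules:
    - apiGroups: [""]
      resources: ["pods", "pods/log"]
      verbs: ["get", "list", "watch"]
    ---
    # Bind the role to a single developer
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      namespace: team-a
      name: log-reader-binding
    subjects:
    - kind: User
      name: dev-alice
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: log-reader
      apiGroup: rbac.authorization.k8s.io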
A number of vms already exist on the server. Would this be an issue?
24 cores and doubts... that is a lot of cores for a single server.
For Kubernetes, however, this is not relevant:
Kubernetes can use differently sized servers and utilize them to the maximum. However, if you combine the master server processes and the node/worker processes on a single server, you might create unwanted resource issues. You can manage those with namespaces, as you already mention.
What we do is use continuous integration with namespaces in a single dev/qa Kubernetes environment, in which each change gets its own namespace (so we run many, many namespaces), and we run full environment deployments in those namespaces. A bunch of shell scripts are used to manage this (a sketch follows below). This works with a large server like yours as well as with smaller (or virtual) boxes. The benefit of virtualization for you could mainly be in splitting the large box into smaller ones, so that you can also use it for purposes other than just Kubernetes (Kubernetes won't run everything: no MS Windows, no desktops, no kernel modules for VPN purposes, etc.).
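Stripped down, those per-change scripts do little more than this (the namespace naming is illustrative):

    # create an isolated namespace for the change and deploy the full stack into it
    kubectl create namespace "ci-${CHANGE_ID}"
    kubectl apply --namespace "ci-${CHANGE_ID}" -f deploy/
    # ... run the tests against the services in that namespace ...
    # tear everything down when the change is merged or abandoned
    kubectl delete namespace "ci-${CHANGE_ID}"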
I would separate dev and prod in the form of different VMs. I once had a webapp inside Docker which used too many threads, so the Docker daemon on the host crashed. Luckily it was limited to one host. You can protect against this by setting limits (see below), but it's a risk: one mistake in dev could bring down prod as well.
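For example (the values here are arbitrary), per-container limits like these can keep one runaway container from starving the host:

    docker run --memory=512m --cpus=1.0 --pids-limit=256 my-webapp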
I think the answer is "it depends!", which is not really an answer. Personally, I would split up the machine using VMs and deploy that way. You've got better flexibility as to how much of the server's resources you carve out, and you can easily create new environments and destroy them just as easily.
Even if these VMs are really big, I think it's still easier to manage, also given that you have existing VMs on the machine.
That said, there's no technical reason that you can't run a single-node server, but you may run into problems with downtime during upgrades (if that's an issue), and if that server needs to be patched or rebooted, your entire cluster is down.
I would look at your environment's needs for HA and uptime, as well as how you are going to deploy VMs (if you go that route), and decide what works best for you.
I have many VMs which are used as part of a Selenium Grid, some as RCs and some as hubs. Due to the large number of VMs in use, maintaining the grid is now a big task. To change an RC to point to a different hub, I have to:
log in to that machine
kill the current RC
run the java command again with a different hub URL
Yes, I can use a batch script to restart all the machines. But what if I want to change just one machine?
Is it possible to create an application using Java RMI which can run the required commands to kill, start or restart the RCs or hub? Has anyone ever tried to create such an application?
You should have a look at Selenium Grid 2.0. It's been designed with exactly what you ask in mind.
You can create your own proxy extending either the Selenium 1 (RC) or Selenium 2 (WebDriver) protocol, and implement a list of interfaces that allow it to react to certain events (a sketch follows the list below).
You could, for instance:
Have one unique hub controlling all the nodes and refine the routing by implementing the capability matcher.
Update the grid console to have some "reconfigure node" functionality directly there.
Add some rules on each node, for instance restarting the VM and the server within it automatically every X tests or when a specific event is detected.
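As a sketch of such a proxy against the Grid 2 API (package and class names may differ slightly between Selenium releases, and the actual restart logic is left out):

    import org.openqa.grid.common.RegistrationRequest;
    import org.openqa.grid.internal.Registry;
    import org.openqa.grid.internal.TestSession;
    import org.openqa.grid.selenium.proxy.DefaultRemoteProxy;

    // Custom node proxy: revert/restart the node's VM after every N tests.
    public class RestartingProxy extends DefaultRemoteProxy {

        private static final int TESTS_BEFORE_RESTART = 50; // arbitrary threshold
        private int finishedTests = 0;

        public RestartingProxy(RegistrationRequest request, Registry registry) {
            super(request, registry);
        }

        @Override
        public void afterSession(TestSession session) {
            super.afterSession(session);
            if (++finishedTests >= TESTS_BEFORE_RESTART) {
                finishedTests = 0;
                revertVm();
            }
        }

        private void revertVm() {
            // hypothetical helper: snapshot revert + server restart via your VM vendor's API
        }
    }

The node is then registered with the hub using the -proxy option and the fully qualified class name of this proxy.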
I wouldn't start an RMI-based solution. If you have VMs, you should have access to the VM API of the solution you chose, and you can use it to revert to a known clean state and restart from there each time. That will ensure you don't have left-over crashed browsers and things like that.
Thanks,
François
I know this is an old question, but how about setting up Puppet on your VMs? Then you just need to specify one config on the master.
Currently, I am writing an application that utilizes WMI to scan all the computers on our Active Directory network.
I'm interested in testing the program against all flavors of Windows machines in a testing environment.
Is there a way to simulate this environment in VMware or something similar?
Any ideas?
VMware works well and can host many virtual computers on a single physical computer. You can also put the virtual computers on your Active Directory network.
If your goal is to set up a separate large network for testing that has its own AD server, you can look into Amazon EC2. The advantage here is that once you set up your servers, you can turn them on and off as needed and only pay for the time actually used ($0.12 per hour).
http://aws.amazon.com/
You can use network simulation: http://en.wikipedia.org/wiki/Network_simulation
A good GPL tool is ns-3: http://www.nsnam.org/
You have two options.
You probably have it right: with VMware this is easy; try looking for cloning tools. If you plan on just copying and pasting the image, you will run into several problems (repeated computer GUIDs, repeated network computer names, etc.).
You can also "mock" the WMI response by wrapping the WMI methods that you want to call behind an interface, using Rhino Mocks or NMock if you are working in .NET (which I assume you are).