I need to monitor the CPU and memory usage of an embedded system during automated system tests driven by Jenkins.
As of now, Jenkins flashes my target with the newest build and afterwards runs some tests. The system runs ARM Linux, so I would be able to write a script that polls the info over SSH during the tests.
My question is whether there is already a tool that provides this functionality. If not, how would I make Jenkins process a file and provide a graph of the CPU and memory info during these tests?
Would this plugin suit your needs?
Monitoring plugin: Monitoring of Jenkins
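If that plugin only covers the Jenkins side rather than the target itself, here is a minimal polling sketch, assuming key-based SSH access to the board (the host name is a placeholder, and the idle-jiffies column is cumulative, so utilization comes from the delta between consecutive samples):

#!/bin/sh
# Sample CPU idle jiffies and free memory on the target once per second.
TARGET="root@target"     # placeholder: your board's address
OUT="usage.csv"
echo "epoch,idle_jiffies,memfree_kb" > "$OUT"
while true; do
  SAMPLE=$(ssh "$TARGET" \
    "awk '/^cpu /{print \$5}' /proc/stat; awk '/^MemFree/{print \$2}' /proc/meminfo")
  echo "$(date +%s),$(echo $SAMPLE | tr ' ' ',')" >> "$OUT"
  sleep 1
done

The resulting CSV can then be archived as a build artifact and graphed per build with the Jenkins Plot plugin, which reads CSV files.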
Here is my scenario: I have a Windows VM with two Mule runtimes installed on it (Mule1 and Mule2).
Now, if I have to distribute 60% of the VM's CPU to Mule1 and 40% to Mule2, how can it be done?
Is that even possible?
There is a concept called CPU affinity for systems with more than one core or CPU. The operating systems you are using have tools for assigning cores to a process, but I'm not aware of an out-of-the-box feature to assign or limit a percentage of CPU usage per process.
Linux:
You can use the taskset command to choose which cores the Mule process runs on.
Example:
taskset -c 0,1 ./mule
Source: https://help.mulesoft.com/s/article/How-to-set-CPU-affinity-for-Mule-ESB-process
Windows:
In the Task Manager, you can right-click the java.exe and wrapper-windows-x86-64.exe processes, select "Set affinity", and choose the allowed processors.
This article has PowerShell commands to do the same from the command line: https://help.mulesoft.com/s/article/How-to-set-CPU-affinity-for-a-Mule-ESB-process-in-Windows-as-a-Service
It is a completely different topic, but Docker allows something similar per container.
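For example, Docker's standard CPU flags can express that kind of weighting (the image names are placeholders; --cpu-shares sets relative weights that only take effect when the CPU is contended):

# Give mule1 ~60% and mule2 ~40% of contended CPU time
docker run -d --cpu-shares=600 --name mule1 mule1-image
docker run -d --cpu-shares=400 --name mule2 mule2-image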
I am trying to enable Xvfb in a Jenkins declarative pipeline so I can run headless Selenium tests from the pipeline definition.
I have been able to run Selenium tests in a standard Jenkins (Linux) job: after installing the plugin, Xvfb can be enabled under the build environment, and then a Python virtualenv can be set up and the Selenium tests executed from the shell.
But I want a pipeline setup, and in pipeline-type jobs the Xvfb option doesn't show up. I haven't been able to find out whether and how it can be enabled from the declarative pipeline code itself. Is it possible?
Is there any workaround?
Yes, you can. Every pipeline job has a "Pipeline Syntax" link on the left side of the job page; the Snippet Generator there helps a lot with non-obvious cases. So, for your case:
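A minimal declarative sketch, assuming the Xvfb plugin is installed (the screen value, virtualenv activation, and test command are placeholders for your own; the Snippet Generator will produce the exact wrap syntax for your installation):

pipeline {
    agent any
    stages {
        stage('Selenium tests') {
            steps {
                // wrap() is contributed by the Xvfb plugin
                wrap([$class: 'Xvfb', screen: '1920x1080x24']) {
                    sh '''
                        . venv/bin/activate
                        python -m pytest tests/
                    '''
                }
            }
        }
    }
}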
I have Apache JMeter installed on my local machine; it ships with .bat and .sh executables, and I run it in GUI mode on Windows. Now I want to execute it on a server in a Linux environment. How can I run it in GUI mode on the server?
Actually, you should not be running a load test in GUI mode; GUI mode is meant for test development and/or debugging. When it comes to executing the load test itself, make sure you run JMeter in command-line non-GUI mode, like:
jmeter -n -t test.jmx -l result.jtl
In regards to how you can proceed, there are several options:
Develop the test under Windows and transfer the .jmx file to the Linux box using pscp, WinSCP, FileZilla, etc.
Develop the test under Windows and run JMeter in distributed mode, i.e. have the JMeter master on Windows and a JMeter slave on Linux; see the JMeter Distributed Testing Step-by-step guide for more details.
Install an X Window Server implementation on the Windows box (e.g. Xming or Cygwin/X) and use X forwarding, so you run JMeter on Linux and see its GUI on Windows.
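For the X forwarding option, a minimal sketch (the host name is a placeholder; it assumes an X server such as Xming is running locally and X11 forwarding is enabled on the server):

ssh -X user@linux-server   # forwards X11; DISPLAY is set inside the session
jmeter                     # run inside the SSH session; the GUI appears on Windows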
Are the control scripts (start-dfs.sh and start-mapred.sh) used by CDH to start daemons on a fully distributed cluster?
I downloaded and installed CDH 5 but could not find the control scripts in the installation, and I am wondering how CDH starts the daemons on the slave nodes.
Or, since the daemons are installed as services, do they start at system start-up, so there is no need for control scripts in CDH, unlike Apache Hadoop?
Not as such, no. If I recall correctly, Cloudera Manager invokes the daemons using Supervisor (http://supervisord.org/), so you manage the services in CM. CM itself runs an agent process as a service on each node, and you can find its start script in /etc/init.d. There is no need for you to install, start, or stop anything by hand: you install, deploy configuration, control, and monitor services in CM.
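For example, the per-node agent ships as an ordinary init service, so you can inspect it the usual way (service name as shipped with CDH 5):

sudo service cloudera-scm-agent status
sudo service cloudera-scm-agent restart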
I am trying to get Hudson to run my Ruby-based Selenium tests. I have installed the Selenium Grid plugin, but I don't want to have the RCs running as slaves in a Hudson cluster. The reason is that I don't want to waste the next six years of my life trying to configure each of my projects in various Windows environments.
Hudson currently pulls each project from GitHub and builds it just fine. With a regular Selenium Grid setup, I am able to edit the grid_configuration.yml file to represent the various environments I wish to test against, then pass environment variables to the rake task that runs the tests (sketched below), i.e. which browser/platform to run on and the URL of the application under test, usually a port on the hub machine running in a specific environment.
In this way, the machines on which the RCs run don't need to know anything about the source code of my apps; they just need to have selenium-grid installed and be registered with the hub.
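For concreteness, the invocation is roughly like this (the task name and variable names are placeholders from my setup):

# Run the suite against a given browser/platform and app URL
BROWSER=firefox PLATFORM=linux APP_URL=http://hub-host:3000 rake test:selenium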
Is there a way of elegantly emulating this with Hudson?
Does Hudson have a concept of build agents? I do not know much about Hudson. We are using AnthillPro at work and have set up an AnthillPro agent. The source code is downloaded to the agent, and the agent is responsible for running the Maven goal that runs the tests. It works pretty well for us, as the RC machines are not part of the build environment; the tests are responsible for talking to the Selenium hub, getting new sessions, and doing the testing.
I hope this helps.
I chose not to use the plugin in order to take advantage of the newer Grid version. I cloned a few VMs with a startup script that runs ant launch-remote-control from a shared drive that they can all access. Hudson doesn't have, and doesn't need, any access to the Grid nodes, and they aren't slaves to Hudson. I altered my Hudson server to launch the hub on machine startup. This setup allows me to run the Grid normally, with or without Hudson.
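For illustration, each VM's startup script amounted to something like this (the path, port, hub URL, and environment name are placeholders; the ant target and properties follow the Selenium Grid 1.x distribution):

#!/bin/sh
# Register this VM as a remote control with the shared hub
cd /mnt/shared/selenium-grid
ant -Dport=5555 -DhubURL=http://hub-host:4444 \
    -Denvironment="Firefox on Linux" launch-remote-control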