Recommended way to run Apache Geode as a service on Windows - gemfire

I need to run locators and servers on two W2K8 Windows servers. Normally I use nssm to run Java-based applications by wrapping their respective java -jar invocations as services. But Geode uses gfsh.
What is the best practice? One could use the API to spawn an instance and run the necessary start commands:
commandService = CommandService.createLocalCommandService(cache);
CommandStatement startLocatorStmt =
    commandService.createCommandStatement("start locator ...");
Result startLocatorResult = startLocatorStmt.process();
if (startLocatorResult.hasIncomingFiles()) {
    startLocatorResult.saveIncomingFiles(System.getProperty("user.dir") +
        "/commandresults");
}
Or one could mimic the behavior of gfsh.bat and create one NSSM service per role (locator and server), each running the same command line the batch file does and passing the necessary command:
java -Dgfsh=true -Dlog4j.configurationFile=classpath:log4j2-cli.xml -classpath C:\dev\apache-geode-1.1.1\lib\gfsh-dependencies.jar org.apache.geode.management.internal.cli.Launcher start locator ...
Judging from this question, doing it the gfsh way is recommended.

You can use gfsh -e to run gfsh commands from the command line.
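Combining that with nssm, the service registration could be sketched as follows (untested; the service names, install path, and locator/server options are assumptions). Note that gfsh's start commands fork a separate member JVM and then return, so the service manager supervises the gfsh launch rather than the member process itself:

```bat
rem Hypothetical: wrap "gfsh -e" in one NSSM service per role.
nssm install GeodeLocator "C:\dev\apache-geode-1.1.1\bin\gfsh.bat" -e "start locator --name=locator1 --dir=C:\geode\locator1"
nssm install GeodeServer "C:\dev\apache-geode-1.1.1\bin\gfsh.bat" -e "start server --name=server1 --locators=localhost[10334] --dir=C:\geode\server1"
nssm start GeodeLocator
nssm start GeodeServer
```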

Related

Getting an error message while configuring the hub in the command prompt

I am trying to configure the hub from the command prompt, but I get the following message:
C:\Users\shubham.saraf\Documents\IST\IST\Software>java -jar selenium-server-4.5.0.jar - port 4444 -role hub
Selenium Server commands
A list of all the commands available. To use one, run java -jar selenium.jar commandName.
completion Generate shell autocompletions
distributor Adds this server as the distributor in a selenium grid.
hub A grid hub, composed of sessions, distributor, and router.
info Prints information for commands and topics.
node Adds this server as a Node in the Selenium Grid.
router Creates a router to front the selenium grid.
sessionqueue Adds this server as the new session queue in a selenium grid.
sessions Adds this server as the session map in a selenium grid.
standalone The selenium server, running everything in-process.
For each command, run with --help for command-specific help
Use the --ext flag before the command name to specify an additional classpath
to use with the server (for example, to provide additional commands, or to
provide additional driver implementations). For example:
java -jar selenium.jar --ext example.jar;dir standalone --port 1234
Can anyone suggest how to configure the hub?
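The help listing above is the clue: Selenium Grid 4 replaced the old -role and -port flags with subcommands and -- options (and the original command also has a stray space in "- port 4444"). Based on that listing, starting the hub would look something like this (untested sketch):

```shell
java -jar selenium-server-4.5.0.jar hub --port 4444
```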

Running Jenkins tests in Docker containers build from dockerfile in codebase

I want to deploy a continuous integration platform based on Jenkins. As I have various kinds of projects (PHP/Symfony, Node, Angular, …) and as I want these tests to run both locally and on Jenkins, I was thinking about using Docker containers.
The process I’m aiming for is:
A merge request is opened on Github / Gitlab
A webhook notifies Jenkins of the merge request
Jenkins pulls the repo, builds the containers and runs a shell script to execute the tests
Once the tests are finished, Jenkins retrieves the results from one of the containers (through a shared volume) and processes them.
I do not want Jenkins to be in a container.
With this kind of process, I’m hoping to be able to run the tests very easily on each developer machine with something like a docker-compose up and then, in one of the containers, ./tests all.
I’m not very familiar with Jenkins. I’ve read a lot of documentation, but most of it suggests defining Jenkins slaves for each kind of project beforehand. I would like everything to be as dynamic as possible and to require as little configuration on Jenkins as possible.
I would appreciate a description of your test process if you have ever implemented something similar. If you think what I’m aiming for is impossible, I would also appreciate if you could explain to me why.
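For reference, the local workflow described above could be sketched with a minimal docker-compose.yml like this (the service name, build layout, and test entry point are assumptions):

```yaml
version: "3"
services:
  tests:
    build: .                  # built from the Dockerfile in the codebase
    volumes:
      - .:/workspace          # share the repo (and test results) with the host
    working_dir: /workspace
    command: sleep infinity   # keep the container alive so you can exec into it
```

Then docker-compose up -d followed by docker-compose exec tests ./tests all would run the suite locally.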
A setup I suggest is Docker in Docker.
The base is a derived Docker image, which extends the jenkins:2.x image by adding a Docker command-line client.
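Such a derived image might look like this (the base tag and install method are assumptions; any way of getting a Docker CLI into the image works):

```dockerfile
FROM jenkins:2.60.3
USER root
# Install a Docker command-line client; the Docker daemon itself stays on the host.
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
USER jenkins
```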
Jenkins is started as a container with its home folder (e.g. /var/jenkins_home, mounted from the Docker host) and the Docker socket file mounted, so that it is able to start Docker containers from Jenkins build jobs.
docker run -d --name jenkins -v /var/jenkins_home:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock ... <yourDerivedJenkinsImage>
To check whether this setup is working, just execute the following command after starting the Jenkins container:
docker exec jenkins docker version
If the "docker version" output does NOT show:
Is the docker daemon running on this host?
everything is fine.
In your build jobs, you could configure the process you mentioned above. Let Jenkins simply check out the repository. The repository should contain your build and test scripts.
Use a freestyle build job with a shell execution. A shell execution could look like this:
docker run --rm --volumes-from jenkins <yourImageToBuildAndTestTheProject> bash $WORKSPACE/<pathToYourProjectWithinTheGitRepository>/build.sh
This command simply starts a new container (to build and/or test your project) with the volumes from the jenkins container, which means the cloned repository will be available under $WORKSPACE. So if you run "bash $WORKSPACE/<pathToYourProjectWithinTheGitRepository>/build.sh", your project will be built within a container of "yourImageToBuildAndTestTheProject". After running this, you could start other containers for integration tests, or combine this with "docker-compose" by installing it on the derived Jenkins image.
The advantage is the minimal configuration effort you have within Jenkins - only the SCM configuration for cloning the Git repository is required. Since each Jenkins job uses the Docker client directly, you could use one or more Docker images per project to build and/or test, WITHOUT further Jenkins configuration.
If you need additional configuration e.g. SSH keys or Maven settings, just put them on the Docker host and start the Jenkins container with the additional volumes, which contain those configuration files.
Using this Docker option within the shell execution of your build jobs:
--volumes-from jenkins
automatically makes the workspace and configuration files available in each of your build containers.
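For example, the additional-volumes suggestion above could look like this (the host paths under /opt/jenkins-config are assumptions):

```shell
docker run -d --name jenkins \
  -v /var/jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /opt/jenkins-config/.ssh:/var/jenkins_home/.ssh:ro \
  -v /opt/jenkins-config/.m2/settings.xml:/var/jenkins_home/.m2/settings.xml:ro \
  <yourDerivedJenkinsImage>
```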

I need to run multiple FitNesse tests from the command line at the same time. How can I get around the port restriction?

I need to run multiple FitNesse tests from the command line at the same time. How can I get around the port restriction? Right now, I start the first .bat file to run one suite. When I try to start the second .bat file, I get an error that the port is in use. My .bat files consist of the following command:
java -jar fitnesse-standalone.jar -p 80 -c "MeasureTestSuite.COLighting?suite&format=text"
You can change the port of the wiki with the -p switch, and use -DSLIM_PORT= to control the port used by the Slim server (if you use the Slim test system):
java -DSLIM_PORT=5555 -jar fitnesse-standalone.jar -p 8080 -c "MeasureTestSuite.COLighting?suite&format=text"
Setting the Slim port is only needed if the runs really start concurrently, not when there is a couple of seconds between the execution of the commands (FitNesse tries to find a free port, but does this a bit awkwardly).
P.S. The next release of FitNesse will no longer require manual configuration of the Slim port for concurrent runs, IF Slim is run in-process (i.e. DEBUG mode). So, for instance, you can have multiple concurrent test runs by a build server using the jUnit integration (which already removes the need to select a wiki port) without having to worry about ports at all.
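Putting both switches together, two truly concurrent runs could be sketched like this (sh syntax for brevity; on Windows you would use two start commands or separate .bat files, and the second suite name plus the port numbers are placeholders):

```shell
java -DSLIM_PORT=5555 -jar fitnesse-standalone.jar -p 8080 -c "MeasureTestSuite.COLighting?suite&format=text" &
java -DSLIM_PORT=5566 -jar fitnesse-standalone.jar -p 8081 -c "MeasureTestSuite.AnotherSuite?suite&format=text" &
wait
```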
You can change the port with the -p switch:
java -jar fitnesse-standalone.jar -p 8080 -c "MeasureTestSuite.COLighting?suite&format=text"

How do I run puppet agent inside a Docker container to build it out?

If I run a docker container with CMD ["/usr/sbin/sshd", "-D"], I can have it running daemonized, which is good.
Then, I want to run puppet agent too, to build out said container as, say, an Apache server.
Is it possible to do this and then expose the apache server?
Here is another solution. We use the ENTRYPOINT Dockerfile instruction as described here: https://docs.docker.com/articles/dockerfile_best-practices/#entrypoint. Using it, you can run the puppet agent and other services in the background before the instruction from CMD, or the command passed via docker run, is executed.
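A sketch of such an entrypoint script, assuming puppet is installed in the image (the agent flags shown are standard, but the overall layout is an assumption):

```shell
#!/bin/sh
# entrypoint.sh - run a one-off puppet agent in the background,
# then hand over PID 1 to the CMD (e.g. sshd -D).
puppet agent --onetime --no-daemonize --verbose &
exec "$@"
```

The Dockerfile would then declare ENTRYPOINT ["/entrypoint.sh"] while keeping CMD ["/usr/sbin/sshd", "-D"], and the Apache port can be exposed with EXPOSE 80 plus -p 80:80 on docker run.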

How can I run Mule without using the Java wrapper

I'm trying to run mule-3.1.2 on 64bit IBM AIX, but the java wrapper can't be executed (Found but not executable.).
I'm sure I have set the right permissions.
Besides, I also can't run Mule on an ia64 machine; same problem.
So can I run Mule just as a plain Java application, not using the Java wrapper?
There are different ways to start Mule without using the wrapper: besides embedding it in a Java or web application, you can also invoke main() on org.mule.MuleServer.
Edit 1:
A good resource suggested by @rocwing is: Configuring Mule to Run From a Script
Edit 2:
Below is a script that can start Mule standalone: logging is not correctly configured and the shutdown sequence is a little... challenged, but it should get you started.
#!/bin/sh
# Build the classpath from all jars in Mule's boot, mule and opt folders
for f in $MULE_HOME/lib/boot/*.jar $MULE_HOME/lib/mule/*.jar $MULE_HOME/lib/opt/*.jar
do
  MULE_CLASSPATH="$MULE_CLASSPATH:$f"
done

java -Djava.endorsed.dirs=$MULE_HOME/lib/endorsed -cp "$MULE_CLASSPATH" \
  org.mule.MuleServer -config ~/my/mule-config.xml