OpenWhisk serverless setup on-premise - Apache

I want to set up Apache OpenWhisk on-premise in my organization so that we can use it internally. I am not able to find much about this online. I tried cloning the code from Git and building it on Windows, but it doesn't work. Kindly help.

Follow these steps to start the platform in a virtual machine in your local environment.
# Clone openwhisk
git clone --depth=1 https://github.com/openwhisk/openwhisk.git
# Change directory to tools/vagrant
cd openwhisk/tools/vagrant
# Run script to create vm and run hello action
./hello
If this works, you should see the following output.
wsk action invoke /whisk.system/utils/echo -p message hello --result
{
"message": "hello"
}
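Once the hello action works, additional wsk commands can be run inside the VM from tools/vagrant; for example (the system action shown here is just another built-in utility):
# invoke a built-in system action inside the VM from the host
vagrant ssh -- wsk action invoke /whisk.system/utils/date --result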
If you encounter problems with these steps, please open an issue in the GitHub repository for the project.

Alternatively, deploy OpenWhisk using the Ansible playbooks in high-availability mode.
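A rough sketch of that flow, assuming the playbook and environment names documented in the repository's ansible/README.md (they may differ between versions):
# from the cloned openwhisk repository
cd ansible
ansible-playbook -i environments/distributed setup.yml
ansible-playbook -i environments/distributed couchdb.yml
ansible-playbook -i environments/distributed initdb.yml
ansible-playbook -i environments/distributed wipe.yml
ansible-playbook -i environments/distributed openwhisk.yml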


RedisJSON rejson.so file not created

I am trying to set up RedisJSON on redis-server:
git clone https://github.com/RedisLabsModules/rejson.git
cd rejson/
make
After the make command I don't have a file called rejson.so anywhere, even though make finished without error.
That's why I can't load the module into my Redis.
I am using Ubuntu 20.04.
I have also tried to clone from https://github.com/RedisJSON/RedisJSON.git and it didn't work either.
Cloning master from each of those repositories produces a librejson.so in the target/release directories. Cloning the 1.0 branch produces rejson.so and the docs in each branch point to the appropriate shared object.
Can you share your output?
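For reference, a minimal sketch of building the master branch and loading the resulting module (a Rust toolchain is assumed; the file name matches the master branch naming):
git clone https://github.com/RedisJSON/RedisJSON.git
cd RedisJSON
cargo build --release
# load the freshly built module into a local Redis server
redis-server --loadmodule ./target/release/librejson.so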

Running Jenkins tests in Docker containers built from a Dockerfile in the codebase

I want to deploy a continuous integration platform based on Jenkins. As I have various kinds of projects (PHP / Symfony, Node, Angular, …) and as I want these tests to run both locally and on Jenkins, I was thinking about using Docker containers.
The process I'm aiming for is:
A merge request is opened on Github / Gitlab
A webhook notifies Jenkins of the merge request
Jenkins pulls the repo, builds the containers and runs a shell script to execute the tests
Once the tests are finished, Jenkins retrieves the results from one of the containers (through a shared volume) and processes the results.
I do not want Jenkins to be in a container.
With this kind of process, I'm hoping to be able to very easily run the tests on each developer machine with something like a docker-compose up and then, in one of the containers, ./tests all.
I'm not very familiar with Jenkins. I've read a lot of documentation, but most of it suggests defining Jenkins slaves for each kind of project beforehand. I would like everything to be as dynamic as possible and to require as little configuration in Jenkins as possible.
I would appreciate a description of your test process if you have ever implemented something similar. If you think what I'm aiming for is impossible, I would also appreciate it if you could explain why.
A setup I suggest is Docker in Docker.
The base is a derived Docker image, which extends the jenkins:2.x image by adding a Docker command-line client.
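A minimal sketch of such a derived image (the base tag and the package used to install the Docker client are assumptions; adjust them to your base image's distribution):
FROM jenkins:2.60.3
USER root
# install the Docker CLI; the host's daemon is reached through the mounted socket
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
USER jenkins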
Jenkins itself is started as a container, with its home folder (a folder, e.g. /var/jenkins_home, mounted from the Docker host) and the Docker socket file mounted, so that it is able to start Docker containers from Jenkins build jobs.
docker run -d --name jenkins -v /var/jenkins_home:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock ... <yourDerivedJenkinsImage>
To check if this setup is working, just execute the following command after starting the Jenkins container:
docker exec jenkins docker version
If the "docker version" output does NOT show:
Is the docker daemon running on this host?
Everythin is fine.
In your build jobs, you could configure the process you mentioned above. Let Jenkins simply check out the repository. The repository should contain your build and test scripts.
Use a freestyle build job with a shell execution. A shell execution could look like this:
docker run --rm --volumes-from jenkins <yourImageToBuildAndTestTheProject> bash $WORKSPACE/<pathToYourProjectWithinTheGitRepository>/build.sh
This command simply starts a new container (to build and/or test your project) with the volumes from the jenkins container, which means that the cloned repository will be available under $WORKSPACE. So if you run "bash $WORKSPACE/<pathToYourProjectWithinTheGitRepository>/build.sh", your project will be built within a container of "yourImageToBuildAndTestTheProject". After running this, you could start other containers for integration tests or combine this with "docker-compose" by installing it on the derived Jenkins image.
The advantage is the minimal configuration effort within Jenkins - only the SCM configuration for cloning the Git repository is required. Since each Jenkins job uses the Docker client directly, you can use one or more Docker images per project to build and/or test, WITHOUT further Jenkins configuration.
If you need additional configuration e.g. SSH keys or Maven settings, just put them on the Docker host and start the Jenkins container with the additional volumes, which contain those configuration files.
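For example (the host paths for the SSH keys and Maven settings are illustrative):
docker run -d --name jenkins \
  -v /var/jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /opt/jenkins/.ssh:/var/jenkins_home/.ssh:ro \
  -v /opt/jenkins/.m2:/var/jenkins_home/.m2:ro \
  <yourDerivedJenkinsImage>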
Using this Docker option within the shell execution of your build jobs:
--volumes-from jenkins
automatically makes the workspace and those configuration files available in each of your build containers.

Error creating an Apache Apollo broker

I downloaded and unzipped the Apache Apollo distribution as described in their site.
~/Developer/Web/MQTT/apache-apollo-1.7.1/bin/apollo create mybroker
I got the below output in the Terminal.
Creating apollo instance at: mybroker
ERROR: mybroker/etc/log4j.properties (No such file or directory)
That command is supposed to create the etc subdirectory, among others.
Any idea why this error is occurring?
Okay, I resolved it. I installed Apollo via Homebrew successfully. Then I cd'ed to /var/lib and ran the following command, this time with sudo.
sudo apollo create mybroker
It created the broker successfully. Then I ran the below command to run it. Again with sudo.
sudo mybroker/bin/apollo-broker run
This started the broker, and I could log in via the web dashboard at http://127.0.0.1:61680/ too.
I use Ubuntu 16.04, and I encountered the same problem. I resolved it by using
sudo apollo create......
It seems to be because Apollo didn't have permission to create files in /etc/.

Jenkins: Publish over SSH after failed build

I am trying to use the Publish Over SSH plugin to publish many kinds of build artifact to an external server. Examples of build artifacts are compiled builds, XML output from testing, and JSON output from linting.
If testing or linting results in errors, the build will fail or be marked unstable. In the case of a failed build, the Publish Over SSH plugin will not copy the build artifacts, writing to the console:
SSH: Current build result is [FAILURE], not going to run.
I see no reason why I wouldn't want to publish this information if it exists, and I would like to continue to report errors as build failures. So, is there any way to force Jenkins to publish build artifacts even if the job is marked as a failure?
I thought I could use the Flexible Publish to force this, by wrapping the Publish Over SSH in an "always" condition, but this gave the same output as before on a build failure.
I can think of a couple of work-arounds:
a) store the build status in an environment variable; force the status to SUCCESS; perform the publish step; recover the build status from the environment variable using java -jar jenkins-cli.jar set-build-status $STORED_STATUS
OR
b) Write a bash script to perform the publishing step manually using SSH, cutting out the Publish Over SSH plugin altogether
Before I push forward with either of these solutions (neither of which I like), is there any piece of configuration that I'm missing?
The solution I ended up using was rsync/ssh to copy the files manually in a post-build script. I configured this in my Jenkins Job Builder YAML like so:
- publisher:
    name: publish-to-archive
    publishers:
      - post-tasks:
          - matches:
              - log-text: ".*"
            script: |
              ssh -i ${{HOME}}/.ssh/id_rsa jenkins@archiver "mkdir -p {archive_path}"
              rsync -Pravdtze "ssh -i ${{HOME}}/.ssh/id_rsa" {source_path} jenkins@archiver:{archive_path}
Quoting old hooky on jenkinsci-users:
How can I force Publish Over SSH to work even if the build has been marked a failure?
Use "Send files or execute commands over SSH after the build runs" in configuration section "Build environment":
Job configuration / Build Environment / Send files or execute commands over SSH after the build runs
instead of using a post-build or build-step.

How do I run puppet agent inside a Docker container to build it out?

If I run a docker container with CMD ["/usr/sbin/sshd", "-D"], I can have it running daemonized, which is good.
Then, I want to run puppet agent too, to build out said container as, say, an Apache server.
Is it possible to do this and then expose the apache server?
Here is another solution. We use the ENTRYPOINT Dockerfile instruction as described here: https://docs.docker.com/articles/dockerfile_best-practices/#entrypoint. Using it, you can run puppet agent and other services in the background before the instruction from CMD, or the command passed via docker run, takes over.
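A minimal sketch of this pattern (the puppet options, file names and CMD are illustrative assumptions, not taken from the question):
# entrypoint.sh
#!/bin/bash
# apply the puppet configuration once, in the background, then hand control to CMD
puppet agent --onetime --no-daemonize --verbose &
exec "$@"
# Dockerfile fragment
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/sbin/sshd", "-D"]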