Test Kitchen integration tests with ChefSpec: testing a cookbook with dependencies

I understand that Test Kitchen follows the sequence
create node > converge cookbook > run tests
What is the best practice for creating a test that assumes a strong external dependency?
An example is the Kafka cookbook https://supermarket.chef.io/cookbooks/kafka. As you might know, Kafka is a message broker that depends on ZooKeeper, a separate application that acts as its coordination service.
Following proper separation of concerns, the Kafka cookbook does not include ZooKeeper - it can be installed on the same host or on a different machine.
However, in order to do a simple verification that Kafka is working (i.e. create a simple message), you need to have a ZooKeeper server running.
For example, the test could run these three commands after installation:
# creates a message topic
bin/kafka-create-topic.sh --zookeeper localhost:2181 --replica 1 --partition 1 --topic test
# lists existing message topics
bin/kafka-list-topic.sh --zookeeper localhost:2181
# sends a message to this machine
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
Using ChefSpec, is there a way to stub this external server (the localhost:2181 part above)?
Thank you!

Two parts to the answer: first, ChefSpec is used for unit testing and is unrelated to Test Kitchen and integration testing. Second, you would need to make a minimal test recipe that installs a 1-node ZooKeeper server and use that for integration testing. Generally you would do this by putting a test cookbook under test/cookbook and then adding it to your Berksfile with a path source, as sketched below. You could use a "real" ZooKeeper cookbook, or you could use something simpler and more dedicated. As an example of minimalism for testing, see my MongoDB recipe; you can probably use something similar for ZooKeeper in this situation.
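For instance, a minimal Berksfile sketch (the cookbook name and path here are hypothetical, not from the Kafka cookbook itself):

# Berksfile
source 'https://supermarket.chef.io'

metadata

# Test-only cookbook that stands up a 1-node ZooKeeper; it lives inside
# this repo, so it never ships as a dependency of the cookbook itself.
cookbook 'zookeeper_test', path: 'test/cookbook/zookeeper_test'

Your .kitchen.yml suite would then list recipe[zookeeper_test] ahead of the Kafka recipe in its run_list, so ZooKeeper is up before Kafka converges and before your verification commands run.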

Integration tests with Cucumber using embedded GemFire for a Spring Boot application deployed in an Apache Geode client/server topology

I intend to write integration tests with Cucumber for a GemFire cache client application using Spring Boot, deployed in an Apache Geode client/server topology. I referred to the question How to start Spring Boot app without depending on Pivotal GemFire cache (answered in 2018) and also to the integration test documentation here - Integration Testing with STDG.
The link to an example concrete client/server integration test extending STDG's ForkingClientServerIntegrationTestsSupport class appears to be broken.
The purpose of my integration tests would be to:
run an embedded locator and a server during the integration test phase
define the regions for the servers using cluster.xml
create, read, update and delete cache entries and verify the different use cases
Any help regarding the ideal approach to write integration tests (probably using an embedded GemFire locator and server) will be very helpful.
I tried an embedded GemFire CacheServer instance for integration tests using the @CacheServerApplication annotation, but I am not sure how to create ClientCache objects that use the embedded GemFire, or whether this is the right way to write the integration tests.
Edit: I also came across this - Is it possible to start a Pivotal GemFire Server, Locator and Client in one JVM? - where it is mentioned: In short, NO, you cannot have a peer Cache instance (with embedded Locator) and a ClientCache instance in the same JVM (or Java application process).
DISCLAIMER: I do not have experience with Cucumber...
However, it is not difficult to spin up multiple GemFire or Geode server-side processes, such as 1 or more Locators and [multiple] CacheServers, in a single test class. The Locators can be standalone JVM processes or embedded, as part of the servers.
In this typical test configuration arrangement the GemFire or Geode server-side processes are forked, yet coordinated, and the test class itself acts as the ClientCache instance.
You can see 1 such test configuration in the SBDG Multi-site Caching sample, here.
The key to this test configuration is the extension of STDG's ForkingClientServerIntegrationTestsSupport class, as well as the forking of the 2 clusters, in the test class setup method.
The configuration for each cluster is handled by Spring config, and the coordination is all handled using GemFire/Geode properties combined with some Spring Profiles (for example, see here) to control which configuration gets applied for each GemFire/Geode JVM process.
Of course, this example and test configuration is quite complex given the fact that the test also employs GemFire/Geode's WAN capabilities, hence the "multi-site" caching reference, but it serves to demonstrate that Spring and SBDG/SDG/STDG support as complex or as simple a setup as your testing needs require.
You can start any number of GemFire/Geode processes (Locators, CacheServers, etc). And, in nearly all cases, the test class (JVM) itself is the cache client (ClientCache instance).
Here are a couple more examples from the Spring Data for Apache Geode (SDG) codebase and test suite: here and here.
I am certain I have another test class or example (somewhere) that forked a single Locator, then joined 2 CacheServer instances to it, and then the test (JVM process) proceeded as the ClientCache instance, but I cannot seem to find it at the moment.
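In the meantime, here is a rough sketch of that shape: the server is forked in its own JVM and the test JVM acts as the cache client. All class names here are hypothetical and the STDG API (startGemFireServer(..)) is written from memory, so treat this as a starting point rather than a verified example:

import java.io.IOException;

import org.junit.BeforeClass;
import org.junit.Test;
import org.junit.runner.RunWith;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.data.gemfire.config.annotation.CacheServerApplication;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;
import org.springframework.data.gemfire.tests.integration.ForkingClientServerIntegrationTestsSupport;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@ContextConfiguration(classes = ClientServerIntegrationTests.GeodeClientConfiguration.class)
public class ClientServerIntegrationTests extends ForkingClientServerIntegrationTestsSupport {

    @BeforeClass
    public static void startGeodeServer() throws IOException {
        // Forks a separate JVM running the @CacheServerApplication config below;
        // STDG coordinates the server port so the client can locate the server.
        startGemFireServer(GeodeServerConfiguration.class);
    }

    @Test
    public void clientCanTalkToServer() {
        // Your (Cucumber-driven) create/read/update/delete scenarios against
        // client Regions would exercise the forked server from here.
    }

    // The test JVM itself acts as the cache client.
    @ClientCacheApplication
    static class GeodeClientConfiguration { }

    // Runs in the forked JVM; server Regions (e.g. from cluster.xml) and an
    // embedded Locator would be configured here.
    @CacheServerApplication(name = "TestGeodeServer")
    static class GeodeServerConfiguration {

        public static void main(String[] args) {
            new AnnotationConfigApplicationContext(GeodeServerConfiguration.class)
                .registerShutdownHook();
        }
    }
}

From there, defining the server Regions via cluster.xml or Spring config is a matter of what you put in GeodeServerConfiguration.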
In any case, I hope this gives you some ideas.

Kubernetes probe running acceptance test

I have a situation where my acceptance test makes a connection with a RabbitMQ instance during the pipeline. But the RabbitMQ instance is private, making it impossible to make this connection in the pipeline.
I was wondering if making an API endpoint that runs this test, and adding it to the startup probe, would be a good approach to make sure this test passes.
If RabbitMQ is a container in your pod, yes; if it isn't, then you shouldn't.
There's no final answer to this, but the startup probe is just there to ensure that your pod is not being falsely considered unhealthy by other probes just because it takes a little longer to start. It's aimed at legacy applications that need to build assets or compile stuff at startup.
If there were a place to put a connectivity test to RabbitMQ, it would be the liveness probe, but you should only do that if your application is entirely dependent on a connection to RabbitMQ; otherwise unrelated functionality like authentication would start failing just because the messaging queue is unreachable. And what if a second app uses a connection to your endpoint as its liveness probe? And a third app checks the second one the same way? You could kill an entire ecosystem just because RabbitMQ rebooted or crashed for a moment.
Not recommended.
You could have that as part of your liveness probe IF your app is a worker; then, not having a connection to RabbitMQ would make the worker unusable. A sketch of what such a check could look like is below.
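A minimal sketch of that check, assuming the standard com.rabbitmq:amqp-client library and the JDK's built-in HTTP server (the hostname, port and path are placeholders, not from the question):

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.sun.net.httpserver.HttpServer;

// A /health endpoint a liveness probe could call; it only reports healthy
// while a RabbitMQ connection can actually be opened.
public class RabbitHealthEndpoint {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        server.createContext("/health", exchange -> {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("rabbitmq.internal"); // hypothetical hostname
            int status;
            byte[] body;
            try (Connection ignored = factory.newConnection()) {
                status = 200;
                body = "OK".getBytes();
            } catch (Exception e) {
                status = 503; // probe fails, kubelet restarts the worker
                body = "rabbitmq unreachable".getBytes();
            }
            exchange.sendResponseHeaders(status, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }
}

The liveness probe would then simply do an HTTP GET against /health on port 8081.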
Your acceptance tests should run in your CD pipeline, or in a post-deploy script step if you don't have a CD.

How to redirect the Apache log in Kubernetes

I have one namespace and one deployment (replica set). My Apache logs should be written outside the pod; how is this possible in Kubernetes?
This is a Community Wiki answer so feel free to edit it and add any additional details you consider important.
You should specify more precisely what exactly you mean by outside the pod, but as David Maze has already suggested in his comment, take a closer look at the Logging Architecture section in the official Kubernetes documentation.
Depending on what you mean by "outside the Pod", a different solution may be optimal in your case.
As you can read there:
Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster ... Cluster-level logging architectures are described in assumption that a logging backend is present inside or outside of your cluster.
The 3 most popular cluster-level logging architectures mentioned there are:
Use a node-level logging agent that runs on every node.
Include a dedicated sidecar container for logging in an application pod.
Push logs directly to a backend from within an application.
The second solution is widely used. Unlike the third one, where pushing the logs needs to be handled by your application container, the sidecar approach is application-independent, which makes it a much more flexible solution.
To make matters a little less simple, it can be implemented in two different ways:
Streaming sidecar container
Sidecar container with a logging agent

Redis Not Loading In Codeship

Our jobs service test suite expects a Redis database to connect to in order to run its test cases. We're running into an issue where this jobs service sometimes fails to connect to Redis and sometimes doesn't.
We've followed the Codeship guide to the letter, and are finding that sometimes our service is unable to connect to Redis while other times it can. I've tried switching Redis versions and this does not seem to have solved the issue.
Sounds like it would be appropriate to implement a Docker healthcheck on your Redis service, so that dependent containers only start once Redis is actually accepting connections (e.g. a HEALTHCHECK that runs redis-cli ping).

Bamboo remote agent pool

In Jenkins, we can define labels and group a number of build slaves under the label. This label can then be mapped to a job, so Jenkins will automatically pick an available build slave in the pool and execute the job. Is something similar available in Bamboo to create a remote agent pool?
I hope I understood your question correctly, but anyway... There's a similar concept in Bamboo. There are two types of agents:
Local ones, which run as threads in the Bamboo server. Generally not recommended for bigger Bamboo instances, for performance and security reasons.
Remote ones, which are basically separate processes running the builds, ideally on a different machine so the Bamboo server doesn't suffer from the extra hardware load.
The match between jobs and agents is based on job requirements and agent capabilities, e.g.:
An agent defines capabilities, effectively stating what it can build and what tools are installed, e.g. .NET or a JDK.
A job/deployment environment defines requirements, which are needed to successfully accomplish the task, e.g. Git and Maven.
In the end, Bamboo tries to find an agent which provides the full set of capabilities a job/deployment environment requires.
Special rules apply if an agent is dedicated to a job or environment, or if the agent is an elastic agent (running in EC2).
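If you manage configuration as code, Bamboo's Java Specs can declare such requirements on a job. A hedged sketch from memory (the project/plan keys, capability keys and server URL are placeholders; check the Bamboo Specs docs for the exact builder API):

import com.atlassian.bamboo.specs.api.BambooSpec;
import com.atlassian.bamboo.specs.api.builders.plan.Job;
import com.atlassian.bamboo.specs.api.builders.plan.Plan;
import com.atlassian.bamboo.specs.api.builders.plan.Stage;
import com.atlassian.bamboo.specs.api.builders.project.Project;
import com.atlassian.bamboo.specs.api.builders.requirement.Requirement;
import com.atlassian.bamboo.specs.util.BambooServer;

@BambooSpec
public class PlanSpec {

    public static void main(String[] args) {
        Plan plan = new Plan(
                new Project().key("PROJ").name("Example Project"),
                "Example Plan", "PLAN")
            .stages(new Stage("Default Stage")
                .jobs(new Job("Build", "JOB1")
                    // The job only runs on agents advertising these capabilities,
                    // which is Bamboo's analogue of a Jenkins label.
                    .requirements(
                        new Requirement("system.builder.mvn3.Maven 3"),
                        new Requirement("system.jdk.JDK 1.8"))));

        // Publish the plan to the Bamboo server (placeholder URL).
        new BambooServer("http://localhost:8085").publish(plan);
    }
}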
More reading:
https://confluence.atlassian.com/bamkb/difference-between-local-agents-and-remote-agents-457703602.html
https://confluence.atlassian.com/bamboo/configuring-a-job-s-requirements-289277064.html
https://confluence.atlassian.com/bamboo/requirements-for-deployment-environments-838427584.html
https://confluence.atlassian.com/bamboo/dedicating-an-agent-629015108.html
https://confluence.atlassian.com/bamboo/managing-your-elastic-image-configurations-289277147.html