I would like to run infrastructure tests in my gitlab-ci pipeline that need a database. I would integrate the database as a service.
end-to-end-tests:
  image: node:latest
  services:
    - postgres:9.6.19
  script:
    # fill the database
    # run the tests
How can I fill the database with dummy data at the beginning of the tests?
Is it also possible to verify that the initialization of the database was successful?
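Roughly, I imagine the script: section looking something like the lines below (seed.sql, the table name, and the connection variables are just placeholders, and I am not sure this is the right approach; the service should be reachable under the host name postgres):
# install a postgres client in the node image (the job image has no psql by default)
apt-get update && apt-get install -y postgresql-client
# POSTGRES_USER / POSTGRES_PASSWORD / POSTGRES_DB would be set under variables: in the job
export PGPASSWORD="$POSTGRES_PASSWORD"
# load the dummy data
psql -h postgres -U "$POSTGRES_USER" -d "$POSTGRES_DB" -f seed.sql
# verify the initialization by querying the seeded data before the tests start
psql -h postgres -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "SELECT count(*) FROM users;"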
Related
I have two Apache NiFi servers, one for development and one for production, hosted on AWS. Currently the migration between development and production is done manually. I would like to know if it is possible to automate this process and ensure that people do not develop in production.
I thought about uploading the entire NiFi configuration to GitHub and having it deploy the new NiFi to the production server, but I don't know whether that would be the correct approach.
One option is to use NiFi Registry: store the flows in the registry and share the registry between the development and production environments. You can then promote the latest version of a flow from dev to prod.
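If you go that route, the promotion can even be scripted with the NiFi Toolkit CLI, roughly along these lines (bucket/flow identifiers and URLs are placeholders, and the exact options may vary by NiFi version):
# see which versioned flows the shared registry offers
./bin/cli.sh registry list-flows -b <bucket-id> -u http://registry-host:18080
# import a given flow version into the production NiFi as a new process group
./bin/cli.sh nifi pg-import -b <bucket-id> -f <flow-id> -fv 3 -u http://prod-nifi:8080
# or, if the process group already exists in prod, move it to the newer version
./bin/cli.sh nifi pg-change-version -pgid <process-group-id> -fv 3 -u http://prod-nifi:8080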
As you say, another option is to use Git to share the flow.xml.gz between environments together with a deploy script. The flow.xml.gz stores the data flow configuration/canvas. You can use parameterized flows (https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#Parameters) to point NiFi at different external dev/prod services (e.g. the NiFi dev processor uses a dev database URL, NiFi prod points to the prod database URL).
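For example, a processor property can reference a parameter instead of a hard-coded value, and each environment gets its own Parameter Context (the parameter name and URLs below are only placeholders):
# property value on the processor, identical in both environments
Database Connection URL: #{database.url}
# Parameter Context bound to the process group in dev
database.url = jdbc:postgresql://dev-db:5432/app
# Parameter Context bound to the process group in prod
database.url = jdbc:postgresql://prod-db:5432/app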
One more option is to export all or part of the NiFi flow as a template and upload the template to your production NiFi; however, the registry is probably a better way of handling this. More info on templates here: https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#templates.
I believe the original design intent behind NiFi was not necessarily to have different environments, but to allow live changes in production. I guess you would build your initial data flow using some test data in production and then, once it's ready, start the live data flow. But I think it's reasonable to want to have separate environments.
I need to create a CI/CD pipeline in the AWS cloud for a PySpark application; ultimately this PySpark job is to be invoked through an Airflow DAG.
I am no expert on this either, but you can follow this guide:
https://aws.amazon.com/blogs/big-data/implement-continuous-integration-and-delivery-of-apache-spark-applications-using-aws/
The idea is to automate job testing in Spark local mode, then run a live job with infrastructure created on the fly, and finally deploy the job to production if all the previous steps succeed. I would keep my production jobs automated in Airflow and run this CI/CD pipeline on development branches (those, of course, without deploying to production) as well as on PRs to the main branch. That way your production jobs will always be functioning correctly and only incorporate new functionality/changes after they are fully tested on development branches.
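As a rough illustration, the local-mode test step of such a pipeline might boil down to something like this (the file and directory names are hypothetical and would live in your repo):
# install dependencies for the job and the tests
pip install -r requirements.txt pytest
# unit tests build their own SparkSession with .master("local[*]"), so no cluster is needed
pytest tests/
# optionally exercise the job itself end to end in local mode before deploying
spark-submit --master "local[*]" jobs/etl_job.py --input sample_data/ --output /tmp/out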
I am working on an application based on the Bluemix Container Service. To deploy the application I use the IBM Bluemix DevOps service.
I would like to add a test stage before deployment. The problem is that my tests need to run within a Docker container using the image built for the application, since the application depends on the image's setup (libraries, scripts, database, etc.).
However, the available "test" stage in the DevOps service does not seem to allow running tests within a docker container. I would like to run my tests with something like
cf ic run --rm my_custom_image custom_test_script.sh
How could I do such a test run within the Bluemix DevOps service?
IDS doesn't include a place to run dedicated sub-containers, and the container service is really intended for longer running containers (i.e. -d daemon style). You could do it by setting up a persistent container there, then using cf ic cp to copy up the changed pieces (i.e. something specific to this run), then a cf ic exec -ti to force it to run there, perhaps?
Or, if you'd rather, perhaps break it into a couple of pieces: make the test into a "deploy the test container" step, then a test step using that container (or getting the results from it), then a cleanup of that container.
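Very roughly, that split could look something like the following with the cf ic plugin (the container name, paths, and the sleep trick are only illustrative):
# stage 1: deploy a long-running test container from the application image
cf ic run -d --name test-runner my_custom_image sleep 3600
# stage 2: copy in anything specific to this build, then run the tests inside it
cf ic cp ./test/ test-runner:/tests
cf ic exec -ti test-runner sh /tests/custom_test_script.sh
# stage 3: clean up the container
cf ic rm -f test-runner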
This is my first post on here, so forgive me if I've missed an existing answer to this question.
Basically my company conducts off-site development for various clients in government. Internally, we use cloud VSTS, Octopus deploy and Selenium to ensure a continuous delivery pipeline in our internal Azure environments. We are looking to extend this pipeline into the on-prem environments of our clients to cut down on unnecessary deployment overheads. Unfortunately, due to security policies we are unable to use our VSTS/Octopus instances to push code directly into the client environment, so I'm looking for a way to get code from our VSTS environment into an on-prem instance of TFS hosted on their end.
What I'm after, really, is a system whereby the client logs into our VSTS environment, validates the code, then pushes some kind of button which will pull it to their local TFS, where a replica of our automated build and test process will manage the CI pipeline through their environments and into prod.
Is this at all possible? What are my options here?
There is no direct way to migrate source code with history from VSTS to an on-premises TFS. You would need a third-party tool, such as the Commercial Edition of OpsHub (note that it is not free).
It sounds like you need a new feature that is coming to Octopus Deploy; see https://octopus.com/blog/roadmap-2017 --> Octopus Release Promotions.
I quote:
Many customers work in environments where releases must flow between more than one Octopus server - the two most common scenarios being:
Agencies which use one Octopus for dev/test, but then need an Octopus server at each of their customer's sites to do production deployments
I will suggest the following, though it involves a small custom script.
1. Add a build agent to your VSTS that is physically located on the customer's premises. This is easy: just register the agent with the online endpoint.
2. Create a build definition in VSTS that gets the code from VSTS but, instead of building, commits it to the local TFS. You will need a small PowerShell script here; you can add it as a custom PowerShell step in the build definition.
3. The local TFS orchestrates the rest.
Custom code:
Say your agent is on d:/agent.
1. Keep the local TFS workspace mapped to some directory (say c:/tfs).
2. The script copies the new sources from d:/agent/work/ over the code in c:/tfs.
3. It then commits from c:/tfs to the local TFS.
Note: you will need the /force option (and probably some more) to prevent conflicts.
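A rough sketch of what that script might do, whether written in PowerShell or as plain commands (the paths, workspace layout, and tf options shown are assumptions to adapt):
# mirror the sources the agent just fetched into the locally mapped TFS workspace
robocopy d:\agent\work c:\tfs /E
cd c:\tfs
# pick up new files and check everything in to the local TFS
tf add * /recursive /noprompt
tf checkin /comment:"Sync from VSTS build" /recursive /noprompt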
I believe this is not as ugly as it sounds.
I understand that Test Kitchen follows the sequence
create node > converge cookbook > run tests
What is the best practice for writing a test that has a strong external dependency?
An example is the Kafka cookbook https://supermarket.chef.io/cookbooks/kafka. As you might know, Kafka is a message broker application that depends on ZooKeeper, a separate application that handles coordination for the brokers.
Following proper separation of concerns, the Kafka cookbook does not include ZooKeeper; it can be installed on the same host or on a different machine.
However, in order to do a simple verification that Kafka is working (i.e., create a simple message), you need to have a ZooKeeper server running.
For example, the test could run these three commands after installation:
# creates a message topic
bin/kafka-create-topic.sh --zookeeper localhost:2181 --replica 1 --partition 1 --topic test
# lists existing message topics
bin/kafka-list-topic.sh --zookeeper localhost:2181
# sends a message to this machine
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
Using ChefSpec, is there a way to stub this external server (the localhost:2181 part above)?
Thank you!
Two parts to the answer: first, ChefSpec is used for unit testing and is unrelated to Test Kitchen and integration testing. Second, you would need to make a minimal test recipe that installs a one-node ZooKeeper server and use that for integration testing. Generally you would do this by putting a test cookbook under test/cookbook and then adding it to your Berksfile with a path source. You could use a "real" ZooKeeper cookbook, or you could use something simpler and more dedicated to testing. For an example of that kind of minimalism, see my MongoDB recipe; you can probably use something similar for ZooKeeper in this situation.
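For instance, once such a minimal single-node ZooKeeper has been converged alongside Kafka in the Test Kitchen suite, the integration test (e.g. a bats or Serverspec test) can exercise the real services instead of stubbing them, using the same commands from the question (the topic name is just an example):
# create a topic against the locally converged ZooKeeper, then confirm it exists
bin/kafka-create-topic.sh --zookeeper localhost:2181 --replica 1 --partition 1 --topic smoke-test
bin/kafka-list-topic.sh --zookeeper localhost:2181 | grep smoke-test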