How can Spinnaker perform incremental app deployments?

As part of our pipelines, we currently use a deployment tool that has connectivity to our various instances: we can upload revisions/versions of our app to a central repository, archive them, and redeploy them at any time. Is Spinnaker intended to replace an existing deployment automation tool (there are many on the market today), or is it more meant for us to create pipelines that call the APIs of our other tool(s) when actually deploying our code to different servers?

Spinnaker has native support for deployment to supported cloud platforms (AWS, Google, Cloud Foundry, and soon Azure).
In those environments, the Spinnaker model is an immutable-infrastructure-style deployment, where new VMs are created to roll out new software versions.
If that fits your needs, then Spinnaker could replace an existing deployment automation tool.
If that doesn't fit your model, Spinnaker also supports calling out to an external execution environment as a pipeline stage (currently Jenkins is well supported), where you could implement custom behavior to integrate with an existing deployment tool.
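As a rough sketch of that second option, a pipeline stage that hands deployment off to a Jenkins job could look something like the following JSON (the master, job, stage wiring, and parameter expression here are illustrative placeholders, not Spinnaker defaults):

```json
{
  "type": "jenkins",
  "name": "Deploy via existing tool",
  "refId": "2",
  "requisiteStageRefIds": ["1"],
  "master": "my-jenkins",
  "job": "deploy-app-to-servers",
  "parameters": {
    "APP_VERSION": "${trigger['buildInfo']['number']}"
  }
}
```

The Jenkins job would then call your existing deployment tool's API, keeping Spinnaker as the orchestrator rather than the deployer.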

Related

How to share an Azure Data Factory self-hosted integration runtime with CI/CD-managed ADF instances

The Azure Data Factory tutorial (1) states the following:
In CI/CD scenarios, the integration runtime (IR) type in different environments must be the same. For example, if you have a self-hosted IR in the development environment, the same IR must also be of type self-hosted in other environments, such as test and production. Similarly, if you're sharing integration runtimes across multiple stages, you have to configure the integration runtimes as linked self-hosted in all environments, such as development, test, and production.
If I use dev/test/prod environments as described in the tutorial with a self-hosted integration runtime (SHIR), do I need to create an extra Azure Data Factory whose role is to serve the SHIR to the CI/CD-managed environments as a linked integration runtime?
(1) https://learn.microsoft.com/en-us/learn/modules/operationalize-azure-data-factory-pipelines/4-continuous-integration-deployment
Yes, a separate data factory is needed whose role is to host the SHIR and share it with the other data factories. This comes from a Microsoft support ticket I opened with the same question.
In my case I had two ADFs provisioned, prod and non-prod, and wanted to share a multi-node SHIR with both. I was unable to share the one created on the non-prod ADF with the prod ADF as I was expecting. Microsoft said I needed a third ADF to host the SHIR and share it with both the prod and non-prod ADFs.
The same tutorial includes the following additional best practice:
Integration runtimes and sharing. Integration runtimes don't change often and are similar across all stages in your CI/CD. So Data Factory expects you to have the same name and type of integration runtime across all stages of CI/CD. If you want to share integration runtimes across all stages, consider using a ternary factory just to contain the shared integration runtimes. You can use this shared factory in all of your environments as a linked integration runtime type.
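For reference, a linked self-hosted IR in an ARM template is just an integration runtime of type SelfHosted whose typeProperties point at the IR hosted in the shared factory. A minimal sketch based on the documented schema (the resource name and parameter names are placeholders):

```json
{
  "name": "[concat(parameters('factoryName'), '/LinkedSharedSHIR')]",
  "type": "Microsoft.DataFactory/factories/integrationRuntimes",
  "apiVersion": "2018-06-01",
  "properties": {
    "type": "SelfHosted",
    "description": "Linked to the SHIR hosted in the shared factory",
    "typeProperties": {
      "linkedInfo": {
        "resourceId": "[parameters('sharedIntegrationRuntimeId')]",
        "authorizationType": "Rbac"
      }
    }
  }
}
```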
Other integration runtime related notes
Default integration runtime
Note also that on the adf_publish branch, only some of the Microsoft.DataFactory/factories/integrationRuntimes resources are visible.
For example, the AutoResolveIntegrationRuntime is not visible when deployed without VNet support, but self-hosted integration runtimes are.
Integration runtime with VNet support
The current portal-driven deployment template uses a vNetEnabled parameter to control whether the following resources are created:
Microsoft.DataFactory/factories/managedVirtualNetworks
Microsoft.DataFactory/factories/integrationRuntimes named AutoResolveIntegrationRuntime, with a managedVirtualNetwork configuration
VNet-enriched integration runtimes are present in the adf_publish/ARM export; a sketch follows below.
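Roughly, the vNetEnabled branch of the template produces resources of the following shape when enabled (names follow the portal defaults; treat the exact layout as an illustrative sketch rather than the canonical export):

```json
{
  "name": "[concat(parameters('factoryName'), '/default')]",
  "type": "Microsoft.DataFactory/factories/managedVirtualNetworks",
  "apiVersion": "2018-06-01",
  "properties": {}
},
{
  "name": "[concat(parameters('factoryName'), '/AutoResolveIntegrationRuntime')]",
  "type": "Microsoft.DataFactory/factories/integrationRuntimes",
  "apiVersion": "2018-06-01",
  "dependsOn": [
    "[resourceId('Microsoft.DataFactory/factories/managedVirtualNetworks', parameters('factoryName'), 'default')]"
  ],
  "properties": {
    "type": "Managed",
    "typeProperties": {
      "computeProperties": { "location": "AutoResolve" }
    },
    "managedVirtualNetwork": {
      "referenceName": "default",
      "type": "ManagedVirtualNetworkReference"
    }
  }
}
```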

Automated testing of NiFi flows using Jenkins

Is there any way to automatically run regression/functional tests on NiFi flows using a Jenkins pipeline?
I searched for it without any success.
Thanks for your help.
With the recent release of NiFi 1.5.0 and NiFi Registry 0.1.0, the community has come together to produce a number of SDLC/CI/CD integration tools to make using things like Jenkins Pipeline easier.
There are both Python (NiPyAPI) and Java (NiFi-Toolkit-CLI) API wrappers being produced by a team of collaborators to allow scripted manipulation of NiFi flows across different environments.
Common functions include interaction with integrated version control, import/export of flows as JSON documents, deployment between environments, and starting/stopping of flows.
We are working quickly towards supporting things like an integrated wrapper for declarative Jenkins pipelines, and I would add that this is being done fully in a public codebase under the Apache license, so we (I am the lead NiPyAPI author) would welcome your collaboration.
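To give a feel for it, a Jenkins pipeline step could invoke a short NiPyAPI script along these lines (hosts and the flow name are placeholders; this is a sketch, not a full test harness):

```python
# Sketch: start a flow on a test NiFi instance so Jenkins can run
# functional checks against it. Hosts and names are placeholders.
import nipyapi

nipyapi.config.nifi_config.host = 'http://nifi-test:8080/nifi-api'
nipyapi.config.registry_config.host = 'http://registry:18080/nifi-registry-api'

# Locate the process group containing the flow under test by name
pg = nipyapi.canvas.get_process_group('my_flow', identifier_type='name')

# Start all processors in the group; the Jenkins job can then feed in
# test data, assert on the results, and stop the group again afterwards
nipyapi.canvas.schedule_process_group(pg.id, scheduled=True)
```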

Using AWS Serverless Application Repository

It is mentioned that the AWS Serverless Application Repository is in preview mode, and I want to use it for publishing to my live users. So my questions are: 1. Does preview mode mean the AWS Serverless Application Repository is in beta? 2. As it is in preview mode, how reliable is its support?
Preview mode means that you need to apply to use the service. Amazon will ask you for your intended use case and why you want to use the service in advance of general availability. I do not recommend using preview services with live customers, only for internal testing. There is no real answer as to whether its support is reliable - finding that out is one of the purposes of preview mode.

How to package and deploy Cumulocity server-side agents?

We are creating a server-side agent which periodically fetches data from nodes and maps this data to Cumulocity measurements and events.
What is an elegant approach for hosting and/or packaging such a server-side agent?
We are hosting our own instance of the Cumulocity platform.
It's preferable to keep this server-side agent as 'close' to the core platform as possible, e.g. share some core agent framework dependencies.
We'd like to minimize the effort of setting up additional environments or containers (e.g., Tomcat).
Cumulocity uses Karaf; would it make any sense to deploy the server-side agent into Karaf as a bundle?
Is there any recommended approach for hosting server-side agents? Does the cumulocity platform offer an alternative to deploying the agent to some "own environment"?
The Cumulocity examples repository contains the "tracker-agent" server-side agent example, which is an embedded Tomcat Java application. There is little information about the intended deployment location.
I don't recommend deploying agents/microservices directly into the core Karaf server, since that endangers the resources available to the core APIs and is not supported (i.e., your deployment will likely be overwritten by the next upgrade).
Typically, people just provision an additional VM or Docker container next to Cumulocity to host their agents/microservices. On top of that, we often use Spring Boot, for example, so the effort is pretty low (java -jar ...).
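As a minimal illustration of that pattern (here via plain REST calls to the platform rather than the Java client; the URL, credentials, and device id are placeholders), a polling agent can be as small as:

```python
# Sketch of a server-side agent loop running in its own VM/container:
# poll a node, map the reading to a Cumulocity measurement, POST it.
import time
import requests

C8Y = 'https://cumulocity.example.com'      # placeholder platform URL
AUTH = ('tenant/username', 'password')      # placeholder credentials

def poll_node():
    # Replace with the real call to your node/device
    return {'temperature': 21.5}

while True:
    data = poll_node()
    measurement = {
        'source': {'id': '12345'},  # managed object id (placeholder)
        'time': time.strftime('%Y-%m-%dT%H:%M:%S+00:00', time.gmtime()),
        'type': 'c8y_TemperatureMeasurement',
        'c8y_TemperatureMeasurement': {
            'T': {'value': data['temperature'], 'unit': 'C'}
        }
    }
    # Documented Cumulocity REST endpoint for creating measurements
    requests.post(f'{C8Y}/measurement/measurements',
                  json=measurement, auth=AUTH)
    time.sleep(60)
```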
We do have a hosting system for agents/microservices and will make it generally available for others to use in Q1/2018. Follow the announcement channel at https://support.cumulocity.com to stay posted.

Trouble using AWS SWF

I am new to the Amazon Simple Workflow Service (SWF). Is there a way to run SWF workflows on EMR? I have the AWS CLI set up and am able to bootstrap Hadoop and bring up the cluster. I have not found enough documentation on this and no sources on the web. Is there any chance that I can boot the EMR cluster using SWF instead of the AWS CLI? Thanks.
You should use one of the dedicated AWS SDKs to coordinate between the two services. I am successfully using the AWS SDK for Java to create a workflow that starts several EMR clusters in parallel with different jobs and then just waits for them to finish, failing the whole workflow if one of the jobs fails.
Out of all the available AWS SDKs, I highly recommend the Java one; it struck me as extremely robust. I have also used the PHP one in the past, but it is lacking in certain areas (it does not provide a 'flow' framework for SWF, for example).
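As a hedged sketch of the pattern (here in Python with boto3 rather than the Java Flow Framework recommended above; the domain, task list, and instance settings are placeholders), an SWF activity worker whose single activity boots an EMR cluster looks roughly like this:

```python
# Sketch: an SWF activity worker that starts an EMR cluster when the
# workflow schedules the activity. Names/settings are illustrative.
import boto3

swf = boto3.client('swf')
emr = boto3.client('emr')

def start_cluster():
    # Launch a small transient cluster; returns its job flow id
    resp = emr.run_job_flow(
        Name='swf-launched-cluster',
        ReleaseLabel='emr-5.11.0',
        Instances={
            'MasterInstanceType': 'm4.large',
            'SlaveInstanceType': 'm4.large',
            'InstanceCount': 3,
            'KeepJobFlowAliveWhenNoSteps': False,
        },
        JobFlowRole='EMR_EC2_DefaultRole',
        ServiceRole='EMR_DefaultRole',
    )
    return resp['JobFlowId']

# Minimal worker loop: long-poll SWF for activity tasks, run the
# activity, and report success or failure back to the workflow.
while True:
    task = swf.poll_for_activity_task(
        domain='my-domain', taskList={'name': 'emr-tasks'})
    if not task.get('taskToken'):
        continue  # long-poll timed out with no task; poll again
    try:
        cluster_id = start_cluster()
        swf.respond_activity_task_completed(
            taskToken=task['taskToken'], result=cluster_id)
    except Exception as err:
        swf.respond_activity_task_failed(
            taskToken=task['taskToken'], reason=str(err)[:256])
```

The decider side of the workflow (not shown) would schedule one such activity per cluster to get the parallel fan-out described above.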