How to package and deploy Cumulocity server-side agents?

We are creating a server-side agent which periodically fetches data from nodes and maps it to Cumulocity measurements and events.
What is an elegant approach for hosting and/or packaging such a server-side agent?
We are hosting our own instance of the Cumulocity platform.
It's preferable to keep this server-side agent as 'close' to the core platform as possible, e.g. share some core agent framework dependencies.
We'd like to limit the amount of setting up additional environments or containers (e.g. Tomcat).
Cumulocity uses Karaf, would it make any sense to deploy the server-side agent into Karaf as a bundle?
Is there any recommended approach for hosting server-side agents? Does the Cumulocity platform offer an alternative to deploying the agent to some "own environment"?
The Cumulocity examples repository contains the "tracker-agent" server-side agent example, which is a Java application with embedded Tomcat. There is little information about the intended deployment location.

I don't recommend deploying agents/microservices directly into the core Karaf server, since that endangers the resources available to the core APIs and is not supported (i.e., it will likely be overwritten by the next upgrade...).
Typically, people just provision an additional VM or Docker container next to Cumulocity and place their agents/microservices there. We, for example, often use Spring Boot for this, so the deployment effort is low (java -jar ...).
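For illustration, here is a minimal sketch of such a stand-alone Spring Boot agent. The node URL, tenant URL, device id, and measurement fragment are hypothetical placeholders, and authentication is omitted for brevity:

    import java.time.OffsetDateTime;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.http.HttpEntity;
    import org.springframework.http.HttpHeaders;
    import org.springframework.http.MediaType;
    import org.springframework.scheduling.annotation.EnableScheduling;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;
    import org.springframework.web.client.RestTemplate;

    @SpringBootApplication
    @EnableScheduling
    public class NodePollingAgent {

        public static void main(String[] args) {
            SpringApplication.run(NodePollingAgent.class, args);
        }

        @Component
        static class NodePoller {
            private final RestTemplate rest = new RestTemplate();

            // Poll the node every 60 seconds and forward the reading as a measurement.
            @Scheduled(fixedRate = 60_000)
            public void poll() {
                // 1. Fetch data from the external node (placeholder URL).
                String value = rest.getForObject("http://node.example.com/reading", String.class);

                // 2. Map it to a Cumulocity measurement and POST it to the REST API
                //    (replace the tenant URL and device id; add auth headers in real code).
                String measurement = "{\"source\":{\"id\":\"12345\"},"
                        + "\"time\":\"" + OffsetDateTime.now() + "\","
                        + "\"type\":\"c8y_NodeReading\","
                        + "\"c8y_NodeReading\":{\"value\":" + value + "}}";
                HttpHeaders headers = new HttpHeaders();
                headers.setContentType(MediaType.APPLICATION_JSON);
                rest.postForObject("https://tenant.cumulocity.com/measurement/measurements",
                        new HttpEntity<>(measurement, headers), String.class);
            }
        }
    }

Packaged as an executable jar, this runs with java -jar agent.jar on any VM or container next to the platform.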
We do have a hosting system for agents/microservices and will make it generally available for others to use in Q1/2018. Follow the announcement channel at https://support.cumulocity.com to stay posted.

Related

How to deploy multiple version of an application in production for microservice based application

Is it possible to have multiple versions of a service deployed in production at the same time? I assume this is a pretty common pattern for microservice/API-based projects or mobile projects. I want to know how you do it and what the common patterns in industry are for this kind of problem. It would be helpful if your answers relate to an AWS or Kubernetes environment.
Thanks in advance.
Is it possible to have multiple versions of service(s) deployed in production at the same time?
Yes, it is possible. The idea is to keep all used microservices in production (v1, v2 ...) at the same time and to bring down the versions that are not used anymore. For this, you should somehow know when a version is not used anymore.
AFAIK, you have two options:
For every new version you make a new endpoint (like /v2/someApiCall) that is connected to the same (now upgraded) microservice, and gradually instruct clients to use the new endpoint; when the old endpoint is not used anymore, you delete it. This is the preferred way.
For every new version you make a new microservice that shares its persistence with the old microservice; you should avoid this solution. Netflix uses this strategy on the rare occasions when the cost of changing old consumers is too high.
You can read more on page 62 of Building Microservices by Sam Newman.
With AWS API Gateway you could deploy multiple versions of your code and switch between them from the mapping templates, as explained here. You might also want to look into stage variables.
Assuming you are exposing services over an HTTP REST API, the general standard is to prefix your service URLs with a version, e.g.:
/v1/account/getUserInfo
If you need to release a new version, expose it over:
/v2/account/getUserInfo
where v2 can run on a different branch of the codebase.
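As a hedged illustration (hypothetical handler names and response shapes), both versions can be exposed side by side from the same codebase during a migration window, or from two separately deployed branches:

    import java.util.Map;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class AccountController {

        // Old clients keep calling v1 until they have migrated.
        @GetMapping("/v1/account/getUserInfo")
        public Map<String, Object> getUserInfoV1() {
            return Map.of("name", "Ada Lovelace");
        }

        // New clients call v2, which may return a different response shape.
        @GetMapping("/v2/account/getUserInfo")
        public Map<String, Object> getUserInfoV2() {
            return Map.of("firstName", "Ada", "lastName", "Lovelace");
        }
    }

Once traffic to /v1/ drops to zero, the old mapping can be removed.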
I have blogged about this: Multi-version Service Discovery using Spring Cloud Netflix Eureka and Ribbon, focused on Spring Cloud Netflix components/libraries though.
But the idea is to deploy the new version of the artifact/binary on a new host/VPS/container and have the service register with a registry server (Eureka, Consul, ...), including metadata about the API versions it supports (v1, v2, ...). Client apps then discover which host/container serves the API version they need.
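A minimal sketch of the client-side filtering, assuming each instance registers a metadata entry such as eureka.instance.metadata-map.apiVersions=v1,v2 in its configuration (the metadata key and router class are hypothetical):

    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;
    import org.springframework.cloud.client.ServiceInstance;
    import org.springframework.cloud.client.discovery.DiscoveryClient;

    public class VersionAwareRouter {

        private final DiscoveryClient discoveryClient;

        public VersionAwareRouter(DiscoveryClient discoveryClient) {
            this.discoveryClient = discoveryClient;
        }

        // Return only the instances of a service that advertise the wanted API version.
        public List<ServiceInstance> instancesFor(String serviceId, String apiVersion) {
            return discoveryClient.getInstances(serviceId).stream()
                    .filter(instance -> {
                        String versions = instance.getMetadata().getOrDefault("apiVersions", "");
                        return Arrays.asList(versions.split(",")).contains(apiVersion);
                    })
                    .collect(Collectors.toList());
        }
    }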

Requirement to develop scalable web application

We're planning to develop a web-based Healthcare Practice Management System. Due to HIPAA we're required to deploy the app on our own premises. Our company is relatively small: currently we have only software engineers and no DevOps engineers, but we still want to design the application to support horizontal scaling (adding more servers).
Planned to use:
Python3 (Django)
PostgreSQL
I'm looking for something like AppScale, but with the freedom to choose our own runtime, database, and frameworks.
In other words from the software engineer's perspective:
Should provide an easy way to deploy a Django application
Should have a web-based dashboard for monitoring and control (like AppScale)
Should make load balancing simple (app and database)
AppScale implements the Google App Engine APIs which, IMHO, make it super easy to develop web apps quickly and efficiently.
On top of that, you get auto-scaling, load balancing, and the ability to deploy on-premises and plug in any third-party library you need.
AppScale already comes with a dashboard and will soon be launching a new management service for your AppScale deployment(s).
If you're not particularly hung up on Python3 and PostgreSQL, all of the above seem to cover your requirements.
It's worth noting that opting for the GAE model means opting for NoSQL, so Postgres is probably not the best option.
Disclaimer: I'm part of the AppScale team and we're already helping companies develop and deliver their apps in the HIPAA compliance realm.
I chose Kubernetes, a container orchestration technology specifically designed for Docker, and also found that scaling is not just the responsibility of the platform the app is deployed on; it also depends on how the app is designed and coded. For that, the Twelve-Factor App methodology is really helpful.
But I won't deploy the database on Kubernetes, because Kelsey Hightower (author of Kubernetes: Up and Running) recommends against it in his talk. So for now I chose to deploy my database on a VM.

Agent based applications using WCF

I'm about to decide on technology choices for an agent-based application used in the transportation systems domain.
Basically there will be a central system hosting the backend, and multiple agents located across town (installed on desktops) that communicate with devices/kiosks, collect data, and transmit it back to the central server. The central server could also be hosted in the cloud.
The following are important:
securing the data and communications between the device and the agent, and between the agent and the central server
agents should be easily installable with little or no configuration.
near 100% uptime and availability
Does WCF fit the bill here?
If so, what binding types should I go for: netTcpBinding, or wsHttpBinding with SSL/HTTPS?
WCF is definitely a fitting choice for this kind of scenario. For your bindings, the actual question is what technology you are going to use. If you want the agents to run in a non-.NET environment like Java, you should choose wsHttpBinding. This binding communicates through SOAP and is very interoperable.
If you choose to use .NET agents, you might as well use netTcpBinding, because both sides use the same WCF framework and it supports binary encoding. If you really need help making the choice, take a look at the MSDN documentation.
For your agents you could use a simple console application that runs in the background as a Windows service. WiX can help you install an application as a Windows service, and it can also handle basic installation and configuration for you, but it has a steep learning curve, so you may need to invest time in it.

Can Azure be inter-operable with Amazon?

I have a question about whether cloud vendors have an interoperability mechanism. For example, I am developing a WCF service and hosting it in Azure successfully. After a prolonged time using Azure, can I use the same code to deploy it on AWS? Is that possible? Do the deployment APIs of both match? If not, what extra care is needed to host the same service when switching to other cloud vendors like Salesforce.com, OpenStack, etc.?
In general, you can't just take what you develop for one Cloud platform and put it on another: they have different functionality sets and expose different APIs. However, the more low-level you make your code, the more likely it is that you'll find another vendor with a very similar API, since virtualizing infrastructure is simpler (and closer to standardized) than virtualizing a CMS application.
If you're using just IaaS, you can probably port fairly rapidly, but you have to do more work to build your application. If you're using PaaS (or SaaS!) then you're more locked in, but you get more support for rapid development: that support platform is both the value-add and the lock-in, and you won't get one without the other.
If you're using an Azure web role for hosting your WCF service, then from a deployment point of view you will not have many problems with AWS. You'll simply use the facilities offered by the AWS SDK for .NET (i.e., Publish to AWS CloudFormation). You will, of course, have to change the logging if you've used Azure Diagnostics, and replace all Azure services with the related AWS services. We did this multiple times in the last year and it works.
For worker roles it's not so simple: in Azure they are deployed as easily as web roles, but AWS has no direct deployment from Visual Studio, so you have to do some manual work using Windows services or something similar.

Why would I use Apache ServiceMix over just ActiveMQ

I am starting to plan a new platform which needs to integrate various services from various externals platforms. Essentially I'm tying together a bunch of internal, homegrown services and several outside services we license from 3rd parties.
Generally speaking the external services are all web services but they are a mishmash of REST, SOAP and XML-RPC.
Some of our internal services have REST API's but there are many things that aren't so easy: XMPP, Hessian, custom socket protocols, Java RPC, uWSGI, and the list goes on.
From my research it seems like an ESB like Apache ServiceMix might be a good fit for my needs. However it looks REALLY complex. I'm not launching rockets but I do need transactional messaging (mostly for eCommerce and entitlement stuff). I feel like the message queue ServiceMix uses under the hood (ActiveMQ) might be enough on its own.
Can anyone explain what ServiceMix provides above and beyond ActiveMQ? I know there is a lot, but it is hard for an ESB n00b like me to grasp the tangible difference when I'm waist-deep in buzzwords.
Thanks!
ServiceMix is an OSGi-based container that allows you to deploy and run applications in a controlled runtime environment (like a J2EE container, but lighter weight and without programming to, e.g., J2EE contracts).
Thanks to OSGi you can partition your applications into parts and update/evolve these parts independently of each other. You can upgrade parts of your application without having to take down the entire application. There is far better life-cycle management in OSGi than you get with standalone Java processes.
If you think of creating an application that will evolve over time, then OSGi is something you should consider. And ServiceMix provides you a runtime OSGi container to deploy your applications to. I highly recommend the book "OSGi in Action" from Manning.
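For a feel of what a bundle's life cycle looks like, here is a minimal activator sketch (the class name is hypothetical); each part of the application is a bundle that can be started, stopped, and upgraded independently of the rest:

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    public class OrderServiceActivator implements BundleActivator {

        @Override
        public void start(BundleContext context) {
            // Wire up this part of the application, e.g. register a service
            // in the OSGi service registry for other bundles to consume.
            System.out.println("Order service bundle started");
        }

        @Override
        public void stop(BundleContext context) {
            // Release resources; the rest of the application keeps running.
            System.out.println("Order service bundle stopped");
        }
    }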
For tying together different external services that might even use different transport protocols I recommend Apache Camel, which btw also deploys nicely into ServiceMix.
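As a rough sketch of what that looks like (the endpoint URIs are placeholders, and the transacted consumption assumes a transaction manager is configured), a Camel route can consume from an ActiveMQ queue and forward to an HTTP service in a few lines:

    import org.apache.camel.builder.RouteBuilder;

    public class OrderIntegrationRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Consume messages transactionally from a JMS queue and
            // forward them to an external REST endpoint.
            from("activemq:queue:incoming.orders")
                .transacted()
                .to("http://partner.example.com/api/orders");
        }
    }

Other transports (SOAP via camel-cxf, XMPP, plain sockets via camel-netty, ...) plug into the same routing DSL as additional components.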
Btw, existing applications can be deployed into an OSGi container with fairly little effort (without requiring code changes).
Torsten Mielke
FuseSource
Web: www.fusesource.com
Blog: http://tmielke.blogspot.com