Lagom without ConductR?

Is it practical or wise to use Lagom in production without ConductR? The commercial licensing is putting me off. This framework looks like it could be pretty arduous to deploy and custom tooling for that can take a lot of effort to get right.

(disclaimer: I'm a Lightbend employee, currently core member of the Lagom team)
Edit (Nov 2018): Please refer to https://www.lagomframework.com/documentation/current/java/ProductionOverview.html#Running-Lagom-in-production for up to date information on this topic.
(original answer, Aug 2017)
A lot has changed in the Lightbend stack since this question was added over a year ago. For example: ConductR is now free to use in production for up to three nodes.
Also, the team behind ConductR is working on providing tools to deploy a Lagom application on Kubernetes. The efforts on that front are well advanced, and some of our sample apps can already be deployed on Kubernetes.

Your question is rather open-ended and so let's start with "it depends".
If
you're comfortable managing your scaling within the configuration of your Akka cluster,
your usage doesn't violate the open-source licensing terms of Lagom, Play, and Akka, AND
you don't have sufficient cash flow to justify leveraging Lightbend's production suite,
then you arguably can deploy with a minimum of effort and custom tooling.
If those conditions don't hold, your options are to go elsewhere (e.g., Spring Cloud) or retain Lightbend. You may find going elsewhere has its own cognitive load and/or commercial expense.
Hope that helps even 7 months later.

The other answers are higher level, but I can essentially say "yes." I'm currently deploying a Lagom service to be hosted on a Kubernetes cluster, and I'm not using ConductR or any of the commercially licensed components.
You will need to dig a little bit into some Play internals to start the service properly, and if you want to hook in with some other service locator you may need to implement one yourself, but it certainly isn't impossible and I think it's less effort on net.

Honestly, it is worth paying, because you get a lot more: good reporting dashboards, automatic cluster formation, and, what's really cool, the split-brain resolver...
But I agree that when you're working on a project that doesn't have a lot of money, you can use a few small tricks to get it working, and maybe later add what's really lacking or buy the enterprise subscription.
So Lagom can quite easily be used without any service discovery at all if you delegate everything to Kubernetes DNS, or, without Kubernetes, by putting a load balancer in front of each service and using its address.
How this can work:
Each Lagom service you have should be treated as an external service.
The production run configuration should be mixed with ConfigurationServiceLocatorComponents.
A service that needs to communicate with another one should be started with an extra parameter that tells Lagom the other service is external and can be reached directly (this can all go in JAVA_OPTS, as for a Play application):
-Dlagom.services.your_service_name=http://k8s_service_name.default.svc.cluster.local:9000
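To make that step concrete for the Java DSL (the trait named above is the Scala DSL one): as far as I understand the Lagom docs, the rough equivalent is binding the configuration-based service locator in a Guice module. This is only a minimal sketch; the module name is made up and the package names should be checked against your Lagom version.

import com.google.inject.AbstractModule;
import com.lightbend.lagom.javadsl.api.ServiceLocator;
import com.lightbend.lagom.javadsl.client.ConfigurationServiceLocator;

// Hypothetical module; enable it in application.conf with
// play.modules.enabled += "com.example.ConfigurationServiceLocatorModule"
public class ConfigurationServiceLocatorModule extends AbstractModule {
    @Override
    protected void configure() {
        // The configuration-based locator resolves service names from the
        // lagom.services.* keys, e.g. the -D option shown above.
        bind(ServiceLocator.class).to(ConfigurationServiceLocator.class);
    }
}

With that in place, every lagom.services.<name> entry (whether set in application.conf or passed as a -D option) is used to resolve the corresponding service directly.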

Related

Filter incoming TCP packets in a web service on a PaaS environment

Advanced Attacks Detection in a Platform-as-a-Service (PaaS) Environment
In the first part of this project, I'm supposed to monitor incoming packets in a web service, accept only HTTP & HTTPS (TCP) packets for later analysis, and drop the rest.
I was thinking of doing this in Java, because I think it's a very flexible and complete language, and it's present in every PaaS environment! So my idea is to build a simple web page in JSP/JSF with a bean to handle this first step of the project.
This is where I need some guidance! I've started considering libpcap Java wrappers like jNetPcap, Jpcap and Pcap4J, but none of them is able to drop packets!
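For context, here is roughly what those wrappers give you: capturing and filtering, but no dropping. A minimal Pcap4J sketch (assuming Pcap4J 1.x; the interface name and filter are placeholders):

import org.pcap4j.core.BpfProgram.BpfCompileMode;
import org.pcap4j.core.PacketListener;
import org.pcap4j.core.PcapHandle;
import org.pcap4j.core.PcapNetworkInterface;
import org.pcap4j.core.PcapNetworkInterface.PromiscuousMode;
import org.pcap4j.core.Pcaps;

public class HttpSniffer {
    public static void main(String[] args) throws Exception {
        // Open the capture device (placeholder name) in promiscuous mode.
        PcapNetworkInterface nif = Pcaps.getDevByName("eth0");
        PcapHandle handle = nif.openLive(65536, PromiscuousMode.PROMISCUOUS, 10);

        // BPF filter: only HTTP/HTTPS TCP traffic reaches the listener.
        handle.setFilter("tcp port 80 or tcp port 443", BpfCompileMode.OPTIMIZE);

        // Observe each matching packet for later analysis; there is no way to
        // drop it here, which is exactly the limitation described above.
        PacketListener listener = packet ->
                System.out.println(packet.length() + " bytes captured");
        handle.loop(-1, listener);
    }
}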
Setting Java aside, I've also read about other libraries: libnet, libdnet and libcrafter.
libnet cannot handle the task!
libdnet has network firewall rule manipulation capabilities, but it's a very old library, and I'm not sure it can handle integration with iptables!
libcrafter looks the best, because it's an actively updated project and it allows the use of iptables rules in the code.
And, of course, working directly with netfilter would be the ideal scenario!
But to use libcrafter or netfilter while following my simple idea of a web service with a Java bean, I would have to write my own Java wrapper via JNI, which I assume is NOT a simple task!
Now, what is raising many doubts in my mind is the fact that this has to be done in a PaaS environment! PaaS providers don't all impose the same restrictions. Some, like AWS and Microsoft Azure, are more flexible and let you choose and manage a VM with the OS distro you want. Others, like OpenShift, Bluemix or Cloud Foundry, only give you the option of defining the programming language and application server, and that's it! So one might not have permission to install libraries or to control the network and transport layers to manage the packets, since the whole OS administration is handled by the provider.
Considering only the main purpose of this project, which is managing the packet flow pointed at a domain hosted in a PaaS environment, without the help of other servers like TCP proxies, I am desperately in need of someone pointing me to a direction to start from! With that, I can dig as deep as needed to get to a solution. Please HELP!
Thank you very much for your time and consideration.

Requirement to develop scalable web application

We're planning to develop a web-based Healthcare Practice Management System. Due to HIPAA, we're requested to deploy the app on our own premises. Our company is relatively small; we currently have only software engineers and no DevOps engineers, but we still want to develop the application to support horizontal scaling (adding more servers).
Planned to use:
Python3 (Django)
PostgreSQL
I'm looking for something like AppScale but with the freedom of choosing our own runtime, database and frameworks.
In other words from the software engineer's perspective:
Should provide an easy way to deploy a Django application
Should have a web-based dashboard to monitor and control (like AppScale)
Should make load balancing simple (for both the app and the database)
AppScale implements the Google App Engine APIs which, IMHO, make it super easy to develop web apps quickly and efficiently.
On top of that, you get auto-scaling, load balancing, and the ability to deploy on-premises and plug in any third-party library you need.
AppScale already comes with a dashboard and will soon be launching a new management service for your AppScale deployment(s).
If you're not particularly hung up on Python3 and PostgreSQL, all of the above seem to cover your requirements.
It's worth noting that opting for the GAE model means you opt for NoSQL, so Postgres is probably not the best option.
Disclaimer: I'm part of the AppScale team and we're already helping companies develop and deliver their apps in the HIPAA compliance realm.
I chose Kubernetes, a container orchestration technology designed for Docker, and also found that scaling is not just the responsibility of the platform the app is deployed on; it also depends on how the app is designed and coded. For that, the Twelve-Factor App methodology is really helpful.
But I won't deploy the database on Kubernetes, because Kelsey Hightower (author of Kubernetes: Up and Running) recommends against it in his talk. So, for now, I chose to deploy my database on a VM.

ZooKeeper alternatives? (cluster coordination service) [closed]

ZooKeeper is a highly available coordination service for data centers. It originated in the Hadoop project. One can implement locking, failover, leader election, group membership and other coordination tasks on top of it.
Are there any alternatives to ZooKeeper? (free software of course)
I've looked extensively at ZooKeeper/Curator, Eureka, etcd, and Consul. ZooKeeper/Curator and Eureka are in many ways the most polished and easiest to integrate if you are in the Java world. etcd is pretty cool and very flexible, but it is really just an HA key-value store, so you would have to write a lot of code to turn it into an opinionated service discovery system.
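To show why Curator is easy to integrate on the Java side, here is what one of its coordination recipes, a distributed lock, looks like. A minimal sketch; the connection string and lock path are placeholders.

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class LockDemo {
    public static void main(String[] args) throws Exception {
        // Connect to a (placeholder) ZooKeeper ensemble with retries.
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // A mutual-exclusion lock backed by a ZooKeeper znode path.
        InterProcessMutex lock = new InterProcessMutex(client, "/locks/demo");
        lock.acquire();
        try {
            System.out.println("holding the lock, doing critical work");
        } finally {
            lock.release();
            client.close();
        }
    }
}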
Consul is (to me) the best of both worlds. It is an opinionated service discovery system written on top of Serf, using Raft for cluster consensus and gossip for communication. It exposes the discovery/registration endpoints as a well-documented REST API, and also lets you discover services with DNS SRV records and register services via configuration (i.e., so you can register a database or application you can't integrate a client with, or if you just want to keep your service discovery decoupled from your app).
I've written a blog post about consul where you can learn more and walk through my "try it out" demo
I've also discussed service discovery with etcd & docker if you want to see more about what that custom code might look like.
One last thing! etcd and Consul are written in Go, so maintaining them is much easier than Java solutions like ZooKeeper. All you need is the Consul/etcd binary: no dependencies, no linked libraries, no JVM.
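To make the REST discovery part concrete, here is a minimal sketch that asks a local Consul agent (default HTTP port 8500 assumed) for the registered instances of a hypothetical service named "web"; the response is a JSON array with an Address and ServicePort per instance.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConsulLookup {
    public static void main(String[] args) throws Exception {
        // Query the catalog endpoint of the local Consul agent (Java 11+ HttpClient).
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8500/v1/catalog/service/web"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}

The same lookup can be done over DNS with an SRV query for web.service.consul against the agent's DNS port (8600 by default).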
There's a very promising alternative to ZooKeeper called etcd (github.com/coreos/etcd), written by the CoreOS team. Unlike Doozerd, etcd is being actively developed.
I just discovered Accord (C) and OpenReplica/ConCoord (Python), which may be interesting solutions.
[EDIT] The HashiCorp crew, of Vagrant and Packer fame, are cooking up "a decentralized solution for service discovery and orchestration" called Serf.
[EDIT2] HashiCorp strikes again! They just released Consul, built on top of Serf. The pitch: "a solution for service discovery and configuration, completely distributed, highly available, scalable to thousands of nodes and services across multiple datacenters".
Yes, there is also Doozerd (https://github.com/ha/doozerd). Take a good look at it; it's a nice, single-binary distributed coordination service developed by Heroku, with bindings/libraries for Java/Python/Ruby/Node. Very easy to get started with and play around with.
Take a look at Serf. There is a comparison vs Zookeeper here.
OpenReplica from my research group is a highly available FOSS coordination service for data centers. It can be used for implementing locking, fail over, leader election, group membership and other coordination services. It differs from ZooKeeper in two critical ways:
It uses an object-oriented API. This makes it much easier to write coordination services. Synchronization code for OpenReplica looks exactly like its textbook counterpart; there is no need to master a file and upcall-based API like in ZooKeeper and Chubby.
It enables dynamic membership updates to the replica set. There is no need for static configuration files. The system is integrated into DNS (authoritative, slave for OpenReplica, or Amazon Route 53).
We actively support the system, do not hesitate to let us know if you have further questions.
There's a project called Noah on github that looks interesting, it says that it's "loosely based on Apache ZooKeeper" https://github.com/lusis/Noah with REST support being a key feature (ZK has this as a contrib/option rather than built in).
There are different tools that optimize for different engineering trade-offs.
ZooKeeper: scales marginally for reads; writes with many observers can be slow. It is proven and has a sizable community.
Accord: seems interesting for write-intensive uses, but typical use cases already have domain-specific solutions (e.g., logging, telemetry).
The others are somewhat interesting but generally unproven. Don't get this choice wrong if it's intended for production usage.
I found this comparison of ZooKeeper, etcd and Doozer:
http://devo.ps/blog/zookeeper-vs-doozer-vs-etcd/
Serf (serfdom.io) is also a nice solution, as it is simple! But you must consider that Serf is just a cluster manager that enables you to send custom events to all cluster nodes. That's nice, but you have to write your own shell scripts (aka event handlers).
See this example: https://www.digitalocean.com/community/articles/how-to-set-up-a-serf-cluster-on-several-ubuntu-vps
The advantage is that you get a very simple cluster manager and you can combine it with your favorite configuration, deployment or continuous integration tool.
It seems Corosync is also like ZooKeeper.
I know this post is quite old, but for anyone looking at all possible alternatives I would also like to suggest the JGroups library, which is mature enough to be used in a production environment. I have used it successfully in one of my projects, mainly for distributed coordination and for sharing messages within a cluster. It also supports AWS, in addition to a flexible architecture where you can customize its protocol stack to get what you need. I suggest you have a look at it.
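To give a flavour of the JGroups suggestion, here is a minimal sketch (assuming JGroups 4.x with its default UDP protocol stack; the cluster name is arbitrary):

import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;

public class CoordinationDemo {
    public static void main(String[] args) throws Exception {
        // Join (or form) a cluster named "coordination-demo".
        JChannel channel = new JChannel();
        channel.setReceiver(new ReceiverAdapter() {
            @Override
            public void receive(Message msg) {
                System.out.println("received: " + msg.getObject());
            }
        });
        channel.connect("coordination-demo");

        // A null destination broadcasts the message to every cluster member.
        channel.send(new Message(null, "hello cluster"));

        Thread.sleep(1000);
        channel.close();
    }
}

Run a couple of instances on the same network and each one prints the messages the others send, which is the kind of cluster messaging described above.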

Why would I use Apache ServiceMix over just ActiveMQ

I am starting to plan a new platform which needs to integrate various services from various externals platforms. Essentially I'm tying together a bunch of internal, homegrown services and several outside services we license from 3rd parties.
Generally speaking, the external services are all web services, but they are a mishmash of REST, SOAP and XML-RPC.
Some of our internal services have REST APIs, but there are many things that aren't so easy: XMPP, Hessian, custom socket protocols, Java RPC, uWSGI, and the list goes on.
From my research it seems like an ESB like Apache ServiceMix might be a good fit for my needs. However it looks REALLY complex. I'm not launching rockets but I do need transactional messaging (mostly for eCommerce and entitlement stuff). I feel like the message queue ServiceMix uses under the hood (ActiveMQ) might be enough on its own.
Can anyone explain what ServiceMix provides above and beyond ActiveMQ? I know there is a lot, but it is hard for an ESB n00b like me to really grasp the tangible difference when I'm waist-deep in buzzwords.
Thanks!
ServiceMix is an OSGi-based container that allows you to deploy and run applications in a controlled runtime environment (like a J2EE container, but less heavyweight and without programming to, e.g., J2EE contracts).
Thanks to OSGi you can partition your applications into parts and update/evolve these parts independently of each other. You can upgrade parts of your application without having to take down the entire application. There is far better life cycle management in OSGi than you get with standalone Java processes.
If you think of creating an application that will evolve over time, then OSGi is something you should consider. And ServiceMix provides you a runtime OSGi container to deploy your applications to. I highly recommend the book "OSGi in Action" from Manning.
For tying together different external services that might even use different transport protocols I recommend Apache Camel, which btw also deploys nicely into ServiceMix.
Btw, existing applications can be deployed into an OSGi container with fairly little effort (without requiring code changes).
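To make the Camel suggestion concrete, here is a minimal sketch of a route that bridges an ActiveMQ queue to an HTTP service (assuming Camel 2.x with the activemq-camel and camel-http components on the classpath; the broker URL, queue name and target URL are placeholders):

import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class OrderBridge {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();

        // Register the ActiveMQ component against a (placeholder) broker URL.
        context.addComponent("activemq",
                ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Consume orders from a JMS queue and forward each message to
                // a REST endpoint; Camel handles the transport conversion.
                from("activemq:queue:orders")
                        .to("http://billing.internal.example/api/orders");
            }
        });

        context.start();
        Thread.sleep(60_000); // keep the route running for a minute in this demo
        context.stop();
    }
}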
Torsten Mielke
FuseSource
Web: www.fusesource.com
Blog: http://tmielke.blogspot.com

What tools can be used for testing WCF SOAP services with X.509 authentication?

Can anyone recommend a tool to help with the manual and automated testing of WCF SOAP 1.2 services that use X.509 certificates for authentication?
I've tried WCFStorm - and while it is reasonably close to what I need, it doesn't support X.509 authentication. SoapPanda (though free - my favourite price) didn't seem able to do anything WCFStorm couldn't do and was a lot more clunky.
SoapSonar looks good on the website - does anyone have experience with this? I've asked the IT people at my company to procure me a demo version to test, but it usually takes a long time for them to get round to installing it on my development machine.
Edit: I have written automated tests for my own quality control; however, I need to be able to hand over my SOA to be tested by our test team, and they need a UI to test it with, as they aren't so technical. I really don't want to have to build and maintain a UI for every service.
Any experience with any similar tools will be of interest to me.
Cheers
Ok, I'll answer my own question - maybe it will help someone one day - though judging by the lack of interest I'm guessing that this is an obscure case.
I've test-driven SoapSonar and it does indeed fit the bill. It has full support for X.509 certificate authentication. It's a bit on the pricey side, but it also has support for load testing and security testing.
It'll be a pretty good fit in my organisation as the testers can create test cases using SoapSonar independently of developers. They can manage suites of tests and ensure that there is no regression, so the testing effort should scale as we roll out more and more services.
The tester works through a UI to create the test cases without needing any programming knowledge which is handy.
Have you looked into SO-Aware? It is a service repository that helps you manage, track, discover, and test your services. It was built with WCF services in mind, but works with other services too.
For full disclosure, I'm the CTO of Tellago Studios, who developed SO-Aware, so I am biased, but I'm also interested in feedback. If you are looking for a pure client-side service testing tool that can work with secured services, please feel free to contact me directly ;)
I have used SoapSonar for a number of years in testing web services. However, I have only used the free version and have not tested it with WCF. I am getting ready to test it on a newly constructed WCF service; hopefully it will work.