I have just a simple question to ask!
Does Alfresco 3.4e Community Edition support clustering ?
If yes, then what are the supported clustering methods (e.g. is JGroups supported)?
Regards,
It will work with Community, yes. There are a few little bits in Enterprise that'll make the setup and monitoring easier, which, coupled with the support you get, may mean you'd be better off going to Enterprise if you can.
You should probably start with this presentation to get you through the basics of Alfresco clustering. Once you've understood that, you'll likely want to read the Alfresco documentation on Setting up high availability systems, which covers the concepts, initial cluster config, setting up JGroups, etc.
You may also find it useful to read this guide on the Alfresco Wiki for instructions on setting it up, including how to configure JGroups as part of that process, if you haven't already.
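To give a rough idea, the cluster setup in that version mostly comes down to a handful of entries in alfresco-global.properties on each node; the property names, hosts and ports below are from memory, so treat them as a sketch and verify them against the wiki page above:

    # Sketch only -- verify property names against the 3.4 clustering docs.
    # The same cluster name on every node enables cache clustering.
    alfresco.cluster.name=my-alfresco-cluster

    # JGroups transport; TCP with an explicit host list is the usual choice
    # when UDP multicast is not available (hosts/ports here are placeholders).
    alfresco.jgroups.defaultProtocol=TCP
    alfresco.tcp.initial_hosts=node1[7800],node2[7800]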
Planning to migrate WebSphere from 7.0 to 9.0 and from 8.5 to 9.0.
Can anyone help me with the detailed process?
The migration here is in place (it will be done on the same servers where the old installations are).
If any migration tools need to be used, please provide clear information on them.
Any documentation or video references are appreciated.
OS used: RHEL
Current versions: WAS 7.x and 8.5
Migrating to: WAS 9.0
It sounds like you're in the very beginning stages of doing this migration. Therefore, I highly recommend you take some time to plan this out, especially to figure out the exact steps you'll be taking and how you'll handle something going wrong. For WebSphere, there is a collection of documents from IBM that discuss planning and executing the upgrade. Look around there for documentation on the tools and step by step guides for different kinds of topologies. The step by step guide for an in place migration of a cell is here.
You should make sure to take good backups before you start the process so you can restore to the pre-migration state if you need to.
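For example, WebSphere ships a backupConfig script in each profile's bin directory that archives the profile configuration; the paths and archive name below are just placeholders for a typical RHEL install:

    # Back up the deployment manager profile before touching anything
    # (profile path and archive name are hypothetical examples).
    /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin/backupConfig.sh \
        /backups/dmgr-config-before-migration.zip -nostop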
In addition to doing the upgrade, an important part is also making sure your applications will work on the new version, if you haven't already. IBM provides this tool to scan applications and identify potential issues that developers will have to fix. There is documentation for the tool at that link as well.
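As a rough illustration, the binary scanner from that toolkit is run against an application archive along these lines (the jar name and flags here are from memory, so double-check them against the toolkit's own help output; the paths are placeholders):

    # Hypothetical invocation of the Migration Toolkit for Application Binaries
    java -jar binaryAppScanner.jar /opt/apps/myapp.ear \
        --analyze --sourceAppServer=was70 --targetAppServer=was90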
If you are in the planning phase, I'd strongly suggest you consider migrating to WebSphere Liberty instead of traditional WAS v9. All these migration tools (the toolkit for binaries, the Eclipse migration toolkit) support both migration scenarios.
Choosing Liberty might be a bit more work at the beginning, but you will gain more deployment flexibility and speed up future development. Liberty is also much better suited to cloud/container environments, as it is much more lightweight, so if you would like to move to containers in the future, it would be much easier.
Check the tutorial Migrate traditional WebSphere apps to WebSphere Liberty on IBM Cloud Private by using Kubernetes. Although it shows the steps to migrate to Liberty on ICP, the beginning is the same: analyzing whether the applications are a fit for Liberty, then migrating them. If you don't have access to IBM Cloud or ICP, you can use the standalone version of the Transformation Advisor that was released recently, IBM Cloud Transformation Advisor.
Having said all that, some apps use old or proprietary traditional WebSphere APIs, and in that case it may be easier and cheaper to temporarily migrate them to WAS v9 and modernize in the future.
We currently need a portal solution. One of our service providers has already suggested that we develop the portal in Apache OFBiz.
Now I would like to know if Apache OFBiz is still state of the art or if it is already on the way down.
Or is there another technology we should consider?
Best Regards,
Foerstar
Apache OFBiz is a solid open-source framework that is actively maintained and updated by its community as part of the Apache Software Foundation.
While the OFBiz architecture was laid out years ago, it is still a modern framework that incorporates several pragmatic patterns and is designed to be flexible and extensible. Moreover, various components and technologies have been kept up to date or replaced with newer ones over the years.
It is impossible to tell you whether OFBiz is a good fit for your portal solution, because I don't know your specific needs, but my recommendation is to at least consider it, especially if your portal will publish content related to products or other business entities. If that is the case, the OFBiz universal data model will be a valuable resource that will help you achieve your goals efficiently and with high quality.
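To give a flavour of what building on that data model looks like, here is a minimal sketch of reading a product record through the Entity Engine's Java API; the package prefix assumes a recent OFBiz release (older releases use org.ofbiz.*), and the product id is a placeholder:

    // Minimal sketch against the Entity Engine query API.
    import org.apache.ofbiz.entity.Delegator;
    import org.apache.ofbiz.entity.GenericEntityException;
    import org.apache.ofbiz.entity.GenericValue;
    import org.apache.ofbiz.entity.util.EntityQuery;

    public class ProductLookup {
        // Looks up a Product record from the universal data model by its id.
        public static GenericValue findProduct(Delegator delegator, String productId)
                throws GenericEntityException {
            return EntityQuery.use(delegator)
                    .from("Product")
                    .where("productId", productId)
                    .cache(true)
                    .queryOne();
        }
    }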
Is it practical or wise to use Lagom in production without ConductR? The commercial licensing is putting me off. This framework looks like it could be pretty arduous to deploy and custom tooling for that can take a lot of effort to get right.
(disclaimer: I'm a Lightbend employee, currently core member of the Lagom team)
Edit (Nov 2018): Please refer to https://www.lagomframework.com/documentation/current/java/ProductionOverview.html#Running-Lagom-in-production for up to date information on this topic.
(original answer, Aug 2017)
A lot has changed in the Lightbend stack since this question was added over a year ago. For example: ConductR is now free to use in production for up to three nodes.
The team behind ConductR is also working on providing tools to deploy a Lagom application on Kubernetes. The efforts on that front are quite advanced, and some of our sample apps can already be deployed on Kubernetes.
Your question is rather open-ended and so let's start with "it depends".
If
you're comfortable managing your scaling within the configuration of your Akka cluster,
your usage doesn't violate the open-source licensing terms of Lagom, Play, and Akka, AND
you don't have sufficient cash flow to justify leveraging Lightbend's production suite,
then you arguably can deploy with a minimum of effort and custom tooling.
If those conditions don't hold, your options are to go elsewhere (e.g., Spring Cloud) or retain Lightbend. You may find going elsewhere has its own cognitive load and/or commercial expense.
Hope that helps even 7 months later.
The other answers are higher level, but I can essentially say "yes." I'm currently deploying a Lagom service to be hosted on a Kubernetes cluster, and I'm not using ConductR or any of the commercially licensed components.
You will need to dig a little bit into some Play internals to start the service properly, and if you want to hook in with some other service locator you may need to implement one yourself, but it certainly isn't impossible, and I think it's less effort on net.
Honestly, it really is worth paying, because you get much more: good reporting dashboards, automatic cluster formation, and, what is really cool, the split-brain resolver...
But I agree that sometimes, when you are working on a project that doesn't have a lot of money, you can use some small tricks to get it working, and maybe later add what is really lacking or buy an enterprise subscription.
So Lagom can quite easily be used without any service discovery at all if you delegate everything to Kubernetes DNS, or, without Kubernetes, just put a load balancer in front of each service and use its address.
How this can work:
Each Lagom service that you have should be declared as an external service.
The production run configuration should be mixed in with ConfigurationServiceLocatorComponents.
A service that needs to call another one should be started with an extra parameter that tells Lagom that the other service is external and can be reached directly (this can all go in JAVA_OPTS, as for a Play application), for example:
-Dlagom.services.your_service_name=http://k8s_service_name.default.svc.cluster.local:9000
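If you prefer not to pass JVM flags, the same mappings can live in the production application.conf read by the configuration-based service locator (the service names and the Kubernetes namespace below are placeholders):

    # Sketch: entries read by ConfigurationServiceLocatorComponents.
    # "userservice" and "orderservice" are hypothetical service names;
    # adjust the namespace and port to your cluster.
    lagom.services {
      userservice  = "http://userservice.default.svc.cluster.local:9000"
      orderservice = "http://orderservice.default.svc.cluster.local:9000"
    }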
ZooKeeper is a highly available coordination service for data centers. It originated in the Hadoop project. One can implement locking, fail over, leader election, group membership and other coordination issues on top of it.
Are there any alternatives to ZooKeeper? (free software of course)
I've looked extensively at ZooKeeper/Curator, Eureka, etcd, and Consul. ZooKeeper/Curator and Eureka are in many ways the most polished and easiest to integrate if you are in the Java world. etcd is pretty cool and very flexible, but it is really just an HA key-value store, so you would have to write a lot of code to turn it into an opinionated service discovery system.
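For example, a distributed lock with Curator is only a few lines of Java; the connection string and lock path below are placeholders for illustration:

    // Minimal Curator sketch: connect to ZooKeeper and take a distributed lock.
    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.locks.InterProcessMutex;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    public class LockExample {
        public static void main(String[] args) throws Exception {
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "zk1:2181,zk2:2181,zk3:2181",          // placeholder connection string
                    new ExponentialBackoffRetry(1000, 3)); // retry policy
            client.start();

            InterProcessMutex lock = new InterProcessMutex(client, "/locks/my-resource");
            lock.acquire();
            try {
                // critical section: only one process in the cluster gets here at a time
            } finally {
                lock.release();
            }
            client.close();
        }
    }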
Consul is (to me) the best of both worlds. It is an opinionated service discovery system written on top of Serf, using Raft for cluster consensus and gossip for communication. It exposes the discovery/registration endpoints with a well-documented REST API, and it also allows you to discover services with DNS SRV records and register services via configuration (e.g., so you can register a database or application you can't integrate a client with, or if you just want to keep your service discovery decoupled from your app).
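As a quick illustration of those two interfaces (the service name and agent address below are placeholders, assuming Consul's default ports):

    # Query the catalog over the HTTP API (default agent port 8500)
    curl http://127.0.0.1:8500/v1/catalog/service/web

    # Discover the same service via DNS SRV records (default DNS port 8600)
    dig @127.0.0.1 -p 8600 web.service.consul SRV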
I've written a blog post about Consul where you can learn more and walk through my "try it out" demo.
I've also discussed service discovery with etcd and Docker if you want to see more about what that custom code might look like.
One last thing! etcd and Consul are written in Go, so maintaining them is much easier than Java solutions like ZooKeeper. All you need is the consul/etcd binary: no dependencies, no linked libraries, no JVM.
There's a very promising alternative to ZooKeeper called etcd (github.com/coreos/etcd), written by the CoreOS team. Unlike Doozerd, etcd is being actively developed.
I just discovered Accord (C) and OpenReplica/ConCoord (Python), which may be interesting solutions.
[EDIT] The Hashicorp crew, of Vagrant and Packer fame, are cooking "a decentralized solution for service discovery and orchestration" called Serf.
[EDIT2] Hashicorp strikes again! They just released Consul, built on top of Serf. The pitch: "a solution for service discovery and configuration, completely distributed, highly available, scalable to thousands of nodes and services across multiple datacenters".
Yes, there is also Doozerd (https://github.com/ha/doozerd). Take a good look at it; it's a nice, single-binary distributed coordination service developed by Heroku, with bindings/libraries for Java/Python/Ruby/Node. Very easy to get started with and play around with.
Take a look at Serf. There is a comparison vs Zookeeper here.
OpenReplica from my research group is a highly available FOSS coordination service for data centers. It can be used for implementing locking, fail over, leader election, group membership and other coordination services. It differs from ZooKeeper in two critical ways:
It uses an object-oriented API. This makes it much easier to write coordination services. Synchronization code for OpenReplica looks exactly like its textbook counterpart; there is no need to master a file- and upcall-based API as in ZooKeeper and Chubby.
It enables dynamic membership updates to the replica set. There is no need for static configuration files. The system is integrated into DNS (authoritative, slave for OpenReplica, or Amazon Route 53).
We actively support the system, do not hesitate to let us know if you have further questions.
There's a project called Noah on GitHub that looks interesting; it says it's "loosely based on Apache ZooKeeper" (https://github.com/lusis/Noah), with REST support being a key feature (ZK has this as a contrib/option rather than built in).
There are different tools that optimize for different engineering trade-offs.
ZooKeeper: scales marginally for reads; writes with many observers can be slow. It is proven and has a sizable community.
Accord: seems interesting for write-intensive uses; however, typical use cases already have domain-specific solutions (e.g., logging, telemetry).
The others are somewhat interesting but generally unproven. Don't get this choice wrong if it is intended for production use.
I found this comparison of ZooKeeper, etcd, and Doozer:
http://devo.ps/blog/zookeeper-vs-doozer-vs-etcd/
Serf (serfdom.io) is also a nice solution, as it is simple! But you must consider that Serf is just a cluster manager that enables you to send custom events to all cluster nodes. That's nice, but you have to write your own shell scripts (a.k.a. event handlers).
See this example: https://www.digitalocean.com/community/articles/how-to-set-up-a-serf-cluster-on-several-ubuntu-vps
The advantage is that you get a very simple cluster manager that you can combine with your favorite configuration, deployment, or continuous integration tool.
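To give an idea, an event handler is just a script that Serf invokes with the event details in environment variables and the payload on stdin; the sketch below only logs events and is purely illustrative (paths are placeholders):

    #!/bin/sh
    # Hypothetical Serf event handler: logs every event Serf delivers.
    # Serf passes the event type in SERF_EVENT, the node name in SERF_SELF_NAME,
    # and any user-event payload on stdin.
    PAYLOAD=$(cat)
    echo "$(date) node=$SERF_SELF_NAME event=$SERF_EVENT payload=$PAYLOAD" >> /var/log/serf-events.log

    # Registered on the agent with something like:
    #   serf agent -event-handler=/usr/local/bin/handler.sh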
It seems Corosync is also like ZooKeeper.
I know this post is quite old, but for anyone looking at all possible alternatives, I would also like to suggest the JGroups library, which is mature enough to be used in a production environment. I have used it successfully in one of my projects, mainly for distributed coordination and for sharing messages across the cluster. It also supports AWS, in addition to a flexible architecture where you can customize its protocol stack to get what you need. I suggest you have a look at it.
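To give a feel for it, here is a minimal sketch against the JGroups 3.x/4.x-era API (the cluster name is a placeholder, and newer major versions changed the Message classes):

    // Join a cluster, print incoming messages, and broadcast one message.
    import org.jgroups.JChannel;
    import org.jgroups.Message;
    import org.jgroups.ReceiverAdapter;
    import org.jgroups.View;

    public class ClusterChat {
        public static void main(String[] args) throws Exception {
            JChannel channel = new JChannel();        // default protocol stack
            channel.setReceiver(new ReceiverAdapter() {
                @Override
                public void viewAccepted(View view) {
                    System.out.println("Members: " + view.getMembers());
                }
                @Override
                public void receive(Message msg) {
                    System.out.println("Received: " + msg.getObject());
                }
            });
            channel.connect("demo-cluster");          // placeholder cluster name
            channel.send(new Message(null, "hello from " + channel.getAddress()));
            Thread.sleep(5000);
            channel.close();
        }
    }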
Do you use Glassfish 2 or v3 in a production environment?
Do you find it robust?
Have you ever been able to find a complete set of documentation?
What do you do when you find that Glassfish ignores J2EE standards, like class and annotation scanning?
Glassfish is Sun's reference implementation for a J2EE app server. V3 supports the new 3.1 standard. However, it is only a preview; it is currently scheduled to be released on Dec 10, 2009. Of course, it can always be dangerous to be a very early adopter in a production environment. Currently V3 doesn't support JMS or clustering, for example, but they should be in the final release.
I've used V2 in production for about 3 years and I personally like it. The web admin console makes it very easy to manage (http://localhost:4848, admin, adminadmin), and the performance is good. Here's one example where someone benchmarked Glassfish: Blog. Of course, you should search for more examples, and your mileage may vary. Here's a Sun document for Glassfish to help Tomcat users.
One last thing I would add is that Sun ships and integrates both Tomcat and Glassfish in their Java IDE, NetBeans, so you can easily switch between the two app servers to test your particular app.
GlassFish Server V3 or V2 can be used in production environments, but the number of users should be fewer than 1500. It's not very robust or scalable under high load. For simple applications GF works perfectly fine; it is the reference implementation of the Java EE standards by Sun, which only serves as a guide for other vendors of application servers.
For more complex and high-load applications, it's better to go with IBM WebSphere Application Server. That's the most robust app server I have seen in my 15 years of experience.
Do I use GF in production? no.
Do I find it robust? Yes, but I do not tax it very hard.
Have I ever found a complete set of documentation? I think so... the GlassFish v2.1 docs and the GlassFish v3 docs (http://docs.sun.com/app/docs/prod/gf.entsvr.v3?l=en&a=view)
What do I do when GlassFish ignores the J2EE standards? I file an issue here: https://glassfish.dev.java.net/issues/
Do I use it in production? Yes. (Now using 3.0.1.)
Is it robust? Yes. But my point of view is that of someone who likes to follow the server's developer community and can try some tricks.
What about documentation? The official documentation is really good, and the developers' blogs are a great plus (http://blogs.oracle.com/theaquarium/). What is perhaps behind other communities at the moment is the collective experience material (like forums), but I think the mailing lists are good enough (http://glassfish.java.net/public/mailing-lists.html).