Apache ActiveMQ and iSCSI

We are planning to use an iSCSI target to handle an ActiveMQ master/slave setup. We will mount a SAN storage volume on two virtual machines using the iSCSI protocol, so the two VMs share the same mount (from the SAN). The question is: will file locking work properly with this approach? And can we anticipate any issues in this design?
Mounting over NFS would require a file server between the SAN and the VMs, so we are not considering that option and plan to use iSCSI. Any help would be greatly appreciated.

You must use a clustered, "shared disk" filesystem on the iSCSI LUNs. Conventional filesystems (ext3, XFS, NTFS, etc.) do not expect, or handle, the data changing out from underneath them; they simply won't work.
I don't have any particular one to recommend, but the most accessible of these shared-disk filesystems is probably GFS2. The Wikipedia page on clustered file systems lists several examples under the "Shared-disk file system" heading:
https://en.wikipedia.org/wiki/Clustered_file_system
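A quick way to validate such a setup is to try taking an exclusive lock on a file in the shared mount from both VMs at once, which is essentially what ActiveMQ's shared-file-system master election does. A minimal sketch, assuming a hypothetical mount point of /mnt/san/activemq:

    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    // Run this on both VMs against the same file on the shared mount.
    // On a working cluster filesystem exactly one VM should get the lock.
    public class SharedLockCheck {
        public static void main(String[] args) throws IOException, InterruptedException {
            Path lockFile = Path.of("/mnt/san/activemq/lock"); // hypothetical path
            try (FileChannel ch = FileChannel.open(lockFile,
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
                FileLock lock = ch.tryLock(); // non-blocking exclusive lock
                System.out.println(lock != null
                        ? "lock acquired: this node would be master"
                        : "lock held elsewhere: this node would be slave");
                Thread.sleep(60_000); // hold the lock while the second VM runs the same test
            }
        }
    }

If both VMs report the lock as acquired, the filesystem is not coordinating locks across nodes and cannot safely back a master/slave pair.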

Related

Chronicle Queue issue with certain boxes

In certain container boxes Chronicle Queue is not working. I am seeing this exception:
2018-11-17 16:30:57.825 [failsafe-sender] WARN n.o.c.q.i.s.SingleChronicleQueueExcerpts$StoreTailer - Unable to append EOF, skipping
java.util.concurrent.TimeoutException: header: 80000000, pos: 104666
at net.openhft.chronicle.wire.AbstractWire.writeEndOfWire(AbstractWire.java:459)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueStore.writeEOF(SingleChronicleQueueStore.java:349)
at
I want to understand why this happens only on certain VMs.
Note: we are using an NFS file system.
I have tried to understand the behavior on NFS.
Chronicle Queue does not support operating off any network file system, be it NFS, AFS, SAN-based storage, or anything else. The reason is that those file systems do not provide all the primitives required for the memory-mapped files Chronicle Queue uses.
Putting it another way: Chronicle Queue uses off-heap memory-mapped files, and these files rely on memory-mapped CAS-based locks. These CAS operations are usually not atomic between processes when using network-attached storage, and certainly not atomic between processes hosted on different machines. If your test sometimes works on certain combinations of file system and OS, it is possible your test did not hit a concurrency race, or that on some combination of NAS and OS the hardware and operating system honoured these CAS operations; however, we feel this is very unlikely. As a solution, we have created a product called Chronicle Queue Enterprise, a commercial product that lets you share a queue between machines using TCP/IP. Please contact sales#chronicle.software for more information on Chronicle Queue Enterprise.
For reliable distribution of data between machines you need to use Chronicle Queue Enterprise. NFS doesn't support atomic memory operations between machines.
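To make that concrete, the primitive in question looks roughly like the sketch below: a compare-and-set performed directly on a memory-mapped file (the file path is hypothetical). On a local disk, the OS maps every process on the machine onto the same physical page, so the CAS is atomic between processes; on NFS, each client machine works against its own page cache, so two machines can both appear to win the same lock.

    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.VarHandle;
    import java.nio.ByteOrder;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    // Sketch of a CAS-based lock word stored in a memory-mapped file.
    public class MappedCasDemo {
        public static void main(String[] args) throws Exception {
            Path file = Path.of("/mnt/nfs/lock-word.dat"); // hypothetical path
            try (FileChannel ch = FileChannel.open(file, StandardOpenOption.CREATE,
                    StandardOpenOption.READ, StandardOpenOption.WRITE)) {
                MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, 8);
                VarHandle vh = MethodHandles.byteBufferViewVarHandle(
                        long[].class, ByteOrder.nativeOrder());
                // Try to take the lock: 0 = free, otherwise the holder's PID.
                boolean acquired = (boolean) vh.compareAndSet(
                        map, 0, 0L, ProcessHandle.current().pid());
                System.out.println(acquired ? "lock acquired" : "lock already held");
            }
        }
    }

Two processes on one machine will serialize correctly on this lock word; two machines sharing the file over NFS will not, which is exactly the failure mode described above.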

High availability using cloud load balancing

I am looking for the best way to achieve high availability for my organization's applications. Since they contain sensitive information, the applications must reside inside my organization's data centers.
I was thinking of using Google load balancing to direct requests to my servers, but I don't think they can be pointed at external servers, just Google VMs. Does anyone know if that's true?
My other thought was that I could use Google load balancing to point to Google VMs running Nginx and have that load balance between my data centers. Does anyone know if that is feasible? Under this scenario, can I terminate SSL on my servers, or does it have to terminate at the Google VM?
Unfortunately, you are correct: you cannot use Google Cloud's network load balancing with external servers.
You could do your second option, but I'd strongly suggest you reconsider the approach: too many moving parts, and for what benefit? If a server goes down you lose session state anyway, so it may be better to use DNS load balancing instead.
FYI, I use Google load balancing and autoscaling; it works pretty well, but not perfectly (frequent 502 burps), which is probably why it's still in "Beta".
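For context on the DNS option: with round-robin DNS, the application's hostname resolves to one A record per datacenter, and clients distribute themselves across the addresses. A minimal sketch of the client's view, using a hypothetical hostname:

    import java.net.InetAddress;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    // Resolve all A records for a name and pick one at random, which is
    // roughly what DNS round-robin relies on clients doing.
    public class DnsRoundRobin {
        public static void main(String[] args) throws Exception {
            List<InetAddress> addrs = new ArrayList<>(
                    Arrays.asList(InetAddress.getAllByName("app.example.com")));
            Collections.shuffle(addrs); // avoid always hitting the first record
            System.out.println("connecting to " + addrs.get(0).getHostAddress());
        }
    }

The trade-off is that DNS alone gives you no health checking: a record pointing at a dead datacenter keeps receiving traffic until the record is pulled and resolver caches expire.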

GCP - CDN Server

I'm trying to architect a system on GCP for scalable web/app servers. My initial intention was to have one disk per web server group hosting the OS, and another hosting the source code, imagery, etc. My idea was to mount the OS disk on multiple VM instances so as to have exact clones of the servers, with one place to store PHP session files (so moving between different servers would be transparent and not cause problems).
The second idea was to mount a 2nd disk, containing the source code and media files, which would then be shared with 2 web servers, one configured as a CDN server and one with the main website and backend. The backend would modify/add/delete media files, and the CDN server would supply them to the browser when requested.
My problem arises from reading that a persistent disk is only mountable on a single VM instance with read/write access; if it's needed on multiple instances, it can be mounted only with read-only access. I need one of the instances to have read/write access and the others (possibly many) to have read-only access.
Would you be able to suggest ways or methods on how to implement such a system on the GCP, or if it's not possible at all?
Unfortunately, it's not possible.
But you can create a single-node file server and mount it as a read/write disk on the other VMs.
GCP has documentation on how to create a single-node file server.
An alternative to a persistent disk (which, as you said, only allows a single read/write mount or many read-only mounts) is Cloud Storage, which can be mounted through FUSE.
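If you go the Cloud Storage route, the web/CDN servers can mount the bucket read-only through FUSE while the backend writes media through the client library, which sidesteps the single read/write mount limitation entirely. A sketch using the google-cloud-storage Java client, with a hypothetical bucket and object name:

    import com.google.cloud.storage.BlobId;
    import com.google.cloud.storage.BlobInfo;
    import com.google.cloud.storage.Storage;
    import com.google.cloud.storage.StorageOptions;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // The backend uploads a media file; web servers then serve it from
    // their read-only FUSE mounts of the same bucket.
    public class MediaUpload {
        public static void main(String[] args) throws Exception {
            Storage storage = StorageOptions.getDefaultInstance().getService();
            BlobId id = BlobId.of("my-media-bucket", "img/banner.png"); // hypothetical names
            BlobInfo info = BlobInfo.newBuilder(id).setContentType("image/png").build();
            storage.create(info, Files.readAllBytes(Path.of("banner.png")));
            System.out.println("uploaded gs://my-media-bucket/img/banner.png");
        }
    }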

Real world example of Apache Helix, Zookeeper, Mesos and Erlang?

I am new to these technologies:
Apache ZooKeeper : ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.
Apache Mesos : Apache Mesos is a cluster manager that simplifies the complexity of running applications on a shared pool of servers.
Apache Helix : Apache Helix is a generic cluster management framework used for the automatic management of partitioned, replicated and distributed resources hosted on a cluster of nodes.
Erlang Language : Erlang is a programming language used to build massively scalable soft real-time systems with requirements on high availability.
It sounds to me that Helix and Mesos are both useful as cluster management systems. How are they related to ZooKeeper? It would be great if someone could give me a real-world example of their usage.
I am also curious to know how BOINC distributes tasks to its clients. Is it using any of the above technologies? (Forget about Erlang.)
I just need a brief overview :)
Erlang was built by Ericsson, designed for use in phone systems. By design, it runs hundreds, thousands, or even tens of thousands of small processes that handle tasks by sending information between one another instead of sharing memory or state. This enables all sorts of interesting features that are great for highly available distributed systems, such as:
Hot code reloading. Each process is paused, its relevant module code is swapped out, and it is resumed where it left off, so deploys can happen without restarting or causing significant interruption.
Easy distributed messaging and clustering. Sending a message to a local process or a remote one is fairly seamless in most instances.
Process-local GC. Garbage collection happens in each process independently instead of as a global stop-the-world event like Java's, aiding low-latency results.
Supervision trees and complex process hierarchies with monitoring and management.
A few concrete real-world examples that make great use of Erlang:
MongooseIM, a highly performant and incredibly scalable distributed XMPP/chat server.
Riak, a distributed key/value store.
Mesos, on the other hand, you can think of as a platform for turning a datacenter of servers into a shared substrate for teams and developers. Say I am a company that owns a datacenter with 10,000 physical servers, and I have 1,000 engineers developing hundreds of services: Mesos is a good way to let those engineers deploy and manage services across that hardware without needing to worry about the servers directly. It's an abstraction layer on top of the physical servers that allows you to share and intelligently allocate resources.
As a user of Mesos, I might say that I have Service X: an executable bundle that lives in location Y. Each instance of Service X needs 4 GB of RAM and 2 cores, and I need 8 instances, which will be attached to a load balancer. You specify this in configuration and deploy based on that config. Mesos finds hardware with enough RAM and CPU capacity available for each instance of that service and starts it running in each of those locations.
It can handle a lot of other more complex topics about the orchestration of them as well, but that's probably a bit in-depth for this :)
Zookeeper's most common use cases are service discovery and configuration management. You can think of it, fundamentally, a bit like a nested key/value store, where services can look at pre-defined paths to see where other services currently live.
A simple example: I have a web service using a shared database cluster. I know a simple name for that database cluster and where its configuration lives in zookeeper. I can look up (or repeatedly poll) that path in zookeeper to check the addresses of the active database hosts. On the other side, if I take a database node out of rotation and replace it with a new one, the config in zookeeper gets updated with the new address, and anything continually watching it will detect the change and reconnect accordingly.
A more complex use case for zookeeper is how Kafka uses it (or did at the time I last used Kafka). Kafka has streams, and streams have many shards. Each consumer of each stream uses zookeeper to save a checkpoint per shard after it has read and processed up to a certain point in the stream. That way, if the consumer crashes or is restarted, it knows where to pick up in the stream.
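A minimal sketch of both patterns with the plain ZooKeeper Java client; the connection string and znode paths are hypothetical, and parent znodes are assumed to already exist:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class ZkPatterns {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 3000,
                    event -> System.out.println("event: " + event));

            // Service discovery: read where the primary database lives,
            // registering a watch so the next change fires an event.
            byte[] addr = zk.getData("/services/db/primary", true, null);
            System.out.println("db primary at " + new String(addr));

            // Checkpointing, Kafka-style: persist how far this consumer has read.
            String ckpt = "/consumers/my-group/offsets/my-topic/0";
            if (zk.exists(ckpt, false) == null) {
                zk.create(ckpt, "0".getBytes(),
                        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            }
            zk.setData(ckpt, Long.toString(42L).getBytes(), -1); // -1 = any version
            zk.close();
        }
    }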
I don't know about Mesos and the Erlang language, but this article might help you with Helix and Zookeeper.
This article tells us:
Zookeeper is responsible for gluing all the parts together, while Helix is the cluster management component that registers all cluster details (the cluster itself, nodes, and resources).
The article is about clustering in jBPM using Helix and Zookeeper, but it will give you a basic idea of what Helix and Zookeeper are used for.
From most of the articles I have read online, it seems that Zookeeper and Helix are used together.
Apache Zookeeper can be installed on a single machine or on a cluster.
It can be used to keep track of logs. It can provide various services on a distributed platform.
Storm and Kafka rely on Zookeeper.
Storm uses Zookeeper to store all state so that it can recover from an outage in any of its (distributed) component services.
Kafka queue consumers can use Zookeeper to store information on what has been consumed from the queue.

GlusterFS as shared storage for ActiveMQ master/slave cluster

I want to set up an ActiveMQ cluster. As I encountered problems with the shared-nothing approach, I'd like to do it using a shared filesystem. However, the ActiveMQ documentation warns about possible problems related to filesystem locks. As I'm not sure, I'd like to ask whether GlusterFS would be a good choice for the shared filesystem.
Shared-storage master/slave requires that the underlying file system support network file locks. Going by the documentation, GlusterFS seems to support network locks (it's not 100% clear). Ultimately the best way to find out is to set it up and check.
If it doesn't, you still have the option of falling back to a shared JDBC-based store.
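With the JDBC store, an exclusive database lock takes the place of the file lock, so the shared filesystem's locking semantics no longer matter. A sketch of an embedded broker configured this way, assuming a MySQL data source with placeholder connection details:

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.store.jdbc.JDBCPersistenceAdapter;
    import org.apache.commons.dbcp2.BasicDataSource;

    public class JdbcMasterSlaveBroker {
        public static void main(String[] args) throws Exception {
            BasicDataSource ds = new BasicDataSource(); // placeholder credentials
            ds.setUrl("jdbc:mysql://db.example.com/activemq");
            ds.setUsername("amq");
            ds.setPassword("secret");

            JDBCPersistenceAdapter store = new JDBCPersistenceAdapter();
            store.setDataSource(ds);

            BrokerService broker = new BrokerService();
            broker.setPersistenceAdapter(store);
            broker.addConnector("tcp://0.0.0.0:61616");
            broker.start(); // a slave blocks here until it can obtain the database lock
            broker.waitUntilStopped();
        }
    }

Run the same broker on both nodes: whichever obtains the database lock first becomes the master, and the other waits as the slave until the lock is released.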