Clustering of WSO2 EMM with redundant database

Before I blindly run into a wall and needlessly break my teeth: are the clustering guides available for WSO2 products also applicable to EMM?
So far I have managed to build a backend Percona MySQL cluster fronted by two HAProxy instances, but with only one WSO2 EMM server. I would like to extend this setup with at least two ELBs and two EMM nodes.
Thanks in advance.

Yes, you can follow the same guides; they are applicable to EMM as well.

Related

Organizing services data flow / EIP

Say I have around 1000 VMs running different services built with different technologies (Python, .NET, Java) and different middleware (RabbitMQ, Redis, etc.).
How can I dynamically handle the interactions between the services and provide scalability?
For example, say Service A pushes data to RabbitMQ, and that data is then processed by Service B, which fetches additional data from Service C. You can see that at the end I have a decentralized system that pulls data from one place and pushes it somewhere else... a total mess! Scale it up to 2000 microservices and it gets completely out of hand.
The moment I change one thing, a lot of other systems are affected.
Do you know of something, maybe like an ESB, where I can couple two services together with a message-transform adapter in the middle, and where I can change dependencies at runtime? For example, so that a stream no longer ends in Service F but in Service G instead?
I think microservices are a good idea because they can be stateless, can scale, and can easily be deployed as containers. But I don't know a good tool/program for managing the data flow. RabbitMQ alone doesn't support enough enterprise integration patterns. Do you have any advice?
How can I dynamically handle the interactions?
See if an existing EIP pattern solves your problem and use it to implement the logistics.
Depending on how your design shapes up, you may need to use distributed lock management.
Or maybe your application is simple enough to use a Consul K/V store as a semaphore plus a simple Mosquitto topic-based bus (a sketch of the semaphore idea follows).
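For illustration, here is a minimal Java sketch of the Consul K/V-as-semaphore idea, using only Consul's documented HTTP API (create a session, then acquire/release a key). The agent address, key name, and the crude JSON handling are assumptions made for the sketch, not a prescription:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ConsulSemaphore {
        static final HttpClient http = HttpClient.newHttpClient();
        static final String CONSUL = "http://localhost:8500"; // assumed local agent

        static String put(String path, String body) throws Exception {
            HttpRequest req = HttpRequest.newBuilder(URI.create(CONSUL + path))
                    .PUT(HttpRequest.BodyPublishers.ofString(body)).build();
            return http.send(req, HttpResponse.BodyHandlers.ofString()).body();
        }

        public static void main(String[] args) throws Exception {
            // Create a session, then try to acquire a K/V key as a lock.
            String json = put("/v1/session/create", "{\"TTL\": \"15s\"}");
            // Crude JSON extraction of the session ID, good enough for a sketch.
            String sessionId = json.replaceAll(".*\"ID\"\\s*:\\s*\"([^\"]+)\".*", "$1");
            String acquired = put("/v1/kv/locks/dataflow-step?acquire=" + sessionId, "owner-A");
            System.out.println("lock acquired: " + acquired.trim()); // "true" or "false"
            if ("true".equals(acquired.trim())) {
                // ... do the critical work, then release the lock:
                put("/v1/kv/locks/dataflow-step?release=" + sessionId, "");
            }
        }
    }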
Provide scalability?
What is it that you are trying to scale? AMQP, Consul, and "microservices" in themselves are very scalable and distributed.
However, to scale your thought process and devops, you need to find a way to see things as patterns that help you split the problem and tackle the complexity.
Do you know of something, maybe like an ESB, where I can couple two services together with a message-transform adapter in the middle and change dependencies at runtime?
Read up on EIP. ESBs are just one of many ways you can solve your problem. RTFM, and get some perspective.
But I don't know a good tool/program for managing the data flow.
Ask yourself whether your problem is really distributed workflow management, or whether a data pipeline is what you are actually looking for.
Look at Spark, Storm, Luigi, and Airflow - they each serve a different purpose - but you will know what to do with them if you manage to read up on everything else in this post ;) A small sketch of the transform-adapter idea follows.
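As an illustration of the message-translator / transform-adapter idea, here is a minimal sketch using Apache Camel's Java DSL with the camel-rabbitmq component. The broker address, exchange names, and the trivial transformation are assumptions for the example:

    import org.apache.camel.CamelContext;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class TransformAdapter {
        public static void main(String[] args) throws Exception {
            CamelContext ctx = new DefaultCamelContext();
            ctx.addRoutes(new RouteBuilder() {
                @Override
                public void configure() throws Exception {
                    // Message-translator EIP: consume Service A's events,
                    // transform them, and route onward. Retargeting the flow
                    // from Service F to Service G is a one-line change here,
                    // not a change inside either service.
                    from("rabbitmq://localhost:5672/serviceA.events?queue=adapter")
                        .transform(body().prepend("transformed: "))
                        .to("rabbitmq://localhost:5672/serviceG.in");
                }
            });
            ctx.start();
            Thread.sleep(60_000); // keep the demo route running for a minute
            ctx.stop();
        }
    }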

Updating Redis Click-To-Deploy Configuration on Compute Engine

I've deployed a single micro-instance Redis on Compute Engine using the (very convenient) click-to-deploy feature.
I would now like to update this configuration to run a couple of instances, so that I can benchmark how this increases performance.
Is it possible to modify the configuration while it's running?
The other option would be to add a whole new Redis deployment, bleed traffic onto it over time, and eventually shut down the old one. Not only does this sound like a pain in the butt, but I also can't see any way in the web UI to click-to-deploy multiple clusters.
I've got my learner's license with all of this, so I would also appreciate any general good-to-knows.
I'm on the Google Cloud team working on this feature and wanted to chime in. Sorry that no one replied to this for so long.
We are working on some of the features you describe, which should make the service more useful and powerful. Stay tuned on that.
I admit that to date there really is no good way to modify an existing deployment, other than launching a new cluster, migrating your data over, and redirecting reads and writes to the new cluster. This is a limitation we are working to fix.
As a workaround for creating two deployments with Click to Deploy for Redis, you could create a separate project.
Also, if you want to migrate to your own template using the Deployment Manager API (https://cloud.google.com/deployment-manager/overview), keep in mind that Deployment Manager does not have this limitation: you can create multiple deployments from the same template in the same project.
Chris

How to set send-buffer-size and receive-buffer-size in the Infinispan Hot Rod client and server

I am planning to use an out-of-process distributed caching solution, and I am trying the Infinispan Hot Rod protocol for this purpose. It performs quite well compared to other caching solutions, but I feel it is spending more time on network communication than expected.
We have a 1000 Mbps Ethernet network and the round-trip time between client and server is around 200 ms, but the Hot Rod protocol takes around 7 seconds to transfer a 30 MB object from server to client. I suspect I need to do some TCP tuning to reduce this time; can someone please suggest how I can tune TCP for the best performance? While googling I found that send-buffer-size and receive-buffer-size can help in this case, but I don't know how and where to set these properties.
Any help in this regard is highly appreciated.
Thanks,
Abhinav
By default, the Hot Rod client and server enable TCP-no-delay, which is good for small objects. For bigger objects, such as in your case, you might want to disable it so that the client/server can buffer and then send in bulk instead.
For the client, when you construct the RemoteCacheManager, try passing infinispan.client.hotrod.tcp_no_delay=false. The server needs a similar configuration option, and how it is configured depends on your Infinispan version. If you are using the latest Infinispan 6.0.0 release, you'll have to go to the standalone.xml file and change the endpoint subsystem configuration so that the hotrod-connector has its tcp-nodelay attribute set to false.
Send/receive buffers only apply when TCP-no-delay is disabled. These are also configurable via similar methods, but I'd only change them if you're not happy with the result once TCP-no-delay has been disabled.
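For example, a minimal client-side sketch (assuming the Infinispan 6.x Java Hot Rod client; the server address is a placeholder) that is the programmatic equivalent of setting the tcp_no_delay property:

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

    public class HotRodTuning {
        public static void main(String[] args) {
            ConfigurationBuilder cb = new ConfigurationBuilder();
            cb.addServer().host("cacheserver").port(11222) // placeholder address
              .tcpNoDelay(false);                          // let the stack coalesce writes for large values
            RemoteCacheManager rcm = new RemoteCacheManager(cb.build());
            RemoteCache<String, byte[]> cache = rcm.getCache();
            cache.put("big", new byte[30 * 1024 * 1024]);  // ~30 MB value, as in the question
            rcm.stop();
        }
    }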

Is Redis a bottleneck in SignalR + Redis when it comes to scaling out?

I'm interested in a SignalR + Redis solution for implementing a scalable server application. My concern is that Redis Cluster is not production-ready yet! So my question is:
Is Redis a bottleneck in SignalR + Redis when it comes to scaling out? If it is, is there any Linux-based solution that solves the problem?
On a single Redis server you can easily handle up to 10K concurrent clients using pub/sub (see the sketch after the links below). If you are still evaluating what to use, this should be more than you need at your current stage.
Redis Cluster is expected to be production-ready by the end of the year or early 2014. You can actually download and try it already; lots of people are using it now and reporting the odd bug. The creator of Redis is focused on making the cluster work, and as of now it is very mature.
By using a proxy you could have up to 1000 nodes simultaneously, each with over 10K clients on pub/sub, so 10 million concurrent users. The theoretical limit of the cluster is 16384 nodes, but a maximum of 1000 is recommended right now.
Unless you are at Facebook scale, you can probably use Redis for your use case (and even if you were at Twitter scale, given Twitter uses Redis intensively for storing all of its timelines).
I've been asked in a comment to add some references, so here are the relevant links:
On the number of concurrent connections per Redis process: http://redis.io/topics/clients
On how Twitter is using Redis: http://highscalability.com/blog/2013/7/8/the-architecture-twitter-uses-to-deal-with-150m-active-users.html
On cluster size/specs: http://redis.io/topics/cluster-spec
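To make the pub/sub pattern concrete, here is a minimal sketch using the Jedis Java client (the channel name and localhost server are assumptions; SignalR's Redis backplane does the equivalent internally, with every web node subscribed to the same channel):

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPubSub;

    public class PubSubSketch {
        public static void main(String[] args) throws Exception {
            // Subscriber: stands in for one web node of a scale-out; a message
            // published by any node reaches every subscribed node.
            new Thread(() -> {
                try (Jedis sub = new Jedis("localhost", 6379)) {
                    sub.subscribe(new JedisPubSub() {
                        @Override
                        public void onMessage(String channel, String message) {
                            System.out.println("node received: " + message);
                        }
                    }, "signalr.backplane"); // hypothetical channel; blocks this thread
                }
            }).start();

            Thread.sleep(500); // give the subscription time to register
            // Publisher: stands in for another node broadcasting a hub message.
            try (Jedis pub = new Jedis("localhost", 6379)) {
                pub.publish("signalr.backplane", "hello from node A");
            }
        }
    }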
Is Redis a bottleneck in SignalR + Redis when it comes to scaling out? If it is, is there any Linux-based solution that solves the problem?
I don't think so. Check the article below on how to scale out using Redis:
http://www.asp.net/signalr/overview/performance-and-scaling/scaleout-with-redis

Using an ESB system to replicate data among databases

I work for a small supermarket chain (4 stores). Each store has its own local database, which contains information on each product, prices, and the transactions that have occurred in that store. In addition, each store needs to replicate this information back and forth with a central location.
Right now we are using something called SQLRemote, which is a feature of Sybase's SQL Anywhere database. It works, but it sometimes fails and is difficult to manage. In fairness, SQLRemote wasn't actually designed for this type of scenario, so it could be said that we are using it incorrectly.
I was thinking that an ESB system such as Mule (or ChainBuilder, which seems easier to set up) might be a good alternative to SQLRemote. I understand that these systems can detect when changes occur in the database (i.e. when records are added, modified, or deleted) and can be set up to deliver a message within a transaction.
Would this be a viable solution for my scenario?
Best regards,
Edgard
Yeah, I am sure Mule should be able to do this.
However, I work for a company that provides Fuse ESB, which is built on Apache projects such as Apache ServiceMix, Apache ActiveMQ, Apache Camel, and Apache CXF.
We have a user story about a very big retailer in the US which uses Fuse ESB to integrate its stores, warehouses, and so on:
http://fusesource.com/collateral/17
Fuse ESB:
http://fusesource.com/products/enterprise-servicemix/
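To make that concrete, here is a minimal sketch of one direction of such a replication flow, using the Apache Camel Java DSL from the Fuse stack mentioned above. The table, column, and endpoint names are made up for the example, and it assumes a DataSource registered as "storeDb" plus the camel-sql and ActiveMQ components on the classpath:

    import org.apache.camel.builder.RouteBuilder;

    public class StoreToHeadOfficeRoute extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            // Poll the store DB for rows not yet replicated; onConsume marks
            // each row as replicated once it has been handed to the route.
            from("sql:select * from transactions where replicated = 0"
                    + "?dataSource=#storeDb"
                    + "&onConsume=update transactions set replicated = 1 where id = :#id")
                // Forward each row (a Map of column -> value) to a central queue
                // that the head-office side consumes to update the central DB.
                .to("activemq:queue:central.transactions");
        }
    }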
Yes, Mule can support this scenario, though it might be overkill. There are targeted database replication solutions out there. The advantage of Mule would be its ability to handle failures and other scenarios where the workflow needs to adapt based on what is happening. This lets you build a very robust solution.
Mule flows could be a very good choice to address this problem. Flows are a new feature of Mule 3, designed for orchestrating integrations like this.