I have a client-server microservice architecture: a config-server service (Spring Cloud Config) holds all the configurations and properties, which are fetched through a JDBC backend (PostgreSQL in my case), along with profiles.
I want to understand: if my config-server service fails, is there a chance that the dependent services will also fail, since they will no longer be able to fetch the required properties from it? If yes, how can I mitigate this issue?
I can think of implementing a cache, but I would have to work out the steps required to do so. Can someone help me clarify the above scenario?
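A minimal sketch of one possible safety net, assuming the standard Spring Cloud Config client's fail-fast and retry properties (spring-retry and spring-boot-starter-aop must be on the client's classpath; the values are illustrative):

    # bootstrap.properties of a config client (a sketch)
    # Fail immediately at startup if the config server cannot be reached:
    spring.cloud.config.fail-fast=true
    # Retry the initial fetch before giving up (requires spring-retry + spring-boot-starter-aop):
    spring.cloud.config.retry.max-attempts=6
    spring.cloud.config.retry.initial-interval=1000
    spring.cloud.config.retry.multiplier=1.1

As I understand it, services that are already running keep the properties they last fetched, so a config-server outage mainly threatens restarts and refreshes rather than live traffic.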
Can anyone point me to a resource on how to get spring-webflux and javamelody to work together?
It seems that a ServletContext is necessary for startup, which I don't have/need.
I'm aware of the metrics support that comes with Spring Boot 2.x, but I don't have anything to display the metrics with, and I'm locked into a company environment where just installing something isn't a valid option.
Thanks,
Henning
javamelody is mainly based on monitoring of memory, cpu, http requests, sql requests and spring components among other things. See javamelody-spring-boot-starter for example.
But as far as I know, Spring webflux does not use the servlet api. So what do you want to monitor?
If you just want to have graphs in a browser, then start an HTTP server for the javamelody reports, like in standalone. And if you also want to monitor SQL requests and Spring components, then add to your application all the methods from this example, except monitoringSessionListener and monitoringFilter.
A new spring-boot-starter for javamelody in webflux could be created if it makes sense.
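To illustrate that last part, a rough sketch of the Spring wiring, assuming the SpringDataSourceBeanPostProcessor, MonitoringSpringAdvisor and MonitoredWithAnnotationPointcut classes as used by javamelody-spring-boot-starter, and leaving out the servlet-bound beans:

    import net.bull.javamelody.MonitoredWithAnnotationPointcut;
    import net.bull.javamelody.MonitoringSpringAdvisor;
    import net.bull.javamelody.SpringDataSourceBeanPostProcessor;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class JavaMelodyConfig {

        // Wraps DataSource beans so that SQL requests are monitored.
        @Bean
        public SpringDataSourceBeanPostProcessor monitoringDataSourceBeanPostProcessor() {
            return new SpringDataSourceBeanPostProcessor();
        }

        // Monitors Spring beans or methods annotated with @MonitoredWithSpring.
        @Bean
        public MonitoringSpringAdvisor monitoringAdvisor() {
            return new MonitoringSpringAdvisor(new MonitoredWithAnnotationPointcut());
        }
    }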
Hi, I am new to MarkLogic and Apache. I have been given the task of using Apache as a load balancer for our MarkLogic cluster of 3 machines. The MarkLogic cluster is currently running on Linux servers.
How can we achieve this? Any information regarding this would be helpful.
You could use mod_proxy_balancer. How you configure it depends on which MarkLogic client you would like to use. If you would like to use the Java Client API, please follow the second example here to allow Apache to generate stickiness cookies. If you would like to use XCC, please configure it to use the ML-Server-generated or backend-generated "SessionID" cookie.
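A minimal sketch of that setup for the Java Client API case (hostnames and the port are placeholders; the Set-Cookie line follows the cookie-generation example in the mod_proxy_balancer documentation and needs mod_headers enabled):

    <Proxy balancer://mlcluster>
        BalancerMember http://ml-node1:8000 route=1
        BalancerMember http://ml-node2:8000 route=2
        BalancerMember http://ml-node3:8000 route=3
        ProxySet stickysession=ROUTEID
    </Proxy>
    ProxyPass        / balancer://mlcluster/
    ProxyPassReverse / balancer://mlcluster/
    # Have Apache issue the stickiness cookie itself:
    Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED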
The difference here is that XCC uses sessions whereas the Java Client API builds on the REST API which is stateless, so there are no sessions. However, even in the Java Client API when you use multi-request transactions, that imposes state for the duration of that transaction so the load balancer needs a way to route requests during that transaction to the correct node in the MarkLogic cluster. The stickiness cookie will be resent by the Java Client API with every request that uses a Transaction so the load balancer can maintain that stickiness for requests related to that transaction.
As always, do some testing of your configuration to make sure you got it right. Properly configuring Apache plugins is an advanced skill. Since you are new to Apache, your best hope of ensuring you got it right is to inspect the HTTP traffic from your application to MarkLogic Server with a monitoring tool like Wireshark, to make sure requests are going to the correct node in the cluster as expected.
Note that even with the client APIs (Java, Node.js) it's not always obvious or explicit at the language API layer what might cause a session to be created. Explicitly creating multi-statement transactions definitely will, but other operations may do so as well. If you are using the same connection for UI (browser) and API (REST or XCC), then the browser app is likely doing things that create session state.
The safest, but least flexible, configuration is "TCP Session Affinity". Where supported, it eliminates most concerns related to load balancing. Cookie Session Affinity relies on guaranteeing that the load balancer uses the correct cookie. Not all code is equal: I have had cases where the load balancer didn't always use the cookie provided. Changing the configuration to "Load Balancer provided Cookie Affinity" fixed that.
None of this is needed if all your communications are stateless at the TCP layer, the HTTP layer and the app layer. The latter cannot be inferred by the server.
Another concern is if your app or middle tier is co-resident with other apps, or with the same app, connecting to the same load balancer and port. It can be difficult to make sure there are no 'crossed wires'. When MarkLogic gets a request, it associates its identity with the client IP and port. Even without load balancers, most modern HTTP and TCP client libraries implement socket caching: a great performance win, but a hidden source of subtle, random, severe errors if the library or app shares "cookie jars" (not uncommon). A TCP and cookie-jar cache used by different application contexts can end up sending state information from one unrelated app in the same process to another. Mostly this happens in middle-tier app servers that simply pass on requests from the first tier without domain knowledge, relying on the low-level TCP libraries to "do the right thing". They are doing the right thing -- for the use case the library programmers had in mind -- so don't assume that your case is the one the library authors assumed. The symptoms tend to be very rare but catastrophic: transaction failures, possibly data corruption, and security problems (at the application layer), because the server cannot tell the difference between two connections from the same middle tier.
Sometimes a better strategy is to load balance between the first tier and the middle tier, and connect directly from the middle tier to MarkLogic, especially if caching is done at the load balancer. Caching is more commonly useful between the middle tier and the client than between the middle tier and the server. This is also more analogous to the classic 3-tier architecture used with RDBMSs, where load balancing sits between the client and business-logic tiers, not between the business-logic and database tiers.
The Spring documentation says that Spring Session can transparently leverage Redis to back a web application's HttpSession when using REST endpoints.
Does anyone know if Spring supports GemFire in place of Redis to back a web application's HttpSession?
Ref: http://docs.spring.io/spring-session/docs/current/reference/html5/guides/rest.html
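For comparison, the Redis-backed setup from that guide boils down to something like this (a sketch; class and annotation names per Spring Session's Redis documentation):

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
    import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

    @Configuration
    @EnableRedisHttpSession // registers the springSessionRepositoryFilter, backed by Redis
    public class HttpSessionConfig {

        // Connects to a Redis server on localhost:6379 by default.
        @Bean
        public JedisConnectionFactory connectionFactory() {
            return new JedisConnectionFactory();
        }
    }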
Not yet, ;).
However, I did spend a little time researching the effort involved to implement a GemFire adapter for Spring Session to back (store/replicate) an HttpSession. I still need to dig a little deeper and I will be tracking this effort in JIRA here (SGF-373).
Also know that GemFire already has support for HTTP server session replication using GemFire's HTTP Session Management Module.
Will post back when I have more details.
Will these 3 steps (at a high level) be sufficient to allow Spring Session to write to a GemFire repository instead of Redis?
Step 1: Implement a configuration class which provides all the functions of the annotation:
Allow Spring to load the configuration class
Register the Spring Session filter in the container
Establish the repository connection factory
Configure the repository connection
We will continue to re-use Spring Session's springSessionRepositoryFilter.
Step 2: Develop an equivalent GemfireOperationsSessionRepository implementing the SessionRepository interface (see the sketch after these steps).
Step 3: SessionMessageListener.java
3.1. Decide on a technique to identify and save delta changes in the session to the underlying repository.
3.2. See how session-expiry notifications from the underlying repository can be captured to invoke SessionDestroyedEvent and clean-up operations.
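For Step 2, a very rough skeleton of what I have in mind (the class name and region wiring are hypothetical; MapSession and SessionRepository are Spring Session 1.x types, Region is GemFire's):

    import org.springframework.session.MapSession;
    import org.springframework.session.SessionRepository;
    import com.gemstone.gemfire.cache.Region;

    public class GemfireOperationsSessionRepository implements SessionRepository<MapSession> {

        // GemFire region holding the sessions, keyed by session id.
        private final Region<String, MapSession> sessions;

        public GemfireOperationsSessionRepository(Region<String, MapSession> sessions) {
            this.sessions = sessions;
        }

        @Override
        public MapSession createSession() {
            return new MapSession(); // generates its own session id
        }

        @Override
        public void save(MapSession session) {
            // Step 3.1: a real implementation should write only the delta, not the whole session.
            sessions.put(session.getId(), session);
        }

        @Override
        public MapSession getSession(String id) {
            return sessions.get(id); // Step 3.2: expired sessions must be filtered out here
        }

        @Override
        public void delete(String id) {
            sessions.remove(id);
        }
    }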
I have explored the web on Mule and come to understand that for apps to communicate among themselves - even if they are deployed in the same Mule instance - they have to use the TCP, HTTP or JMS transports.
VM isn't supported.
However, I find this a bit contradictory to ESB principles. Shouldn't we ideally be able to define endpoints in an ESB and connect to them using any transport? I may be wrong.
Also, since all the apps share the same JVM, one would expect to be able to communicate via the in-memory VM queue rather than relying on a transactionless HTTP protocol, or on TCP, where the number of connections one can make depends on server resources. Even for JMS we need to define and manage another queue, and under heavy usage that may impact performance. Though I agree that for distributed and clustered systems, HTTP or JMS may be the only options.
Is there any plan to incorporate VM as an inter-app communication protocol, or is there any other way one flow can communicate with a flow endpoint in a different app?
EDIT: Answer from MuleSoft:
http://forum.mulesoft.org/mulesoft/topics/concept_of_endpoint_and_inter_app_communication
Yes, we are thinking about inter-app communication for a future release.
It is still not clear when we are going to do it, but we have a couple of ideas about how we want this feature to behave. We may create a server-level configuration in which you can define resources to use in all your apps. There you would be able to define a VM connector and use it to send messages between apps in the same server.
As I said, this is just an idea.
Regarding the usage of VM for inter-app communication, only MuleSoft can answer whether VM will get such a feature or not.
I don't think it's contradictory to the ESB principle. The "container" feature is pretty well defined in chapter 6 of David A. Chappell's "Enterprise Service Bus" book. The container should try its best to keep the applications isolated.
This provides some benefits like "independently deployable integration services" (same chapter), easier clustering, and other goodies.
You should approach same-VM inter-app communication as if it were between apps placed on different servers.
It seems that Mule 3.5 added a feature to enable communication between apps deployed on the same server, but sharing a VM connector is only available in the Enterprise edition.
Info:
http://www.mulesoft.org/documentation/display/current/Shared+Resources#SharedResources-DefiningDomains
Example:
http://blogs.mulesoft.org/optimize-resource-utilization-mule-shared-resources/
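From those pages, the idea is roughly the following (a sketch; the domain and connector names are placeholders, and the exact namespaces and schema locations should be taken from the Shared Resources documentation):

    <!-- mule-domain-config.xml of the shared domain -->
    <domain:mule-domain
            xmlns:domain="http://www.mulesoft.org/schema/mule/ee/domain"
            xmlns:vm="http://www.mulesoft.org/schema/mule/vm">
        <!-- VM connector visible to every app deployed into this domain -->
        <vm:connector name="sharedVmConnector"/>
    </domain:mule-domain>

Each app then declares the domain (e.g. domain=myDomain in its mule-deploy.properties) and references the shared connector from its VM endpoints via connector-ref="sharedVmConnector".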
I am working on a project which has the following requirements:
Perform sticky load balancing (based on SOAP session ID) onto multiple backend servers.
Possibility to plug in my own custom load balancer.
Easy to write and deploy.
A central configuration file (possibly XML) to take care of all the backend servers.
Easy extraction of a node from this configuration file (possibly with XPath).
I tried working with Camel for a while but wasn't able to perform certain tasks with it.
So I thought of giving Akka a try.
Will Akka be able to satisfy the above requirements?
If so, is there a load balancing or proxy example in Akka?
I would really appreciate some feedback.
You can do everything you've described with Akka.
You don't mention what language you're working with, Scala or Java. I've included links to the Scala documentation.
Before you do anything with Akka you HAVE TO read the documentation and understand how Akka works.
http://doc.akka.io/docs/akka/2.0.3/
Doing so, you'll find Akka is perfect for the project you've described with some minor caveats.
Once you read the documentation the following answers should make a lot of sense.
Perform sticky load balancing (based on SOAP session ID) onto multiple backend servers.
Load balancing is already part of the framework (it's called Routing in Akka http://doc.akka.io/docs/akka/2.0.3/scala/routing.html) and Remoting (http://doc.akka.io/docs/akka/2.0.3/scala/remoting.html) will take care of the backend servers. You can easily combine the two.
To my knowledge, sticky load balancing is not part of Akka out of the box, but I can envision accomplishing it with a Map using the session ID as the key and the actor name (or path) as the value. A quick actorFor will take care of the rest. Not fully thought out, but it should give you a good idea of where to start.
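A rough sketch of that idea in Java, using the classic UntypedActor API (the SoapRequest message type and its session-id accessor are made up for illustration):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import akka.actor.ActorRef;
    import akka.actor.UntypedActor;

    // Hypothetical request envelope carrying the SOAP session id.
    class SoapRequest implements java.io.Serializable {
        private final String sessionId;
        public SoapRequest(String sessionId) { this.sessionId = sessionId; }
        public String getSessionId() { return sessionId; }
    }

    public class StickyRouter extends UntypedActor {

        private final List<ActorRef> backends;  // routees, possibly remote
        private final Map<String, ActorRef> routes = new HashMap<String, ActorRef>(); // session id -> backend
        private int next = 0;

        public StickyRouter(List<ActorRef> backends) {
            this.backends = new ArrayList<ActorRef>(backends);
        }

        @Override
        public void onReceive(Object message) {
            if (message instanceof SoapRequest) {
                SoapRequest request = (SoapRequest) message;
                ActorRef target = routes.get(request.getSessionId());
                if (target == null) {
                    target = backends.get(next++ % backends.size()); // first request: plain round-robin
                    routes.put(request.getSessionId(), target);      // pin the session to that backend
                }
                target.tell(message, getSender()); // forward, preserving the original sender
            } else {
                unhandled(message);
            }
        }
    }

You would also want to evict map entries when a session ends, otherwise the map grows without bound.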
Possibility to plug in my own custom load balancer.
Refer to the Routing documentation.
Easy to write and deploy.
This depends on your aptitude and effort, but after you read certain parts of the documentation you should be able to build a proof of concept in a couple of hours.
Deployment can be a bit frustrating, mostly because the documentation isn't really great with respect to deploying Akka networks with remote components. However, there are enough examples on the web that you can figure out how to get it done...eventually. Once you've done it once, it's no big deal.
A central configuration file (possibly XML) to take care of all the backend servers.
Akka uses Typesafe Config (https://github.com/typesafehub/config), which is a lot easier to work with than XML (but I hate XML, so take that with a grain of salt). As far as central configuration goes, I'm not sure exactly what you're trying to accomplish, but it sounds like something that can be solved using remote actor creation. Again, see the Remoting documentation.
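For what it's worth, a central application.conf for the Akka 2.0.x era could look roughly like this (a sketch; host names, ports and the actor path are placeholders):

    akka {
      actor {
        provider = "akka.remote.RemoteActorRefProvider"
        deployment {
          # Create this actor on a remote backend node instead of locally:
          /backendWorker {
            remote = "akka://BackendSystem@backend-1.example.com:2552"
          }
        }
      }
      remote.netty {
        hostname = "frontend.example.com"
        port = 2553
      }
    }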
Easy extraction of a node from this configuration file (possibly with XPath).
Akka provides a lookup method .actorFor. There's no need to go to the configuration file once the system is up and running.
If so, is there a load balancing or proxy example in Akka?
Google is your friend.