I am looking for a way to propagate client details (like application name, host name, tenant, etc.) from a standalone client that is looking up and invoking remote EJBs on a WebLogic 12c server.
I am familiar with similar capabilities in JBoss (https://issues.jboss.org/browse/EJBCLIENT-61) that allow me to register an interceptor and pass additional metadata in the invocation context.
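For reference, the JBoss-side pattern I mean looks roughly like this (the context-data key and value are just examples):

import org.jboss.ejb.client.EJBClientInterceptor;
import org.jboss.ejb.client.EJBClientInvocationContext;

public class ClientDetailsInterceptor implements EJBClientInterceptor {

    @Override
    public void handleInvocation(EJBClientInvocationContext context) throws Exception {
        // Attach client metadata to the invocation; the server side can read it
        // from the invocation's context data.
        context.getContextData().put("client-application", "my-standalone-client");
        context.sendRequest();
    }

    @Override
    public Object handleInvocationResult(EJBClientInvocationContext context) throws Exception {
        return context.getResult();
    }
}

As far as I recall, such an interceptor can be registered programmatically on the EJB client context or via a META-INF/services/org.jboss.ejb.client.EJBClientInterceptor entry. I am essentially looking for the WebLogic equivalent of this.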
Is there a similar API for WebLogic?
Any pointers or alternatives are much appreciated.
I have a client-server based microservice architecture: a server-config (Spring Cloud Config) service holds all the configuration and properties, which are fetched through a JDBC backend (PostgreSQL in my case), along with profiles.
I want to understand whether, in case my server-config service fails, the dependent services are also likely to fail, since they will no longer be able to fetch the required properties from the server-config service. If yes, how can I mitigate this issue?
I could think of implementing a cache, but I would have to work through the steps required to do so. Can someone help me clarify the above scenario?
I'm using streamed service calls in Lagom. Since I upgraded to 1.4, error messages from the server are no longer propagated to the client over WebSockets. This works in tests using the Lagom test kit, but not when running a service using 'runAll' from sbt or in a live deployment.
Using 'runAll', all client calls that fail come back with "Peer closed connection with code 1011 'internal error'"
The issue here is fairly easy to diagnose. Lines 66-68 of akka-http 10.0.11's FrameOutHandler create the WebSocket closeFrame, throwing away the passed-in exception and returning "internal error", even though they have the exception message.
My problem is that although I can see the error, I can't see any easy way to fix it without patching akka-http. Is this something that should be supported in Lagom? It used to work in 1.3, when we used the Netty client.
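For context, a server-side streamed call that fails looks roughly like this in the Lagom Java API (class, method, and message names here are made up for illustration):

import java.util.concurrent.CompletableFuture;

import akka.NotUsed;
import akka.stream.javadsl.Source;
import com.lightbend.lagom.javadsl.api.ServiceCall;
import com.lightbend.lagom.javadsl.api.transport.BadRequest;

public class FailingStreamCall {

    // Implementation of a call declared in the service descriptor as
    // ServiceCall<NotUsed, Source<String, NotUsed>> (i.e. a streamed response).
    public ServiceCall<NotUsed, Source<String, NotUsed>> stream() {
        return request -> {
            // Fail the call on the server. With Lagom 1.3 (Netty) the message
            // reached the client; on 1.4 the WebSocket is closed with code 1011
            // and only "internal error" as the reason.
            CompletableFuture<Source<String, NotUsed>> failed = new CompletableFuture<>();
            failed.completeExceptionally(new BadRequest("descriptive error for the client"));
            return failed;
        };
    }
}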
Are you testing with another Lagom client connecting directly to the port that the service listens to, or using a web browser or some other client connecting through port 9000?
If it's the latter, you might also need to change the service gateway implementation back to Netty as described in the documentation on Default gateway implementation:
The Lagom development environment provides an implementation of a Service Gateway based on Akka HTTP and the (now legacy) implementation based on Netty.
You may opt in to use the old Netty implementation.
In the Maven root project pom:
<plugin>
    <groupId>com.lightbend.lagom</groupId>
    <artifactId>lagom-maven-plugin</artifactId>
    <version>${lagom.version}</version>
    <configuration>
        <serviceGatewayImpl>netty</serviceGatewayImpl>
    </configuration>
</plugin>
In sbt:
// Implementation of the service gateway: "akka-http" (default) or "netty"
lagomServiceGatewayImpl in ThisBuild := "netty"
In any case, please create an issue on GitHub and we can investigate a solution in the framework.
I have explored the web on Mule and came to understand that for apps to communicate among themselves, even if they are deployed in the same Mule instance, they have to use either TCP, HTTP or JMS transports.
VM isn't supported.
However, I find this a bit contradictory to ESB principles. Shouldn't we ideally be able to define endpoints in an ESB and connect to them using any transport? I may be wrong.
Also, since all the apps share the same JVM, one would expect to be able to communicate via the in-memory VM queue rather than relying on a transaction-less HTTP protocol, or on TCP, where the number of connections one can make depends on server resources. Even for JMS we need to define and manage another queue, and under heavy usage that may have an impact on performance. Though I agree that if we have distributed and clustered systems, HTTP or JMS may be the only options.
Is there any plan to incorporate VM as an inter-app communication protocol, or is there any other way one flow can communicate with a flow endpoint in a different app?
EDIT: Answer from MuleSoft
http://forum.mulesoft.org/mulesoft/topics/concept_of_endpoint_and_inter_app_communication
Yes, we are thinking about inter-app communication for a future release.
Still is not clear when we are going to do it but we have a couple of ideas on how we want this feature to behave. We may create a server level configuration in which you can define resources to use in all your apps. There you would be able to define a VM connector and use it to send messages between apps in the same server.
As I said, this is just an idea.
Regarding the usage of VM for inter-app communication, only MuleSoft can answer whether VM will be supported for that in a future release or not.
I don't think it's contradictory to the ESB principle. The "container" feature is pretty well defined in chapter 6 of David A. Chappell's "Enterprise Service Bus" book. The container should try its best to keep the applications isolated.
This will provide some benefits like "independently deployable integration services" (same chapter), easier clustering, and other goodies.
You should approach same-VM inter-app communications as if they were between apps placed on different servers.
It seems that Mule 3.5 added a feature to enable communication between apps deployed on the same server, but sharing a VM connector is only available in the Enterprise Edition.
Info:
http://www.mulesoft.org/documentation/display/current/Shared+Resources#SharedResources-DefiningDomains
Example:
http://blogs.mulesoft.org/optimize-resource-utilization-mule-shared-resources/
I would like a recommendation or idea for a way to configure properties for a running Mule service dynamically, i.e. I want the service to pick up new settings without having to restart Mule. Typically, the kinds of properties/settings I would like to change are FTP connector user IDs, passwords, service URLs, etc.
Any idea would be welcome.
Regards, Ola
Use the URI endpoint format to dynamically address endpoints. In simple cases you may be able to use the message properties in a TemplateEndpointRouter.
Otherwise, you need to write a component that composes the URI and sends the message to the dynamic endpoint using the MuleEventContext or MuleClient (see the sketch below the links).
See here:
http://www.mulesoft.org/documentation/display/MULE2USER/Outbound+Routers#OutboundRouters-TemplateEndpointRouter
http://www.mulesoft.org/documentation/display/MULE2USER/Using+the+Mule+Client#UsingtheMuleClient-PerforminganEventRequestCall
http://www.mulesoft.org/documentation/display/MULE2USER/Mule+Endpoint+URIs
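To make the second option concrete, here is a rough sketch of such a component against the Mule 2.x API (the property names and the FTP URI are purely illustrative, and exact method signatures may differ between Mule versions):

import org.mule.api.MuleEventContext;
import org.mule.api.lifecycle.Callable;
import org.mule.module.client.MuleClient;

// Illustrative component: builds the outbound URI from message properties at
// runtime and dispatches to it, instead of using a fixed outbound endpoint.
public class DynamicEndpointComponent implements Callable {

    public Object onCall(MuleEventContext eventContext) throws Exception {
        // "ftp.user", "ftp.password" and "ftp.host" are hypothetical properties
        // set earlier in the flow or read from a reloadable properties source.
        String user = (String) eventContext.getMessage().getProperty("ftp.user");
        String password = (String) eventContext.getMessage().getProperty("ftp.password");
        String host = (String) eventContext.getMessage().getProperty("ftp.host");

        String uri = "ftp://" + user + ":" + password + "@" + host + "/outbound";

        // Dispatch asynchronously to the freshly composed endpoint URI.
        MuleClient client = new MuleClient(eventContext.getMuleContext());
        client.dispatch(uri, eventContext.getMessage().getPayload(), null);

        return eventContext.getMessage();
    }
}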
Mule exposes all service configuration via JMX, but I don't see any obvious way to reconfigure the connectors without a restart. They internally manage pools of connections.
If there is a limited set of configurations, you can create a connector for each and reconfigure the routes via JMX attributes.
If it needs to be fully dynamic, you likely need to implement your own service component to manage the FTP connection. Exposing the connection management, configuration, and restarting via JMX should be pretty straightforward.
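Purely as an illustration of that last point (this is plain JMX, nothing Mule-specific, and all names are hypothetical), the settings holder could be a standard MBean whose attributes you edit from a JMX console and which your component re-reads on every poll:

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

public class FtpSettingsAgent {

    // Management interface: each getter/setter pair becomes a writable JMX attribute.
    public interface FtpSettingsMBean {
        String getUser();
        void setUser(String user);
        String getServiceUrl();
        void setServiceUrl(String serviceUrl);
    }

    // Holds the current values; add password etc. the same way.
    public static class FtpSettings implements FtpSettingsMBean {
        private volatile String user = "anonymous";
        private volatile String serviceUrl = "ftp://example.org/inbound";

        public String getUser() { return user; }
        public void setUser(String user) { this.user = user; }
        public String getServiceUrl() { return serviceUrl; }
        public void setServiceUrl(String serviceUrl) { this.serviceUrl = serviceUrl; }
    }

    public static void main(String[] args) throws Exception {
        FtpSettings settings = new FtpSettings();
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // StandardMBean wrapper avoids the Foo/FooMBean naming requirement for nested types.
        mbs.registerMBean(new StandardMBean(settings, FtpSettingsMBean.class),
                new ObjectName("example:type=FtpSettings"));

        // The component that opens FTP connections would call settings.getUser() and
        // settings.getServiceUrl() on each poll, picking up changes made over JMX.
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive so a JMX console can connect
    }
}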
We are working on developing a Java EE based application. Our application is Java 1.5 compatible and will be deployed to WAS ND 6.1.0.21 with the EJB 3.0 and Web Services feature packs. The configuration is currently one cell with two clusters. Each cluster will have two nodes.
Our application, or rather our system, I should say, comes in two or three parts.
Part 1: An EAR deployed to one cluster that contains 3rd-party vendor code combined with customization code. The vendor code is EJB 2.0 compliant and has a lot of remote home interfaces.
Part 2: An EAR deployed to the same cluster as the first. This EAR contains EJB 3.0 beans that make calls into the EJB 2.0 beans supplied by the vendor and the custom code. These EJB 3.0 beans are used by the JSF UI, which is also packaged in the EAR, and some of them are also exposed as web services (JAX-WS 2.0 with SOAP 1.2 compliance) for other clients.
Part 3: There may be other services that do not depend on our vendor/custom code app. These services will be EJB 3.0 beans and web services deployed to the other cluster.
Per a recommendation from some IBM staff on site here, communication between nodes in a cluster can be EJB RMI. But if we are going across clusters and/or other cells, then the communication should be web services.
That said, some of us are wondering about performance and about optimizing communication for speed in the applications that will use our web services and EJBs. Right now most EJBs are exposed as remote (and our vendor set theirs up that way, rather than also exposing local home interfaces). We are wondering if WAS does any optimizations between apps in the same node/cluster member. If two apps are installed in the same place and call each other via a remote home interface, is WAS smart enough to make it a local home interface call?
Are there other optimization techniques? Should we consider them? Should we not? What are the costs/benefits? Here is the question from one of our team members, as sent in their email:
The question is: supposing we develop our EJBs as remote EJBs, where our UI controller code is talking to our EXT Java services via EJB 3, what are our options for performance optimization when both the EJB server and client are running in the same container?
As one point of reference, Google has given me some very old WebSphere performance tuning documentation from 2000 that explains a tuning configuration you can set to enable "Call By Reference" for EJB communication when client and EJB are in the same application server JVM. It states the following:
Because EJBs are inherently location independent, they use a remote programming model. Method parameters and return values are serialized over RMI-IIOP and returned by value. This is the intrinsic RMI "Call By Value" model.
WebSphere provides the "No Local Copies" performance optimization for running EJBs and clients (typically servlets) in the same application server JVM. The "No Local Copies" option uses "Call By Reference" and does not create local proxies for called objects when both the client and the remote object are in the same process. Depending on your workload, this can result in a significant overhead savings.
Configure "No Local Copies" by adding the following two command line parameters to the application server JVM:
* -Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util
* -Dcom.ibm.CORBA.iiop.noLocalCopies=true
CAUTION: The "No Local Copies" configuration option improves performance by changing "Call By Value" to "Call By Reference" for clients and EJBs in the same JVM. One side effect of this is that Java object derived (non-primitive) method parameters can actually be changed by the called enterprise bean.
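To make that caution concrete: under "No Local Copies" a colocated call behaves like an ordinary in-JVM Java call, so the callee works on the caller's actual objects rather than on serialized copies. A plain-Java illustration (nothing WebSphere-specific here):

import java.util.ArrayList;
import java.util.List;

public class CallByReferenceSideEffect {

    // Stand-in for an EJB business method that (perhaps unintentionally) mutates its argument.
    static void process(List<String> entries) {
        entries.add("modified by callee");
    }

    public static void main(String[] args) {
        List<String> entries = new ArrayList<String>();
        process(entries);

        // An ordinary local call mutates the caller's own list, so this prints
        // [modified by callee]. A remote EJB call with default "Call By Value"
        // semantics would hand the bean a serialized copy and the caller's list
        // would remain empty; "No Local Copies" makes the colocated EJB call
        // behave like the local call shown here.
        System.out.println(entries);
    }
}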
Also, we will be using Process Server 6.2 and WESB 6.2 in the future. Any ideas or recommendations?
Thanks
The only automatic optimization that can really be done for remote EJBs is if they are colocated (accessed from within the same JVM). In that case, the ORB will short-circuit some of the work that would otherwise be required if the request needed to go across the wire. There will still be some necessary ORB overhead including object serialization (unless you turn on noLocalCopies, with all the caveats it brings).
Alternatively, if you know that the UI controller is colocated, your method calls do not rely on parameter or return value copying, and your interface does not rely on the exception differences between local and remote views, then you could create and expose a local subinterface that will be much faster than remote access through the ORB.
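As a sketch of that suggestion (all type names here are invented, and in a real project each public type would live in its own source file), an EJB 3.0 session bean can expose both views:

import javax.ejb.Local;
import javax.ejb.Remote;
import javax.ejb.Stateless;

@Remote
interface OrderServiceRemote {
    String placeOrder(String orderId);
}

@Local
interface OrderServiceLocal {
    String placeOrder(String orderId);
}

// Colocated callers (e.g. the JSF UI in the same EAR) use the local view and skip
// ORB marshalling; remote and web-service clients keep using the remote view.
@Stateless
public class OrderServiceBean implements OrderServiceLocal, OrderServiceRemote {
    public String placeOrder(String orderId) {
        return "placed:" + orderId;
    }
}

A colocated client would then inject the local view, e.g. @EJB OrderServiceLocal orderService, while nothing changes for existing remote clients. The vendor's EJB 2.x beans would need local (home) interfaces added to get the same benefit, which may not be an option for 3rd-party code.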