How do I bind microservices in Lagom to something other than localhost?

I am developing several microservices using Lagom. Each exposes a set of REST endpoints. All default implementations that I have seen bind to localhost. How do I bind specific services to different hosts?

You can use the lagomServiceAddress setting key in sbt or the serviceAddress tag in Maven. For more information, see here. Note that this applies to the development environment only; for production deployment, a container orchestration mechanism is recommended.

Related

Is there a way to configure MDBs programmatically?

I am currently working on an EJB 3.1-based project running on GlassFish that uses a custom-built framework to configure the functionality of our session beans. Using this we can enable, disable and re-configure most of the services at run time. Unfortunately we can't extend this to support configuration of MDBs. I would like to set the selector an MDB uses based upon the configuration information and re-configure it if the settings change.
Unfortunately, the only alternative I could come up with is a session bean that creates MessageConsumers directly on the JMS sessions based upon the configuration and has the JMS messages handled by MessageListeners, but I was told that this way we would lose the concurrency and transaction handling of the EJB container, since we would no longer be using MDBs.
So is there any way to do what I'm looking for using MDBs? Someone told me there are some planned extensions in new EJB and JMS spec drafts, but I couldn't find a pointer to that particular topic.
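For reference, the workaround I prototyped looks roughly like this (a simplified sketch of the session-bean approach; the JNDI names, selector and listener logic are placeholders):

    import javax.annotation.PostConstruct;
    import javax.annotation.PreDestroy;
    import javax.annotation.Resource;
    import javax.ejb.Singleton;
    import javax.ejb.Startup;
    import javax.jms.*;

    // Simplified sketch of the session-bean workaround (EJB 3.1, JMS 1.1 API).
    // The JNDI names, selector and listener logic are placeholders.
    @Singleton
    @Startup
    public class ConfigurableConsumerBean {

        @Resource(lookup = "jms/myConnectionFactory")
        private ConnectionFactory connectionFactory;

        @Resource(lookup = "jms/myQueue")
        private Queue queue;

        private Connection connection;

        @PostConstruct
        public void start() {
            // In reality the selector would be read from our configuration framework.
            reconfigure("eventType = 'ORDER'");
        }

        // Called again whenever the configured selector changes.
        public synchronized void reconfigure(String selector) {
            stop();
            try {
                connection = connectionFactory.createConnection();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer consumer = session.createConsumer(queue, selector);
                consumer.setMessageListener(new MessageListener() {
                    public void onMessage(Message message) {
                        // Handle the message here; note there is no MDB pooling
                        // or container-managed transaction, as described above.
                    }
                });
                connection.start();
            } catch (JMSException e) {
                throw new IllegalStateException("Could not (re)configure JMS consumer", e);
            }
        }

        @PreDestroy
        public synchronized void stop() {
            if (connection != null) {
                try {
                    connection.close();
                } catch (JMSException e) {
                    // Ignored; we are dropping the consumer anyway.
                }
                connection = null;
            }
        }
    }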
No, MDBs are configured by the Deployer at deploy-time.
Similar question answered here: Configuring MappedName annotation in Message Driven Bean dynamically
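To illustrate the point, the selector is part of the MDB's activation configuration, which is fixed when the bean is deployed and cannot be changed from application code at run time. A minimal sketch (the mapped name and property values are examples only):

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    // The selector is part of the activation configuration and is fixed at
    // deployment; the mappedName and property values here are examples only.
    @MessageDriven(mappedName = "jms/myQueue", activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "messageSelector",
                                  propertyValue = "eventType = 'ORDER'")
    })
    public class OrderEventMdb implements MessageListener {
        public void onMessage(Message message) {
            // process the message
        }
    }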

Mule inter-app communication in the same instance

From what I have read about Mule, apps have to use the TCP, HTTP or JMS transports to communicate among themselves, even if they are deployed in the same Mule instance.
The VM transport isn't supported for this.
However, I find this a bit contradictory to ESB principles: ideally we should be able to define endpoints in an ESB and connect to them using any transport. I may be wrong.
Also, since all the apps share the same JVM, one would expect to be able to communicate via the in-memory VM queue rather than relying on HTTP, which is transactionless, or TCP, where the number of connections one can make depends on server resources. Even with JMS we need to define and manage another queue, and under heavy usage that may have an impact on performance. Though I agree that for distributed and clustered systems HTTP or JMS may be the only options.
Is there any plan to support VM as an inter-app communication transport, or is there any other way for a flow to communicate with a flow endpoint in a different app?
EDIT: Answer from MuleSoft
http://forum.mulesoft.org/mulesoft/topics/concept_of_endpoint_and_inter_app_communication
Yes, we are thinking about inter-app communication for a future release.
It is still not clear when we are going to do it, but we have a couple of ideas on how we want this feature to behave. We may create a server-level configuration in which you can define resources to use in all your apps. There you would be able to define a VM connector and use it to send messages between apps in the same server.
As I said, this is just an idea.
Regarding the use of VM for inter-app communication, only MuleSoft can answer whether that will be added as a feature in the future.
I don't think it's contradictory to the ESB principle. The "container" feature is pretty well defined in chapter 6 of David A. Chappell's "Enterprise Service Bus" book. The container should try its best to keep the applications isolated.
This provides benefits like "independently deployable integration services" (same chapter), easier clustering, and other goodies.
You should approach same-VM inter-app communication as if it were between apps placed on different servers.
It seems that Mule 3.5 added a feature to enable communication between apps deployed in the same server. However, sharing a VM connector is only available in the Enterprise Edition.
Info:
http://www.mulesoft.org/documentation/display/current/Shared+Resources#SharedResources-DefiningDomains
Example:
http://blogs.mulesoft.org/optimize-resource-utilization-mule-shared-resources/

A bit confused about .NET's WCF configuration basics

I'm new to WCF, so I opened up 2 projects: WCF class library and a host console application.
Now, both projects have an app.config file to store the WCF service configuration settings.
As it seems to me now, and correct me if I'm wrong, I have redundancy in configuring both projects with WCF settings.
How is this done in real-world production software? Is the WCF service kept in a separate *.dll library, or is it implemented within the host project (so that its configuration lives in a single place)?
Thank you.
EliorCohen's answer is correct, but I wanted to expand on a couple of points.
First, you're building a WCF class library - libraries don't use configuration files on their own. They use the configuration file of the calling application. This is something that I've seen cause a lot of confusion for developers, especially if they create a new class library and see an App.config file in the project.
Second, with WCF 4 you can actually host a service without specifying anything in the configuration file. The runtime will add default endpoints based on the URIs supplied when you construct the service host.
You can also set up default bindings and behaviors that will override the normal defaults - for example, if all of your services will be handling large requests, you might want to define a default binding with larger limits (by omitting the name attribute in the binding configuration).
WCF is wonderful in that it has a lot of options - but that blessing is also a curse at times, especially when you first start working with it.
For more information on default endpoints and stuff, see A Developer's Introduction to Windows Communication Foundation 4.
Note that you'll still need a configuration file for any client apps.
The WCF project you are building represents an implementation of the service.
The configuration needs to be in the host of the service (your host app).

DOSGI Support in Glassfish

I am using OSGi with GlassFish 3.0.1. We use Jersey REST as the resource layer. We have lots of OSGi services, and we are planning to decouple them and deploy them in a cloud. One way to do this is by making HTTP REST calls. But we would like to make service-to-service calls at the API level. One way to do this is by using DOSGi, but GlassFish 3.0.1 doesn't seem to support DOSGi. Any other suggestions?
I believe that GlassFish contains Apache Felix, which is a fully compliant OSGi framework. Therefore you do not need explicit support from GlassFish in order to use a set of bundles that provide Remote Services (the name "DOSGi" is now deprecated). Indeed, this is kind of the point of OSGi!
Anyway the next obvious question is which Remote Services implementation to choose. I would advise you NOT to use CXF since it is too buggy and unmaintained. That leaves Eclipse ECF or Paremus RSA.
(Disclaimer: the Paremus implementation is commercial and I work for Paremus).
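Whichever implementation you choose, exporting a service looks the same: you register it with the standard Remote Services properties and the installed implementation picks it up. A minimal sketch, assuming an illustrative GreetingService interface and implementation defined in your own bundle:

    import java.util.Dictionary;
    import java.util.Hashtable;
    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    // Registers a service with the standard Remote Services export property;
    // GreetingService and GreetingServiceImpl are illustrative types assumed
    // to be defined elsewhere in this bundle.
    public class Activator implements BundleActivator {

        public void start(BundleContext context) {
            Dictionary<String, Object> props = new Hashtable<String, Object>();
            // Standard Remote Services property: which interfaces to export.
            props.put("service.exported.interfaces", "*");
            // Additional properties (configuration types, endpoint address, etc.)
            // depend on the Remote Services implementation you install.
            context.registerService(GreetingService.class.getName(),
                                    new GreetingServiceImpl(), props);
        }

        public void stop(BundleContext context) {
            // The service registration is cleaned up automatically on stop.
        }
    }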

How to configure a running Mule service's properties dynamically?

I would like a recommendation/idea on a method to configure properties for a running Mule service dynamically, i.e. I want the service to pick up the new settings without the need to restart Mule. Typically the kind of properties/settings I would like to change are FTP connector user ID, passwords, service URLs etc.
Any idea would be welcome.
Regards, Ola
Use the URI endpoint format to dynamically address endpoints. In simple cases you may be able to use the message properties in a TemplateEndpointRouter.
Otherwise you need to write a component that composes the URI and sends the message to the dynamic endpoint using the MuleEventContext or MuleClient (see the sketch after the links below).
See here:
http://www.mulesoft.org/documentation/display/MULE2USER/Outbound+Routers#OutboundRouters-TemplateEndpointRouter
http://www.mulesoft.org/documentation/display/MULE2USER/Using+the+Mule+Client#UsingtheMuleClient-PerforminganEventRequestCall
http://www.mulesoft.org/documentation/display/MULE2USER/Mule+Endpoint+URIs
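As a rough sketch of that component approach (written against the Mule 2 Java API; the property name, host and URI pattern are purely illustrative):

    import org.mule.api.MuleEventContext;
    import org.mule.api.MuleMessage;
    import org.mule.api.lifecycle.Callable;
    import org.mule.module.client.MuleClient;

    // Component that composes the endpoint URI from a message property and
    // dispatches to it; the property name and URI pattern are illustrative.
    public class DynamicEndpointComponent implements Callable {

        public Object onCall(MuleEventContext eventContext) throws Exception {
            MuleMessage message = eventContext.getMessage();

            // Build the target endpoint from data carried on the message.
            String host = message.getStringProperty("targetHost", "localhost");
            String uri = "tcp://" + host + ":5555";

            // Send synchronously to the dynamically composed endpoint.
            MuleClient client = new MuleClient(eventContext.getMuleContext());
            return client.send(uri, message.getPayload(), null);
        }
    }

The same idea works for FTP or HTTP endpoints; just compose the URI from whichever properties carry the dynamic parts.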
Mule exposes all service configuration via JMX, but I don't see any obvious way to reconfigure the connectors without a restart. They manage pools of connections internally.
If there is a limited set of configurations, you can create a connector for each and reconfigure the routes via JMX attributes.
If it is to be fully dynamic, you likely need to implement your own service component to manage the FTP connection. Exposing the connection management, configuration, and restarting via JMX should be pretty straightforward.
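For the fully dynamic case, a plain standard MBean is usually enough to expose the settings; a minimal sketch (the interface, attribute names and ObjectName are illustrative, and the real FTP handling would live in your service component):

    import java.lang.management.ManagementFactory;
    import javax.management.ObjectName;

    // Illustrative standard MBean interface (in its own file); the attribute
    // names are made up to match the kinds of settings mentioned above.
    public interface FtpConnectionConfigMBean {
        String getUser();
        void setUser(String user);
        String getServiceUrl();
        void setServiceUrl(String serviceUrl);
        void reconnect();   // re-create the FTP connection with the current settings
    }

    // Illustrative implementation (in its own file), registered with the
    // platform MBean server so the attributes can be changed at run time.
    public class FtpConnectionConfig implements FtpConnectionConfigMBean {

        private volatile String user = "anonymous";
        private volatile String serviceUrl = "ftp://example.org/inbox";

        public String getUser() { return user; }
        public void setUser(String user) { this.user = user; reconnect(); }

        public String getServiceUrl() { return serviceUrl; }
        public void setServiceUrl(String serviceUrl) { this.serviceUrl = serviceUrl; reconnect(); }

        public void reconnect() {
            // Close the current FTP connection and open a new one using the
            // current settings; the actual FTP handling belongs to the
            // service component that owns this object.
        }

        public void register() throws Exception {
            ManagementFactory.getPlatformMBeanServer()
                    .registerMBean(this, new ObjectName("myapp:type=FtpConnectionConfig"));
        }
    }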