How do I find the connection information of a RabbitMQ server that is bound to a SCDF stream deployed on Tanzu (Pivotal/PCF) environment? - rabbitmq

This is a follow-up question of How to implement HTTP request/reply when the response comes from a rabbitMQ reply queue using Spring Integration DSL?.
We were able to build the Spring Integration application and the SCDF stream successfully locally. We could send an HTTP request to the RabbitMQ request queue bound to the SCDF stream rabbit source, and we could receive the response back from the RabbitMQ response queue bound to the SCDF stream rabbit sink.
We have deployed the SCDF stream into the PCF environment, which has a binding to an internal RabbitMQ broker. Now we need to specify the Spring RabbitMQ connection information in the Spring Integration application properties - currently it is using the default localhost:5672, which is no longer valid. Does anyone know how to get these RabbitMQ connection properties? We already checked the SCDF stream rabbit source/sink log files but couldn't find the information. We know we probably need to check internally with whoever set up SCDF/RabbitMQ in the PCF environment, but so far we haven't heard back from them.
Also, it appears we could take a different approach and bind both the SCDF stream and the integration application to a separate RabbitMQ instance (instead of using the existing one bundled with the SCDF configuration). Is that a recommended solution?
Thanks,

It is unclear whether you're using the SCDF tile or SCDF OSS (via manifest.yml) on PCF.
If you're using the OSS version and you provide the right RMQ service-instance configuration (one that you pre-created) in the manifest.yml, then SCDF will automatically propagate that RMQ service instance and bind it to the apps it deploys to your ORG/Space. You don't need to muck around with connection credentials manually.
On the other hand, if you are using the SCDF Tile, the SCDF service broker will auto-create the RMQ service instance and automatically bind it to the apps it deploys.
In summary, there's no reason to manually pass connection credentials or pack them as application properties inside your apps. All of this can be automated, provided it is configured correctly.
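For illustration, here is a minimal sketch of what the application side can look like once the RabbitMQ service instance is bound to the app. It assumes a Spring Boot app on PCF where the Java buildpack's auto-reconfiguration (or java-cfenv / Spring Cloud Connectors) applies the bound VCAP_SERVICES credentials to the auto-configured ConnectionFactory; note that no host, port, or credentials appear anywhere in the code:

import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitClientConfig {

    /*
     * No host, port, or credentials here. When the RabbitMQ service instance is
     * bound to the app, the connection details from VCAP_SERVICES are applied to
     * the auto-configured ConnectionFactory, so the localhost:5672 default is only
     * used when running locally without a binding.
     */
    @Bean
    public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
        return new RabbitTemplate(connectionFactory);
    }
}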

Related

Why did we only receive the response half of the time (round-robin) with "Spring Cloud DataFlow for HTTP request/response" approach deployed in PCF?

This issue is related to 2 earlier questions:
How to implement HTTP request/reply when the response comes from a rabbitMQ reply queue using Spring Integration DSL?
How do I find the connection information of a RabbitMQ server that is bound to a SCDF stream deployed on Tanzu (Pivotal/PCF) environment?
As you can see in the update to question 2 above, we can receive the correct response back from the rabbit sink. However, it only works half of the time, alternating in a round-robin way (success-timeout-success-timeout-...). The outside HTTP app was implemented with Spring Integration as shown in question 1 - it sends the request to the rabbit source queue and receives the response from the rabbit sink queue. This only happens in the PCF environment, after we deployed the outside HTTP app and created the stream there (see the POC stream below). Locally it works every time (NOT alternately). Did we miss anything? Not sure what the culprit is in PCF. Thanks.
rabbitSource: rabbit --queues=rabbitSource | my-processor | rabbitSink: rabbit --routing-key=pocStream.rabbitSink.pocStream
It sounds like you have several instances of your stream in that PCF environment, so there is more than one subscriber (round-robin feels like exactly two) on the same RabbitMQ queue. There must be only one consumer on that queue, because only the initiator of the request waits for the reply; with two consumers, every other reply is delivered to the consumer that is not waiting for it. I'm not offering this as a definitive answer, just as the best guess at what is going on, since you don't see the problem locally.
Please investigate your PCF environment and how it scales instances for your stream. There might also be an SCDF option that does the scaling for you.
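To make the competing-consumer effect concrete, here is a minimal sketch that simulates two app instances by running two listeners on the same queue in one process. The queue name is borrowed from the routing key in the POC stream above and is an assumption (the queue must already exist); the broker distributes deliveries between the two consumers, so a single requester waiting on that queue only sees every other reply:

import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class CompetingConsumersDemo {

    public static void main(String[] args) {
        SpringApplication.run(CompetingConsumersDemo.class, args);
    }

    // Two competing consumers on the same (pre-existing) queue, standing in for
    // two deployed instances of the reply-consuming app. Replies are split between them.
    @RabbitListener(id = "consumerA", queues = "pocStream.rabbitSink.pocStream")
    public void consumerA(Message reply) {
        System.out.println("A received: " + new String(reply.getBody()));
    }

    @RabbitListener(id = "consumerB", queues = "pocStream.rabbitSink.pocStream")
    public void consumerB(Message reply) {
        System.out.println("B received: " + new String(reply.getBody()));
    }
}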

PCF / Cloud connector for Rabbit management API

All,
I'm running a simple SpringBoot app in PCF using a Rabbit on-demand service. The auto reconfiguration of the ConnectionFactory for the internal Rabbit service works just fine.
However I need a list of all queues on the Rabbit host. AFAIK this is only available through a call to the Rabbit management plugin (a REST API), see RabbitManagementTemplate::getQueues. This class expects an http URI with credentials.
I know the URI plus credentials are exposed through the vcap.services variables as "http_api_uri", but I wonder if there's a more elegant way to get an instance of RabbitManagementTemplate with Spring magic (cloud connectors / auto-reconfiguration) instead of manually reading the env vars and writing custom bean config.
It seems the ConnectionFactory only knows about the AMQP interface, and cannot create a RabbitManagementTemplate?
Thanks!
Spring Cloud Connectors won't help you here. It doesn't support setting up RabbitManagementTemplate, only a ConnectionFactory.
You don't have to parse the env yourself; you can use the flattened properties that Boot provides, such as vcap.services.rabbitmq.credentials.http_api_uri. But you'll need to configure a RabbitManagementTemplate yourself using those Boot properties.
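For example, a sketch of wiring a RabbitManagementTemplate from those flattened Boot properties could look like the following. The service-instance name "rabbitmq" in the property key is an assumption (use the name of your bound instance), the credentials are assumed to be embedded in http_api_uri, and it relies on the (uri, username, password) constructor that this class provides in the spring-rabbit versions that include it:

import java.net.URI;

import org.springframework.amqp.rabbit.core.RabbitManagementTemplate;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitManagementConfig {

    // Boot flattens VCAP_SERVICES into vcap.services.<instance-name>.credentials.*;
    // "rabbitmq" below is an assumed instance name.
    @Bean
    public RabbitManagementTemplate rabbitManagementTemplate(
            @Value("${vcap.services.rabbitmq.credentials.http_api_uri}") String httpApiUri) {
        // http_api_uri typically embeds the credentials: https://user:pass@host/api/
        URI uri = URI.create(httpApiUri);
        String[] userInfo = uri.getUserInfo().split(":", 2);
        String plainUri = uri.getScheme() + "://" + uri.getHost()
                + (uri.getPort() > 0 ? ":" + uri.getPort() : "") + uri.getPath();
        return new RabbitManagementTemplate(plainUri, userInfo[0], userInfo[1]);
    }
}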

Consumer Proxy unable to pick up messages from queue due to service configuration in flux

The consumer proxy is not picking up messages from the queue. We have redeployed the service and restarted the servers, but it did not help. I am attaching the logs here.
<01-Mar-2019 10:39:53 o'clock GMT>
<01-Mar-2019 10:39:53 o'clock GMT>
According to Oracle support document 1573359.1:
CAUSE
The service has been re-deployed/changed while there were messages being processed. Review Doc ID 1571958.1 "OSB SBConsole Activation - Limitations for configuration or deployment changes in production" for other reasons that this error can occur.
SOLUTION
Stop consumption on the JMS queue, then delete and re-deploy the service:
Log in to Weblogic Console
Expand services -> Messaging -> JMS Modules -> Select the Queue your service is interacting with.
Select the Control tab
For both production and consumption, select pause.
Wait a short while (5 minutes) and restart the queue
Re-deploy your Proxy Services
If the messages still persist, please check config.xml and make sure that there is the correct number of applications with names starting with "ALSB". The correct number depends on the kinds of services you have deployed: JMS request-response, JMS plain request, JMS topic, etc.
The easiest way to make sure that config.xml is correct is to do the following:
Delete all the JMS proxies from OSB configuration
Open the WLS console, go to "Deployments", and make sure that there are no applications named "_ALSB_xyz" deployed. If any are present, delete them.
Re-deploy JMS proxies
Alternatively, check Note 1382976.1 to locate the related deployments. Delete any application deployments starting with "ALSB" which are not related to an actively deployed JMS proxy service.

Configuring RabbitMQ consumer as windows service

I am looking for the best way to implement a RabbitMQ consumer using the .NET client, which should run as a Windows service.
I referred to the RabbitMQ documentation and found how to consume messages using the .NET client (https://www.rabbitmq.com/tutorials/tutorial-one-dotnet.html).
My current scenario is this: RabbitMQ is installed on an AWS VM. I have to install the .NET client consumer service on an on-premise network, where it should consume the messages.
Which is the better way: always listening to the queue (AMQP protocol), or an HTTP API that gets messages on demand (https://pulse.mozilla.org/api/)?
Please advise.
Thanks,
Vinoth
I believe the answer is "neither." You should have your message queue as a back-end service behind the firewall, and expose your application functionality through a set of carefully-specified web services. The web services, which are exposed through the firewall but can communicate to services behind the firewall, would produce messages that would be transmitted to the server. Any services needing to produce or consume messages would need to do so via the web services, which would perform safety/security checking prior to forwarding the request on to the AMQP server.
If you need to expose AMQP directly to clients (i.e. that is the purpose of your app), then the recommendation is to do so via STOMP. I think a valid use case for exposing AMQP directly over the internet would be a rare thing to come across. The security implications of doing so would be immense.
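To illustrate the facade idea from the first paragraph, here is a minimal sketch written in Java/Spring purely for illustration (the same shape applies to a .NET service); the endpoint path, exchange, and routing key are hypothetical:

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Clients outside the firewall call this HTTP endpoint; only this service,
// which sits behind the firewall with the broker, ever speaks AMQP.
@RestController
public class OrderFacadeController {

    private final RabbitTemplate rabbitTemplate;

    public OrderFacadeController(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    @PostMapping("/orders")
    public void submitOrder(@RequestBody String orderJson) {
        // Perform validation / authorization here before anything reaches the broker.
        rabbitTemplate.convertAndSend("orders.exchange", "orders.new", orderJson);
    }
}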

How to check if a RabbitMQ server is alive using the REST API

I am totally new to the Spring framework. I am trying to create a project that connects to RabbitMQ, and before I publish a message I want to check whether the queues are alive. Is it possible to ping a queue to see if it is alive or not?
RabbitMQ has a management API. You can use it to check the status of queues, exchanges, and bindings.
If you are working in PHP, there is a library that can be used.
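As a concrete example, the management plugin exposes an aliveness check per vhost (GET /api/aliveness-test/{vhost}), which declares a test queue, publishes and consumes a message, and returns {"status":"ok"} when the broker is healthy. Here is a minimal Java sketch; the host, port 15672, and guest/guest credentials are the defaults and are assumptions to replace with your own:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class RabbitAlivenessCheck {

    public static void main(String[] args) throws Exception {
        // Assumed defaults; replace with your broker's management host and credentials.
        String host = "localhost";
        String user = "guest";
        String pass = "guest";
        String vhost = "%2F"; // the default vhost "/" must be URL-encoded

        String auth = Base64.getEncoder()
                .encodeToString((user + ":" + pass).getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://" + host + ":15672/api/aliveness-test/" + vhost))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // 200 with {"status":"ok"} means the broker (and the vhost) is reachable and working.
        System.out.println("HTTP " + response.statusCode() + " -> " + response.body());
    }
}

The status of an individual queue can be checked the same way via GET /api/queues/{vhost}/{queue-name}.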