I am running PCF Dev v11.2.0 locally on my laptop. When I list the marketplace, it is empty:
$ cf marketplace
Getting services from marketplace in org cfdev-org / space cfdev-space as admin...
OK
No service offerings found
I understand that I need a service broker for this, but I am not sure where to get one. Also, once I install/deploy the service broker, how can I create a RabbitMQ service from it?
I explored the BOSH route too, and all I could find was the multitenant broker for RabbitMQ.
I was able to create a user-provided service (CUPS) for the local RabbitMQ running on my laptop, but I would like to get the standard (i.e. non-CUPS) RabbitMQ service working.
Since this is just for local development, a single-node RabbitMQ would be fine.
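For reference, the CUPS I created was along these lines (the instance name and URI here are placeholders, not my actual values):
$ cf create-user-provided-service local-rabbit -p '{"uri":"amqp://guest:guest@<laptop-ip>:5672"}'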
Please advise/suggest some options if you have worked this out already.
TIA.
I am a maintainer of CF Dev. Installing additional services is covered in the FAQ:
The only service available is mysql. How do I get access to pivotal apps manager, rabbitmq, redis, spring-cloud-services?
A separate asset is needed. You can download the correct asset for your platform at https://network.pivotal.io/products/pcfdev. Then start CF Dev with the downloaded asset specified via the -f flag, like so: cf dev start -f ./pcfdev-v*.tgz.
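In full, the sequence looks something like this, assuming the asset was downloaded to the current directory:
$ cf dev stop                      # stop any running instance first
$ cf dev start -f ./pcfdev-v*.tgz  # start with the downloaded asset
$ cf marketplace                   # the additional services should now appear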
I'm looking for advice on how to manually (i.e. without using Runtime Manager, RM) deploy a Mule application package on an on-premises Mule cluster. The official documentation suggests using RM for this, either via the GUI, CLI, or API. However, RM is not available in our environment.
I can manually deploy the package on a single node by copying it to the /apps folder, but this way the application is only deployed on that one node, not on the cluster.
I've tried using the AMC agent REST API for this, with the same result: it only deploys on a single node.
So, what's the correct way of manually deploying a mule application on the Mule servers cluster without using Anypoint RM?
We are on Mule 4.4 EE.
Copy the application jar file into the apps directory of every node. Mule clusters do not transfer applications between nodes.
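A minimal sketch of that, assuming SSH access to the nodes (the node names, jar name, and Mule home are all assumptions):
# Copy the same application jar into the apps/ directory of every node
for node in mule-node-1 mule-node-2 mule-node-3; do
  scp my-app-1.0.0-mule-application.jar "$node:/opt/mule/apps/"  # Mule picks up jars dropped into apps/
done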
Alternatively, you can use the Runtime Manager Agent, but it also works on a per-node basis: you need to send the same deployment request to each node.
Each connector may or may not be cluster-aware; read each connector's documentation to understand how it behaves. In particular, the documentation of the VM Connector states:
When running in cluster mode, persistent queues are instead backed by the memory grid. This means that when a Mule flow uses VM Connector to publish content to a queue, Mule runtime engine (Mule) decides whether to process that message in the same origin node or to send it out to the cluster to be picked up and processed by another node.
You can register the nodes through the AMC agent on the CloudHub control plane, create a server group, and deploy through Runtime Manager in the control plane; it handles deploying the same application to all n nodes.
I've run into an issue after installing RabbitMQ. It was all set up and configured with the web apps on the machine and communicating with the local applications, but the machine had to be moved to a different tranche of machines and was renamed as a result. Now RabbitMQ can no longer serve or handle communications as intended, because its config points to rabbit@PREVIOUS_MACHINE instead of rabbit@CURRENT_MACHINE.
To complicate this, some RabbitMQ configuration (users created on the system) was fed into the local apps, encrypted into each app's database, and is used for communicating with all the local apps. The issue is that if I drop and recreate RabbitMQ and make a new user, it won't align with what the other internal apps are using, and I believe they are not configurable post-install, so a reinstall of everything is the potential impact.
The question is: is it possible to re-configure or update the current RabbitMQ installation files to point at the local machine name instead of the previous machine name, and would that even work? The RabbitMQ docs don't quite deal with this specific scenario, unfortunately, from what I've read through.
So I want to confirm that RabbitMQ is an absolute marvel of a black-magic box.
Anyway, I followed the steps from this answer, minus the first two:
How to change RabbitMQ node name without changing my hostname
That is pretty much the inverse of my problem, but for those in the future who have this issue:
I had RabbitMQ installed and running on another machine whose name was then changed. The solution was to uninstall the service, delete the database, and reinstall the service. Somehow RabbitMQ keeps the knowledge of all the queues that were in the database, and when you reinstall the service it brings all the queues back as well. The only issue I had after that was remembering my username and password, which were not the default setup; once I did, that solved my issue. I still have no idea how RabbitMQ manages to remember the previous configs despite the local database being deleted. Crazy cool, and I'm very grateful to whoever built that into the tool.
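For future readers, the sequence on Windows was roughly the following (the paths are assumptions; the db directory name includes the old node name):
rabbitmq-service.bat remove    # uninstall the Windows service
# delete the node database, e.g. %APPDATA%\RabbitMQ\db\rabbit@OLD_MACHINE-mnesia
rabbitmq-service.bat install   # reinstall under the new machine name
rabbitmq-service.bat start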
We recently started using Prometheus in our production environment. Before, we only had 30-40 nodes per service and those servers did not change very often, so we just listed them in prometheus.yml. Now the list has become too long to keep in one file and it changes much more frequently than before. My question is: should I use file_sd_config to move those server lists out of the yml file and change those config files separately, or use Consul for service discovery (which seems much easier for handling changes)?
I have installed a 3-node Consul cluster in the data center, and as far as I can see, if I switch to Consul to solve this problem I also need to install the Consul client on each server (node) and define its service info. Is that correct? Or does anyone have better advice?
Thanks
I totally advocate the use of a service discovery system. It may be a bit hard to deploy at first, but it will surely be worth it in the future.
That said, Prometheus comes with a lot of service discovery integrations, so it's possible that you don't need a Consul cluster. If your servers are in a cloud provider like AWS, GCP, Azure, OpenStack, etc., Prometheus is able to auto-discover the instances.
If you keep running with Consul, the answer is yes: the agent must be running on every node. You can also register services and nodes via the API, but it's easier to deploy the agent.
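Either way, the scrape configuration change is small. A sketch of both options (the paths, addresses, job names, and service names here are assumptions):
# prometheus.yml
scrape_configs:
  - job_name: 'node-file-sd'
    file_sd_configs:
      - files:
          - /etc/prometheus/targets/*.yml   # edit these files; picked up without restarting Prometheus
  - job_name: 'node-consul-sd'
    consul_sd_configs:
      - server: 'localhost:8500'            # the local Consul agent
        services: ['node-exporter']         # scrape only this registered service

# /etc/prometheus/targets/prod.yml (a file_sd target list)
- targets: ['10.0.0.1:9100', '10.0.0.2:9100']
  labels:
    env: prod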
I have been tasked with integrating ActiveMQ with WebLogic (v10.3.6.0).
I have downloaded ActiveMQ v5.10.0, installed it on the server, and browsed to localhost:8161/admin to confirm that ActiveMQ is running.
I'm not sure how to progress from here in order to complete my goal. This link:
http://activemq.apache.org/weblogic-integration.html
... suggests that there are two approaches to deploying ActiveMQ on WebLogic: either deploying a broker as an application or using a J2EE Connector. I'm investigating the latter approach, as I have now installed ActiveMQ on the server (which means I already have a running broker, I assume), but can't find much useful information on the net about how to do this.
This page:
http://activemq.apache.org/resource-adapter.html
... suggests that it can be done via a JCA Resource Adapter but again does not give any details on how to do it.
If anyone has any advice or guidance, I'd appreciate it.
Thanks in advance.
Did you try this: http://activemq.apache.org/how-to-deploy-activemq-ra-versionrar-to-weblogic.html?
You will have to grab the resource adapter from Maven.
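For example, something like this should fetch it (the coordinates are my assumption of the right artifact; match the version to your broker):
$ mvn dependency:get -Dartifact=org.apache.activemq:activemq-rar:5.10.0:rar
# or download it directly from Maven Central:
$ curl -O https://repo1.maven.org/maven2/org/apache/activemq/activemq-rar/5.10.0/activemq-rar-5.10.0.rar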
Note that your local installation won't help you much except for testing etc. You should deploy AMQ inside WebLogic if you want it to serve as the JMS layer of WebLogic; otherwise a totally standalone installation is fine. But then you're done, and I suspect you want the deployed version nonetheless.
I have deployed a multi-node application to Cloud Foundry, all connected via a shared RabbitMQ service. The application consists of:
A Grails app.
3 standalone spring-integration-amqp Java apps.
All are communicating with Rabbit via spring-integration-amqp, using cloud:rabbit-connection-factory.
All of the applications have the same RabbitMQ service bound.
All of the applications start correctly and seem to connect to Rabbit OK.
The behaviour I am seeing is that the Grails app times out while waiting for a response from one of the standalone apps. This matches what I see locally when I start only the Grails app and not the message consumers.
What I am struggling with is how to debug where the problem is.
I can't see any errors in the logs.
It doesn't seem possible to tunnel to the RabbitMQ service in order to query the state of the queues, etc.
Any ideas?
Are you pushing to cloudfoundry.com or Micro Cloud Foundry?
To answer your questions:
Have you tried using "vmc files"? For Java web applications Cloud Foundry uses Tomcat as the app server, and you can use that command to navigate to tomcat/logs to have a look. Maybe some stdout was redirected there.
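For example (the app name here is hypothetical):
$ vmc files mygrailsapp tomcat/logs/      # list the Tomcat log files
$ vmc files mygrailsapp logs/stderr.log   # app stderr, if anything was written there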
Do you have Caldecott installed? If you have not read the doc on it, here it is: http://docs.cloudfoundry.com/tools/vmc/caldecott.html
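Once Caldecott is installed, you should be able to tunnel to the bound service and inspect the queues from your machine (the service name here is hypothetical):
$ vmc tunnel rabbitmq-instance   # opens a local tunnel to the RabbitMQ service
# the tunnel prints a local port and credentials; point a local AMQP client at it to check queue state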