Mule CIFS transport failing in CloudHub - mule

I have a use case as below.
I created a sample Mule flow that uses the SMB connector as the inbound endpoint to read files from a shared location on a specific machine on the local network, and it works fine.
Now I want to deploy this application to CloudHub and read files from the same shared location.
Can someone please guide me on the steps that need to be taken care of?
Is this achievable using a VPC?

The original answer was provided by David Dossot in the comments; however, the link is outdated.
To summarize, to connect from a CloudHub application to an on-premises or private service you need to establish some kind of network connectivity. For that, CloudHub requires a VPC and a connectivity link. The process for the latter is described in the documentation page To Request VPC Connectivity to Your Network.

Related

Cloud Foundry on Google Cloud Platform

My application has a Jackrabbit Oak implementation, so it uses the direct binary upload feature through S3DataStore to store files on AWS S3. For the AWS S3 integration, we had created a service broker instance on Cloud Foundry, which was running on our on-premises servers.
Now I have moved Cloud Foundry to Google Cloud Platform, but when I search for the AWS S3 service broker using the cf marketplace command, I cannot see the aws-s3 service broker.
How can I get the aws-s3 service broker, and if that is not possible, is there any other way to integrate AWS S3 storage with an application deployed on Cloud Foundry on GCP in this scenario?
It's hard to know what you had deployed on your platform as we don't have any context of what was installed there. Just a guess, but it sounds like you had the Tanzu AWS Service Broker installed. It has service offerings for aws-s3.
https://docs.pivotal.io/aws-services/creating.html#view
You can still install the Tanzu AWS Service Broker when running Tanzu Application Service on top of GCP; you just need an AWS account where the broker will create your service instances. The broker creates AWS resources on behalf of the users under a given AWS account, so as long as you still have an AWS account you can make it work.
That said, there's also a GCP broker that functions in the same way, so if you are trying to move off AWS to GCP entirely you could look at using the GCP broker instead. GCP has a similar cloud storage offering.
https://docs.pivotal.io/partners/gcp-sb/index.html
Once you install either broker, you'll see the service plan offerings in your marketplace. If you're still not seeing them, check cf service-access as an admin user. You may need to enable access to those services with cf enable-service-access.
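Roughly, the admin-side flow looks like this once a broker is installed (the plan name and service instance/app names below are placeholders, not something your marketplace is guaranteed to have):

    cf service-access                           # list services/plans and who can see them
    cf enable-service-access aws-s3             # make the aws-s3 offering visible in the marketplace
    cf create-service aws-s3 standard my-s3     # create a bucket-backed service instance (plan name is an assumption)
    cf bind-service my-app my-s3                # expose the credentials to your app via VCAP_SERVICES
    cf restage my-app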
Go back to the team that moved you between CF/AWS and CF/GCP and tell them you need your S3 broker back :)

Gridgain console load balance

I have a three-node GridGain cluster and am also running the GridGain Web Console agent and Web Console on all three nodes. It is all hosted on Windows Server.
I would like to load balance my Web Console. The problem is that I don't know how to share the user registration database, which it stores in a work directory. Can I use an external database to store all that information so that my cluster uses the same database?
There is a problem with the Web Console Agent as well. How do I share the tokens stored in default.properties?
There is no definitive guide on how to set up the Web Console in a cluster for high availability.
Can someone please guide me on how I can form a cluster for the Web Console, sharing its user store and tokens?
Thanks
If you are looking for multi-cluster support, take a look at documentation:
https://www.gridgain.com/docs/web-console/latest/multi-cluster-support
If you are looking for agent fault tolerance: just start several agents. The first agent will process all messages; the others will be in hot-standby mode.
If you are looking for connection fault tolerance between the agent and the cluster (if the cluster node that the agent uses as its connection point fails, the Web Console will lose its connection to the cluster), just specify several node addresses as a comma-separated list for the "node-uri" parameter (in default.properties or as a command-line argument).
For example:
node-uri=http://192.168.0.1:8080,http://192.168.0.2:8080,http://192.168.0.3:8080
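A fuller default.properties sketch might look like this (the token and host values are placeholders; key names follow the agent's default.properties):

    tokens=<security token from your Web Console profile page>
    server-uri=http://webconsole.internal:3000
    node-uri=http://192.168.0.1:8080,http://192.168.0.2:8080,http://192.168.0.3:8080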
Hope this helps.

Prometheus target management

We recently started using Prometheus in our production environment. Before, we only had 30-40 nodes per service and those servers did not change very often, so we just listed them in prometheus.yml, but now the list has become too long to keep in one file and it changes much more frequently than before. So my question is: should I use file_sd_config to move those server lists out of the main YAML file and change those config files separately, or use Consul for service discovery (which seems much easier for handling changes)?
I have installed a 3-node Consul cluster in the data center, and as far as I can see, if I switch to Consul to solve this problem, I also need to install the Consul client on each server (node) and define its service info. Is that correct? Or does anyone have better advice?
Thanks
I totally advocate the use of a service discovery system. It may be a bit hard to deploy at first, but it will surely be worth it in the future.
That said, Prometheus comes with a lot of service discovery integrations, so it's possible that you don't need a Consul cluster. If your servers are in a cloud provider like AWS, GCP, Azure, OpenStack, etc., Prometheus is able to autodiscover the instances.
If you keep running with Consul, the answer is yes, the agent must be running on every node. You can also register services and nodes via the API, but it's easier to deploy the agent.
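If you do go with Consul, the scrape configuration could look roughly like this (the Consul address and relabeling rule are just an illustration):

    scrape_configs:
      - job_name: 'consul-services'
        consul_sd_configs:
          - server: 'consul.internal:8500'
        relabel_configs:
          # use the Consul service name as the Prometheus job label
          - source_labels: [__meta_consul_service]
            target_label: job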

Legacy application to communicate with cloud foundry using RabbitMQ

I am new to cloud foundry and investigating possible ways for our legacy Java EE application to communicate asynchronously with an application running on cloud foundry.
We are doing a lot of asynchronous work already and are publishing events to Active MQ.
I know that Cloud Foundry has the possibility to bind with RabbitMQ, and my question is whether it is possible for an application running on Cloud Foundry to connect to (listen on) an existing RabbitMQ that lives outside the CF platform.
Any ideas on other alternatives to achieve this?
Yes, that is possible. You can use a user provided service.
That allows you to inject into your app the environment variables that are needed to connect to RabbitMQ (such as host, port, vhost, username, password).
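For example, creating such a service for an external RabbitMQ could look like this (host, credentials and names are placeholders):

    cf create-user-provided-service legacy-rabbitmq \
        -p '{"host":"rabbitmq.mycompany.internal","port":5672,"vhost":"/","username":"cf-app","password":"secret"}'
    cf bind-service my-app legacy-rabbitmq
    cf restage my-app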
Once you create that service, you can bind it to your app. Inside your app code, you can then read the environment variables in exactly the same way as you would if you had used a RabbitMQ service provided by Cloud Foundry.
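A minimal Java sketch of the consuming side, assuming the RabbitMQ Java client (amqp-client 5.x) and org.json for parsing VCAP_SERVICES (the queue name and credential keys match whatever you put into the user-provided service, so treat them as placeholders):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import org.json.JSONObject;

    public class LegacyRabbitListener {
        public static void main(String[] args) throws Exception {
            // Bound user-provided services appear under "user-provided" in VCAP_SERVICES
            JSONObject vcap = new JSONObject(System.getenv("VCAP_SERVICES"));
            JSONObject creds = vcap.getJSONArray("user-provided")
                    .getJSONObject(0)
                    .getJSONObject("credentials");

            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost(creds.getString("host"));
            factory.setPort(creds.getInt("port"));
            factory.setVirtualHost(creds.getString("vhost"));
            factory.setUsername(creds.getString("username"));
            factory.setPassword(creds.getString("password"));

            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();

            // Listen on an existing queue that the legacy application publishes to
            channel.basicConsume("legacy-events", true,
                    (consumerTag, delivery) -> System.out.println(new String(delivery.getBody())),
                    consumerTag -> { });
        }
    }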

How to: can I test an application using both worker role AND VM role in Azure emulator?

I've looked but can't see an answer to this one:
I have an application that passes Azure queue messages between a VM role and a worker role. Before I load this into Azure I'd like to test that both work correctly by using the Azure emulator.
Does anyone know if the Azure emulator will accept messages that originate from the VM role and will it allow me to send messages to the VM? Is there a workaround or solution to this?
Both the emulator and the VM will be running on the same host server in my case.
The queues are accessed as HTTP endpoints, so you need to ensure that both components you want to test can access the queue.
If you want to test your application using the storage emulator (an HTTP endpoint provisioned on your local machine; the queue endpoint is normally http://127.0.0.1:10001/), then you will need to ensure that the VM role can reach that address.
I would recommend testing with the real storage service. There are differences between the emulator and the actual service, so it's better to test against the real deal (you can always create a test queue).
In this case the endpoint will be on the internet (i.e. http://myaccount.queue.core.windows.net/).
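To illustrate, with the (legacy) Azure Storage SDK for Java, switching between the emulator and the real service is just a matter of the connection string; the account name/key and queue name below are placeholders, and the same idea applies to the .NET client:

    import com.microsoft.azure.storage.CloudStorageAccount;
    import com.microsoft.azure.storage.queue.CloudQueue;
    import com.microsoft.azure.storage.queue.CloudQueueClient;
    import com.microsoft.azure.storage.queue.CloudQueueMessage;

    public class QueueSmokeTest {
        public static void main(String[] args) throws Exception {
            // Emulator: resolves to the local storage emulator endpoints (queue on 127.0.0.1:10001)
            String connectionString = "UseDevelopmentStorage=true";
            // Real service: use your storage account instead
            // String connectionString = "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<key>";

            CloudStorageAccount account = CloudStorageAccount.parse(connectionString);
            CloudQueueClient queueClient = account.createCloudQueueClient();
            CloudQueue queue = queueClient.getQueueReference("test-queue");
            queue.createIfNotExists();

            // One role adds a message...
            queue.addMessage(new CloudQueueMessage("hello from the worker role"));

            // ...and the other role retrieves and deletes it
            CloudQueueMessage received = queue.retrieveMessage();
            if (received != null) {
                System.out.println(received.getMessageContentAsString());
                queue.deleteMessage(received);
            }
        }
    }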