I have a requirement where I need to read messages from one RabbitMQ broker and publish them to another.
I tried configuring both hosts, but when I publish, the message only goes to the first configured RabbitMQ broker.
I got MultiBus running, but now I am stuck on the health check.
Both buses are reporting their status under IBus:
{
  "status": "Unhealthy",
  "results": {
    "IBus": {
      "status": "Healthy",
      "description": "Ready",
      "data": {
        "Endpoints": {
          "rabbitmq://localhost:5672/XXXXX_Sxxxxxxx_bus_39pyyy81rrcpzwhibdcedd8sno?temporary=true": {
            "Message": "ready (not started)"
          },
          "rabbitmq://localhost:5673/XXXXX_Sxxxxxxx_bus_39pyyy81rrcpzwhibdcedd8sno?temporary=true": {
            "Message": "ready (not started)"
          },
          "rabbitmq://localhost:5673/xxxxxConsumer": {
            "Message": "ready"
          },
          "rabbitmq://localhost:5672/xxxxxx--xxxxxConsumer": {
            "Message": "ready"
          }
        }
      }
    },
    "IRabbitMqBusB": {
      "status": "Unhealthy",
      "description": "Not ready: not started",
      "data": {
        "Endpoints": {}
      }
    }
  }
}
This is called MultiBus and it is thoroughly described in the documentation.
In addition, MassTransit doesn't simply "publish to the first configured bus". That only happens when you use a DI container and publish through the resolved IPublishEndpointProvider: because it is registered as a singleton with TryAdd, only the first bus's registration wins and the second instance never gets registered under that interface.
If you don't go through the DI container and use a bus instance directly, you can do whatever you want. The MultiBus feature allows you to register multiple bus instances in the container, each behind its own interface (such as your IRabbitMqBusB), and resolve the specific bus you want to publish on.
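For reference, in a working MultiBus setup each bus instance reports its own receive endpoints under its own health-check entry. Once the second bus is actually started, you would expect output roughly like the sketch below (endpoint addresses reused from your output purely for illustration, and split across the two buses only to show the shape):
{
  "status": "Healthy",
  "results": {
    "IBus": {
      "status": "Healthy",
      "description": "Ready",
      "data": {
        "Endpoints": {
          "rabbitmq://localhost:5672/xxxxxx--xxxxxConsumer": { "Message": "ready" }
        }
      }
    },
    "IRabbitMqBusB": {
      "status": "Healthy",
      "description": "Ready",
      "data": {
        "Endpoints": {
          "rabbitmq://localhost:5673/xxxxxConsumer": { "Message": "ready" }
        }
      }
    }
  }
}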
I am using a kickstart.json file to set up FusionAuth in developer environments. Everything is automated, except that I still need to manually go and get the client secret from the FusionAuth instance.
Is there any way I can predefine the client secret in the kickstart file so I can pre-configure it in my app?
You should absolutely be able to set the client secret from kickstart.json; any API call should work from within Kickstart.
https://fusionauth.io/docs/v1/tech/apis/applications#create-an-application indicates you can POST an application, including the client secret.
So a kickstart file like this should work:
{
  "variables": {
    "defaultTenantId": "30663132-6464-6665-3032-326466613934"
  },
  "apiKeys": [
    {
      "key": "mykey",
      "description": "API key"
    }
  ],
  "requests": [
    {
      "method": "POST",
      "url": "/api/application/85a03867-dccf-4882-adde-1a79aeec50df",
      "body": {
        "application": {
          "name": "Pied Piper",
          "roles": [
            {
              "name": "dev"
            },
            {
              "name": "ceo"
            },
            {
              "name": "intern"
            }
          ],
          "oauthConfiguration": {
            "clientSecret": "shhh-your-desired-secret"
          }
        }
      }
    }
  ]
}
I haven't tested that, but don't see any reason why it would not work. (Note that 1.37, the most recent version, has an issue with kickstart as documented here: https://github.com/FusionAuth/fusionauth-issues/issues/1816 but that should be fixed soon.)
If this doesn't work for you, please share the error message and a scrubbed kickstart file.
I'm trying to use the Chaos Toolkit Istio extension, and my problem is as follows:
I have an experiment.json file which contains a single probe to retrieve a virtual service. The file looks similar to the following:
{
  "version": "1.0.0",
  "title": "test",
  "description": "N/A",
  "tags": [],
  "secrets": {
    "istio": {
      "KUBERNETES_CONTEXT": {
        "type": "env",
        "key": "KUBERNETES_CONTEXT"
      }
    }
  },
  "method": [
    {
      "type": "probe",
      "name": "get_virtual_service",
      "provider": {
        "type": "python",
        "module": "chaosistio.fault.probes",
        "func": "get_virtual_service",
        "arguments": {
          "virtual_service_name": "test",
          "ns": "test-ns"
        }
      }
    }
  ]
}
I have set KUBERNETES_CONTEXT and the HTTP/HTTPS proxies as environment variables. My authorisation is using $HOME/.kube/config.
When running the experiment, it validates the file fine and tries to perform the probe, but it becomes stuck and just hangs until it times out.
The error I see in the logs is an HTTPSConnectionPool error (failed to establish a new connection, operation timed out).
Am I missing any settings? All help appreciated.
I want to consume a Kafka topic via the ZeroCode framework. I can consume from my localhost Kafka server in a ZeroCode scenario, and I can also consume the topic from my actual remote Kafka server using kafka-consumer.bat on the command line, but I cannot consume the same topic from ZeroCode.
Do I need to add any special configuration?
{
  "name": "Consume Message From doob-ship-topic",
  "url": "kafka-topic:my-sample-topic",
  "operation": "consume",
  "request": {
    "consumerLocalConfigs": {
      "recordType": "JSON",
      "commitSync": false,
      "showRecordsConsumed": true,
      "maxNoOfRetryPollsOrTimeouts": 3
    }
  },
  "assertions": {
    "size": 1,
    "records": [
      {
        "value": {
          "key": "99930000000000260001"
        }
      }
    ]
  }
}
I tried different configurations, but I couldn't get it to work.
I solved it:
"consumerLocalConfigs": {
"recordType": "RAW",
"commitSync": true,
"showRecordsConsumed": true,
"maxNoOfRetryPollsOrTimeouts": 2,
"pollingTime": 1595 // I added this.
}
Adding pollingTime solved my problem.
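For completeness, a sketch of how the full consume step might look with that change applied. The consumerLocalConfigs values are the ones from the solution above; the topic name and assertions are copied unchanged from the original step (with recordType now RAW, the asserted value shape may also need adjusting):
{
  "name": "Consume Message From doob-ship-topic",
  "url": "kafka-topic:my-sample-topic",
  "operation": "consume",
  "request": {
    "consumerLocalConfigs": {
      "recordType": "RAW",
      "commitSync": true,
      "showRecordsConsumed": true,
      "maxNoOfRetryPollsOrTimeouts": 2,
      "pollingTime": 1595
    }
  },
  "assertions": {
    "size": 1,
    "records": [
      {
        "value": {
          "key": "99930000000000260001"
        }
      }
    ]
  }
}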
I have a RabbitMQ queue that is declared with the following options:
{
  "queue": "events/online",
  "durable": true,
  "args": {
    "x-max-priority": 10
  }
}
I am trying to connect to the queue from Node-RED, using the node-red-contrib-amqp plugin, with the following topology set under the connection source:
{
  "queues": [
    {
      "name": "events/online",
      "durable": true,
      "options": {
        "x-max-priority": 10
      }
    }
  ]
}
I am getting the following error:
"AMQP input node disconnect error: Operation failed: QueueDeclare; 406
(PRECONDITION-FAILED) with message "PRECONDITION_FAILED - inequivalent
arg 'x-max-priority' for queue 'myqueue' in vhost 'vhost': received
none but current is the value '10' of type 'signedint'""
It turns out the answer is as follows.
Make sure the checkbox "use AMQP topology definition (JSON) defined below" is selected, and declare the priority as maxPriority rather than x-max-priority. The plugin's queue options appear to follow amqplib's assertQueue() option names, where maxPriority is translated into the x-max-priority queue argument, so the topology below ends up matching the queue that was declared with "x-max-priority": 10.
{
  "queues": [
    {
      "name": "events/online",
      "durable": true,
      "options": {
        "maxPriority": 10
      }
    }
  ]
}
I need a mini-HDFS service that can run on a single agent, so I started building one in a Docker container. I then deployed it to DCOS. The NameNode UI comes up, but un-styled. It turns out that the references inside the UI were not prefixed.
My service is at http://m1.dcos/service/small-hdfs/dfshealth.html
The browser generates requests such as http://m1.dcos/static/bootstrap-3.0.2/css/bootstrap.min.css
Instead of http://m1.dcos/service/small-hdfs/static/bootstrap-3.0.2/css/bootstrap.min.css
This is my marathon.json - very basic for now - I'll expose the volumes after I get it basically working ...
How do I fix this? If I could pass the prefix into the container, I might be able to configure a Hadoop property with it, but I'm not sure that is possible, and I did not see any documented way of passing this prefix (see the sketch after the app definition below for what I have in mind).
{
  "id": "small-hdfs",
  "cmd": "/root/docker_entrypoint.sh",
  "cpus": 1.5,
  "mem": 4096.0,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "slowenthal/small-hdfs",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 9000, "hostPort": 0, "protocol": "tcp" },
        { "containerPort": 50070, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  },
  "labels": {
    "DCOS_SERVICE_NAME": "small-hdfs",
    "DCOS_SERVICE_PORT_INDEX": "1",
    "DCOS_SERVICE_SCHEME": "http"
  }
}
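If passing the prefix in is the right approach, I assume I could hand it to the container as an environment variable in the Marathon app definition, something like the fragment below (the variable name UI_PATH_PREFIX is just a placeholder I made up, and I still don't know which Hadoop property it would need to feed):
{
  "id": "small-hdfs",
  "env": {
    "UI_PATH_PREFIX": "/service/small-hdfs"
  }
}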