I have a RabbitMQ queue that is declared with the following options:
{
"queue": "events/online",
"durable": true,
"args": {
"x-max-priority": 10
}
}
I am trying to connect to the queue from Node-RED, using the node-red-contrib-amqp plugin, with the following topology set under the connection source:
{
"queues": [
{
"name": "events/online",
"durable": true,
"options": {
"x-max-priority": 10
}
}
]
}
I am getting the following error:
"AMQP input node disconnect error: Operation failed: QueueDeclare; 406
(PRECONDITION-FAILED) with message "PRECONDITION_FAILED - inequivalent
arg 'x-max-priority' for queue 'myqueue' in vhost 'vhost': received
none but current is the value '10' of type 'signedint'""
It turns out the answer is as follows.
Make sure the checkbox "use AMQP topology definition (JSON) defined below" is selected, and use maxPriority instead of x-max-priority in the queue options:
{
"queues": [
{
"name": "events/online",
"durable": true,
"options": {
"maxPriority": 10
}
}
]
}
I'm learning OpenTelemetry and I wonder how dotnet-monitor is connected with OpenTelemetry (Meter). Are those things somehow connected, or is dotnet-monitor just a custom MS tool that does not use the OpenTelemetry standards (API, SDK and exporters)?
If you run dotnet-monitor on your machine it exposes the dotnet metrics in Prometheus format, which means you can set up the OpenTelemetry Collector to scrape those metrics.
For example, in the opentelemetry-collector-contrib configuration:
receivers:
  prometheus_exec:
    exec: dotnet monitor collect
    port: 52325
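For completeness, a minimal sketch of a full collector configuration that wires this receiver into a metrics pipeline could look like the following. The logging exporter here is just an assumption for local testing; swap in whichever exporter you actually use:
receivers:
  prometheus_exec:
    exec: dotnet monitor collect
    port: 52325

exporters:
  # prints the scraped metrics to the collector's log, useful to verify scraping works
  logging:

service:
  pipelines:
    metrics:
      receivers: [prometheus_exec]
      exporters: [logging]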
Please note that for dotnet-monitor to run you need to create a settings.json in this path:
$XDG_CONFIG_HOME/dotnet-monitor/settings.json
If $XDG_CONFIG_HOME is not defined, create the file in this path:
$HOME/.config/dotnet-monitor/settings.json
If you want to identify the process by its PID, write this into settings.json (change Value to your PID):
{
"DefaultProcess": {
"Filters": [{
"Key": "ProcessId",
"Value": "1"
}]
}
}
If you want to identify the process by its name, write this into settings.json (change Value to your process name):
{
"DefaultProcess": {
"Filters": [{
"Key": "ProcessName",
"Value": "iisexpress"
}]
}
}
In my example I used this configuration:
{
"DefaultProcess": {
"Filters": [{
"Key": "ProcessId",
"Value": "1"
}]
},
"Metrics": {
"Providers": [
{
"ProviderName": "System.Net.Http"
},
{
"ProviderName": "Microsoft-AspNetCore-Server-Kestrel"
}
]
}
}
I have a requirement where I need to read messages from one RabbitMQ and publish them to another.
I tried configuring both hosts, but when I publish, it publishes only to the first configured RabbitMQ.
I got the MultiBus running, but now I am stuck with the health check.
Both buses are returning their status under IBus:
{
"status": "Unhealthy",
"results": {
"IBus": {
"status": "Healthy",
"description": "Ready",
"data": {
"Endpoints": {
"rabbitmq://localhost:5672/XXXXX_Sxxxxxxx_bus_39pyyy81rrcpzwhibdcedd8sno?temporary=true": {
"Message": "ready (not started)"
},
"rabbitmq://localhost:5673/XXXXX_Sxxxxxxx_bus_39pyyy81rrcpzwhibdcedd8sno?temporary=true": {
"Message": "ready (not started)"
},
"rabbitmq://localhost:5673/xxxxxConsumer": {
"Message": "ready"
},
"rabbitmq://localhost:5672/xxxxxx--xxxxxConsumer": {
"Message": "ready"
}
}
}
},
"IRabbitMqBusB": {
"status": "Unhealthy",
"description": "Not ready: not started",
"data": {
"Endpoints": {}
}
}
}
}
It's called MultiBus and is thoroughly described in the documentation.
In addition, it doesn't "publish to the first configured bus". It only does that if you use a DI container and use the resolved IPublishEndpointProvider. Since it's registered as a singleton with Try, you won't get the second instance registered.
If you don't use the DI container and use the bus instance, you can do whatever you want. The MultiBus feature allows you to use multiple bus instances registered in the container.
I want to consume a Kafka topic via the ZeroCode framework. I can consume from my localhost Kafka server in a ZeroCode scenario. I can also consume the topic from my actual remote Kafka server using kafka-consumer.bat on the command line, but I cannot consume the same topic from ZeroCode.
Do I need to add any special configuration?
{
"name": "Consume Message From doob-ship-topic",
"url": "kafka-topic:my-sample-topic",
"operation": "consume",
"request": {
"consumerLocalConfigs": {
"recordType": "JSON",
"commitSync": false,
"showRecordsConsumed": true,
"maxNoOfRetryPollsOrTimeouts": 3
}
},
"assertions": {
"size": 1,
"records": [
{
"value": {
"key": "99930000000000260001"
}
}
]
}
}
I tried different configurations but I couldn't get it to work.
I solved it.
"consumerLocalConfigs": {
"recordType": "RAW",
"commitSync": true,
"showRecordsConsumed": true,
"maxNoOfRetryPollsOrTimeouts": 2,
"pollingTime": 1595 // I added this.
}
Adding pollingTime solved my problem.
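For context, here is how that updated consumerLocalConfigs block sits inside the scenario's request from the question above (just a sketch restating the scenario with the extra field; the values are from my setup):
"request": {
"consumerLocalConfigs": {
"recordType": "RAW",
"commitSync": true,
"showRecordsConsumed": true,
"maxNoOfRetryPollsOrTimeouts": 2,
"pollingTime": 1595
}
}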
I'm using Debezium SQL Server Connector to stream a table into a topic. Thanks to Debezium's ExtractNewRecordState SMT, I'm getting the following message in my topic.
{
"schema":{
"type":"struct",
"fields":[
{
"type":"int64",
"optional":false,
"field":"id"
},
{
"type":"string",
"optional":false,
"field":"customer_code"
},
{
"type":"string",
"optional":false,
"field":"topic_name"
},
{
"type":"string",
"optional":true,
"field":"payload_key"
},
{
"type":"boolean",
"optional":false,
"field":"is_ordered"
},
{
"type":"string",
"optional":true,
"field":"headers"
},
{
"type":"string",
"optional":false,
"field":"payload"
},
{
"type":"int64",
"optional":false,
"name":"io.debezium.time.Timestamp",
"version":1,
"field":"created_on"
}
],
"optional":false,
"name":"test_server.dbo.kafka_event.Value"
},
"payload":{
"id":129,
"customer_code":"DVTPRDFT411",
"topic_name":"DVTPRDFT411",
"payload_key":null,
"is_ordered":false,
"headers":"{\"kafka_timestamp\":1594566354199}",
"payload":"MSG 18",
"created_on":1594595154267
}
}
After adding value.converter.schemas.enable=false, I could get rid of the schema portion and only the payload part is left as shown below.
{
"id":130,
"customer_code":"DVTPRDFT411",
"topic_name":"DVTPRDFT411",
"payload_key":null,
"is_ordered":false,
"headers":"{\"kafka_timestamp\":1594566354199}",
"payload":"MSG 19",
"created_on":1594595154280
}
I'd like to go 1 step further and extract only the customer_code field. I tried ExtractField$Value SMT but I keep getting the exception IllegalArgumentException: Unknown field: customer_code.
My configuration is as follows:
transforms=unwrap,extract
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
transforms.unwrap.drop.tombstones=true
transforms.unwrap.delete.handling.mode=drop
transforms.extract.type=org.apache.kafka.connect.transforms.ExtractField$Key
transforms.extract.field=customer_code
I tried a bunch of other SMTs, including ExtractField$Key and ValueToKey, but I couldn't make it work. I'd be very grateful if you could show me what I've done wrong. According to this tutorial from Confluent, it should work, but it didn't.
** UPDATE **
I'm running Kafka Connect using connect-standalone worker.properties sqlserver.properties.
worker.properties
offset.storage.file.filename=C:/development/kafka_2.12-2.5.0/data/kafka/connect/connect.offsets
plugin.path=C:/development/kafka_2.12-2.5.0/plugins
bootstrap.servers=127.0.0.1:9092
offset.flush.interval.ms=10000
rest.port=10082
rest.host.name=127.0.0.1
rest.advertised.port=10082
rest.advertised.host.name=127.0.0.1
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
sqlserver.properties
name=sql-server-connector
connector.class=io.debezium.connector.sqlserver.SqlServerConnector
database.hostname=127.0.0.1
database.port=1433
database.user=sa
database.password=dummypassword
database.dbname=STGCTR
database.history.kafka.bootstrap.servers=127.0.0.1:9092
database.server.name=wfo
table.whitelist=dbo.kafka_event
database.history.kafka.topic=db_schema_history
transforms=unwrap,extract
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
transforms.unwrap.drop.tombstones=true
transforms.unwrap.delete.handling.mode=drop
transforms.extract.type=org.apache.kafka.connect.transforms.ExtractField$Value
transforms.extract.field=customer_code
The schema and payload fields sound like you're using data that was serialized with a JsonConverter with schemas enabled.
You can just set value.converter.schemas.enable=false to achieve your goal.
I am getting the following error when I try to use the push API to send a notification. The JSON object works in V7.1.
{
"code": "FPWSE0011E",
"message": "Bad Request - The JSON validation failed at 'target'.",
"productVersion": "8.0.0.00-20161122-1902"
}
Here is my JSON object
{
"message": {
"alert": "hello"
},
"settings": {
"apns": {
"badge": 1,
"iosActionKey": "Ok",
"payload": {
"messageType": "HELLO",
"detail": "Here's your message details."
},
"sound": "song.mp3"
},
"gcm": {
"payload": {},
"sound": "song.mp3"
}
},
"target": {
"consumerIds": [],
"deviceIds": ["4A1086CF-873A-4404-BE2D-200EA6BDA8AD"],
"platforms": [
"A","G"
]
}
}
I am using the admin REST API interface:
https://myserver/mfpadmin/management-apis/2.0/runtimes/mfp/notifications/applications/com.myjobs/messages
I followed the format from the documentation
http://www.ibm.com/support/knowledgecenter/SSHS8R_8.0.0/com.ibm.worklight.apiref.doc/apiref/r_restapi_send_message_post.html
Thanks for your help
According to the v8.0 documentation, only one property is allowed in target. In your JSON, I see several properties defined.
See example JSON here: https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/notifications/sending-notifications/#sending-notifications
And as can be seen:
target" : {
// The list below is for demonstration purposes only - per the documentation only 1 target is allowed to be used at a time.
"deviceIds" : [ "MyDeviceId1", ... ],
"platforms" : [ "A,G", ... ],
"tagNames" : [ "Gold", ... ],
"userIds" : [ "MyUserId", ... ],
},
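Applied to the request in the question, a trimmed payload using only deviceIds as the single target property could look like the sketch below (not verified against an MFP 8.0 server; the settings block from your original request can be added back unchanged, and you can keep whichever one target property fits your case):
{
"message": {
"alert": "hello"
},
"target": {
"deviceIds": ["4A1086CF-873A-4404-BE2D-200EA6BDA8AD"]
}
}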