I am trying to put two custom properties in a STOMP message header when publishing a topic message so that a subscriber can filter messages. Here are two frames that I send to ActiveMQ 5.14 to connect and publish:
CONNECT
login: myUserName
passcode: myPassword
Note: Actual string is CONNECT\nlogin: myUserName\npasscode: myPassword.
and
SEND
destination:/topic/myTopic
myTopicMessage
Note: Actual string is SEND\ndestination:/topic/myTopic\n\nmyTopicMessage.
How am I supposed to add the following two pairs of properties to the above strings?
package_code = ''
whse_code = 'MyWarehouse'
BTW, I am using Lua to implement this.
You can add the properties to your SEND frame with the same syntax used by destination, e.g.:
SEND
destination:/topic/myTopic
package_code:MyPackageCode
whse_code:MyWarehouse
myTopicMessage^#
If package_code (or any other header) is blank, simply don't set it.
A few other details are worth noting:
Be sure to follow the body of the message with the NULL octet as noted in the "STOMP Frames" section of the STOMP 1.2 spec. The example above uses ^# (i.e. control-# in ASCII) to represent the NULL octet.
SEND frames should include a content-length header and a content-type header if a body is present as noted in the "SEND" section of the STOMP 1.2 spec.
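Putting these details together, a small frame builder makes the header/body/NULL layout explicit. A minimal sketch in Python (the question uses Lua, where the same string concatenation applies with "\0" for the NULL octet; the header names come from the question):

```python
def build_send_frame(destination, body, headers=None):
    """Assemble a STOMP SEND frame: command line, header lines,
    a blank line, the body, then the NULL octet terminator."""
    lines = ["SEND", "destination:" + destination]
    for name, value in (headers or {}).items():
        if value:  # skip blank headers such as an empty package_code
            lines.append(name + ":" + value)
    encoded = body.encode("utf-8")
    lines.append("content-type:text/plain")
    lines.append("content-length:" + str(len(encoded)))
    return "\n".join(lines) + "\n\n" + body + "\x00"

frame = build_send_frame("/topic/myTopic", "myTopicMessage",
                         {"package_code": "", "whse_code": "MyWarehouse"})
```

Because package_code is blank here, only whse_code is emitted as a custom header.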
Troubleshooting:
You can enable STOMP protocol tracing with the following steps:
ActiveMQ 5.x: Set trace=true on the STOMP transportConnector, e.g.: <transportConnector name="stomp" uri="stomp://localhost:61613?trace=true"/>. Then set the org.apache.activemq.transport.stomp.StompIO logger to TRACE in conf/log4j.properties
ActiveMQ Artemis: Set the logger org.apache.activemq.artemis.core.protocol.stomp.StompConnection to DEBUG in etc/logging.properties.
Related
We are migrating from Mule 3 to Mule 4, and in one of our functionalities we need to publish messages to a topic; downstream, another Mule component consumes from a queue which is bridged to the topic. Nothing special here.
To ensure we are able to trace the flow via logs, we were sending a 'TrackingId' attribute while publishing messages to the topic (Mule 3):
message.setOutboundProperty("XYZ_TrackingID", flowVars['idFromUI']);
return payload;
However, when I try the same in Mule 4, I get the following exception:
ERROR 2020-12-20 10:09:12,214 [[MuleRuntime].cpuIntensive.14: [mycomponent].my_Flow.CPU_INTENSIVE
#66024695] org.mule.runtime.core.internal.exception.OnErrorPropagateHandler:
Message : groovy.lang.MissingMethodException: No signature of method:
org.mule.runtime.api.el.BindingContextUtils$MessageWrapper.setOutboundProperty() is applicable for
argument types: (java.lang.String, org.mule.weave.v2.el.ByteArrayBasedCursorStream) values:
[XYZ_TrackingID, "1234567"].\nError type : (set debug level logging or '-
Dmule.verbose.exceptions=true' for
everything)\n********************************************************************************
I checked the internet, and it seems that in Mule 4 setting outbound properties has been removed, as per here.
So how do I achieve the same in Mule 4?
Don't even try to do that, for several reasons. For one, the message structure is different, so outbound properties don't exist anymore and that method no longer exists. Also, in Mule 4, components like the Groovy component can only return a value and cannot change the event. They cannot decide what that value is going to be assigned to. You can set the target in the configuration (the payload or a variable), but you cannot change the attributes. Note that variables in Mule 4 are referenced by vars., not by flowVars. as in Mule 3 (i.e. vars.idFromUI).
There is a simpler way to set message properties in the Mule 4 JMS connector: use the properties element and pass it an object with the properties.
For example it could be something like this:
<jms:publish config-ref="JMS_config" destination="${bridgeDestination}" destinationType="TOPIC">
<jms:message>
<jms:body>#["bridged_" ++ payload]</jms:body>
<jms:properties>#[{
XYZ_TrackingID: vars.idFromUI
}]</jms:properties>
</jms:message>
</jms:publish>
It is in the documentation: https://docs.mulesoft.com/jms-connector/1.0/jms-publish#setting-user-properties. I adapted my example from there.
I am not sure if a Correlation ID serves the purpose of a tracking ID for your scenario, but you can pass a CID as below. It's in the Mule documentation:
https://docs.mulesoft.com/jms-connector/1.7/jms-publish
<jms:publish config-ref="JMS_config" sendCorrelationId="ALWAYS" destination="#[attributes.headers.replyTo.destination]">
<jms:message correlationId="#[attributes.headers.correlationId]"/>
</jms:publish>
If your priority is to customise the tracking ID you want to publish, then try passing the format below. The key names may differ per your use case.
<jms:publish config-ref="JMS_config" destination="${bridgeDestination}" destinationType="TOPIC">
<jms:message>
<jms:body>#["bridged_" ++ payload]</jms:body>
<jms:properties>#[{
AUTH_TYPE: 'jwt',
AUTH_TOKEN: attributes.queryParams.token
}]</jms:properties>
</jms:message>
</jms:publish>
In the above, the expression attributes.queryParams.token accesses a token query parameter (received earlier by the API through an HTTP Listener or Requester) and passes it to JMS as a property under the AUTH_TOKEN key.
By contrast, attributes.headers.correlationId is a header. Both queryParams and headers are part of attributes in Mule 4.
I am trying to consume JMS messages (IBM WebSphere MQ) using Apache Flume and store the data in HDFS. While reading the message, I am only able to see the body of the message and not the header content.
Is it possible to read a JMS message with its header properties using Apache Flume?
My configuration:
# Source definition
u.sources.s1.type=jms
u.sources.s1.initialContextFactory=ABC
u.sources.s1.connectionFactory=<my connection factory>
u.sources.s1.providerURL=ABC
u.sources.s1.destinationName=r1
u.sources.s1.destinationType=QUEUE
# Channel definition
u.channels.c1.type=file
u.channels.c1.capacity=10000000
u.channels.c1.checkpointDir=/checkpointdir
u.channels.c1.transactionCapacity=10000
u.channels.c1.dataDirs=/datadir
# Sink definition
u.sinks.r1.type=hdfs
u.sinks.r1.channel=c1
u.sinks.r1.hdfs.path=/message/%Y%m%d
u.sinks.r1.hdfs.filePrefix=event_
u.sinks.r1.hdfs.fileSuffix=.xml
u.sinks.r1.hdfs.fileType = DataStream
u.sinks.r1.hdfs.writeFormat=Text
u.sinks.r1.hdfs.useLocalTimeStamp=TRUE
There are quite a few types of JMS messages as in "Table 30–2 JMS Message Types" here.
The Flume DefaultJMSMessageConverter handles TextMessage as shown here; the relevant part is given below for your reference:
...
else if (message instanceof TextMessage) {
  TextMessage textMessage = (TextMessage) message;
  event.setBody(textMessage.getText().getBytes(charset));
}
...
TextMessage offers only the body of the message.
IMHO, you have two options:
If at all possible, send the header-name/header-value pairs in the body itself and use the DefaultJMSMessageConverter as is.
Build your own flume-jms-source.jar by writing a custom JMSMessageConverter: type-cast the message to javax.jms.Message, read the JMS headers, and set them on the SimpleEvent.
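For option 1, the sending side can serialize the would-be headers into the text body itself, so the DefaultJMSMessageConverter carries them through unchanged. A sketch of that envelope idea in Python (the JSON layout and field names are my own assumptions, not a Flume convention):

```python
import json

def wrap(headers, payload):
    # Pack JMS-style headers and the payload into one JSON text body.
    return json.dumps({"headers": headers, "payload": payload})

def unwrap(text):
    # Downstream (e.g. after the HDFS sink), split them back apart.
    doc = json.loads(text)
    return doc["headers"], doc["payload"]

body = wrap({"JMSCorrelationID": "abc-123"}, "<order id='42'/>")
headers, payload = unwrap(body)
```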
Hope this gives some direction.
I have configured Apache Flume to receive messages (JSON type) in HTTP source. My sinks are MongoDB and HBase.
How can I write messages, according to a specified field, to different collections and tables?
For example, assume we have T_1 and T_2, and an incoming message should be saved in T_1. How can I handle those messages and decide where they are saved?
Try using the Multiplexing Channel Selector. The default one (the Replicating Channel Selector) copies the Flume event produced by the source to all its configured channels. The multiplexing one, however, puts the event into a specific channel depending on the value of a header within the Flume event.
In order to create such a header according to your application logic, you will need to create a custom handler for the HTTPSource. This can easily be done by implementing the HTTPSourceHandler interface of the API.
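The handler's job reduces to: parse the JSON body, read the routing field, and attach its value as a Flume event header that the multiplexing selector can match on. Sketched in Python for illustration only (real handlers are written in Java against the HTTPSourceHandler interface; the field name "table" is a hypothetical routing key):

```python
import json

def to_event(raw_body):
    """Return (headers, body) for a Flume-style event, routing on the
    hypothetical 'table' field of the incoming JSON message."""
    msg = json.loads(raw_body)
    # The selector would map each header value (e.g. "T_1") to a channel.
    headers = {"table": msg.get("table", "default")}
    return headers, raw_body

headers, body = to_event('{"table": "T_1", "value": 7}')
```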
You can use a regex for tagging the message type, plus multiplexing for sending it to the right destination.
For example, based on the message text "TEST1":
Regex for a string / field:
agent.sources.s1.interceptors.i1.type=regex_extractor
agent.sources.s1.interceptors.i1.regex=(TEST1)
Assign the interceptor's capture group to serializer SE1, which stores it under the header name Test:
agent.sources.s1.interceptors.i1.serializers=SE1
agent.sources.s1.interceptors.i1.serializers.SE1.name=Test
Send to the required channel; the channels (c1, c2) can be mapped to different sinks:
agent.sources.s1.selector.type=multiplexing
agent.sources.s1.selector.header=Test
agent.sources.s1.selector.mapping.TEST1=c1
All events matching the regex (header Test with value TEST1) will go to channel c1; all others are defaulted to c2:
agent.sources.s1.selector.default=c2
It seems MQTTUtils only provides three methods:
def createStream(jssc: JavaStreamingContext, brokerUrl: String, topic: String, storageLevel: StorageLevel): JavaDStream[String]
Create an input stream that receives messages pushed by a MQTT publisher.
def createStream(jssc: JavaStreamingContext, brokerUrl: String, topic: String): JavaDStream[String]
Create an input stream that receives messages pushed by a MQTT publisher.
def createStream(ssc: StreamingContext, brokerUrl: String, topic: String, storageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK_SER_2): DStream[String]
Create an input stream that receives messages pushed by a MQTT publisher.
But how can I provide a username and password if the broker has authentication enabled?
You could try including the username and password in the URL:
mqtt://username:password@host:port
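This follows standard URI userinfo syntax, where username:password precedes the host and is separated from it by @. A quick Python check of how such a URL decomposes (the credentials and broker address are placeholders):

```python
from urllib.parse import urlsplit

url = "mqtt://myUser:myPass@broker.example.com:1883"
parts = urlsplit(url)
# urlsplit understands the userinfo section for any scheme,
# so the credentials can be pulled out of the broker URL.
print(parts.username, parts.password, parts.hostname, parts.port)
```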
Please see this MQTT Scala word count example.
In particular, for your case, run the publisher as:
bin/run-example org.apache.spark.examples.streaming.MQTTPublisher mqtt://username:password@host:port foo
And the subscriber as:
bin/run-example org.apache.spark.examples.streaming.MQTTWordCount mqtt://username:password@host:port foo
Before doing this, ensure that you have started the ActiveMQ broker.
example code
import java.net.URI
import org.apache.activemq.broker.{TransportConnector, BrokerService}
...
def startActiveMQMQTTBroker() {
broker = new BrokerService()
broker.setDataDirectoryFile(Utils.createTempDir())
connector = new TransportConnector()
connector.setName("mqtt")
connector.setUri(new URI("mqtt:" + brokerUri))
broker.addConnector(connector)
broker.start()
}
pom file
<dependency>
<groupId>org.apache.activemq</groupId>
<artifactId>activemq-core</artifactId>
<version>5.7.0</version>
</dependency>
You can try using the customized spark-streaming-mqtt-connector library available here: https://github.com/sathipal/spark-streaming-mqtt-with-security_2.10-1.3.0.
This library adds the following on top of the original library:
Added TLS v1.2 security such that the communication is always secured.
Stored topic along with the payload in the RDD.
So, use the following method to create the stream,
val lines = MQTTUtils.createStream(ssc, // Spark Streaming Context
    "ssl://URL",        // Broker URL
    "<topic>",          // MQTT topic
    "MQTT client-ID",   // Unique ID of the application
    "Username",
    "password")
There are overloaded constructors that allow you to pass the RDD storage level as well. Hope this helps.
I am using Restlet 2.0 (Java) to build a Passbook server. When I send a push notification to APNs with the push token, I get the message 'if-modified-since (null)' in the server log:
entity.getText() : {"logs":["[2013-03-31 00:18:29 +1100] Get pass task
(pass type pass.xxxxxx.freehug, serial number ABC, if-modified-since
(null); with web service url
http://192.168.1.43:8080/passbook/restlet) encountered error:
Server response was malformed (Missing response data)"]}
This responding URL matches the route defined for the LoggingResource class (Line 4), but not the SerialNumbersPassWithDeviceResource class (Line 2), which defines the passUpdatedSince={tag} parameter to be captured for the latest .pkpass comparison:
router.attach("/v1/devices/{deviceLibraryIdentifier}/registrations/{passTypeIdentifier}/{serialNumber}", DeviceRegistrationResource.class); //1/4. Registration - POST/DELETE
router.attach("/v1/devices/{deviceLibraryIdentifier}/registrations/{passTypeIdentifier}?passUpdatedSince={tag}", SerialNumbersPassWithDeviceResource.class); //2. SerialNumbers - GET
router.attach("/v1/passes/{passTypeIdentifier}/{serialNumber}", LatestVersionPassResource.class); //3. LatestVersion - GET
router.attach("/v1/log", LoggingResource.class); //5. Logging - POST
So where can I set the update tag (passUpdatedSince={tag}), and how can I get it under the route in Line 2 above? Is my router setup for capturing the update tag correct?
The passUpdatedSince={tag} value is set from the last successful response that your web service gave to the request:
https://{webServiceURL}/v1/devices/{deviceLibraryIdentifier}/registrations/{passTypeIdentifier}
You set it by providing a key of lastUpdated in the JSON dictionary response to the above request. The value can be anything you like, but the simplest approach would be to use a timestamp.
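Concretely, that response is a JSON dictionary pairing lastUpdated with the serial numbers of updated passes. A minimal sketch in Python (the serial number is a placeholder from the question's log; using a Unix timestamp for the tag is just one reasonable choice):

```python
import json
import time

# Sketch of the web service's reply to
# GET /v1/devices/{deviceLibraryIdentifier}/registrations/{passTypeIdentifier}
response = json.dumps({
    "lastUpdated": str(int(time.time())),  # echoed back later as passUpdatedSince={tag}
    "serialNumbers": ["ABC"],              # passes updated since the previous tag
})
```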
The if-modified-since value is set by the Last-Modified HTTP header sent with the last .pkpass bundle received matching the passTypeIdentifier and serialNumber. Again, you can choose what value to send in this header.
The specific error that you mention above is not due to either of these. It is caused by your web service not providing a .pkpass bundle in response to the request to:
https://{webServiceURL}/v1/passes/{passTypeIdentifier}/{serialNumber}
You may want to try hooking your device up to Xcode, turning on PassKit logging (Settings -> Developer), then monitoring the device's console log as you send the push. This may give you more detail as to why the device sent the message to your web service log.