I am using the ActiveMQ REST APIs to POST and GET messages from a queue.
POST always works, but the GET API does not work all the time. It returns 204 No Content even though there are messages in the queue.
I am not able to understand why this happens. Am I using the wrong API to read the messages? Can anyone help me out, please?
POST API:
http://localhost:8161/api/message/TEST?type=queue&clientId=test
GET API:
http://localhost:8161/api/message/TEST?readTimeout=100&type=queue&clientId=consumerA&oneShot=true
Your activemq.xml configuration file should import jetty.xml; jetty.xml configures the base URL for all the REST endpoints.
Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
<import resource="jetty.xml"/>
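If jetty.xml is imported and GET still intermittently returns 204, two things worth trying (not a guaranteed fix) are a longer readTimeout and reusing the same HTTP session and clientId between polls. A minimal poll-loop sketch, assuming the default admin:admin demo credentials and the queue/clientId from the URLs above:

import java.net.CookieManager;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestQueuePoller {
    public static void main(String[] args) throws Exception {
        // Reuse one client so the JSESSIONID cookie (and with it the REST consumer) is kept between polls.
        HttpClient client = HttpClient.newBuilder()
                .cookieHandler(new CookieManager())
                .build();

        // readTimeout is in milliseconds; 10 seconds gives the broker time to hand over a message.
        URI uri = URI.create(
                "http://localhost:8161/api/message/TEST?type=queue&clientId=consumerA&readTimeout=10000");

        for (int i = 0; i < 5; i++) {
            HttpRequest request = HttpRequest.newBuilder(uri)
                    .header("Authorization", "Basic YWRtaW46YWRtaW4=") // admin:admin, the default demo credentials
                    .GET()
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() == 204) {
                System.out.println("No message available within readTimeout");
            } else {
                System.out.println("Got message: " + response.body());
            }
        }
    }
}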
I need to create a flow in Anypoint to download a .gz file from an external API source (through OAuth). I have created a Listener -> Request -> Write flow, but I don't see the file saved locally after I call the API. I have hardcoded the Bearer token in the header and the raw parameters in the body, and everything looks good. It doesn't show any error, but when I debug it I see that both the output and the payload are empty. I am able to download the .gz file with Postman. Am I going in the right direction? I saw someone using an outbound endpoint, but that is not available in Mule 4.
Is there any way I can see what the external API returned (success or failure)? And the content? Please advise.
Many thanks.
Regards,
Richard
Update 1:
Mule Flow
DEBUG
Added the flow and the debugging message. I have simplified the flow and just tried to make the POST REST call with the Bearer token. It should return a JSON response, but I'm still getting an empty response. Does anyone know what that java.util.LinkedHashMap thing is? Thanks.
Update 2:
Request Body
Request Header
XML Configuration:
XML Flow
You can add loggers to the application to trace the execution.
You could also enable HTTP wire logging to print requests and responses in the log. To enable it, set the package org.mule.service.http.impl.service.HttpMessageLogger to DEBUG in log4j2.xml or in the Runtime Manager console. Detailed instructions are available in this KB article.
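For reference, the wire-logging entry in a Mule 4 log4j2.xml is typically a single logger line like the following (a sketch, not the full file; adapt the level and name to your runtime):
<AsyncLogger name="org.mule.service.http.impl.service.HttpMessageLogger" level="DEBUG"/>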
I have an error queue in ActiveMQ, which is populated by Apache Camel's onException error handler. There could be thousands of messages in this queue.
Instead of using the ActiveMQ web console, I am building a custom web admin to integrate several other statistics from other components as well. Thus, I wanted to include the statistics from ActiveMQ as well.
ActiveMQ version: 5.14.3
I have looked at the Jolokia JMX API and its operations. For instance, I POST the following payload to the broker's Jolokia API endpoint:
{
"type": "exec",
"mbean": "org.apache.activemq:type=Broker,brokerName=localhost,destinationType=Queue,destinationName=test.errors",
"operation": "browse(java.lang.String)",
"arguments": ["EXCEPTION_TYPE LIKE '%jdbc%'"]
}
The header field EXCEPTION_TYPE is already populated via an Apache Camel route. I have more than 10k messages in this queue at the moment. I made a POST request to my broker's API endpoint with the payload shown above. Although I had more than 10k messages, the request returned just 400 of them (due to the max page size limitation, hard-coded in the source code). This means I will not be able to get more than 400 messages at a time via Jolokia. I also tried the browseMessages() method; it appears to do the same thing, in general.
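For reference, this is roughly how I send that payload (a sketch; the Jolokia endpoint path, port, and credentials are assumptions and depend on your broker setup):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JolokiaBrowse {
    public static void main(String[] args) throws Exception {
        // The same exec payload shown above, sent as the POST body.
        String payload = "{"
                + "\"type\":\"exec\","
                + "\"mbean\":\"org.apache.activemq:type=Broker,brokerName=localhost,"
                + "destinationType=Queue,destinationName=test.errors\","
                + "\"operation\":\"browse(java.lang.String)\","
                + "\"arguments\":[\"EXCEPTION_TYPE LIKE '%jdbc%'\"]"
                + "}";

        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8161/api/jolokia"))
                .header("Authorization", "Basic YWRtaW46YWRtaW4=") // admin:admin, assumed default credentials
                .header("Origin", "http://localhost")               // some Jolokia setups reject requests without it
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The value array in the response is capped by the browse page size (400 messages in my case).
        System.out.println(response.body());
    }
}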
Is it possible to browse these messages (let's say when they are high in number, maybe around 10k+)?
Or, is it possible to paginate them? I could not see a relevant operation method for that.
I tried to see whether Hawtio does something special to retrieve all the messages, but the result is the same (a maximum of 400 messages).
The ActiveMQ web console does fetch all the messages, probably because it is tightly coupled with the ActiveMQ project.
I am not restricted to JMX/Jolokia. If these stats can be fetched via some other API, that is equally fine.
Any inputs would be great!
I have these warnings filling all my logs. They seem to be caused by crawlers and spiders (see http://www.robotstxt.org/):
No receiver found on secondary lookup of receiver on connector: HTTP_HTTPS with URI key: https://myhost:443/robots.txt.
org.mule.transport.http.HttpsConnector: Receivers on connector are: {
I cannot find anyone who had this issue on the web before me...
I want to get rid of this in my flow; how can I do that? I can share the code, but I don't see how it is relevant in this case.
Thanks.
The issue you're having can be reproduced through the following steps:
Add an inbound endpoint with the following URL: http://localhost:8081/test, start your app, and call http://localhost:8081/something.txt.
The explanation: there is no inbound endpoint that matches the beginning of the URL you're calling. The easiest solution is to have a catch-all flow with an inbound endpoint whose address is http://localhost:8081/ and log (or not) every message received in that flow, as in the sketch below.
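A rough sketch of what that catch-all flow could look like with the Mule 3 HTTP transport (the flow name and logger message are made up for illustration):
<flow name="catchAllFlow">
    <http:inbound-endpoint address="http://localhost:8081/" exchange-pattern="request-response"/>
    <logger level="INFO" message="Caught a request that matched no other endpoint"/>
</flow>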
This MuleSoft tutorial explains how to filter out requests for favicon.ico. You should be able to do the same with requests for robots.txt.
New HTTP Listener in 3.6+:
<expression-filter
expression="#[message.inboundProperties.'http.request.uri' != '/robots.txt']" />
For 3.5.0 and less:
<expression-filter
expression="#[payload != '/robots.txt']" />
First, a little explanation: I have a system, let's call it SystemA, that I can configure to send HTTP POSTs to a URL I specify when something goes wrong, but I can't modify the request body directly.
My goal is to get the body of the POST request to a Storm spout via a Redis pub/sub queue.
I know I can publish to the Redis pub/sub channel by doing a POST to Webdis like:
url: http://127.0.0.1:7379/
body:/PUBLISH/channelname/someimportantinfo
Since I can't modify the body of the POST from SystemA to prepend /PUBLISH/channelname, I was hoping I could structure the request like:
url: http://127.0.0.1:7379/PUBLISH/channelname
body:someimportantinfo
but that does not work; I don't get an error, the event just never flows through the channel.
Any thoughts on how to get around this?
Your problem could be solved by adding a "shim" between SystemA and the Webdis interface.
The shim receives the HTTP POST request from SystemA, extracts the body, and then sends the request to Redis in the format Webdis expects.
Since you want to feed this data into Storm only when something goes wrong, I don't think this approach is going to be a bottleneck in your system (hopefully, your system isn't generating errors every second!).
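A minimal sketch of such a shim using only the JDK (the shim port, channel name, and Webdis address are assumptions carried over from the question):

import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class WebdisShim {
    public static void main(String[] args) throws Exception {
        HttpClient webdis = HttpClient.newHttpClient();

        // Point SystemA at http://<shim-host>:9000/ instead of Webdis.
        HttpServer server = HttpServer.create(new InetSocketAddress(9000), 0);
        server.createContext("/", exchange -> {
            try (InputStream in = exchange.getRequestBody()) {
                String body = new String(in.readAllBytes(), StandardCharsets.UTF_8);

                // Re-issue the request in the form that is known to work:
                // POST to Webdis with /PUBLISH/channelname/<message> as the body.
                // URL-encode the message so slashes inside it don't split the command.
                String command = "/PUBLISH/channelname/" + URLEncoder.encode(body, StandardCharsets.UTF_8);
                HttpRequest publish = HttpRequest.newBuilder(URI.create("http://127.0.0.1:7379/"))
                        .POST(HttpRequest.BodyPublishers.ofString(command))
                        .build();
                webdis.send(publish, HttpResponse.BodyHandlers.discarding());

                exchange.sendResponseHeaders(204, -1); // acknowledge SystemA's notification, no body
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                exchange.sendResponseHeaders(500, -1);
            } finally {
                exchange.close();
            }
        });
        server.start();
    }
}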
Using WSO2 API Manager 1.3.1, I am trying to use the API Manager to proxy a REST service. I have set up the service in API Manager and can successfully POST and GET responses, typically JSON, though some are text.
However, when I try to GET a resource that returns binary content (a zip "file", Content-Type: application/octet-stream), the API Manager does not seem to respond, and I can see an error in the console window (I'm running wso2server.bat in a console):
[2013-07-03 11:52:05,048] WARN - SourceHandler Connection time out
while writing the response: 173.21.1.22:1268->173.21.1.22:8280
I have an HTTPModule on my internal service, and it seems to be responding with the appropriate content (I can see the GET and response data logged). I can also call the internal service directly and get a response, so that end of things seems OK. But going through the API Manager seems to fail.
I found information on enabling other content-types:
WSO2 API Manager - Publishing API with non-XML response
http://wso2.com/library/articles/binary-relay-efficient-way-pass-both-xml-non-xml-content-through-apache-synapse
Using that information, I tried to enable application/octet-stream for the messageFormatter and messageBuilder using the binary relay, and it did not help (or seem to make a difference). I have even disabled all other content types and used the binary relay for everything, and it still does not help.
Currently, I'm running with just the following in both axis2.xml and axis2_client.xml (in their appropriate sections):
<messageBuilder contentType=".*" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
<messageFormatter contentType=".*" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
I still get my JSON and text responses, but WSO2 times out getting the zip content. I saw the JIRA referenced in axis2.xml about enabling the ".*" relay, but since the other requests seem to work, I'm not sure it's an issue for me. I did try adding format="rest" to the API definition, but it seemed to break all operations, even the ones that worked before, so I've pulled it back out.
Any ideas on what is happening or how to dig in and debug this will help. Thanks!
After working with this for much too long, it turns out that my WSO2 configuration was correct, using the Message Relay and BinaryRelayBuilder, etc. While my REST service could reply immediately, I was setting an HTTP header that I assume WSO2 does not like, because when I removed it WSO2 replied at the expected rate (instantly).
I was setting the header:
Transfer-Encoding: binary
When I removed that header from my service reply, WSO2 operated as expected. I don't know if that's a "bug" in WSO2 or if I was implementing things incorrectly, but I do have what seems like a workaround: omitting that header from my service response.
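For illustration, the working reply from my service boils down to streaming the bytes with just a Content-Type and no Transfer-Encoding header. A hypothetical JDK-only sketch (not my actual service code; the port, path, and export.zip file name are made up):

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Path;

public class ZipEndpoint {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8090), 0);
        server.createContext("/archive", exchange -> {
            byte[] zip = Files.readAllBytes(Path.of("export.zip")); // hypothetical zip to return
            exchange.getResponseHeaders().set("Content-Type", "application/octet-stream");
            // Deliberately no "Transfer-Encoding: binary" header; sending it was what made
            // the response hang when proxied through the API Manager in my case.
            exchange.sendResponseHeaders(200, zip.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(zip);
            }
        });
        server.start();
    }
}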