redis/webdis & storm, publish with HTTP POST

First, a little explanation: I have a system, let's call it SystemA, that I can configure to send HTTP POSTs to a URL I specify when something goes wrong, but I can't modify the request body directly.
My goal is to get the body of the POST request to a Storm spout via a Redis pub/sub channel.
I know I can publish to the Redis pub/sub channel by doing a POST to Webdis like:
url: http://127.0.0.1:7379/
body: /PUBLISH/channelname/someimportantinfo
Since I can't modify the body of the POST from SystemA to prepend the /PUBLISH/channelname, I was hoping I could structure the request like:
url: http://127.0.0.1:7379/PUBLISH/channelname
body: someimportantinfo
but that does not work; I don't get an error, the event just never flows through the channel.
Any thoughts on how to get around this?

Your problem could be solved by adding a "shim" between SystemA and the Webdis interface.
This shim will receive the HTTP POST request from SystemA, extract the body, and then send the request to Redis in the desired format.
Since you only want to feed this data into Storm when something goes wrong, I don't think this approach is going to be a bottleneck in your system (hopefully, your system isn't generating errors every second!).
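For illustration, here is a minimal sketch of such a shim as a single PHP script; the Webdis address and channel name are taken from your question, and you would point SystemA at whatever URL you deploy this script under:
<?php
// shim.php - receives the POST from SystemA and republishes it via Webdis.
// Assumes Webdis is listening on 127.0.0.1:7379 and the channel is
// "channelname", as in the question.

// Read the raw body that SystemA sent.
$body = file_get_contents('php://input');

// Re-wrap it in the command format Webdis expects; rawurlencode() keeps
// slashes in the payload from being parsed as extra command arguments.
$command = '/PUBLISH/channelname/' . rawurlencode($body);

// Forward the command to Webdis as the body of a POST to "/".
$context = stream_context_create(array(
    'http' => array(
        'method'  => 'POST',
        'content' => $command,
    ),
));
file_get_contents('http://127.0.0.1:7379/', false, $context);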

Related

Can you add a Nagios Acknowledgment URL to Slack messages

I'm new to Nagios, and my first task has been Slack integration. I am using the Nagios-created plugin for Slack, and everything is working correctly as far as posting the message in the channels for both hosts and services. I want to add an acknowledgment URL to the message that gets sent to Slack, so our team can acknowledge right from the Slack message. I have tried adding the URL in various places, like the host/service definitions, so I could then call it in the command Slack_notification_handlers. Is it possible to add said acknowledgment URLs with the current methods I'm using, or is this something I'm going to have to build myself?

Log Laravel API calls to AWS Cloudwatch

I have been using this package (https://github.com/maxbanton/cwh) to set up logging in my Laravel 8 app.
I can already send the logs properly and see them in CloudWatch. What I am trying to do is log the API calls (request and response), and I was able to do this by adding it in the middleware. My only question is: how can you do this asynchronously? I found this discussion posted as an issue, and this was the suggested solution (https://github.com/maxbanton/cwh/issues/25#), but I couldn't figure out how to implement it.
Has anybody been able to send API logs to CloudWatch asynchronously? Thank you.
What you can try is sending the call to a queued job or listener that is not running on the sync queue (beanstalkd preferably, as using the DB as the queue store will probably take about as long as sending the log to AWS).
This way you can fire the call off to something else, and that can deal with it without you waiting for it.
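As a rough sketch of what that looks like (the job class name and the "cloudwatch" log channel are placeholders for whatever you have configured; the important parts are implementing ShouldQueue and running a non-sync queue driver such as beanstalkd):
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Support\Facades\Log;

// Hypothetical job that ships one request/response pair to CloudWatch.
class LogApiCall implements ShouldQueue
{
    use Dispatchable, Queueable;

    protected $payload;

    public function __construct(array $payload)
    {
        $this->payload = $payload;
    }

    public function handle()
    {
        // "cloudwatch" is whatever channel you configured for the
        // maxbanton/cwh handler in config/logging.php.
        Log::channel('cloudwatch')->info('api', $this->payload);
    }
}

// In your middleware, dispatch instead of logging inline, e.g.:
// LogApiCall::dispatch(array('uri' => $request->path(), 'status' => $response->status()));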

Is it possible to retrieve more than 400 messages from ActiveMQ queue via Jolokia API?

I have an error queue in ActiveMQ, which is populated by Apache Camel's onException error handler. There could be thousands of messages in this queue.
Instead of using the ActiveMQ web console, I am building a custom web admin to integrate several other statistics from other components as well. Thus, I wanted to include the statistics from ActiveMQ as well.
ActiveMQ version: 5.14.3
I have looked at Jolokia JMX API, and its operations. For instance, I have the following payload to the broker's Jolokia API endpoint:
{
  "type": "exec",
  "mbean": "org.apache.activemq:type=Broker,brokerName=localhost,destinationType=Queue,destinationName=test.errors",
  "operation": "browse(java.lang.String)",
  "arguments": ["EXCEPTION_TYPE LIKE '%jdbc%'"]
}
The header field EXCEPTION_TYPE is already populated via an Apache Camel route. I have more than 10k messages in this queue at the moment. I made a POST request to my broker's Jolokia API endpoint with the payload shown above. Although I had more than 10k messages, this request returned just 400 messages (due to the max page size limitation, hard-coded in the source code). This means that I will not be able to get more than 400 messages at a time via Jolokia. I also tried the browseMessages() method as well. It looks like it does the same thing, in general.
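For reference, the POST I am making looks roughly like this in PHP/cURL (the endpoint URL and admin:admin credentials shown here are the ActiveMQ defaults, not necessarily what your broker uses):
<?php
// Send the exec payload above to the broker's Jolokia endpoint.
$payload = json_encode(array(
    'type'      => 'exec',
    'mbean'     => 'org.apache.activemq:type=Broker,brokerName=localhost,'
                 . 'destinationType=Queue,destinationName=test.errors',
    'operation' => 'browse(java.lang.String)',
    'arguments' => array("EXCEPTION_TYPE LIKE '%jdbc%'"),
));

$ch = curl_init('http://localhost:8161/api/jolokia');
curl_setopt_array($ch, array(
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $payload,
    CURLOPT_HTTPHEADER     => array('Content-Type: application/json'),
    CURLOPT_USERPWD        => 'admin:admin',
    CURLOPT_RETURNTRANSFER => true,
));
$response = json_decode(curl_exec($ch), true);
curl_close($ch);

// Jolokia wraps the operation result in a "value" key; with default
// broker settings this holds at most 400 messages.
$messages = $response['value'];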
Is it possible to browse these messages (let's say if they are high in number, may be around 10k+)?
Or, is it possible to paginate them? I could not see a relevant operation method for that.
I tried to see if Hawtio did something special to retrieve all the messages, but the result is the same (max 400 messages).
The ActiveMQ web console does fetch all the messages. This could probably be because it is tightly coupled with the ActiveMQ project.
I am not restricted to just JMX/Jolokia. If these stats can be fetched via some other API, that's equally fine.
Any inputs would be great!

How to receive webhook signal from 3rd party service

I'm using a SaaS for my AWS instance monitoring and Mandrill for email sending/campaigns.
I had created a simple chart with Zapier, but I'd rather host it myself. So my question is:
How can I receive a webhook signal from Mandrill and then send it to Datadog from my server? Then again, I guess hosting this script right on the same server I'm monitoring would be a terrible idea...
Basically, I don't know how to "receive the webhook" so I can report it back to my Datadog service agent and have it updated on their website.
I get how to actually report the data to Datadog, as explained here (http://docs.datadoghq.com/api/), but I just don't have a clue how to host a listener for webhooks.
Programming language isn't important; I don't have a preference in this case.
Here you can find how to add a new webhook to your Mandrill account: https://mandrillapp.com/api/docs/webhooks.php.html#method=add
The main thing here is this:
$url = 'http://example/webhook-url';
This is your webhook URL, which will process the data sent by Mandrill and forward the information to Datadog.
And this is a description of what Mandrill will send to your webhook URL: http://help.mandrill.com/entries/21738186-Introduction-to-Webhooks
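For completeness, here is a rough sketch of that registration call using the official Mandrill PHP client from the docs page linked above (the API key, description, and event list are placeholders):
<?php
// Register a webhook via the Mandrill PHP API client.
require 'Mandrill.php';

$mandrill = new Mandrill('YOUR_API_KEY');
$url = 'http://example/webhook-url';
$result = $mandrill->webhooks->add($url, 'Forward events to Datadog',
    array('send', 'hard_bounce', 'reject'));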
a listener for webhooks is nothing else then a website/app which triggers an action if a request comes in. Usually you keep it secret or secure it with (http basic) authentication. E.g. create a website called http://yourdomain.com/hooklistener.php. You can then call it with HTTP POST or GET and pass some data like hooklistener.php?event=triggerDataDog or with POST and send data along with the body. You then run a script or anything you want to process that event.
A "listener" is just any URL that you host where you can receive data that is posted to it. Keep in mind, since you mentioned Zapier, you can set up a trigger that receives the webhook data - in this case the listener URL is provided by Zapier, and you can then send that data into any application (or even post to another webhook). Using Zapier is nice because it doesn't require you to write the listener code that receives the hook data and does something with it.

WSO2 API Manager is not responding to a request that returns a zip file (application/octet-stream)

Using WSO2 API Manager 1.3.1. I am trying to use the API Manager to proxy to a REST service. I have set up the service in API Manager and can successfully post and get responses, typically JSON, though some are text.
However, when I try to GET a resource that returns binary content (a zip "file", content-type: application/octet-stream), the API Manager does not seem to respond, and I can see an error in the console window (I'm running wso2server.bat in a console):
[2013-07-03 11:52:05,048] WARN - SourceHandler Connection time out
while writing the response: 173.21.1.22:1268->173.21.1.22:8280
I have an HTTPModule on my internal service, and it seems to be responding with the appropriate content (I can see the GET and the response data logged). I can also call the internal service directly and get a response, so that end of things seems OK. But going through the API Manager fails.
I found information on enabling other content-types:
WSO2 API Manager - Publishing API with non-XML response
http://wso2.com/library/articles/binary-relay-efficient-way-pass-both-xml-non-xml-content-through-apache-synapse
Using that information, I tried to enable application/octet-stream for messageFormatter and messageBuilder using the binary relay, and it did not help (or seem to make a difference). I have even disabled all other content-types and used the binary relay for all content-types, and it does not help.
Currently, I'm running with just the following in both axis2.xml and axis2_client.xml (in their appropriate sections):
<messageBuilder contentType=".*" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
<messageFormatter contentType=".*" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
I still get my JSON and text responses, but WSO2 times out getting the zip content. I saw the JIRA referenced in axis2.xml about enabling the ".*" relay, but as the other requests seem to work, I'm not sure it's an issue for me. I did try adding 'format="rest"' to the API definition, but it seemed to break all operations, even the ones that worked prior, so I've pulled it back out.
Any ideas on what is happening or how to dig in and debug this will help. Thanks!
After working with this for much too long, it turns out that my WSO2 configuration was correct, using the Message Relay and BinaryRelayBuilder, etc. While my REST service could reply immediately, I was setting an HTTP header that I assume WSO2 does not like, because when I removed it WSO2 replied at the expected rate (instantly).
I was setting the header:
Transfer-Encoding: binary
When I removed that header from my service reply, WSO2 operated as expected. I don't know whether that's a "bug" in WSO2 or whether I was implementing something incorrectly, but I do have what seems like a workaround: omitting that header from my service response.