activeMQ with logstash

Can ActiveMQ work with logstash?
I was switching from RabbitMQ to ActiveMQ and trying to get logstash to work with ActiveMQ.
In my previous RabbitMQ setup, I had something like:
input {
  rabbitmq {
    host => "hostname"
    queue => "queue1"
    key => "key1"
    exchange => "ex1"
    type => "all"
    durable => true
    auto_delete => false
    exclusive => false
    format => "json_event"
    debug => false
  }
}
filter {....}
The Logstash documentation does not list ActiveMQ as a supported input:
http://logstash.net/docs/1.4.1/
Any suggestions?

You can probably use the STOMP input (I have not tried it myself); ActiveMQ supports STOMP.
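An untested sketch of what that input might look like, assuming the stomp input plugin is installed; the host and destination values are placeholders, not taken from the question:
input {
  stomp {
    host => "activemq-hostname"      # placeholder
    port => 61613                    # ActiveMQ's default STOMP listener port
    destination => "/queue/queue1"   # assumed queue name
    # user => "..." and password => "..." if the broker requires authentication
  }
}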

Related

How to make logstash work with io.confluent.kafka.serializers.KafkaAvroSerializer

output {
  kafka {
    topic_id => "myTopic"
    bootstrap_servers => "127.0.0.1:9092"
    value_serializer => "io.confluent.kafka.serializers.KafkaAvroSerializer"
  }
}
[[main]-pipeline-manager] kafka - Unable to create Kafka producer from given configuration {:kafka_error_message=>org.apache.kafka.common.config.ConfigException: Invalid value io.confluent.kafka.serializers.KafkaAvroSerializer for configuration value.serializer: Class io.confluent.kafka.serializers.KafkaAvroSerializer could not be found., :cause=>nil}
Has anyone made logstash work with io.confluent.kafka.serializers.KafkaAvroSerializer ?
You'll need to use the ByteArraySerializer and install this codec:
https://github.com/revpoint/logstash-codec-avro_schema_registry
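A rough sketch of how that combination might look, assuming the codec plugin is installed; the schema registry URL and the commented lines mark my assumptions:
output {
  kafka {
    topic_id => "myTopic"
    bootstrap_servers => "127.0.0.1:9092"
    # ship raw bytes; the codec below produces the Avro-encoded payload
    value_serializer => "org.apache.kafka.common.serialization.ByteArraySerializer"
    codec => avro_schema_registry {
      endpoint => "http://localhost:8081"   # schema registry URL, assumed
      # plus the codec's encoder-side schema options per its README
    }
  }
}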

Logstash to Elasticsearch bulk request, SSL peer shut down incorrectly - Manticore::ClientProtocolException

ES version: 2.3.5, Logstash: 2.4
Attempted to send a bulk request to Elasticsearch, configured at ["xxxx.com:9200"], but an error occurred and it failed! Are you sure you can reach Elasticsearch from this machine using the configuration provided?
Error: "SSL peer shut down incorrectly", Manticore::ClientProtocolException
My logstash output section:
output {
  stdout { codec => rubydebug }
  stdout { codec => json }
  elasticsearch {
    user => "xxxx"
    password => "xxx"
    index => "wrike_jan"
    document_type => "data"
    hosts => ["xxxx.com:9200"]
    ssl => true
    ssl_certificate_verification => false
    truststore => "elasticsearch-2.3.5/config/truststore.jks"
    truststore_password => "83dfcdddxxxxx"
  }
}
The Logstash config runs, but it fails to send the data to ES.
Could you please suggest a fix? Thank you.
Be careful about http vs. https in the URL: in the case above I was sending data over https, but my ES was listening on plain http.
Later, upgrading the Logstash version fixed sending data to ES.
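For reference, a minimal sketch of the two consistent variants of the output (host, truststore path, and password are the placeholders from the question):
# ES listening on plain http: do not force TLS
elasticsearch {
  hosts => ["xxxx.com:9200"]
  ssl => false
}
# ES actually behind TLS: keep ssl and the truststore settings
elasticsearch {
  hosts => ["xxxx.com:9200"]
  ssl => true
  truststore => "elasticsearch-2.3.5/config/truststore.jks"
  truststore_password => "83dfcdddxxxxx"
}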

Spring jms listener configuration - Websphere MQ resource adapter 7.5 - JBoss EAP 6.2

I'm looking for some help.
Context:
Application configuration,
JBoss EAP 6.2,
WebSphere MQ 7.5 Resource Adapter,
Spring
Current Status:
The application manages to send messages just fine, but unless we "boost" the number of connections used for message consumption, a warning message pops up in the logs:
WARN [org.springframework.jms.listener.DefaultMessageListenerContainer] (org.springframework.jms.listener.DefaultMessageListenerContainer#0-28) Setup of JMS message listener invoker failed for destination 'jms/workflowInputQueue' - trying to recover. Cause: MQJCA1018: Only one session per connection is allowed.
Research result:
After some research, I identified the root cause from the following two links,
referenced here: Removal of internal Connection Pooling for WebSphere MQ Classes
and here: IBM MQ - Configuring the resource adapter for inbound communication
The root cause of the issue can be summarized by this quote:
For Inbound Communications, the JMS Connection Pools and JMS Session Pools are implemented by the WebSphere MQ JCA resource adapter.
What I'm looking for:
I don't have prior Spring knowledge; my tasks have usually focused on Linux/Unix administration and application server setup and configuration.
I'd like:
A sample configuration for a JMS listener (MDP), with some indication of what can be configured in JBoss, and what cannot and should instead be configured in Spring or other XML configuration (an Activation Spec being one example)
Additional sources I can read to better understand the Spring configuration
Some explanation on how to reference the resource adapter in the Spring configuration, given that the wmq.jmsra.rar resource adapter has been uploaded to the JBoss Content Repository and assigned to the same server group as the application.
The ideal form of the solution:
A sample Spring configuration (XML based), with resource adapter, ActivationSpec, MessageEndpointListener or jca-listener-container, that would give a good overview of the necessary configuration; see the sketch after this list
Possibly explanations on how to define all this configuration in JBoss and reference it in Spring through a JNDI lookup.
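To make the request concrete, here is an untested sketch using the Spring JMS namespace; the listener class name and attribute values are my guesses, not a working configuration, and how to wire the container-deployed wmq.jmsra adapter into the resource-adapter attribute is exactly the part I'm unsure about:
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:jms="http://www.springframework.org/schema/jms"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms.xsd">

  <!-- plain POJO doing the work (message-driven POJO); class name is hypothetical -->
  <bean id="workflowListener" class="com.example.WorkflowListener"/>

  <!-- JCA-based container: sessions are managed by the resource adapter itself,
       which should avoid the MQJCA1018 one-session-per-connection limit -->
  <jms:jca-listener-container resource-adapter="resourceAdapter" acknowledge="auto">
    <jms:listener destination="jms/workflowInputQueue" ref="workflowListener"/>
  </jms:jca-listener-container>

  <!-- 'resourceAdapter' must resolve to the deployed wmq.jmsra adapter;
       how to obtain that reference under JBoss EAP is part of my question -->
</beans>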
I already asked this question through the JBoss Community forum but haven't received much feedback
Current problematic configuration:
/profile=myapp/subsystem=resource-adapters/resource-adapter=wmq.jmsra.rar/connection-definitions=myappListenerQCF:read-resource(include-defaults=true,recursive=true)
{
    "outcome" => "success",
    "result" => {
        "allocation-retry" => undefined,
        "allocation-retry-wait-millis" => undefined,
        "background-validation" => false,
        "background-validation-millis" => undefined,
        "blocking-timeout-wait-millis" => undefined,
        "class-name" => "com.ibm.mq.connector.outbound.ManagedConnectionFactoryImpl",
        "enabled" => true,
        "flush-strategy" => "FailingConnectionOnly",
        "idle-timeout-minutes" => undefined,
        "interleaving" => false,
        "jndi-name" => "java:/jms/myappListenerQCF",
        "max-pool-size" => "100",
        "min-pool-size" => "100",
        "no-recovery" => false,
        "no-tx-separate-pool" => false,
        "pad-xid" => false,
        "pool-prefill" => "true",
        "pool-use-strict-min" => "true",
        "recovery-password" => undefined,
        "recovery-plugin-class-name" => undefined,
        "recovery-plugin-properties" => undefined,
        "recovery-security-domain" => "wsmq_security",
        "recovery-username" => undefined,
        "same-rm-override" => undefined,
        "security-application" => false,
        "security-domain" => "wsmq_security",
        "security-domain-and-application" => undefined,
        "use-ccm" => true,
        "use-fast-fail" => false,
        "use-java-context" => true,
        "use-try-lock" => undefined,
        "wrap-xa-resource" => true,
        "xa-resource-timeout" => undefined,
        "config-properties" => {
            "hostName" => {"value" => "mqserverhostname"},
            "port" => {"value" => "1520"},
            "channel" => {"value" => "channelname"},
            "transportType" => {"value" => "CLIENT"},
            "queueManager" => {"value" => "qmname"}
        }
    }
}
/profile=myapp/subsystem=resource-adapters/resource-adapter=wmq.jmsra.rar/admin-objects=workflowInputQueue:read-resource(include-defaults=true,recursive=true)
{
    "outcome" => "success",
    "result" => {
        "class-name" => "com.ibm.mq.connector.outbound.MQQueueProxy",
        "enabled" => true,
        "jndi-name" => "java:/jms/workflowInputQueue",
        "use-java-context" => true,
        "config-properties" => {"baseQueueName" => {"value" => "QUEUE.IN"}}
    }
}
Also worth noting: there is another Stack Overflow post entitled "Spring JMS and WebSphere MQ", which at least partly covers the subject, but not the JBoss aspect at all.
I'd favor being able to push most of the configuration to the JBoss level and just look it up in Spring, if possible.
Thanks in advance for all the input the community can provide.

logstash rabbitmq output never posts to exchange

I've got logstash running and successfully reading in a file.
RabbitMQ is running; I'm watching the log, and I can see the web interface.
I've configured logstash to output to a RabbitMQ exchange... I think!
Here's the problem: nothing ever gets posted to the exchange, as seen in the web interface.
Any ideas?
My output config:
output {
  rabbitmq {
    codec => plain
    host => "localhost"
    exchange => "yomtvraps"
    exchange_type => "direct"
  }
  file { path => "/tmp/heartbeat-from-logstash.log" }
}
UPDATE: I'm watching the rabbit log with
tail -F /usr/local/var/log/rabbitmq/rabbit\#localhost.log
As it turns out, the problem was that there was no routing key set for the exchange and queue.
A working config is:
output {
  rabbitmq {
    codec => plain
    host => "localhost"
    exchange => "yomtvraps"
    exchange_type => "direct"
    key => "yomtvraps"
    # these are defaults, but you never know...
    durable => true
    port => 5672
    user => "guest"
    password => "guest"
  }
}
Here's sample receiver code (using the ruby "Bunny" gem):
require "bunny"

# connect to the local broker with default settings
conn = Bunny.new(:automatically_recover => false)
conn.start
ch = conn.create_channel

# declare the queue and the direct exchange that logstash publishes to
q = ch.queue("yomtvraps")
exchange = ch.direct("yomtvraps", :durable => true)

begin
  puts " [*] Waiting for messages. To exit press CTRL+C"
  # bind with the same routing key logstash uses, then block on deliveries
  q.bind(exchange, :routing_key => "yomtvraps").subscribe(:block => true) do |delivery_info, properties, body|
    puts " [x] Received #{body}"
  end
rescue Interrupt => _
  conn.close
  exit(0)
end
Your rabbitmq parameters seem insufficient: username, password, and port have not been configured.
You can configure two outputs, one to rabbitmq and the other to a file, to verify that the log events are being created and that logstash itself is OK.
Pay attention to the versions of logstash and its rabbitmq plugin; they gave me a lot of trouble in my earlier trials (logstash to another redis server, etc.).
You could also debug rabbitmq's log: with ps -ef | grep erl you can find the log file's path in the arguments.
Be sure that rabbitmq's web manager plugin is enabled (see the command below) and the firewall is configured correctly, then open rabbitmq's web manager at ipaddress:15672.
Check that the exchange's type is correct (in this case 'direct' may be the right choice), that your message consumer is configured correctly, and that your consumer's queue has been bound to the exchange correctly.
Try posting a message to your consumer through the web manager and ensure the consumer works well.
Monitor your queue while logstash pushes logs to your consumer.
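In case the management plugin is not enabled yet, the standard RabbitMQ command for that is:
rabbitmq-plugins enable rabbitmq_management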

How to purge data older than 30 days from Redis Server

I am using Logstash, Redis DB, ElasticSearch, and Kibana 3 for my centralized log server. It's working fine and I am able to see the logs in Kibana. Now I want to keep only 30 days of logs in ElasticSearch and the Redis server. Is it possible to purge data from Redis?
I am using the configuration below.
indexer.conf
input {
  redis {
    host => "127.0.0.1"
    port => 6379
    type => "redis-input"
    data_type => "list"
    key => "logstash"
    format => "json_event"
  }
}
output {
  stdout { debug => true debug_format => "json" }
  elasticsearch {
    host => "127.0.0.1"
  }
}
shipper.conf
input {
  file {
    type => "nginx_access"
    path => ["/var/log/nginx/**"]
    exclude => ["*.gz", "error.*"]
    discover_interval => 10
  }
}
filter {
  grok {
    type => "nginx_access"
    pattern => "%{COMBINEDAPACHELOG}"
  }
}
output {
  stdout { debug => true debug_format => "json" }
  redis { host => "127.0.0.1" data_type => "list" key => "logstash" }
}
As per this configuration, the shipper is sending data to the Redis DB with the key "logstash". From the Redis documentation I learned that we can set a TTL on any key with the EXPIRE command to purge it. But when I search for the key "logstash" in Redis with keys logstash or keys *, I get no result. Please let me know if my question is unclear. Thanks in advance.
Redis is a key:value store, and keys are unique by definition. So if you want to store several logs, you need to add a new entry, with a new key and associated value, for each log.
So it seems to me you have a fundamental flaw here, as you're always using the same key for all your logs. Try a different key for each log (I'm not sure how to do that).
Then set the TTL to 30 days.
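For reference, a minimal sketch of setting a 30-day TTL from redis-cli; the key name here is just an example, and 2592000 seconds = 30 days:
redis-cli EXPIRE some-log-key 2592000
EXPIRE returns 1 if the timeout was set, and 0 if the key does not exist.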