How to purge data older than 30 days from Redis Server

I am using Logstash, Redis, ElasticSearch and Kibana 3 for my centralized log server. It's working fine and I am able to see the logs in Kibana. Now I want to keep only 30 days of logs in ElasticSearch and the Redis server. Is it possible to purge data from Redis?
I am using the below configuration
indexer.conf
input {
  redis {
    host => "127.0.0.1"
    port => 6379
    type => "redis-input"
    data_type => "list"
    key => "logstash"
    format => "json_event"
  }
}
output {
  stdout { debug => true debug_format => "json" }
  elasticsearch {
    host => "127.0.0.1"
  }
}
shipper.conf
input {
  file {
    type => "nginx_access"
    path => ["/var/log/nginx/**"]
    exclude => ["*.gz", "error.*"]
    discover_interval => 10
  }
}
filter {
  grok {
    type => "nginx_access"
    pattern => "%{COMBINEDAPACHELOG}"
  }
}
output {
  stdout { debug => true debug_format => "json" }
  redis { host => "127.0.0.1" data_type => "list" key => "logstash" }
}
As per this configuration, the shipper is sending data to Redis with the key "logstash". From the Redis documentation I learned that we can set a TTL on any key with the EXPIRE command to purge it. But when I search for the key "logstash" in Redis with keys logstash or keys *, I get no results. Please let me know if my question is not clear. Thanks in advance.

Redis is a key-value store, and keys are unique by definition. So if you want to store several logs, you need to add a new entry, with a new key and associated value, for each log.
It seems to me you have a fundamental flaw here, as you're always using the same key for all your logs. Try a different key for each log (I'm not sure how to do that from Logstash).
Then set a TTL of 30 days on each key.
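For illustration, here is a minimal redis-py sketch of that per-key approach; the key scheme and the store_log helper are hypothetical, not something Logstash provides out of the box:

import time
import redis

r = redis.Redis(host="127.0.0.1", port=6379)

def store_log(line):
    # Hypothetical scheme: one key per log entry, named by timestamp.
    key = "logstash:%d" % time.time_ns()
    # SETEX stores the value and sets the TTL in a single call.
    r.setex(key, 30 * 24 * 3600, line)  # expire after 30 days

For what it's worth, the reason keys logstash finds nothing in the setup above is likely that the redis output pushes events onto a single list under the "logstash" key and the indexer pops them off as they arrive; an empty list key does not show up in KEYS.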

Related

Logstash to Elasticsearch bulk request, SSL peer shut down incorrectly - Manticore::ClientProtocolException

ES version - 2.3.5, Logstash - 2.4
"Attempted to send bulk request to Elasticsearch, configured at ["xxxx.com:9200"].
An error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided?
Error: "SSL peer shut down incorrectly", Manticore::ClientProtocolException"
My Logstash output section:
output {
  stdout { codec => rubydebug }
  stdout { codec => json }
  elasticsearch {
    user => "xxxx"
    password => "xxx"
    index => "wrike_jan"
    document_type => "data"
    hosts => ["xxxx.com:9200"]
    ssl => true
    ssl_certificate_verification => false
    truststore => "elasticsearch-2.3.5/config/truststore.jks"
    truststore_password => "83dfcdddxxxxx"
  }
}
The Logstash file is executed, but it fails to send the data to ES.
Could you please suggest a fix? Thank you.
Be careful about http vs. https in the URL: in the above case I was sending data over https but my ES was served over http.
Later, an upgrade of the Logstash version fixed sending data to ES.
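As a quick way to spot this kind of scheme mismatch, you can probe the endpoint with both schemes and see which one answers; a minimal sketch with Python's requests library, reusing the placeholder host and credentials from the question:

import requests

for scheme in ("http", "https"):
    url = "%s://xxxx.com:9200" % scheme
    try:
        # verify=False mirrors ssl_certificate_verification => false
        resp = requests.get(url, auth=("xxxx", "xxx"), verify=False, timeout=5)
        print(scheme, resp.status_code)
    except requests.exceptions.RequestException as exc:
        print(scheme, "failed:", exc)

Whichever scheme returns a status code is the one the elasticsearch output should use.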

Socket.io redis: how data is stored and cleared

I am hosting an app on Heroku that uses socket.io, running on 4 standard 1X dynos. For this I used the Redis To Go service and the socket.io-redis plugin. It's working great, but I want to know: does socket.io-redis also clear data from the Redis DB when a socket disconnects? The Redis To Go plan provides only 20 MB of storage. Please explain how socket.io-redis inserts and clears data in the Redis database.
Assuming that you are referring to https://github.com/Automattic/socket.io-redis/blob/master/index.js, it appears that the plugin uses Redis' PubSub functionality. PubSub does not maintain state in the Redis database so there's no need to clear any data.
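You can see this with a minimal redis-py sketch (the channel name below is illustrative): publishing delivers the message to current subscribers and writes nothing to the keyspace.

import redis

r = redis.Redis()

# PUBLISH is fire-and-forget: the return value is the number of
# subscribers that received the message; nothing is stored.
receivers = r.publish("socket_io#/#", "hello")
print(receivers)
print(r.keys("socket_io*"))  # [] - pub/sub creates no keys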
The session store is responsible for session clean up upon socket disconnection. I use https://github.com/tj/connect-redis for my session store.
Here is an example of cleaning up the socket connection properly upon disconnecting.
const websocket = require('socket.io')(app.get('server'), {
  transports: process.env.transports
})

websocket.setMaxListeners(0)

// Use the Redis adapter so events are shared across dynos via pub/sub.
websocket.adapter(require('socket.io-redis')({
  host: process.env.redis_host,
  port: process.env.redis_port,
  key: 'socket_io',
  db: 2
}))

// Attach the Express session to each socket request.
websocket.use((socket, next) => {
  app.get('session')(socket.request, socket.request.res || {}, next)
})

websocket.on('connection', socket => {
  var sess = socket.request.session

  socket.use((packet, next) => {
    if (!socket.rooms[sess.id]) {
      // Join a room named after the session id on the first packet.
      socket.join(sess.id, () => {
        websocket.of('/').adapter.remoteJoin(socket.id, sess.id, err => {
          delete socket.rooms[socket.id]
          next()
        })
      })
    } else {
      next() // already joined; let the packet through
    }
  })

  socket.on('disconnecting', () => {
    // Remove the session room membership across all nodes on disconnect.
    websocket.of('/').adapter.remoteDisconnect(sess.id, true, err => {
      delete socket.rooms[sess.id]
      socket.removeAllListeners()
    })
  })
})

How to get all Redis keys using CRedisCache

I am using the CRedisCache extension for Yii. How can I get all the keys matching a pattern from Redis using CRedisCache?
Suppose you are looking for all keys starting with "ltp".
Add this to the Redis configuration in main.php:
'cache' => array(
    'class' => 'CRedisCache',
    'hostname' => '172.16.3.37',
    'port' => 6379,
    'database' => 0,
    'hashKey' => false,
    'keyPrefix' => '',
),
Then use Redis to get all matching keys:
$result = Yii::app()->cache->executeCommand('keys', array('ltp_*'));
foreach ($result as $mainkey => $value) {
    // your loop here
}
According to the docs at the CRedisCache docs page, there is an executeCommand method which allows you to pass any Redis command. The Redis docs at http://redis.io/commands detail the SCAN command, which will let you iterate over batches of keys in the DB until you have them all. This is not a trivial task, but it should work.
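For comparison, here is what that SCAN iteration looks like with the redis-py client; a sketch outside Yii, reusing the host and pattern from the example above:

import redis

r = redis.Redis(host="172.16.3.37", port=6379, db=0)

# scan_iter wraps SCAN and its cursor handling, yielding keys in
# batches instead of blocking the server the way KEYS can.
for key in r.scan_iter(match="ltp_*", count=100):
    print(key)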

activeMQ with logstash

Can ActiveMQ work with Logstash?
I am switching from RabbitMQ to ActiveMQ and trying to make Logstash work with ActiveMQ.
With my previous RabbitMQ setup, I had something like:
input {
  rabbitmq {
    host => "hostname"
    queue => "queue1"
    key => "key1"
    exchange => "ex1"
    type => "all"
    durable => true
    auto_delete => false
    exclusive => false
    format => "json_event"
    debug => false
  }
}
filter {....}
The Logstash docs at http://logstash.net/docs/1.4.1/ do not list ActiveMQ as a supported input...
Any suggestions?
You can probably use the STOMP input (I have not tried it myself); ActiveMQ supports STOMP.
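To sanity-check the broker side, here is a minimal sketch that sends a test message to ActiveMQ over STOMP with the stomp.py client; the host, credentials, and queue name are illustrative, and the exact API differs slightly between stomp.py versions:

import stomp

# 61613 is ActiveMQ's default STOMP port.
conn = stomp.Connection([("activemq-host", 61613)])
conn.connect("admin", "admin", wait=True)
conn.send(destination="/queue/logstash", body="hello from stomp")
conn.disconnect()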

Multiple Logstash instances causing duplication of lines

We're receiving logs using Logstash with the following configuration:
input {
  udp {
    type => "logs"
    port => 12203
  }
}
filter {
  grok {
    type => "tracker"
    pattern => '%{GREEDYDATA:message}'
  }
  date {
    type => "tracker"
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
  }
}
output {
  tcp {
    type => "logs"
    host => "host"
    port => 12203
  }
}
We're then picking the logs up on the machine "host" with the following settings:
input {
  tcp {
    type => "logs"
    port => 12203
  }
}
output {
  pipe {
    command => "python /usr/lib/piperedis.py"
  }
}
From here, we parse the lines and put them into a Redis database. However, we've discovered an interesting problem.
Logstash 'wraps' the log message in a JSON-style package, i.e.:
{\"#source\":\"source/\",\"#tags\":[],\"#fields\":{\"timestamp\":[\"2013-09-16 15:50:47,440\"],\"thread\":[\"ajp-8009-7\"],\"level\":[\"INFO\"],\"classname\":[\"classname\"],\"message\":[\"message"\]}}
We then, on receiving it and passing it on to the next machine, take that as the message and put it in another wrapper! We're only interested in the actual log message and none of the other stuff (source path, source, tags, fields, timestamp, etc.).
Is there a way we can use filters or something to do this? We've looked through the documentation but can't find any way to just pass the raw log lines between instances of Logstash.
Thanks,
Matt
The Logstash documentation is wrong: it indicates that the default "codec" is plain, but in fact it doesn't use a codec - it uses an output format.
To get a simpler output, change your output to something like
output {
  pipe {
    command => "python /usr/lib/piperedis.py"
    message_format => "%{message}"
  }
}
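With that in place, each line piped to the script is just the raw log message. As a hypothetical sketch of what piperedis.py might then do (the script itself is the asker's own; the Redis host and key below are assumptions):

import sys
import redis

r = redis.Redis(host="127.0.0.1", port=6379)

for line in sys.stdin:
    # Each line is now just %{message}, not the JSON envelope.
    r.rpush("logstash", line.rstrip("\n"))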
Why not just extract those messages from the JSON you receive on stdin?
import sys
import json

line = sys.stdin.readline()
line_json = json.loads(line)
print(line_json['message'])  # will be your #message