I am trying to send my logs from Filebeat into Redis, and then from Redis into Logstash and Elasticsearch.
I am getting this error:
Failed to RPUSH to redis list with ERR wrong number of arguments for 'rpush' command
2018-05-22T11:15:43.063+0530 ERROR pipeline/output.go:92
Failed to publish events: ERR wrong number of arguments for 'rpush' command
What configuration needs to be made in my filebeat.yml so that the output is Redis?
Also, what configuration needs to be made in my Logstash config when my input is Redis?
I am also getting a "Redis Connection problem" error when running my Logstash conf file.
Please provide a solution for this.
Thank you.
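For reference, a minimal sketch of that wiring might look like the following; the host, list key, and index name are placeholders, not values taken from the question.
filebeat.yml (ship events to a Redis list):
output.redis:
  hosts: ["127.0.0.1:6379"]
  key: "filebeat"      # Redis list the events are RPUSHed to
  db: 0
  timeout: 5
Logstash pipeline .conf (read the same list and index into Elasticsearch):
input {
  redis {
    host      => "127.0.0.1"
    port      => 6379
    data_type => "list"
    key       => "filebeat"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}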
Related
I have configured a stream on a RabbitMQ Docker container. I am able to use the rabbitmqadmin command to publish to it, and I can see that the message gets appended to the stream when I run rabbitmqadmin publish. But for some reason, running rabbitmqadmin get on the stream fails with "540 basic.get not supported by stream queues queue "xxxx" in vhost".
It looks like a different command is needed to read from a stream, or it cannot be done from the command console. Does anyone have any ideas how to overcome this?
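For what it's worth, stream queues can only be read with basic.consume (with a prefetch count set), not basic.get, so one hedged way to read the stream is a small consumer script. The sketch below assumes the Python pika client, default guest credentials on localhost, and the queue name from the question.
import pika

params = pika.ConnectionParameters("localhost", credentials=pika.PlainCredentials("guest", "guest"))
connection = pika.BlockingConnection(params)
channel = connection.channel()

# Stream queues reject basic.get; they need basic.consume with a prefetch count set.
channel.basic_qos(prefetch_count=100)

def on_message(ch, method, properties, body):
    print(body)
    ch.basic_ack(method.delivery_tag)

# x-stream-offset picks where to start reading: "first", "last", "next", or a numeric offset.
channel.basic_consume(
    queue="xxxx",
    on_message_callback=on_message,
    arguments={"x-stream-offset": "first"},
)
channel.start_consuming()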
I am trying to run an Airflow Celery worker with a Redis backend, but I can't get the connection between the worker and Redis to work.
Here is what the configuration looks like:
broker_url = redis://:<my-password>@localhost:6379/0
I am getting this error:
ValueError: invalid literal for int() with base 10: 'my-password-string'
Does anyone know a fix for this?
That's because there are special characters in your password. You can handle it like this:
e.g. your password is my#pw#
URL-encode it and you get my%23pw%23
then add another % before each % so that Airflow can read the config successfully
The final password is therefore my%%23pw%%23, so e.g. the Redis broker URL in airflow.cfg should look like this:
broker_url = redis://:my%%23pw%%23@redis-host:6379/0
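As a quick sanity check of those two steps, here is a small sketch (the password is just the example value above, and the host name is a placeholder):
from urllib.parse import quote

raw = "my#pw#"
encoded = quote(raw, safe="")         # URL-encode -> "my%23pw%23"
for_cfg = encoded.replace("%", "%%")  # double each % for airflow.cfg -> "my%%23pw%%23"
print("broker_url = redis://:" + for_cfg + "@redis-host:6379/0")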
So I solved it now.
My password was generated from random characters piped through base64, and it was causing problems.
I changed my password to a shorter and simpler one, and the Airflow worker now runs normally.
I'm trying to configure the Schema Registry to work with SSL. I already have ZooKeeper and the Kafka brokers working with the same SSL keys,
but whenever I start the Schema Registry I get the following error:
ERROR Error starting the schema registry(io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: Timed out trying to create or validate schema topic configuration
schema-registry.properties configuration:
listeners=https://IP:8081
kafkastore.connection.url=IP:2181
kafkastore.bootstrap.servers=SSL://IP:9092
kafkastore.topic=_schemas
kafkastore.topic.replication.factor=1
kafkastore.security.protocol=SSL
ssl.truststore.location=/.kafka_ssl/kafka.server.truststore.jks
ssl.truststore.password=password
ssl.keystore.location=/.kafka_ssl/kafka.server.keystore.jks
ssl.keystore.password=password
ssl.key.password=password
ssl.endpoint.identification.algorithm=
inter.instance.protocol=https
Can someone advise?
There are a couple of reasons that might cause this issue. Try to use a different topic for kafkastore.topic in case _schemas got corrupted.
For example,
kafkastore.topic=_schemas_new
https://github.com/antirez/redis/issues/3689
On a RHEL (Red Hat) machine I installed Redis 3.0.7 as a daemon: let's call this "A".
On a Windows Server 2012 machine I installed Redis 3.2.1 as a service: let's call this "B".
I want to migrate the key "IdentityRepo" from A to B. In order to achieve that, I tried to execute the following command on Redis A:
migrate <IP of B> 6379 "IdentityRepo" 3 1000 COPY REPLACE
The following error occurred:
(error) ERR Target instance replied with error: ERR DUMP payload version or checksum are wrong
What could be the problem?
The encoding version changed between v3.0 and v3.2 due to the addition of quicklists, so MIGRATE, as well as DUMP/RESTORE, will not work in that scenario.
To work around it, you'll need to read the value from the old database and then write it to the new one using any Redis client.
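A minimal sketch of that workaround with redis-py, assuming "IdentityRepo" holds a plain string value (hashes, lists, etc. would need the matching type-specific commands) and using placeholder host names:
import redis

src = redis.Redis(host="redis-a.example.com", port=6379, db=0)  # Redis 3.0.7 instance "A"
dst = redis.Redis(host="redis-b.example.com", port=6379, db=3)  # Redis 3.2.1 instance "B", db 3 as in the MIGRATE command

value = src.get("IdentityRepo")       # read the raw value from A (assumes a string key)
if value is not None:
    dst.set("IdentityRepo", value)    # write it to B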
I have a 3-node Brisk cluster (Brisk v1.0_beta2). Cassandra is working fine (all three nodes see each other and data is balanced across the ring). I started the nodes with the brisk cassandra -t command. I cannot, however, run any Hive or Pig jobs. When I do, I get an exception saying that it cannot connect to the task tracker.
During the startup process, I see the following in the log:
TaskTracker.java (line 695) TaskTracker up at: localhost.localdomain/127.0.0.1:34928
A few lines later, however, I see this:
Retrying connect to server: localhost.localdomain/127.0.0.1:8012. Already tried 9 time(s).
INFO [TASK-TRACKER-INIT] RPC.java (line 321) Server at localhost.localdomain/127.0.0.1:8012 not available yet, Zzzzz...
Those lines are repeated non-stop as long as my cluster is running.
My cassandra.yaml file specifies the box IP (not 0.0.0.0 or localhost) as the listen_address, and the rpc_address is set to 0.0.0.0.
Why is the client attempting to connect to a different port than the log shows the task tracker as using? Is there anywhere these addresses/ports can be specified?
I figured this out. In case anyone else has the same issue, here's what was going on:
Brisk uses the first entry in the Cassandra cluster's seed list to pick the initial jobtracker. One of my nodes had 127.0.0.1 in the seed list. This worked for the Cassandra setup, since all the other nodes in the cluster connected to that box to get the cluster topology, but it didn't work for the jobtracker selection.
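For anyone else hitting this, the fix is to make sure the seed list uses the node's real IP rather than 127.0.0.1; in cassandra.yaml that section looks roughly like this (the IP is a placeholder):
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.11"   # first seed entry is what Brisk uses to pick the initial jobtracker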
Looks like your jobtracker isn't running. What do you see when you run "brisktool jobtracker"?