I am new to Airflow and trying to set up an Airflow cluster. The issue is that when I try starting the Airflow worker, I get the error:
KeyError: u'No such transport: ampq. Did you mean amqp?'.
Versions:
apache-airflow==1.10.6
celery==4.0.2
kombu==4.0.2
My RabbitMQ instance is up and running, so I am not sure what I am doing wrong. Below is my configuration in the airflow.cfg file:
broker_url = ampq://guest:***@192.168.43.130:5672//
result_backend = db+postgresql://postgres:***@192.168.43.130:5432/postgres
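The error message itself suggests the likely fix: the transport scheme in broker_url is misspelled as ampq. With the scheme corrected, the line would presumably read:

broker_url = amqp://guest:***@192.168.43.130:5672//

with everything else unchanged.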
I want to connect AWS MSK (managed Kafka) with S3 using the Confluent connector on my EC2 server. I tried to configure everything as in the tutorials. When I run connect-standalone or connect-distributed, at first everything goes well and I don't get any errors in the logs, but right after the information about the connection starting, my connector dies instantly without any further output. Has anybody had the same problem?
config/connect-standalone.properties
bootstrap.servers=msk-connection-string
plugin.path=/home/ubuntu/connectors/confluentinc-kafka-connect-s3
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
offset.storage.file.filename=/tmp/connect.offsets
connector.properties
connector.class=io.confluent.connect.s3.S3SinkConnector
format.class=io.confluent.connect.s3.format.bytearray.ByteArrayFormat
flush.size=1
topics=SomeTopic
s3.bucket.name=bucket-name-here
s3.region=us-west-2
s3.part.size=5242880
aws.access.key.id=****
aws.secret.access.key=****
behavior.on.null.values=ignore
storage.class=io.confluent.connect.s3.storage.S3Storage
topics.dir=../topics
store.url=http://bucket-name.s3-website-Region.amazonaws.com
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
logs:
[2021-08-20 06:32:35,954] INFO Kafka version: 2.7.0 (org.apache.kafka.common.utils.AppInfoParser:119)
[2021-08-20 06:32:35,954] INFO Kafka commitId: 448719dc99a19793 (org.apache.kafka.common.utils.AppInfoParser:120)
[2021-08-20 06:32:35,954] INFO Kafka startTimeMs: 1629441155953 (org.apache.kafka.common.utils.AppInfoParser:121)
Killed
Please help!
MSK requires a TLS connection.
After adding a few lines of SSL configuration to config/connect-standalone.properties:
producer.security.protocol=SSL
consumer.security.protocol=SSL
security.protocol=SSL
ssl.protocol=TLS
ssl.truststore.location=/your/path/to/truststore/kafka.client.truststore.jks
It starts working properly!
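For reference, the client truststore used above can be created the way the AWS MSK client-setup docs suggest, by copying the JVM's default cacerts file (the JDK path varies per machine and is a placeholder here):

cp /usr/lib/jvm/<your-jdk>/jre/lib/security/cacerts /your/path/to/truststore/kafka.client.truststore.jks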
I am trying to set up a Logstash configuration to push lines from a file to RabbitMQ. I installed both Logstash 2.1.1 and RabbitMQ 3.6.0 on my local machine to test it. The output configuration is:
output {
  rabbitmq {
    exchange => "test_exchange"
    key => "test"
    exchange_type => "direct"
    host => "127.0.0.1"
    port => "15672"
    user => "logstash"
    password => "logstashpassword"
  }
}
But when I now start Logstash, it fails to start up with the following error (the output is only shown when Logstash is started in debug mode):
Worker threads expected: 4, worker threads started: 4 {:level=>:info, :file=>"logstash/pipeline.rb", :line=>"186", :method=>"start_filters"}
Connecting to RabbitMQ. Settings: {:vhost=>"/", :host=>"127.0.0.1", :port=>15672, :user=>"logstash", :automatic_recovery=>true, :pass=>"logstashpassword", :timeout=>0, :heartbeat=>0} {:level=>:debug, :file=>"logstash/plugin_mixins/rabbitmq_connection.rb", :line=>"135", :method=>"connect"}
The error reported is:
com.rabbitmq.utility.BlockingCell.get(com/rabbitmq/utility/BlockingCell.java:77)
com.rabbitmq.utility.BlockingCell.uninterruptibleGet(com/rabbitmq/utility/BlockingCell.java:111)
com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(com/rabbitmq/utility/BlockingValueOrException.java:37)
com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(com/rabbitmq/client/impl/AMQChannel.java:367)
com.rabbitmq.client.impl.AMQConnection.start(com/rabbitmq/client/impl/AMQConnection.java:293)
com.rabbitmq.client.ConnectionFactory.newConnection(com/rabbitmq/client/ConnectionFactory.java:621)
com.rabbitmq.client.ConnectionFactory.newConnection(com/rabbitmq/client/ConnectionFactory.java:648)
java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:606)
RUBY.new_connection_impl(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:505)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:281)
RUBY.converting_rjc_exceptions_to_ruby(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:467)
RUBY.new_connection_impl(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:500)
RUBY.initialize(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:136)
RUBY.connect(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:109)
RUBY.connect(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare.rb:20)
RUBY.connect(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-mixin-rabbitmq_connection-2.2.0-java/lib/logstash/plugin_mixins/rabbitmq_connection.rb:137)
RUBY.connect!(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-mixin-rabbitmq_connection-2.2.0-java/lib/logstash/plugin_mixins/rabbitmq_connection.rb:94)
RUBY.register(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-rabbitmq-3.0.6-java/lib/logstash/outputs/rabbitmq.rb:40)
org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)
RUBY.start_outputs(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/pipeline.rb:192)
RUBY.run(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/pipeline.rb:102)
RUBY.execute(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/agent.rb:165)
RUBY.run(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/runner.rb:90)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:281)
RUBY.run(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/runner.rb:95)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:281)
RUBY.initialize(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/task.rb:24)
java.lang.Thread.run(java/lang/Thread.java:745)
And then Logstash "crashes", i.e. stops. Does anyone know whether this has to do with the Logstash config, or is it a problem with the RabbitMQ setup?
Thanks in advance.
Daniel
Found my error: the port for RabbitMQ has to be 5672, not 15672 (15672 is the port of the management web UI).
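With the port corrected, the output block from the question would presumably look like this (all other settings unchanged):

output {
  rabbitmq {
    exchange => "test_exchange"
    key => "test"
    exchange_type => "direct"
    host => "127.0.0.1"
    port => "5672"
    user => "logstash"
    password => "logstashpassword"
  }
}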
I have a Python program connecting to a RabbitMQ server. When this program starts, it connects well. But when the RabbitMQ server restarts, my program cannot reconnect to it, and the only error is "Socket closed" (produced by kombu), which is not informative.
I want to know the detailed info about this connection failure. On the server side there is nothing useful in the RabbitMQ log file either; it just says "connection failed" with no reason given.
I tried the trace plugin (https://www.rabbitmq.com/firehose.html), and found there was no trace info published to the amq.rabbitmq.trace exchange when the connection failure happened. I enabled the plugin with:
rabbitmq-plugins enable rabbitmq_tracing
systemctl restart rabbitmq-server
rabbitmqctl trace_on
and then I wrote a client to get messages from the amq.rabbitmq.trace exchange:
#!/usr/bin/env python
from kombu.connection import BrokerConnection
from kombu.messaging import Exchange, Queue, Consumer, Producer

def on_message(body, message):
    # kombu invokes registered callbacks with (body, message)
    print("RECEIVED MESSAGE: %r" % (body, ))
    message.ack()

def main():
    conn = BrokerConnection('amqp://admin:pass@localhost:5672//')
    channel = conn.channel()
    queue = Queue('debug', channel=channel, durable=False)
    queue.bind_to(exchange='amq.rabbitmq.trace',
                  routing_key='publish.amq.rabbitmq.trace')
    consumer = Consumer(channel, queue)
    consumer.register_callback(on_message)
    consumer.consume()
    while True:
        conn.drain_events()

if __name__ == '__main__':
    main()
I also tried to get some debug logs from the RabbitMQ server. I reconfigured rabbitmq.config according to https://www.rabbitmq.com/configure.html, and set log_levels to
{log_levels, [{connection, info}]}
but as a result the RabbitMQ server failed to start. It seems the official doc does not apply to my version; my RabbitMQ server version is 3.3.5. However,
{log_levels, [connection,debug,info,error]}
or
{log_levels, [connection,debug]}
works, but with this there is no DEBUG info showing in the logs, and I don't know whether that is because the log_levels configuration is not taking effect or because no DEBUG entries are produced at all.
I know that this answer comes massively late, but for future readers, this worked for me:
[
  {rabbit,
    [
      {log_levels, [{connection, debug}, {channel, debug}]}
    ]
  }
].
Basically, you just need to wrap the parameters you want to set in whichever module/plugin they belong to.
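As for the original problem of the program not reconnecting, kombu can also retry the connection itself instead of dying on "Socket closed". A minimal sketch using kombu's ensure_connection (the retry numbers are illustrative, not from the question):

from kombu import Connection

conn = Connection('amqp://admin:pass@localhost:5672//')

# Retry the (re)connect with an increasing delay instead of failing
# on the first "Socket closed".
conn.ensure_connection(max_retries=10,    # give up after 10 attempts
                       interval_start=1,  # wait 1s before the first retry
                       interval_step=2,   # add 2s to the delay per attempt
                       interval_max=30)   # cap the delay at 30s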
I'm very new to RabbitMQ. I installed rabbitmq-server on one EC2 instance, and I want to create a consumer on another EC2 instance.
But I'm getting this error:
socket.gaierror: [Errno -2] Name or service not known
That's the node status:
ubuntu#ip-10-147-xxx-xxx:~$ sudo rabbitmq-server restart
ERROR: node with name "rabbit" already running on "ip-10-147-xxx-xxx"
DIAGNOSTICS
===========
nodes in question: ['rabbit#ip-10-147-xxx-xxx']
hosts, their running nodes and ports:
- ip-10-147-xxx-xxx: [{rabbit,46074},{rabbitmqprelaunch4603,51638}]
current node details:
- node name: 'rabbitmqprelaunch4603#ip-10-147-xxx-xxx'
- home dir: /var/lib/rabbitmq
- cookie hash: Gsnt2qHd7wWDEOAOFby=
And that's the consumer code:
import pika

cred = pika.PlainCredentials('guest', 'guest')
conn_params = pika.ConnectionParameters('10-147-xxx-xxx', credentials=cred)
conn_broker = pika.BlockingConnection(conn_params)
channel = conn_broker.channel()
channel.exchange_declare(exchange='hello-exchange',
                         type='direct',
                         passive=False,
                         durable=True,
                         auto_delete=False)
channel.queue_declare(queue='hello-queue')
channel.queue_bind(queue='hello-queue',
                   exchange='hello-exchange',
                   routing_key='hola')

def msg_consumer(channel, method, header, body):
    # Acknowledge every delivery; stop consuming on a "quit" message.
    channel.basic_ack(delivery_tag=method.delivery_tag)
    if body == 'quit':
        channel.basic_cancel(consumer_tag='hello-consumer')
        channel.stop_consuming()
    else:
        print(body)
    return

channel.basic_consume(msg_consumer, queue='hello-queue',
                      consumer_tag='hello-consumer')
channel.start_consuming()
You should check that the security group allows access to the RabbitMQ port. Also, it seems that you are not using RabbitMQ's default port (5672), so the port should be included in your connection parameters.
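In addition, the gaierror is a name-resolution failure, and the host string in the code uses dashes rather than dots. A minimal sketch of the connection setup, assuming the broker is reachable at the dotted form of that private IP (the xxx parts are placeholders from the question):

import pika

cred = pika.PlainCredentials('guest', 'guest')
# Use a resolvable host (dotted private IP or the instance's DNS name),
# and pass the port explicitly if it is not the default 5672.
conn_params = pika.ConnectionParameters(host='10.147.xxx.xxx',
                                        port=5672,
                                        credentials=cred)
conn_broker = pika.BlockingConnection(conn_params)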
Hey guys, I am new to Celery and I am working on periodic task scheduling. I have configured my celeryconfig.py as follows:
from datetime import timedelta

BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = "redis"
CELERY_REDIS_HOST = "localhost"
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 0
CELERY_IMPORTS = ("mytasks",)

CELERYBEAT_SCHEDULE = {
    'runs-every-60-seconds': {
        'task': 'mytasks.add',
        'schedule': timedelta(seconds=60),
        'args': (16, 16),
    },
}
and mytasks.py as follows:
from celery import Celery

celery = Celery("tasks",
                broker='redis://localhost:6379/0',
                backend='redis')

@celery.task
def add(x, y):
    return x + y

@celery.task
def mul(x, y):
    return x * y
When I run celery beat -s celerybeat-schedule, I get:
Configuration ->
. broker -> redis://localhost:6379/0
. loader -> celery.loaders.default.Loader
. scheduler -> celery.beat.PersistentScheduler
. db -> celerybeat-schedule
. logfile -> [stderr]#INFO
. maxinterval -> now (0s)
[2012-08-28 12:27:17,825: INFO/MainProcess] Celerybeat: Starting...
[2012-08-28 12:28:00,041: INFO/MainProcess] Scheduler: Sending due task mytasks.add
[2012-08-28 12:29:00,057: INFO/MainProcess] Scheduler: Sending due task mytasks.add
[2012-08-28 12:30:00,064: INFO/MainProcess] Scheduler: Sending due task mytasks.add
[2012-08-28 12:31:00,097: INFO/MainProcess] Scheduler: Sending due task mytasks.add
Now that the task is being sent with the arguments (16, 16), how can I get the result of this add(x, y) function?
I'm not sure I quite understand what you have asked, but from what I can tell, your issue may be one of the following:
1) Are you running celeryd (the worker daemon)? If not, did you start a celery worker in a terminal? celery beat is a task scheduler, not a worker: it only schedules the tasks, i.e. places them on a queue for a worker to eventually consume.
2) How did you plan on viewing the results? Are they being saved somewhere? Since you have set your result backend to redis, the results are at least temporarily stored in the Redis result backend; see the sketch below for one way to retrieve one.
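For illustration, a rough sketch of both steps, using the names from the question (the worker invocation is the generic Celery 3.x form, and the task id below is a placeholder):

Start a worker in another terminal:

celery worker --app=mytasks -l info

Then the return value can be fetched through the result backend:

from mytasks import add, celery

# Calling the task by hand returns an AsyncResult handle.
result = add.delay(16, 16)
print(result.get(timeout=10))    # blocks until a worker has run it -> 32

# A beat-scheduled run gives you no handle in your code, but its result is
# still written to the backend and can be looked up by task id:
print(celery.AsyncResult('the-task-id').get())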