I am using the pika client to create a connection to RabbitMQ for a pub-sub project. A direct call to the client works fine, but when I try to create a channel inside a Celery task I get the following error:
raised unexpected: OSError(9, 'Bad file descriptor')
This is the code for creating the connection:
connection = pika.BlockingConnection(
    pika.ConnectionParameters(
        host=get_secret("RABBIT_MQ_HOST"),
        credentials=pika_credentials.PlainCredentials(
            get_secret("RABBIT_MQ_USERNAME"),
            get_secret("RABBIT_MQ_PASSWORD")
        )
    ))
channel = connection.channel()
I am getting the error at connection.channel().
How can I create the connection in Celery (the broker is also RabbitMQ)?
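One pattern that avoids sharing a broker socket across Celery's forked worker processes is to open the pika connection inside the task body itself (rather than at module import time) and close it when the task finishes. Below is a minimal sketch of that idea; the @shared_task decorator, the task name, and the "events" routing key are placeholders, and get_secret is the same helper used in the snippet above.

import pika
from pika import credentials as pika_credentials
from celery import shared_task

@shared_task
def publish_event(payload):
    # Open the connection inside the task so each worker process gets its
    # own socket instead of reusing a file descriptor created at import time.
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(
            host=get_secret("RABBIT_MQ_HOST"),
            credentials=pika_credentials.PlainCredentials(
                get_secret("RABBIT_MQ_USERNAME"),
                get_secret("RABBIT_MQ_PASSWORD"),
            ),
        )
    )
    try:
        channel = connection.channel()
        channel.basic_publish(exchange="", routing_key="events", body=payload)
    finally:
        connection.close()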
I have an endpoint wired up to subscribe to an Azure Service Bus Topic. Something like this:
[Topic("my-component", "my-topic")]
[HttpPost("[action]")]
public async Task<ActionResult<MyResponse>> DoIt(CloudEvent cloudEvent)
{ ... }
Sometimes the processing fails; it just seems to stop in the middle. From stdout I see this from the sidecar daprd container:
time="2023-02-02T17:13:30.699870594Z" level=error msg="App handler returned an error for message 73b53011f9b54ec4a1563d790fa3db99 on topic blah-topic: error from app channel while sending pub/sub event to app: dial tcp4 127.0.0.1:80: connect: connection refused" app_id=my-app instance=my-app--5deqq11-59465485d4c-n52rv scope=dapr.contrib type=log ver=1.9.5-msft-2
What doesn't make sense is the connection refused. I'm subscribing, not publishing. Connection to what is refused?
It seems like this is happening on long-running requests, so timeouts come to mind, but that doesn't add up with connection refused.
What does that error mean? Is it a red herring or related to the failure of some of my messages? 🤔
I have 35 queues in my application. I have created a single connection object and 35 unique channels to consume data from the dedicated queues.
Sometimes (after running for 6 or 12 hours), I'm not able to receive any messages from RabbitMQ. In the RabbitMQ management portal the consumers are not available.
Exception in the RabbitMQ logs:
closing AMQP connection <0.10985.3> (127.0.0.1:63478 ->
127.0.0.1:5672, vhost: '/', user: 'guest'): client unexpectedly closed TCP connection
Is it fine to have one connection for all the consumers? Or is something wrong?
I am using RabbitMQ 3.6.12 and amqp-client-5.5.0.jar for the Java client.
I have an iOS client that connects to several ActiveMQ topics and queues via the STOMP protocol. When I connect to the server, I send the following message:
2012-10-30 10:19:29,757 [MQ NIO Worker 2] TRACE StompIO
CONNECT
passcode:*****
login:system
2012-10-30 10:19:29,758 [MQ NIO Worker 2] DEBUG ProtocolConverter
2012-10-30 10:19:29,775 [MQ NIO Worker 2] TRACE StompIO
CONNECTED
heart-beat:0,0
session:ID:mbp.local-0123456789
server:ActiveMQ/5.6.0
version:1.0
And then, I subscribe to several topics using the following message:
2012-10-30 10:19:31,028 [MQ NIO Worker 2] TRACE StompIO
SUBSCRIBE
activemq.subscriptionName:user#mail.com-/topic/SPOT.SPOTCODE
activemq.prefetchSize:1
activemq.dispatchAsync:true
destination:/topic/SPOT.SPOTCODE
client-id:1234
activemq.retroactive:true
I'm facing two problems with the ActiveMQ server. Each time I connect, the Number of Consumers column in the web interface gets incremented, so I have just one real consumer but the count is around 50. But the most problematic issue is that when I plug another iOS device into my laptop to test the messaging environment, I get the following error when connecting to ActiveMQ:
WARN | Async error occurred: javax.jms.JMSException: Durable consumer is in use for client: ID:mbp.local-0123456789 and subscriptionName: user#mail.com-/topic/SPOT.SPOTCODE
This seems to indicate that disconnecting from ActiveMQ via STOMP is not working properly, because this log entry appears when the other device is not running the app. I've tried the following things to solve the issue:
Always log off when attempting to subscribe to the topics.
Subscribe
I'm currently using v5.6.0, running the server on my laptop.
If you read the STOMP page on the ActiveMQ site you will notice that client-id and activemq.subscriptionName must match in order to use STOMP durable subscribers. These values should be different for each of your clients, otherwise you will see the same errors because of the name clashes.
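For illustration, here is a minimal sketch of the same idea from Python using the stomp.py client (not the iOS client from the question); the host, port, credentials, and the "ios-device-A" identifier are placeholders. The point is that each device presents its own client-id on CONNECT and its own activemq.subscriptionName on SUBSCRIBE.

import stomp

# Sketch: each device/client must use its own client-id and
# activemq.subscriptionName, otherwise ActiveMQ reports
# "Durable consumer is in use" when a second client connects.
conn = stomp.Connection([('activemq.example.com', 61613)])
conn.connect('system', 'manager', wait=True,
             headers={'client-id': 'ios-device-A'})  # unique per device

conn.subscribe(destination='/topic/SPOT.SPOTCODE',
               id='spot-sub',
               ack='auto',
               headers={
                   # durable subscription name, also unique per device
                   'activemq.subscriptionName': 'ios-device-A-SPOT.SPOTCODE',
               })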
I want to use a comet server written using Java NIO for sending out live updates. When receiving information I want it to scan the data and send tasks to worker threads via RabbitMQ. Ideally I would like a Celery server to sit on the other end of RabbitMQ, managing a pool of worker threads that will handle these tasks.
However, from my understanding, Celery works by sitting on both ends of RabbitMQ: it essentially takes over the role of producer and consumer by being embedded in both the consumer's and the producer's code. Is there a way to set up Celery as I described above? Thanks
Yes, of course!
You can add Custom Message Consumers to a Celery app.
Please refer to Extensions and Bootsteps in the Celery documentation.
Here is part of the example code from the link above:
from celery import Celery
from celery import bootsteps
from kombu import Consumer, Exchange, Queue

my_queue = Queue('custom', Exchange('custom'), 'routing_key')

app = Celery(broker='amqp://')


class MyConsumerStep(bootsteps.ConsumerStep):

    def get_consumers(self, channel):
        return [Consumer(channel,
                         queues=[my_queue],
                         callbacks=[self.handle_message],
                         accept=['json'])]

    def handle_message(self, body, message):
        print('Received message: {0!r}'.format(body))
        message.ack()


app.steps['consumer'].add(MyConsumerStep)
Test it:
python -m celery -A main worker
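To feed this consumer, you can publish a JSON message onto the same queue with kombu. A minimal sketch, assuming the default 'amqp://' broker URL and a throwaway payload:

from kombu import Connection, Exchange, Queue

my_queue = Queue('custom', Exchange('custom'), 'routing_key')

# Publish a JSON payload that MyConsumerStep's handle_message will receive.
with Connection('amqp://') as conn:
    producer = conn.Producer(serializer='json')
    producer.publish({'hello': 'world'},
                     exchange=my_queue.exchange,
                     routing_key='routing_key',
                     declare=[my_queue])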
See also: Using Celery with existing RabbitMQ messages
It is not necessary to use Celery to publish messages. You can publish messages to RabbitMQ or to another broker from your own app and use Celery to consume the tasks.
Celery uses a simple message protocol. You can implement the client side in your application.
If you don't want to implement the client side of the protocol, you can implement a simple HTTP server which accepts requests and makes the appropriate calls. Like this.
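If you do want to hand-publish task messages, the sketch below follows Celery's documented task message protocol (version 2) using pika. The 'celery' default queue, the localhost broker, and the task name 'tasks.add' are assumptions for illustration; the task must be registered on the worker under that name.

import json
import uuid

import pika

task_id = str(uuid.uuid4())
args, kwargs = [2, 2], {}

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='celery', durable=True)  # Celery's default queue

channel.basic_publish(
    exchange='',
    routing_key='celery',
    # Protocol v2 body: (args, kwargs, embed)
    body=json.dumps([args, kwargs,
                     {'callbacks': None, 'errbacks': None,
                      'chain': None, 'chord': None}]),
    properties=pika.BasicProperties(
        content_type='application/json',
        content_encoding='utf-8',
        correlation_id=task_id,
        headers={
            'lang': 'py',
            'task': 'tasks.add',   # must match a task registered on the worker
            'id': task_id,
            'root_id': task_id,
            'parent_id': None,
            'group': None,
            'argsrepr': repr(args),
            'kwargsrepr': repr(kwargs),
        },
    ),
)
connection.close()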
I have a simple client-server app that uses WCF (netTcpBinding). When I launch the server and send messages through the client everything works fine, but when I close the server manually and open it again (without closing the client app at all), the next time the client tries to send a message to the server I get this exception (on the client side):
The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:00:59.9843903'.
If I use basicHttpBinding the problem doesn't occur.
Does anyone know why this problem occurs?
Thanks,
Liran
This is expected behavior. When you close the server, the TCP connection on the server side is closed and you can't call it from the client anymore. Starting the server again will not help. You have to catch the exception on the client, Abort the current proxy, and create and open a new one.
With BasicHttpBinding it works because NetTcpBinding uses a single channel for the whole life of the proxy (the channel is bound to the TCP connection), whereas BasicHttpBinding creates a new one for each call (it reuses an existing HTTP connection or creates a new one if the connection doesn't exist).