Pika SelectConnection adapter 'Unresolved attribute reference' - rabbitmq

I have a problem connecting to RabbitMQ using the pika.SelectConnection adapter. I am using Pika 1.1.0 and Python 3.7.9, developing in PyCharm Community.
Below is a snippet of my code showing how I create the connection.
import pika

def on_done():
    connect.channel()
    print("Open Callback")

if __name__ == '__main__':
    account = "user"
    password = "password"
    server = "172.17.0.5"
    credentials = pika.PlainCredentials(account, password)
    parameters = pika.ConnectionParameters(host=server, port=15672, credentials=credentials, socket_timeout=10)
    connect = pika.SelectConnection(parameters, on_open_callback=on_done)
    connect.ioloop.start()
RabbitMQ is running; I have verified the connection and messaging using the pika.BlockingConnection adapter.
My IDE (PyCharm) highlights the start() call as "Unresolved attribute reference 'start' for class 'object'". When I run this code, there is no error, but on the admin web page I don't see that a connection is opened.
Has anybody met a similar problem? Is something wrong with my IDE?
Thank you for your answer.

I just had the same warning, but in the AsyncPublisher example, and I also wanted to get rid of it.
The problem is that the type of the ioloop attribute is not specifically annotated by pika, even though it should be.
If you work with pika, the type of IOLoop you are looking for is:
pika.adapters.select_connection.IOLoop
In your case the easiest fix is to cast your IOLoop and then use that to start it:
from typing import cast

io_loop = cast(pika.adapters.select_connection.IOLoop, connect.ioloop)
io_loop.start()
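For reference, here is a minimal, self-contained sketch of the question's script with that cast applied (a sketch only, assuming Pika 1.x; note that the open callback receives the freshly opened connection, and that AMQP traffic normally uses port 5672, while 15672 only serves the management UI):

from typing import cast

import pika
from pika.adapters.select_connection import IOLoop

def on_channel_open(channel):
    print("Channel opened")

def on_done(connection):
    # Pika calls this with the connection object once it is open.
    print("Open Callback")
    connection.channel(on_open_callback=on_channel_open)

if __name__ == '__main__':
    credentials = pika.PlainCredentials("user", "password")
    # 5672 is the default AMQP port; 15672 is the management UI port.
    parameters = pika.ConnectionParameters(host="172.17.0.5", port=5672,
                                           credentials=credentials, socket_timeout=10)
    connect = pika.SelectConnection(parameters, on_open_callback=on_done)
    # The cast silences the IDE warning without changing runtime behaviour.
    io_loop = cast(IOLoop, connect.ioloop)
    io_loop.start()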
For the more complex AsyncPublisher I did pretty much the same thing:
# (requires: from typing import List, Optional, cast)
def __init__(self, amqp_url: str, queues: List[str], interval: float):
    self._ioloop: Optional[pika.adapters.select_connection.IOLoop] = None
And after the connection is established:
def run(self):
    """Run the example code by connecting and then starting the IOLoop."""
    while not self._stopping:
        self._connection = None
        self._deliveries = []
        self._ack = 0
        self._not_ack = 0
        self._message_number = 0
        try:
            self._connection = self.connect()
            self._ioloop = cast(pika.adapters.select_connection.IOLoop, self._connection.ioloop)
            self._ioloop.start()
            # ... (rest of run() as in the original example)


Falcon - Difference in stream type between unittests and actual API on post

I'm trying to write unit tests for my Falcon API, and I encountered a really weird issue when I tried reading the body I send in the unit tests.
This is my unittest:
class TestDetectionApi(DetectionApiSetUp):
    def test_valid_detection(self):
        headers = {"Content-Type": "application/x-www-form-urlencoded"}
        body = {'test': 'test'}
        detection_result = self.simulate_post(
            '/environments/e6ce2a50-f68f-4a7a-8562-ca50822b805d/detectionEvaluations',
            body=urlencode(body), headers=headers)
        self.assertEqual(detection_result.json, None)
and this is the part in my API that reads the body:
def _get_request_body(request: falcon.Request) -> dict:
    request_stream = request.stream.read()
    request_body = json.loads(request_stream)
    validate(request_body, REQUEST_VALIDATION_SCHEMA)
    return request_body
Now for the weird part: my function for reading the body works without any issue when I run the API, but when I run the unit tests the stream type seems to be different, which affects how it is read.
The stream type when running the API is gunicorn.http.body.Body, and in the unit tests it is wsgiref.validate.InputWrapper.
So when reading the body in the API all I need to do is request.stream.read(), but in the unit tests I need to do request.stream.input.read(), which is pretty annoying since I would have to change my original code to handle both cases, and I don't want to do that.
Is there a way to fix this issue? Thanks!!
It seems like the issue was with how I read it. Instead of using stream I used bounded_stream, which seemed to work; I also removed the headers and just encoded my body.
My unittest:
class TestDetectionApi(DetectionApiSetUp):
    def test_valid_detection(self):
        body = '{"test": "test"}'
        detection_result = self.simulate_post(
            '/environments/e6ce2a50-f68f-4a7a-8562-ca50822b805d/detectionEvaluations',
            body=body.encode())
        self.assertEqual(detection_result.json, None)
How I read it:
def _get_request_body(request: falcon.Request) -> dict:
    request_stream = request.bounded_stream.read()
    request_body = json.loads(request_stream)
    validate(request_body, REQUEST_VALIDATION_SCHEMA)
    return request_body
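As a side note, a possibly simpler alternative (a sketch only, assuming Falcon's built-in JSON media handling is in effect) is to let Falcon deserialize the body via request.media, which behaves the same under gunicorn and under simulate_post:

def _get_request_body(request: falcon.Request) -> dict:
    # Falcon parses the JSON body through its media handlers, so there is
    # no direct dependency on the underlying WSGI stream type.
    request_body = request.media
    validate(request_body, REQUEST_VALIDATION_SCHEMA)  # same schema as above
    return request_body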

web3 event_filter.get_new_entries: ValueError {'code': -32000, 'message': 'filter 0x?????????? not found'}

I am pretty stuck here. I am following the web3 examples and trying to subscribe to smart contract events, but calling event_filter.get_new_entries() raises ValueError {'code': -32000, 'message': 'filter 0x???????????????? not found'}.
I specifically want to listen for the SINGLE/USDC LP's Swap event.
Code
import json
import asyncio
from web3 import Web3

my_wallet_address = "xxx"
my_wallet_address = Web3.toChecksumAddress(my_wallet_address)
node_url = "https://evm.cronos.org/"
single_usdc_contract_address = Web3.toChecksumAddress("0x0fbab8a90cac61b481530aad3a64fe17b322c25d")
single_usdc_contract_abi = json.loads('... ABI json ...')

web3 = Web3(Web3.HTTPProvider(node_url))
contract = web3.eth.contract(address=single_usdc_contract_address, abi=single_usdc_contract_abi)

def handle_event(event):
    print(Web3.toJSON(event))

async def log_loop(event_filter, poll_interval):
    while True:
        for event in event_filter.get_new_entries():
            handle_event(event)
        await asyncio.sleep(poll_interval)

filter = contract.events.Swap.createFilter(fromBlock='latest')
# filter = web3.eth.filter('latest')

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(
        asyncio.gather(
            log_loop(filter, 2)))
finally:
    loop.close()
Someone else fixed this by changing from HTTPProvider to a different RPC setup, which is less than satisfactory. I worked around the problem by calling "createFilter" in each iteration of the while loop (sketched below), but I am not sure I understand the root of the problem. Some suggest the node keeps dropping filters, which would explain why recreating the filter on every loop worked, but I really don't understand the root cause.
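For reference, a sketch of that workaround (recreating the filter inside the loop so a server-side filter ID that the node has dropped is never reused), assuming a recent web3.py v5 and the web3, contract and handle_event objects from above; the from_block cursor is just an illustrative way to avoid replaying events:

async def log_loop(contract, poll_interval):
    # Start from the current head and advance a cursor as events arrive.
    from_block = web3.eth.block_number
    while True:
        # Build a fresh filter on every pass instead of reusing a
        # server-side filter ID that the node may have dropped.
        event_filter = contract.events.Swap.createFilter(fromBlock=from_block)
        for event in event_filter.get_all_entries():
            handle_event(event)
            from_block = max(from_block, event["blockNumber"] + 1)
        await asyncio.sleep(poll_interval)

The run_until_complete call would then pass the contract, e.g. log_loop(contract, 2), instead of a pre-built filter.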
Thanks in advance.
References
web3 doc - events
SO reference
SINGLE/USDC
SINGLE/USDC - CRONOS Explorer where you will find ABI
CRONOS integration doc
Example from cryptomarketpool

Python Telegram Bot ConversationHandler not working with webhook

I want to use a ConversationHandler in my bot, which is using a webhook. The ConversationHandler only runs the function at the entry point; after that it runs neither the state functions nor the fallback function. This ConversationHandler runs fine when the bot is run by polling.
ConversationHandler:
conv_handler = ConversationHandler(
    entry_points=[CommandHandler("start", start)],
    states={
        NEW_PROJECT: [CallbackQueryHandler(project_name)],
        PROJECT_NAME: [MessageHandler(Filters.regex(".*"), store_name_maybe_project_type)],
        PROJECT_TYPE: [CallbackQueryHandler(store_type_maybe_admin)]
    },
    fallbacks=[CommandHandler('cancel', cancel)],
)
All the required functions:
def start(update, context):
    # Gives button of add project
    # We can use for loop to display buttons
    keyboard = [
        [InlineKeyboardButton("Add Project", callback_data="add_project")],
    ]
    reply_markup = InlineKeyboardMarkup(keyboard)
    update.message.reply_text("You have no projects right now.", reply_markup=reply_markup)
    # if existing project then PROJECT or else NEW_PROJECT
    return NEW_PROJECT

def project_name(update, context):
    # asks for project name
    query = update.callback_query
    update.message.reply_text(text="Okay, Please enter your project name:")
    return PROJECT_NAME

def store_name_maybe_project_type(update, context):
    # stores project name and conditionally asks for project type
    print(update.message.text)
    keyboard = [
        [InlineKeyboardButton("Telegram Group", callback_data="group")],
        [InlineKeyboardButton("Telegram Channel", callback_data="channel")]
    ]
    reply_markup = InlineKeyboardMarkup(keyboard)
    update.message.reply_text("What do you want to make?", reply_markup=reply_markup)
    return PROJECT_TYPE

def store_type_maybe_admin(update, context):
    # stores project type and conditionally asks for making admin
    print(update.message.text)
    keyboard = [[InlineKeyboardButton("Done", callback_data="done")]]
    reply_markup = InlineKeyboardMarkup(keyboard)
    update.message.reply_text(f"Make a private {update.message.text} and make this bot the admin", reply_markup=reply_markup)
    return ConversationHandler.END

def cancel(update, context):
    update.message.reply_text("Awww, that's too bad")
    return ConversationHandler.END
This is how I set up the webhook (I think the problem is somewhere here):
@app.route(f"/{TOKEN}", methods=["POST"])
def respond():
    """Run the bot."""
    update = telegram.Update.de_json(request.get_json(force=True), bot)
    dispatcher = setup(bot, update)
    dispatcher.process_update(update)
    return "ok"
The setup function:
def setup(bot, update):
    # Create bot, update queue and dispatcher instances
    dispatcher = Dispatcher(bot, None, workers=0)
    ##### Register handlers here #####
    bot_handlers = initialize_bot(update)
    for handler in bot_handlers:
        dispatcher.add_handler(handler)
    return dispatcher
And then I manually set up the webhook using this route:
@app.route("/setwebhook", methods=["GET", "POST"])
def set_webhook():
    s = bot.setWebhook(f"{URL}{TOKEN}")
    if s:
        return "webhook setup ok"
    else:
        return "webhook setup failed"
The add project button doesn't do anything.
ConversationHandler stores the current state in memory, so it's lost once the conv_handler reaches the end of its lifetime (i.e. the variable is deleted or the process is shut down). Your snippets don't show where you initialize the ConversationHandler, but I have the feeling that you create it anew for every incoming update, and every new instance has no knowledge of the previous one.
I have that feeling because you create a new Dispatcher for every update as well. That's not necessary and in fact I'd strongly advise against it. Not only does it take time to initialize the Dispatcher, which you could save, but also if you're using chat/user/bot_data, the data gets lost every time you create a new instance.
The initialize_bot function is called in setup, where you create the new Dispatcher, which is why my guess is that you create a new ConversationHandler for every update. It also seems odd that the return value of that function depends on the update; the handlers used by your dispatcher should be fixed. Create the Dispatcher and its handlers once and reuse them, as sketched below.
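A minimal sketch of what that could look like (assuming python-telegram-bot v13-style code and the same imports, TOKEN, conv_handler and Flask app as in the question): build the bot and Dispatcher once at module level and only process updates inside the route.

# Built once, at import time (names taken from the question; sketch only).
bot = telegram.Bot(TOKEN)
dispatcher = Dispatcher(bot, None, workers=0)
dispatcher.add_handler(conv_handler)  # the ConversationHandler defined above

@app.route(f"/{TOKEN}", methods=["POST"])
def respond():
    # Reuse the single dispatcher; conversation state now survives
    # between incoming webhook updates (while the process is running).
    update = telegram.Update.de_json(request.get_json(force=True), bot)
    dispatcher.process_update(update)
    return "ok"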
Disclaimer: I'm currently the maintainer of python-telegram-bot

Can we use two MTProto connections in a single app?

Hello, I am trying to run both a "userbot" and a "bot account" in a single app (worker).
These two connections are tbot (the main bot) and ubot (the userbot). For example:
tbot = TelegramClient("myapp", API_KEY, API_HASH)
tbot.start(bot_token=TOKEN)
ubot = TelegramClient(StringSession(STRING_SESSION), API_KEY, API_HASH)
ubot.start()
But the problem is that these two connections can't run simultaneously; both connections quit just after the first one (tbot) starts.
Is there anything I can do here?
Try using Python threading for that:
import threading
# ...
tbot = TelegramClient("myapp", API_KEY, API_HASH)
tbot_thread = threading.Thread(target=tbot.start, kwargs={'bot_token': TOKEN})
tbot_thread.start()
ubot = TelegramClient(StringSession(STRING_SESSION), API_KEY, API_HASH)
ubot_thread = threading.Thread(target=ubot.start)
ubot_thread.start()
That way each client starts in its own thread instead of blocking the main thread. If anything else fails, check the logs.
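As an alternative worth mentioning (a sketch only, distinct from the threading approach above): Telethon clients are asyncio-based, so both clients can also be run on the same event loop. This assumes Telethon v1.x and the same credential placeholders as in the question.

import asyncio
from telethon import TelegramClient
from telethon.sessions import StringSession

async def main():
    # Same placeholders as in the question (API_KEY, API_HASH, TOKEN, STRING_SESSION).
    tbot = TelegramClient("myapp", API_KEY, API_HASH)
    ubot = TelegramClient(StringSession(STRING_SESSION), API_KEY, API_HASH)

    await tbot.start(bot_token=TOKEN)
    await ubot.start()

    # Keep both clients receiving updates until they disconnect.
    await asyncio.gather(
        tbot.run_until_disconnected(),
        ubot.run_until_disconnected(),
    )

asyncio.run(main())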

DB not changing when testing Flask-Security with peewee and pytest

I just got into testing my Flask application with pytest, and it mostly works as expected. Unfortunately the test uses the live DB instead of a mock one. I'm quite sure this has to do with the fact that Flask-Security is using peewee's FlaskDB wrapper (db_wrapper) instead of a "straightforward" database.
Here's some code. This is from the test:
@pytest.fixture
def client():
    db_fd, belavoco_server.app.config['DATABASE'] = { 'name': 'userLogin_TEST.db',
                                                      'engine': 'peewee.SqliteDatabase' }
    belavoco_server.app.config['TESTING'] = True
    client = belavoco_server.app.test_client()
    # this seems not to help at all
    with belavoco_server.app.app_context():
        belavoco_server.users.user_managment.db_wrapper.init_app(belavoco_server.app)
    yield client
    os.close(db_fd)
    os.unlink(belavoco_server.app.config['DATABASE'])
This is some code from my bv_user_model.py
app.config['DATABASE'] = {
    'name': 'userLogin.db',
    'engine': 'peewee.SqliteDatabase',
}
app.config['SECURITY_URL_PREFIX'] = "/users"

# Create a database instance that will manage the connection and
# execute queries
db_wrapper = FlaskDB(app)

class Role(db_wrapper.Model, RoleMixin):
    name = CharField(unique=True, default="standard")
    description = TextField(null=True)

    def __repr__(self):
        return self.name
When performing the test, Flask uses userLogin.db instead of userLogin_TEST.db. I suppose this is because of the db_wrapper in bv_user_model.py, but I did not find a way to change this behaviour. Any help would be greatly appreciated!
The root of the issue seems to be this in bv_user_model:
app.config['DATABASE'] = {
    'name': 'userLogin.db',
    'engine': 'peewee.SqliteDatabase',
}
Since you are using FlaskDB with the app that has the production credentials, it seems like the db_wrapper will "remember" that and not be overridden by your tests.
The most straightforward answer would be to not create the FlaskDB instance directly with your app:
db = FlaskDB()
and then initialize it on your app later on:
from models import db

def create_app():
    app = ...
    app.config["DATABASE"] = ...
    db.init_app(app)
    ...
    return app
This lets you have a separate function like the following, which you can use for testing:
def create_test_app():
    app = ...
    app.config["DATABASE"] = ...test credentials...
    db.init_app(app)
    ...
    return app
When you create your models, use the FlaskDB instance just the same as you already were:
db = FlaskDB()

class Role(db.Model, RoleMixin):
    ...
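For completeness, a sketch of how the pytest fixture from the question could then look (assuming the create_test_app() factory above is importable from the belavoco_server package; the import path is illustrative):

import pytest
from belavoco_server import create_test_app

@pytest.fixture
def client():
    app = create_test_app()  # FlaskDB gets initialized with the test DB here
    app.config['TESTING'] = True
    with app.test_client() as client:
        yield client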