How to call Python code when a new message is delivered from the Telethon API - telethon

How can I call some Python code when a new message is delivered from the Telethon API? I need to run the code all day so that I can do my processing from Python code.
How do I use this? @client.on(events.NewMessage(chats=channel, incoming=True))
Do I need to run a scheduler to check this?
I am using the history = client(GetHistoryRequest) method.

First Steps - Updates in the documentation greets you with the following code:
import logging
logging.basicConfig(format='[%(levelname) 5s/%(asctime)s] %(name)s: %(message)s',
                    level=logging.WARNING)

from telethon import TelegramClient, events

client = TelegramClient('anon', api_id, api_hash)

@client.on(events.NewMessage)
async def my_event_handler(event):
    if 'hello' in event.raw_text:
        await event.reply('hi!')

client.start()
client.run_until_disconnected()
Note that you can "call" any Python code inside my_event_handler. The snippet also shows how @client.on() is meant to be used (as a decorator). Note there is no need for a scheduler.
I am using history = client(GetHistoryRequest) method.
As a side note, this is the raw API, which is discouraged whenever a friendlier alternative, like client.get_messages, exists.
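For example, a minimal sketch of the friendlier call (the channel entity and the limit are placeholders, not values from the question):
async def dump_recent(client, channel):
    # Fetch the last few messages from the channel with the high-level
    # helper instead of the raw GetHistoryRequest.
    messages = await client.get_messages(channel, limit=10)
    for message in messages:
        print(message.id, message.text)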

Related

JobQueue.run_repeating to run a function without command handler in Telegram

I need to start sending notifications to a TG group; before that, I want to run a function continuously which would query an API and store data in a DB. While this function is running I would want to be able to send notifications if they are available in the DB.
This is my code:
import telegram
from telegram.ext import Updater, CommandHandler, JobQueue

token = "tokenno:token"
bot = telegram.Bot(token=token)

def start(update, context):
    context.bot.send_message(chat_id=update.message.chat_id,
                             text="You will now receive msgs!")

def callback_minute(context):
    chat_id = context.job.context
    # Check in DB and send if new msgs exist
    send_msgs_tg(context, chat_id)

def callback_msgs():
    fetch_msgs()

def main():
    JobQueue.run_repeating(callback_msgs, interval=5, first=1, context=None)
    updater = Updater(token, use_context=True)
    dp = updater.dispatcher
    dp.add_handler(CommandHandler("start", start, pass_job_queue=True))
    updater.start_polling()
    updater.idle()

if __name__ == '__main__':
    main()
This code gives me the error:
TypeError: run_repeating() missing 1 required positional argument: 'callback'
Any help would be greatly appreciated.
There are a few issues with your code, let me try to point them out:
1.
def callback_msgs(): fetch_msgs()
You use callback_msgs as callback for your job. But job callbacks take exactly one argument of type telegram.ext.CallbackContext.
2.
JobQueue.run_repeating(callback_msgs, interval=5, first=1, context=None)
JobQueue is a class. To use run_repeating, which is an instance method, you'll need an instance of that class. In fact the Updater already builds one for you; it's available as updater.job_queue in your case. So the call should look like this:
updater.job_queue.run_repeating(callback_msgs, interval=5, first=1, context=None)
3.
CommandHandler("start", start, pass_job_queue=True)
This is not strictly speaking an issue, but pass_job_queue=True has no effect at all, because you use use_context=True.
Please note that there is a nice tutorial on JobQueue over at the ptb-wiki. There is also an example on how to use it.
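Putting those fixes together, a minimal sketch of the corrected setup (using the v12-style API from the question; fetch_msgs stands in for your own API/DB logic) could look like this:
from telegram.ext import Updater, CommandHandler, CallbackContext

token = "tokenno:token"

def start(update, context):
    context.bot.send_message(chat_id=update.message.chat_id,
                             text="You will now receive msgs!")

def callback_msgs(context: CallbackContext):
    # Job callbacks receive exactly one CallbackContext argument.
    fetch_msgs()  # your own function that queries the API / DB

def main():
    updater = Updater(token, use_context=True)
    updater.dispatcher.add_handler(CommandHandler("start", start))
    # Use the JobQueue instance the Updater already created.
    updater.job_queue.run_repeating(callback_msgs, interval=5, first=1)
    updater.start_polling()
    updater.idle()

if __name__ == '__main__':
    main()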
Disclaimer: I'm currently the maintainer of python-telegram-bot

Creating a REST Handler for any of Splunk's REST endpoints

How do I create a persistent (or any, for that matter) REST handler for any given (built-in) Splunk REST API endpoint? How do I use the PersistentServerConnectionApplication class?
I have gone through https://gist.github.com/LukeMurphey/238004c8976804a8e79570d22721fd99 but can't figure out where to start and how to make one.
There was a great .conf presentation about REST Handlers by James Ervin from a few years ago, https://conf.splunk.com/files/2016/slides/extending-splunks-rest-api-for-fun-and-profit.pdf
Sample code is available from https://github.com/jrervin/splunk-rest-examples
James' echo example is quite straightforward. Make sure you also pay attention to the additions that are necessary in web.conf and restmap.conf (a rough sketch of those follows the handler code below).
import os
import sys

if sys.platform == "win32":
    import msvcrt
    # Binary mode is required for persistent mode on Windows.
    msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)
    msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
    msvcrt.setmode(sys.stderr.fileno(), os.O_BINARY)

from splunk.persistconn.application import PersistentServerConnectionApplication

class EchoHandler(PersistentServerConnectionApplication):
    def __init__(self, command_line, command_arg):
        PersistentServerConnectionApplication.__init__(self)

    def handle(self, in_string):
        return {'payload': in_string,  # Payload of the request.
                'status': 200}         # HTTP status code
Suggest you just get a copy of his app and deploy it, confirm it all works, then modify it for your particular use case.

Adding custom headers to all boto3 requests

I need to add some custom headers to every boto3 request that is sent out. Is there a way to manage the connection itself to add these headers?
For boto2, connection.AWSAuthConnection has a method build_base_http_request which has been helpful. I've yet to find an analogous function within the boto3 documentation though.
This is pretty dated but we encountered the same issue, so I'm posting our solution.
I wanted to add custom headers to boto3 for specific requests.
I found this: https://github.com/boto/boto3/issues/2251, and used the event system to add the header:
import boto3

def _add_header(request, **kwargs):
    request.headers.add_header('x-trace-id', 'trace-trace')
    print(request.headers)  # for debug

some_client = boto3.client(service_name=SERVICE_NAME)
event_system = some_client.meta.events
event_system.register_first('before-sign.EVENT_NAME.*', _add_header)
You can try using a wildcard for all requests:
event_system.register_first('before-sign.*.*', _add_header)
*SERVICE_NAME - you can find all available services here: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/index.html
For more information about registering a function to a specific event, see: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/events.html
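As a purely illustrative, self-contained sketch (the 's3' service and the header value are assumptions, not part of the original answer):
import boto3

def _add_header(request, **kwargs):
    # Attach a custom header just before the request is signed.
    request.headers.add_header('x-trace-id', 'trace-trace')

s3 = boto3.client('s3')
# 'before-sign.s3.*' fires for every S3 operation; use 'before-sign.*.*'
# to cover all services on this client.
s3.meta.events.register_first('before-sign.s3.*', _add_header)
s3.list_buckets()  # the outgoing request now carries the extra header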
The answer from @May Yaari is pretty awesome. As for the concern raised by @arainchi:
This works, there is no way to pass custom data to event handlers, currently we have to do it in a non-pythonic way using global variables/queues :( I have opened issue ticket with Boto3 developers for this exact case
Actually, we could leverage Python's support for returning a function inside a function (a closure) to get around this.
In the case where we want to add a custom value custom_variable to the header, we could do:
import boto3

def _register_callback(custom_variable):
    def _add_header(request, **kwargs):
        request.headers.add_header('header_name_you_want', custom_variable)
    return _add_header

some_client = boto3.client(service_name=SERVICE_NAME)
event_system = some_client.meta.events
event_system.register_first('before-sign.EVENT_NAME.*', _register_callback(custom_variable))
Or, more Pythonically, using a lambda:
import boto3

def _add_header(request, custom_variable):
    request.headers.add_header('header_name_you_want', custom_variable)

some_client = boto3.client(service_name=SERVICE_NAME)
event_system = some_client.meta.events
event_system.register_first('before-sign.EVENT_NAME.*',
                            lambda request, **kwargs: _add_header(request, custom_variable))

Accessing a database using zeep

I am trying to programmatically retrieve information from a database (BRENDA) using Zeep.
The following is the code:
import zeep
import hashlib
wsdl = "https://www.brenda-enzymes.org/soap/brenda.wsdl"
password = hashlib.sha256("xx".encode('utf-8')).hexdigest()
parameters = "xxx," + password + ",ecNumber*{}#organism*{}#".format("2.7.1.2", "Homo sapiens")
client = zeep.Client(wsdl=wsdl)
print(client)
km_string = client.getKmValue(parameters)
However, I get the following error:
AttributeError: 'Client' object has no attribute 'getKmValue'
Could someone help me with this?
The above code works fine with the SOAPpy library in Python 2. However, I couldn't successfully install SOAPpy in Python 3, so I tried Zeep.
The sample code that shows SOAP implementation is available here
We fixed the web service. It should work now. Please have a look at the SOAP documentation on our website.
Not the resolution, but some hints:
1) With Zeep you need to put .service between the client and the name of the method; the correct syntax is client.service.getKmValue(parameters) (take a look at the documentation; see the sketch after these hints).
Anyway, for Zeep getKmValue doesn't exist (though it exists in the WSDL schema and SoapUI sees it).
2) You can also try py-suds, but for some reason I obtain a 403 calling the WSDL.
from suds.client import Client
import hashlib
client = Client("https://www.brenda-enzymes.org/soap/brenda.wsdl")
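Going back to hint 1), a minimal sketch of the question's code with the corrected Zeep call (the credentials are placeholders) would be:
import zeep
import hashlib

wsdl = "https://www.brenda-enzymes.org/soap/brenda.wsdl"
password = hashlib.sha256("xx".encode('utf-8')).hexdigest()
parameters = "xxx," + password + ",ecNumber*{}#organism*{}#".format("2.7.1.2", "Homo sapiens")

client = zeep.Client(wsdl=wsdl)
# Operations live on client.service, not on the client object itself.
km_string = client.service.getKmValue(parameters)
print(km_string)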

What is the Recommended Twisted Sentry/Raven Integration?

There are many integrations for raven, including Python logging. On the one hand, Twisted does not use Python's logging; on the other hand, there is no direct integration for raven in Twisted.
So what is the current best practice for using raven in a Twisted-based setup?
raven's captureException can only be called without arguments if there is an exception active, which is not always the case when a log observer is called. So, instead, pull the exception information out of the Failure that gets logged:
from twisted.python import log
from raven import Client

client = Client(dsn='twisted+http://YOUR_DSN_HERE')

def logToSentry(event):
    if not event.get('isError') or 'failure' not in event:
        return
    f = event['failure']
    client.captureException((f.type, f.value, f.getTracebackObject()))

log.addObserver(logToSentry)
user1252307's answer is a great start, but on the Sentry side you get a relatively unhelpful dictionary and no stack trace.
If you are trying to view and track down unexpected exceptions, try this small change to the log_sentry function:
from twisted.python import log
from raven import Client

client = Client(dsn='twisted+http://YOUR_DSN_HERE')

def log_sentry(dictionary):
    if dictionary.get('isError'):
        if 'failure' in dictionary:
            client.captureException()  # Send the current exception info to Sentry.
        else:
            # Format the dictionary in whatever way you want.
            client.captureMessage(dictionary)

log.addObserver(log_sentry)
There may be a better way to filter non-exception-based error messages, and this might try to emit exception information that doesn't exist when there are failures which aren't exceptions.
from twisted.python import log
from raven import Client

client = Client(dsn='twisted+http://YOUR_DSN_HERE')

def log_sentry(dictionary):
    if dictionary.get('isError'):
        # Format the dictionary in whatever way you want.
        client.captureMessage(dictionary)

log.addObserver(log_sentry)