Telethon: A wait of 16480 seconds is required (caused by ResolveUsernameRequest)

I'm trying to use Telethon to send messages to Telegram groups. After running for some time, it returns:
A wait of 16480 seconds is required (caused by ResolveUsernameRequest).
The code is:
async def main():
    print(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time())))
    config = configparser.ConfigParser()
    config.read("seetings.ini", encoding='utf-8')
    message = config['Customer']['message']
    internal = config['Customer']['internal']
    count = 0
    excel_data = pandas.read_excel('tg_groups.xlsx', sheet_name='Groups')
    for column in excel_data['GroupUsername'].tolist():
        try:
            if str(excel_data['GroupUsername'][count]) == 'None':
                count += 1
                continue
            else:
                chat = await client.get_input_entity(str(excel_data['GroupUsername'][count]))
                await client.send_message(entity=chat, message=message)
        except Exception as e:
            print(e)
            time.sleep(int(internal))
            count = count + 1
            continue
        time.sleep(int(internal))
        count = count + 1
if __name__ == '__main__':
    if proxytype == 'HTTP':
        print('HTTP')
        client = TelegramClient('phone' + phone, api_id, api_hash, proxy=(socks.HTTP, 'localhost', int(proxyport))).start()
    if proxytype == 'socks5':
        print('SOCKS5')
        client = TelegramClient('phone' + phone, api_id, api_hash, proxy=(socks.SOCKS5, 'localhost', int(proxyport))).start()
    myself = client.get_me()
    print(myself)
    freqm = config['Customer']['freq']
    print(int(freqm))
    while True:
        with client:
            client.loop.run_until_complete(main())
        time.sleep(int(freqm))
From the 'Entity' guide, it says the get_input_entity method looks the user up in the session file cache, so why does it still call ResolveUsernameRequest to get the user info? Is there anything I missed, or does the session file not keep the user info cache?
Thanks for any advice.
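In case it helps, my current understanding and workaround: the session file only caches entities the client has already encountered (from dialogs, messages, or earlier lookups), so a username string that was never seen still triggers ResolveUsernameRequest, which Telegram rate-limits heavily. A minimal sketch of handling the flood wait with Telethon's documented FloodWaitError (the helper name is mine):

import asyncio
from telethon import errors

async def send_with_flood_handling(client, username, message):
    # get_input_entity() only answers from the session cache if this
    # entity was seen before; unknown usernames fall back to a
    # rate-limited ResolveUsernameRequest.
    try:
        chat = await client.get_input_entity(username)
    except errors.FloodWaitError as e:
        # Telethon reports how long Telegram wants us to wait.
        print(f'Flood wait: sleeping {e.seconds} seconds')
        await asyncio.sleep(e.seconds)
        chat = await client.get_input_entity(username)
    await client.send_message(entity=chat, message=message)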

Related

How to enable TLS on a socket created with asyncio.start_server

I created a server in Python using asyncio.start_server, but what I want is to enable TLS for this server after its creation. I know it's possible to enable TLS using the ssl parameter when creating the server, but I want to do it later in the code.
I saw How do I enable TLS on an already connected Python asyncio stream?, but it doesn't help me understand what to do.
Thank you for your kind answer and your time reading this.
The code for the server:
# ..init some parameters for the server

async def start_tls_connection(client_reader: asyncio.StreamReader, client_writer: asyncio.StreamWriter):
    global ID
    global msgList
    global onlineList
    global socket_server
    data = await PStream.read_from_stream(streamObject=client_reader, debugMessage='Got message from client :')
    if data == None:
        return -1  # Handshake failure
    # before proceeding we must upgrade the socket of the server to one that enables tls :
    context = get_context()
    logging.debug('socket before = ' + str(socket_server.sockets[0]._sock))
    logging.debug(str(context))
    socket_server.sockets[0]._sock = context.wrap_socket(socket_server.sockets[0]._sock)
    logging.debug('socket after = ' + str(ssl_sock))
    # upgrade socket here
    data = await PStream.write_from_stream(streamObject=client_writer,
                                           msg='<proceed xmlns="urn:ietf:params:xml:ns:xmpp-tls"/>',
                                           debugMessage='Answer sent from server :')
    if data == '':
        return -1  # Handshake failure
    # waiting for client to send a new stream header
    data = await PStream.read_from_stream(streamObject=client_reader, debugMessage='Got message from client :')
    if data == None:
        return -1  # Handshake failure

# ... other methods for the handshake

async def on_connect_client(client_reader: asyncio.StreamReader, client_writer: asyncio.StreamWriter):
    logging.debug("START OF HANDSHAKE\nFIRST STEP: STREAM SESSION")
    res = await stream_init(client_reader, client_writer)
    if res == -1:
        return res
    logging.debug("START OF TLS HANDSHAKE")
    res = await start_tls_connection(client_reader, client_writer)
    if res == -1:
        return res
    logging.debug("START OF AUTHENTIFICATION")
    res = await handshake_auth(client_reader, client_writer)
    if res == -1:
        return res
    logging.debug("START OF ROOSTER PRESENCE")
    res = await handshake_rooster_presence(client_reader, client_writer)
    if res == -1:
        return res
    logging.debug("Waiting for request...")
    while True:
        data = await PStream.read_from_stream(streamObject=client_reader, debugMessage='Got message from client :')
        if data == None:
            break

async def server_start():
    global socket_server
    socket_server = await asyncio.start_server(on_connect_client, IP, PORT)  # ssl=ssl_context not here yet
    logging.debug("Server just started")
    async with socket_server:
        await asyncio.gather(socket_server.serve_forever())

if __name__ == '__main__':
    try:
        asyncio.run(server_start())
    except Exception as e:
        print("Exception: ", str(e))
The line where
socket_server.sockets[0]._sock = context.wrap_socket(socket_server.sockets[0]._sock)
is written is where I have a problem: I don't know how to enable TLS on a server that asyncio.start_server has already created.
The get_context() method works fine, so don't worry about it.
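For what it's worth, a minimal sketch of a direction that avoids touching the listener's socket entirely, assuming Python 3.11+ (where asyncio added StreamWriter.start_tls()): upgrade each accepted connection inside its handler after the STARTTLS exchange. The certificate paths are placeholders:

import asyncio
import ssl

async def on_connect_client(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    # ... plaintext negotiation: read the client's starttls request here ...
    writer.write(b'<proceed xmlns="urn:ietf:params:xml:ns:xmpp-tls"/>')
    await writer.drain()

    # Server-side TLS context (what get_context() would return).
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain('server.crt', 'server.key')  # placeholder paths

    # Python 3.11+: upgrade this one connection in place; the listening
    # server created by asyncio.start_server() stays as it is.
    await writer.start_tls(context)

    # From here on, reader/writer transparently speak TLS.
    data = await reader.read(1024)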

Having trouble running multiple functions in Asyncio

I'm a novice programmer looking to build a script that reads a list of leads from Google Sheets and then messages them on Telegram. I want to separate the first and second messages by three days; that's why I'm separating the methods.
import asyncio
from telethon import TelegramClient
from telethon.errors.rpcerrorlist import SessionPasswordNeededError
import logging
from async_class import AsyncClass, AsyncObject, task, link
from sheetdata import *

logging.basicConfig(format='[%(levelname) 5s/%(asctime)s] %(name)s: %(message)s',
                    level=logging.WARNING)

api_id = id
api_hash = 'hash'
phone = 'phone'
username = 'user'

client = TelegramClient(username, api_id, api_hash)

# already been touched once
second_touch_array = []
# touched twice
third_touch_array = []

async def messageCount(userid):
    count = 0
    async for message in client.iter_messages(userid):
        count += 1
    yield count

async def firstMessage():
    # clear prospects from array and read data from the google sheet
    clearProspects()
    readData(sheet)
    # loop through prospects and send the first message
    for user in prospect_array:
        # check if we already messaged the prospect; if we haven't, execute function
        if (messageCount(user.id) == 0):
            await client.send_message(user.id, 'Hi')
            second_touch_array.append(prospect(user.name, user.company, user.id))
            print("First Message Sent!")
        else:
            print("Already messaged!")

async def secondMessage():
    for user in second_touch_array:
        if (messageCount(user.id) == 1):
            await client.send_message(user.id, 'Hello')
            third_touch_array.append(prospect(user.name, user.company, user.id))
            print("Second Message Sent!")
        else:
            print("Prospect has already replied!")

async def main():
    # Getting information about yourself
    me = await client.get_me()
    await firstMessage()
    await secondMessage()
    for user in second_touch_array:
        print(user.name, user.company, user.id)

with client:
    client.loop.run_until_complete(main())
Anyway, when I run my code I successfully get the "Already messaged!" print statement in my terminal from the firstMessage function.
This is good: it detects that I've already messaged the one user on my Google Sheets list. However, my second function isn't being called at all: I'm not getting any print statement, and every time I try to print the contents of the second array nothing happens.
If you have any advice it would be greatly appreciated :)
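One thing worth checking, sketched below: messageCount is an async generator, so messageCount(user.id) returns a generator object, and a comparison like messageCount(user.id) == 0 is always False. That would explain why only the else branch of firstMessage ever runs and why second_touch_array stays empty. A hedged fix is to make it a plain coroutine and await it; Telethon's get_messages(..., limit=0) returns a list whose .total attribute carries the full count without iterating:

async def message_count(user_id):
    # A plain coroutine (no yield), so it returns an int once awaited.
    result = await client.get_messages(user_id, limit=0)
    return result.total

async def firstMessage():
    clearProspects()
    readData(sheet)
    for user in prospect_array:
        # Note the await: comparing the bare coroutine object is always False.
        if await message_count(user.id) == 0:
            await client.send_message(user.id, 'Hi')
            second_touch_array.append(prospect(user.name, user.company, user.id))
            print("First Message Sent!")
        else:
            print("Already messaged!")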

How can I create reliable flask-SQLAlchemy interactions with server-sent events?

I have a flask app that is functioning to expectations, and I am now trying to add a message notification section to my page. The difficulty I am having is that the database changes I am trying to rely upon do not seem to be updating in a timely fashion.
The HTML code is elementary:
<ul id="out" cols="85" rows="14"></ul><br><br>
<script type="text/javascript">
    var ul = document.getElementById("out");
    var eventSource = new EventSource("/stream_game_channel");
    eventSource.onmessage = function(e) {
        ul.innerHTML += e.data + '<br>';
    }
</script>
Here is the msg write code that the second user is executing. I know the code block is run because the redis trigger is properly invoked:
msg_join = Messages(game_id=game_id[0],
                    type="gameStart",
                    msg_from=current_user.username,
                    msg_to="Everyone",
                    message=f'{current_user.username} has requested to join.')
db.session.add(msg_join)
db.session.commit()
channel = str(game_id[0]).zfill(5) + 'startGame'
session['channel'] = channel
date_time = datetime.utcnow().strftime("%Y/%m/%d %H:%M:%S")
redisChannel.set(channel, date_time)
Here is the Flask stream code, which is correctly triggered by a new Redis time; but when I pull the list of messages, the new message that the second user has added is not yet accessible:
@games.route('/stream_game_channel')
def stream_game_channel():
    @stream_with_context
    def eventStream():
        channel = session.get('channel')
        game_id = int(left(channel, 5))
        cnt = 0
        while cnt < 1000:
            print(f'cnt = 0 process running from: {current_user.username}')
            time.sleep(1)
            ntime = redisChannel.get(channel)
            if cnt == 0:
                msgs = db.session.query(Messages).filter(Messages.game_id == game_id)
                msg_list = [i.message for i in msgs]
                cnt += 1
                ltime = ntime
                lmsg_list = msg_list
                for i in msg_list:
                    yield "data: {}\n\n".format(i)
            elif ntime != ltime:
                print(f'cnt > 0 process running from: {current_user.username}')
                time.sleep(3)
                msgs = db.session.query(Messages).filter(Messages.game_id == game_id)
                msg_list = [i.message for i in msgs]
                # new_messages = ...  # need to write this code still
                ltime = ntime
                cnt += 1
                yield "data: {}\n\n".format(msg_list[len(msg_list) - len(lmsg_list)])
    return Response(eventStream(), mimetype="text/event-stream")
The symptom I am running into is that msg_list stays exactly the same length (i.e. the newly pushed message does not appear in the query results when I expect it to). Strangely, the second user's session appears to see this information, because its stream correctly reflects the addition.
I am using an Amazon RDS MySQL database.
The solution was to call db.session.commit() before my db.session.query(Messages).filter(...), even when no writes were pending. This forced a fresh read from the other user's session, and my code then reacted to the change in message-list length properly.
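For reference, a minimal sketch of where that commit goes in the streaming loop (names like redisChannel, channel, game_id, and Messages come from the code above; the key point is that the commit ends the session's current transaction, so under MySQL's default REPEATABLE READ isolation the next query opens a fresh snapshot that includes other users' committed rows):

def eventStream():
    ltime = None
    while True:
        time.sleep(1)
        ntime = redisChannel.get(channel)
        if ntime != ltime:
            # End the current transaction even though nothing was written,
            # so the next query sees rows committed by other sessions.
            db.session.commit()
            msgs = db.session.query(Messages).filter(Messages.game_id == game_id)
            for m in msgs:
                yield "data: {}\n\n".format(m.message)
            ltime = ntime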

How to get a channel's member count with the Telegram API

I want to get a channel's member count, but I don't know which method I should use.
I am not an admin in that channel; I just want to get the count.
EDIT: I am using the main Telegram API, not the Telegram Bot API.
You can use the getChatMembersCount method.
Use this method to get the number of members in a chat.
It worked for me :)
from telethon import TelegramClient, sync
from telethon.tl.functions.channels import GetFullChannelRequest

api_id = API ID
api_hash = 'API HASH'

client = TelegramClient('session_name', api_id, api_hash)
client.start()
if (client.is_user_authorized() == False):
    phone_number = 'PHONE NUMBER'
    client.send_code_request(phone_number)
    myself = client.sign_in(phone_number, input('Enter code: '))

channel = client.get_entity('CHANNEL LINK')
members = client.get_participants(channel)
print(len(members))
It is also possible to do it through GetFullChannelRequest in Telethon:
async def main():
    async with client_to_manage as client:
        full_info = await client(GetFullChannelRequest(channel="moscowproc"))
        print(f"count: {full_info.full_chat.participants_count}")

if __name__ == '__main__':
    client_to_manage.loop.run_until_complete(main())
or, written without async/await:
def main():
    with client_to_manage as client:
        full_info = client.loop.run_until_complete(client(GetFullChannelRequest(channel="moscowproc")))
        print(f"count: {full_info.full_chat.participants_count}")

if __name__ == '__main__':
    main()
As mentioned above, this is also feasible with the Bot API's getChatMembersCount method. You can curl it or use Python to query the needed URL.
In Python the code can look like this:
import json
from urllib.request import urlopen

url = "https://api.telegram.org/bot<your-bot-api-token>/getChatMembersCount?chat_id=@<channel-name>"
with urlopen(url) as f:
    resp = json.load(f)
print(resp['result'])
where <your-bot-api-token> is the token provided by BotFather, and <channel-name> is the name of the channel whose subscriber count you want to know (everything without the "<>", of course).
To check it first, simply curl it:
curl "https://api.telegram.org/bot<your-bot-api-token>/getChatMembersCount?chat_id=@<channel-name>"

Google Cloud Pub/Sub data lost

I'm experiencing a problem with GCP Pub/Sub where a small percentage of data is lost when publishing thousands of messages in a couple of seconds.
I'm logging both the message_id from Pub/Sub and a session_id unique to each message, on both the publishing end and the receiving end, and the result I'm seeing is that some messages on the receiving end have the same session_id but different message_ids. Also, some messages are missing.
For example, in one test I sent 5,000 messages to Pub/Sub and exactly 5,000 messages were received, yet 8 of the original messages were missing (their place taken by duplicates). The log for the lost messages looks like this:
MISSING sessionId: 731 (missing in log from pull request, but present in log from Flask API)
messageId FOUND: messageId: 108562396466545
API: 200 **** sessionId: 731, messageId: 108562396466545 ****** (log from Flask API)
Pubsub: sessionId: 730, messageId: 108562396466545 (log from pull request)
And the duplicates look like:
======= Duplicates FOUND on sessionId: 730=======
sessionId: 730, messageId:108562396466545
sessionId: 730, messageId:108561339282318
(both are logs from pull request)
All missing data and duplicates look like this.
From the above example, it is clear that some messages have taken the message_id of another message and have been sent twice with two different message_ids.
I wonder if anyone can help me figure out what is going on? Thanks in advance.
Code
I have an API sending messages to Pub/Sub, which looks like this:
from flask import Flask, request, jsonify, render_template
from flask_cors import CORS, cross_origin
import simplejson as json
from google.cloud import pubsub
from functools import wraps
import re
import json

app = Flask(__name__)
ps = pubsub.Client()
...

@app.route('/publish', methods=['POST'])
@cross_origin()
@json_validator
def publish_test_topic():
    pubsub_topic = 'test_topic'
    data = request.data
    topic = ps.topic(pubsub_topic)
    event = json.loads(data)
    messageId = topic.publish(data)
    return '200 **** sessionId: ' + str(event["sessionId"]) + ", messageId:" + messageId + " ******"
And this is the code I used to read from pubsub:
from google.cloud import pubsub
import re
import json

ps = pubsub.Client()
topic = ps.topic('test-xiu')
sub = topic.subscription('TEST-xiu')
max_messages = 1
stop = False
messages = []

class Message(object):
    """docstring for Message."""
    def __init__(self, sessionId, messageId):
        super(Message, self).__init__()
        self.sessionId = sessionId
        self.messageId = messageId

def pull_all():
    while stop == False:
        m = sub.pull(max_messages=max_messages, return_immediately=False)
        for data in m:
            ack_id = data[0]
            message = data[1]
            messageId = message.message_id
            data = message.data
            event = json.loads(data)
            sessionId = str(event["sessionId"])
            messages.append(Message(sessionId=sessionId, messageId=messageId))
            print '200 **** sessionId: ' + sessionId + ", messageId:" + messageId + " ******"
            sub.acknowledge(ack_ids=[ack_id])

pull_all()
For generating session_ids, sending requests, and logging responses from the API:
// generate trackable sessionId
var sessionId = 0
var increment_session_id = function () {
    sessionId++;
    return sessionId;
}

var generate_data = function () {
    var data = {};
    // data.sessionId = faker.random.uuid();
    data.sessionId = increment_session_id();
    data.user = get_rand(userList);
    data.device = get_rand(deviceList);
    data.visitTime = new Date;
    data.location = get_rand(locationList);
    data.content = get_rand(contentList);
    return data;
}

var sendData = function (url, payload) {
    var request = $.ajax({
        url: url,
        contentType: 'application/json',
        method: 'POST',
        data: JSON.stringify(payload),
        error: function (xhr, status, errorThrown) {
            console.log(xhr, status, errorThrown);
            $('.result').prepend("<pre id='json'>" + JSON.stringify(xhr, null, 2) + "</pre>")
            $('.result').prepend("<div>errorThrown: " + errorThrown + "</div>")
            $('.result').prepend("<div>======FAIL=======</div><div>status: " + status + "</div>")
        }
    }).done(function (xhr) {
        console.log(xhr);
        $('.result').prepend("<div>======SUCCESS=======</div><pre id='json'>" + JSON.stringify(payload, null, 2) + "</pre>")
    })
}

$(submit_button).click(function () {
    var request_num = get_request_num();
    var request_url = get_url();
    for (var i = 0; i < request_num; i++) {
        var data = generate_data();
        var loadData = changeVerb(data, 'load');
        sendData(request_url, loadData);
    }
})
UPDATE
I made a change to the API, and the issue seems to go away. The change I made was that instead of using one pubsub.Client() for all requests, I initialize a client for every single request coming in. The new API looks like:
from flask import Flask, request, jsonify, render_template
from flask_cors import CORS, cross_origin
import simplejson as json
from google.cloud import pubsub
from functools import wraps
import re
import json

app = Flask(__name__)
...

@app.route('/publish', methods=['POST'])
@cross_origin()
@json_validator
def publish_test_topic():
    ps = pubsub.Client()
    pubsub_topic = 'test_topic'
    data = request.data
    topic = ps.topic(pubsub_topic)
    event = json.loads(data)
    messageId = topic.publish(data)
    return '200 **** sessionId: ' + str(event["sessionId"]) + ", messageId:" + messageId + " ******"
I talked with someone from Google, and it seems to be an issue with the Python client:

The consensus on our side is that there is a thread-safety problem in the current Python client. The client library is being rewritten almost from scratch as we speak, so I don't want to pursue any fixes in the current version. We expect the new version to become available by the end of June.
Running the current code with thread_safe: false in app.yaml, or better yet just instantiating the client in every call, is the workaround -- the solution you found.

For the detailed solution, please see the update in the question.
Google Cloud Pub/Sub message IDs are unique. It should not be possible for some messages to take on the message_id of another message. The fact that message ID 108562396466545 was seemingly received means that Pub/Sub did deliver the message to the subscriber, and it was not lost.
I recommend you check how your session_ids are generated to ensure that they are indeed unique and that there is exactly one per message. Searching for the sessionId in your JSON via a regular-expression search seems a little strange; you would be better off parsing the JSON into an actual object and accessing the fields that way.
In general, duplicate messages in Cloud Pub/Sub are always possible: the system guarantees at-least-once delivery. Those messages can be delivered with the same message ID if the duplication happens on the subscribe side (e.g., the ack is not processed in time), or with a different message ID (e.g., if the publish of the message is retried after an error like a deadline exceeded).
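Given at-least-once delivery, the usual defense is to make the subscriber idempotent. A minimal sketch, not from the question (the in-memory set and process() handler are stand-ins; a real deployment would use a bounded TTL cache or a database table):

import json

seen_keys = set()  # stand-in; use a TTL cache or DB table in production

def handle_message(message_id, payload_bytes):
    # Deduplicate on an application-level key rather than the Pub/Sub
    # message ID, because publish-side retries mint new message IDs.
    event = json.loads(payload_bytes)
    key = event["sessionId"]
    if key in seen_keys:
        return  # duplicate delivery: drop it (and still ack upstream)
    seen_keys.add(key)
    process(event)  # hypothetical downstream handler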
You shouldn't need to create a new client for every publish operation. I'm betting that the reason that "fixed the problem" is that it mitigated a race that exists on the publisher client side. I'm also not convinced that the log line you've shown on the publisher side:
API: 200 **** sessionId: 731, messageId:108562396466545 ******
corresponds to a successful publish of sessionId 731 by publish_test_topic(). Under what conditions is that log line printed? The code that has been presented so far does not show this.