I have a Flask app that is working as expected, and I am now trying to add a message notification section to my page. The difficulty I am having is that the database changes I rely on do not seem to become visible in a timely fashion.
The HTML code is simple:
<ul id="out" cols="85" rows="14">
</ul><br><br>
<script type="text/javascript">
var ul = document.getElementById("out");
var eventSource = new EventSource("/stream_game_channel");
eventSource.onmessage = function(e) {
ul.innerHTML += e.data + '<br>';
}
</script>
Here is the message-write code that the second user executes. I know the code block runs, because the Redis trigger is properly invoked:
msg_join = Messages(game_id=game_id[0],
                    type="gameStart",
                    msg_from=current_user.username,
                    msg_to="Everyone",
                    message=f'{current_user.username} has requested to join.')
db.session.add(msg_join)
db.session.commit()

channel = str(game_id[0]).zfill(5) + 'startGame'
session['channel'] = channel
date_time = datetime.utcnow().strftime("%Y/%m/%d %H:%M:%S")
redisChannel.set(channel, date_time)
Here is the Flask stream code, which is correctly triggered by a new Redis time; but when I pull the list of messages, the new message that the second user has added is not yet accessible:
@games.route('/stream_game_channel')
def stream_game_channel():
    @stream_with_context
    def eventStream():
        channel = session.get('channel')
        game_id = int(channel[:5])
        cnt = 0
        while cnt < 1000:
            print(f'cnt = 0 process running from: {current_user.username}')
            time.sleep(1)
            ntime = redisChannel.get(channel)
            if cnt == 0:
                msgs = db.session.query(Messages).filter(Messages.game_id == game_id)
                msg_list = [i.message for i in msgs]
                cnt += 1
                ltime = ntime
                lmsg_list = msg_list
                for i in msg_list:
                    yield "data: {}\n\n".format(i)
            elif ntime != ltime:
                print(f'cnt > 0 process running from: {current_user.username}')
                time.sleep(3)
                msgs = db.session.query(Messages).filter(Messages.game_id == game_id)
                msg_list = [i.message for i in msgs]
                new_messages = []  # need to write this code still
                ltime = ntime
                cnt += 1
                yield "data: {}\n\n".format(msg_list[len(msg_list) - len(lmsg_list)])
    return Response(eventStream(), mimetype="text/event-stream")
The problem I am running into is that msg_list comes back exactly the same length (i.e. the newly pushed message does not show up when I expect it to). Strangely, the second user's session appears to be able to access this information, because that user's stream correctly reflects the addition.
I am using an Amazon RDS MySQL database.
The solution was to issue a db.session.commit() before my db.session.query(Messages).filter(...), even though no writes were pending. MySQL's InnoDB default isolation level is REPEATABLE READ, so my long-lived session kept reading from the snapshot taken at the start of its open transaction; committing ends that transaction, so the next query immediately sees rows committed from a different user session, and my code then reacted to the change in message-list length properly.
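A minimal sketch of the fix, reusing the db, Messages, and game_id names from the code above:

# Sketch of the fix: commit() ends the session's open transaction, so the
# SELECT below runs against fresh data instead of the stale snapshot.
def fetch_messages(game_id):
    db.session.commit()  # no pending writes; just discard the old snapshot
    msgs = db.session.query(Messages).filter(Messages.game_id == game_id)
    return [m.message for m in msgs]

# Alternative (an assumption, if you control engine creation): pass
# isolation_level="READ COMMITTED" to SQLAlchemy's create_engine() so
# each SELECT sees the latest committed rows without an explicit commit().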
I'm trying to use Telethon to send messages to Telegram groups. After running for some time, it returns:
A wait of 16480 seconds is required (caused by ResolveUsernameRequest).
The code is:
async def main():
    print(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time())))
    config = configparser.ConfigParser()
    config.read("seetings.ini", encoding='utf-8')
    message = config['Customer']['message']
    internal = config['Customer']['internal']
    count = 0
    excel_data = pandas.read_excel('tg_groups.xlsx', sheet_name='Groups')
    for column in excel_data['GroupUsername'].tolist():
        try:
            if str(excel_data['GroupUsername'][count]) == 'None':
                count += 1
                continue
            else:
                chat = await client.get_input_entity(str(excel_data['GroupUsername'][count]))
                await client.send_message(entity=chat, message=message)
        except Exception as e:
            print(e)
            time.sleep(int(internal))
            count = count + 1
            continue
        time.sleep(int(internal))
        count = count + 1
if __name__ == '__main__':
    if proxytype == 'HTTP':
        print('HTTP')
        client = TelegramClient('phone' + phone, api_id, api_hash,
                                proxy=(socks.HTTP, 'localhost', int(proxyport))).start()
    if proxytype == 'socks5':
        print('SOCKS5')
        client = TelegramClient('phone' + phone, api_id, api_hash,
                                proxy=(socks.SOCKS5, 'localhost', int(proxyport))).start()
    myself = client.get_me()
    print(myself)
    freqm = config['Customer']['freq']
    print(int(freqm))
    while True:
        with client:
            client.loop.run_until_complete(main())
        time.sleep(int(freqm))
From the 'Entity' guide, it says the get_input_entity method will look up the user info in the session file cache. Why does it still call ResolveUsernameRequest to get the user info? Is there anything I missed, or does the session file not keep the entity cache?
Thanks for any advice.
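Not part of the original post, but a common mitigation sketch: resolve each username at most once per run and keep your own in-memory map, so ResolveUsernameRequest is only issued on a cache miss (entity_cache and get_entity_cached are hypothetical names):

# Hypothetical helper: reuse a previously resolved InputPeer so repeated
# sends to the same group never trigger another ResolveUsernameRequest.
entity_cache = {}

async def get_entity_cached(client, username):
    if username not in entity_cache:
        # First lookup may hit the network (ResolveUsernameRequest);
        # later lookups are served from our own dict.
        entity_cache[username] = await client.get_input_entity(username)
    return entity_cache[username]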
I made a web app that takes in a text file, reads each line, takes the 11th character, and saves it to a SQLite3 db. How do I lock the database, or have two or more separate tables, while multiple requests are running?
I have added 'ATOMIC_REQUESTS': True to settings.py in Django,
and I tried creating temporary tables for each request, but I can't figure it out. I am pretty new to Django 2.2.
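For reference, ATOMIC_REQUESTS is a per-database key inside the DATABASES setting rather than a standalone top-level setting; a minimal sketch for a Django 2.2 SQLite project:

# settings.py: ATOMIC_REQUESTS lives inside the per-database dict.
# BASE_DIR is the variable defined at the top of a standard settings.py.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
        'ATOMIC_REQUESTS': True,  # wrap each request in a transaction
    }
}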
My views.py:
def home(request):
    if request.method == 'GET':
        return render(request, 'home.html')
    if request.method == 'POST':
        form = DocumentForm(data=request.POST, files=request.FILES)
        print(form.errors)
        if form.is_valid():
            try:
                f = request.FILES['fileToUpload']
            except KeyError:
                print('\033[0;93m' + "No File uploaded, Redirecting" + '\033[0m')
                return HttpResponseRedirect('/tryagain')
            print('\033[32m' + "Success" + '\033[0m')
            print('Working...')
            line = f.readline()
            while line:
                # print(line)
                mst = message_typer.messages.create_message(str(line)[11])
                line = f.readline()
        else:
            print('\033[0;91m' + "Failed to validate Form" + '\033[0m')
        return HttpResponseRedirect('/output')
    return HttpResponse('Failure')
def output(request):
    s = message_typer.messages.filter(message='s').count()
    A = message_typer.messages.filter(message='A').count()
    d = message_typer.messages.filter(message='d').count()
    E = message_typer.messages.filter(message='E').count()
    X = message_typer.messages.filter(message='X').count()
    P = message_typer.messages.filter(message='P').count()
    r = message_typer.messages.filter(message='r').count()
    B = message_typer.messages.filter(message='B').count()
    H = message_typer.messages.filter(message='H').count()
    I = message_typer.messages.filter(message='I').count()
    J = message_typer.messages.filter(message='J').count()
    R = message_typer.messages.filter(message='R').count()
    message_types = {'s': s, 'A': A, 'd': d, 'E': E, 'X': X, 'P': P,
                     'r': r, 'B': B, 'H': H, 'I': I, 'J': J, 'R': R}
    output = {'output': message_types}
    # return HttpResponse('Received')
    message_typer.messages.all().delete()
    return render(request, 'output.html', output)
When the web page loads, it should display a simple breakdown of the characters in the 11th position of the uploaded text file.
However, if two requests run concurrently, the first page that makes the request gets an OperationalError: database is locked,
with the traceback pointing here:
message_typer.messages.all().delete()
The second page then shows the combined totals of the two files that were uploaded.
I do want to wipe the table afterwards, so that the next user has an empty table to populate and count.
Is there a better way?
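One possible approach, sketched under the assumption that you can add a batch_id field to the message model (none of the names below beyond message_typer are from the original code): tag each upload's rows with a per-request ID, so concurrent requests never count or delete each other's rows and no table-level locking is needed.

import uuid
from django.db.models import Count

# Hypothetical sketch: batch_id is a new CharField on the message model,
# and create_message is assumed to accept it.
def process_upload(f):
    batch_id = uuid.uuid4().hex  # unique per request
    for line in f:
        message_typer.messages.create_message(str(line)[11], batch_id=batch_id)
    return batch_id

def summarize(batch_id):
    # Count per character for this request's rows only.
    counts = (message_typer.messages.filter(batch_id=batch_id)
              .values('message')
              .annotate(n=Count('message')))
    # Delete only this request's rows; other uploads are untouched.
    message_typer.messages.filter(batch_id=batch_id).delete()
    return {row['message']: row['n'] for row in counts}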
I have a collection with two folders, one for POSTs and one for GETs.
At the collection level, I have set variables, and the following collection-level script runs after every request:
requestLast = pm.variables.get("requestLast");
requestCurrent = pm.variables.get("requestCurrent");
statusGet = pm.variables.get("statusGet");
requestLast = requestCurrent;
requestCurrent = pm.request.name;
I want to keep track of the previously run request at all times, so I can return to it when necessary.
In the 'positivePosts' folder I have the following test script:
if (statusGet === 0) {
    postman.setNextRequest("resultsPositive");
}
else {
    statusGet = 0;
}
pm.variables.set("requestLast", requestLast);
pm.variables.set("requestCurrent", requestCurrent);
pm.variables.set("statusGet", statusGet);
The individual POST requests have no test scripts.
The results folder does not have any tests, but the resultsPositive GET has this test script:
var jsonData = JSON.parse(responseBody);
schema = pm.variables.get("schemaPositive");
tests["Valid Schema"] = tv4.validate(jsonData, schema);
tests["Status code is 200"] = responseCode.code === 200;
statusGet = 1;
postman.setNextRequest(requestLast);
pm.variables.set("requestLast", requestLast);
pm.variables.set("requestCurrent", requestCurrent);
pm.variables.set("statusGet", statusGet);
There are no pre-request scripts anywhere in the collection.
When running the collection, I would expect this order:
postRich
resultsPositive
postAllProperties
resultsPositive
postMinimum
resultsPositive
However, what I actually see is:
postRich
postAllProperties
postPositive
I also don't understand why resultsPositive is not run after postRich.
I'm experiencing a problem with GCP Pub/Sub where a small percentage of data is lost when publishing thousands of messages in a couple of seconds.
I'm logging both the message_id from Pub/Sub and a session_id unique to each message, on both the publishing end and the receiving end, and the result I'm seeing is that some messages on the receiving end share a session_id but have different message_ids. Also, some messages were missing.
For example, in one test I sent 5,000 messages to Pub/Sub, and exactly 5,000 messages were received in total, yet 8 of the originals were missing, with duplicates of other messages taking their place. The log for the lost messages looks like this:
MISSING sessionId: 731 (missing in log from pull request, but present in log from Flask API)
messageId FOUND: messageId: 108562396466545
API: 200 **** sessionId: 731, messageId: 108562396466545 ****** (log from Flask API)
Pubsub: sessionId: 730, messageId: 108562396466545 (log from pull request)
And the duplicates look like:
======= Duplicates FOUND on sessionId: 730=======
sessionId: 730, messageId:108562396466545
sessionId: 730, messageId:108561339282318
(both are logs from pull request)
All missing data and duplicates look like this.
From the above example, it appears that some messages have taken the message_id of another message and have been sent twice with two different message_ids.
I wonder if anyone could help me figure out what is going on? Thanks in advance.
Code
I have an API sending messages to Pub/Sub, which looks like this:
from flask import Flask, request, jsonify, render_template
from flask_cors import CORS, cross_origin
import simplejson as json
from google.cloud import pubsub
from functools import wraps
import re
import json

app = Flask(__name__)
ps = pubsub.Client()
...

@app.route('/publish', methods=['POST'])
@cross_origin()
@json_validator
def publish_test_topic():
    pubsub_topic = 'test_topic'
    data = request.data
    topic = ps.topic(pubsub_topic)
    event = json.loads(data)
    messageId = topic.publish(data)
    return '200 **** sessionId: ' + str(event["sessionId"]) + ", messageId:" + messageId + " ******"
And this is the code I used to read from Pub/Sub:
from google.cloud import pubsub
import re
import json

ps = pubsub.Client()
topic = ps.topic('test-xiu')
sub = topic.subscription('TEST-xiu')

max_messages = 1
stop = False
messages = []

class Message(object):
    """docstring for Message."""
    def __init__(self, sessionId, messageId):
        super(Message, self).__init__()
        self.sessionId = sessionId
        self.messageId = messageId

def pull_all():
    while stop == False:
        m = sub.pull(max_messages=max_messages, return_immediately=False)
        for data in m:
            ack_id = data[0]
            message = data[1]
            messageId = message.message_id
            data = message.data
            event = json.loads(data)
            sessionId = str(event["sessionId"])
            messages.append(Message(sessionId=sessionId, messageId=messageId))
            print '200 **** sessionId: ' + sessionId + ", messageId:" + messageId + " ******"
            sub.acknowledge(ack_ids=[ack_id])

pull_all()
For generating the session_id, sending the request, and logging the response from the API:
// generate trackable sessionId
var sessionId = 0;
var increment_session_id = function () {
    sessionId++;
    return sessionId;
};

var generate_data = function () {
    var data = {};
    // data.sessionId = faker.random.uuid();
    data.sessionId = increment_session_id();
    data.user = get_rand(userList);
    data.device = get_rand(deviceList);
    data.visitTime = new Date;
    data.location = get_rand(locationList);
    data.content = get_rand(contentList);
    return data;
};

var sendData = function (url, payload) {
    var request = $.ajax({
        url: url,
        contentType: 'application/json',
        method: 'POST',
        data: JSON.stringify(payload),
        error: function (xhr, status, errorThrown) {
            console.log(xhr, status, errorThrown);
            $('.result').prepend("<pre id='json'>" + JSON.stringify(xhr, null, 2) + "</pre>");
            $('.result').prepend("<div>errorThrown: " + errorThrown + "</div>");
            $('.result').prepend("<div>======FAIL=======</div><div>status: " + status + "</div>");
        }
    }).done(function (xhr) {
        console.log(xhr);
        $('.result').prepend("<div>======SUCCESS=======</div><pre id='json'>" + JSON.stringify(payload, null, 2) + "</pre>");
    });
};

$(submit_button).click(function () {
    var request_num = get_request_num();
    var request_url = get_url();
    for (var i = 0; i < request_num; i++) {
        var data = generate_data();
        var loadData = changeVerb(data, 'load');
        sendData(request_url, loadData);
    }
});
UPDATE
I made a change to the API, and the issue seems to have gone away. Instead of using one pubsub.Client() for all requests, I now initialize a client for every single incoming request. The new API looks like:
from flask import Flask, request, jsonify, render_template
from flask_cors import CORS, cross_origin
import simplejson as json
from google.cloud import pubsub
from functools import wraps
import re
import json

app = Flask(__name__)
...

@app.route('/publish', methods=['POST'])
@cross_origin()
@json_validator
def publish_test_topic():
    ps = pubsub.Client()
    pubsub_topic = 'test_topic'
    data = request.data
    topic = ps.topic(pubsub_topic)
    event = json.loads(data)
    messageId = topic.publish(data)
    return '200 **** sessionId: ' + str(event["sessionId"]) + ", messageId:" + messageId + " ******"
I talked with someone from Google, and it seems to be an issue with the Python client:
The consensus on our side is that there is a thread-safety problem in the current Python client. The client library is being rewritten almost from scratch as we speak, so I don't want to pursue any fixes in the current version. We expect the new version to become available by the end of June.
Running the current code with thread_safe: false in app.yaml, or better yet just instantiating the client in every call, is the workaround; that is the solution you found.
For the detailed solution, please see the update in the question.
Google Cloud Pub/Sub message IDs are unique. It should not be possible for "some messages [to have] taken the message_id of another message." The fact that message ID 108562396466545 was seemingly received means that Pub/Sub did deliver the message to the subscriber, and it was not lost.
I recommend you check how your session_ids are generated to ensure that they are indeed unique and that there is exactly one per message. Searching for the sessionId in your JSON via a regular-expression search seems a little strange. You would be better off parsing the JSON into an actual object and accessing the fields that way.
In general, duplicate messages in Cloud Pub/Sub are always possible; the system guarantees at-least-once delivery. Those messages can be delivered with the same message ID if the duplication happens on the subscribe side (e.g., the ack is not processed in time), or with a different message ID (e.g., if the publish of the message is retried after an error like a deadline exceeded).
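Because of that at-least-once guarantee, subscribers are normally written to be idempotent. A minimal illustrative sketch of subscriber-side de-duplication (seen_ids and handle are hypothetical names, not from the question's code):

# Track already-processed IDs so redelivered messages are acknowledged
# but not processed twice. An in-memory set only works per process;
# a shared store would be needed across subscribers.
seen_ids = set()

def handle(message_id, payload):
    if message_id in seen_ids:
        return  # duplicate redelivery: ack it, but skip processing
    seen_ids.add(message_id)
    print('processing', message_id, payload)  # application logic goes here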
You shouldn't need to create a new client for every publish operation. I'm betting that the reason that "fixed the problem" is that it mitigated a race that exists in the publisher client. I'm also not convinced that the log line you've shown on the publisher side:
API: 200 **** sessionId: 731, messageId:108562396466545 ******
corresponds to a successful publish of sessionId 731 by publish_test_topic(). Under what conditions is that log line printed? The code presented so far does not show this.
I am learning to use the Rx extensions for a Silverlight 4 app I am working on. I created a sample app to nail down the process and I cannot get it to return anything.
Here is the main code:
private IObservable<Location> GetGPSCoordinates(string Address1)
{
    var gsc = new GeocodeServiceClient("BasicHttpBinding_IGeocodeService") as IGeocodeService;
    Location returnLocation = new Location();
    GeocodeResponse gcResp = new GeocodeResponse();
    GeocodeRequest gcr = new GeocodeRequest();
    gcr.Credentials = new Credentials();
    gcr.Credentials.ApplicationId = APP_ID2;
    gcr.Query = Address1;
    var myFunc = Observable.FromAsyncPattern<GeocodeRequest, GeocodeResponse>(gsc.BeginGeocode, gsc.EndGeocode);
    gcResp = myFunc(gcr) as GeocodeResponse;
    if (gcResp.Results.Count > 0 && gcResp.Results[0].Locations.Count > 0)
    {
        returnLocation = gcResp.Results[0].Locations[0];
    }
    return returnLocation as IObservable<Location>;
}
gcResp comes back as null. Any thoughts or suggestions would be greatly appreciated.
The observable source you are subscribing to is asynchronous, so you can't access the result immediately after subscribing. You need to access the result in the subscription.
Better yet, don't subscribe at all and simply compose the response:
private IObservable<Location> GetGPSCoordinates(string Address1)
{
    IGeocodeService gsc =
        new GeocodeServiceClient("BasicHttpBinding_IGeocodeService");
    GeocodeRequest gcr = new GeocodeRequest();
    gcr.Credentials = new Credentials();
    gcr.Credentials.ApplicationId = APP_ID2;
    gcr.Query = Address1;
    var factory = Observable.FromAsyncPattern<GeocodeRequest, GeocodeResponse>(
        gsc.BeginGeocode, gsc.EndGeocode);
    return factory(gcr)
        .Where(response => response.Results.Count > 0 &&
                           response.Results[0].Locations.Count > 0)
        .Select(response => response.Results[0].Locations[0]);
}
If you only need the first valid value (the location of the address is unlikely to change), then add a .Take(1) between the Where and Select.
Edit: If you want to specifically handle the address not being found, you can either return results and have the consumer deal with it or you can return an Exception and provide an OnError handler when subscribing. If you're thinking of doing the latter, you would use SelectMany:
return factory(gcr)
    .SelectMany(response => (response.Results.Count > 0 &&
                             response.Results[0].Locations.Count > 0)
        ? Observable.Return(response.Results[0].Locations[0])
        : Observable.Throw<Location>(new AddressNotFoundException())
    );
If you expand out the type of myFunc you'll see that it is Func<GeocodeRequest, IObservable<GeocodeResponse>>.
Func<GeocodeRequest, IObservable<GeocodeResponse>> myFunc =
    Observable.FromAsyncPattern<GeocodeRequest, GeocodeResponse>(
        gsc.BeginGeocode, gsc.EndGeocode);
So when you call myFunc(gcr) you have an IObservable<GeocodeResponse> and not a GeocodeResponse. Your code myFunc(gcr) as GeocodeResponse returns null because the cast is invalid.
What you need to do is either get the last value of the observable or just subscribe. Calling .Last() will block. If you call .Subscribe(...), your response will come through on the callback thread.
Try this:
gcResp = myFunc(gcr).Last();
Let me know how you go.
Richard (and others),
So I have the code returning the location, and I have the calling code subscribing. Here is (hopefully) the final issue: when I call GetGPSCoordinates, the next statement executes immediately, without waiting for the Subscribe to finish. Here's an example in a button OnClick event handler.
Location newLoc = new Location();
GetGPSCoordinates(this.Input.Text).ObserveOnDispatcher().Subscribe(x =>
{
    if (x.Results.Count > 0 && x.Results[0].Locations.Count > 0)
    {
        newLoc = x.Results[0].Locations[0];
        Output.Text = "Latitude: " + newLoc.Latitude.ToString() +
                      ", Longitude: " + newLoc.Longitude.ToString();
    }
    else
    {
        Output.Text = "Invalid address";
    }
});
Output.Text = " Outside of subscribe --- Latitude: " + newLoc.Latitude.ToString() +
              ", Longitude: " + newLoc.Longitude.ToString();
The Output.Text assignment outside of the Subscribe executes before the Subscribe has finished and displays zeros, and then the one inside the Subscribe displays the new location info.
The purpose of this process is to get location info that will then be saved in a database record, and I am processing multiple addresses sequentially in a foreach loop. I chose the Rx extensions as a solution to avoid the coding trap of async callbacks, but it seems I have exchanged one trap for another.
Thoughts, comments, suggestions?