I want to automatically log out of an OpenERP session if the session is more than 30 minutes old.
This can be done by editing the session_gc method in .../addons/web/http.py. The following code shows the change: remove or comment out the if condition (and un-indent the lines that followed it accordingly):
def session_gc(session_store):
    # if random.random() < 0.001:
    # sessions older than x seconds are deleted
    last_week = time.time() - x
    for fname in os.listdir(session_store.path):
        path = os.path.join(session_store.path, fname)
        try:
            if os.path.getmtime(path) < last_week:
                os.unlink(path)
        except OSError:
            pass
Here x is the timeout in seconds; set it to whatever your use case needs.
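For example, for the 30-minute timeout asked about here, that works out to:

x = 30 * 60  # 30 minutes = 1800 seconds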
I have a Python thread in which I call a function, wait X seconds after it finishes, and then repeat indefinitely (it runs as a Windows service).
I am using the code below. Since I am incrementing "count" every minute, is there, theoretically, a case where this could "overflow" if it runs for a long time (for example, if I lowered the interval to every 2 seconds or something very small)?
Is this method overkill if all I need to do is wait until the function finishes and then run it again after approximately (it doesn't have to be precise) X seconds?
Code:
import logging
import time

def g_tick():
    t = time.time()
    count = 0
    while True:
        count += 1
        logging.debug(f"Count is: {count}")
        yield max(t + count*60 - time.time(), 0)  # 1-minute intervals

g = g_tick()
while True:
    try:
        processor_work()  # this function does work and returns when complete
    except Exception as e:
        log_msg = f"An exception occurred while processing tickets due to: {e}"
        logging.exception(log_msg)
    time.sleep(next(g))
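For comparison, since precision is explicitly not required, a plain sleep-after-finish loop is a simpler sketch of roughly the same behavior (processor_work is the function from the code above); note that each cycle then takes the work time plus X seconds, rather than landing on fixed one-minute boundaries the way the generator version does:

import logging
import time

INTERVAL = 60  # X seconds; precision is not required per the question

while True:
    try:
        processor_work()  # does the work and returns when complete
    except Exception:
        logging.exception("An exception occurred while processing tickets")
    time.sleep(INTERVAL)  # wait ~X seconds after the run finishes, then repeat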
I'm trying to get an increasing delay/pause and load over time in JMeter load testing while keeping the sequence constant. For example:
Initially: 10 samples (of 2 GET requests: a,b,a,b,a,b...)
Then, after those 10 samples, a delay/pause of 10 seconds, followed by 20 samples (a,b,a,b,a,b...)
After the 20 samples, another delay/pause of 20 seconds, then 30 samples (a,b,a,b,a,b...)
And so on.
The constraints here are:
Getting the exact number of samples
Getting the desired delay
Maintaining the order of requests
The Critical Section Controller helps maintain the order of threads, but only in a normal Thread Group. So when I try the Ultimate Thread Group to get the desired variable delay and load, the order and number of samples go haywire.
I've tried the following:
Run test group consecutively
Flow control action
Throughput controller
Module controller
Interleave controller
Synchronizing timer (with and without flow control)
Add think times to children
Is there any way to get this output in JMeter? Or should I just opt for scripting?
Add User Defined Variables and set the following variables there:
samples=10
delay=10
Add a Thread Group and specify the required number of threads and iterations
Add a Loop Controller under the Thread Group and set "Loop Count" to ${samples}. Put your requests under the Loop Controller
Add a JSR223 Sampler and put the following code into the "Script" area:
def delay = vars.get('delay') as long
sleep(delay * 1000)                        // pause for the current delay, in milliseconds
def new_delay = delay + 10
vars.put('delay', new_delay as String)     // make the next pause 10 seconds longer
def samples = vars.get('samples') as int
def new_samples = samples + 10
vars.put('samples', new_samples as String) // run 10 more samples in the next batch
I have a routine with multiple possible trials (in a loop), and the participant must press a key to go to the next trial. I want these trials to go on (with keypresses) until a certain amount of time has passed in the routine, say 5 seconds, regardless of how many possible trials are left in my conditionsFile. At the 5-second mark, the routine should end.
(If this is relevant: each trial is only displayed once.)
I've tried a few things, all of which only end up making the routine skip to the next trial after 5 seconds while still going through all the possible trials, i.e. not ending the routine at 5 s.
For example, I've tried adding:
# ------Prepare to start Routine "Images"-------
continueRoutine = True
routineTimer.add(5)
and then
# -------Run Routine "Images"-------
while continueRoutine and routineTimer.getTime() > 0:
I've also tried adding this to the while loop above:
if image.status == STARTED:
    # is it time to stop? (based on global clock, using actual start)
    if tThisFlipGlobal > image.tStartRefresh + 5.0 - frameTolerance:
        # keep track of stop time/frame for later
        image.tStop = t  # not accounting for scr refresh
        image.frameNStop = frameN  # exact frame index
        win.timeOnFlip(image, 'tStopRefresh')  # time at next scr refresh
        image.setAutoDraw(False)
Can anyone help?
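For what it's worth, one approach that might work is to end both the current trial and the surrounding loop once the time is up, rather than only the trial. A rough sketch for a Builder code component ("Each Frame" tab), assuming the loop over the conditionsFile is named trials and that a clock called routineClock (a hypothetical name) is reset when the first trial starts:

# end the routine and stop serving further trials once 5 s have elapsed
if routineClock.getTime() >= 5.0:
    continueRoutine = False  # ends the current trial's routine
    trials.finished = True   # asks the loop not to run the remaining trials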
I'm testing python-bigquery-storage to insert multiple items into a table using the _default stream.
I used the example shown in the official docs as a basis, and modified it to use the default stream.
Here is a minimal example that's similar to what I'm trying to do:
customer_record.proto
syntax = "proto2";
message CustomerRecord {
optional string customer_name = 1;
optional int64 row_num = 2;
}
append_rows_default.py
from itertools import islice

from google.cloud import bigquery_storage_v1
from google.cloud.bigquery_storage_v1 import types
from google.cloud.bigquery_storage_v1 import writer
from google.protobuf import descriptor_pb2

import customer_record_pb2

import logging
logging.basicConfig(level=logging.DEBUG)

CHUNK_SIZE = 2  # Maximum number of rows to use in each AppendRowsRequest.

def chunks(l, n):
    """Yield successive `n`-sized chunks from `l`."""
    _it = iter(l)
    while True:
        chunk = [*islice(_it, 0, n)]
        if chunk:
            yield chunk
        else:
            break

def create_stream_manager(project_id, dataset_id, table_id, write_client):
    # Use the default stream
    # The stream name is:
    # projects/{project}/datasets/{dataset}/tables/{table}/_default
    parent = write_client.table_path(project_id, dataset_id, table_id)
    stream_name = f'{parent}/_default'

    # Create a template with fields needed for the first request.
    request_template = types.AppendRowsRequest()

    # The initial request must contain the stream name.
    request_template.write_stream = stream_name

    # So that BigQuery knows how to parse the serialized_rows, generate a
    # protocol buffer representation of our message descriptor.
    proto_schema = types.ProtoSchema()
    proto_descriptor = descriptor_pb2.DescriptorProto()
    customer_record_pb2.CustomerRecord.DESCRIPTOR.CopyToProto(proto_descriptor)
    proto_schema.proto_descriptor = proto_descriptor
    proto_data = types.AppendRowsRequest.ProtoData()
    proto_data.writer_schema = proto_schema
    request_template.proto_rows = proto_data

    # Create an AppendRowsStream using the request template created above.
    append_rows_stream = writer.AppendRowsStream(write_client, request_template)

    return append_rows_stream

def send_rows_to_bq(project_id, dataset_id, table_id, write_client, rows):
    append_rows_stream = create_stream_manager(project_id, dataset_id, table_id, write_client)
    response_futures = []
    row_count = 0

    # Send the rows in chunks, to limit memory usage.
    for chunk in chunks(rows, CHUNK_SIZE):
        proto_rows = types.ProtoRows()
        for row in chunk:
            row_count += 1
            proto_rows.serialized_rows.append(row.SerializeToString())

        # Create an append row request containing the rows
        request = types.AppendRowsRequest()
        proto_data = types.AppendRowsRequest.ProtoData()
        proto_data.rows = proto_rows
        request.proto_rows = proto_data

        future = append_rows_stream.send(request)
        response_futures.append(future)

    # Wait for all the append row requests to finish.
    for f in response_futures:
        f.result()

    # Shutdown background threads and close the streaming connection.
    append_rows_stream.close()

    return row_count

def create_row(row_num: int, name: str):
    row = customer_record_pb2.CustomerRecord()
    row.row_num = row_num
    row.customer_name = name
    return row

def main():
    write_client = bigquery_storage_v1.BigQueryWriteClient()
    rows = [create_row(i, f"Test{i}") for i in range(0, 20)]
    send_rows_to_bq("PROJECT_NAME", "DATASET_NAME", "TABLE_NAME", write_client, rows)

if __name__ == '__main__':
    main()
Note:
In the above, CHUNK_SIZE is 2 just for this minimal example; in a real situation, I used a chunk size of 5000.
In real usage, I have several separate streams of data that need to be processed in parallel, so I make several calls to send_rows_to_bq, one for each stream of data, using a thread pool (one thread per stream of data). (I'm assuming here that AppendRowsStream is not meant to be shared by multiple threads, but I might be wrong).
It mostly works, but I often get a mix of intermittent errors in the call to append_rows_stream's send method:
google.cloud.bigquery_storage_v1.exceptions.StreamClosedError: This manager has been closed and can not be used.
google.api_core.exceptions.Unknown: None There was a problem opening the stream. Try turning on DEBUG level logs to see the error.
I think I just need to retry on these errors, but I'm not sure how to best implement a retry strategy here. My impression is that I need to use the following strategy to retry errors when calling send:
If the error is a StreamClosedError, the append_rows_stream stream manager can't be used anymore, and so I need to call close on it and then call my create_stream_manager again to create a new one, then try to call send on the new stream manager.
Otherwise, on any google.api_core.exceptions.ServerError error, retry the call to send on the same stream manager.
Am I approaching this correctly?
Thank you.
The best solution to this problem is to update to a newer release of the library.
The problem happens (or happened) in older versions because once a Write API connection reaches 10 MB, it hangs.
If updating the library does not work, you can try these options:
Limit what you send over a single connection to under 10 MB.
Disconnect and reconnect to the API.
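If you still need retries around send while on an affected version, a rough sketch of the strategy described in the question could look like the following. send_with_retry and make_stream are hypothetical names; create_stream_manager is the helper from the question's code, and the exception types are the ones quoted there:

from google.api_core import exceptions as core_exceptions
from google.cloud.bigquery_storage_v1.exceptions import StreamClosedError

def send_with_retry(stream, request, make_stream, max_attempts=3):
    # Returns (stream, future); the caller keeps the possibly-recreated stream.
    for attempt in range(max_attempts):
        try:
            return stream, stream.send(request)
        except StreamClosedError:
            # The manager can't be used anymore: close it and build a fresh one.
            stream.close()
            stream = make_stream()
        except core_exceptions.ServerError:
            # Transient server-side error: retry on the same stream manager.
            pass
    raise RuntimeError(f"send failed after {max_attempts} attempts")

Here make_stream would be something like lambda: create_stream_manager(project_id, dataset_id, table_id, write_client), so that a StreamClosedError produces a new manager for the remaining chunks.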
I have a GtkTreeView object that uses a GtkListStore model that is constantly being updated as follows:
Get new transaction
Feed data into numpy array
Convert numbers to formatted strings, store in pandas dataframe
Add updated token info to the GtkListStore via GtkListStore.set(titer, liststore_cols, liststore_data), where liststore_data is the updated info and liststore_cols holds the column numbers (both are lists).
Here's the function that updates the ListStore:
# update ListStore
titer = ls_full.get_iter(row)
liststore_data = [df.at[row, col] for col in my_vars['ls_full'][3:]]

# check for NaN values, add a " " (space) placeholder if necessary
for i in range(3, len(liststore_data)):
    if liststore_data[i] != liststore_data[i]:  # NaN is the only value not equal to itself
        liststore_data[i] = " "

liststore_cols = [my_vars['ls_full'].index(col) + 1
                  for col in my_vars['ls_full'][3:]]
ls_full.set(titer, liststore_cols, liststore_data)
Class that gets the messages from the websocket:
class MyWebsocketClient(cbpro.WebsocketClient):
    # class extensions to WebsocketClient
    def on_open(self):
        # sets up ticker symbol, subscriptions for socket feed
        self.url = "wss://ws-feed.pro.coinbase.com/"
        self.channels = ['ticker']
        self.products = list(cbp_symbols.keys())

    def on_message(self, msg):
        # gets latest message from socket, sends it off to be processed
        if "best_ask" in msg and "time" in msg:
            # checks to see if token price has changed before updating
            update_needed = parse_data(msg)
            if update_needed:
                update_ListStore(msg)
        else:
            print(f'Bad message: {msg}')
When the program first starts, the updates are consistent. Each time a new transaction comes in, the screen reflects it, updating the proper token. However, after a random amount of time (I've seen anywhere from 5 minutes to over an hour), the screen stops updating unless I change the focus of the window (either activating or deactivating it). Even that doesn't last long (only enough to update the screen once). No other errors are being reported, and memory usage is not spiking (it's constant at 140 MB).
How can I troubleshoot this? I'm not even sure where to begin. The data back-ends seem to be OK (the data is never corrupted and never lags behind).
Since you've said in the comments that it is running in a separate thread, I'd suggest wrapping your "update liststore" function with GLib.idle_add:
from gi.repository import GLib
GLib.idle_add(update_liststore)
I've had similar issues in the past and this fixed things. Sometimes updating the liststore from another thread is fine; sometimes it will randomly spew errors.
Basically, only one thread should update the GUI at a time. By wrapping the call in GLib.idle_add() you make sure your background thread does not interfere with the main thread updating the GUI.
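For example, in the websocket callback from the question, the ListStore update could be scheduled on the main loop instead of being called directly (a sketch reusing the question's parse_data and update_ListStore):

from gi.repository import GLib

def on_message(self, msg):
    # runs in the websocket thread; hand the GUI work to the main loop
    if "best_ask" in msg and "time" in msg:
        if parse_data(msg):
            # extra arguments to idle_add are passed to the callback, so
            # update_ListStore(msg) will run in the GTK main thread
            GLib.idle_add(update_ListStore, msg)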