Getting UnboundLocalError in Python 3.8 program - python-3.8

It keeps raising UnboundLocalError: local variable 'can_i_get' referenced before assignment right here:
#this makes you have to have a certain level to get better weapon.
#ignore this -->
if player_level == (10,12): #checks if there a high level
    can_i_get = True #this checks if its True
elif player_level == (1,9): #checks if there not a high level
    can_i_get = False #this checks if its False
else:
    print("you cant use that ")
#this makes you able to get better weapons
# create a dictionary of the possible moves and randomly select the damage it does when selected
if can_i_get:
    What = {"ROOF": random.randint(18, 10000),
            "Power Sword": random.randint(10, 21000), #this makes the damage random
            "Mega heal": random.randint(20, 10000)} #why do i need to keep adding these?
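No answer appears above, but the cause is visible in the snippet: player_level == (10,12) compares an integer to a tuple, which is always False, so the else branch runs and can_i_get is never bound. A minimal sketch of one fix (the level bounds 1–9 and 10–12 are assumptions read off the tuples; the dictionary is renamed moves here):

```python
import random

player_level = 11  # example value; the real program sets this elsewhere (assumption)

# player_level == (10, 12) is always False for an int, so the original
# else branch ran and can_i_get was never assigned. Range comparisons
# plus a default binding guarantee the name always exists:
if 10 <= player_level <= 12:
    can_i_get = True
elif 1 <= player_level <= 9:
    can_i_get = False
else:
    print("you cant use that ")
    can_i_get = False  # default so the later check never hits an unbound name

if can_i_get:
    moves = {"ROOF": random.randint(18, 10000),
             "Power Sword": random.randint(10, 21000),
             "Mega heal": random.randint(20, 10000)}
```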


GtkTreeView stops updating unless I change the focus of the window

I have a GtkTreeView object that uses a GtkListStore model that is constantly being updated as follows:
Get new transaction
Feed data into numpy array
Convert numbers to formatted strings, store in pandas dataframe
Add updated token info to GtkListStore via GtkListStore.set(titer, liststore_cols, liststore_data), where liststore_data is the updated info, liststore_cols is the name of the columns (both are lists).
Here's the function that updates the ListStore:
# update ListStore
titer = ls_full.get_iter(row)
liststore_data = [df.at[row, col] for col in my_vars['ls_full'][3:]]
# check for NaN values, add a " " (space) placeholder if necessary
for i in range(3, len(liststore_data)):
    if liststore_data[i] != liststore_data[i]:  # NaN is never equal to itself
        liststore_data[i] = " "
liststore_cols = [my_vars['ls_full'].index(col) + 1
                  for col in my_vars['ls_full'][3:]]
ls_full.set(titer, liststore_cols, liststore_data)
Class that gets the messages from the websocket:
class MyWebsocketClient(cbpro.WebsocketClient):
    # class extensions to WebsocketClient
    def on_open(self):
        # sets up ticker symbol, subscriptions for socket feed
        self.url = "wss://ws-feed.pro.coinbase.com/"
        self.channels = ['ticker']
        self.products = list(cbp_symbols.keys())

    def on_message(self, msg):
        # gets latest message from socket, sends it off to be processed
        if "best_ask" in msg and "time" in msg:
            # checks to see if token price has changed before updating
            update_needed = parse_data(msg)
            if update_needed:
                update_ListStore(msg)
        else:
            print(f'Bad message: {msg}')
When the program first starts, the updates are consistent. Each time a new transaction comes in, the screen reflects it, updating the proper token. However, after a random amount of time (I've seen anywhere from 5 minutes to over an hour), the screen stops updating unless I change the focus of the window (either activating or deactivating it). Even that does not last long (only enough to update the screen once). No other errors are being reported, and memory usage is not spiking (constant at 140 MB).
How can I troubleshoot this? I'm not even sure where to begin. The data back-ends seem to be OK (the data is never corrupted and never lags behind).
Since you've said in the comments that it is running in a separate thread, I'd suggest wrapping your "update ListStore" function in GLib.idle_add:
from gi.repository import GLib
GLib.idle_add(update_liststore)
I've had similar issues in the past and this fixed things. Sometimes updating the ListStore directly works fine; sometimes it will randomly spew errors.
Basically, only one thread should update the GUI at a time. By wrapping the call in GLib.idle_add() you make sure your background thread does not interfere with the main thread updating the GUI.
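As a rough pure-Python stand-in for what GLib.idle_add provides (every name below is illustrative, not GTK API): worker threads only enqueue callbacks, and the main loop thread alone executes them, so widget state is only ever touched from a single thread.

```python
import queue
import threading

# Stand-in for GLib.idle_add: worker threads enqueue callbacks; only the
# main loop thread ever runs them.
pending = queue.Queue()

def idle_add(callback, *args):
    pending.put((callback, args))

gui_rows = []  # stand-in for the ListStore

def update_liststore(msg):
    gui_rows.append(msg)

def worker():
    # a background thread never touches gui_rows directly
    for i in range(5):
        idle_add(update_liststore, {"seq": i})

t = threading.Thread(target=worker)
t.start()
t.join()

# the "main loop" drains the queue on its own thread
while not pending.empty():
    cb, args = pending.get()
    cb(*args)

print(len(gui_rows))  # prints 5
```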

Lego-EV3: How to fix EOFError when catching user-input via multiprocessing?

Currently, I am working with an EV3 Lego robot that is controlled by several neurons. Now I want to modify the code (running on Python 3) in such a way that one can change certain parameter values on the run via the shell (Ubuntu), in order to manipulate the robot's dynamics at any time (and multiple times). Here is a schema of what I have achieved so far, based on a short example code:
from multiprocessing import Process
from multiprocessing import SimpleQueue

import ev3dev.ev3 as ev3

class Neuron:
    # (definitions of class variables and update functions)

def check_input(queue):
    while True:
        try:
            new_para = str(input("Type 'parameter=value': "))
            float(new_para[2:])  # checking for float in input
            var = new_para[0:2]
            if var == "k=":  # change parameter k
                queue.put(new_para)
            elif var == "g=":  # change parameter g
                queue.put(new_para)
            else:
                print("Error. Type 'k=...' or 'g=...'")
                queue.put(0)  # put anything in queue
        except (ValueError, EOFError):
            print("New value is not a number. Try again!")

# (some neuron-specific initializations)

queue = SimpleQueue()
check = Process(target=check_input, args=(queue,))
check.start()

while True:
    if not queue.empty():
        cmd = queue.get()
        var = cmd[0]
        val = float(cmd[2:])
        if var == "k":
            Neuron.K = val
        elif var == "g":
            Neuron.g = val
    # (updating procedure for neurons, writing data to file)
Since I am new to multiprocessing, there are certainly some mistakes concerning locking, efficiency and so on, but the robot moves and input prompts appear in the shell. However, the current problem is that it's actually impossible to make an input:
> python3 controller_multiprocess.py
> Type 'parameter=value': New value is not a number. Try again!
> Type 'parameter=value': New value is not a number. Try again!
> Type 'parameter=value': New value is not a number. Try again!
> ... (and so on)
I know that this behaviour involves the EOFError exception, because that is the error which occurs when the exception handler is removed (and the process crashes). Hence, the program just rushes through the try block here, assuming over and over again that no input (an empty string) was made. Why does this happen? When not run as a separate process, the program patiently waits for an input as expected. And how can one fix or bypass this issue so that changing parameters becomes possible as wanted?
Thanks in advance!
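No answer is recorded here. One common workaround (a deliberate swap of technique: a thread instead of a Process for the reader) is to read stdin from a thread, because multiprocessing starts the child with its stdin closed, which is why input() raises EOFError immediately. A sketch under that assumption, with illustrative parsing via partition:

```python
import queue
import threading

def check_input(q):
    # a thread shares the parent's stdin, so input() actually blocks here
    while True:
        try:
            new_para = input("Type 'parameter=value': ")
            var, _, val = new_para.partition("=")
            if var in ("k", "g"):
                q.put((var, float(val)))  # ValueError if val is not a number
            else:
                print("Error. Type 'k=...' or 'g=...'")
        except ValueError:
            print("New value is not a number. Try again!")
        except EOFError:
            break  # stdin really is closed: stop the reader

q = queue.Queue()
reader = threading.Thread(target=check_input, args=(q,), daemon=True)
reader.start()
```

The main loop can then poll q exactly as the question's while loop polls the SimpleQueue.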

Avoid Race condition

I have a simple script that creates a folder structure in a specific place.
I am trying to have it create a hidden .okay.dat file in every single folder, in order to allow AWS S3 to successfully upload everything to the cloud, when I came across a race condition.
Can someone advise the best way to avoid it?
#!/usr/bin/python
import os

print 'This script will generate the standard tree structure'

while True:
    aw_directory = "/mnt/sdb1/9999_testfolder/"
    for child in os.listdir(aw_directory):
        project_path = os.path.join(aw_directory, child)
        if os.path.isdir(project_path):
            # GUARD AGAINST RACE CONDITION TO BE ADDED
            # open('001_DATALAB/.OKAY.dat', 'w').close()
            # except OSError as exc:  # guard against race condition
            #     if exc.errno != errno.EEXIST:
            #         raise
            # with open(okaystudio.dat, "w") as f:
            #     f.write("hidden file for AWS3")
            a = os.path.join(project_path, '001_DATALAB/')
            open('001_DATALAB/OKAYstudio.dat', 'w').close()
            b = os.path.join(project_path, '001_DATALAB/001_Rushes')
            if not os.path.exists(a):
                original_umask_a = os.umask(0)
                os.makedirs(a, mode=0777)
                os.umask(original_umask_a)
            if not os.path.exists(b):
                original_umask_b = os.umask(0)
                os.makedirs(b, mode=0777)
                os.umask(original_umask_b)
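No answer is recorded above. The usual way to close the check-then-create window is to attempt the creation and tolerate EEXIST, rather than testing os.path.exists first. A sketch under those assumptions (Python 3 syntax, illustrative helper names, folder names taken from the question):

```python
import errno
import os

def ensure_dir(path, mode=0o777):
    # attempt the create; if another process won the race, EEXIST is fine
    try:
        os.makedirs(path, mode)
    except OSError as exc:
        if exc.errno != errno.EEXIST:
            raise

def touch_okay(project_path):
    datalab = os.path.join(project_path, '001_DATALAB')
    ensure_dir(datalab)
    ensure_dir(os.path.join(datalab, '001_Rushes'))
    # create the marker inside the project, not relative to the CWD
    open(os.path.join(datalab, '.OKAY.dat'), 'w').close()
```

On Python 3.2+ the same thing is a one-liner: os.makedirs(path, exist_ok=True).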

What's the equivalent of moment-yielding (from Tornado) in Twisted?

Part of the implementation of inlineCallbacks is this:
if isinstance(result, Deferred):
    # a deferred was yielded, get the result.
    def gotResult(r):
        if waiting[0]:
            waiting[0] = False
            waiting[1] = r
        else:
            _inlineCallbacks(r, g, deferred)

    result.addBoth(gotResult)
    if waiting[0]:
        # Haven't called back yet, set flag so that we get reinvoked
        # and return from the loop
        waiting[0] = False
        return deferred

    result = waiting[1]
    # Reset waiting to initial values for next loop.  gotResult uses
    # waiting, but this isn't a problem because gotResult is only
    # executed once, and if it hasn't been executed yet, the return
    # branch above would have been taken.
    waiting[0] = True
    waiting[1] = None
As shown, if in an inlineCallbacks-decorated function I make a call like this:
@inlineCallbacks
def myfunction(a, b):
    c = callsomething(a)
    yield twisted.internet.defer.succeed(None)
    print callsomething2(b, c)
this yield will get back to the function immediately (meaning: it won't be re-scheduled but will immediately continue from the yield). This contrasts with Tornado's tornado.gen.moment (which is nothing more than an already-resolved Future with a result of None), which makes the yielder re-schedule itself, regardless of whether the future is already resolved or not.
How can I run a behavior like the one Tornado does when yielding a dummy future like moment?
The equivalent might be something like yielding a Deferred that doesn't fire until "soon". reactor.callLater(0, ...) is the generally accepted way to create a timed event that doesn't run now but will run pretty soon. You can easily get a Deferred that fires based on this using twisted.internet.task.deferLater(reactor, 0, lambda: None).
You may want to look at alternate scheduling tools instead, though (in both Twisted and Tornado). This kind of re-scheduling trick generally only works in small, simple applications. Its effectiveness diminishes the more tasks concurrently employ it.
Consider whether something like twisted.internet.task.cooperate might provide a better solution instead.

python motor mongo cursor length or peek next

Is there a way of determining the length of a Motor MongoDB cursor, or peeking ahead to see if there is a next document (instead of fetch_next, perhaps a has_next)?
And not cursor.size(), which does not take the provided limit() into account.
Basically, I want to add the required JSON comma:
while (yield cursor.fetch_next):
    document = cursor.next_object()
    print document
    if cursor.has_next():  # Sweeet
        print ","
You can use the "alive" property. Try running this:
from tornado import gen, ioloop
import motor

client = motor.MotorClient()

@gen.coroutine
def f():
    collection = client.test.collection
    yield collection.drop()
    yield collection.insert([{'_id': i} for i in range(100)])
    cursor = collection.find()
    while (yield cursor.fetch_next):
        print cursor.next_object(), cursor.alive

ioloop.IOLoop.current().run_sync(f)
It prints "True" until the final document, when alive is "False".
A MotorCursor fetches data from the server in batches. (The MongoDB documentation on batches explains how cursors and batches work for all MongoDB drivers, including Motor.) When "alive" is True it means either that there is more data available on the server, or data is buffered in the MotorCursor, or both.
There is a race condition, however. Say you fetch all but the final document, and before you fetch that last one another client deletes it; then you'll fail to find the last document even though "alive" was "True". Better to rearrange your loop:
import sys

@gen.coroutine
def f():
    collection = client.test.collection
    yield collection.drop()
    yield collection.insert([{'_id': i} for i in range(100)])
    cursor = collection.find()
    if (yield cursor.fetch_next):
        sys.stdout.write(str(cursor.next_object()))
    while (yield cursor.fetch_next):
        sys.stdout.write(", ")
        sys.stdout.write(str(cursor.next_object()))
    print
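The rearranged loop is just the usual "write the first item, then prefix every later item with the separator" pattern. A pure-Python sketch of it on an ordinary iterator (function name is illustrative):

```python
def join_stream(items):
    # write the first item alone, then ", item" for each later one,
    # so no trailing or leading comma ever appears
    out = []
    it = iter(items)
    try:
        out.append(str(next(it)))
    except StopIteration:
        return ""  # empty input: nothing to join
    for item in it:
        out.append(", ")
        out.append(str(item))
    return "".join(out)

print(join_stream([1, 2, 3]))  # prints 1, 2, 3
```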