How do you set a variable from an input in a module?

So, I'm trying to set a variable in my main program to an integer input from a function in a module. I can't work out how to do this.
This is my main program. menu is the name of my module, as I'm using it to display a menu. menulist is where you say what you want the menu to display. That part works OK.
import time, sys
from menu import *

menulist = ['Say hi', 'Say bye', 'Say yes', 'Say no', 'Exit']
choice = int(0)
menu(menulist)
choosing(menulist, choice)
print(choice)  ## This is so I can see what it is
if choice == 1:
    print('Say hi')
elif choice == 2:
    print('Say bye')
elif choice == 3:
    print('Say yes')
elif choice == 4:
    print('Say no')
elif choice == 5:
    sys.exit()
else:  ## This will print if choice doesn't equal what I want
    print('Error')
This is my module.
import time

def menu(menulist):
    print('Menu:')
    time.sleep(1)
    x = 0
    while x < len(menulist):
        y = x + 1
        printout = ' ' + str(y) + '. ' + menulist[x]
        print(printout)
        x = x + 1
        time.sleep(0.3)
    time.sleep(0.7)
    print('')

def choosing(menulist, choice):
    flag = True
    while flag:
        try:
            choice = int(input('Enter the number of your choice: '))
            time.sleep(0.8)
            if choice < 1 or choice > len(menulist):
                print('That wasn\'t an option sorry')
                time.sleep(1)
            else:
                flag = False
        except ValueError:
            print('That wasn\'t an option sorry')
            time.sleep(1)
The menu function works fine, and the choosing function almost does what I want it to, but it won't set 'choice' in my main program to the input when I call it from my module. Sorry if it's something blatantly obvious, I'm pretty new to programming. Thanks

Your module doesn't treat choice as a shared variable. Inside def choosing(...), choice is simply a local name that gets bound to the value of your input (converted to an int).
You do pass choice to choosing, but Python passes a reference to the object, not the variable itself. Since an int is immutable, the function can never change the caller's 0 in place, and the assignment inside the function merely rebinds the local name choice to a new int object. So for all practical purposes, choice is a local variable inside choosing.
Once your program leaves the function choosing, that local variable and its value simply disappear.
To solve this, you shouldn't attempt to make choice global: generally, that is bad design (plenty of exceptions, but still, don't).
Instead, you can simply return choice from the function, and assign it in your main program. Relevant part of your module:
def choosing(menulist):
    while True:
        try:
            choice = int(input('Enter the number of your choice: '))
            time.sleep(0.8)
            if choice < 1 or choice > len(menulist):
                print('That wasn\'t an option sorry')
                time.sleep(1)
            else:
                break
        except ValueError:
            print('That wasn\'t an option sorry')
            time.sleep(1)
    return choice
(I made a slight alteration here: you can simply use the break statement to break out of the continuous while loop, instead of using an extra flag variable.)
Note: there is no need to assign an initial value to choice: the function is structured so that it always has to pass through the line choice = int(.... That, or it exits with an exception other than ValueError.
The relevant part of your main program:
import time, sys
from menu import *
menulist = ['Say hi', 'Say bye', 'Say yes', 'Say no', 'Exit']
menu(menulist)
choice = choosing(menulist)
Together with my above note: no need for an initial value of choice.
Finally: see how choice has disappeared from the parameters of the function, both in the call to and in the definition of choosing.
That last point suggests to me that you might be coming from a different programming language. In Python, it is rare to pass a parameter to a function in order to have the function alter it. You simply return it, because that is easier and clearer. If you have multiple variables to alter, you can, for example, return a tuple: return a, b, c. Or a dict, or whatever you fancy (but the tuple is the starting point for multiple return values).
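For completeness, a minimal sketch of multiple return values via a tuple (the function and variable names here are illustrative, not from the original program):

```python
# A function can "return several values" by returning one tuple.
def read_time():
    h, m, s = 1, 30, 15
    return h, m, s  # packed into a tuple automatically

# Tuple unpacking at the call site assigns each element to its own name.
hours, minutes, seconds = read_time()
print(hours, minutes, seconds)  # 1 30 15
```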


VTK cutter output

I am seeking a solution for connecting all the lines that have the same slope and share a common point. For example, after I load an STL file and cut it using a plane, the cutter output includes the points defining the contour. Connecting them one by one forms one (or multiple) polylines. However, some lines can be merged when their slopes are the same and they share a common point. E.g., [[0,0,0],[0,0,1]] and [[0,0,1],[0,0,2]] can be represented by the single line [[0,0,0],[0,0,2]].
I wrote a function that can analyse all the lines and connect them if they can be merged. But when the number of lines is huge, this process is slow. I am thinking: in the VTK pipeline, is there a way to do the line merging?
Cheers!
plane = vtk.vtkPlane()
plane.SetOrigin([0,0,5])
plane.SetNormal([0,0,1])
cutter = vtk.vtkCutter()
cutter.SetCutFunction(plane)
cutter.SetInput(triangleFilter.GetOutput())
cutter.Update()
cutStrips = vtk.vtkStripper()
cutStrips.SetInputConnection(cutter.GetOutputPort())
cutStrips.Update()
cleanDataFilter = vtk.vtkCleanPolyData()
cleanDataFilter.AddInput(cutStrips.GetOutput())
cleanDataFilter.Update()
cleanData = cleanDataFilter.GetOutput()
print cleanData.GetPoint(0)
print cleanData.GetPoint(1)
print cleanData.GetPoint(2)
print cleanData.GetPoint(3)
print cleanData.GetPoint(4)
The output is:
(0.0, 0.0, 5.0)
(5.0, 0.0, 5.0)
(10.0, 0.0, 5.0)
(10.0, 5.0, 5.0)
(10.0, 10.0, 5.0)
Connecting the above points one by one forms a polyline representing the cut result. As we can see, the lines [point0, point1] and [point1, point2] can be merged.
Below is the code for merging the lines:
Assume that the LINES are represented by list: [[(p0),(p1)],[(p1),(p2)],[(p2),(p3)],...]
appended = 0
CurrentLine = LINES[0]
CurrentConnectedLine = CurrentLine
tempLineCollection = LINES[1:len(LINES)]
while True:
    for HL in tempLineCollection:
        QCoreApplication.processEvents()
        if checkParallelAndConnect(CurrentConnectedLine, HL):
            appended = 1
            LINES.remove(HL)
            CurrentConnectedLine = ConnectLines(CurrentConnectedLine, HL)
    processedPool.append(CurrentConnectedLine)
    if len(tempLineCollection) == 1:
        processedPool.append(tempLineCollection[0])
    LINES.remove(CurrentLine)
    if len(LINES) >= 2:
        CurrentLine = LINES[0]
        CurrentConnectedLine = CurrentLine
        tempLineCollection = LINES[1:len(LINES)]
        appended = 0
    else:
        break
Solution:
I figured out a way of further accelerating this process using some VTK data structures. I found out that a polyline is stored in a cell, which can be checked by using GetCellType(). Since the point order for a polyline is already sorted, we do not need to search globally for which lines are colinear with the current one. For each point on the polyline, I just need to check point[i-1], point[i], point[i+1]. If they are colinear, the end of the line is updated to the next point. This process continues until the end of the polyline is reached. The speed increases by a huge amount compared with the global search approach.
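That per-polyline pass might look like the following sketch in plain Python (the helper name and tolerance are illustrative; colinearity is tested with a cross product, and the points are assumed to arrive already ordered along the polyline, as the stripper output is):

```python
def merge_colinear(points, tol=1e-9):
    # Collapse runs of colinear points in an ordered 3D polyline:
    # keep point[i] only when (previous kept point, point[i], point[i+1])
    # are NOT colinear, i.e. the direction changes at point[i].
    def colinear(p, q, r):
        # Cross product of (q - p) and (r - q); a zero vector means colinear.
        ux, uy, uz = q[0] - p[0], q[1] - p[1], q[2] - p[2]
        vx, vy, vz = r[0] - q[0], r[1] - q[1], r[2] - q[2]
        cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
        return abs(cx) < tol and abs(cy) < tol and abs(cz) < tol

    merged = [points[0]]
    for i in range(1, len(points) - 1):
        if not colinear(merged[-1], points[i], points[i + 1]):
            merged.append(points[i])
    merged.append(points[-1])
    return merged

# The five points from the cutter output above:
pts = [(0, 0, 5), (5, 0, 5), (10, 0, 5), (10, 5, 5), (10, 10, 5)]
print(merge_colinear(pts))  # [(0, 0, 5), (10, 0, 5), (10, 10, 5)]
```

This is O(n) per polyline, which is why it beats the global pairwise search.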
Not sure if it is the main source of slowness (that depends on how many positive hits on colinearity you get), but removing items from a vector is costly (O(n)), since it requires shifting the rest of the vector; you should avoid it. Even without colinearity hits, the LINES.remove(CurrentLine) call is surely slowing things down, and there isn't really any need for it: just leave the vector untouched, write the final results to a new vector (processedPool), and discard the LINES vector at the end. You can modify your algorithm by keeping a bool array (vector), initialized to "false" for each item; then, instead of actually removing a line, you only mark it as "true" and skip all lines marked "true", i.e. something like this (I don't speak Python, so the syntax is not accurate):
wasRemoved = bool vector of the size of LINES, initialized at false for each entry
for CurrentLineIndex = 0; CurrentLineIndex < sizeof(LINES); CurrentLineIndex++
    if (wasRemoved[CurrentLineIndex])
        continue  // skip a segment that was already removed
    CurrentConnectedLine = LINES[CurrentLineIndex]
    for HLIndex = CurrentLineIndex + 1; HLIndex < sizeof(LINES); HLIndex++:
        if (wasRemoved[HLIndex])
            continue;
        HL = LINES[HLIndex]
        QCoreApplication.processEvents()
        if checkParallelAndConnect(CurrentConnectedLine, HL):
            wasRemoved[HLIndex] = true
            CurrentConnectedLine = ConnectLines(CurrentConnectedLine, HL)
    processedPool.append(CurrentConnectedLine)
    wasRemoved[CurrentLineIndex] = true  // technically not needed since you won't go back in the vector anyway
LINES = processedPool
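A runnable Python translation of that bool-array scheme could look like this (a sketch: check_parallel_and_connect and connect_lines are simplified stand-ins for the original helpers, segments are pairs of 3D point tuples, and the processEvents call is dropped):

```python
def cross(u, v):
    # 3D cross product; the zero vector means u and v are parallel.
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def check_parallel_and_connect(a, b, tol=1e-9):
    # True when segments a and b share an endpoint and are parallel.
    if not (set(a) & set(b)):
        return False
    u = tuple(a[1][i] - a[0][i] for i in range(3))
    v = tuple(b[1][i] - b[0][i] for i in range(3))
    return all(abs(c) < tol for c in cross(u, v))

def connect_lines(a, b):
    # Merge two touching colinear segments into one spanning segment.
    # For colinear points, lexicographic order is monotone along the line,
    # so min/max give the two extreme endpoints.
    pts = sorted(set(a) | set(b))
    return (pts[0], pts[-1])

def merge_lines(lines):
    processed = []
    was_removed = [False] * len(lines)   # mark instead of remove: O(1)
    for i in range(len(lines)):
        if was_removed[i]:
            continue
        current = lines[i]
        for j in range(i + 1, len(lines)):
            if was_removed[j]:
                continue
            if check_parallel_and_connect(current, lines[j]):
                was_removed[j] = True
                current = connect_lines(current, lines[j])
        processed.append(current)
    return processed

segs = [((0, 0, 0), (0, 0, 1)), ((0, 0, 1), (0, 0, 2)), ((0, 0, 0), (1, 0, 0))]
print(merge_lines(segs))  # [((0, 0, 0), (0, 0, 2)), ((0, 0, 0), (1, 0, 0))]
```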
BTW, the really correct data structure for LINES in this kind of algorithm would be a linked list, since then you would have O(1) complexity for removal and you wouldn't need the boolean array. But a quick googling shows that that's not how lists are implemented in Python, and I don't know whether switching would interfere with other parts of your program. Alternatively, using a set might make it faster (though I would expect times similar to my "bool array" solution); see python 2.7 set and list remove time complexity
If this does not do the trick, I suggest you measure times of individual parts of the program to find the bottleneck.

How can I have increasing named variables?

I want a bunch of buttons, using QtGui, to each have their own unique values, but when looping to create a grid of them, the button variable is overwritten.
I was trying to get something that would have each button have its own variable, like grid_btn01, grid_btn02, and so on.
Ideally, it would be like this
for x in range(gridx):
    grid_btn + str(x) = GridBtn(self, x, y, btn_id)
But of course, this doesn't work.
Consider using a Python dictionary.
Also, I'm not familiar with Qt, but double-check the return value of this function; maybe btn_id is the variable you should store:
buttons = {}
for x in range(gridx):
    buttons[x] = GridBtn(self, x, y, btn_id)
What you're asking may technically be possible in Python, but it's definitely the wrong approach.
Use a list instead:
grid_btns = []
for x in range(gridx):
    y = ...
    grid_btns.append(GridBtn(self, x, y, btn_id))

matlab subsref: {} with string argument fails, why?

There are a few implementations of a hash or dictionary class in the Mathworks File Exchange repository. All that I have looked at use parentheses overloading for key referencing, e.g.
d = Dict;
d('foo') = 'bar';
y = d('foo');
which seems a reasonable interface. It would be preferable, though, if you want to easily have dictionaries which contain other dictionaries, to use braces {} instead of parentheses, as this allows you to get around MATLAB's (arbitrary, it seems) syntax limitation that multiple parentheses are not allowed but multiple braces are allowed, i.e.
t{1}{2}{3} % is legal MATLAB
t(1)(2)(3) % is not legal MATLAB
So if you want to easily be able to nest dictionaries within dictionaries,
dict{'key1'}{'key2'}{'key3'}
as is a common idiom in Perl and is possible and frequently useful in other languages including Python, then unless you want to use n-1 intermediate variables to extract a dictionary entry n layers deep, this seems a good choice. And it would seem easy to rewrite the class's subsref and subsasgn operations to do the same thing for {} as they previously did for (), and everything should work.
Except it doesn't when I try it.
Here's my code. (I've reduced it to a minimal case. No actual dictionary is implemented here, each object has one key and one value, but this is enough to demonstrate the problem.)
classdef TestBraces < handle
    properties
        % not a full hash table implementation, obviously
        key
        value
    end
    methods(Access = public)
        function val = subsref(obj, ref)
            % Re-implement dot referencing for methods.
            if strcmp(ref(1).type, '.')
                % User trying to access a method
                % Methods access
                if ismember(ref(1).subs, methods(obj))
                    if length(ref) > 1
                        % Call with args
                        val = obj.(ref(1).subs)(ref(2).subs{:});
                    else
                        % No args
                        val = obj.(ref.subs);
                    end
                    return;
                end
                % User trying to access something else.
                error(['Reference to non-existent property or method ''' ref.subs '''']);
            end
            switch ref.type
                case '()'
                    error('() indexing not supported.');
                case '{}'
                    theKey = ref.subs{1};
                    if isequal(obj.key, theKey)
                        val = obj.value;
                    else
                        error('key %s not found', theKey);
                    end
                otherwise
                    error('Should never happen')
            end
        end
        function obj = subsasgn(obj, ref, value)
            %Dict/SUBSASGN Subscript assignment for Dict objects.
            %
            % See also: Dict
            %
            if ~strcmp(ref.type, '{}')
                error('() and dot indexing for assignment not supported.');
            end
            % Vectorized calls not supported
            if length(ref.subs) > 1
                error('Dict only supports storing key/value pairs one at a time.');
            end
            theKey = ref.subs{1};
            obj.key = theKey;
            obj.value = value;
        end % subsasgn
    end
end
Using this code, I can assign as expected:
t = TestBraces;
t{'foo'} = 'bar'
(And it is clear that the assignment works, from the default display output for t.) So subsasgn appears to work correctly.
But I can't retrieve the value (subsref doesn't work):
t{'foo'}
??? Error using ==> subsref
Too many output arguments.
The error message makes no sense to me, and a breakpoint at the first executable line of my subsref handler is never hit, so at least superficially this looks like a MATLAB issue, not a bug in my code.
Clearly string arguments to () parenthesis subscripts are allowed, since this works fine if you change the code to work with () instead of {}. (Except then you can't nest subscript operations, which is the object of the exercise.)
Either insight into what I'm doing wrong in my code, any limitations that make what I'm doing unfeasible, or alternative implementations of nested dictionaries would be appreciated.
Short answer, add this method to your class:
function n = numel(obj, varargin)
    n = 1;
end
EDIT: The long answer.
Despite the way that subsref's function signature appears in the documentation, it's actually a varargout function - it can produce a variable number of output arguments. Both brace and dot indexing can produce multiple outputs, as shown here:
>> c = {1,2,3,4,5};
>> [a,b,c] = c{[1 3 5]}
a =
     1
b =
     3
c =
     5
The number of outputs expected from subsref is determined based on the size of the indexing array. In this case, the indexing array is size 3, so there's three outputs.
Now, look again at:
t{'foo'}
What's the size of the indexing array? Also 3. MATLAB doesn't care that you intend to interpret this as a string instead of an array. It just sees that the input is size 3 and your subsref can only output 1 thing at a time. So, the arguments mismatch. Fortunately, we can correct things by changing the way that MATLAB determines how many outputs are expected by overloading numel. Quoted from the doc link:
It is important to note the significance of numel with regards to the overloaded subsref and subsasgn functions. In the case of the overloaded subsref function for brace and dot indexing (as described in the last paragraph), numel is used to compute the number of expected outputs (nargout) returned from subsref. For the overloaded subsasgn function, numel is used to compute the number of expected inputs (nargin) to be assigned using subsasgn. The nargin value for the overloaded subsasgn function is the value returned by numel plus 2 (one for the variable being assigned to, and one for the structure array of subscripts).
As a class designer, you must ensure that the value of n returned by the built-in numel function is consistent with the class design for that object. If n is different from either the nargout for the overloaded subsref function or the nargin for the overloaded subsasgn function, then you need to overload numel to return a value of n that is consistent with the class' subsref and subsasgn functions. Otherwise, MATLAB produces errors when calling these functions.
And there you have it.

Uniqueness of global Python objects void in sub-interpreters?

I have a question about the inner workings of Python sub-interpreter initialization (from the Python/C API) and the Python id() function. More precisely, about the handling of global module objects in WSGI Python containers (like uWSGI used with nginx, and mod_wsgi on Apache).
The following code works as expected (isolated) in both of the mentioned environments, but I cannot explain to myself why the id() function always returns the same value per variable, regardless of the process/sub-interpreter in which it is executed.
from __future__ import print_function
import os, sys

def log(*msg):
    print(">>>", *msg, file=sys.stderr)

class A:
    def __init__(self, x):
        self.x = x
    def __str__(self):
        return self.x
    def set(self, x):
        self.x = x

a = A("one")
log("class instantiated.")

def application(environ, start_response):
    output = "pid = %d\n" % os.getpid()
    output += "id(A) = %d\n" % id(A)
    output += "id(a) = %d\n" % id(a)
    output += "str(a) = %s\n\n" % a
    a.set("two")
    status = "200 OK"
    response_headers = [
        ('Content-type', 'text/plain'), ('Content-Length', str(len(output)))
    ]
    start_response(status, response_headers)
    return [output]
I have tested this code in uWSGI with one master process and 2 workers, and in mod_wsgi using daemon mode with two processes and one thread per process. The typical output is:
pid = 15278
id(A) = 139748093678128
id(a) = 139748093962360
str(a) = one
on first load, then:
pid = 15282
id(A) = 139748093678128
id(a) = 139748093962360
str(a) = one
on second, and then
pid = 15278 | pid = 15282
id(A) = 139748093678128
id(a) = 139748093962360
str(a) = two
on every other. As you can see, id() (memory location) of both the class and the class instance remains the same in both processes (first/second load above), while at the same time class instances live in a separate context (otherwise the second request would show "two" instead of "one")!
I suspect the answer might be hinted at by the Python docs for id(object):
Return the "identity" of an object. This is an integer (or long integer) which is guaranteed to be unique and constant for this object during its lifetime. Two objects with non-overlapping lifetimes may have the same id() value.
But if that indeed is the reason, I'm troubled by the next statement that claims the id() value is object's address!
While I appreciate the fact this could very well be just a Python/C API "clever" feature that solves (or rather fixes) a problem of caching object references (pointers) in 3rd party extension modules, I still find this behavior to be inconsistent with... well, common sense. Could someone please explain this?
I've also noticed mod_wsgi imports the module in each process (i.e. twice), while uWSGI is importing the module only once for both processes. Since the uWSGI master process does the importing, I suppose it seeds the children with copies of that context. Both workers work independently afterwards (deep copy?), while at the same time using the same object addresses, seemingly. (Also, a worker gets reinitialized to the original context upon reload.)
I apologize for such a long post, but I wanted to give enough details.
Thank you!
It's not entirely clear what you're asking; I'd give a more concise answer if the question was more specific.
First, the id of an object is, in fact--at least in CPython--its address in memory. That's perfectly normal: two objects in the same process at the same time can't share an address, and an object's address never changes in CPython, so the address works neatly as an id. I don't know how this violates common sense.
Next, note that a backend process may be spawned in two very distinct ways:
A generic WSGI backend handler will fork processes, and then each of the processes will start a backend. This is simple and language-agnostic, but wastes a lot of memory and wastes time loading the backend code repeatedly.
A more advanced backend will load the Python code once, and then fork copies of the server after it's loaded. This causes the code to be loaded only once, which is much faster and reduces memory waste significantly. This is how production-quality WSGI servers work.
However, the end result in both of these cases is identical: separate, forked processes.
So, why are you ending up with the same IDs? That depends on which of the above methods is in use.
With a generic WSGI handler, it's happening simply because each process is doing essentially the same thing. So long as processes are doing the same thing, they'll tend to end up with the same IDs; at some point they'll diverge and this will no longer happen.
With a pre-loading backend, it's happening because this initial code happens only once, before the server forks, so it's guaranteed to have the same ID.
However, either way, once the fork happens they're separate objects, in separate contexts. There's no significance to objects in separate processes having the same ID.
This is simple to explain by way of a demonstration. You see, when uwsgi creates a new process, it forks the interpreter. Now, forks have interesting memory properties:
import os, time

if os.fork() == 0:
    print "child first " + str(hex(id(os)))
    time.sleep(2)
    os.attr = 'test'
    print "child second " + str(hex(id(os)))
else:
    time.sleep(1)
    print "parent first " + str(hex(id(os)))
    time.sleep(2)
    print "parent second " + str(hex(id(os)))
    print os.attr
Output:
child first 0xb782414cL
parent first 0xb782414cL
child second 0xb782414cL
parent second 0xb782414cL
Traceback (most recent call last):
File "test.py", line 13, in <module>
print os.attr
AttributeError: 'module' object has no attribute 'attr'
Although the objects seem to reside at the same memory address, they are different objects; this is not Python's doing, but the OS's.
Edit: I suspect the reason that mod_wsgi imports the module twice is that it creates further processes by invoking python rather than by forking. uWSGI's approach is better because it can use less memory: fork's page sharing is copy-on-write (COW).
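For readers on Python 3, the same point can be sketched like this (assuming a POSIX system; the dict and variable names are illustrative): after fork(), both processes report the same id() for a module-level object, yet a mutation in the child never becomes visible in the parent.

```python
import os

data = {"state": "original"}   # created before the fork
r, w = os.pipe()

pid = os.fork()
if pid == 0:
    # Child: report the object's id to the parent, then mutate it.
    os.write(w, str(id(data)).encode())
    data["state"] = "changed in child"  # copy-on-write: touches the child's page only
    os._exit(0)

# Parent: wait for the child, then compare.
os.waitpid(pid, 0)
child_id = int(os.read(r, 64).decode())
print(child_id == id(data))   # True: same "address" in both processes
print(data["state"])          # 'original': the child's change is invisible here
```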

Is this use of isinstance pythonic/"good"?

A side effect of this question is that I was lead to this post, which states:
Whenever isinstance is used, control flow forks; one type of object goes down one code path, and other types of object go down the other --- even if they implement the same interface!
and suggests that this is a bad thing.
However, I've used code like this before, in what I thought was an OO way. Something like the following:
class MyTime(object):
    def __init__(self, h=0, m=0, s=0):
        self.h = h
        self.m = m
        self.s = s
    def __iadd__(self, other):
        if isinstance(other, MyTime):
            self.h += other.h
            self.m += other.m
            self.s += other.s
        elif isinstance(other, int):
            self.h += other / 3600
            other %= 3600
            self.m += other / 60
            other %= 60
            self.s += other
        else:
            raise TypeError('Addition not supported for ' + type(other).__name__)
        return self
So my question:
Is this use of isinstance "pythonic" and "good" OOP?
Not in general. An object's interface should define its behavior. In your example above, it would be better if other used a consistent interface:
def __iadd__(self, other):
    self.h += other.h
    self.m += other.m
    self.s += other.s
    return self
Even though this looks like it is less functional, conceptually it is much cleaner. Now you leave it to the language to throw an exception if other does not match the interface. You can solve the problem of adding int times by - for example - creating a MyTime "constructor" using the integer's "interface". This keeps the code cleaner and leaves fewer surprises for the next guy.
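That "MyTime constructor from an int" idea might be sketched like this (from_seconds is an invented name, not from the original code):

```python
class MyTime(object):
    def __init__(self, h=0, m=0, s=0):
        self.h, self.m, self.s = h, m, s

    @classmethod
    def from_seconds(cls, total):
        # Invented alternate constructor: build a MyTime from a number of seconds.
        h, rem = divmod(int(total), 3600)
        m, s = divmod(rem, 60)
        return cls(h, m, s)

    def __iadd__(self, other):
        # One consistent interface: other must expose h, m and s.
        self.h += other.h
        self.m += other.m
        self.s += other.s
        return self

t = MyTime(1, 0, 30)
t += MyTime.from_seconds(90)   # instead of t += 90
print(t.h, t.m, t.s)           # 1 1 60 (no carry normalization in this sketch)
```

The int-handling branch moves out of __iadd__ and into an explicitly named constructor, so the operator itself stays free of isinstance checks.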
Others may disagree, but I feel there may be a place for isinstance if you are using reflection in special cases such as when implementing a plugin architecture.
isinstance, since Python 2.6, has become quite nice as long as you follow the "key rule of good design" as explained in the classic "gang of 4" book: design to an interface, not to an implementation. Specifically, 2.6's new Abstract Base Classes are the only things you should be using for isinstance and issubclass checks, not concrete "implementation" types.
Unfortunately there is no abstract class in 2.6's standard library that summarizes the concept "this number is Integral", but you can make one such ABC by checking whether the class has the special method __index__. (Don't use __int__, which is also supplied by definitely non-integral classes such as float and str; __index__ was introduced specifically to assert "instances of this class can be made into integers with no loss of important information".) Then use isinstance on that "interface" (abstract base class) rather than on the specific implementation int, which is way too restrictive.
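A sketch of such a check using operator.index, which dispatches to __index__ under the hood (the helper name is invented):

```python
import operator

def as_integral_seconds(value):
    # Accept anything that declares itself integral via __index__
    # (int, bool, numpy integer types, ...); reject float, str, etc.
    try:
        return operator.index(value)
    except TypeError:
        raise TypeError('%s is not an integral number of seconds'
                        % type(value).__name__)

print(as_integral_seconds(90))   # 90
# as_integral_seconds(1.5) and as_integral_seconds('90') raise TypeError
```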
You could also make an ABC summarizing the concept of "having m, h and s attributes" (might be useful to accept attribute synonyms so as to tolerate a datetime.time or maybe timedelta instance, for example -- not sure whether you're representing an instant or a lapse of time with your MyTime class, the name suggests the former but the existence of addition suggests the latter), again to avoid the very restrictive implications of isinstance with a concrete implementation cass.
The first use is fine, the second is not. Pass the argument to int() instead so that you can use number-like types.
To elaborate further on the comment I made under Justin's answer, I would keep his code for __iadd__ (i.e., so MyTime objects can only be added to other MyTime objects) and rewrite __init__ in this way:
def __init__(self, **params):
    if 'sec' in params:
        t = params['sec']
        self.h = t / 3600
        t %= 3600
        self.m = t / 60
        t %= 60
        self.s = t
    elif 'time' in params:
        t = params['time']
        self.h = t.h
        self.m = t.m
        self.s = t.s
    else:
        if params:
            raise TypeError("__init__() got unexpected keyword argument '%s'" % params.keys()[0])
        else:
            raise TypeError("__init__() expected keyword argument 'sec' or 'time'")
# example usage
t1 = MyTime(sec=30)
t2 = MyTime(sec=60)
t2 += t1
t3 = MyTime(time=t1)
I just tried to pick short keyword arguments, but you may want to get more descriptive than I did.