Pybind11 multiprocessing hangs - python-multiprocessing

I'm writing an application that uses Pybind11 to embed a Python interpreter (Windows, 64 bit, Visual C++ 2017). From Python, I need to spawn multiple processes, but it doesn't seem to work. I tried the following code as a test:
import multiprocessing
import os
import sys
import time

print("This is the name of the script: ", sys.argv[0])
print("Number of arguments: ", len(sys.argv))
print("The arguments are: ", str(sys.argv))

prefix = str(os.getpid()) + "-"

if len(sys.argv) > 1:
    __name__ = "__mp_main__"

def print_cube(num):
    """
    function to print cube of given num
    """
    print("Cube: {}".format(num * num * num))

def print_square(num):
    """
    function to print square of given num
    """
    print("Square: {}".format(num * num))

print(__name__)

if __name__ == "__main__":
    print(prefix, "checkpoint 1")
    # creating processes
    p1 = multiprocessing.Process(target=print_square, args=(10, ))
    p1.daemon = True
    p2 = multiprocessing.Process(target=print_cube, args=(10, ))
    # starting process 1
    p1.start()
    print(prefix, "checkpoint 2")
    # starting process 2
    p2.start()
    print(prefix, "checkpoint 3")
    # wait until process 1 is finished
    print(prefix, "checkpoint 4")
    p1.join()
    print(prefix, "checkpoint 5")
    # wait until process 2 is finished
    p2.join()
    print(prefix, "checkpoint 6")
    # both processes finished
    print("Done!")

print(prefix, "checkpoint 7")
time.sleep(10)
Running it with Python from the command prompt, I obtain:
This is the name of the script: mp.py
Number of arguments: 1
The arguments are: ['mp.py']
__main__
12872- checkpoint 1
12872- checkpoint 2
This is the name of the script: C:\tmp\mp.py
Number of arguments: 1
The arguments are: ['C:\\tmp\\mp.py']
__mp_main__
7744- checkpoint 7
Square: 100
12872- checkpoint 3
12872- checkpoint 4
12872- checkpoint 5
This is the name of the script: C:\tmp\mp.py
Number of arguments: 1
The arguments are: ['C:\\tmp\\mp.py']
__mp_main__
15020- checkpoint 7
Cube: 1000
12872- checkpoint 6
Done!
12872- checkpoint 7
which is correct. If I try the same from a C++ project with Pybind11, the output is:
This is the name of the script: C:\AGPX\Documenti\TestPyBind\x64\Debug\TestPyBind.exe
Number of arguments: 1
The arguments are: ['C:\\AGPX\\Documenti\\TestPyBind\\x64\\Debug\\TestPyBind.exe']
__main__
4440- checkpoint 1
This is the name of the script: C:\AGPX\Documenti\TestPyBind\x64\Debug\TestPyBind.exe
Number of arguments: 4
The arguments are: ['C:\\AGPX\\Documenti\\TestPyBind\\x64\\Debug\\TestPyBind.exe', '-c', 'from multiprocessing.spawn import spawn_main; spawn_main(parent_pid=4440, pipe_handle=128)', '--multiprocessing-fork']
__mp_main__
10176- checkpoint 7
Note that, in this case, the variable __name__ is always set to '__main__', so I have to change it manually (for the spawned processes) to '__mp_main__' (I can detect the child processes thanks to sys.argv). This is the first strange behaviour.
The parent process has pid 4440 and I can see the process in Process Explorer.
The first child process has pid 10176; it reaches the final 'checkpoint 7' and then disappears from Process Explorer. However, the main process never prints 'checkpoint 2': it looks like it hangs on 'p1.start()', and I cannot understand why.
The complete C++ code is:
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <pybind11/stl_bind.h>
#include <pybind11/embed.h>
#include <iostream>

namespace py = pybind11;
using namespace py::literals;

int wmain(int argc, wchar_t **argv)
{
    py::initialize_interpreter();
    PySys_SetArgv(argc, argv);
    std::string pyCode = std::string(R"(
import multiprocessing
import os
import sys
import time

print("This is the name of the script: ", sys.argv[0])
print("Number of arguments: ", len(sys.argv))
print("The arguments are: ", str(sys.argv))

prefix = str(os.getpid()) + "-"

if len(sys.argv) > 1:
    __name__ = "__mp_main__"

def print_cube(num):
    """
    function to print cube of given num
    """
    print("Cube: {}".format(num * num * num))

def print_square(num):
    """
    function to print square of given num
    """
    print("Square: {}".format(num * num))

print(__name__)

if __name__ == "__main__":
    print(prefix, "checkpoint 1")
    # creating processes
    p1 = multiprocessing.Process(target=print_square, args=(10, ))
    p1.daemon = True
    p2 = multiprocessing.Process(target=print_cube, args=(10, ))
    # starting process 1
    p1.start()
    print(prefix, "checkpoint 2")
    # starting process 2
    p2.start()
    print(prefix, "checkpoint 3")
    # wait until process 1 is finished
    print(prefix, "checkpoint 4")
    p1.join()
    print(prefix, "checkpoint 5")
    # wait until process 2 is finished
    p2.join()
    print(prefix, "checkpoint 6")
    # both processes finished
    print("Done!")

print(prefix, "checkpoint 7")
time.sleep(10)
)");
    try
    {
        py::exec(pyCode);
    } catch (const std::exception &e) {
        std::cout << e.what();
    }
    py::finalize_interpreter();
}
Can anyone explain how to overcome this problem, please?
Thanks in advance (and I apologize for my English).

OK, thanks to this link: https://blender.stackexchange.com/questions/8530/how-to-get-python-multiprocessing-module-working-on-windows, I solved this strange issue (which seems to be Windows related).
It's not a Pybind11 issue, but one with the Python C API itself.
You can solve it by setting sys.executable equal to the path of the Python interpreter executable (python.exe), writing the Python code to a file, and storing that path in the __file__ variable. That is, I have to add:
import sys
sys.executable = "C:\\Users\\MyUserName\\Miniconda3\\python.exe"
__file__ = "C:\\tmp\\run.py"
and I need to write the Python code to the file specified by __file__, that is:
FILE *f = nullptr;
fopen_s(&f, "c:\\tmp\\run.py", "wt");
fprintf(f, "%s", pyCode.c_str());
fclose(f);
just before executing py::exec(pyCode).
In addition, the code:
if len(sys.argv) > 1:
    __name__ = "__mp_main__"
is no longer necessary. However, note that with this approach the spawned processes are no longer embedded, so, unfortunately, you cannot pass a C++ module to them directly.
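As a side note, the documented hook for choosing the child interpreter on Windows is multiprocessing.set_executable(), which has the same effect as assigning sys.executable before starting children and is intended for embedders. A minimal sketch (the interpreter path here is an assumption for illustration, not a fixed location):
import multiprocessing

# Hypothetical path: point this at the python.exe of an environment
# compatible with the embedded interpreter.
multiprocessing.set_executable("C:\\Users\\MyUserName\\Miniconda3\\python.exe")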
Hope this can help someone else.

Related

Redis not returning result after upgrading Celery from 3.1 to 4.0

I recently upgraded my Celery installation to 4.0. After a few days of wrestling with the upgrade process, I finally got it to work... sort of. Some tasks will return, but the final task will not.
I have a class, SFF, that takes in and parses a file:
# Constructor with I/O file
def __init__(self, file):
    # File data that's gonna get used a lot
    sffDescriptor = file.fileno()
    fileName = abspath(file.name)
    # Get the pointer to the file
    filePtr = mmap.mmap(sffDescriptor, 0, flags=mmap.MAP_SHARED, prot=mmap.PROT_READ)
    # Get the header info
    hdr = filePtr.read(HEADER_SIZE)
    self.header = SFFHeader._make(unpack(HEADER_FMT, hdr))
    # Read in the palette maps
    print self.header.onDemandDataSize
    print self.header.onLoadDataSize
    palMapsResult = getPalettes.delay(fileName, self.header.palBankOff - HEADER_SIZE, self.header.onDemandDataSize, self.header.numPals)
    # Read the sprite list nodes
    nodesStart = self.header.sprListOff
    nodesEnd = self.header.palBankOff
    print nodesEnd - nodesStart
    sprNodesResult = getSprNodes.delay(fileName, nodesStart, nodesEnd, self.header.numSprites)
    # Get palette data
    self.palettes = palMapsResult.get()
    # Get sprite data
    spriteNodes = sprNodesResult.get()
    # TESTING
    spritesResultSet = ResultSet([])
    numSpriteNodes = len(spriteNodes)
    # Split the nodes into chunks of size 32 elements
    for x in xrange(0, numSpriteNodes, 32):
        spritesResult = getSprites.delay(spriteNodes, x, x+32, fileName, self.palettes, self.header.palBankOff, self.header.onDemandDataSizeTotal)
        spritesResultSet.add(spritesResult)
        break # REMEMBER TO REMOVE FOR ENTIRE SFF
    self.sprites = spritesResultSet.join_native()
It doesn't matter if it's a single task that returns the entire spritesResult, or if I split it using a ResultSet, the outcome is always the same: the Python console I'm using just hangs at either spritesResultSet.join_native() or spritesResult.get() (depending on how I format it).
Here is the task in question:
@task
def getSprites(nodes, start, end, fileName, palettes, palBankOff, onDemandDataSizeTotal):
    sprites = []
    with open(fileName, "rb") as file:
        sffDescriptor = file.fileno()
        sffData = mmap.mmap(sffDescriptor, 0, flags=mmap.MAP_SHARED, prot=mmap.PROT_READ)
        for node in nodes[start:end]:
            sprListNode = dict(SprListNode._make(node)._asdict()) # Need to convert it to a dict since values may change.
            #print node
            #print sprListNode
            # If it's a linked sprite, the data length is 0, so get the linked index.
            if sprListNode['dataLen'] == 0:
                sprListNodeTemp = SprListNode._make(nodes[sprListNode['index']])
                sprListNode['dataLen'] = sprListNodeTemp.dataLen
                sprListNode['dataOffset'] = sprListNodeTemp.dataOffset
                sprListNode['compression'] = sprListNodeTemp.compression
            # What does the offset need to be?
            dataOffset = sprListNode['dataOffset']
            if sprListNode['loadMode'] == 0:
                dataOffset += palBankOff #- HEADER_SIZE
            elif sprListNode['loadMode'] == 1:
                dataOffset += onDemandDataSizeTotal #- HEADER_SIZE
            #print sprListNode
            # Seek to the data location and "read" it in. First 4 bytes are just the image length
            start = dataOffset + 4
            end = dataOffset + sprListNode['dataLen']
            #sffData.seek(start)
            compressedSprite = sffData[start:end]
            # Create the sprite
            sprite = Sprite(sprListNode, palettes[sprListNode['palNo']], np.fromstring(compressedSprite, dtype=np.uint8))
            sprites.append(sprite)
    return json.dumps(sprites, cls=SpriteJSONEncoder)
I know it reaches the return statement, because if I put a print right above it, it will print in the Celery window. I also know that the task is running to completion because I get the following message from the worker:
[2016-11-16 00:03:33,639: INFO/PoolWorker-4] Task framedatabase.tasks.getSprites[285ac9b1-09b4-4cf1-a251-da6212863832] succeeded in 0.137236133218s: '[{"width": 120, "palNo": 30, "group": 9000, "xAxis": 0, "yAxis": 0, "data":...'
Here are my celery settings in settings.py:
# Celery settings
BROKER_URL='redis://localhost:1717/1'
CELERY_RESULT_BACKEND='redis://localhost:1717/0'
CELERY_IGNORE_RESULT=False
CELERY_IMPORTS = ("framedatabase.tasks", )
... and my celery.py:
from __future__ import absolute_import
import os
from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'framedatabase.settings')

from django.conf import settings  # noqa

app = Celery('framedatabase', backend='redis://localhost:1717/1', broker="redis://localhost:1717/0",
             include=['framedatabase.tasks'])

# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
Found the problem. Apparently it was leading to deadlock as mentioned in the section "Avoid launching synchronous subtasks" in the Celery documentation here: http://docs.celeryproject.org/en/latest/userguide/tasks.html#tips-and-best-practices
So I got rid of the line:
sprNodesResult.get()
And changed the final result to a chain:
self.sprites = chain(getSprNodes.s(fileName, nodesStart, nodesEnd, self.header.numSprites),
                     getSprites.s(0, 32, fileName, self.palettes, self.header.palBankOff, self.header.onDemandDataSizeTotal))().get()
And it works! Now I just have to find a way to split this the way I want!
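For reference, the key property of chain is that each task's return value is passed as the first argument of the next task, so the only blocking .get() happens in client code, outside any task. A minimal sketch, assuming an existing Celery app named app (the task names fetch and summarize are made up for illustration, not tasks from this project):
from celery import chain

@app.task
def fetch(url):
    # Placeholder body; whatever is returned feeds the next link in the chain.
    return url.upper()

@app.task
def summarize(text, limit):
    # 'text' receives fetch's return value; 'limit' comes from the signature below.
    return text[:limit]

# fetch's result flows into summarize; .get() is called only here, in client code.
result = chain(fetch.s("http://example.com"), summarize.s(10))().get()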

Is there a way to get tensorflow tf.Print output to appear in Jupyter Notebook output

I'm using the tf.Print op in a Jupyter notebook. It works as required, but will only print the output to the console, without printing in the notebook. Is there any way to get around this?
An example would be the following (in a notebook):
import tensorflow as tf
a = tf.constant(1.0)
a = tf.Print(a, [a], 'hi')
sess = tf.Session()
a.eval(session=sess)
That code will print 'hi[1]' in the console, but nothing in the notebook.
Update Feb 3, 2017
I've wrapped this into the memory_util package. Example usage:
# install memory util
import urllib.request
response = urllib.request.urlopen("https://raw.githubusercontent.com/yaroslavvb/memory_util/master/memory_util.py")
open("memory_util.py", "wb").write(response.read())
import memory_util
sess = tf.Session()
a = tf.random_uniform((1000,))
b = tf.random_uniform((1000,))
c = a + b
with memory_util.capture_stderr() as stderr:
    sess.run(c.op)
print(stderr.getvalue())
Old stuff
You could reuse the FD redirector from IPython core (idea from Mark Sandler):
import os
import sys

STDOUT = 1
STDERR = 2

class FDRedirector(object):
    """ Class to redirect output (stdout or stderr) at the OS level using
    file descriptors.
    """
    def __init__(self, fd=STDOUT):
        """ fd is the file descriptor of the output you want to capture.
        It can be STDOUT or STDERR.
        """
        self.fd = fd
        self.started = False
        self.piper = None
        self.pipew = None

    def start(self):
        """ Setup the redirection.
        """
        if not self.started:
            self.oldhandle = os.dup(self.fd)
            self.piper, self.pipew = os.pipe()
            os.dup2(self.pipew, self.fd)
            os.close(self.pipew)
            self.started = True

    def flush(self):
        """ Flush the captured output, similar to the flush method of any
        stream.
        """
        if self.fd == STDOUT:
            sys.stdout.flush()
        elif self.fd == STDERR:
            sys.stderr.flush()

    def stop(self):
        """ Unset the redirection and return the captured output.
        """
        if self.started:
            self.flush()
            os.dup2(self.oldhandle, self.fd)
            os.close(self.oldhandle)
            f = os.fdopen(self.piper, 'r')
            output = f.read()
            f.close()
            self.started = False
            return output
        else:
            return ''

    def getvalue(self):
        """ Return the output captured since the last getvalue, or the
        start of the redirection.
        """
        output = self.stop()
        self.start()
        return output
import tensorflow as tf
x = tf.constant([1,2,3])
a=tf.Print(x, [x])
redirect=FDRedirector(STDERR)
sess = tf.InteractiveSession()
redirect.start();
a.eval();
print "Result"
print redirect.stop()
I ran into the same problem and got around it by using a function like this in my notebooks:
def tf_print(tensor, transform=None):
    # Insert a custom python operation into the graph that does nothing but print a tensor's value
    def print_tensor(x):
        # x is typically a numpy array here so you could do anything you want with it,
        # but adding a transformation of some kind usually makes the output more digestible
        print(x if transform is None else transform(x))
        return x
    log_op = tf.py_func(print_tensor, [tensor], [tensor.dtype])[0]
    with tf.control_dependencies([log_op]):
        res = tf.identity(tensor)
    # Return the given tensor
    return res
# Now define a tensor and use the tf_print function much like the tf.identity function
tensor = tf_print(tf.random_normal([100, 100]), transform=lambda x: [np.min(x), np.max(x)])
# This will print the transformed version of the tensors actual value
# (which was summarized to just the min and max for brevity)
sess = tf.InteractiveSession()
sess.run([tensor])
sess.close()
FYI, using a logger instead of calling "print" in my custom function worked wonders for me, as stdout is often buffered by Jupyter and not shown before "Loss is NaN"-type errors -- which was the whole point of using that function in the first place in my case.
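A minimal sketch of that logger variant, assuming a file-backed logger configured at module level (the file name and logger name here are mine, not part of the answer above):
import logging

# Assumed setup: log to a file so messages survive even if the
# notebook kernel dies on a "Loss is NaN" error.
logging.basicConfig(filename="training.log", level=logging.INFO)
log = logging.getLogger("tf_print")

def print_tensor(x, transform=None):
    # Same role as the print() inside tf_print above, routed through logging.
    log.info(x if transform is None else transform(x))
    return x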
You can check the terminal where you launched the jupyter notebook to see the message.
import tensorflow as tf
tf.InteractiveSession()
a = tf.constant(1)
b = tf.constant(2)
opt = a + b
opt = tf.Print(opt, [opt], message="1 + 2 = ")
opt.eval()
In the terminal, I can see:
2018-01-02 23:38:07.691808: I tensorflow/core/kernels/logging_ops.cc:79] 1 + 2 = [3]
A simple way (I tried it in regular Python, but not Jupyter yet):
import os
import sys

os.dup2(sys.stdout.fileno(), 1)
os.dup2(sys.stdout.fileno(), 2)
Explanation is here: In python, how to capture the stdout from a c++ shared library to a variable
The issue I faced was that you can't run a session inside a TensorFlow graph function, such as during training or evaluation.
That's why sess.run(opt) or opt.eval() were not a solution for me.
The best option was to use tf.Print() and redirect the logging to an external file.
I did this using a temporary file, which I transferred to a regular file like this:
STDERR = 2
import os
import sys
import tempfile

class captured:
    def __init__(self, fd=STDERR):
        self.fd = fd
        self.prevfd = None

    def __enter__(self):
        t = tempfile.NamedTemporaryFile()
        self.prevfd = os.dup(self.fd)
        os.dup2(t.fileno(), self.fd)
        return t

    def __exit__(self, exc_type, exc_value, traceback):
        os.dup2(self.prevfd, self.fd)

with captured(fd=STDERR) as tmp:
    ...
    classifier.evaluate(input_fn=input_fn, steps=100)

with open('log.txt', 'w') as f:
    print(open(tmp.name).read(), file=f)
And then in my evaluation I do:
a = tf.constant(1)
a = tf.Print(a, [a], message="a: ")

Issue with using Leap motion and python

I am trying to write a basic program using Leap Motion. I just want to run the on_frame method continuously, but the method only runs once, and disconnecting the device does not call the on_disconnect method. The program will not run until I hit Enter. What am I doing wrong? Thanks for the help:
import Leap, sys, thread
from Leap import CircleGesture, KeyTapGesture, ScreenTapGesture, SwipeGesture

class LeapMotionListener(Leap.Listener):
    state_names = ["STATE_INVALID", "STATE_START", "STATE_UPDATE", "STATE_END"]

    def on_init(self, controller):
        print "Initialized"

    def on_connect(self, controller):
        print "Connected"
        controller.enable_gesture(Leap.Gesture.TYPE_CIRCLE);
        controller.enable_gesture(Leap.Gesture.TYPE_KEY_TAP);
        controller.enable_gesture(Leap.Gesture.TYPE_SCREEN_TAP);
        controller.enable_gesture(Leap.Gesture.TYPE_SWIPE);
        print "All gestures enabled"

    def on_disconnect(self, controller):
        print "Disconnected"

    def on_exit(self, controller):
        print "Exit"

    def on_frame(self, controller):
        frame = controller.frame()
        print "\n Frame ID" + str(frame.id)
        print "num of hands: " + str(len(frame.hands))
        print "num of gestures: " + str(len(frame.gestures()))
        for gesture in frame.gestures():
            if gesture.type is Leap.Gesture.TYPE_CIRCLE:
                circle = Leap.CircleGesture(gesture)
            elif gesture.type is Leap.Gesture.TYPE_SWIPE:
                swipe = Leap.SwipeGesture(gesture)
            elif gesture.type is Leap.Gesture.TYPE_KEY_TAP:
                key_tap = Leap.KeyTapGesture(gesture)
            elif gesture.type is Leap.Gesture.TYPE_SCREEN_TAP:
                screen_tap = Leap.ScreenTapGesture(gesture)

def main():
    listener = LeapMotionListener()
    controller = Leap.Controller()
    controller.add_listener(listener)
    print "Press enter to quit: "
    try:
        sys.stdin.readline()
    except KeyboardInterrupt:
        pass
    finally:
        controller.remove_listener(listener)

if __name__ == "__main__":
    main()
Your code runs fine from the command line, i.e.:
> python scriptname.py
Are you running from Idle? If so, this part:
try:
    sys.stdin.readline()
except KeyboardInterrupt:
    pass
doesn't work (see Python 3.2 Idle vs terminal). A similar issue might exist in other IDEs.
The following should work in Idle, but you have to use Ctrl-C to exit.
# Keep this process running until a Ctrl-C is pressed
print "Press Ctrl-C to quit..."
try:
    while True:
        time.sleep(1)
except:
    print "Quitting"
finally:
    # Remove the sample listener when done
    controller.remove_listener(listener)
Quitting on any keystroke in both Idle and the command line seemed surprisingly difficult and complex when I tried it some time ago -- but then, I'm a Python duffer.
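For what it's worth, on Windows the standard library's msvcrt module keeps quit-on-any-keystroke fairly short; this is a rough Windows-only sketch, not a portable solution (POSIX needs termios and is messier):
import msvcrt  # Windows-only standard library module
import time

print "Press any key to quit..."
while not msvcrt.kbhit():  # True once a keypress is waiting in the buffer
    time.sleep(0.1)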

Finding PID's of Virtual Machine in openstack

I am working on OpenStack and I want to monitor the virtual machines' CPU usage. For that I want to find their PIDs through the parent (central) OpenStack instance.
I used
ps aux | grep
and I did receive an output. However, I want to confirm whether this is the correct PID. Is there any way I can check this?
Or is there any other way to find the PIDs of the virtual machines?
Update: this command does not work. It gives me a PID which always changes; it's not constant.
Thank you
Well, libvirt has some interfaces for this. Here's some Python that extracts that data into data structures for you:
#!/usr/bin/env python
# Modules
import subprocess
import traceback
import commands
import signal
import time
import sys
import re
import os
import getopt
import pprint

try:
    import libvirt
except:
    print "no libvirt detected"
    sys.exit(0)

from xml.dom.minidom import parseString

global instances
global virt_conn
global tick
global virt_exist

def virtstats():
    global virt_exist
    global virt_conn
    global instances
    cpu_stats = []
    if virt_exist == True:
        if virt_conn == None:
            print 'Failed to open connection to the hypervisor'
            virt_exist = False
    if virt_exist == True:
        virt_info = virt_conn.getInfo()
        for x in range(0, virt_info[2]):
            cpu_stats.append(virt_conn.getCPUStats(x, 0))
        virt_capabilities = virt_conn.getCapabilities()
        domcpustats = 0
        # domcpustats = virDomain::GetcpuSTATS()
        totmem = 0
        totvcpu = 0
        totcount = 0
        vcpu_stats = []
        for id in virt_conn.listDomainsID():
            dom = virt_conn.lookupByID(id)
            totvcpu += dom.maxVcpus()
            vcpu_stats.append(dom.vcpus())
            totmem += dom.maxMemory()
            totcount += 1
        dom = parseString(virt_capabilities)
        xmlTag = dom.getElementsByTagName('model')[0].toxml()
        xmlData = xmlTag.replace('<model>', '').replace('</model>', '')
        for info in virt_info:
            print info
        for stat in cpu_stats:
            print "cpu %s" % stat
        for vstat in vcpu_stats:
            print "vcpu:\n"
            pprint.pprint(vstat)
        print "CPU ( %s ) Use - %s vCPUS ( %s logical processors )" % (xmlData, totvcpu, virt_info[2])
    sys.exit(0)

def main():
    try:
        global virt_conn
        global virt_exist
        virt_conn = libvirt.openReadOnly(None)
        virt_exist = True
    except:
        virt_exist = False
        print "OK: not a compute node"
        sys.exit(0)
    virtstats()

if __name__ == "__main__":
    main()
Now, what you get from this in terms of usage is CPU time.
Each vcpu block has this layout (a rough usage-sampling sketch follows below):
1st: vCPU number, starting from 0.
2nd: vCPU state:
0: offline
1: running
2: blocked on a resource
3rd: CPU time used, in nanoseconds.
4th: real CPU number.
The CPU blocks are obvious once you realize that's what's going on down in libvirt.
Hope that helps!
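Since the third field is cumulative CPU time in nanoseconds, per-vCPU usage has to be derived by sampling twice and dividing the delta by the wall-clock interval. A rough sketch (the helper name is mine, not part of the script above):
import time

def vcpu_usage_percent(dom, interval=1.0):
    # dom.vcpus()[0] is a list of (number, state, cpu_time_ns, real_cpu) tuples.
    before = dict((v[0], v[2]) for v in dom.vcpus()[0])
    time.sleep(interval)
    after = dict((v[0], v[2]) for v in dom.vcpus()[0])
    # interval is in seconds, vCPU time in nanoseconds.
    return dict((n, 100.0 * (after[n] - before[n]) / (interval * 1e9)) for n in after)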
By using libvirt, Python, lxml, and lsof you can recover the PID if your virtual instance (domain) has a display output set (VNC, Spice, ...):
Retrieve the display port.
Retrieve the PID from the opened display port.
Here is the code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from lxml import etree
import libvirt
import sys
from subprocess import check_output

def get_port_from_XML(xml_desc):
    tree = etree.fromstring(xml_desc)
    ports = tree.xpath('//graphics/@port')
    if len(ports):
        return ports[0]
    return None

def get_pid_from_listen_port(port):
    if port is None:
        return ''
    return check_output(['lsof', '-i:%s' % port, '-t']).replace('\n', '')

conn = libvirt.openReadOnly('')
if conn is None:
    print 'Failed to open connection to the hypervisor'
    sys.exit(1)

for domain_id in conn.listDomainsID():
    domain_instance = conn.lookupByID(domain_id)
    name = domain_instance.name()
    xml_desc = domain_instance.XMLDesc(0)
    port = get_port_from_XML(xml_desc)
    pid = get_pid_from_listen_port(port)
    print '%s (port:%s) (pid:%s)' % (name, port, pid)
grep "79d87652-8c8e-4afa-8c13-32fbcbf98e76" --include=libvirt.xml /path/to/nova/instances -r -A 2 | grep "<name" | cut -d " " -f 3
This finds the libvirt domain name ("instance-"), which can be matched against the -name parameter in ps aux output, so you can map an OpenStack instance ID to a PID.
The simplest way is to use cgroups.
In Ubuntu:
cat /sys/fs/cgroup/cpuset/libvirt/qemu/<machine-name>/tasks
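The same lookup is easy to script; here is a rough sketch (the cgroup base path varies between distributions and libvirt versions, so treat it as an assumption):
def qemu_pids(machine_name, base="/sys/fs/cgroup/cpuset/libvirt/qemu"):
    # Each line of the tasks file is a task (process/thread) ID owned by the VM.
    with open("%s/%s/tasks" % (base, machine_name)) as f:
        return [int(line) for line in f if line.strip()]

print qemu_pids("instance-00000001")  # hypothetical domain name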

Using pytest with Jython

I'm trying to use pytest on Jython, and I'm getting stuck right at the beginning.
I've successfully installed the pytest package with easy_install:
$ ./jython easy_install pytest
When I try to run the example from this page, things go wrong. I receive an extremely long failure report, like the one below. Does anybody have any idea why this is happening?
py.test-jython
============================= test session starts ==============================
platform java1.6.0_37 -- Python 2.5.3 -- pytest-2.3.2
collected 1 items
test_sample.py F
=================================== FAILURES ===================================
_________________ test_answer __________________
def test_answer():
    assert func(3) == 5

test_sample.py:5:

self = AssertionError()

def __init__(self, *args):
    BuiltinAssertionError.__init__(self, *args)
    if args:
        try:
            self.msg = str(args[0])
        except py.builtin._sysex:
            raise
        except:
            self.msg = "<[broken __repr__] %s at %0xd>" % (
                args[0].__class__, id(args[0]))
    else:
        f = py.code.Frame(sys._getframe(1))
        try:
            source = f.code.fullsource
            if source is not None:
                try:
                    source = source.getstatement(f.lineno, assertion=True)
                except IndexError:
                    source = None
                else:
                    source = str(source.deindent()).strip()
        except py.error.ENOENT:
            source = None
        # this can also occur during reinterpretation, when the
        # co_filename is set to "<run>".
        if source:
            self.msg = reinterpret(source, f, should_fail=True)

../jython2.5.3/Lib/site-packages/pytest-2.3.2-py2.5.egg/_pytest/assertion/reinterpret.py:32:

source = 'assert func(3) == 5', frame =
should_fail = True

def interpret(source, frame, should_fail=False):
    mod = ast.parse(source)
    visitor = DebugInterpreter(frame)
    try:
        visitor.visit(mod)

../jython2.5.3/Lib/site-packages/pytest-2.3.2-py2.5.egg/_pytest/assertion/newinterpret.py:49:
.
.
.
self = <_pytest.assertion.newinterpret.DebugInterpreter object at 0x4>
name = Name

def visit_Name(self, name):
    explanation, result = self.generic_visit(name)

../jython2.5.3/Lib/site-packages/pytest-2.3.2-py2.5.egg/_pytest/assertion/newinterpret.py:147:

self = <_pytest.assertion.newinterpret.DebugInterpreter object at 0x4>
node = Name

def generic_visit(self, node):
    # Fallback when we don't have a special implementation.
    if _is_ast_expr(node):
        mod = ast.Expression(node)
        co = self._compile(mod)
        try:
            result = self.frame.eval(co)
        except Exception:
            raise Failure()
        explanation = self.frame.repr(result)
        return explanation, result
    elif _is_ast_stmt(node):
        mod = ast.Module([node])
        co = self._compile(mod, "exec")
        try:
            self.frame.exec_(co)
        except Exception:
            raise Failure()
        return None, None
    else:
        raise AssertionError("can't handle %s" % (node,))
E   AssertionError: can't handle Name

../jython2.5.3/Lib/site-packages/pytest-2.3.2-py2.5.egg/_pytest/assertion/newinterpret.py:134: AssertionError
=========================== 1 failed in 0.55 seconds ===========================
=========================== 1 failed in 0.55 seconds ===========================
Pytest has a workaround for Jython's lacking AST implementation; see issue1479. I just extended the workaround on the pytest side to work on Jython 2.5.3. You can install a dev candidate of pytest with:
pip install -i http://pypi.testrun.org -U pytest
You should then get at least version 2.3.4.dev1 from "py.test-jython --version", and assertions should work with Jython 2.5.3.
Currently pytest does not support Jython 2.5.3; it works only on Jython 2.5.1.