Autonomous Life can be controlled from within Choregraphe's UI, as well as by pressing the chest button twice on the robot. However, how can this be done from within a Choregraphe project?
After browsing the list of tools in the Box Library, it is unclear which tool can be used for this.
When Autonomous Life is on, the robot, in order to appear alive, makes small motions with its head, arms, and legs. The behaviour I'm developing is subtle and difficult to distinguish from the Autonomous Life movements, so I'm trying to make the robot stand still before my behaviour runs.
The ALAutonomousLife API offers the method setState.
In Choregraphe you could make a Python box with this content:
class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self)
        self.al = ALProxy("ALAutonomousLife")

    def onLoad(self):
        #put initialization code here
        pass

    def onUnload(self):
        #put clean-up code here
        pass

    def onInput_onStart(self):
        self.al.setState("disabled")
        #self.onStopped() #activate the output of the box
        pass

    def onInput_onStop(self):
        self.onUnload() #it is recommended to reuse the clean-up as the box is stopped
        self.onStopped() #activate the output of the box
And then activate this Box to disable AutonomousLife.
You can also test it in a python console like this:
from naoqi import ALProxy

al = ALProxy("ALAutonomousLife", "pepper.local", 9559)
al.setState("disabled")
This will have the same effect as pressing the chest button twice.
To disable only the subtle autonomous movements, have a look at the Autonomous Abilities.
A setEnabled method is offered by the following modules (a short sketch follows the list):
ALListeningMovement
ALBackgroundMovement
ALSpeakingMovement
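A minimal sketch of disabling these abilities individually, assuming the same robot address as above (module availability depends on your NAOqi version):

from naoqi import ALProxy

ROBOT_IP = "pepper.local"  # assumed hostname, adjust for your robot
PORT = 9559

# Turn off only the subtle idle movements, leaving Autonomous Life itself on.
for module in ("ALBackgroundMovement", "ALListeningMovement", "ALSpeakingMovement"):
    ALProxy(module, ROBOT_IP, PORT).setEnabled(False)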
Related
I wrote some Python code to show a preview from my PiCamera; I have set the time to 10 seconds, after which it automatically turns off. However, I am unsure how I could use a keystroke to stop the camera and return to the previous screen.
At the moment I can view the preview for 10 seconds and nothing else; the usual Ctrl-C and various other keys do not work.
How could I integrate a keystroke into the following code to stop the script and return to the normal screen?
from picamera import PiCamera
from time import sleep
camera = PiCamera()
camera.start_preview()
sleep(10)
camera.stop_preview()
You can check the subprocess module on the official page:
https://docs.python.org/2/library/subprocess.html#subprocess.Popen
A possible way to implement this with subprocess.Popen is shown here on SO:
Controlling a python script from another script
Another possibility is to use the multiprocessing or threading module. For instance, you can create a thread and keep track of its ID.
All of these possibilities will lead you to learn a bit more Python!
My suggestion would be to simply create a thread (https://docs.python.org/3/library/threading.html --> here for Python 3), get its ID and let it run.
If you want to stop the camera preview, terminate the thread.
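As a minimal sketch of the simplest route (no extra threads), you could block on input() instead of sleep(), so the preview stays up until a key is pressed in the terminal you started the script from:

from picamera import PiCamera

camera = PiCamera()
camera.start_preview()
try:
    # Wait for the user instead of sleeping for a fixed time.
    # Use raw_input() here if you are running Python 2.
    input("Preview running - press Enter to stop... ")
finally:
    camera.stop_preview()
    camera.close()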
I have made a custom virtual keyboard widget for my kiosk application, and now comes the time when I want it to produce fake keyboard events and feed them to an QLineEdit of choice.
I do the following:
// target is the QWidget to receive the events
// k is the Qt::Key (keycode) I want to send (testing with an 'A')
Qt::Key k = Qt::Key_A;
if (0 != target) {
    // According to docs this will be freed once posted
    QKeyEvent *press   = new QKeyEvent(QKeyEvent::KeyPress,   (int)k, 0);
    QKeyEvent *release = new QKeyEvent(QKeyEvent::KeyRelease, (int)k, 0);
    // Give the target focus just to be sure it is available for input
    target->setFocus();
    // Post the events (queue them up and let the target consume them
    // when the event loop gets around to the target)
    QCoreApplication::postEvent(target, press);
    QCoreApplication::postEvent(target, release);
}
I see the target widget receive focus, but there are no letters typed into the input field like I would expect. What am I doing wrong? Which assumptions are wrong?
PS: I know that this could be solved by using existing virtual keyboards, or at least by using the platform interface as is done in this post. In our approach we have decided to build the keyboard into the application to obtain full control over the UX and keyboard design.
Thanks!
Since no-one stepped up, I will try to provide some closure.
It turns out that Qt5 comes with a library of testing facilities called testlib. It has all sorts of goodies to facilitate easy creation, management and running of unit tests for Qt applications. Among these facilities there is a set of functions for sending fake events, such as fake typing of text, mouse clicks, etc. It is quite comprehensive and covers many use cases. Since it is used internally by the Qt developers to test Qt itself, it is also production-proven code.
I simply copied what I needed from there.
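For illustration, here is a rough sketch of that approach using the Python bindings (in C++ the corresponding call is QTest::keyClicks from the QtTest module); the QLineEdit below is a stand-in for the real target widget:

import sys
from PyQt5.QtWidgets import QApplication, QLineEdit
from PyQt5.QtTest import QTest

app = QApplication(sys.argv)
target = QLineEdit()
target.show()
target.setFocus()

# QTest synthesizes a proper press/release pair for each character,
# including the text payload the widget needs in order to insert it.
QTest.keyClicks(target, "A")

print(target.text())  # should now contain "A"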
I'm new to PyOpenGL and I'm currently working on code originally based on the pyggel library, but now I'd like to add some features from GLUT (menus & text) and I'm not really sure how I should join both (if that's even possible).
In GLUT, running glutMainLoop() is required, but on the other hand I have this run() routine:
def run(self):
    while 1:
        self.clock.tick(60)
        self.getInput()
        self.processInput()
        pyggel.view.clear_screen()
        self.mouse_over_object = self.scene.render(self.camera)
        pyggel.view.refresh_screen()
        #glutMainLoop()
Putting the GLUT routine inside my run() doesn't work (it crashes when it gets to the glutMainLoop).
So, how can I join both loops? Can I? I'm guessing that's what I need to make both things work.
Thanks in advance!
You likely are not going to find this easy to do. Pyggel is based on the Pygame GUI framework, while GLUT is its own GUI framework. You may be able to get text-rendering working, as under the covers GLUT is just doing regular OpenGL for that, but the menus are not going to easily work under Pyggel.
Pyggel has both text rendering and a GUI framework that includes menus, frames, buttons, labels, etc. You likely want to use that if you're using Pyggel in your project. There is an example of GUI usage here:
http://code.google.com/p/pyggel/source/browse/trunk/examples_and_tutorials/tut8-gui.py
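If you still want to try GLUT's bitmap text from inside the existing pyggel render loop, a minimal sketch might look like the following; it assumes a GL context is already current and a suitable 2D projection is set up, and glutMainLoop is never called:

from OpenGL.GL import glColor3f, glRasterPos2f
from OpenGL.GLUT import glutInit, glutBitmapCharacter, GLUT_BITMAP_HELVETICA_18

glutInit()  # initialise GLUT once; this does not start its main loop

def draw_text(x, y, text):
    # Draw a string with a GLUT bitmap font at position (x, y).
    glColor3f(1.0, 1.0, 1.0)
    glRasterPos2f(x, y)
    for ch in text:
        glutBitmapCharacter(GLUT_BITMAP_HELVETICA_18, ord(ch))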
I'm using matplotlib housed in a wxPython panel to do some heavy-duty plotting. My issue comes when using the native panning tool: it appears as though matplotlib tries to constantly redraw the canvas as you drag the pan handle around. With the amount of data I'm plotting this gets really choppy (the data is already optimized with Collections etc.).
In terms of performance I think it would be much preferable for the canvas to just draw once, when the mouse is released at the end of a pan. I realise this will mean I have to extend the WxAgg NavigationToolbar2 class with my own, but I'm wondering if anyone has attempted something similar and can advise me on which functions to override?
Many thanks
I've spent a lot of time making modifications to the matplotlib backends. I've never done this specific change, but I can show you one line of code to comment out that will stop the dynamic updating.
I presume you are using the WxAgg backend; if this is the case, open this file: C:\Python27\Lib\site-packages\matplotlib\backends\backend_wx.py
and comment out the line indicated here:
def dynamic_update(self):
    d = self._idle
    self._idle = False
    if d:
        #self.canvas.draw()  # <--- comment out to stop the redrawing during the pan/zoom
        self._idle = True
I tested this and it seems to nicely solve your issue. I did some quick digging and I didn't see any other functions calling this procedure so you might even be able to just change it to:
def dynamic_update(self):
    pass
...Which is the same code you'll find in the base NavigationToolbar2 class
(And of course, if you're happy with this change, you can do a little more work to make your own custom backend with this kind of modification, just so you don't lose the change when upgrading matplotlib.)
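Along the same lines, here is a sketch of doing it without touching matplotlib's source: subclass the WxAgg toolbar in your own code and override dynamic_update there. (Class and method names below match matplotlib versions of that era; newer releases have reorganised the wx backends, so check your backend_wx.py.)

from matplotlib.backends.backend_wxagg import NavigationToolbar2WxAgg

class LazyPanToolbar(NavigationToolbar2WxAgg):
    def dynamic_update(self):
        # Skip the intermediate redraws while dragging; the final draw
        # should still happen through the normal code path when the
        # pan/zoom interaction ends.
        pass

You would then construct LazyPanToolbar(your_canvas) wherever you currently create the stock toolbar.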
So I have been pulling my hair out all day at this, and I am out of patience.
Basically I have a PyGTK program that is built from Glade. At the bottom of the main window there is a status bar that I use to display usage errors and status messages about the connected hardware; these use the functions below. All of the calls to displayError come from within my object, based on user interaction, and they all seem to work. In testing I also tried making calls to displayCurrent and it works as well. The problem comes when the hardware process tries to use displayCurrent. The way the system works is that one of the members of my main window class is an interface object to the hardware. This runs a separate process, using multiprocessing.Process, which sends a signal every time it gets data, with the data being the message to output. Does anyone have any ideas? I'll be happy to explain anything in more depth if needed; it's just a LOT of code to post to get all the details.
def displayCurrent(self, message):
    print message
    if self.lastMess:
        self.statusBar.remove(self.normalID, self.lastMess)
    self.lastMess = self.statusBar.push(self.normalID, message)

def displayError(self, message, timeout = 5):
    """
    Takes an error message and raises it to the user via the statusbar,
    clearing it timeout seconds later.
    """
    print message
    mess = self.statusBar.push(self.urgentID, message)
    # clear statusbar
    gobject.timeout_add_seconds(timeout, self.clearStatus, self.urgentID, mess)

def clearStatus(self, cID, mID):
    #time.sleep(timeout)
    self.statusBar.remove(cID, mID)
    #print self.statusBar.remove_all(self.urgentID)
    print 'popped'
    return False
Post the whole code.
In general, this won't work if you are in a separate process: GTK (and everything else) has been forked. You need to use some form of interprocess communication; the multiprocessing module handles it for you (see the sketch below). There are also hacks you can do in GTK, such as using a window title (like Vim) or perhaps a plug and socket to communicate; don't be tempted by these.
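As a minimal sketch of that idea (hardware_worker and read_hardware are hypothetical stand-ins for your interface object), the child process only puts strings on a multiprocessing.Queue, and the GUI process polls that queue from the GTK main loop, so displayCurrent is only ever called in the process that owns the widgets:

import gobject
import multiprocessing

def hardware_worker(queue):
    # Runs in the child process: never touch GTK here, just report data.
    while True:
        data = read_hardware()  # hypothetical blocking read of your hardware
        queue.put("current: %s" % data)

class MainWindow(object):  # stands in for your Glade-built window class
    def startHardware(self):
        self.queue = multiprocessing.Queue()
        proc = multiprocessing.Process(target=hardware_worker, args=(self.queue,))
        proc.daemon = True
        proc.start()
        # Poll the queue from the GTK main loop, ten times a second.
        gobject.timeout_add(100, self.pollQueue)

    def pollQueue(self):
        while not self.queue.empty():
            self.displayCurrent(self.queue.get_nowait())
        return True  # keep the timeout firing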