core.wait command -- am I using pyglet or not? - psychopy

I've set up an experiment in the Builder to obtain rapid reaction times to audio stimuli, and I've subsequently been playing with the code to get the experiment to do exactly what I want. In particular, I'd like very accurate reaction times, so the program would ideally hog the CPU from the onset of each stimulus until a fixed point afterwards, and record keypresses of "w" and "e" during this time.
In an attempt to achieve this, I've been resetting the clock at the start of the audio stimuli, then hogging the CPU for 2 seconds, as follows:
event.clearEvents(eventType='keyboard')  # discard any keypresses made before stimulus onset
response.clock.reset()  # t = 0 at (approximately) stimulus onset
core.wait(2, 2)  # wait 2 s, hogging the CPU for the whole 2 s
if response.status == STARTED:
    theseKeys = event.getKeys(keyList=['w', 'e'])
This seems to work fine. However, I have one concern: the documentation for the core.wait command says:
If you want to obtain key-presses during the wait, be sure to use pyglet.
How would I know if I'm using pyglet? Is it automatic, or do I need to alter the script in some way to ensure that I'm using it?

This refers to the type of window (pyglet or pygame) that you are using to display your stimuli. PsychoPy will generally use pyglet, but to be sure, you can explicitly set the window type when you create it. See the window API at http://www.psychopy.org/api/visual/window.html:
winType : None, ‘pyglet’, ‘pygame’
    If None then PsychoPy will revert to user/site preferences
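For example, a minimal sketch that forces the pyglet backend when the window is created (the size and fullscr values here are just placeholders):

from psychopy import visual

# Explicitly request the pyglet backend instead of relying on preferences
win = visual.Window(size=[1024, 768], fullscr=False, winType='pyglet')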
More importantly, make sure you are using the pyo audio library rather than the default pygame. Set this in the PsychoPy Preferences -> General -> Audio Library field. Pygame definitely has sound latency problems: you should assume that there is a substantial lag between telling a sound to play and the sound actually being produced. Pyo apparently does better, but I think you should validate this independently in some way to ensure that your reaction times to auditory stimuli are meaningful.
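You can also request pyo from code, before the sound module is first imported. A minimal sketch, assuming the older prefs.general['audioLib'] preference layout (check this against your PsychoPy version):

from psychopy import prefs
prefs.general['audioLib'] = ['pyo']  # must be set before psychopy.sound is imported
from psychopy import sound, core

beep = sound.Sound('A', secs=0.2)  # 200 ms tone at note A
beep.play()
core.wait(0.5)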

Related

Real-time camera input to Julia-lang

TLDR: How can I achieve low-latency, low-CPU-impact webcam acquisition in Julia?
Edit: I also posted this on the Julia devs forum.
I am new to Julia. I am interested in processing the video feed from a connected webcam and seeing what kind of performance I can get out of Julia.
I am working on Ubuntu Linux 16.04.
The only way I have found to get webcam input through video4linux is through VideoIO, which is working on my system. The video has an unacceptable lag, however, of up to 4 seconds. I assume this comes from the buffering of frames by the driver and/or libav (or is it ffmpeg, I don't know). With any camera API worth its name, I should be able to access the latest frame acquired... or at least set the size of the queue that I'm popping frames from. There seems to be no such option in VideoIO, or maybe I am missing it.
It really is important for me to be able to showcase Julia as a high-performance language to non-techies... so this lag will ruin the demo I am hoping to put together.
Edit: here is some of the code I have:
module myViewCam

export myView

import VideoIO, ImageView

function myView()
    camera = VideoIO.opencamera()  # open the default V4L2 webcam
    buf = VideoIO.read(camera)  # read one frame to allocate the buffer
    guidict = ImageView.imshow(buf)  # create the display window once
    while !eof(camera)
        VideoIO.read!(camera, buf)  # overwrite buf with the next frame
        ImageView.imshow(guidict["gui"]["canvas"], buf)  # redraw into the existing canvas
        sleep(0.00001)  # yield briefly so the GUI stays responsive
    end
end

end
Assuming the above is the content of myViewCam.jl, I type the following at the Julia prompt (the REPL):
include("myViewCam.jl");
myViewCam.myView();
Note that this is a workaround for the function VideoIO.viewcam(), which does not seem to work out of the box.
On my system, this brings the Julia thread up to about 100% CPU usage. At the start of the video stream there is about 4 seconds of lag, but this evens out over time until it settles at about 0.5 seconds. There is obviously some queue from which frames are popped.
Also see this Video4Linux wrapper in Julia, which works well with Images.jl:
https://github.com/Affie/Video4Linux.jl
It's not registered yet, but it has been around for a while. It is possible to make this process multithreaded in Julia using SharedArrays.jl, or likely the new composable threading model available since Julia 1.3.
PS: this vendor-specific camera interface package exists too: https://github.com/JuliaCameras/RealSense.jl

Programmable "real-time" MIDI processing

In my band, all musicians have both hands busy at all times. However, we want to add whole synthesizer chords (quarter-note to whole-note length), maybe triggered each time by a simple foot switch (because playing along with a sequencer is currently too difficult for us).
Some time ago I wrote a (Windows) console application in C (MinGW) that converted incoming MIDI events to text, piped that text to an external program (AWK script), and re-converted that external program's text output back to MIDI events.
Basically every sort of filtering or event generation was possible; I actually produced chords triggered by simple control messages, and I kept note-on events in memory so I could send the corresponding note-offs whenever a new chord was sent, etc. The actual processing (execution) times were not a problem at all(!)
But I came to understand that not only the latency, but also the notoriously unpredictable OS multitasking/scheduling of user applications (with respect to "when" and "for how long" they run) made this concept practically worthless, at least for "real-time" use. There were always clearly perceivable delays of unpredictable duration.
I read about user-mode driver programming and downloaded some resources, but somehow stopped working on that project without a real result.
Apart from that specific project, I even have some experience in writing small "virtual" machines that allow for expressing exactly the variables, conditionals and math, stored as a token tree and processed quite fast. Maybe there is also the option to embed Lua, V8, or anything like that. So calling another (external) program is not necessarily the issue here, since that can be avoided.
The problem that remains is that the processing as a whole is still done by a (user) application. So I figure there is no way around a (user mode) driver, in this scenario.
Alternatively, I was even considering (more "real-time") hardware - a Raspi or the like - but then the MIDI interface may be an additional challenge.
Is there any hardware or software solution (or project) available that may serve as a base for such a _Generic MIDI filter/processor_? Apart from predictable timing behaviour, it is desirable not to need a (C) compilation environment when building filters/rules, since that "creative" step will probably happen in our rehearsal room (laptop available), which is certainly not a "programming lab". Text-based "programs" are fine - for long-term I'll maybe build a GUI for wiring/generating rules anyway.
MIDI is handled pretty well in Windows. I'm not sure what the source of your original problems was. No doubt there is some latency, though.
You can handle this in real time with a microcontroller. The good news is that you don't even have to build the hardware. Off-the-shelf controllers are available for this. For example: http://www.midisolutions.com/prodevp.htm
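If you do want to stay in software, one low-effort route (not mentioned above, so treat it as my own suggestion) is to do the filtering in a scripting language with a real-time MIDI library, which avoids needing a C compilation environment in the rehearsal room. Below is a minimal sketch using Python with the mido library; the foot-switch CC number, the chord notes, and the use of the default MIDI ports are all placeholder assumptions:

import mido

FOOTSWITCH_CC = 64   # placeholder: whichever CC number the foot switch sends
CHORD = [60, 64, 67]  # C major triad, as an example

held_notes = []  # note-ons we still owe a note-off for

# Port names are system specific; list them with mido.get_input_names()
with mido.open_input() as inport, mido.open_output() as outport:
    for msg in inport:
        if msg.type == 'control_change' and msg.control == FOOTSWITCH_CC and msg.value > 0:
            # Switch pressed: release the previous chord, then start the new one
            for note in held_notes:
                outport.send(mido.Message('note_off', note=note))
            held_notes = list(CHORD)
            for note in held_notes:
                outport.send(mido.Message('note_on', note=note, velocity=100))
        else:
            outport.send(msg)  # pass every other event through unchanged

Latency here is still at the mercy of the OS scheduler, so it is worth measuring round-trip times before relying on something like this live.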

Debugging methods for finding the location and error that's causing a game to freeze

Recently I came across an error that I cannot understand. The game I'm developing with Cocos2D just freezes at a certain random point -- it gets a SIGSTOP -- and I cannot find the reason. What tool can I use (and how do I use it) to find out where the error occurs and what's causing it?
Jeremy's suggestion to stop in the debugger is a good one.
There's a really quick way to investigate a freeze (or any performance issue), especially when it's not easy to reproduce. You have to have a terminal handy (so you'll need to be running in the iOS simulator or on Mac OS X, not on an iOS device).
When the hang occurs pop over to a terminal and run:
sample YourProgramName
(If there are spaces in your program name wrap that in quotes like sample "My Awesome Game".) The output of sample is a log showing where your program is spending time, and if your program is actually hung, it will be pretty obvious which functions are stuck.
I disagree with Aaron Golden's answer above, as running on a device is extremely useful for getting a realistic picture of where the app freezes. The simulator has more memory and does not reproduce the hardware of the device accurately (for example, the frame rate is in certain cases lower).
"Obviously", you need to connect your device (with a developer profile) to Xcode and watch the console for the traces that user @AaronGolden suggested.
If those are not enough, you might want to enable a general exception breakpoint in Xcode to capture more of the stack trace.
When I started learning Cocos2D my app often froze. This is a list of common causes:
I wasn't using sprite sheets, and hence the frame rate was dropping dramatically.
I was using too much memory (too many high-definition sprites; have a look at TexturePacker and use the pvr.ccz or pvr.gz format, which cuts memory allocation in half).
Use Instruments to profile your app (for example, use the Allocations instrument and watch for memory warnings).

Why is playing audio through AV Foundation blocking the UI on a slow connection?

I'm using AV Foundation to play an MP3 file loaded over the network, with code that is almost identical to the playback example here: Putting it all Together: Playing a Video File Using AVPlayerLayer, except without attaching a layer for video playback. I was trying to make my app respond to the playback buffer becoming empty on a slow network connection. To do this, I planned to use key-value observing on the AVPlayerItem's playbackBufferEmpty property, but the documentation did not say whether that was possible. I thought it might be possible because the status property can be observed (and is in the example above), even though the documentation doesn't say that.
So, in an attempt to create conditions where the buffer would empty, I added code on the server to sleep for two seconds after serving up each 8k chunk of the MP3 file. Much to my surprise, this caused my app's UI (updated using NSTimer) to freeze completely for long periods, despite the fact that it shows almost no CPU usage in the profiler. I tried loading the tracks on another queue with dispatch_async, but that didn't help at all.
Even without the sleep on the server, I've noticed that loading streams using AVPlayerItem keeps the UI from updating for the short time that the stream is being downloaded. I can't see why a slow file download should ever block the responsiveness of the UI. Any idea why this is happening or what I can do about it?
Okay, problem solved. It looks like passing AVURLAssetPreferPreciseDurationAndTimingKey in the options to URLAssetWithURL:options: causes the slowdown. This also only happens when the AVURLAsset's duration property or some other property relating to the stream's timing is accessed from the selector fired by the NSTimer. So if you can avoid polling for timing information, this problem may not affect you, but that's not an option for me. If precise timing is not requested, there's still a delay of around 0.75 seconds to 1 second, but that's all.
Looking back through it, the documentation does warn that precise timing might cause slower performance, but I never imagined 10+ second delays. Why the delay should scale with the loading time of the media is beyond me; it seems like it should only scale with the size of the file. Maybe iOS is doing some kind of heavy polling for new data and/or processing the same bytes over and over.
So now, without "precise timing and duration," the duration of the asset is permanently 0.0, even when it's fully loaded. I can also report back on my original goal of doing KVO on AVPlayerItem.isPlaybackBufferEmpty. It seems KVO would be useless anyway, since the property starts out NO, changes to YES as soon as I start playback, and remains YES even as the media plays for minutes at a time. The documentation says this about the property:
Indicates whether playback has consumed all buffered media and that playback will stall or end.
So I guess that's not accurate, and, at least in this particular case, the property is not very useful.

Using Cocoa to detect when a running application plays audio

I'm looking into writing an app that runs as a background process and detects when an app (say, Safari) is playing audio. I can use NSWorkspace to get the process IDs of the currently running applications, but I'm at a loss when it comes to detecting what those processes are doing. I assume that there is a way to listen in on a process and detect what public messages the objects are sending. I apologize for my ignorance on the subject.
Has anyone attempted anything like this or are aware of any resources that can help?
I don't think that your "answer" is an answer at all...
and there IS an answer (which is not "42")
Your best bet for doing this would be to write a pass-through audio output device, much like Soundflower, actually. Your audio output device would then load the actual (physical) audio output device and pass the audio data along to it directly (after first having a look at the audio stream, of course!). Then you only need to convince your users to configure your device as the default audio output device, so that the majority of applications that play sound will use it automatically. And voila...
Your audio processing function will probably just compute a quick RMS on the buffer before passing it along to the actual output device. When the audio power crosses a certain threshold (probably something like -54 dB with Apple audio hardware), you know that some app is making sound.
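For illustration, here is a minimal sketch of that RMS-plus-threshold check, written in Python for readability (a real pass-through device would do this in its Core Audio render callback in C; the -54 dB figure is just the rough threshold mentioned above):

import math

THRESHOLD_DB = -54.0  # rough figure suggested above

def is_audible(samples):
    """Return True if this buffer of float samples (-1.0 .. 1.0) crosses the threshold."""
    if not samples:
        return False
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0.0:
        return False
    level_db = 20.0 * math.log10(rms)  # convert RMS amplitude to dBFS
    return level_db > THRESHOLD_DB

# Example: a quiet buffer (about -80 dBFS) and a loud buffer (about -6 dBFS)
print(is_audible([0.0001] * 512))  # False
print(is_audible([0.5] * 512))     # True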
SoundFlower is an open-source project that allows Mac OS X applications to pass audio to each other. It almost certainly does something similar to what you describe.
I've been informed on another thread that, while this is possible, it is an extremely advanced technique and not recommended. It would involve using Application Enhancer (APE) and is considered not a 'nice' thing to do. Looks like that app idea is destined for the big recycling bin in the sky :)