The goal is to use OpenCV with Python to constantly monitor the eyes and sound an alarm if the program fails to detect them. The primary issue I'm having is that I need some sort of timer or delay so that the alarm does not trigger on a mere blink. The program runs in a constant while loop, updating frame by frame from the camera, and when I use time.sleep() the entire program halts. Perhaps I don't need a timer but rather some sort of threshold; I'm not sure. Any advice would be appreciated.
I'm assuming you're using Python, in which case you can use the time module: record the moment the eyes first stop being detected, and only sound the alarm once that has lasted longer than a blink. For example:
import time
closed_since = None                 # when the eyes first went undetected
while condition:
    if eyes_not_detected:
        if closed_since is None:
            closed_since = time.time()
        elif time.time() - closed_since > max_time:
            alarm()
    else:
        closed_since = None         # eyes detected again, reset
I am creating an application for the Mac, in Objective-C, which will run in the menu bar and do periodic desktop operations (such as changing the wallpaper). The application stays in the menu bar at all times, allowing easy access to configuration options and other information. My main concern is how to schedule my app to run every X minutes to do the desktop operations.
The most common solution I have seen is using NSTimer; however, I am concerned that it will not be memory efficient (after reading the following page on the Apple Developer docs). Using an NSTimer will prevent the laptop from going to sleep, and will need an always-running thread to check for when the NSTimer has elapsed. Is there a more efficient way of using NSTimer to schedule these operations?
Alternatively, is there a way to use launchd to initiate a call to my application (which is in the menu bar) so that it can handle the event and do the desktop operations? I think the second way is better, but I am not sure if it is possible.
First, excellent instincts on keeping this low-impact. But you're probably over-worried in this particular case.
When they say "waking the system from an idle state" they don't mean system-level "sleep" where the screen goes black. They mean idle state. The CPU can take little mini-naps for fractions of a second when there isn't work that immediately needs to be done. This can dramatically reduce power requirements, even while the system is technically "awake."
The problem with having lots of timers flying around isn't so much their frequencies as their tolerances. Say you have 10 timers with a 1-second frequency, but they're offset from each other by 100ms (just by chance of what time it was when they happened to start). That means the longest possible "gap" is 100ms. But if they were configured at 1 second with a 0.9-second tolerance (i.e. firing anywhere between 1s and 1.9s), then the system could schedule them all together, do a bunch of work, and spend most of each second idle. That's much better for power.
To be a good timer citizen, you should first set your timer at the interval at which you really want to do work. If it is common for your timer to fire but all you do is check some condition and reschedule the timer, then you're wasting power. (Sounds like you already have this in hand.) Second, set a reasonable tolerance. The default is 0, which is a very small tolerance (it's not actually "zero tolerance," but it's very small compared to minutes). For your kind of problem, I'd probably use a tolerance of at least 1s.
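If you end up scheduling this work with a GCD timer instead of an NSTimer, the leeway argument plays the same role as NSTimer's tolerance. A minimal sketch, where the 5-minute interval, 30-second leeway, and function name are just placeholders:
#include <dispatch/dispatch.h>

// Hypothetical periodic desktop-work timer with generous leeway so the
// system can coalesce it with other wakeups.
static dispatch_source_t makeDesktopWorkTimer(void) {
    dispatch_source_t timer =
        dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0,
                               dispatch_get_main_queue());
    uint64_t interval = 5 * 60 * NSEC_PER_SEC;   // e.g. every 5 minutes
    uint64_t leeway   = 30 * NSEC_PER_SEC;       // 30 s of scheduling slack
    dispatch_source_set_timer(timer,
                              dispatch_time(DISPATCH_TIME_NOW, (int64_t)interval),
                              interval, leeway);
    dispatch_source_set_event_handler(timer, ^{
        // change the wallpaper / do the periodic desktop work here
    });
    dispatch_resume(timer);
    return timer;   // keep a reference; dispatch_source_cancel() to stop it
}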
I highly recommend the Energy Best Practices talk from WWDC 2013. You may also be interested in the later Writing Energy Efficient Code sessions from 2014 and Achieving All-day Battery Life from 2015.
It is of course possible to do this with launchd, but it adds a lot of complexity, especially at installation time. I don't recommend it for the problem you're describing.
I've set up an experiment in the Builder to obtain rapid reaction times to audio stimuli, and I've subsequently been playing with the code to get the experiment to do exactly what I want. In particular, I'd like very accurate reaction times, so the program would ideally hog the CPU from the onset of each stimulus until a fixed point afterwards, and record keypresses of "w" and "e" during this time.
In an attempt to achieve this, I've been resetting the clock at the start of the audio stimuli, then hogging the CPU for 2 seconds, as follows:
event.clearEvents(eventType='keyboard')
response.clock.reset()
core.wait(2,2)
if response.status == STARTED:
    theseKeys = event.getKeys(keyList=['w', 'e'])
This seems to work fine. However, I have one concern: the documentation for the core.wait command says:
If you want to obtain key-presses during the wait, be sure to use pyglet.
How would I know if I'm using pyglet? Is it automatic, or do I need to alter the script in some way to ensure that I'm using it?
This refers to the type of window (pyglet or pygame) that you are using to display your stimuli. PsychoPy will generally use pyglet, but to be sure, you can explicitly set the window type when you create it. See the window API at http://www.psychopy.org/api/visual/window.html:
winType : None, ‘pyglet’, ‘pygame’
If None then PsychoPy will revert to user/site preferences
More importantly, make sure you are using the pyo audio library rather than the default, pygame. Set this in the PsychoPy Preferences -> General -> Audio Library field. Pygame definitely has sound latency problems: you should assume that there is a substantial lag between telling a sound to play and the sound actually being produced. Pyo apparently does better, but I think you should validate this independently in some way to ensure that your reaction times to auditory stimuli are meaningful.
I have a program written in Labview for my LEGO Mindstorms NXT 2.0. When the target is set to the computer, the program works just fine. However, when I set the target to the NXT, the program doesn't work the same as when targeted to the computer.
The program makes the robot go forward until it is 30 centimeters away from an object, which is detected by the NXT's ultrasonic sensor. Then the robot stops. If the object is moved so that there is no longer anything within 30 centimeters of the ultrasonic sensor, the robot goes forward again until it is 30 centimeters away from an object, and then stops again.
This works when the target is set to the computer in LabVIEW, but not when it is set to the NXT. When targeted to the NXT, the robot stops once the first object is detected, but if the object is then removed so that nothing is within 30 centimeters of the ultrasonic sensor, the robot remains stationary and does not move forward.
Here is a screenshot of the block diagram:
Here is a link to the source code for the program.
Any help would be greatly appreciated.
My experience with NXT is very limited, but I would suggest that you use the string VIs to display some debug data on the NXT's screen (such as i, the distance, etc.). This will allow you to determine where the program is and might help you find the problem.
As a side point, in LV it is generally not recommended to have a loop which doesn't have something controlling its rate of execution. This might be different for code running on the NXT, but I would still suggest adding a simple wait to the loop.
I don't see a mistake in your code, but what I would do when deploying to the NXT target is make the loop infinite (replace the Stop terminal with a False constant) and delete the waveform chart. You don't need them on the NXT.
I fixed this by adding a 200 ms wait block to slow the NXT down. This worked; it seems the brick was getting ahead of itself.
I am writing an application for OS X (Objective-C/Cocoa) that runs a simulation and displays the results to the user. In one case, I want the simulation to run in "real time" so that the user can watch it go by at the same speed it would happen in real life. The simulation is run with a specific timestep, dt. Right now I am using mach_absolute_time() to slow down the simulation. When I profile this code, I see that by far most of my CPU time is spent in mach_absolute_time() and my CPU is pegged at 100%. Am I doing this right? I figured that if I'm slowing the simulation down so that the program isn't simulating anything most of the time, CPU usage should be low, but mach_absolute_time() obviously isn't a free call. Is there a better way?
double nextT = mach_absolute_time();
while (runningSimulation)
{
    if (mach_absolute_time() >= nextT)
    {
        nextT += dt_ns;
        // Compute the next "frame" of the simulation
        // ....
    }
}
Do not spin at all.
That is the first rule of writing GUI apps where battery life and app responsiveness matter.
sleep() or nanosleep() can be made to work, but only if used on something other than the main thread.
A better solution is to use any of the time based constructs in GCD as that'll make more efficient use of system resources.
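For example (a rough sketch only, reusing dt_ns from the question; stepSimulation() is a placeholder for the per-frame work), a repeating GCD timer can wake the process once per timestep instead of spinning:
#include <dispatch/dispatch.h>

void stepSimulation(void);   // placeholder: compute one "frame" of the simulation

static dispatch_source_t startSimulationTimer(uint64_t dt_ns) {
    dispatch_queue_t q =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_source_t timer =
        dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, q);
    dispatch_source_set_timer(timer,
                              dispatch_time(DISPATCH_TIME_NOW, (int64_t)dt_ns),
                              dt_ns,        // fire once per timestep
                              dt_ns / 10);  // small leeway; tighten if needed
    dispatch_source_set_event_handler(timer, ^{
        stepSimulation();
    });
    dispatch_resume(timer);
    return timer;   // cancel and release when the simulation stops
}
Because the timer repeats at a fixed interval, it doesn't accumulate the drift you would get by re-arming a one-shot delay at the end of every frame.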
If you want the simulation to appear smooth to the user, you'll really want to lock the slowed version to the refresh rate of the screen. On iOS, there is CADisplayLink. I don't know of a direct equivalent on the Mac.
You are busy-spinning. If there is a lot of time before you need to simulate again, consider sleeping instead.
But no sleep call guarantees that it will sleep for exactly the duration specified. Depending on how accurate you need to be, you can sleep for a little less than the remaining time and then spin for the rest.
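A rough sketch of that hybrid approach (reusing the deadline idea and mach time units from the question's loop; the 1 ms spin margin is an arbitrary illustrative value, and per the other answer this belongs on something other than the main thread):
#include <mach/mach_time.h>
#include <time.h>

// Sleep for most of the interval, then spin for the final stretch.
// nextT is a deadline in mach time units, as in the question's loop.
static void waitUntilDeadline(uint64_t nextT) {
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);                          // scales mach units to ns
    uint64_t now = mach_absolute_time();
    while (now < nextT) {
        uint64_t remaining_ns = (nextT - now) * tb.numer / tb.denom;
        if (remaining_ns > 1000000) {                 // > ~1 ms left: sleep it off
            uint64_t sleep_ns = remaining_ns - 1000000;
            struct timespec ts = { (time_t)(sleep_ns / 1000000000ULL),
                                   (long)(sleep_ns % 1000000000ULL) };
            nanosleep(&ts, NULL);
        }                                             // inside the last ms: spin
        now = mach_absolute_time();
    }
}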
I'm writing a Cocoa OS X (Leopard 10.5+) end-user program that's using timestamps to calculate statistics for how long something is being displayed on the screen. Time is calculated periodically while the program runs using a repeating NSTimer. [NSDate date] is used to capture timestamps, Start and Finish. Calculating the difference between the two dates in seconds is trivial.
A problem occurs if an end-user or ntp changes the system clock. [NSDate date] relies on the system clock, so if it's changed, the Finish variable will be skewed relative to the Start, messing up the time calculation significantly. My question:
1. How can I accurately calculate the time between Start and Finish, in seconds, even when the system clock is changed mid-way?
I'm thinking that I need a non-changing reference point in time so I can calculate how many seconds has passed since then. For example, system uptime. 10.6 has - (NSTimeInterval)systemUptime, part of NSProcessInfo, which provides system uptime. However, this won't work as my app must work in 10.5.
I've tried creating a time counter using NSTimer, but this isn't accurate. A run loop has several different modes and runs in only one at a time, and an NSTimer is (by default) scheduled in the default run mode. If the user manipulates the UI for long enough, the run loop switches to NSEventTrackingRunLoopMode and stays out of the default mode, which can lead to NSTimer firings being skipped, making it an inaccurate way of counting seconds.
I've also thought about creating a separate thread (NSRunLoop) to run a NSTimer second-counter, keeping it away from UI interactions. But I'm very new to multi-threading and I'd like to stay away from that if possible. Also, I'm not sure if this would work accurately in the event the CPU gets pegged by another application (Photoshop rendering a large image, etc...), causing my NSRunLoop to be put on hold for long enough to mess up its NSTimer.
I appreciate any help. :)
Depending on what's driving this code, you have 2 choices:
For absolute precision, use mach_absolute_time(). It gives you the interval exactly between the points at which you called the function (see the sketch below for converting its result to seconds).
But in a GUI app, this is often actually undesirable. Instead, you want the time difference between the events that started and finished your duration. If so, compare the [[NSApp currentEvent] timestamp] values of the start and finish events.
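For reference, converting a mach_absolute_time() difference to seconds goes through mach_timebase_info(); a minimal sketch (the function name is illustrative):
#include <mach/mach_time.h>

// Seconds elapsed between two mach_absolute_time() readings.
static double elapsedSeconds(uint64_t start, uint64_t finish) {
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);                  // numer/denom scale ticks to nanoseconds
    uint64_t elapsed_ns = (finish - start) * tb.numer / tb.denom;
    return (double)elapsed_ns / 1e9;
}

// Usage:
//   uint64_t start = mach_absolute_time();
//   /* ... later ... */
//   double secs = elapsedSeconds(start, mach_absolute_time());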
Okay, so this is a long shot, but you could try implementing something like NSSystemClockDidChangeNotification, which is available in Snow Leopard.
So bear with me here, because this is a strange idea and is definitely non-deterministic. But what if you had a watchdog thread running for the duration of your program? Every n seconds, this thread would read the system time and store it. For the sake of argument, let's make it 5 seconds. So every 5 seconds, it compares the previous reading to the current system time. If there's a "big enough" difference ("big enough" would definitely need to be greater than 5, but not too much greater, to account for the non-determinism of process scheduling and thread prioritization), post a notification that there has been a significant time change. You would need to play around with the value that constitutes "big enough" (or small enough, if the clock was reset to an earlier time) for your accuracy needs.
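In code, that watchdog might look roughly like this (a sketch only: CFAbsoluteTimeGetCurrent() stands in for "read the system time", and the 2-second slack is an arbitrary fudge factor):
#include <pthread.h>
#include <unistd.h>
#include <math.h>
#include <stdio.h>
#include <CoreFoundation/CoreFoundation.h>

static void *clockWatchdog(void *unused) {
    CFAbsoluteTime last = CFAbsoluteTimeGetCurrent();
    for (;;) {
        sleep(5);                                     // poll every 5 seconds
        CFAbsoluteTime now = CFAbsoluteTimeGetCurrent();
        double gap = now - last;                      // should be roughly 5 s
        if (fabs(gap - 5.0) > 2.0) {
            printf("System clock appears to have changed (gap was %.1f s)\n", gap);
            // post your own "time changed" notification here
        }
        last = now;
    }
    return NULL;
}

// Start it once at launch:
//   pthread_t tid;
//   pthread_create(&tid, NULL, clockWatchdog, NULL);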
I know this is kind of hacky, but barring any other solution, what do you think? Might that, or something like that, solve your issue?
Edit
Okay so you modified your original question to say that you'd rather not use a watchdog thread because you are new to multithreading. I understand the fear of doing something a bit more advanced than you are comfortable with, but this might end up being the only solution. In that case, you might have a bit of reading to do. =)
And yeah, I know that something such as Photoshop pegging the crap out of the processor is a problem. Another (even more complicated) solution would be to, instead of having a watchdog thread, have a separate watchdog process that has top priority so it is a bit more immune to processor pegging. But again, this is getting really complicated.
Final Edit
I'm going to leave all my other ideas above for completeness' sake, but it seems that using the system's uptime will also be a valid way to deal with this. Since [[NSProcessInfo processInfo] systemUptime] only works in 10.6+, you can just call mach_absolute_time(). To get access to that function, just #include <mach/mach_time.h>. That should be the same value as returned by NSProcessInfo.
I figured out a way to do this using the UpTime() C function, provided in <CoreServices/CoreServices.h>. This returns Absolute Time (CPU-specific), which can easily be converted into Duration Time (milliseconds, or nanoseconds). Details here: http://www.meandmark.com/timingpart1.html (look under part 3 for UpTime)
I couldn't get mach_absolute_time() to work properly, likely due to my lack of knowledge on it, and not being able to find much documentation on the web about it. It appears to grab the same time as UpTime(), but converting it into a double left me dumbfounded.
[[NSApp currentEvent] timestamp] did work, but only while the application was receiving NSEvents. If the application went into the background, it wouldn't receive events, and [[NSApp currentEvent] timestamp] would simply keep returning the same old timestamp in the NSTimer firing method, until the end-user decided to interact with the app again.
Thanks for all your help Marc and Mike! You both definitely sent me in the right direction leading to the answer. :)