I am looking for a function to schedule a function call asynchronously, for example presenting an image after 100 ms, 200 ms, and 300 ms, and masking this image at 150 ms, 250 ms, and 350 ms.
I am sure I can do this with delays, but I would prefer to do this asynchronously. I was able to do this in pyepl with timing.timedCall.
To be genuinely 'asynchronous' this would need threads and, as Jonas suggests, those aren't safe for use with OpenGL (your graphics card doesn't know which thread a command is coming from, and if its commands are executed out of order because of two interleaved threads then the results are unpredictable and could lead to a crash).
I'd naturally handle this with a function like
def checkTimes(t, listOfPermissible):
    for start, stop in listOfPermissible:
        if start < t < stop:
            return True  # we found a valid window
    return False  # if we got here there was no valid window
and then in my script I'd have:
targetTimes = [[0.1, 0.15], [0.2, 0.25]]
maskTimes = [[0.18, 0.2], [0.28, 0.3]]

continueTrial = True
while continueTrial:
    t = trialClock.getTime()
    # check if we need target
    if checkTimes(t, targetTimes):
        target.draw()
    # check if we need mask
    if checkTimes(t, maskTimes):
        mask.draw()
    # drawing complete so flip the window
    win.flip()
    # check for response
    keys = event.getKeys()
    if keys:
        continueTrial = False
Jonas is also right, though, that you should use frame numbers instead of clock time if you have brief stimuli and care about them being precisely timed. As a cheeky example, in the code above I've added some impossible times, for example 0.18 (180 ms), which isn't possible with a 60 Hz monitor. In the code above the 0.18 will effectively get rounded up to the next frame and the stimulus will appear at roughly 183 ms (11 frames into the block).
The rest of the logic above (checking in a list of start/stops) should still work just the same though.
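For example, a frame-based version of the loop could look like this (a minimal sketch, assuming a 60 Hz monitor; win.getActualFrameRate() can measure the real rate, and with whole frame numbers you may prefer <= comparisons inside checkTimes):

# convert the windows from seconds to frame numbers, assuming 60 Hz
frameRate = 60
targetFrames = [[round(start * frameRate), round(stop * frameRate)]
                for start, stop in targetTimes]
maskFrames = [[round(start * frameRate), round(stop * frameRate)]
              for start, stop in maskTimes]

frameN = 0
while continueTrial:
    if checkTimes(frameN, targetFrames):
        target.draw()
    if checkTimes(frameN, maskFrames):
        mask.draw()
    win.flip()  # one flip = one frame
    frameN += 1
    if event.getKeys():
        continueTrial = False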
Jon
I don't think that there's currently any way to do this. Previous attempts to run parts of PsychoPy's stimulus presentation in separate threads have failed, as far as I know. It's something about OpenGL not really being robust to this.
If there is a way to display stimuli asynchronously, beware that visual stuff should really be timed in terms of number of frames rather than milliseconds for the durations you're considering. Presenting at e.g. 100 ms could just barely miss the 6th frame, thus showing the image on the 7th frame (after 116.7 ms). This is one of the points where I think many other stimulus presentation software packages mislead the user.
The `psychopy.visual.Window.flip()` method allows for timing using frames.
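For instance, a 100 ms presentation on a 60 Hz monitor is better expressed as 6 frames. A minimal sketch (image and win are assumed to exist already):

# draw for exactly 6 frames: 6 * 16.7 ms ≈ 100 ms at 60 Hz
for frameN in range(6):
    image.draw()
    win.flip()  # blocks until the next screen refresh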
So I have my Q and E keys set to control a camera that is fixed in 8 directions. The problem is that when I call Input.is_action_just_pressed() it returns true twice, so its contents run twice.
This is what it does with the counter:
0 0 0 0 1 1 2 2 2 2
How can I fix this?
if Input.is_action_just_pressed("camera_right", true):
    if cardinal_count < cardinal_index.size() - 1:
        cardinal_count += 1
    else:
        cardinal_count = 0
    emit_signal("cardinal_count_changed", cardinal_count)
On _process or _physics_process
Your code should work correctly - without reporting twice - if it is running in _process or _physics_process.
This is because is_action_just_pressed will return true if the action was pressed in the current frame. By default that means the graphics frame, but the method actually detects whether it is being called in the physics frame or the graphics frame, as you can see in its source code. And by design you only get one call of _process per graphics frame, and one call of _physics_process per physics frame.
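A minimal sketch (rotate_camera is a hypothetical helper):

func _process(_delta):
    if Input.is_action_just_pressed("camera_right", true):
        # Fires exactly once per press, because _process runs
        # once per graphics frame.
        rotate_camera(1)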
On _input
However, if you are running the code in _input, remember that you will get a call of _input for every input event, and there can be multiple in a single frame. Thus, there can be multiple calls of _input where is_action_just_pressed returns true, because they all happen in the same frame (graphics frame, which is the default).
Now, let us look at the proposed solution (from comments):
if event is InputEventKey:
    if Input.is_action_just_pressed("camera_right", true) and not event.is_echo():
        # whatever
        pass
It is testing whether the "camera_right" action was pressed in the current graphics frame. But that press could have come from a different input event than the one currently being processed (event).
Thus, you can fool this code. Press the key configured to "camera_right" and something else at the same time (or fast enough to be in the same frame), and the execution will enter there twice. Which is what we are trying to avoid.
To avoid it correctly, you need to check that the current event is the action you are interested in. Which you can do with event.is_action("camera_right"). Now, you have a choice. You can do this:
if event.is_action("camera_right") and event.is_pressed() and not event.is_echo():
    # whatever
    pass
Which is what I would suggest. It checks that it is the correct action, that it is a press (not a release) event, and that it is not an echo (echoes come from keyboard repetition).
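Putting it together with the counter from the question, the handler could look like this sketch:

func _input(event):
    if event.is_action("camera_right") and event.is_pressed() and not event.is_echo():
        # Exactly one press of camera_right: correct action,
        # press (not release), and not a keyboard-repeat echo.
        if cardinal_count < cardinal_index.size() - 1:
            cardinal_count += 1
        else:
            cardinal_count = 0
        emit_signal("cardinal_count_changed", cardinal_count)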
Or you could do this:
if event.is_action("camera_right") and Input.is_action_just_pressed("camera_right", true) and not event.is_echo():
    # whatever
    pass
Which I'm not suggesting because, first, it is longer, and second, is_action_just_pressed is really not meant to be used in _input, since it is tied to the concept of a frame. The design of is_action_just_pressed is intended to work with _process or _physics_process, NOT _input.
So, apparently there's a built-in method for echo detection:
is_echo()
I'm closing this.
I've encountered the same issue and in my case it was down to the fact that my scene (the one containing the Input.is_action_just_pressed check) was placed in the scene tree, and was also autoloaded, which meant that the input was picked up from both locations and executed twice.
I took it out as an autoload and Input.is_action_just_pressed is now triggered once per input.
I'm working on a 2D video game framework, and I've never written a game loop before. Most frameworks I've looked into seem to implement both draw and update methods.
For my project I implemented a loop that calls these two methods. I noticed that in other frameworks these methods don't always alternate; some frameworks run update far more often than draw. Also, most frameworks of this type run at 60 FPS, so I figure I'll need some sort of sleep in here.
My question is: what is the best method for implementing this type of loop? Do I call draw then update, or vice versa? In my case I'm writing a wrapper around SDL2, so maybe that library requires something to be set up in a certain way?
Here's some "pseudo" code I'm thinking of for the implementation.
loop do
  clear_screen
  draw
  update
  sleep(16.milliseconds)
  break if window_is_closed
end
Though my project is being written in Crystal-Lang, I'm more looking for a general concept that could be applied to any language.
It depends what you want to achieve. Some games prefer the game logic to run more frequently than the frame rate (I believe Source games do this), for some games you may want the game logic to run less frequently (the only example of this I can think of is the servers of some multiplayer games, quite famously Overwatch).
It's important to consider as well that this is a question of resolution, not speed. A game with a logic rate of 120 and a frame rate of 60 is not necessarily running at 2x speed; any time-critical operations within the game logic should be done relative to the clock*, not the tick rate, or your game will literally go into slow motion if the frames take too long to render.
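For example, in the same pseudocode style, anything that moves should scale by the real elapsed time rather than assume a fixed tick length:

# inside update:
elapsed = current_time - time_of_last_update
position += speed * elapsed  # clock-relative, so slow frames don't slow the game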
I would recommend writing a loop like this:
loop do
  time_until_update = (update_interval + time_of_last_update) - current_time
  time_until_draw = (draw_interval + time_of_last_draw) - current_time
  work_done = false

  # Update the game if it's been enough time
  if time_until_update <= 0
    update
    time_of_last_update = current_time
    work_done = true
  end

  # Draw the screen if it's been enough time
  if time_until_draw <= 0
    clear_screen
    draw
    time_of_last_draw = current_time
    work_done = true
  end

  # Nothing to do, sleep for the smaller interval
  if work_done == false
    smaller = time_until_update
    if time_until_draw < smaller
      smaller = time_until_draw
    end
    sleep_for(smaller)
  end

  # Leave, maybe
  break if window_is_closed
end
You don't want to wait for 16 ms every frame, otherwise you might end up over-waiting if the frame takes a non-trivial amount of time to complete. The work_done variable is there so that we know whether the intervals we calculated at the start of the loop are still valid: we may have done 5 ms of work, which would throw our sleeping completely off, so in that scenario we go back around and calculate fresh values.
* You may want to abstract the clock; using the clock directly can have some weird effects. For example, if you save the game and store the last time you used a magical power as a clock time, it will instantly come off cooldown when you load the save, as that moment is now minutes, hours or even days in the past. Similar issues exist with the process being suspended by the operating system.
This is more of an open discussion topic than anything else. Currently I'm storing 50 Float32 values in my NSMutableArray *voltageArray before I refresh my CPTPlot *plot. Every time I obtain 50 values, I remove the previous 50 from the voltageArray and repeat the process....always displaying the 50 values in "real time" on my plot.
However, the data I'm receiving (which is voltage coming from a Cypress BLE module equipped with a pressure transducer) is so quick that any variation (0.4 V to 4.0 V; no pressure to lots of pressure) cannot be seen on my graph. It just shows up as a straight line, varying up and down without showing increased or decreased slopes.
To show overall change, I wanted to take those 50 values, store them in the first index of another NSMutableArray *stampArray and use the index of stampArray to display information. Meanwhile, the numberOfRecordsForPlot: method would look like this:
- (NSUInteger)numberOfRecordsForPlot:(CPTPlot *)plot {
    return (DATA_PER_STAMP * _stampCount);
}
This would initially be 50; then, after 50 pieces of data are captured from the BLE module, _stampCount would increase by one and the number of records for the plot would increase by 50 (until about the 2500-10000 range, then I'd refresh the whole thing and restart the process).
Is this the right approach? How would I be able to make the first 50 points stay on the graph while building the next 50, and so on? Imagine a y = x^2 graph, and what the graph looks like when applying integration (breaking the area under the curve into rectangles).
Look at the "Real Time Plot" demo in the Plot Gallery example app included with Core Plot. It starts off with an empty plot, adding a new point each cycle until reaching the maximum number of points. After that, one old point is removed for each new one added so the total number stays constant. The demo uses a timer to pass random data to the plot, but your app can of course collect data from anywhere. Be sure to always interact with the graph from the main thread.
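In outline, the per-sample update could look like this sketch (kMaxPoints and appendVoltage: are hypothetical names; the datasource is assumed to read from _voltageArray):

// Called on the main thread each time a new value arrives.
- (void)appendVoltage:(NSNumber *)value {
    if (_voltageArray.count >= kMaxPoints) {
        // Drop the oldest point so the total stays constant.
        [_voltageArray removeObjectAtIndex:0];
        [_plot deleteDataInIndexRange:NSMakeRange(0, 1)];
    }
    [_voltageArray addObject:value];
    [_plot insertDataAtIndex:_voltageArray.count - 1 numberOfRecords:1];
}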
I doubt you'll be able to display 10,000 data points on one plot (does your display have enough pixels to resolve that many points?). If not, you'll get much better drawing performance if you filter and/or smooth the data to remove some of the points before sending them to the plot.
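A simple reduction, for example, is to average each block of k raw samples into one plotted point (a sketch; raw is a hypothetical NSArray of NSNumbers and k is a tuning choice):

NSUInteger k = 4;
NSMutableArray *reduced = [NSMutableArray array];
for (NSUInteger i = 0; i + k <= raw.count; i += k) {
    double sum = 0;
    for (NSUInteger j = 0; j < k; j++) {
        sum += [raw[i + j] doubleValue];
    }
    [reduced addObject:@(sum / k)]; // one smoothed point per k samples
}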
Working on the following filter, I am having a problem translating this piece of image-processing code to run on the GPU:
for (int h = 0; h < height; h++) {
    for (int w = 1; w < width; w++) {
        image[h][w] = (1 - a) * image[h][w] + a * image[h][w - 1];
    }
}
If I define:
dim3 threads_perblock(32, 32)
then in each block I have 32 threads that can communicate with each other. The threads of this block cannot communicate with the threads of other blocks.
Within a thread block, I can translate that piece of code using shared memory. However, at the edges (I would say) there is a problem: image[0][31] and image[0][32] fall in different thread blocks, and image[0][32] should get the value from image[0][31] to calculate its own value, but they are in different thread blocks.
So that is the problem.
How would I solve this?
Thanks in advance.
If image is in global memory then there is no problem - you don't need to use shared memory and you can just access pixels directly from image without any problem.
However if you have already done some processing prior to this, and a block of image is already in shared memory, then you have a problem, since you need to do neighbourhood operations which are outside the range of your block. You can do one of the following - either:
write shared memory back to global memory so that it is accessible to neighbouring blocks (disadvantage: performance, synchronization between blocks can be tricky)
or:
process additional edge pixels per block with an overlap (1 pixel in this case) so that you have additional pixels in each block to handle the edge cases, e.g. work with a 34x34 block size but store only the 32x32 central output pixels (disadvantage: requires additional logic within kernel, branches may result in warp divergence, not all threads in block are fully used)
Unfortunately neighbourhood operations can be really tricky in CUDA and there is always a down-side whatever method you use to handle edge cases.
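For the global-memory case, note that the w-1 recurrence makes each row inherently sequential, but the rows are independent of each other, so one thread per row is a natural mapping. A minimal sketch, assuming the image is flattened into a single float array:

__global__ void rowFilterKernel(float *image, int width, int height, float a)
{
    int h = blockIdx.x * blockDim.x + threadIdx.x;
    if (h >= height) return;

    // The dependency on w-1 forces a sequential walk along the row;
    // parallelism comes from filtering all rows at once.
    for (int w = 1; w < width; w++) {
        image[h * width + w] = (1.0f - a) * image[h * width + w]
                             + a * image[h * width + w - 1];
    }
}

// Launch with one thread per row, e.g.:
// rowFilterKernel<<<(height + 255) / 256, 256>>>(d_image, width, height, a);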
You can just use a busy spin (no joke). Just make the thread processing a[32] execute:
while(!variable);
before starting to compute and the thread processing a[31] do
variable = 1;
when it finishes. It's up to you to generalize this. I know this is considered "rogue programming" in CUDA, but it seems to be the only way to achieve what you want. I had a very similar problem and it worked for me. Your performance might suffer, though...
Be careful however, that
dim3 threads_perblock(32, 32)
means you have 32 x 32 = 1024 threads per block.
I would like to know if anyone knows how to perform a cross-correlation between two audio signals on iOS.
I would like to align the FFT windows that I get at the receiver (I am receiving the signal from the mic) with the ones at the transmitter (which is playing the audio track), i.e. make sure that the first sample of each window (apart from a "sync" period) at the transmitter will also be the first sample of the corresponding window at the receiver.
I injected into every chunk of the transmitted audio a known waveform (in the frequency domain). I want to estimate the delay through cross-correlation between the known waveform and the received signal (over several consecutive chunks), but I don't know how to do it.
It looks like there is the method vDSP_convD to do it, but I have no idea how to use it and whether I first have to perform the real FFT of the samples (probably yes, because I have to pass double[]).
void vDSP_convD(
    const double __vDSP_signal[],
    vDSP_Stride __vDSP_signalStride,
    const double __vDSP_filter[],
    vDSP_Stride __vDSP_strideFilter,
    double __vDSP_result[],
    vDSP_Stride __vDSP_strideResult,
    vDSP_Length __vDSP_lenResult,
    vDSP_Length __vDSP_lenFilter
);
The vDSP_convD() function calculates the convolution of the two input vectors to produce a result vector. It’s unlikely that you want to convolve in the frequency domain, since you are looking for a time-domain result — though you might, if you have FFTs already for some other reason, choose to multiply them together rather than convolving the time-domain sequences (but in that case, to get your result, you will need to perform an inverse DFT to get back to the time domain again).
Assuming, of course, I understand you correctly.
Then, once you have the result from vDSP_convD(), you would want to look for the highest value, which will tell you where the signals are most strongly correlated. You might also need to cope with the case where the input signal does not contain enough of your reference signal, and in that case you may wish to (for example) ignore values in the result vector below a certain level.
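In outline, the call could look like this sketch (not tested; received holds n mic samples, reference holds the m-sample known waveform, and with a positive filter stride vDSP_convD performs correlation rather than convolution):

// n - m + 1 valid lags for an n-sample signal and an m-sample reference
vDSP_Length resultLen = n - m + 1;
double *result = malloc(resultLen * sizeof(double));

// A positive filter stride (+1) selects correlation.
vDSP_convD(received, 1, reference, 1, result, 1, resultLen, m);

// The index of the peak is the estimated delay in samples.
double peak;
vDSP_Length delay;
vDSP_maxviD(result, 1, &peak, &delay, resultLen);
free(result);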
Cross-correlation is the solution, yes. But there are many obstacles you need to handle. If you get samples from the audio files, they contain padding which the cross-correlation function does not like. It is also very inefficient to perform correlation with all those samples; it takes a huge amount of time. I have made some sample code which demonstrates the time shift of two audio files. If you are interested in the sample, look at my GitHub project.