set fixation events in psychopy/ioHub for eyelink dataviewer - psychopy

I am using PsychoPy and ioHub to collect eyetracking data with the EyeLink (SR Research) system. I would like to be able to set two things up: fixation events (where fixation for 100 ms at a certain point is required for the next part of the task/new trial to occur) and "interest areas": basically, pre-defined regions so that I can analyze gaze duration in specific areas. The code I'm using is just the generic stuff that comes with PsychoPy for eyetracking (I'm no coding expert), and I can't figure out how to modify it to do these two things.
Thanks!

In your question, "fixation events" and "interest areas" seem to be effectively the same thing from a calculation point of view.
I guess the essence is that on every frame, you check the current gaze position and monitor whether a fixation within the relevant AOI has lasted at least 100 ms, or whatever duration is required.
I'm assuming you're using Builder.
Pseudo-code:
Begin Routine:

fixation_started = False

Each Frame:

if gaze position is in AOI:  # pseudo-code
    if not fixation_started:
        fixation_start_time = t
        fixation_started = True
    # else fixation has started, so check duration:
    elif t - fixation_start_time > 0.100:
        # do whatever, as this fixation has exceeded 100 ms
else:  # subject is looking elsewhere:
    fixation_started = False
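Translated into something closer to runnable code for an Each Frame tab, here is a minimal sketch. The names `eyetracker` and `aoi` are placeholders for your own ioHub eyetracker device and a PsychoPy shape stimulus marking the interest area; `getLastGazePosition()` is the ioHub common-interface call, and `contains()` is the standard stimulus hit-test:

# Each Frame (sketch): `eyetracker` and `aoi` are placeholder names for
# your ioHub eyetracker device and a PsychoPy shape stimulus
# (e.g. visual.Polygon) marking the interest area.
gaze = eyetracker.getLastGazePosition()
# the call returns None when no valid sample is available, else an (x, y) position
if isinstance(gaze, (tuple, list)) and aoi.contains(gaze):
    if not fixation_started:
        fixation_start_time = t
        fixation_started = True
    elif t - fixation_start_time > 0.100:
        continueRoutine = False  # 100 ms of fixation reached: move on
else:
    fixation_started = False

For offline analysis of gaze duration per region, the same `contains()` check can be run over the recorded samples for each of your pre-defined shapes.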

Related

PsychoPy: make trials happen with keypress until routine ends at time x

I have a routine with multiple possible trials (in a loop), and the participant must press a key to go to the next trial. I want these trials to go on (with keypresses) until a certain amount of time has passed in the routine, say 5 seconds, regardless of how many possible trials are left in my conditionsFile. At the 5-second mark, the routine should end.
(If this is relevant: each trial is only displayed once.)
I've tried a few things, all of which only end up making the routine skip to the next trial after 5 seconds, but still going through all the possible trials, i.e. not ending the routine at 5 s.
For example, I've tried adding:
# ------Prepare to start Routine "Images"-------
continueRoutine = True
routineTimer.add(5)
and then
# -------Run Routine "Images"-------
while continueRoutine and routineTimer.getTime() > 0:
I've also tried adding this inside the while loop from above:
if image.status == STARTED:
    # is it time to stop? (based on global clock, using actual start)
    if tThisFlipGlobal > image.tStartRefresh + 5.0 - frameTolerance:
        # keep track of stop time/frame for later
        image.tStop = t  # not accounting for scr refresh
        image.frameNStop = frameN  # exact frame index
        win.timeOnFlip(image, 'tStopRefresh')  # time at next scr refresh
        image.setAutoDraw(False)
Can anyone help?
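A minimal sketch of one approach, using a code component (the loop name `trials` and the clock `imagesClock` are assumptions; rename them to match your experiment). The idea is to time the whole loop with one clock rather than the per-routine timer, and to tell the loop itself to finish:

# Begin Experiment (sketch):
from psychopy import core
imagesClock = core.Clock()  # hypothetical clock spanning the whole loop

# Begin Routine, in a zero-length setup routine placed just before the loop:
imagesClock.reset()

# Each Frame, in the "Images" routine (assumes the loop is named `trials`):
if imagesClock.getTime() >= 5.0:
    trials.finished = True    # don't run any remaining trials
    continueRoutine = False   # end the current routine now

Setting trials.finished = True is what stops the loop from continuing through the remaining rows of the conditionsFile; ending only the routine (as in the attempts above) just advances to the next trial.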

GtkTreeView stops updating unless I change the focus of the window

I have a GtkTreeView object that uses a GtkListStore model that is constantly being updated as follows:
1. Get new transaction
2. Feed data into numpy array
3. Convert numbers to formatted strings, store in pandas dataframe
4. Add updated token info to GtkListStore via GtkListStore.set(titer, liststore_cols, liststore_data), where liststore_data is the updated info and liststore_cols identifies the columns (both are lists).
Here's the function that updates the ListStore:
# update ListStore
titer = ls_full.get_iter(row)
liststore_data = [df.at[row, col] for col in my_vars['ls_full'][3:]]
# check for NaN values, add a " " (space) placeholder if necessary
for i in range(3, len(liststore_data)):
    if liststore_data[i] != liststore_data[i]:  # NaN is the only value that != itself
        liststore_data[i] = " "
liststore_cols = [my_vars['ls_full'].index(col) + 1
                  for col in my_vars['ls_full'][3:]]
ls_full.set(titer, liststore_cols, liststore_data)
Class that gets the messages from the websocket:
class MyWebsocketClient(cbpro.WebsocketClient):
    # class extensions to WebsocketClient
    def on_open(self):
        # sets up ticker symbol, subscriptions for socket feed
        self.url = "wss://ws-feed.pro.coinbase.com/"
        self.channels = ['ticker']
        self.products = list(cbp_symbols.keys())

    def on_message(self, msg):
        # gets the latest message from the socket, sends it off to be processed
        if "best_ask" in msg and "time" in msg:
            # checks to see if token price has changed before updating
            update_needed = parse_data(msg)
            if update_needed:
                update_ListStore(msg)
        else:
            print(f'Bad message: {msg}')
When the program first starts, the updates are consistent. Each time a new transaction comes in, the screen reflects it, updating the proper token. However, after a random amount of time - I've seen anywhere from 5 minutes to over an hour - the screen will stop updating unless I change the focus of the window (either activating or deactivating it). This does not last long, though (only long enough to update the screen once). No other errors are being reported, and memory usage is not spiking (constant at 140 MB).
How can I troubleshoot this? I'm not even sure where to begin. The data back-ends seem to be OK (the data is never corrupted, nor does it lag behind).
As you've said in the comments that it is running in a separate thread, I'd suggest wrapping your "update liststore" function with GLib.idle_add:
from gi.repository import GLib
GLib.idle_add(update_liststore)
I've had similar issues in the past and this fixed things. Sometimes updating the liststore is fine; sometimes it will randomly spew errors.
Basically, only one thread should update the GUI at a time. So by wrapping the call in GLib.idle_add() you make sure your background thread does not interfere with the main thread updating the GUI.
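Applied to the on_message handler from the question, that would look something like this (a sketch; GLib.idle_add passes any extra arguments straight through to the callback):

from gi.repository import GLib

# inside MyWebsocketClient:
def on_message(self, msg):
    if "best_ask" in msg and "time" in msg:
        update_needed = parse_data(msg)
        if update_needed:
            # schedule the GUI update on the GTK main loop instead of
            # touching the ListStore from the websocket thread
            GLib.idle_add(update_ListStore, msg)
    else:
        print(f'Bad message: {msg}')

Since update_ListStore returns None, the idle callback runs once and is then removed; a callback that returns True would be called repeatedly.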

How to interpret the RabbitMQ Message stats?

I want to get and historize queue metrics for "Enqueued, Dequeued and Size" (terminology formerly encountered on ActiveMQ).
The moving charts provided in the management plugin are not enough for the monitoring that I need to do.
So with RabbitMQ, I'm getting data from https://rabbitmq-server:15672/api/queues/myvhost
This returns JSON; for a queue, I can obtain real-life production data like:
"messages":0, // for "Size"
"message_stats":{
"deliver_get":171528, // for "Dequeued"
"ack":162348,
"redeliver":9513,
"deliver_no_ack":0,
"deliver":171528,
"get":0,
"publish":51293 // for "Enqueued"
(...)
I'm particularly surprised by the publish counter:
Its value can even decrease between two measures taken a couple of minutes apart! (see sample chart around 17:00)
As you can see in my data, deliver_get is significantly larger than publish.
https://my-rabbitmq:15672/doc/stats.html doesn't give a lot of details that could explain what I actually observe.
Also, under the message_stats object that I obtain, I'm missing some counters like confirm and return, which could be related to the enqueuing.
Are there relationships between these metrics? (e.g. deliver_get + messages = redeliver + publish, but that one doesn't match my figures)
Is there any more detailed documentation about these metrics?
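For the historizing part, here is a minimal polling sketch (the endpoint comes from the question; the guest/guest credentials and the 60 s interval are assumptions). The counters under message_stats are cumulative totals, so one common approach is to sample them periodically and store the deltas; note that if the management stats database restarts, the totals reset, which would make a delta negative:

import time
import requests

URL = "https://rabbitmq-server:15672/api/queues/myvhost"
prev = {}

while True:
    for q in requests.get(URL, auth=("guest", "guest")).json():
        stats = q.get("message_stats", {})   # absent on idle queues
        enq = stats.get("publish", 0)        # "Enqueued" total
        deq = stats.get("deliver_get", 0)    # "Dequeued" total
        if q["name"] in prev:
            d_enq = enq - prev[q["name"]][0]  # enqueued since last sample
            d_deq = deq - prev[q["name"]][1]  # dequeued since last sample
            print(q["name"], q["messages"], d_enq, d_deq)  # "Size" + deltas
        prev[q["name"]] = (enq, deq)
    time.sleep(60)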

Generating parallel port triggers upon detection of a vocal response

I've created an experiment in PsychoPy Builder in which participants must vocally name pictures presented onscreen (for example, if a picture of a chair appears, the participant has to respond by saying "chair"). I've set up a code component to detect each vocal response, which ends the trial and initiates the next one. This part of the experiment works well; however, I'm having trouble integrating EEG recording.
Some important information:
My trial loop reads images and triggerVals out of a .csv file. I have an image component (called english_naming) that displays images for participants to name out loud. The component's STOP field is defined as $vpvk.event_onset - this forces the trial to end and the next one to begin upon detection of a vocal response.
So, here is my (working) code component at present:
Begin Experiment:

from psychopy import parallel
import psychopy.voicekey as vk
port = parallel.ParallelPort(address=61432)

Begin Routine:

vpvk = vk.OnsetVoiceKey(sec=10)  # creates the voice key
vpvk.start()  # starts recording
port.setData(triggerVal)  # sends this trial's trigger value (read from the .csv) to the port

End Routine:

vpvk.stop()  # ends the recording
port.setData(0)  # resets the trigger value to 0 for the start of the next trial
My problem is this: at present, parallel port events are time-locked to the start of each trial, but I need them to be time-locked to participants' vocal responses. I tried inserting if vpvk.event_onset(): above port.setData(triggerVal), but this fails to generate any trigger codes at all. I've also tried if english_naming == FINISHED, but the same problem occurred. I've tried a bunch of variants on these two lines of code, but nothing I can think of seems to work.
I would really really appreciate any advice on this problem. Thanks in advance!
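One detail that might matter: the STOP field uses $vpvk.event_onset without parentheses, which suggests event_onset is an attribute rather than a method, so calling it as vpvk.event_onset() would fail. A minimal Each Frame sketch along those lines (trigger_sent is a hypothetical flag you would set to False in Begin Routine):

# Each Frame (sketch): fire the trigger once, as soon as an onset is detected
if not trigger_sent and vpvk.event_onset:
    port.setData(triggerVal)  # time-lock the trigger to the vocal onset
    trigger_sent = True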

Is it possible to set dynamic download delay in scrapy?

I know that a constant delay can be set in
settings.py
DOWNLOAD_DELAY = 2
However, if I set the delay to 2 s it is not efficient enough. If I set DOWNLOAD_DELAY = 0, the crawler is able to crawl about 10 pages; after that, the target site returns something like "you are requesting too frequently".
What I want to do is keep the download delay at 0; once the "requesting too frequently" message is found in the HTML, change the delay to 2 s, and after a while switch back to zero.
Is there any module that can do this, or any other better idea to handle such a case?
Update:
I found that there is an extension called AutoThrottle, but can it be customized with some logic like this?

if "requesting too frequently" is found:
    increase the DOWNLOAD_DELAY
If, right after you get the anti-spider page, you can get a data page within 2 seconds, then what you are asking probably requires writing a downloader middleware that checks for the anti-spider page, resets all scheduled requests to a renew-queue, and starts a looping call when the spider is idle to pull requests from the renew-queue (the looping interval is your hack for a new download delay). Then try to decide when the download delay is no longer necessary (this requires some tests), stop the looping, and reschedule all the requests in the renew-queue back to the Scrapy scheduler. You would need to use a Redis queue in the case of a distributed crawl.
With the download delay set to 0, in my experience throughput can easily go above 1000 items/min. If the anti-spider page pops up after 10 responses, then it is not worth the effort.
Instead, maybe you can find out how fast your target server allows you to crawl (maybe 1.5 s, 1 s, 0.7 s, 0.5 s, etc.) and redesign your product to take into consideration the throughput your crawler can achieve.
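A minimal sketch of the middleware idea above, without the renew-queue: it re-schedules the banned request and adjusts the per-slot delay directly (the same mechanism AutoThrottle uses). The ban text, the 2 s back-off, and the slot access through the engine are all assumptions; downloader slots are not a stable public API, so this may need adjusting between Scrapy versions:

class BanBackoffMiddleware:
    BAN_TEXT = b"requesting too frequently"  # adjust to the real ban message

    def __init__(self, crawler):
        self.crawler = crawler

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_response(self, request, response, spider):
        # 'download_slot' is the meta key Scrapy uses to record which
        # downloader slot (i.e. which domain) handled this request
        key = request.meta.get("download_slot")
        slot = self.crawler.engine.downloader.slots.get(key)
        if slot is not None:
            if self.BAN_TEXT in response.body:
                slot.delay = 2.0            # back off
                request.dont_filter = True
                return request              # retry the banned request
            slot.delay = 0.0                # page looks fine: full speed
        return response

Enable it under DOWNLOADER_MIDDLEWARES in settings.py; the module path and priority below are placeholders, e.g. {"myproject.middlewares.BanBackoffMiddleware": 543}.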
You can use the AutoThrottle extension now. It is turned off by default. You can add these parameters to your project's settings.py file to enable it:
AUTOTHROTTLE_ENABLED = True
# The initial download delay
AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
AUTOTHROTTLE_MAX_DELAY = 300
# The average number of requests Scrapy should be sending in parallel to
# each remote server
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
AUTOTHROTTLE_DEBUG = True
Yes, you can use the time module to set a dynamic delay between operations.

import time

for i in range(10):
    # ... operation 1 ...
    time.sleep(i)  # pause for i seconds before operation 2
    # ... operation 2 ...

Now you can see the delay between operation 1 and operation 2.
Note:
The variable i is in seconds.