PsychoPy: Displaying in the main process and timing through a subprocess

I'm using PsychoPy code that was written by a previous PhD student in the lab. The code displays stimuli (a random dot kinematogram) and uses a subprocess for precise timing.
The subprocess is created with the following line:
process = subprocess.Popen(['python', 'LPTmat.py'], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
Then when the first frame is displayed, the code writes a number to tell the subprocess to begin timing:
if first_frame:
    process.stdin.write('%i\n' % 1)  # start the timer and the parallel-port check
    first_frame = False
The subprocess then starts the timer and records the button press (on the parallel port):
while True:
    input = sys.stdin.readline()
    if input == '1\n':
        timer.reset()
        parallel_port_state = ctypes.windll.inpout32.Inp32(lptno)
        while parallel_port_state == etat_repos:  # poll until the port leaves its resting state
            parallel_port_state = ctypes.windll.inpout32.Inp32(lptno)
        lecture_time = timer.getTime()
        if parallel_port_state in port_gauche:  # left button
            send_trigger(1)
        elif parallel_port_state in port_droit:  # right button
            send_trigger(2)
        np.savetxt('mat.txt', mat)  # a button has been pressed
The main process then detects the txt file and stops the stimulus presentation:
mtext = os.path.exists('mat.txt')
if mtext:  # if the mat.txt file exists, a button has been pressed
    myWin.flip()  # black screen
    process.stdin.write('%i\n' % 3)
    check = process.stdout.readline()  # read which button has been pressed
    check = int(check)
It then reads back the response time recorded by the subprocess and removes the txt file that was created:
process.stdin.write('%i\n' % 2)
RT = process.stdout.readline()
RT = float(RT)
rep.rt = RT
os.remove('mat.txt')
The problem is that the .txt file is not a very clean way to do the job, so I was wondering whether there is another way to use this subprocess and to tell the main process that a response has been made?
Cheers,
Gabriel

Asynchronous device polling was one of the main reasons that led to the development of ioHub, which was merged into PsychoPy about a year ago. Essentially, ioHub creates a new Python process that only interfaces with your external devices, while the main PsychoPy process can continue to present stimuli, uninterrupted by device polling. At any desired time, for example after time-critical phases of stimulus generation and presentation have passed, the PsychoPy process can request the collected data from the ioHub process.
(Please also note that an ordinary Python thread never truly runs in parallel in CPython because of the GIL; that is why ioHub creates a whole new process, not just another thread.)
ioHub already supports many different types of hardware and interfaces (serial ports, eye trackers, different types of response button boxes, etc.). Unfortunately, to my knowledge no parallel port support has been integrated to date.
But do not despair! I see there is already support for LabJack devices, which have replaced the steadily disappearing parallel port in many psychophysics and electrophysiology labs. These devices are relatively cheap (about USD 300) and can be connected to modern computers via a USB port. ioHub has a ready-to-use interface for the LabJacks, which is also demonstrated in this demo.
Another alternative, of course, would be to develop your own parallel port interface for ioHub. But given the vanishing popularity, availability, and therefore applicability of this interface, I am wondering whether this is really worth the effort.

Related

Advice on writing a custom bootloader for stm32 MCU

Assume there are two boards with STM32 microcontrollers which are connected to each other over RS-485 (each board has a UART-to-RS-485 transceiver). This is the connection diagram and the accessible ports:
I want to be able to re-program each board separately using the RS-485 wires that are available. Using the ST system bootloader and the BOOT0 pin is not an option, because it would require changing the PCB and re-wiring the system.
So I need to write my own custom bootloader. What I intend to do is to separate the flash memory of the B-1 MCU into three parts:
20KB for bootloader
120KB for B-2 application (as kind of a buffer)
360KB for B-1 application (bootloader jumps to this part after finishing boot mode)
and for B-2, two partitions:
20KB for bootloader
100KB for main application
and, using the UART interface of B-1, I can load the .hex files into the specified flash area and tell the MCU what to do with it (either use it as its own main app or send it to B-2).
Here is a brief algorithm for the bootloader:
// B1: starts from bootloader
for (5 seconds) {
    // check for the boot command from UART
    if (command received) {
        // send boot and reset command to B-2 and receive ack
        // start the boot mode
        while (boot mode not aborted) {
            // receive command packet containing address
            if (command == header && address == B1) {
                // prompt for the main .hex file and store it in the B-1 partition
            } else if (command == header && address == B2) {
                // prompt for the main .hex file and store it in the B-2 partition
                // send header and the app to B-2 using the RS-485 port
            } else if (command == abort) {
                // break from the while loop
            }
        }
    } else {
        // do nothing
    }
}
// jump to main application
Now I have 2 concerns:
Since there is no GPIO connection between B-1 and B-2 to activate boot mode for B-2, is it possible for the B-2 board to set a flag in flash memory outside of its main application area, and then check it and use it as a boot-mode activation flag?
Is it possible to write the stream of data directly from uart input to the flash memory area of each application? like this:
// Obviously this address is outside the flash area of the bootloader currently running
uint8_t *appAddress = (uint8_t *)B2_APP_FLASH_MEMORY_PARTITION_START_ADDRESS;
for (uint32_t i = 0; i < size_of_app; i++) {
    hal_uart_receive(&uartPort, appAddress + i, 1, timeout);
}
Since there is no GPIO connection between B-1 and B-2 to activate boot mode for B-2, is it possible for the B-2 board to set a flag in flash memory outside of its main application area, and then check it and use it as a boot-mode activation flag?
I am not sure what you are suggesting, or how your suggestion would solve the problem. Ideally you would have a means from B1 of directly resetting B2 via its /RESET line, but failing that, if B2 is loaded with an application that accepts a reset command or signal over the RS-485, you can have it issue a soft reset to start the bootloader. On Cortex-M devices you can do that through the NVIC.
If you need to communicate information to the B2 bootloader - perhaps to either invoke an update or to bypass that and boot the application - you need not program flash memory for that; you can simply write a boot command or signature to a reserved area of SRAM (best placed right at the top) that is not initialised by the runtime start-up (or whose content you capture before such initialisation). Content in SRAM survives a reset as long as power is maintained, so it can be used to communicate between the application and the bootloader, in both directions. A minimal sketch of this idea follows.
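For illustration only, here is a minimal sketch (not from the original answer) of that SRAM-signature mechanism combined with a CMSIS soft reset. The .noinit section name, the magic value and the device header are assumptions: the linker script must place .noinit in RAM and the startup code must not clear it.

/* Sketch only: assumed GCC section attribute and CMSIS NVIC_SystemReset(). */
#include <stdint.h>
#include "stm32f1xx.h"   /* placeholder device header providing NVIC_SystemReset() */

#define BOOT_UPDATE_MAGIC  0xB00710ADu

/* Placed in a linker section that the startup code does not zero-initialise. */
__attribute__((section(".noinit"))) static volatile uint32_t boot_signature;

/* Application side: request an update, then restart into the bootloader. */
void request_firmware_update(void)
{
    boot_signature = BOOT_UPDATE_MAGIC;
    NVIC_SystemReset();                    /* soft reset via SCB->AIRCR */
}

/* Bootloader side: decide whether to stay in update mode after reset. */
int update_requested(void)
{
    if (boot_signature == BOOT_UPDATE_MAGIC) {
        boot_signature = 0;                /* consume the flag so the next reset boots normally */
        return 1;
    }
    return 0;
}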
This is of course a bootstrap issue - what if there is no application loaded to accept a reset command, or the application is not valid or complete (programming was interrupted)? Well, the relocated application area will have its vector table, including its initial SP and reset vector, right at the start. In your bootloader, when the first 8 bytes of the image are received, you hold them back and do not program that area until the rest of the image has been written. By programming the reset vector last, if the programming is interrupted, that location will not contain a valid address. The bootloader can validate it (or check whether it is in the erased state) and, if it is not valid or not yet written, wait indefinitely for an update or simply reset and repeat the update polling. Be aware of a bit of an STM32 gotcha here though - most parts erase flash to the "all-ones" (0xFF) state, but some (STM32L parts) erase to all-zeroes. Of course you could simply check for 0x00000000 or 0xFFFFFFFF, since neither would be a valid start address, or explicitly check the address range. The sketch below illustrates the idea.
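A rough sketch of that "program the vector table last" scheme; flash_write() and the partition addresses are placeholders, not a real HAL API, and erase handling is omitted.

#include <stdint.h>
#include <string.h>

#define APP_BASE    0x08005000u                /* assumed start of the application partition */
#define APP_RAM_LO  0x20000000u                /* assumed valid range for the initial SP */
#define APP_RAM_HI  0x20020000u

extern void flash_write(uint32_t addr, const uint8_t *data, uint32_t len);  /* placeholder driver */

static uint8_t first_words[8];                 /* initial SP + reset vector, held back */

void image_rx_chunk(uint32_t offset, const uint8_t *data, uint32_t len)
{
    if (offset < sizeof first_words) {
        uint32_t n = sizeof first_words - offset;
        if (n > len)
            n = len;
        memcpy(&first_words[offset], data, n); /* hold back the first 8 bytes */
        offset += n;
        data   += n;
        len    -= n;
    }
    if (len)
        flash_write(APP_BASE + offset, data, len);
}

void image_rx_end(void)
{
    flash_write(APP_BASE, first_words, sizeof first_words);  /* commit the vector table last */
}

/* Boot-time validation: the reset vector must look like a plausible Thumb address. */
int app_is_valid(void)
{
    uint32_t sp    = *(volatile uint32_t *)APP_BASE;
    uint32_t entry = *(volatile uint32_t *)(APP_BASE + 4u);

    if (entry == 0x00000000u || entry == 0xFFFFFFFFu)
        return 0;                              /* erased, or programming was interrupted */
    return (sp >= APP_RAM_LO && sp <= APP_RAM_HI) && (entry & 1u);
}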
Is it possible to write the stream of data directly from uart input to the flash memory area of each application? like this:
Yes, but remember that on an STM32 the code normally executes from the same flash memory you are trying to program, and the bus stalls during flash write and erase operations, so if you are executing from flash, execution halts. A page erase can take several milliseconds - on parts with large pages, even several hundred milliseconds. That may mean you fail to read characters on the UART if you are using polling or interrupts.
You can overcome this issue by protocol design. For example, if you use a packet protocol where the last packet must be acknowledged before the next one is sent, you can use that as flow control: once you have collated a block of data to be written, you simply delay the acknowledgement of the last packet until after you have erased and/or written the data to flash. XMODEM-1K is a suitable protocol for that, and despite its flaws, its simplicity and its support in common terminal emulator applications make it a good choice for this application. A sketch of that receive-write-acknowledge loop follows.
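The following is only a sketch of using the acknowledgement as flow control; xmodem_read_block(), flash_write_block() and uart_put() are placeholder names, not an existing library, and end-of-transfer handling is omitted.

#include <stdint.h>

#define ACK 0x06
#define NAK 0x15

extern int  xmodem_read_block(uint8_t *buf, uint32_t len);   /* 0 when sequence and CRC check out */
extern void flash_write_block(uint32_t addr, const uint8_t *buf, uint32_t len);
extern void uart_put(uint8_t c);

void receive_image(uint32_t dest)
{
    uint8_t block[1024];

    for (;;) {                                 /* EOT / end-of-transfer handling omitted */
        if (xmodem_read_block(block, sizeof block) != 0) {
            uart_put(NAK);                     /* bad block: ask the sender to repeat it */
            continue;
        }
        /* The erase/program below stalls the CPU, but the sender sits idle
         * until it sees the ACK, so no UART characters are lost meanwhile. */
        flash_write_block(dest, block, sizeof block);
        dest += sizeof block;
        uart_put(ACK);                         /* releases the sender to transmit the next block */
    }
}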
Now, all that said, you can increase the flash available to B1 by not buffering the image for B2 on B1 at all and simply implementing a bi-directional pass-through, such that input on the UART of B1 is passed directly to B1's RS-485 output (surely also a UART, so your port naming is ambiguous), and B1's RS-485 input is passed directly to the UART output. That way B1 becomes "transparent" and the update tool will appear to be communicating directly with B2. That is perhaps far simpler and faster, and if the bootloader is "fail-safe" as described above, it will still allow retries after an interruption (see the sketch at the end of this answer).
The pass-through function might be part of B1's application or a "mode" of the bootloader. The pass-through mode might be invoked by a particular boot command or you might have the application pass a "boot mode" command via the SRAM mechanism described earlier.
In either case ideally you would have identical bootloader code on both B1 and B2 for simplicity and flexibility. There is no reason why that should not be the case; they are both receiving the updates over UART.
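To illustrate the pass-through idea, here is a minimal polling sketch; the uart_pc_* and uart_485_* functions are placeholder driver calls, not a real HAL, and the exit condition is left out.

#include <stdint.h>

extern int     uart_pc_rx_ready(void);        /* UART facing the update tool */
extern uint8_t uart_pc_read(void);
extern void    uart_pc_write(uint8_t c);

extern int     uart_485_rx_ready(void);       /* UART driving the RS-485 transceiver */
extern uint8_t uart_485_read(void);
extern void    uart_485_write(uint8_t c);

void passthrough_mode(void)
{
    for (;;) {                                /* leave on a timeout or escape sequence (omitted) */
        if (uart_pc_rx_ready())
            uart_485_write(uart_pc_read());   /* host -> B2 */
        if (uart_485_rx_ready())
            uart_pc_write(uart_485_read());   /* B2 -> host */
    }
}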

OpenThread otJoinerStart Never Times Out

I am trying to integrate an OpenThread child with an existing application on the TI CC2652R1 and am having issues trying to join/create a Thread network. Currently I have an external event that calls a function to join and start OpenThread. Below is a snippet of this function relating to the join:
bool is_commissioned = otDatasetIsCommissioned(OtStack_instance);
otJoinerState joiner_state = otJoinerGetState(OtStack_instance);
if (!is_commissioned && (OT_JOINER_STATE_IDLE == joiner_state)) {
    otError error = otIp6SetEnabled(OtStack_instance, true);
    error = otThreadSetEnabled(OtStack_instance, true);
    error = otJoinerStart(OtStack_instance, "PSK", NULL, "Company", "Device", "0.0.0", NULL,
                          joiner_callback, NULL);
}
otJoinerStart never seems to resolve, because the joiner callback is never called, and additional calls to my joining function show that the joiner state is OT_JOINER_STATE_DISCOVER and that the OpenThread instance says it is initialized. Is there a way to set the joiner callback timeout? I have looked through the documentation and could not find out how the join timeout is set.
Thanks
Joining a Thread device to a Thread network assumes that you have a Thread network running and that there is an active commissioner with the joiner's EUI64 and PSKd. Make sure that these are set up before you try to call this function to join. It is also helpful to have a sniffer running on the Thread network's channel to ensure the commissioner or joiner router is responding properly.
Joining in Thread is done with an active scan on all the available channels in IEEE 802.15.4 channel page 0. The time to send a Joiner Request and the time the joiner waits on each channel are not immediately configurable. However, these active scans usually complete within a few seconds, and your joiner callback should be called with a join-failed condition in about 5 seconds if there are no joiner routers available.
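For reference, a joiner callback along these lines will be invoked with either OT_ERROR_NONE or a failure code; this is only a sketch (retry handling is up to the application), reusing the OtStack_instance handle from the question:

#include <openthread/instance.h>
#include <openthread/joiner.h>
#include <openthread/thread.h>

extern otInstance *OtStack_instance;   /* instance handle used in the question's code */

static void joiner_callback(otError aError, void *aContext)
{
    (void)aContext;

    if (aError == OT_ERROR_NONE) {
        /* Commissioning succeeded: attach to the Thread network. */
        otThreadSetEnabled(OtStack_instance, true);
    } else {
        /* Join failed (e.g. no joiner router found, wrong PSKd):
         * log the error and schedule a retry from the application. */
    }
}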
The examples in the OpenThread GitHub repository are written in a no-RTOS fashion: any application code is run in a tasklet, and the main loop only calls two functions, process tasklets and process drivers. In the TI SDK we use TI-RTOS, and you seem to have based your code on those examples. In general the OtStack_Task will handle processing of OpenThread and the platform driver interface, but deadlocks in a multi-threaded system can occur.
You can use ROV in CCS or IAR to check the state of the kernel and RTOS objects. In CCS, with an active debug session, select Tools >> Runtime Object View, then check whether the stack task is blocked on the API semaphore, or whether the application task is hogging the processor. This can be due to an unpaired lock/unlock on the API semaphore, or the application task may be stuck in a busy wait.
At first glance I don't see anything wrong with the code snippet you posted.

How to program factory reset switch in a small embedded device

I am building a small embedded device. It has a reset switch, and when this is pressed for more than 5 seconds, the whole device should reset, clear all its data, and go back to the factory-reset state.
I know what to clear when this event happens. What I want to know is how to raise this event: when the switch is pressed, how do I design the system so that it knows 5 seconds have elapsed and it has to reset now? I need a high-level design with any timers and interrupts. Can someone please help me?
It depends on the device, but here are a few rough ideas:
The device manual may state how many interrupts per second are produced while the switch is held down ("switch down"). If you have this value, you can easily count up to 5 seconds.
If not, you will need to use a timer as well: start the timer when you get the first "switch down" interrupt and count up to 5 seconds.
Note that you should also monitor for "switch up", that is, the release of the switch. Hopefully there is an interrupt for that too (possibly with a different status value).
You should break out of the loop above (and not perform the reset) when you see this interrupt.
Hope this helps.
Interrupt-driven means low level, close to the hardware. An interrupt-driven solution, for example on a bare-metal microcontroller, would look like this:
As when reading any other switch, sample the switch n times and filter out the signal bounce (and potential EMI).
Start a hardware timer. Usually the on-chip timers are far too fast to count a whole 5 seconds, even when set to run as slowly as possible, so you need to configure the timer with a prescaler value picked so that one whole timer cycle equals a known time unit (for example 10 milliseconds).
Upon timer overflow, trigger an interrupt. Inside the interrupt, check that the switch is still pressed, then increment a counter. When the counter reaches a given value, execute the reset code. For example, if you get a timer overflow every 10 ms, your counter should count up to 5000 ms / 10 ms = 500.
If the switch is released before the time has elapsed, reset the counter and stop the timer interrupt. A sketch of this counting scheme is shown below.
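A bare-metal sketch of the scheme just described, assuming a timer that overflows every 10 ms; switch_is_pressed(), timer_stop() and factory_reset() are placeholders for hardware-specific code:

#include <stdint.h>
#include <stdbool.h>

#define HOLD_TICKS (5000u / 10u)     /* 5 s expressed in 10 ms timer overflows */

extern bool switch_is_pressed(void);
extern void timer_stop(void);
extern void factory_reset(void);

static volatile uint32_t hold_ticks;

/* Called from the 10 ms timer overflow interrupt. */
void timer_overflow_isr(void)
{
    if (switch_is_pressed()) {
        if (++hold_ticks >= HOLD_TICKS) {
            timer_stop();
            factory_reset();         /* switch held for 5 s: perform the reset */
        }
    } else {
        hold_ticks = 0;              /* released early: start over */
        timer_stop();
    }
}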
How to reset the system is highly system-specific. You should put the system in a safe state, then overwrite your current settings by overwriting the NVM where the settings are stored with default factory settings stored elsewhere in NVM. Once that is done, you should force the processor to reset itself and reboot with the new settings in place.
This means that you must have a system with electrically erasable NVM. Depending on the size of the data, this NVM could either be data flash on-chip in a microcontroller or some external memory circuit.
Detecting a 5 s or 30 s timeout can be done using a GPIO interrupt.
If using an RTOS:
- The interrupt wakes a thread from sleep and then disables itself.
- All the thread does is count how long the switch has been pressed (you scan the switch at regular intervals).
- If the switch is pressed for the desired time, set a global variable/setting in EEPROM which triggers the factory reset function.
- Otherwise, re-enable the interrupt and put the thread back to sleep.
- Also, use a de-bounce circuit to avoid spurious triggers.
Also, define what you mean by factory reset.
There are two kinds in general; in both cases EEPROM can help:
Revert all configuration (low cost, easier)
In this case you partition the EEPROM into a working configuration and a factory configuration. You copy the factory configuration over to the working partition and perform a software reset (see the sketch below).
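A minimal sketch of that first variant; the addresses, sizes, and the eeprom and soft_reset helpers are placeholders, not a specific library:

#include <stdint.h>

#define CFG_SIZE          256u        /* assumed size of one configuration block */
#define WORKING_CFG_ADDR  0x0000u
#define FACTORY_CFG_ADDR  0x0100u

extern uint8_t eeprom_read_byte(uint16_t addr);
extern void    eeprom_write_byte(uint16_t addr, uint8_t value);
extern void    soft_reset(void);

void factory_reset_config(void)
{
    /* Copy the factory-default block over the working configuration. */
    for (uint16_t i = 0; i < CFG_SIZE; i++)
        eeprom_write_byte(WORKING_CFG_ADDR + i,
                          eeprom_read_byte(FACTORY_CFG_ADDR + i));

    soft_reset();   /* reboot so the application reloads the restored settings */
}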
Restore the complete firmware (costly, needs more testing)
This is trickier, but it can be done with the help of bootloaders that allow flashing from EEPROM or an SD card.
In this case the binary firmware blob will also be stored, together with the factory configuration, in the safe partition and will be used to reflash the controller's flash and configuration.
It all depends on the size, memory and cost; this can be designed in many more ways, I am just laying out the simplest examples.
I created some products with a combined switch too. I did so by using a capacitor to generate a reset pulse on the reset pin of the device (current and levels limited by some resistors and/or diodes). At start-up I monitor the state of the input pin connected to the switch and simply wait until this pin goes high, with a time-out of 5 seconds. In case of a time-out, I reset my configuration to the defaults.

How does GNU screen actually work

I have been trying to find some information on how GNU screen actually works at a high level, without having to read through the source code, but I have been unable to do so.
What does screen do that allows it to stick around even when a terminal session is closed? Does it run as a daemon or something, so that everyone who invokes screen just connects to it and it then finds which pseudo-tty session to attach to, or does it do something completely different?
There's a lot of potential questions in this question, so I'm going to concentrate on just one:
What does screen do that it is able to stick around even when a terminal session is closed?
Screen catches HUP signals, so it doesn't automatically exit when its controlling terminal goes away. Instead, when it gets a HUP, it goes into background mode (since it no longer has an actual terminal attached) and waits. When you start screen with various -d/-D/-r/-R/-RR options, it looks for an already running screen process (possibly detached after having received a HUP, and/or possibly detaching it directly by sending it a HUP) and takes over the child terminal sessions of that screen process (a cooperative process whereby the old screen process sends all the master PTYs to the new process for it to manage, before exiting).
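As a tiny illustration of that first point (not screen's actual code), a process that installs a SIGHUP handler instead of taking the default action will survive its controlling terminal going away:

#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t detached = 0;

static void on_hup(int sig)
{
    (void)sig;
    detached = 1;                    /* controlling terminal went away: run detached */
}

int main(void)
{
    struct sigaction sa = { 0 };
    sa.sa_handler = on_hup;
    sigaction(SIGHUP, &sa, NULL);

    for (;;) {
        if (detached) {
            /* no terminal attached: just keep internal state up to date */
        }
        /* A real multiplexer would keep reading its PTY masters and listening
         * on a socket/FIFO for a new frontend to attach. */
        sleep(1);
    }
}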
I haven't studied Screen itself in depth, but I wrote a program that was inspired by it on the user end and I can describe how mine works:
The project is my terminal emulator which has an emulation core, a gui front end, a terminal front end, and the two screen-like components: attach.d and detachable.d.
https://github.com/adamdruppe/terminal-emulator
attach.d is the front end. It connects to a particular terminal through a unix domain socket and forwards the output from the active screen out to the actual terminal. It also sends messages to the backend process telling it to redraw from scratch and a few other things (and more to come, my thing isn't perfect yet).
https://github.com/adamdruppe/terminal-emulator/blob/master/attach.d
My terminal.d library provides an event loop that translates terminal input and signals. One of them is the HUP signal, which is sent when the controlling terminal is closed. When it sees that, the front-end attach process closes, leaving the backend process in place:
https://github.com/adamdruppe/terminal-emulator/blob/master/attach.d#L709
When attach starts up and cannot connect to an existing process, it forks and creates a detachable backend:
https://github.com/adamdruppe/terminal-emulator/blob/master/attach.d#L454
Because the front end and the backend are separate processes, closing one leaves the other intact. Screen does this too: run ps aux | grep -i screen. The all-caps SCREEN processes are backends; the lowercase screen processes are frontends.
me 3479 0.0 0.0 26564 1416 pts/14 S+ 19:01 0:00 screen
root 3480 0.0 0.0 26716 1528 ? Ss 19:01 0:00 SCREEN
There, I just started screen and you can see the two separate processes. screen forked and made SCREEN, which actually holds the state. Detaching kills process 3479, but leaves process 3480 in place.
The backend of mine is a full blown terminal emulator that maintains all internal state:
https://github.com/adamdruppe/terminal-emulator/blob/master/detachable.d
Here: https://github.com/adamdruppe/terminal-emulator/blob/master/detachable.d#L140 it reads from the socket attach talks to, reads the message, and forwards it to the application as terminal input.
The redraw method, here: https://github.com/adamdruppe/terminal-emulator/blob/master/detachable.d#L318 loops through its internal screen buffer - stored as an array of attributes and characters - and writes them back out to the attached terminal.
You can see that both source files aren't terribly long - most of the work is done in the terminal emulator core: https://github.com/adamdruppe/terminal-emulator/blob/master/terminalemulator.d and, to a lesser extent, my terminal client library (think: a custom ncurses lib): https://github.com/adamdruppe/arsd/blob/master/terminal.d
Attach.d, as you can see, also manages multiple connections, using select to loop through each open socket and only drawing the active screen: https://github.com/adamdruppe/terminal-emulator/blob/master/attach.d#L300
My program runs a separate backend process for each individual terminal screen. GNU screen uses one process for a whole session, which may have multiple screens. My guess is screen does more work in the backend than I do, but it is basically the same principle: watch for input on each pty and update an internal screen buffer. When you run screen -r to attach to it, it connects through a named pipe - a FIFO in the file system (on my gnu screen config, they are stored in /tmp/screens) instead of the unix socket I used, but same principle - it gets the state dumped to the screen, then carries on forwarding info back and forth.
Anyway, I've been told my source is easier to read than screen's and xterm's, and while they are not identical, they are similar, so maybe you can get more ideas by browsing through them too.

Is there a way to tell the terminal to wait before sending more data?

I have embedded firmware which has a terminal over a serial connection. I am issuing a command from the terminal that then waits for data (a text file) which it should save to a flash chip. However, writing to flash is much slower than the terminal transmission.
The text file may be pretty big (many kB), so in a small embedded environment I cannot simply dump it into RAM. I was wondering whether it is possible to communicate with a standard terminal emulator (which has drag-and-drop support for files) to pause the transmission every time the write buffer is full and tell it to continue again after the write is done. I haven't found anything that would help me through this.
Of course I could write a PC front end which understands this trick, but at a basic level it would be nice if everything could be used through a normal terminal if needed.
For a basic serial connection you could see if the hardware supports flow control. This would be the CTS and RTS lines (clear to send, request to send).
http://en.wikipedia.org/wiki/RS-232_RTS/CTS#RTS.2FCTS_handshaking
However many simple embedded systems do not implement this type of flow control.
If the hardware does not support flow control, then you will have to look at using some form of software flow control. You may be able to implement XON/XOFF flow control ( http://en.wikipedia.org/wiki/XON/XOFF ) or implement a simple file transfer protocol such as XMODEM, ZMODEM, or even TFTP. This depends on what your terminal can support.
I always use XMODEM when programming data into flash via a serial link from a PC. XMODEM only sends one data packet at a time and waits for you to acknowledge the packet before sending the next one.
This means we control the flow in software on the receiving side:
Get packet ->
Write packet ->
Ack packet ->
Repeat until done...
XMODEM can be implemented on the smallest of devices (less than 1 kB of RAM) and the code is very simple. All serial terminals support XMODEM (up to Windows XP, Windows even shipped with an XMODEM-capable terminal). XMODEM requires no special hardware. A sketch of the receiving side is shown below.
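As an illustration only (error handling and duplicate-block handling are simplified), a classic 128-byte checksum-mode XMODEM receiver that writes each packet to flash before acknowledging it might look like this; uart_get(), uart_put() and flash_write() are placeholder driver functions:

#include <stdint.h>

#define SOH 0x01
#define EOT 0x04
#define ACK 0x06
#define NAK 0x15

extern int  uart_get(uint8_t *c, uint32_t timeout_ms);   /* 0 on success */
extern void uart_put(uint8_t c);
extern void flash_write(uint32_t addr, const uint8_t *data, uint32_t len);

void xmodem_receive_to_flash(uint32_t dest)
{
    uint8_t c, blk, blk_inv, sum, data[128];
    uint8_t expected = 1;

    uart_put(NAK);                               /* prompt the sender to start */
    for (;;) {
        if (uart_get(&c, 10000) != 0) { uart_put(NAK); continue; }
        if (c == EOT) { uart_put(ACK); return; } /* transfer complete */
        if (c != SOH) continue;                  /* resynchronise on a packet header */

        uart_get(&blk, 1000);
        uart_get(&blk_inv, 1000);
        sum = 0;
        for (int i = 0; i < 128; i++) { uart_get(&data[i], 1000); sum += data[i]; }
        uart_get(&c, 1000);                      /* checksum byte */

        if (blk != (uint8_t)~blk_inv || blk != expected || c != sum) {
            uart_put(NAK);                       /* corrupt or unexpected: ask for a resend */
            continue;
        }
        flash_write(dest, data, sizeof data);    /* the slow part: the sender waits for our ACK */
        dest += sizeof data;
        expected++;
        uart_put(ACK);                           /* flow control: release the next packet */
    }
}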
Here is the spec.
Here is an example implementation.