Sending and receiving repeated commands to a serial instrument with LabVIEW - while-loop

I'm writing a program in LabVIEW 2014 in order to control a linear actuator. The program is very simple, it sets a speed and then runs the subVIs to move the actuator back and forth.
There is a case structure inside a while loop so it would stop when a desired number of iterations is reached. The problem is that the while loop iterates faster than the program inside the case structure executes, and therefore the program stops before all the cycles of movement have been completed.
send pulses subVI:
activate subVI:
I tried different time delays in different parts of the code, but none of them worked. I think the issue is that the while loop iterations run faster than the code inside the case structure and I somehow need to slow it down. Or maybe I'm wrong and it is a completely different thing.
Here is the link to the actuator documentation:
https://jp.optosigma.com/html/en_jp/software/motorize/manual_en/SRC-101_InstructionManual_Ver1_1_EN.pdf

Welcome to the fun and infuriating world of interfacing to serial instruments.
Each iteration of a LabVIEW loop can only complete once all the code inside the loop structure has completed, so it's not possible that 'the while loop iterations run faster than the code of the case structure'. There's nothing explicitly wrong with any of your code, but evidently it isn't doing what you expected it to. The way to approach developing an instrument driver is always to start with the simplest case (e.g. one single movement of your actuator), get that working, and build up from there.
The documentation for an instrument's serial interface is rarely perfect and yours is no exception, but it does tell us that
every command is acknowledged by a response, and
you should not send a new command until you have received the response from the previous command.
Your code to send commands and receive the response looks OK. A VISA Read operation will read bytes from the computer's serial buffer until either the number of bytes to read is reached, or a byte matching the termination char is read, or the timeout expires. The manual implies that the instrument's responses are followed by the CR and LF characters, and the default configuration of the serial port in LabVIEW is to terminate each read when an LF is received, so you shouldn't need a time delay between each write and the following read; the instrument's response will be received into the buffer by the OS, then your code will read it out and return it as soon as it hits the LF.
What isn't completely clear from the manual is how the instrument responds to the activation command, G: - does it
1. Return the acknowledgement immediately, then execute the movement: you can check whether the movement is finished using the status command !:, or
2. Execute the movement, then return the acknowledgement to show that it's finished.
I think it's 1., but that's the first thing I would check. Unless all your movements complete in less than 500 ms, I think this is what is wrong here: your program receives the acknowledgement and then moves straight on to send the next command, but the actuator is still moving and not ready. In this case you have two options:
add a time delay after the read, calculated to be long enough for the actuator move to finish - this would be easiest, but potentially unreliable
in a While loop after you have got the acknowledgement of the G: command, send the !: command and check the response until you get R for 'ready'. (Remember that the acknowledgement string you receive will also have the CRLF on the end.) Use a time delay in this loop so you don't bombard the instrument with status checks - maybe something like 200 to 1000 ms would be suitable.
If it's case 2. then you would also have two options:
configure your serial port with a read timeout long enough to cover the longest move operation, then the read operation will just block until the acknowledgement is received - again this is the quick and dirty way, or
configure a short timeout, say 1000 ms, and place the read in a While loop that repeats until the acknowledgement is received or too many timeouts have occurred. Note that a timeout is considered an error, so you will have to turn off automatic error handling for the VI and instead test the error wire out of the VISA Read, discard the timeout error and handle any other error yourself.
Just as a general tip, whenever you pass an error wire into a loop and out again, I would use a shift register. That way if one iteration generates an error, the next iteration will see that error and fail immediately, so (for example) if communication fails you don't have to wait for the read timeouts to expire multiple times before your code can exit.
You'll probably have to do some experimenting and referring to LabVIEW help to get this fully working but hopefully this is enough to get you going.

Related

Is there a protocol or well-defined procedure for instruments to send their measurement results to control PCs over GPIB?

With a control PC, I am addressing an R&S ESPI Receiver device to perform a frequency scan and return the measurement results via the BAT-EMC control software, with an NI GPIB-USB controller in between. My target is to track the binary measurement data (Definite Length Block Data according to IEEE 488.2) sent to the control PC, in order to understand how the device decides on the byte size of each binary block sent.
The trace shows that binary blocks are sent with no consistent pattern or rule!
E.g., running the same scan with the same frequency range and step twice may distribute the measurement bytes differently across the binary blocks (and possibly use a different total number of blocks), although the amount of data delivered is the same.
Can anyone help me figure out how the device and control software are communicating the measurement data?
PS: The NI trace at the GPIB controller level does not show the control software specifying a byte size when querying for the next block, nor does it show the instrument sending this piece of information when it issues a service request so that the software will query it for more available data.
Make sure that you are giving the instrument enough time to respond. Possibly you are sending commands from the PC which would assert the ATN line and interrupt the response. You should be able to configure the instrument to send one result. Configure the instrument as a listener and talker and set it to send only one response per trigger. Then send the Group Execute Trigger (GET) and read the results off the bus. When it's done, measure how long it took for that packet to be sent. If you are sending triggers before the full response has arrived, you will be terminating the output stream. I suspect this because the streams are randomly different.
I’m just starting to learn GPIB so please write back what happened.

FreeRTOS stuck in osDelay

I'm working on a project using an STM32F446 with a boilerplate created with STM32CubeMX (for peripheral initialization and middleware such as FreeRTOS with the CMSIS-V1 interface).
I have two threads which communicate using mailboxes, but I encountered a problem: the body of one of the threads is
void StartDispatcherTask(void const * argument)
{
    mailCommand *commandData = NULL;
    mailCommandResponse *commandResponse = NULL;
    osEvent event;
    for(;;)
    {
        event = osMailGet(commandMailHandle, osWaitForever);
        commandData = (mailCommand *)event.value.p;
        // Here is the problem
        osDelay(5000);
    }
}
It gets to the delay but never gets out. Is there a problem with using the mailbox and the delay in the same thread? I also tried moving the delay before the for(;;) and it works.
EDIT: I guess I can add more detail to the problem. The first thread sends a mail of a certain type and then waits for a mail of another type; the thread in which I get the problem receives the mail of the first type, executes some code based on what it receives, and then sends the result as a mail of the second type. Sometimes it has to wait using osDelay, and there it stops working, without going into any fault handler.
I would rather use the standard FreeRTOS API; the ARM CMSIS wrapper is rubbish.
BTW, I rather suspect osMailGet(commandMailHandle, osWaitForever);
The delay is not needed at all in this case. If you wait for the data in the BLOCKED state, the task does not consume any processing power.
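With the native API the consumer could look roughly like this (a sketch only: commandQueueHandle is a hypothetical QueueHandle_t created elsewhere with xQueueCreate(), standing in for the CMSIS mail queue, and mailCommand is reduced to a dummy struct here):
#include "FreeRTOS.h"
#include "queue.h"

/* placeholders standing in for the mail queue and payload struct from the question */
extern QueueHandle_t commandQueueHandle;
typedef struct { int dummy; } mailCommand;

void StartDispatcherTask(void *argument)
{
    (void)argument;
    mailCommand *commandData;

    for (;;)
    {
        /* Blocks here until a command arrives; the task uses no CPU while
           waiting, so no extra osDelay() is needed. */
        if (xQueueReceive(commandQueueHandle, &commandData, portMAX_DELAY) == pdPASS)
        {
            /* process *commandData, then send the response back on a second queue */
        }
    }
}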
Other guesses are:
You are landing in the HardFault handler
You are stuck in the context switch (wrong interrupt priorities)
Use your debugger and see what is going on.
osStatus osDelay (uint32_t millisec)
The millisec value specifies the number of timer ticks.
The exact time delay depends on the actual time elapsed since the last timer tick.
For a value of 1, the system waits until the next timer tick occurs.
=> You have to check whether the timer tick is running or not.
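One quick way to check is to read the kernel tick counter twice around a crude busy wait (a sketch using the native xTaskGetTickCount(); the loop count is arbitrary):
#include "FreeRTOS.h"
#include "task.h"

int tick_is_running(void)
{
    TickType_t before = xTaskGetTickCount();

    for (volatile uint32_t i = 0; i < 1000000; i++)
    {
        /* burn some CPU cycles, long enough for several ticks to elapse */
    }

    /* if the two readings are equal, the RTOS tick (SysTick or the CubeMX
       timebase timer) is not running and osDelay() can never return */
    return xTaskGetTickCount() != before;
}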
As P__J__ pointed out in an earlier answer, you shouldn't use the osDelay() call in the loop [1]
because your task loop will wait at the osMailGet() call for the next request/mail until it arrives anyhow.
But this hint called my attention to another possible reason for your observation, so I'm opening this new answer [2]:
As the loop execution is interrupted by a delay of 5000 ticks - could it be that the producer of the mails is filling the mailbox faster than the task is consuming mails? Then, you should inspect if this situation is detected/handled in the producer context.
If the producer ignores "queue full" return values and discards the mails before they have been transmitted, the system will only process a few mails every 5000 ticks (or it may lose all but a few mails after the first fill of the mailbox, if the producer in your example only fills the mailbox queue once).
This could look like the consumer task being stuck, even if the main problem is about the producer context (task/ISR).
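For example, the producer could check the CMSIS-RTOS v1 return values instead of silently dropping mails (a sketch; commandMailHandle and mailCommand are taken from the question, the rest is illustrative):
#include "cmsis_os.h"

extern osMailQId commandMailHandle;   /* the mail queue from the question */

int send_command(void)
{
    mailCommand *cmd = (mailCommand *)osMailAlloc(commandMailHandle, 0);
    if (cmd == NULL)
    {
        /* mail pool is full: the consumer is not keeping up */
        return -1;   /* count the drop, retry later, raise an error... */
    }

    /* fill in *cmd here */

    if (osMailPut(commandMailHandle, cmd) != osOK)
    {
        osMailFree(commandMailHandle, cmd);   /* don't leak the block */
        return -1;
    }
    return 0;
}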
[1]
The osDelay() call can only help you if you want to avoid processing another mail within 5000 ticks when request mails are produced faster than the task processes them.
But then, you'd have a different problem, and you should open a different question...
[2]
Edit: I just noticed that Clifford already mentioned this option in one of his comments to the question. I think this option must be covered by an answer.

8051 UART, Receiving bytes serially

I have to send a file byte-by-byte from a computer (VB.NET) to a serially connected AT89S52.
Every sent byte triggers some job in the microcontroller which takes some time.
Here is the relevant part of my C code for receiving bytes:
SCON = 0x50;
TMOD = 0x20;  // timer 1, mode 2, 8-bit reload
TH1 = 0xFD;   // reload value for 9600 baud
TR1 = 1;
TI = 1;

again:
while (RI != 0)
{
    P1 = SBUF;  // show data on led's
    RI = 0;
    receivedBytes++;
}
if (key1 == 0)
{
    goto exitreceive;  // break receiving
}
show_lcd_received_bytes(receivedBytes);
// here is one more loop
// with different duration for every byte
goto again;
And here is the VB.NET code for sending bytes:
For a As Integer = 1 To 10
    For t As Integer = 0 To 255
        SerialPort1.Write(Chr(t))
    Next t
Next a
The problem is that the MCU has some job to do after every received byte, and the VB.NET side doesn't know about it and sends bytes too fast, so the MCU ends up processing only a fraction of all the bytes (about 10%).
I can put a Sleep(20) in the VB loop and then things work, but that wastes a lot of time because every byte needs a different amount of time to process, and the communication becomes unacceptably slow.
Now, my question is whether the 8051 can set some busy status on the UART which VB.NET can read before sending, to decide whether to send the next byte or not.
Or how else should I set up the communication described above?
I also tried receiving bytes with a serial interrupt on the MCU side, with the same results.
The hardware is surely OK because I can send data to the computer fine (as expected).
Your problem is architectural. Don't try to do processing on the received data in the interrupt that handles byte Rx. Have your byte Rx interrupt only copy the received byte to a separate Rx data buffer, and have a background task do the actual processing of the incoming data without blocking the Rx interrupt handler. If you can't keep up due to an overall throughput issue, then RTS/CTS flow control is the appropriate mechanism. For example, when your Rx buffer gets 90% full, deassert the flow control signal to pause the transmit side.
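A minimal sketch of that split on the 8051 (Keil C51 syntax assumed; the buffer size and names are illustrative, and the flow-control output is left out):
#include <reg52.h>

#define RX_BUF_SIZE 32

volatile unsigned char rxBuf[RX_BUF_SIZE];
volatile unsigned char rxHead = 0, rxTail = 0;

void serial_isr(void) interrupt 4   /* serial interrupt vector on the 8051 */
{
    if (RI)
    {
        rxBuf[rxHead] = SBUF;                 /* store the byte, no processing here */
        rxHead = (rxHead + 1) % RX_BUF_SIZE;
        RI = 0;
    }
    if (TI)
        TI = 0;                               /* transmit handled elsewhere */
}

/* called from the main loop: returns 1 and writes to *b if a byte is waiting */
unsigned char rx_get(unsigned char *b)
{
    if (rxHead == rxTail)
        return 0;
    *b = rxBuf[rxTail];
    rxTail = (rxTail + 1) % RX_BUF_SIZE;
    return 1;
}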
As @TJD mentions, hardware flow control can be used to stop the PC from sending characters while the microcontroller is processing received bytes. In the past I have implemented hardware flow control by using an available port line as an output. The output needs to be connected to a TTL-to-RS-232 driver (if you are currently using RS-232 you may have an extra driver available). If you are using a USB virtual serial port or RS-422/485 you will need to implement software flow control. Typically a control-S is sent to tell the PC to stop sending and a control-Q to continue. In order to take full advantage of flow control you will most likely also need to implement a fully interrupt-driven FIFO to receive/send characters.
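For the software flow control variant, the microcontroller side could be as simple as this (a sketch; do_slow_job() stands for the per-byte work from the question and is hypothetical):
#include <reg52.h>

#define XON  0x11   /* control-Q: host may resume sending */
#define XOFF 0x13   /* control-S: host must pause */

extern void do_slow_job(unsigned char b);   /* hypothetical per-byte work */

static void uart_putc(unsigned char c)      /* simple polled transmit */
{
    SBUF = c;
    while (!TI) ;
    TI = 0;
}

void process_received_byte(unsigned char b)
{
    uart_putc(XOFF);    /* tell the PC to hold off */
    do_slow_job(b);
    uart_putc(XON);     /* ready for the next byte */
}
On the PC side, System.IO.Ports.SerialPort supports this directly by setting its Handshake property to Handshake.XOnXOff.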
If you would like additional information concerning hardware flow control, check out http://electronics.stackexchange.com.
Blast from the past; I remember using breakout boxes and serial line tracers to debug this kind of stuff.
With serial communication, if you have all the pins/wires utilized then there is flow control via the RTS (Request To Send) and DTR (Data Terminal Ready) lines that are used to signal when it is OK to send more data. Do you have control over that in the device you are coding via C? In VB.NET, there are events used to receive these signals, or they can be queried using properties on the SerialPort object.
A lot of these answers are suggesting hardware flow control, but you also have the option of enhancing your transmission to be more robust by using software flow control. Currently, your communication is strong, but if you start running a higher baud rate or a longer distance or even just have a noisy connection, characters could be received that are incorrect, or characters could be dropped.
You could add a simple two-byte ACK sequence upon completion of whatever action is set to happen. It could look something like this:
Host sends command byte: <0x00>
Device echoes command byte: <0x00>
Device executes whatever action is needed
Device sends ACK/NAK byte (based on result):
This would allow you to see on the host side if communication is breaking down. The echoed character may not match what was sent, which would alert you to an issue. Additionally, if a character is not received by the host within some timeout, the host can try retransmitting. Finally, the ACK/NAK gives you the option of returning a status, but most importantly it will let the host know that you've completed the operation and that it can send another command.
This can be extended to include a checksum to give the device a way to verify that the command received was valid (A simple logical inverse sent alongside the command byte would be sufficient).
The advantage of this solution is that it does not require extra lines or UART support on either end for hardware flow control.
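A rough sketch of the microcontroller side of that echo/ACK scheme (8051-style C; execute_command() is a hypothetical worker, and polled I/O is used for brevity):
#include <reg52.h>

#define ACK 0x06
#define NAK 0x15

extern unsigned char execute_command(unsigned char cmd);   /* hypothetical, nonzero on success */

static unsigned char uart_getc(void)   /* blocking, polled receive */
{
    while (!RI) ;
    RI = 0;
    return SBUF;
}

static void uart_putc(unsigned char c) /* blocking, polled transmit */
{
    SBUF = c;
    while (!TI) ;
    TI = 0;
}

void handle_one_command(void)
{
    unsigned char cmd = uart_getc();
    uart_putc(cmd);                    /* echo so the host can verify the command */
    if (execute_command(cmd))
        uart_putc(ACK);                /* done, host may send the next command */
    else
        uart_putc(NAK);
}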

Retrieving data after using System.Net.Sockets SendFile

I'm coding a simple TCP client/server in VB.NET to transfer files of all sizes. I decided to use the SendFile method from System.Net.Sockets to transfer the bytes through the socket.
On the receiving side, my code to retrieve the bytes works fairly well, but occasionally the transfer randomly stops.
I figured out that putting a small sleep delay between retrievals of the next block of data makes the transfers 100% stable.
My code to retrieve the data (until there is no data available) is simplified as this:
While newSocket.Available > 0
    Threading.Thread.Sleep(100)
    newSocket.ReceiveFrom(data, Remote)
End While
I really hate using that sleep delay and figure there must be a proper method/function to retrieve data from SendFile?
Socket.Available returns the total number of bytes that have been received so far but not yet read. Therefore, if you read the data faster than it's coming in (which is quite possible on a slow network), there will be no more data to read even though the client is still in the middle of sending the data.
If the client makes a new connection to the server for each file it sends, you could simply change it to something like this:
While newSocket.Connected
    If newSocket.Available > 0 Then
        newSocket.ReceiveFrom(data, Remote)
    End If
End While
However, I would suggest using the asynchronous calls, instead, such as BeginReceive. Then, your delegate will be called as soon as there is data to be processed, rather than waiting in a constant loop. See this link for an example:
http://msdn.microsoft.com/en-us/library/dxkwh6zw.aspx

Cancel thread with read() operation on serial port

In my Cocoa project, I communicate with a device connected to a serial port. I am waiting for the serial device to send a particular message of some bytes. For the read operation (and the reaction once the desired message has been received), I created a new thread. On user request, I want to be able to cancel the thread.
As Apple suggests in the docs, I added a flag to the thread dictionary, periodically check if the flag has been set and if so, call [NSThread exit]. This works fine.
Now, the thread may be stuck waiting for the serial device to finally send the 12 byte message. The read call looks like this:
numBytes = read(fileDescriptor, buffer, 12);
If the thread starts reading from the device but no data comes in, I can set the flag to tell the thread to finish, but the thread is not going to check the flag until it has finally received at least 12 bytes of data and continues processing.
Is there a way to kill a thread that currently performs a read operation on a serial device?
Edit for clarification:
I do not insist in creating a separate thread for the I/O operations with the serial device. If there is a way to encapsulate the operations such that I am able to "kill" them if the user presses a cancel button, I am perfectly happy.
I am developing a Cocoa application for desktop Mac OS X, so no restrictions regarding mobile devices and their capabilities apply.
A workaround would be to make the read function return immediately if there are no bytes to read. How can I do this?
Use select or poll with a timeout to detect when the descriptor is ready for reading.
Set the timeout to (say) half a second and call it in a loop while checking to see if your thread should exit.
Asynchronous thread cancellation is almost always a bad idea. Try to stick with event-driven interfaces (and, if necessary, timeouts).
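A sketch of that loop in C (fileDescriptor and the exit flag are assumed to come from the surrounding thread code; the 500 ms timeout is arbitrary):
#include <sys/select.h>
#include <unistd.h>

ssize_t read_with_cancel(int fd, void *buf, size_t len, volatile int *shouldExit)
{
    while (!*shouldExit)
    {
        fd_set readfds;
        struct timeval tv = { 0, 500000 };   /* 0 s, 500 000 us */

        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);

        int ready = select(fd + 1, &readfds, NULL, NULL, &tv);
        if (ready > 0)
            return read(fd, buf, len);       /* data available, read it */
        if (ready < 0)
            return -1;                       /* error */
        /* ready == 0: timeout, loop around and re-check the exit flag */
    }
    return 0;                                /* cancelled by the user */
}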
This is exactly what the pthread_cancel interface was designed for. You'll want to wrap the block containing the read() in pthread_cleanup_push and pthread_cleanup_pop so that you can safely clean up if the thread is cancelled, and also disable cancellation (with pthread_setcancelstate) in other code that runs in this thread that you don't want to be cancellable. This can be a pain if proper cleanup would involve multiple call frames; it essentially forces you to use pthread_cleanup_push at every call level and structure your thread code like C++ or Java with try/catch style exception handling.
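A rough sketch of that approach (names are illustrative; error handling is trimmed):
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

static void reader_cleanup(void *arg)
{
    free(arg);   /* release resources owned by this thread */
}

void *reader_thread(void *arg)
{
    int fd = *(int *)arg;
    unsigned char *buffer = malloc(12);

    pthread_cleanup_push(reader_cleanup, buffer);

    ssize_t n = read(fd, buffer, 12);   /* read() is a cancellation point */
    if (n == 12) {
        /* process the 12-byte message */
    }

    pthread_cleanup_pop(1);             /* run the handler on normal exit too */
    return NULL;
}

/* From the UI thread:  pthread_cancel(readerThread);  */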
An alternative approach would be to install a signal handler for an otherwise-unused signal (like SIGUSR1 or one of the realtime signals) without the SA_RESTART flag, so that it interrupts syscalls with EINTR. The signal handler itself can be a complete no-op; the only purpose of it is to interrupt things. Then you can use pthread_kill to interrupt the read (or any other syscall) in a particular thread. This has the advantage that you don't have to switch your code to using C++/Java-type idioms. You can handle the EINTR error by checking a flag (indicating whether the thread was requested to abort) and resume the read if the flag is not set, or return an error code that causes the caller to clean up and eventually pthread_exit.
If you do use interrupting signal handlers, make sure all your syscalls that can return EINTR are wrapped in loops that retry (or check the abort flag and optionally retry) on EINTR. Otherwise things can break badly.
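A sketch of the signal-based variant (SIGUSR1 and the flag name are arbitrary choices; error handling is trimmed):
#include <errno.h>
#include <pthread.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t abortRequested = 0;

static void wakeup_handler(int sig) { (void)sig; }   /* intentionally does nothing */

void install_wakeup_handler(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = wakeup_handler;   /* note: no SA_RESTART, so read() gets EINTR */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);
}

/* In the reader thread: retry on EINTR unless an abort was requested. */
ssize_t interruptible_read(int fd, void *buf, size_t len)
{
    for (;;)
    {
        ssize_t n = read(fd, buf, len);
        if (n >= 0 || errno != EINTR)
            return n;
        if (abortRequested)           /* flag set before pthread_kill() */
        {
            errno = ECANCELED;
            return -1;
        }
    }
}

/* From the controlling thread:
       abortRequested = 1;
       pthread_kill(readerThread, SIGUSR1);
*/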