I am writing a client-server project using the TCP protocol, so in my client code I have a while loop with a read() in it that waits for a write() from the server.
I think this part is irrelevant to the question I'm actually asking, but it is what causes my problem: while the client is blocked in read() waiting for the server, the user can type sentences on the terminal that stack up in the input buffer. So when my client finally receives data and moves past its read(), I can't clear the input buffer the usual way, reading and discarding characters until the newline:
while (getchar() != '\n');
because the user has typed many sentences and pressed Enter multiple times while waiting.
Is there any way that I can fully clear the input stream no matter what is in it?
All I've found so far is fflush(stdin); although its behavior on an input stream is undefined by the standard, it mostly works for me, but sometimes it doesn't, so I can't rely on it.
I also tried checking the input stream's end with:
while (getchar() != '\0');
But that gets stuck in an infinite loop. I also tried using, right after the read(), a scanf() that ignores newlines:
scanf("%[^\r\t\n]s", buf);
so that everything would get moved into this buffer. That didn't work either; only the first word of the last sentence typed gets stored there.
Lastly, I tried messing around with fgets() but no luck there either.
P.S. Is there any way I can completely disable the standard input stream before the read() call and enable it again right after?
Edit: For archive purposes: I ended up forking my program at this point, so the child keeps scanning input and instantly clearing it with the standard first method I mentioned. Still, I would like to know if there is a way to completely clear the input buffer no matter what's in it.
This seems promising:
Basically what they're doing is using fgets() together with fflush().
https://www.daniweb.com/software-development/c/code/217396/how-to-properly-flush-the-input-stream-stdin
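For what it's worth, one POSIX-only approach that avoids the undefined fflush(stdin) is to drain the file descriptor directly: ask select() with a zero timeout whether stdin has pending bytes and keep reading until it reports none. This is a sketch under that assumption, not a portable ISO C solution:

```c
#include <unistd.h>
#include <sys/select.h>

/* Discard every byte currently pending on stdin, no matter how
 * many lines the user typed. Reads the file descriptor directly,
 * so bytes already pulled into stdio's internal buffer by an
 * earlier getchar()/scanf() are not covered; drain before mixing
 * the two styles of input. */
static void drain_stdin(void)
{
    char buf[256];
    for (;;) {
        fd_set fds;
        struct timeval tv = {0, 0};        /* zero timeout: just poll */
        FD_ZERO(&fds);
        FD_SET(STDIN_FILENO, &fds);
        if (select(STDIN_FILENO + 1, &fds, NULL, NULL, &tv) <= 0)
            return;                        /* nothing left to read */
        if (read(STDIN_FILENO, buf, sizeof buf) <= 0)
            return;                        /* EOF or error */
    }
}
```

On an actual terminal you could instead call tcflush(STDIN_FILENO, TCIFLUSH) from <termios.h>, which discards the terminal driver's input queue in one call (it fails with ENOTTY when stdin isn't a terminal, e.g. when redirected from a pipe).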
Related
I'm writing a program in LabVIEW 2014 in order to control a linear actuator. The program is very simple, it sets a speed and then runs the subVIs to move the actuator back and forth.
There is a case structure inside a while loop so it stops when a desired number of iterations is reached. The problem is that the iteration count of the while loop occurs faster than the execution of the program inside the case structure, and therefore the program stops before all the cycles of movement have been completed.
send pulses subVI:
activate subVI:
I tried different time delays in different parts of the code, but none of that worked. I think the issue is that the while loop iterations run faster than the code of the case structure and I somehow need to slow it down. Or maybe I'm wrong and it's something completely different.
Here is the link of the actuator documentation:
https://jp.optosigma.com/html/en_jp/software/motorize/manual_en/SRC-101_InstructionManual_Ver1_1_EN.pdf
Welcome to the fun and infuriating world of interfacing to serial instruments.
Each iteration of a LabVIEW loop can only complete once all the code inside the loop structure has completed, so it's not possible that 'the while loop iterations run faster than the code of the case structure'. There's nothing explicitly wrong with any of your code, but evidently it isn't doing what you expected it to. The way to approach developing an instrument driver is always to start with the simplest case (e.g. one single movement of your actuator), get that working, and build up from there.
The documentation for an instrument's serial interface is rarely perfect and yours is no exception, but it does tell us that
every command is acknowledged by a response, and
you should not send a new command until you have received the response from the previous command.
Your code to send commands and receive the response looks OK. A VISA Read operation will read bytes from the computer's serial buffer until either the number of bytes to read is reached, or a byte matching the termination char is read, or the timeout expires. The manual implies that the instrument's responses are followed by the CR and LF characters, and the default configuration of the serial port in LabVIEW is to terminate each read when an LF is received, so you shouldn't need a time delay between each write and the following read; the instrument's response will be received into the buffer by the OS, then your code will read it out and return it as soon as it hits the LF.
What isn't completely clear from the manual is how the instrument responds to the activation command, G: - does it
1. return the acknowledgement immediately, then execute the movement (you can check whether the movement is finished using the status command !:), or
2. execute the movement, then return the acknowledgement to show that it's finished.
I think it's 1., but that's the first thing I would check. Unless all your movements complete in less than 500 ms, I think this is what's wrong here: your program receives the acknowledgement and then moves straight on to send the next command, but the actuator is still moving and not ready. In this case you have two options:
add a time delay after the read, calculated to be long enough for the actuator move to finish - this would be easiest, but potentially unreliable
in a While loop after you have got the acknowledgement of the G: command, send the !: command and check the response until you get R for 'ready'. (Remember that the acknowledgement string you receive will also have the CRLF on the end.) Use a time delay in this loop so you don't bombard the instrument with status checks - maybe something like 200 to 1000 ms would be suitable.
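Since LabVIEW code is graphical, here is that status-polling loop in a text sketch (Python-style pseudocode; the send/receive callables stand in for VISA Write and VISA Read, the G:/!: commands and R response are as described above, and the interval and retry cap are assumptions):

```python
import time

def wait_until_ready(send, receive, poll_interval=0.5, max_polls=60):
    """Start a move with G:, then poll the !: status command until
    the instrument reports R for 'ready'. Returns True once ready,
    False if it never became ready within max_polls attempts."""
    send("G:")                       # start the move
    receive()                        # acknowledgement (with CRLF)
    for _ in range(max_polls):
        send("!:")                   # status query
        status = receive().strip()   # strip the trailing CRLF
        if status == "R":            # instrument reports ready
            return True
        time.sleep(poll_interval)    # don't bombard the instrument
    return False
```

The same shape translates directly to a While loop around VISA Write/VISA Read with a Wait (ms) inside it.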
If it's case 2. then you would also have two options:
configure your serial port with a read timeout long enough to cover the longest move operation, then the read operation will just block until the acknowledgement is received - again this is the quick and dirty way, or
configure a short timeout, say 1000 ms, and place the read in a While loop that repeats until the acknowledgement is received or too many timeouts have occurred. Note that a timeout is considered an error, so you will have to turn off automatic error handling for the VI and instead test the error wire out of the VISA Read, discard the timeout error and handle any other error yourself.
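In outline, that retry loop might look like this (Python-style sketch; read_once stands in for a VISA Read configured with the short timeout, and catching the timeout as an ordinary error mirrors turning off automatic error handling and testing the error wire yourself):

```python
def read_with_retries(read_once, max_timeouts=5):
    """Keep reading until a response arrives, tolerating a limited
    number of timeouts. A timeout is discarded and the read is
    retried; any run of max_timeouts consecutive timeouts is
    re-raised so the caller can give up."""
    timeouts = 0
    while True:
        try:
            return read_once()
        except TimeoutError:         # discard the timeout "error"
            timeouts += 1
            if timeouts >= max_timeouts:
                raise                # too many: give up for real
```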
Just as a general tip, whenever you pass an error wire into a loop and out again, I would use a shift register. That way if one iteration generates an error, the next iteration will see that error and fail immediately, so (for example) if communication fails you don't have to wait for the read timeouts to expire multiple times before your code can exit.
You'll probably have to do some experimenting and referring to LabVIEW help to get this fully working but hopefully this is enough to get you going.
Is there any way to look for console input under MicroPython without pausing the program?
Within a program, I can use, for example, uart1.any() to see if there is anything in the input buffer. If not, the program can just continue.
I have a system that runs autonomously. However, I want to be able to modify parameters after the program has started using the console. The problem is, if I just use input() then the program will pause, even if I don't want to take any action.
What I need is to be able to check the "console input buffer" periodically to see if I have entered anything and, if so, process that input, otherwise to just continue.
Is this possible?
=====================================
Many thanks for the suggestion! It works, but...
What I am trying to do is to run a process which can be interrupted by keyboard input and diverted to another process. When that is finished, I return to the original process.
The initial part works well; I poll stdin and nothing happens until I hit Return (for example). The program then correctly diverts to the other routine. However, when that is finished and I return to the original routine, it immediately diverts again, even though I have not pressed any further keys.
I have tried setting 'keypress' to None after trapping it; I have tried stdin.flush, which doesn't work! It's as though there is still something in the input buffer that I need to purge.
Any ideas?
You can poll stdin to see if data is available before attempting to read it.
from sys import stdin
from select import poll, POLLIN
poll_obj = poll()
poll_obj.register(stdin, POLLIN)
keypress = stdin.read(1) if poll_obj.poll(0) else None
print(keypress)
On a WCF REST service I am dealing with streams. In a service method I upload a stream in a data contract, which works fine. On the service side I process the stream, and its position is then at EOF. After that I need to set its position back to 0 so I can save it there, but it throws the exception:
Specified method is not supported.
Does that mean I can't process a stream more than once? If so, I'll need a workaround :/ and the only solution that pops into my mind is sending the stream two times so I can process each copy separately, but that's not good since I would have to upload it twice.
Any help would be appreciated.
Funny that I found my own solution :) First I saved the stream to disk, then read it back from that path for further processing. It's interesting that finding the solution didn't require more detailed technical information, just a change of logical approach.
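The idea behind that workaround, copying the forward-only request stream into something seekable before processing, is language-independent. A minimal Python sketch of the same pattern (BytesIO stands in for the temporary file; in the WCF case the buffer would be a file or MemoryStream):

```python
from io import BytesIO

def make_seekable(stream, chunk_size=64 * 1024):
    """Copy a forward-only stream into an in-memory buffer so it
    can be read, rewound with seek(0), and read again."""
    buf = BytesIO()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        buf.write(chunk)
    buf.seek(0)                      # rewind for the first consumer
    return buf
```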
I'm coding a simple TCP client/server in VB.NET to transfer files of all sizes. I decided to use the Socket.SendFile method from System.Net.Sockets to transfer the bytes through the socket.
On the receiving side, my code to retrieve the bytes works fairly well, but occasionally the transfer randomly stops.
I figured out that by putting a small sleep delay between retrieving the next block of data makes the transfers 100% stable.
My code to retrieve the data (until there is no data available) is simplified as this:
While newSocket.Available > 0
    Threading.Thread.Sleep(100)
    newSocket.ReceiveFrom(data, Remote)
End While
I really hate using that sleep delay and figure there must be a proper method/function to retrieve data from SendFile?
Socket.Available returns the total number of bytes that have been received so far but not yet read. Therefore, if you read the data faster than it's coming in (which is quite possible on a slow network), there will be no more data to read even though the client is still in the middle of sending the data.
If the client makes a new connection to the server for each file it sends, you could simply change it to something like this:
While newSocket.Connected
    If newSocket.Available > 0 Then
        newSocket.ReceiveFrom(data, Remote)
    End If
End While
However, I would suggest using the asynchronous calls, instead, such as BeginReceive. Then, your delegate will be called as soon as there is data to be processed, rather than waiting in a constant loop. See this link for an example:
http://msdn.microsoft.com/en-us/library/dxkwh6zw.aspx
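The read-until-closed idea is language-independent; here it is sketched in Python (socket.socketpair stands in for a real client connection), reading until recv() returns an empty buffer rather than stopping when no data happens to be available at that instant:

```python
import socket

def receive_all(sock, chunk_size=4096):
    """Read from the socket until the peer closes the connection.
    A slow sender just makes recv() block for a while; it never
    ends the loop early the way polling Available > 0 can."""
    data = bytearray()
    while True:
        chunk = sock.recv(chunk_size)
        if not chunk:                # empty read: peer closed
            break
        data.extend(chunk)
    return bytes(data)
```

This works when the sender opens one connection per file; if multiple files share a connection, you need a length prefix or delimiter instead of relying on close.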
I'm new to driver programming in general and to USB as well. However, I managed to write a driver for Windows CE (6.0), and I also had access to a USB sniffer to read all traffic between the host and the device.
The problem now occurs on some boards (2 out of the 3 I have):
When the device has no data to send and I issue an Interrupt-In-Transfer the device sends an ACK.
So far this is expected. However, something (I guess either the USB controller or WinCE) seems to automatically issue more IN transfers (3 on one board, 4 on another) and I get subsequent ACKs. This isn't a problem so far either.
But the next IN-Transfer will also result in an ACK, no matter if there is data to send or not, I receive zero bytes in the driver.
Yet, when I look at the USB sniffer, the proper telegram was sent; however, 2 more IN transfers are automatically issued and responded to with an ACK. So it seems the data is overwritten by the ACK.
I tried everything that came to my mind so far: resetting the pipe, closing and reopening the connection, but nothing works reliably. Resetting the pipe does solve the problem in about half the cases, though. I've really run out of ideas for solving this.
Is there a way to tell the USB-Controller (or WinCE or whatever causes this behaviour) to always only issue one single transfer?
EDIT
Turns out it was a threading issue. Unfortunately I wasn't the one who fixed it and I have no access to the working solution, thus I cannot give further details.