How to test Go channels / goroutines

I have a type that contains a byte of data, and takes a channel to post new data there. Other code can read the last written byte of data using a Read function.
Edit: for actual, runnable code, see https://github.com/ariejan/i6502/pull/3 especially files acia6551.go and acia6551_test.go. Test results can be viewed here: https://travis-ci.org/ariejan/i6502/jobs/32862705
I have the following:
// Emulates a serial interface chip of some kind.
type Unit struct {
	// Channel for others to write to; bytes sent here are stored in rxChar.
	Rx chan byte

	// Internal store of the last byte written.
	rxChar byte
}

// Used internally to read the data stored in rxChar.
func (u *Unit) Read() byte {
	return u.rxChar
}

// NewUnit creates a new Unit and a goroutine listening for Rx bytes.
func NewUnit(rx chan byte) *Unit {
	unit := &Unit{Rx: rx}

	go func() {
		for {
			select {
			case data := <-unit.Rx:
				unit.rxChar = data
				fmt.Printf("Posted 0x%02X\n", data)
			}
		}
	}()

	return unit
}
My test looks like this:
func TestUnitRx(t *testing.T) {
	rx := make(chan byte)
	u := NewUnit(rx)

	// Post a byte to the Rx channel.
	// This prints "Posted 0x42", as you'd expect.
	rx <- 0x42

	// Using testify's assert:
	// should read the last byte, 0x42, but this fails.
	fmt.Println("Reading value...")
	assert.Equal(t, 0x42, u.Read())
}
At first I figured the "Reading value" happened before the goroutine got around to writing the data. But the "Posted" message is always printed before "Reading".
So, two questions remain:
Is this the best way to handle an incoming stream of bytes (at 9600 baud ;-))?
If this is the right way, how can I properly test it, or what is wrong with my code?

Judging by the pieces posted here, it doesn't look like you have anything guaranteeing the order of operations when accessing the stored data. You can use a mutex around any data shared between goroutines.
A better option here is to use buffered channels of length 1 to write, store, and read the bytes.
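For example, a minimal sketch of that idea (the store channel and the drain-then-replace select are illustrative, not from your code):

// Unit keeps the most recent byte in a buffered channel of length 1.
type Unit struct {
	Rx    chan byte
	store chan byte // holds at most the last byte received
}

// NewUnit starts a goroutine that moves bytes from Rx into the
// one-element store, replacing any byte that hasn't been read yet.
func NewUnit(rx chan byte) *Unit {
	u := &Unit{Rx: rx, store: make(chan byte, 1)}
	go func() {
		for data := range u.Rx {
			select {
			case <-u.store: // discard the previous unread byte, if any
			default:
			}
			u.store <- data
		}
	}()
	return u
}

// Read blocks until a byte is available; the channel receive also
// gives the test a synchronization point, so the assertion no longer
// races with the goroutine's store.
func (u *Unit) Read() byte {
	return <-u.store
}

Note this changes Read's semantics: it consumes the byte and blocks until one arrives, rather than returning the last value repeatedly.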
It's always a good idea to run your tests with -race to enable the race detector.
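For example:

go test -race

This reruns the test suite with the race detector enabled and reports any unsynchronized access to rxChar.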
Since this looks very "stream"-like, you may very well want some buffering, and it's worth looking at examples of how the io.Reader and io.Writer interfaces are typically used.

Related

SslStream<TcpStream> read does not return client's message

I am trying to implement a client-server application using TLS (openssl). I followed the example given in the Rust docs for my code's structure: example
Server Code
fn handle_client(mut stream: SslStream<TcpStream>) {
    println!("Passed in handling method");
    let mut data = vec![];
    let length = stream.read(&mut data).unwrap();
    println!("read successfully; size read:{}", length);
    stream.write(b"From server").unwrap();
    stream.flush().unwrap();
    println!("{}", String::from_utf8_lossy(&data));
}
fn main() {
    //remember: certificate should always be signed
    let mut acceptor = SslAcceptor::mozilla_intermediate(SslMethod::tls()).unwrap();
    acceptor.set_private_key_file("src/keyfile/key.pem", SslFiletype::PEM).unwrap();
    acceptor.set_certificate_file("src/keyfile/certs.pem", SslFiletype::PEM).unwrap();
    acceptor.check_private_key().unwrap();
    let acceptor = Arc::new(acceptor.build());

    let listener = TcpListener::bind("127.0.0.1:9000").unwrap();
    for stream in listener.incoming() {
        match stream {
            Ok(stream) => {
                println!("a receiver is connected");
                let acceptor = acceptor.clone();
                //thread::spawn(move || {
                let stream = acceptor.accept(stream).unwrap();
                handle_client(stream);
                //});
            }
            Err(_e) => println!("connection failed"),
        }
    }
    println!("Server");
}
Client Code
fn main() {
    let mut connector = SslConnector::builder(SslMethod::tls()).unwrap();
    connector.set_verify(SslVerifyMode::NONE); // Deactivated verification due to authentication error
    connector.set_ca_file("src/keyfile/certs.pem");
    let connector = connector.build();

    let stream = TcpStream::connect("127.0.0.1:9000").unwrap();
    let mut stream = connector.connect("127.0.0.1", stream).unwrap();

    stream.write(b"From Client").unwrap();
    stream.flush().unwrap();
    println!("client sent its message");

    let mut res = vec![];
    stream.read_to_end(&mut res).unwrap();
    println!("{}", String::from_utf8_lossy(&res));
    // stream.write_all(b"client").unwrap();
    println!("Client");
}
The Server code and the client code both compile without issues, albeit with some warnings. The client is able to connect to the server. But when the client writes its message From Client to the stream, the stream.read called in handle_client() returns nothing. Furthermore, when the server writes its message From Server, the client is able to receive that.
Hence, is there an issue with the way I use SslStream, or with the way I configured my server?
I presume when you say stream.read returns nothing, that it returns a zero value indicating that nothing was read.
The Read trait API says this:
This function does not provide any guarantees about whether it blocks
waiting for data, but if an object needs to block for a read and
cannot, it will typically signal this via an Err return value.
If n is 0, then it can indicate one of two scenarios:
This reader has reached its "end of file" and will likely no longer be
able to produce bytes. Note that this does not mean that the reader
will always no longer be able to produce bytes.
The buffer specified was 0 bytes in length.
It is not an error if the returned value n is smaller than the buffer
size, even when the reader is not at the end of the stream yet. This
may happen for example because fewer bytes are actually available
right now (e.g. being close to end-of-file) or because read() was
interrupted by a signal.
So, you need to repeatedly call read until you receive all the bytes you expect, or you get an error.
If you know exactly how much you want to read (as you do in this case) you can call read_exact which will read exactly the number of bytes needed to fill the supplied buffer.
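A minimal sketch, assuming stream is the accepted SslStream<TcpStream> and using the client's fixed-length b"From Client" message from the question:

use std::io::Read;

// Note: `vec![]` has length 0, so `stream.read(&mut data)` into it returns
// Ok(0) immediately; the buffer must be pre-sized.
let mut data = vec![0u8; b"From Client".len()];
stream.read_exact(&mut data).unwrap(); // fills the entire buffer or returns Err
println!("{}", String::from_utf8_lossy(&data));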
If you want to read up until a delimiter (such as a newline or other character) you can use a BufReader, which provides methods such as read_until or read_line.
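For example, a sketch assuming stream implements Read:

use std::io::{BufRead, BufReader};

let mut reader = BufReader::new(stream);
let mut line = String::new();
reader.read_line(&mut line).unwrap(); // reads up to and including '\n'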

fatfs f_write returns FR_DISK_ERR when passing a pointer to data in a mail queue

I'm trying to use FreeRTOS to write ADC data to an SD card on the STM32F7, using V1 of the CMSIS-RTOS API. I'm using mail queues, and I have a struct that holds an array.
typedef struct
{
    uint16_t data[2048];
} ADC_DATA;
On the ADC half/full transfer complete interrupts, I add the data to the queue, and I have a consumer task that writes this data to the SD card. My issue is that in my consumer task I have to memcpy the data to another array and then write the contents of that array to the SD card.
void vConsumer(void const * argument)
{
    ADC_DATA *rx_data;

    for(;;)
    {
        writeEvent = osMailGet(adcDataMailId, osWaitForever);
        if(writeEvent.status == osEventMail)
        {
            // write data to SD
            rx_data = writeEvent.value.p;
            memcpy(sd_buff, rx_data->data, sizeof(sd_buff));
            if(wav_write_result == FR_OK)
            {
                if( f_write(&wavFile, (uint8_t *)sd_buff, SD_WRITE_BUF_SIZE, (void*)&bytes_written) == FR_OK )
                {
                    file_size += bytes_written;
                }
            }
            osMailFree(adcDataMailId, rx_data);
        }
    }
}
This works as intended but if I try to change this line to
f_write(&wavFile, (uint8_t *)rx_data->data, SD_WRITE_BUF_SIZE, (void*)&bytes_written) == FR_OK)
so as to get rid of the memcpy, f_write returns FR_DISK_ERR. Can anyone shed light on why this happens? The extra memcpy feels useless; I should just be able to pass the pointer from the queue straight to f_write.
So just a few thoughts here:
memcpy
Usually I copy only the necessary amount of data. If I have the size of the actual data, I add a bounds check and pass the clamped size to memcpy.
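For example, a sketch assuming the struct carried an explicit length (a hypothetical len member; the question's struct only has data[2048]):

size_t n = rx_data->len;      /* hypothetical length field, not in the question's struct */
if (n > sizeof(sd_buff))
    n = sizeof(sd_buff);      /* clamp to the destination's capacity */
memcpy(sd_buff, rx_data->data, n);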
Your problem
I am just guessing here, but if you check the struct definition, the data field has type uint16_t[2048] (4096 bytes), and you cast it to a byte pointer. Also, the FatFs documentation expects void* for the buf parameter.
EDIT: Could you post more details of sd_buff?

Issues using async_write right after async_read_until

My code is as follows:
boost::asio::streambuf b1;

boost::asio::async_read_until(upstream_socket_, b1, '#',
    boost::bind(&bridge::handle_upstream_read, shared_from_this(),
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred));

void handle_upstream1_read(const boost::system::error_code& error,
                           const size_t& bytes_transferred)
{
    if (!error)
    {
        async_write(downstream_socket_,
            b2,
            boost::bind(&bridge::handle_downstream_write,
                shared_from_this(),
                boost::asio::placeholders::error));
    }
    else
        close();
}
According to the documentation of async_read_until, http://www.boost.org/doc/libs/1_55_0/doc/html/boost_asio/reference/async_read_until/overload1.html,
After a successful async_read_until operation, the streambuf may contain additional data beyond the delimiter. An application will typically leave that data in the streambuf for a subsequent async_read_until operation to examine.
I know that the streambuf may contain additional data beyond the delimiter, but in my case, will async_write write that additional data (the data beyond the char '#') to the downstream_socket_? Or is async_write smart enough not to write the additional data until the next time handle_upstream1_read is called?
According to the approaches in the documentation, the data in the streambuf is first wrapped in an istream ( std::istream response_stream(&streambuf); )
and then extracted into a string using the std::getline() function.
Do I really need to wrap the streambuf in an istream, convert it into a string, and then convert it back to a char array (so that I can send the char array to downstream_socket_), instead of just using async_write to write the data (up to but not including the delimiter '#') to downstream_socket_?
I prefer the second approach so that I don't need to make several conversions on the data. However, something seems to go wrong when I try it.
My ideal case is that:
upstream_socket_ received xxxx#yyyy by using async_read_until
xxxx# is written to the downstream_socket_
upstream_socket_ received zzzz#kkkk by using async_read_until
yyyyzzzz# is written to the downstream_socket_
It seems that async_write operation still writes the data beyond the delimiter to the downstream_socket_. (but I am not 100% sure about this)
I'd appreciate it if anyone can provide a little help!
The async_write() overload being used is considered complete when all of the streambuf's data, its input sequence, has been written to the WriteStream (socket). It is equivalent to calling:
boost::asio::async_write(stream, streambuf,
    boost::asio::transfer_all(), handler);
One can limit the amount of bytes written and consumed from the streambuf object by calling this async_write() overload with the boost::asio::transfer_exactly completion condition:
boost::asio::async_write(stream, streambuf,
    boost::asio::transfer_exactly(n), handler);
Alternatively, one can write directly from the streambuf's input sequence. However, one will need to explicitly consume from the streambuf.
boost::asio::async_write(stream,
    boost::asio::buffer(streambuf.data(), n), handler);

// Within the completion handler...
streambuf.consume(n);
Note that when the async_read_until() operation completes, the completion handler's bytes_transferred argument contains the number of bytes in the streambuf's input sequence up to and including the delimiter, or 0 if an error occurred.
Here is a complete example demonstrating using both approaches. The example is written using synchronous operations in an attempt to simplify the flow:
#include <cassert>
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>

// This example is not interested in the handlers, so provide a noop function
// that will be passed to bind to meet the handler concept requirements.
void noop() {}

/// @brief Helper function that extracts a string from a streambuf.
std::string make_string(
    boost::asio::streambuf& streambuf,
    std::size_t n)
{
    return std::string(
        boost::asio::buffers_begin(streambuf.data()),
        boost::asio::buffers_begin(streambuf.data()) + n);
}

int main()
{
    using boost::asio::ip::tcp;
    boost::asio::io_service io_service;

    // Create all I/O objects.
    tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 0));
    tcp::socket server_socket(io_service);
    tcp::socket client_socket(io_service);

    // Connect client and server sockets.
    acceptor.async_accept(server_socket, boost::bind(&noop));
    client_socket.async_connect(acceptor.local_endpoint(), boost::bind(&noop));
    io_service.run();

    // Mockup write_buffer as if it read "xxxx#yyyy" with read_until()
    // using '#' as a delimiter.
    boost::asio::streambuf write_buffer;
    std::ostream output(&write_buffer);
    output << "xxxx#yyyy";
    assert(write_buffer.size() == 9);
    auto bytes_transferred = 5;

    // Write to server.
    boost::asio::write(server_socket, write_buffer,
        boost::asio::transfer_exactly(bytes_transferred));
    // Verify the write operation consumed part of the input sequence.
    assert(write_buffer.size() == 4);

    // Read from client.
    boost::asio::streambuf read_buffer;
    bytes_transferred = boost::asio::read(
        client_socket, read_buffer.prepare(bytes_transferred));
    read_buffer.commit(bytes_transferred);

    // Copy from the read buffer's input sequence.
    std::cout << "Read: " <<
        make_string(read_buffer, bytes_transferred) << std::endl;
    read_buffer.consume(bytes_transferred);

    // Mockup write_buffer as if it read "zzzz#kkkk" with read_until()
    // using '#' as a delimiter.
    output << "zzzz#kkkk";
    assert(write_buffer.size() == 13);
    bytes_transferred = 9; // yyyyzzzz#

    // Write to server.
    boost::asio::write(server_socket, boost::asio::buffer(
        write_buffer.data(), bytes_transferred));
    // Verify the write operation did not consume the input sequence.
    assert(write_buffer.size() == 13);
    write_buffer.consume(bytes_transferred);

    // Read from client.
    bytes_transferred = boost::asio::read(
        client_socket, read_buffer.prepare(bytes_transferred));
    read_buffer.commit(bytes_transferred);

    // Copy from the read buffer's input sequence.
    std::cout << "Read: " <<
        make_string(read_buffer, bytes_transferred) << std::endl;
    read_buffer.consume(bytes_transferred);
}
Output:
Read: xxxx#
Read: yyyyzzzz#
A few other notes:
The streambuf owns the memory, and std::istream and std::ostream use the memory. Using streams may be a good idea when one needs to extract formatted input or insert formatted output. For instance, when one wishes to read the string "123" as an integer 123.
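For instance, a sketch of that formatted extraction:

std::istream input(&streambuf);
int value;
input >> value; // reads the characters "123" and produces the integer 123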
One can directly access the streambuf's input sequence and iterate over it. In the example above, I use boost::asio::buffers_begin() to help construct a std::string by iterating over a streambuf's input sequence.
std::string(
    boost::asio::buffers_begin(streambuf.data()),
    boost::asio::buffers_begin(streambuf.data()) + n);
A stream-based transport protocol is being used, so handle incoming data as a stream. Be aware that even if the intermediary server reframes messages and sends "xxxx#" in one write operation and "yyyyzzzz#" in a subsequent write operation, the downstream may read "xxxx#yyyy" in a single read operation.

Reading dynamically growing file using NSInputStream

I need to use Objective-C to read a slowly growing file (under Mac OS X).
"Slowly" means that I reach EOF before the file grows again.
In plain synchronous C with POSIX calls, I can do it as follows:
while(1)
{
    res = select(fd+1, &fdset, NULL, &fdset, some_timeout);
    if(res > 0)
    {
        len = read(fd, buf, sizeof(buf));
        if (len > 0)
        {
            printf("Could read %u bytes. Continue.\n", len);
        }
        else
        {
            sleep(some_timeout_in_sec);
        }
    }
}
Now I want to rewrite this in an asynchronous manner, using NSInputStream or some other async Objective-C technique.
The problem with NSInputStream: if I use the scheduleInRunLoop: method, then once I get an NSStreamEventEndEncountered event, I stop receiving any events.
Can I still use NSInputStream, or should I switch to NSFileHandle somehow, or what would you recommend?
I see a few problems.
1) some_timeout, for select(), needs to be a struct timeval *.
2) For sleep(), some_timeout needs to be an integer number of seconds.
3) The value in some_timeout is decremented by select() (which is why the last parameter is a pointer to a struct timeval), so the struct needs to be re-initialized before each call to select().
4) The parameters to select() are the highest fd of interest plus 1, followed by three separate fd_set * arguments: the first is for input files, the second for output files, the third for exceptions. The posted code passes the same fd_set for both the inputs and the exceptions, which is probably not what is needed.
When the above problems are corrected, the code should work, as in the sketch below.
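A minimal sketch of the corrected loop; the buffer size and timeout values are illustrative:

#include <stdio.h>
#include <unistd.h>
#include <sys/select.h>

void poll_growing_file(int fd)
{
    char buf[4096];

    while (1)
    {
        fd_set readfds;
        struct timeval timeout;

        /* select() may modify both the fd_set and the timeval,
           so re-initialize them before every call. */
        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);
        timeout.tv_sec = 1;
        timeout.tv_usec = 0;

        int res = select(fd + 1, &readfds, NULL, NULL, &timeout);
        if (res > 0)
        {
            ssize_t len = read(fd, buf, sizeof(buf));
            if (len > 0)
                printf("Could read %zd bytes. Continue.\n", len);
            else
                sleep(1); /* at EOF: wait for the file to grow */
        }
    }
}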

sprintf() and WriteFile() affecting string Buffer

I have a very weird problem which I cannot seem to figure out. Unfortunately, I'm not even sure how to describe it without describing my entire application. What I am trying to do is:
1) read a byte from the serial port
2) store each char into tagBuffer as they are read
3) run a query using tagBuffer to see what type of tag it is (book or shelf tag)
4) depending on the type of tag, output a series of bytes corresponding to the type of tag
Most of my code is implemented, and I can get the right tag code sent back out the serial port. But there are two lines that I added as debug statements; when I try to remove them, my program stops working.
The lines are the two lines at the very bottom:
sprintf(buf,"%s!\n", tagBuffer);
WriteFile(hSerial,buf,strlen(buf), &dwBytesWritten,&ovWrite);
If I remove them, tagBuffer only stores the last character read, as opposed to accumulating a buffer. The same goes for the next line, the WriteFile().
I thought sprintf and WriteFile are I/O functions and would have no effect on variables.
I'm stuck and I need help to fix this.
//keep polling as long as stop character '-' is not read
while(szRxChar != '-')
{
    // Check if a read is outstanding
    if (HasOverlappedIoCompleted(&ovRead))
    {
        // Issue a serial port read
        if (!ReadFile(hSerial, &szRxChar, 1,
                      &dwBytesRead, &ovRead))
        {
            DWORD dwErr = GetLastError();
            if (dwErr != ERROR_IO_PENDING)
                return dwErr;
        }
    }

    // resets tagBuffer in case tagBuffer is out of sync
    time_t t_time = time(0);
    char buf[50];
    if (HasOverlappedIoCompleted(&ovWrite))
    {
        i = 0;
    }

    // Wait 5 seconds for serial input
    if (!(HasOverlappedIoCompleted(&ovRead)))
    {
        WaitForSingleObject(hReadEvent, RESET_TIME);
    }

    // Check if serial input has arrived
    if (GetOverlappedResult(hSerial, &ovRead,
                            &dwBytesRead, FALSE))
    {
        // Wait for the write
        GetOverlappedResult(hSerial, &ovWrite,
                            &dwBytesWritten, TRUE);

        if (strlen(tagBuffer) >= PACKET_LENGTH)
        {
            i = 0;
        }

        // load tagBuffer with the byte stream
        tagBuffer[i] = szRxChar;
        i++;
        tagBuffer[i] = 0; // char arrays are \0 terminated

        // run query with tagBuffer
        sprintf(query, "select type from rfid where rfidnum=\"");
        strcat(query, tagBuffer);
        strcat(query, "\"");
        mysql_real_query(&mysql, query, (unsigned int)strlen(query));

        // process result and send back to handheld
        res = mysql_use_result(&mysql);
        while ((row = mysql_fetch_row(res)))
        {
            printf("result of query is %s\n", row[0]);
            string str = "";
            str = string(row[0]);
            if (str == "book")
            {
                WriteFile(hSerial, BOOK_INDICATOR, strlen(BOOK_INDICATOR),
                          &dwBytesWritten, &ovWrite);
            }
            else if (str == "shelf")
            {
                WriteFile(hSerial, SHELF_INDICATOR, strlen(SHELF_INDICATOR),
                          &dwBytesWritten, &ovWrite);
            }
            else // this else doesn't work
            {
                WriteFile(hSerial, NOK, strlen(NOK),
                          &dwBytesWritten, &ovWrite);
            }
        }
        mysql_free_result(res);

        // Display a response to input
        //printf("query is %s!\n", query);
        //printf("strlen(tagBuffer) is %d!\n", strlen(tagBuffer));
        //without these, tagBuffer only holds the last character
        sprintf(buf, "%s!\n", tagBuffer);
        WriteFile(hSerial, buf, strlen(buf), &dwBytesWritten, &ovWrite);
    }
}
With those two lines, my output looks like this:
s sh she shel shelf shelf0 shelf00 BOOKCODE shelf0001
Without them, I figured out that tagBuffer and buf only store the most recent character at any one time.
Any help at all will be greatly appreciated. Thanks.
Where are you allocating tagBuffer, and how large is it?
It's possible that you are overwriting buf because you are writing past the end of tagBuffer.
It seems unlikely that those two lines would have that effect on a correct program. Maybe you haven't allocated sufficient space in buf for the whole length of the string in tagBuffer? That could cause a buffer overrun that is disguising the real problem.
The first thing I'd say is a piece of general advice: bugs aren't always where you think they are. If you've got something going on that doesn't seem to make sense, it often means that your assumptions somewhere else are wrong.
Here, it does seem very unlikely that an sprintf() and a WriteFile() will change the state of the "buf" array variable. However, those two lines of test code do write to "hSerial", while your main loop also reads from "hSerial". That sounds like a recipe for changing the behaviour of your program.
Suggestion: Change your lines of debugging output to store the output somewhere else: to a dialog box, or to a log file, or similar. Debugging output should generally not go to files used in the core logic, as that's too likely to change how the core logic behaves.
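For example, a minimal sketch of sending the same debug text to a log file instead (the path is illustrative; stdio.h is already used by the surrounding code):

FILE *dbg = fopen("C:\\temp\\tagreader.log", "a");
if (dbg != NULL)
{
    fprintf(dbg, "%s!\n", tagBuffer); /* same text, but not written to hSerial */
    fclose(dbg);
}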
In my opinion, the real problem here is that you're trying to read and write the serial port from a single thread, and this is making the code more complex than it needs to be. I suggest that you read the following articles and reconsider your design:
Serial Port I/O from Joseph Newcomer's website.
Serial Communications in Win32 from MSDN.
In a multithreaded implementation, whenever the reader thread reads a message from the serial port you would then post it to your application's main thread. The main thread would then parse the message and query the database, and then queue an appropriate response to the writer thread.
This may sound more complex than your current design, but really it isn't, as Newcomer explains.
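A minimal sketch of the reader-thread half of that design, assuming hSerial is opened for plain (non-overlapped) I/O; g_mainThreadId and WM_TAG_BYTE are illustrative names, not an existing API:

#include <windows.h>

#define WM_TAG_BYTE (WM_APP + 1)   /* arbitrary application-defined message */

static DWORD g_mainThreadId;       /* set from the main thread via GetCurrentThreadId() */

DWORD WINAPI ReaderThread(LPVOID param)
{
    HANDLE hSerial = (HANDLE)param;
    char ch;
    DWORD dwBytesRead;

    /* Blocking one-byte reads; each byte is posted to the main thread,
       which parses the tag and runs the database query. */
    while (ReadFile(hSerial, &ch, 1, &dwBytesRead, NULL) && dwBytesRead == 1)
    {
        PostThreadMessage(g_mainThreadId, WM_TAG_BYTE, (WPARAM)(unsigned char)ch, 0);
    }
    return 0;
}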
I hope this helps!