Issues with using async_write right after async_read_until - boost-asio

My code is as follows:
boost::asio::streambuf b1;

boost::asio::async_read_until(upstream_socket_, b1, '#',
    boost::bind(&bridge::handle_upstream1_read, shared_from_this(),
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred));

void handle_upstream1_read(const boost::system::error_code& error,
                           const size_t& bytes_transferred)
{
    if (!error)
    {
        async_write(downstream_socket_,
            b2,
            boost::bind(&bridge::handle_downstream_write,
                shared_from_this(),
                boost::asio::placeholders::error));
    }
    else
        close();
}
According to the documentation of async_read_until, http://www.boost.org/doc/libs/1_55_0/doc/html/boost_asio/reference/async_read_until/overload1.html,
After a successful async_read_until operation, the streambuf may contain additional data beyond the delimiter. An application will typically leave that data in the streambuf for a subsequent async_read_until operation to examine.
I know that the streambuf may contain additional data beyond the delimiter, but, in my case, will async_write also write that additional data (the data beyond the '#' character) to downstream_socket_? Or is async_write smart enough not to write that additional data until the next time handle_upstream1_read is called?
According to the approach in the documentation, the data in the streambuf is first wrapped in an istream ( std::istream response_stream(&streambuf); ) and then put into a string using the std::getline() function.
Do I really need to put the streambuf into an istream first, convert it into a string, and then convert it back to a char array (so that I can send the char array to downstream_socket_), instead of just using async_write to write the data (up to but not including the delimiter '#') to downstream_socket_?
I prefer the second approach so that I don't need to make several conversions on the data. However, something seems to go wrong when I try the second approach.
My ideal case is that:
upstream_socket_ received xxxx#yyyy by using async_read_until
xxxx# is written to the downstream_socket_
upstream_socket_ received zzzz#kkkk by using async_read_until
yyyyzzzz# is written to the downstream_socket_
It seems that the async_write operation still writes the data beyond the delimiter to downstream_socket_ (but I am not 100% sure about this).
I would appreciate it if anyone could provide a little help!

The async_write() overload being used is considered complete when all of the streambuf's data, its input sequence, has been written to the WriteStream (socket). It is equivalent to calling:
boost::asio::async_write(stream, streambuf,
    boost::asio::transfer_all(), handler);
One can limit the amount of bytes written and consumed from the streambuf object by calling this async_write() overload with the boost::asio::transfer_exactly completion condition:
boost::asio::async_write(stream, streambuf,
    boost::asio::transfer_exactly(n), handler);
Alternatively, one can write directly from the streambuf's input sequence. However, one will need to explicitly consume from the streambuf.
boost::asio::async_write(stream,
    boost::asio::buffer(streambuf.data(), n), handler);

// Within the completion handler...
streambuf.consume(n);
Note that when the async_read_until() operation completes, the completion handler's bytes_transferred argument contains the number of bytes in the streambuf's input sequence up to and including the delimiter, or 0 if an error occurred.
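For instance, adapting the handler from the question, a minimal sketch could look like the following (assuming the intent is to forward the contents of the streambuf that was just read into, b1, rather than a separate buffer):

void handle_upstream1_read(const boost::system::error_code& error,
                           const size_t& bytes_transferred)
{
    if (!error)
    {
        // Write only the bytes up to and including '#'; transfer_exactly()
        // also consumes exactly that many bytes from b1, leaving the data
        // beyond the delimiter for the next async_read_until().
        boost::asio::async_write(downstream_socket_,
            b1,
            boost::asio::transfer_exactly(bytes_transferred),
            boost::bind(&bridge::handle_downstream_write,
                shared_from_this(),
                boost::asio::placeholders::error));
    }
    else
        close();
}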
Here is a complete example demonstrating using both approaches. The example is written using synchronous operations in an attempt to simplify the flow:
#include <cassert>
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>

// This example is not interested in the handlers, so provide a noop function
// that will be passed to bind to meet the handler concept requirements.
void noop() {}

/// @brief Helper function that extracts a string from a streambuf.
std::string make_string(
    boost::asio::streambuf& streambuf,
    std::size_t n)
{
    return std::string(
        boost::asio::buffers_begin(streambuf.data()),
        boost::asio::buffers_begin(streambuf.data()) + n);
}

int main()
{
    using boost::asio::ip::tcp;
    boost::asio::io_service io_service;

    // Create all I/O objects.
    tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 0));
    tcp::socket server_socket(io_service);
    tcp::socket client_socket(io_service);

    // Connect client and server sockets.
    acceptor.async_accept(server_socket, boost::bind(&noop));
    client_socket.async_connect(acceptor.local_endpoint(), boost::bind(&noop));
    io_service.run();

    // Mockup write_buffer as if it read "xxxx#yyyy" with read_until()
    // using '#' as a delimiter.
    boost::asio::streambuf write_buffer;
    std::ostream output(&write_buffer);
    output << "xxxx#yyyy";
    assert(write_buffer.size() == 9);
    std::size_t bytes_transferred = 5; // "xxxx#"

    // Write to server.
    boost::asio::write(server_socket, write_buffer,
        boost::asio::transfer_exactly(bytes_transferred));
    // Verify the write operation consumed part of the input sequence.
    assert(write_buffer.size() == 4);

    // Read from client.
    boost::asio::streambuf read_buffer;
    bytes_transferred = boost::asio::read(
        client_socket, read_buffer.prepare(bytes_transferred));
    read_buffer.commit(bytes_transferred);

    // Copy from the read buffer's input sequence.
    std::cout << "Read: " <<
        make_string(read_buffer, bytes_transferred) << std::endl;
    read_buffer.consume(bytes_transferred);

    // Mockup write_buffer as if it read "zzzz#kkkk" with read_until()
    // using '#' as a delimiter.
    output << "zzzz#kkkk";
    assert(write_buffer.size() == 13);
    bytes_transferred = 9; // "yyyyzzzz#"

    // Write to server.
    boost::asio::write(server_socket,
        boost::asio::buffer(write_buffer.data(), bytes_transferred));
    // Verify the write operation did not consume the input sequence.
    assert(write_buffer.size() == 13);
    write_buffer.consume(bytes_transferred);

    // Read from client.
    bytes_transferred = boost::asio::read(
        client_socket, read_buffer.prepare(bytes_transferred));
    read_buffer.commit(bytes_transferred);

    // Copy from the read buffer's input sequence.
    std::cout << "Read: " <<
        make_string(read_buffer, bytes_transferred) << std::endl;
    read_buffer.consume(bytes_transferred);
}
Output:
Read: xxxx#
Read: yyyyzzzz#
A few other notes:
The streambuf owns the memory, and std::istream and std::ostream use the memory. Using streams may be a good idea when one needs to extract formatted input or insert formatted output. For instance, when one wishes to read the string "123" as an integer 123.
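For example, a small sketch of formatted extraction through a std::istream attached to a streambuf (the names here are illustrative):

boost::asio::streambuf sbuf;
std::ostream out(&sbuf);
out << "123#";

std::istream in(&sbuf);
int value = 0;
in >> value;                  // the characters "123" are parsed as the int 123
std::cout << value << "\n";   // prints 123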
One can directly access the streambuf's input sequence and iterate over it. In the example above, I use boost::asio::buffers_begin() to help construct a std::string by iterating over a streambuf's input sequence.
std::string(
    boost::asio::buffers_begin(streambuf.data()),
    boost::asio::buffers_begin(streambuf.data()) + n);
A stream-based transport protocol is being used, so handle incoming data as a stream. Be aware that even if the intermediary server reframes messages and sends "xxxx#" in one write operation and "yyyyzzzz#" in a subsequent write operation, the downstream may read "xxxx#yyyy" in a single read operation.
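To illustrate that last point, the downstream reader can frame on the delimiter itself rather than assuming one read per write. A sketch, reusing client_socket and make_string() from the example above:

// Even if "xxxx#" and "yyyyzzzz#" arrive in a single TCP segment,
// read_until() extracts one '#'-terminated frame at a time, leaving the
// remainder in the streambuf for the next call.
boost::asio::streambuf frame_buffer;
std::size_t n = boost::asio::read_until(client_socket, frame_buffer, '#');
std::cout << "Frame: " << make_string(frame_buffer, n) << std::endl;
frame_buffer.consume(n);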

Related

fatfs f_write returns FR_DISK_ERR when passing a pointer to data in a mail queue

I'm trying to use FreeRTOS to write ADC data to SD card on the STM32F7 and I'm using V1 of the CMSIS-RTOS API. I'm using mail queues and I have a struct that holds an array.
typedef struct
{
    uint16_t data[2048];
} ADC_DATA;
On the ADC half/full transfer complete interrupts, I add the data to the queue, and I have a consumer task that writes this data to the SD card. My issue is that in my consumer task, I have to memcpy the data into another array and then write the contents of that array to the SD card.
void vConsumer(void const * argument)
{
    ADC_DATA *rx_data;

    for(;;)
    {
        writeEvent = osMailGet(adcDataMailId, osWaitForever);
        if(writeEvent.status == osEventMail)
        {
            // Write data to SD
            rx_data = writeEvent.value.p;
            memcpy(sd_buff, rx_data->data, sizeof(sd_buff));
            if(wav_write_result == FR_OK)
            {
                if( f_write(&wavFile, (uint8_t *)sd_buff, SD_WRITE_BUF_SIZE, (void*)&bytes_written) == FR_OK)
                {
                    file_size += bytes_written;
                }
            }
            osMailFree(adcDataMailId, rx_data);
        }
    }
}
This works as intended but if I try to change this line to
f_write(&wavFile, (uint8_t *)rx_data->data, SD_WRITE_BUF_SIZE, (void*)&bytes_written) == FR_OK)
so as to get rid of the memcpy, f_write returns FR_DISK_ERR. Can anyone shed light on why this happens? I feel like the extra memcpy is useless and you should just be able to pass the pointer from the queue straight to f_write.
So just a few thoughts here:
memcpy
Usually I copy only the necessary amount of data. If I have the size of the actual data, I add a boundary check and pass that size to memcpy.
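A sketch of that kind of bounded copy, reusing the names from the question (sd_buff and its declared size are assumptions about the poster's surrounding code):

/* Copy no more than the destination can hold. */
size_t to_copy = sizeof(rx_data->data);
if (to_copy > sizeof(sd_buff))
    to_copy = sizeof(sd_buff);
memcpy(sd_buff, rx_data->data, to_copy);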
Your problem
I am just guessing here, but if you check the struct definition, the data field has the type uint16_t, and you cast it to a byte pointer. Also, the FatFs documentation expects a void* for the buf parameter.
EDIT: Could you post more details of sd_buff?

Is it safe to call sc_fifo::nb_write() from a SC_THREAD process?

I am converting some of my code from an SC_THREAD to an SC_METHOD. My question is: do I need to stop using the sc_fifo class? I realize an SC_METHOD should not call sc_fifo::write(), because it calls wait, which is not allowed in processes that cannot be suspended. However, sc_fifo provides non-blocking versions of various functions, and potentially I could use these instead. Some of the documentation I've read indicates you should never use sc_fifo from an SC_METHOD at all, but provides no justification.
Here is a sample of code I am currently using.
#include <systemc.h>

class Example : public sc_module {
public:
    sc_fifo<int> myFifo;
    sc_in<bool> clock_in;

    SC_HAS_PROCESS(Example);

    // constructor
    Example(sc_module_name name) : sc_module(name) {
        SC_METHOD(read);
        sensitive << clock_in;
    }

    void read() {
        int value = -1;
        bool success = myFifo.nb_read(value);
        if (success) { cout << "Read value " << value << endl; }
        else { cout << "No read done but that's okay." << endl; }
    }
};

int sc_main(int argc, char* argv[]) {
    sc_clock clock("clock");
    Example example("example");
    example.clock_in(clock);

    sc_start(10, SC_NS);
    return 0;
}
This throws no errors even though I am calling an sc_fifo function from an SC_METHOD. Is it bad practice to use nb_read() from inside an SC_METHOD? If so, why?
Using sc_fifo non-blocking calls from an SC_METHOD should be fine.
I have not found any place in the standard manual that prohibits it.
Neither nb_read nor nb_write, as their names suggest, calls wait internally, so it's fine to use them from an SC_METHOD.
While your example code works, it's rather inefficient when things are put into the fifo infrequently. If you want your code to be more event-driven, you could make the SC_METHOD sensitive to the fifo's data_written_event(); then it will only be called when something is actually written to the fifo (though it's still a good idea to check that nb_read returns true, in case something else pulled from the same fifo). Of course, this would skip your "No read done but that's okay." prints.
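A minimal sketch of that constructor change, based on the Example module above (dont_initialize() is optional and simply suppresses the initial call before any write has happened):

// Trigger read() only when something is written to the fifo.
Example(sc_module_name name) : sc_module(name) {
    SC_METHOD(read);
    sensitive << myFifo.data_written_event();
    dont_initialize();
}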
Also, I think the title of your question probably meant to ask about calling nb_write from an SC_METHOD rather than an SC_THREAD.

Reading dynamically growing file using NSInputStream

I need to use Objective-C to read a slowly growing file (under Mac OS X).
"Slowly" means that I reach EOF before it grows bigger.
In terms of POSIX code, in plain synchronous C I can do it as follows:
while(1)
{
    res = select(fd+1, &fdset, NULL, &fdset, some_timeout);
    if (res > 0)
    {
        len = read(fd, buf, sizeof(buf));
        if (len > 0)
        {
            printf("Could read %u bytes. Continue.\n", len);
        }
        else
        {
            sleep(some_timeout_in_sec);
        }
    }
}
Now I want to rewrite this in an asynchronous manner, using NSInputStream or some other async Objective-C technique.
The problem with NSInputStream: if I use the scheduleInRunLoop: method, then once I get the NSStreamEventEndEncountered event, I stop receiving any events.
Can I still use NSInputStream, or should I switch to using NSFileHandle somehow, or what would you recommend?
I see a few problems.
1) some_timeout, for select(), needs to be a struct timeval *.
2) For sleep(), some_timeout needs to be an integer number of seconds.
3) The value in some_timeout is decremented by select() (which is why the last parameter is a pointer to a struct timeval), and that struct needs to be re-initialized before each call to select().
4) The parameters to select() are the highest fd of interest + 1, then three separate struct fd_set * objects: the first is for inputs, the second for outputs, and the third for exceptions. However, the posted code uses the same struct fd_set for both the inputs and the exceptions, which is probably not what is wanted.
When the above problems are corrected, the code should work.
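For example, a minimal sketch of the loop with those fixes applied (fd, the buffer, and the timeout values are assumed to come from the poster's surrounding code; requires <sys/select.h>, <unistd.h>, and <stdio.h>):

char buf[4096];
while (1)
{
    fd_set readfds;
    struct timeval timeout;

    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);

    /* Re-initialize the timeout before every call; select() may modify it. */
    timeout.tv_sec  = 1;
    timeout.tv_usec = 0;

    int res = select(fd + 1, &readfds, NULL, NULL, &timeout);
    if (res > 0 && FD_ISSET(fd, &readfds))
    {
        ssize_t len = read(fd, buf, sizeof(buf));
        if (len > 0)
            printf("Could read %zd bytes. Continue.\n", len);
        else
            sleep(1);   /* at EOF; wait for the file to grow */
    }
}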

How to test Golang channels / go-routines

I have a type that contains a byte of data, and takes a channel to post new data there. Other code can read the last written byte of data using a Read function.
Edit: for actual, runnable code, see https://github.com/ariejan/i6502/pull/3, especially the files acia6551.go and acia6551_test.go. Test results can be viewed here: https://travis-ci.org/ariejan/i6502/jobs/32862705
I have the following:
// Emulates a serial interface chip of some kind.
type Unit struct {
    // Channel for others to use; bytes written here will be placed in rxChar.
    Rx chan byte

    // Internal store of the last byte written.
    rxChar byte
}

// Used internally to read the data stored in rxChar.
func (u *Unit) Read() byte {
    return u.rxChar
}

// Create a new Unit and a goroutine to listen for Rx bytes.
func NewUnit(rx chan byte) *Unit {
    unit := &Unit{Rx: rx}

    go func() {
        for {
            select {
            case data := <-unit.Rx:
                unit.rxChar = data
                fmt.Printf("Posted 0x%02X\n", data)
            }
        }
    }()

    return unit
}
My test looks like this:
func TestUnitRx(t *testing.T) {
    rx := make(chan byte)
    u := NewUnit(rx)

    // Post a byte to the Rx channel.
    // This prints "Posted 0x42", as you'd expect.
    rx <- 0x42

    // Using testing
    // Should read the last byte, 0x42, but fails.
    fmt.Println("Reading value...")
    assert.Equal(t, 0x42, u.Read())
}
At first I figured the "Reading value" happened before the goroutine got around to writing the data. But the "Posted" message is always printed before "Reading".
So, two questions remain:
Is this the best way to handle an incoming stream of bytes (at 9600 baud ;-))
If this is the right way, how can I properly test it or what is wrong with my code?
Guessing by the pieces posted here, it doesn't look like you have anything guaranteeing the order of operations when accessing the stored data. You can use a mutex around any data shared between goroutines.
A better option here is to use buffered channels of length 1 to write, store, and read the bytes.
It's always a good idea to test your program with -race to use the race detector.
Since this looks very "stream" like, you very well may want some buffering, and to look at some examples of how the io.Reader and io.Writer interfaces are often used.

Complex check-methods with boost.test

I want to test different constructors of a string class. Therefore I wrote myself a test method that checks a couple of standard things:
void checkStringStandards(String& s, size_t length, const char* text){
    BOOST_CHECK_EQUAL(s.length(), length);
    ...
}
Then I added a test method
BOOST_AUTO_TEST_CASE(String_construct){
    String s1;
    checkStringStandards(s1, 0, "");

    String s2("normal char");
    checkStringStandards(s2, 11, "normal char");
}
The problem is that when it fails, I only get the line and file information from within checkStringStandards! I can't tell from the output whether the first or the second call caused the failure.
What's the common fix for that?
Cheers!
The solution to this problem is to write a custom predicate that performs the checks and to use BOOST_REQUIRE(custom_predicate(args)) in the different test cases. A custom predicate can take any arguments you want and returns boost::test_tools::predicate_result, a type that is compatible with the assertion macros in Boost.Test and into which you can build up a detailed diagnostic message on failure.
To use your example:
using namespace boost::test_tools;

predicate_result checkStringStandards(String& s, size_t length, const char* text) {
    predicate_result result{true};
    if (s.length() != length) {
        result = false;
        result.message() << "\nString " << s
            << " differs in length; expected: "
            << length << ", actual: " << s.length();
    }
    ...
    return result;
}
BOOST_AUTO_TEST_CASE(String_construct){
    String s1;
    BOOST_REQUIRE(checkStringStandards(s1, 0, ""));

    String s2("normal char");
    BOOST_REQUIRE(checkStringStandards(s2, 11, "normal char"));
}
The curious \n at the beginning of the message is there so that when the diagnostic is printed, the "String ... differs in length" text is emitted on its own line. If the predicate fails, it bubbles its failure up to BOOST_REQUIRE, which triggers the test failure and reports it at the line invoking BOOST_REQUIRE instead of inside your custom predicate.
There is another yuckier alternative that also achieves the same result by making your custom assertions as gigantic megamacros, but I find that so horrid I'm not even going to show an example of how to do it :).
There is no common fix for that. These BOOST_CHECK_... macros exist by design to avoid function calls in which the line number gets lost (unless it is explicitly passed as a parameter).
You can get around this problem by looping over the parameter set inside your test case, as sketched below.
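A minimal sketch of that idea, reusing the hypothetical String class from the question (the case table and its fields are illustrative only):

BOOST_AUTO_TEST_CASE(String_construct_looped){
    // Each entry describes one constructor scenario.
    struct Case { const char* input; size_t expected_length; };
    const Case cases[] = {
        { "",            0  },
        { "normal char", 11 },
    };

    for (size_t i = 0; i < sizeof(cases) / sizeof(cases[0]); ++i) {
        String s(cases[i].input);
        // The streamed message identifies which parameter set failed,
        // since every failure is reported from this one line.
        BOOST_CHECK_MESSAGE(s.length() == cases[i].expected_length,
            "case #" << i << " (\"" << cases[i].input << "\"): length is "
            << s.length() << ", expected " << cases[i].expected_length);
    }
}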