Complex check-methods with Boost.Test

I want to test the different constructors of a string class, so I wrote myself a test method that checks a couple of standard things:
void checkStringStandards(String& s, size_t length, const char* text) {
    BOOST_CHECK_EQUAL(s.length(), length);
    ...
}
Then I added a test case:
BOOST_AUTO_TEST_CASE(String_construct) {
    String s1;
    checkStringStandards(s1, 0, "");
    String s2("normal char");
    checkStringStandards(s2, 11, "normal char");
}
The problem is that when a check fails, I only get the line and file information from within checkStringStandards! I can't tell from the output whether the first or the second call caused the failure.
What's the common fix for that?
Cheers!

The solution is to write a custom predicate that performs the checks and to use BOOST_REQUIRE(custom_predicate(args)) in the different test cases. A custom predicate can take any arguments you want and returns boost::test_tools::predicate_result, a type compatible with the assertion macros in Boost.Test, into which you can build a detailed diagnostic message on failure.
To use your example:
using namespace boost::test_tools;

predicate_result checkStringStandards(String& s, size_t length, const char* text) {
    predicate_result result{true};
    if (s.length() != length) {
        result = false;
        result.message() << "\nString " << s
                         << " differs in length; expected: "
                         << length << ", actual: " << s.length();
    }
    ...
    return result;
}
BOOST_AUTO_TEST_CASE(String_construct) {
    String s1;
    BOOST_REQUIRE(checkStringStandards(s1, 0, ""));

    String s2("normal char");
    BOOST_REQUIRE(checkStringStandards(s2, 11, "normal char"));
}
The curious \n at the beginning of the message is there so that when the diagnostic is printed, the "String ... differs in length" text is emitted on its own line. If the predicate fails, the failure bubbles up to BOOST_REQUIRE, which triggers the test failure and reports it at the line invoking BOOST_REQUIRE instead of inside your custom predicate.
There is another, yuckier alternative that achieves the same result by writing your custom assertions as gigantic megamacros, but I find that so horrid I'm not even going to show an example of how to do it :).

There is no common fix for that. The BOOST_CHECK_... macros are macros by design, to avoid function calls in which the line number gets lost (unless it is explicitly passed as a parameter).
You can get around the problem by looping through the parameter set inside your test case, as sketched below.
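A minimal sketch of that looping approach (hypothetical, reusing the String class from the question): because the checks stay inside the test case body, Boost.Test reports failures at these lines, and the message identifies the offending input.

#include <boost/test/unit_test.hpp>

BOOST_AUTO_TEST_CASE(String_construct_looped) {
    // Each entry: constructor argument and the expected length.
    struct Case { const char* input; size_t length; };
    const Case cases[] = {
        {"", 0},
        {"normal char", 11},
    };
    for (const Case& c : cases) {
        String s(c.input);
        // The check lives in the test case itself, so the reported
        // location is this line; the message names the data set.
        BOOST_CHECK_MESSAGE(s.length() == c.length,
            "length mismatch for input \"" << c.input << "\": expected "
            << c.length << ", got " << s.length());
    }
}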


Does C++/WinRT provide mapping from enum symbol to string name?

I'm using C++/WinRT. The projection includes many enums. I find myself building my own table of enum values to string literals. This is not a big deal for enums with only a few defined values, but it's a pain when there are a lot of them.
What I really want is some form of compile-time or run-time reflection that converts an enum value into the string representation of its compile-time name. The code snippet below demonstrates what I mean. How can this be automated?
std::wostream& operator<<(
    std::wostream& wout,
    winrt::Windows::Graphics::DirectX::DirectXPixelFormat e)
{
    // https://learn.microsoft.com/en-us/uwp/api/windows.graphics.directx.directxpixelformat
    using winrt::Windows::Graphics::DirectX::DirectXPixelFormat;
    switch (e) {
    case DirectXPixelFormat::R8G8B8A8Int:
        wout << L"R8G8B8A8Int";
        break;
    case DirectXPixelFormat::B8G8R8A8UIntNormalized:
        wout << L"B8G8R8A8UIntNormalized";
        break;
    default:
        // TODO: Many enum cases are missing.
        // Find a way to compile-time-generate the strings from enum values.
        wout << L"Unknown (" << std::to_wstring(static_cast<int32_t>(e)) << L")";
    }
    return wout;
}
I could build something that parses the winrt/*.h files to generate a header containing arrays of string literals, then #include the generated header. Sample code probably exists for doing this kind of thing outside of C++/WinRT. But maybe C++/WinRT ships metadata in the SDK which, combined with one of the C++/WinRT command-line tools, could easily do this for me? If it's there, I have not found it.
I did find the ApiInformation interface from winrt/Windows.Foundation.Metadata.h, as well as the explanation of "Version Adaptive Code". I had hoped that the OS COM interface behind ApiInformation had a way to query the name for an enum value, but I was unable to find an answer there.
https://learn.microsoft.com/en-us/uwp/api/Windows.Foundation.Metadata.ApiInformation
How about this:
https://learn.microsoft.com/en-us/windows/uwp/cpp-and-winrt-apis/move-to-winrt-from-cx#tostring
namespace winrt
{
    hstring to_hstring(StatusEnum status)
    {
        switch (status)
        {
        case StatusEnum::Success: return L"Success";
        case StatusEnum::AccessDenied: return L"AccessDenied";
        case StatusEnum::DisabledByPolicy: return L"DisabledByPolicy";
        default: return to_hstring(static_cast<int>(status));
        }
    }
}
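With an overload like that in place, the operator<< from the question can simply forward to it. A minimal sketch, assuming the to_hstring overload above is visible at the call site:

std::wostream& operator<<(std::wostream& wout, StatusEnum status)
{
    // Reuse the switch inside to_hstring rather than duplicating the
    // case list; winrt::hstring converts implicitly to std::wstring_view.
    return wout << std::wstring_view{winrt::to_hstring(status)};
}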

Is it safe to call sc_fifo::nb_write() from a SC_THREAD process?

I am converting some of my code from an SC_THREAD to an SC_METHOD. My question is: do I need to stop using the sc_fifo class? I realize an SC_METHOD should not call sc_fifo::write(), because it can call wait(), which is not allowed in processes that cannot be suspended. However, sc_fifo provides non-blocking versions of various functions, and potentially I could use these instead. Some of the documentation I've read indicates you should never use sc_fifo from an SC_METHOD at all, but provides no justification.
Here is a sample of code I am currently using.
#include <systemc.h>

class Example : public sc_module {
public:
    sc_fifo<int> myFifo;
    sc_in<bool> clock_in;

    SC_HAS_PROCESS(Example);

    // constructor
    Example(sc_module_name name) : sc_module(name) {
        SC_METHOD(read);
        sensitive << clock_in;
    }

    void read() {
        int value = -1;
        bool success = myFifo.nb_read(value);
        if (success) { cout << "Read value " << value << endl; }
        else { cout << "No read done but that's okay." << endl; }
    }
};

int sc_main(int argc, char* argv[]) {
    sc_clock clock("clock");
    Example example("example");
    example.clock_in(clock);
    sc_start(10, SC_NS);
    return 0;
}
This throws no errors, even though I am calling an sc_fifo function from an SC_METHOD. Is it bad policy to use nb_read() from inside an SC_METHOD? If so, why?
Using sc_fifo non-blocking calls from an SC_METHOD should be fine; I have not found any place in the standard manual that prohibits it. Neither nb_read nor nb_write, as their names suggest, calls wait() internally, so it is fine to use them from an SC_METHOD.
While your example code works, it's rather inefficient when things are put into the fifo infrequently. If you want your code to be more event-driven, you can make the SC_METHOD sensitive to the fifo's data_written_event(); then it is only invoked when something is actually written to the fifo (though it's still a good idea to check that nb_read() returns true, in case something else pulled from the same fifo). Of course, this skips your "No read done but that's okay." prints. A sketch follows.
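A minimal sketch of that event-driven variant, reusing the Example module from the question:

// Constructor variant: trigger read() when data arrives in the fifo,
// instead of polling on every clock event.
Example(sc_module_name name) : sc_module(name) {
    SC_METHOD(read);
    sensitive << myFifo.data_written_event();
    dont_initialize(); // skip the initial invocation at simulation start
}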
Also, I think the title of your question probably meant to ask about calling nb_write() from an SC_METHOD rather than an SC_THREAD.

How to know that QSqlQuery::bindValue call was correct

Is there any way to know whether a QSqlQuery::bindValue call succeeded?
The function returns void, and I wonder why. At the very least, the absence of the required placeholder should (IMHO) be reported as an error.
Maybe there is another way to check that the binding is correct?
Use boundValue() and pass the placeholder name or the position; it returns the value that has been bound, so you can check whether the values were bound correctly. Alternatively, use boundValues(), which returns a map of all the keys and values.
Sample from the Qt docs for boundValues():
QMapIterator<QString, QVariant> i(query.boundValues());
while (i.hasNext()) {
    i.next();
    cout << i.key().toUtf8().data() << ": "
         << i.value().toString().toUtf8().data() << endl;
}
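For a single placeholder, a minimal sketch of the check (the query and the :name placeholder are hypothetical):

QSqlQuery query;
query.prepare("SELECT * FROM person WHERE name = :name");
query.bindValue(":name", "Alice");

// boundValue() returns an invalid QVariant when nothing is bound under
// that placeholder, e.g. because of a typo in its name.
if (!query.boundValue(":name").isValid())
    qWarning("binding :name failed");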

issues about using async_write right after async_read_until

My code is as follows:
boost::asio::streambuf b1;

boost::asio::async_read_until(upstream_socket_, b1, '#',
    boost::bind(&bridge::handle_upstream1_read, shared_from_this(),
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred));

void handle_upstream1_read(const boost::system::error_code& error,
                           const size_t& bytes_transferred)
{
    if (!error)
    {
        async_write(downstream_socket_,
            b2,
            boost::bind(&bridge::handle_downstream_write,
                shared_from_this(),
                boost::asio::placeholders::error));
    }
    else
        close();
}
According to the documentation of async_read_until, http://www.boost.org/doc/libs/1_55_0/doc/html/boost_asio/reference/async_read_until/overload1.html,
After a successful async_read_until operation, the streambuf may contain additional data beyond the delimiter. An application will typically leave that data in the streambuf for a subsequent async_read_until operation to examine.
I know that the streambuf may contain additional data beyond the delimiter, but in my case, will async_write write that additional data (the data beyond the char '#') to downstream_socket_? Or is async_write smart enough not to write the additional data until the next time handle_upstream1_read is called?
According to the approaches in the documentation, the data in the streambuf is first wrapped in an istream (std::istream response_stream(&streambuf);) and then extracted into a string with std::getline().
Do I really need to push the streambuf through an istream, convert it to a string, and then convert that back to a char array (so that I can send the char array to downstream_socket_), instead of just using async_write to write the data (up to but not including the delimiter '#') to downstream_socket_?
I prefer the second approach, so that I don't need to perform several conversions on the data. However, something seems to go wrong when I try it.
My ideal case is:
upstream_socket_ receives xxxx#yyyy via async_read_until
xxxx# is written to downstream_socket_
upstream_socket_ receives zzzz#kkkk via async_read_until
yyyyzzzz# is written to downstream_socket_
It seems that the async_write operation still writes the data beyond the delimiter to downstream_socket_ (but I am not 100% sure about this).
I'd appreciate it if anyone could provide a little help!
The async_write() overload being used is considered complete when all of the streambuf's data, its input sequence, has been written to the WriteStream (socket). It is equivalent to calling:
boost::asio::async_write(stream, streambuf,
    boost::asio::transfer_all(), handler);
One can limit the amount of bytes written and consumed from the streambuf object by calling this async_write() overload with the boost::asio::transfer_exactly completion condition:
boost::asio::async_write(stream, streambuf,
    boost::asio::transfer_exactly(n), handler);
Alternatively, one can write directly from the streambuf's input sequence. However, one will need to explicitly consume from the streambuf.
boost::asio::async_write(stream,
    boost::asio::buffer(streambuf.data(), n), handler);

// Within the completion handler...
streambuf.consume(n);
Note that when the async_read_until() operation completes, the completion handler's bytes_transferred argument contains the number of bytes in the streambuf's input sequence up to and including the delimiter, or 0 if an error occurred.
Here is a complete example demonstrating both approaches. The example uses synchronous operations in an attempt to simplify the flow:
#include <cassert>
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>

// This example is not interested in the handlers, so provide a noop function
// that will be passed to bind to meet the handler concept requirements.
void noop() {}

/// @brief Helper function that extracts a string from a streambuf.
std::string make_string(
    boost::asio::streambuf& streambuf,
    std::size_t n)
{
    return std::string(
        boost::asio::buffers_begin(streambuf.data()),
        boost::asio::buffers_begin(streambuf.data()) + n);
}

int main()
{
    using boost::asio::ip::tcp;
    boost::asio::io_service io_service;

    // Create all I/O objects.
    tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 0));
    tcp::socket server_socket(io_service);
    tcp::socket client_socket(io_service);

    // Connect client and server sockets.
    acceptor.async_accept(server_socket, boost::bind(&noop));
    client_socket.async_connect(acceptor.local_endpoint(), boost::bind(&noop));
    io_service.run();

    // Mockup write_buffer as if it read "xxxx#yyyy" with read_until()
    // using '#' as a delimiter.
    boost::asio::streambuf write_buffer;
    std::ostream output(&write_buffer);
    output << "xxxx#yyyy";
    assert(write_buffer.size() == 9);
    auto bytes_transferred = 5;

    // Write to server.
    boost::asio::write(server_socket, write_buffer,
        boost::asio::transfer_exactly(bytes_transferred));
    // Verify the write operation consumed part of the input sequence.
    assert(write_buffer.size() == 4);

    // Read from client.
    boost::asio::streambuf read_buffer;
    bytes_transferred = boost::asio::read(
        client_socket, read_buffer.prepare(bytes_transferred));
    read_buffer.commit(bytes_transferred);

    // Copy from the read buffer's input sequence.
    std::cout << "Read: " <<
        make_string(read_buffer, bytes_transferred) << std::endl;
    read_buffer.consume(bytes_transferred);

    // Mockup write_buffer as if it read "zzzz#kkkk" with read_until()
    // using '#' as a delimiter.
    output << "zzzz#kkkk";
    assert(write_buffer.size() == 13);
    bytes_transferred = 9; // yyyyzzzz#

    // Write to server.
    boost::asio::write(server_socket,
        boost::asio::buffer(write_buffer.data(), bytes_transferred));
    // Verify the write operation did not consume the input sequence.
    assert(write_buffer.size() == 13);
    write_buffer.consume(bytes_transferred);

    // Read from client.
    bytes_transferred = boost::asio::read(
        client_socket, read_buffer.prepare(bytes_transferred));
    read_buffer.commit(bytes_transferred);

    // Copy from the read buffer's input sequence.
    std::cout << "Read: " <<
        make_string(read_buffer, bytes_transferred) << std::endl;
    read_buffer.consume(bytes_transferred);
}
Output:
Read: xxxx#
Read: yyyyzzzz#
A few other notes:
The streambuf owns the memory; std::istream and std::ostream merely operate on it. Using streams may be a good idea when one needs to extract formatted input or insert formatted output, for instance when one wishes to read the string "123" as the integer 123; see the sketch below.
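A small sketch of that formatted extraction (hypothetical; the streambuf's input sequence is assumed to hold the characters "123"):

boost::asio::streambuf streambuf;
std::ostream output(&streambuf);
output << "123"; // the input sequence now holds "123"

std::istream input(&streambuf);
int value = 0;
input >> value; // value == 123; the extraction consumes from the streambuf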
One can directly access the streambuf's input sequence and iterate over it. In the example above, I use boost::asio::buffers_begin() to help construct a std::string by iterating over a streambuf's input sequence.
std::string(
    boost::asio::buffers_begin(streambuf.data()),
    boost::asio::buffers_begin(streambuf.data()) + n);
A stream-based transport protocol is being used, so handle incoming data as a stream. Be aware that even if the intermediary server reframes messages and sends "xxxx#" in one write operation and "yyyyzzzz#" in a subsequent write operation, the downstream may read "xxxx#yyyy" in a single read operation.
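Applied back to the question's asynchronous relay, a minimal sketch (assuming the bridge class, sockets, and b1 streambuf from the question; bytes_transferred is the count reported by async_read_until, so it covers the data up to and including '#'):

void handle_upstream1_read(const boost::system::error_code& error,
                           std::size_t bytes_transferred)
{
    if (!error)
    {
        // Write only the bytes up to and including the delimiter. This
        // overload also consumes exactly that many bytes from b1, leaving
        // any data beyond '#' for the next async_read_until.
        boost::asio::async_write(downstream_socket_, b1,
            boost::asio::transfer_exactly(bytes_transferred),
            boost::bind(&bridge::handle_downstream_write,
                shared_from_this(),
                boost::asio::placeholders::error));
    }
    else
        close();
}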

Reading dynamically growing file using NSInputStream

I need to use Objective-C to read a slowly growing file (under Mac OS X).
"Slowly" means that I reach EOF before the file grows bigger.
In terms of POSIX code in plain synchronous C, I can do it as follows:
while (1)
{
    res = select(fd+1, &fdset, NULL, &fdset, some_timeout);
    if (res > 0)
    {
        len = read(fd, buf, sizeof(buf));
        if (len > 0)
        {
            printf("Could read %u bytes. Continue.\n", len);
        }
        else
        {
            sleep(some_timeout_in_sec);
        }
    }
}
Now I want to rewrite this in an asynchronous manner, using NSInputStream or some other async Objective-C technique.
The problem with NSInputStream: if I use the scheduleInRunLoop:forMode: method, then once I get an NSStreamEventEndEncountered event, I stop receiving any events.
Can I still use NSInputStream, or should I switch to using NSFileHandle somehow, or what would you recommend?
I see a few problems with the POSIX code:
1) For select(), some_timeout needs to be a struct timeval *.
2) For sleep(), some_timeout needs to be an integer number of seconds.
3) select() may decrement the value in some_timeout (which is why the last parameter is a pointer to a struct timeval), so the struct needs to be re-initialized before each call to select().
4) The parameters to select() are the highest fd of interest + 1, followed by three separate struct fd_set * arguments: the first for readable fds, the second for writable fds, and the third for exceptions. The posted code uses the same struct fd_set for both the inputs and the exceptions, which is probably not what is needed.
When the above problems are corrected, the code should work; a sketch follows.
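A minimal corrected sketch of that polling loop under the four fixes above (hypothetical: fd is the descriptor of the growing file, and a one-second timeout stands in for some_timeout):

#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

void poll_growing_file(int fd)
{
    char buf[4096];
    while (1)
    {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);

        // select() may modify the timeout, so re-initialize it each pass,
        // and pass the read set only once (exceptions are not of interest).
        struct timeval timeout = { 1, 0 }; // 1 second
        int res = select(fd + 1, &readfds, NULL, NULL, &timeout);
        if (res > 0)
        {
            ssize_t len = read(fd, buf, sizeof(buf));
            if (len > 0)
                printf("Could read %zd bytes. Continue.\n", len);
            else
                sleep(1); // at EOF; wait for the file to grow
        }
    }
}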