Issue Parsing File with YAML-CPP - yaml-cpp

In the following code, I'm having some sort of issue getting my .yaml file parsed using parser.GetNextDocument(doc). After much gross debugging, I've found that the (main) issue is that my for loop is not running, because doc.size() == 0. What am I doing wrong?
void
BookView::load()
{
    aBook.clear();
    QString fileName =
            QFileDialog::getOpenFileName(this, tr("Load Address Book"),
                                         "", tr("Address Book (*.yaml);;All Files (*)"));
    if(fileName.isEmpty())
    {
        return;
    }
    else
    {
        try
        {
            std::ifstream fin(fileName.toStdString().c_str());
            YAML::Parser parser(fin);
            YAML::Node doc;
            std::map< std::string, std::string > entry;
            parser.GetNextDocument(doc);
            std::cout << doc.size();
            for( YAML::Iterator it = doc.begin(); it != doc.end(); it++ )
            {
                *it >> entry;
                aBook.push_back(entry);
            }
        }
        catch(YAML::ParserException &e)
        {
            std::cout << "YAML Exception caught: "
                      << e.what()
                      << std::endl;
        }
    }
    updateLayout( Navigating );
}
The .yaml file being read was generated using yaml-cpp, so I assume it is correctly formed YAML, but just in case, here's the file anyways.
^#^#^#\230---
-
address: ******************
comment: None.
email: andrew(dot)levenson(at)gmail(dot)com
name: Andrew Levenson
phone: **********^#
Edit: By request, the emitting code:
void
BookView::save()
{
    QString fileName =
            QFileDialog::getSaveFileName(this, tr("Save Address Book"), "",
                                         tr("Address Book (*.yaml);;All Files (*)"));
    if (fileName.isEmpty())
    {
        return;
    }
    else
    {
        QFile file(fileName);
        if(!file.open(QIODevice::WriteOnly))
        {
            QMessageBox::information(this, tr("Unable to open file"),
                                     file.errorString());
            return;
        }
        std::vector< std::map< std::string, std::string > >::iterator itr;
        std::map< std::string, std::string >::iterator mItr;
        YAML::Emitter yaml;
        yaml << YAML::BeginSeq;
        for( itr = aBook.begin(); itr < aBook.end(); itr++ )
        {
            yaml << YAML::BeginMap;
            for( mItr = (*itr).begin(); mItr != (*itr).end(); mItr++ )
            {
                yaml << YAML::Key << (*mItr).first << YAML::Value << (*mItr).second;
            }
            yaml << YAML::EndMap;
        }
        yaml << YAML::EndSeq;
        QDataStream out(&file);
        out.setVersion(QDataStream::Qt_4_5);
        out << yaml.c_str();
    }
}

Along the lines of what you suspected, the problem is that you're writing with QDataStream but reading with a plain std::ifstream. You need to use the same mechanism on both sides.
If you want to keep QDataStream, you'll need to read with it as well. Check the documentation for details, but it looks like you can just grab the YAML string:
QDataStream in(&file);
QString str;
in >> str;
and then pass it to yaml-cpp:
std::stringstream stream; // remember to include <sstream>
stream << str.toStdString(); // QString has no operator<< for std::ostream, so convert it to std::string first
YAML::Parser parser(stream);
// etc.
The point of a std::stringstream is to transform your string containing YAML into a stream that the YAML parser can read.
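Alternatively, if you would rather keep the std::ifstream on the read side, write the YAML as plain text instead of going through QDataStream. A minimal sketch of the tail end of save(), assuming the rest of the function stays as posted (QTextStream is just one way to write plain text to a QFile):
QTextStream out(&file);  // requires #include <QTextStream>
out << yaml.c_str();     // plain bytes, no length prefix or serialization header
With that, the existing load() that opens the file with std::ifstream should see ordinary YAML text.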

Related

Defining strict_real_policies for reals with a comma decimal character

I would like to create a custom policy derived from strict_real_policies that will parse reals, such as "3,14", i.e. with a comma decimal point as used e.g. in Germany.
That should be easy, right?
#include <iostream>
#include <string>
#include <boost/spirit/home/x3.hpp>

template <typename T>
struct decimal_comma_strict_real_policies : boost::spirit::x3::strict_real_policies<T>
{
    template <typename Iterator>
    static bool
    parse_dot(Iterator& first, Iterator const& last)
    {
        if (first == last || *first != ',')
            return false;
        ++first;
        return true;
    }
};

void parse(const std::string& input)
{
    namespace x3 = boost::spirit::x3;
    std::cout << "Parsing '" << input << "'" << std::endl;
    std::string::const_iterator iter = std::begin(input), end = std::end(input);
    const auto parser = x3::real_parser<double, decimal_comma_strict_real_policies<double>>{};
    double parsed_num;
    bool result = x3::parse(iter, end, parser, parsed_num);
    if(result && iter == end)
    {
        std::cout << "Parsed: " << parsed_num << std::endl;
    }
    else
    {
        std::cout << "Something failed." << std::endl;
    }
}

int main()
{
    parse("3,14");
    parse("3.14");
}

Anyone tried to use curl in veins application?

I am asking about Veins, the vehicular network simulator.
In my Veins application, I want to fetch some data available on a backend server through an API call.
I tried to use curl inside the initialize function, and I get this error:
/home/veins/workspace.omnetpp/simple_road/simple_road: symbol lookup error: /home/veins/src/veins/src/libveins.so: undefined symbol: _ZN5veins10MyVeinsApp7callAPIENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
Simulation terminated with exit code: 127 Working directory: /home/veins/workspace.omnetpp/simple_road Command line: simple_road -m -n .:../../src/veins/examples/veins:../../src/veins/src/veins --image-path=../../src/veins/images -l ../../src/veins/src/veins omnetpp.ini
Environment variables: PATH=/home/veins/src/omnetpp-5.6.2/bin::/usr/lib/ccache:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/home/veins/bin:/home/veins/src/omnetpp/bin:/home/veins/src/sumo/bin:/home/veins/src/veins/bin LD_LIBRARY_PATH=/home/veins/src/omnetpp-5.6.2/lib::/home/veins/src/veins/src: OMNETPP_IMAGE_PATH=/home/veins/src/omnetpp-5.6.2/images
Code snippets:
void MyVeinsApp::initialize(int stage)
{
    DemoBaseApplLayer::initialize(stage);
    if (stage == 0) {
        // Initializing members and pointers of your application goes here
        callAPI("posts/1");
    }
    else if (stage == 1) {
        // Initializing members that require initialized other modules goes here
    }
}

size_t writeCallBack(void* contents, size_t size, size_t nmemb, void *userp) {
    ((std::string*)userp)->append((char*)contents, size * nmemb);
    return size * nmemb;
}

void callAPI(string path) {
    CURL* curl;
    CURLcode res;
    string buffer;
    curl = curl_easy_init();
    if(curl) {
        const string url = SERVER_URL + path;
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeCallBack);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &buffer);
        res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        EV << "code: " << res << ", data: " << buffer << endl;
    }
}
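Not an answer to the curl part itself, but one detail stands out: the missing symbol demangles (e.g. with c++filt) to veins::MyVeinsApp::callAPI(std::string), i.e. a member function, while the snippet above defines callAPI as a free function. Assuming the header declares callAPI as a member of MyVeinsApp (an assumption, since the header is not shown), a sketch of a member-qualified definition would look roughly like this:
// Hypothetical sketch: define callAPI as the member the header (presumably) declares.
// SERVER_URL, writeCallBack and the surrounding class are assumed to be as in the question.
void MyVeinsApp::callAPI(std::string path)
{
    CURL* curl = curl_easy_init();
    if (curl) {
        const std::string url = SERVER_URL + path;
        std::string buffer;
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeCallBack);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &buffer);
        CURLcode res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        EV << "code: " << res << ", data: " << buffer << endl;
    }
}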

yaml-cpp always creates a scalar node with size 0

I'd like to use yaml-cpp for storing some config values. To get familiar with yaml-cpp, I've written a method which creates a node (_config is of type YAML::Node), puts some values into it, and writes it to a file:
void write_config()
{
    std::ofstream fout("/home/user/config.yaml");
    _config["Foo"]["0"] = "0";
    _config["Foo"]["1"] = "1";
    _config["Foo"]["2"] = "2";
    _config["Foo"]["3"] = "3";
    _config["Foo"]["4"] = "4";
    _config["Foo"]["5"] = "5";
    fout << _config;
}
After running this method, a valid YAML file is created:
Foo:
  1: 1
  3: 3
  0: 0
  5: 5
  4: 4
  2: 2
After that, I created a method to read the file and print some information:
void load_config()
{
    _config = YAML::Node("/home/user/config.yaml");
    cout << "_config: " << _config << endl;
    cout << "doc.Type(): " << _config.Type() << "\n";
    cout << "doc.size(): " << _config.size() << "\n";
    for (const auto& kv : _config)
    {
        std::cout << kv.first.as<std::string>() << "\n";  // prints Foo
        std::cout << kv.second.as<std::string>() << "\n"; // prints Foo
    }
}
but the output is:
_config: /home/user/config.yaml
doc.Type(): 2
doc.size(): 0
Could someone tell me why the node is empty (size == 0) and how I can read the file properly?
Thank you in advance!
I've found my mistake:
_config = YAML::Node("/home/user/config.yaml");
should be
_config = YAML::LoadFile("/home/user/config.yaml");
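For completeness, a minimal sketch of the corrected loader, assuming yaml-cpp 0.5 or newer: YAML::LoadFile opens and parses the file, whereas YAML::Node("...") merely wraps the path string as a scalar node, which is why Type() printed 2 (Scalar) and size() was 0.
#include <yaml-cpp/yaml.h>
#include <iostream>

void load_config()
{
    // LoadFile parses the file; the root is a map with the single key "Foo".
    YAML::Node config = YAML::LoadFile("/home/user/config.yaml");
    std::cout << "type: " << config.Type() << ", size: " << config.size() << "\n";

    // Iterate the "Foo" map: kv.first is the key, kv.second the value.
    for (const auto& kv : config["Foo"])
    {
        std::cout << kv.first.as<std::string>() << ": "
                  << kv.second.as<std::string>() << "\n";
    }
}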

What is the problem with generated SPIR-V code and how to verify it?

I have some generated SPIR-V code which I want to use with the Vulkan API, but I get an
Exception thrown at 0x00007FFB68D933CB (nvoglv64.dll) in vulkanCompute.exe: 0xC0000005: Access violation reading location 0x0000000000000008. when trying to create the pipeline with vkCreateComputePipelines.
The API calls should be fine, because the same code works with a shader compiled with glslangValidator. Therefore I assume the generated SPIR-V code must be ill-formed somehow.
I've checked the SPIR-V code with the validator tool from Khronos, using spirv-val --target-env vulkan1.1 mainV.spv, which exited without error. However, it is known that this tool is still incomplete.
I've also tried to compile my SPIR-V code with the Radeon GPU Analyzer, which is also available online at the shader playground, and this tool throws the error Error: Error: internal error: Bil::BilInstructionConvert::Create(60) Code Not Tested! which is not really helpful, but it supports the assumption that the code is malformed.
The SPIR-V code is unfortunately too long to post here, but it is in the shader playground link.
Does anyone know what the problem is with my setup, or have an idea how to verify my SPIR-V code in a better way, without checking all 700 lines manually?
I don't think the problem is there, but here is the C++ host code anyway:
#include "vulkan/vulkan.hpp"
#include <iostream>
#include <fstream>
#include <vector>
#define BAIL_ON_BAD_RESULT(result) \
if (VK_SUCCESS != (result)) \
{ \
fprintf(stderr, "Failure at %u %s\n", __LINE__, __FILE__); \
exit(-1); \
}
VkResult vkGetBestComputeQueueNPH(vk::PhysicalDevice &physicalDevice, uint32_t &queueFamilyIndex)
{
auto properties = physicalDevice.getQueueFamilyProperties();
int i = 0;
for (auto prop : properties)
{
vk::QueueFlags maskedFlags = (~(vk::QueueFlagBits::eTransfer | vk::QueueFlagBits::eSparseBinding) & prop.queueFlags);
if (!(vk::QueueFlagBits::eGraphics & maskedFlags) && (vk::QueueFlagBits::eCompute & maskedFlags))
{
queueFamilyIndex = i;
return VK_SUCCESS;
}
i++;
}
i = 0;
for (auto prop : properties)
{
vk::QueueFlags maskedFlags = (~(vk::QueueFlagBits::eTransfer | vk::QueueFlagBits::eSparseBinding) & prop.queueFlags);
if (vk::QueueFlagBits::eCompute & maskedFlags)
{
queueFamilyIndex = i;
return VK_SUCCESS;
}
i++;
}
return VK_ERROR_INITIALIZATION_FAILED;
}
int main(int argc, const char *const argv[])
{
(void)argc;
(void)argv;
try
{
// initialize the vk::ApplicationInfo structure
vk::ApplicationInfo applicationInfo("VecAdd", 1, "Vulkan.hpp", 1, VK_API_VERSION_1_1);
// initialize the vk::InstanceCreateInfo
std::vector<char *> layers = {
"VK_LAYER_LUNARG_api_dump",
"VK_LAYER_KHRONOS_validation"
};
vk::InstanceCreateInfo instanceCreateInfo({}, &applicationInfo, static_cast<uint32_t>(layers.size()), layers.data());
// create a UniqueInstance
vk::UniqueInstance instance = vk::createInstanceUnique(instanceCreateInfo);
auto physicalDevices = instance->enumeratePhysicalDevices();
for (auto &physicalDevice : physicalDevices)
{
auto props = physicalDevice.getProperties();
// get the QueueFamilyProperties of the first PhysicalDevice
std::vector<vk::QueueFamilyProperties> queueFamilyProperties = physicalDevice.getQueueFamilyProperties();
uint32_t computeQueueFamilyIndex = 0;
// get the best index into queueFamiliyProperties which supports compute and stuff
BAIL_ON_BAD_RESULT(vkGetBestComputeQueueNPH(physicalDevice, computeQueueFamilyIndex));
std::vector<char *>extensions = {"VK_EXT_external_memory_host", "VK_KHR_shader_float16_int8"};
// create a UniqueDevice
float queuePriority = 0.0f;
vk::DeviceQueueCreateInfo deviceQueueCreateInfo(vk::DeviceQueueCreateFlags(), static_cast<uint32_t>(computeQueueFamilyIndex), 1, &queuePriority);
vk::StructureChain<vk::DeviceCreateInfo, vk::PhysicalDeviceFeatures2, vk::PhysicalDeviceShaderFloat16Int8Features> createDeviceInfo = {
vk::DeviceCreateInfo(vk::DeviceCreateFlags(), 1, &deviceQueueCreateInfo, 0, nullptr, static_cast<uint32_t>(extensions.size()), extensions.data()),
vk::PhysicalDeviceFeatures2(),
vk::PhysicalDeviceShaderFloat16Int8Features()
};
createDeviceInfo.get<vk::PhysicalDeviceFeatures2>().features.setShaderInt64(true);
createDeviceInfo.get<vk::PhysicalDeviceShaderFloat16Int8Features>().setShaderInt8(true);
vk::UniqueDevice device = physicalDevice.createDeviceUnique(createDeviceInfo.get<vk::DeviceCreateInfo>());
auto memoryProperties2 = physicalDevice.getMemoryProperties2();
vk::PhysicalDeviceMemoryProperties const &memoryProperties = memoryProperties2.memoryProperties;
const int32_t bufferLength = 16384;
const uint32_t bufferSize = sizeof(int32_t) * bufferLength;
// we are going to need two buffers from this one memory
const vk::DeviceSize memorySize = bufferSize * 3;
// set memoryTypeIndex to an invalid entry in the properties.memoryTypes array
uint32_t memoryTypeIndex = VK_MAX_MEMORY_TYPES;
for (uint32_t k = 0; k < memoryProperties.memoryTypeCount; k++)
{
if ((vk::MemoryPropertyFlagBits::eHostVisible | vk::MemoryPropertyFlagBits::eHostCoherent) & memoryProperties.memoryTypes[k].propertyFlags &&
(memorySize < memoryProperties.memoryHeaps[memoryProperties.memoryTypes[k].heapIndex].size))
{
memoryTypeIndex = k;
std::cout << "found memory " << memoryTypeIndex + 1 << " out of " << memoryProperties.memoryTypeCount << std::endl;
break;
}
}
BAIL_ON_BAD_RESULT(memoryTypeIndex == VK_MAX_MEMORY_TYPES ? VK_ERROR_OUT_OF_HOST_MEMORY : VK_SUCCESS);
auto memory = device->allocateMemoryUnique(vk::MemoryAllocateInfo(memorySize, memoryTypeIndex));
auto in_buffer = device->createBufferUnique(vk::BufferCreateInfo(vk::BufferCreateFlags(), bufferSize, vk::BufferUsageFlagBits::eStorageBuffer, vk::SharingMode::eExclusive));
device->bindBufferMemory(in_buffer.get(), memory.get(), 0);
// create a DescriptorSetLayout
std::vector<vk::DescriptorSetLayoutBinding> descriptorSetLayoutBinding{
{0, vk::DescriptorType::eStorageBuffer, 1, vk::ShaderStageFlagBits::eCompute}};
vk::UniqueDescriptorSetLayout descriptorSetLayout = device->createDescriptorSetLayoutUnique(vk::DescriptorSetLayoutCreateInfo(vk::DescriptorSetLayoutCreateFlags(), static_cast<uint32_t>(descriptorSetLayoutBinding.size()), descriptorSetLayoutBinding.data()));
std::cout << "Memory bound" << std::endl;
std::ifstream myfile;
myfile.open("shaders/MainV.spv", std::ios::ate | std::ios::binary);
if (!myfile.is_open())
{
std::cout << "File not found" << std::endl;
return EXIT_FAILURE;
}
auto size = myfile.tellg();
std::vector<unsigned int> shader_spv(size / sizeof(unsigned int));
myfile.seekg(0);
myfile.read(reinterpret_cast<char *>(shader_spv.data()), size);
myfile.close();
std::cout << "Shader size: " << shader_spv.size() << std::endl;
auto shaderModule = device->createShaderModuleUnique(vk::ShaderModuleCreateInfo(vk::ShaderModuleCreateFlags(), shader_spv.size() * sizeof(unsigned int), shader_spv.data()));
// create a PipelineLayout using that DescriptorSetLayout
vk::UniquePipelineLayout pipelineLayout = device->createPipelineLayoutUnique(vk::PipelineLayoutCreateInfo(vk::PipelineLayoutCreateFlags(), 1, &descriptorSetLayout.get()));
vk::ComputePipelineCreateInfo computePipelineInfo(
vk::PipelineCreateFlags(),
vk::PipelineShaderStageCreateInfo(
vk::PipelineShaderStageCreateFlags(),
vk::ShaderStageFlagBits::eCompute,
shaderModule.get(),
"_ZTSZZ4mainENK3$_0clERN2cl4sycl7handlerEE6VecAdd"),
pipelineLayout.get());
auto pipeline = device->createComputePipelineUnique(nullptr, computePipelineInfo);
auto descriptorPoolSize = vk::DescriptorPoolSize(vk::DescriptorType::eStorageBuffer, 2);
auto descriptorPool = device->createDescriptorPool(vk::DescriptorPoolCreateInfo(vk::DescriptorPoolCreateFlags(), 1, 1, &descriptorPoolSize));
auto commandPool = device->createCommandPoolUnique(vk::CommandPoolCreateInfo(vk::CommandPoolCreateFlags(), computeQueueFamilyIndex));
auto commandBuffer = std::move(device->allocateCommandBuffersUnique(vk::CommandBufferAllocateInfo(commandPool.get(), vk::CommandBufferLevel::ePrimary, 1)).front());
commandBuffer->begin(vk::CommandBufferBeginInfo(vk::CommandBufferUsageFlags(vk::CommandBufferUsageFlagBits::eOneTimeSubmit)));
commandBuffer->bindPipeline(vk::PipelineBindPoint::eCompute, pipeline.get());
commandBuffer->dispatch(bufferSize / sizeof(int32_t), 1, 1);
commandBuffer->end();
auto queue = device->getQueue(computeQueueFamilyIndex, 0);
vk::SubmitInfo submitInfo(0, nullptr, nullptr, 1, &commandBuffer.get(), 0, nullptr);
queue.submit(1, &submitInfo, vk::Fence());
queue.waitIdle();
printf("all done\nWoohooo!!!\n\n");
}
}
catch (vk::SystemError &err)
{
std::cout << "vk::SystemError: " << err.what() << std::endl;
exit(-1);
}
catch (std::runtime_error &err)
{
std::cout << "std::runtime_error: " << err.what() << std::endl;
exit(-1);
}
catch (...)
{
std::cout << "unknown error\n";
exit(-1);
}
return EXIT_SUCCESS;
}
After checking the code line by line, it turned out that the problem occurs when working with pointers to pointers. To me it is still not clear from the specification that this is not allowed, but it is understandable that it does not work with logical pointers.
It is still strange that the validator does not catch this, and that compiling the SPIR-V code crashes instead of producing a clear error message.
So in the end, it was the shader code that was wrong.
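For future readers, regarding the "how to verify it" part of the question: SPIRV-Tools, the same package that provides spirv-val, also ships spirv-dis, which turns the binary into human-readable SPIR-V assembly; that makes it much easier to inspect suspicious instructions (such as nested pointer types) than reading the binary by hand. For example:
spirv-dis mainV.spv -o mainV.spvasm         # human-readable disassembly
spirv-val --target-env vulkan1.1 mainV.spv  # validation, as already done above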

SIGSEGV on second call to boost::asio::udp socket::async_recv on worker boost::thread

I get a SIGSEGV in the following class the second time I call start_receive(). It works correctly in my open() function, but it seems to fail when input is received and I try to restart listening for more input:
#0 0x0000555555584154 in boost::asio::basic_io_object<boost::asio::datagram_socket_service<boost::asio::ip::udp>, true>::get_service (this=0x100007f00000000)
at /usr/include/boost/asio/basic_io_object.hpp:225
#1 0x000055555558398b in boost::asio::basic_datagram_socket<boost::asio::ip::udp, boost::asio::datagram_socket_service<boost::asio::ip::udp> >::async_receive_from<boost::asio::mutable_buffers_1, boost::_bi::bind_t<int, boost::_mfi::mf2<int, Vast::net_udpNC_MChandler, boost::system::error_code const&, unsigned long>, boost::_bi::list3<boost::_bi::value<Vast::net_udpNC_MChandler*>, boost::arg<1> (*)(), boost::arg<2> (*)()> > > (this=0x100007f00000000, buffers=...,
sender_endpoint=..., handler=...)
at /usr/include/boost/asio/basic_datagram_socket.hpp:895
#2 0x000055555557a889 in Vast::net_udpNC_MChandler::start_receive (
this=0x7fffffff5c70) at net_udpnc_mchandler.cpp:58
#3 0x000055555557aa77 in Vast::net_udpNC_MChandler::handle_input (
this=0x7fffffff5c70, error=..., bytes_transferred=24)
at net_udpnc_mchandler.cpp:100
#4 0x000055555557abb3 in Vast::net_udpNC_MChandler::handle_buffer (
this=0x7fffffff5c70, buf=0x7fffffffdad0 "\035\300", bytes_transferred=24)
at net_udpnc_mchandler.cpp:114
#5 0x000055555556397f in test_process_encoded ()
at unittest_net_udpnc_mchandler.cpp:43
#6 0x000055555556400e in main () at unittest_net_udpnc_mchandler.cpp:101
Header:
class net_udpNC_MChandler
{
public:
    net_udpNC_MChandler(ip::udp::endpoint local_endpoint);
    //MChandler will run its own io_service
    int open (AbstractRLNCMsgReceiver *msghandler);
    int handle_buffer (char *buf, std::size_t bytes_transferred);

protected:
    //Start the receiving loop
    void start_receive ();
    // handling incoming message
    int handle_input (const boost::system::error_code& error,
                      std::size_t bytes_transferred);

private:
    ip::udp::socket *_udp;
    ip::udp::endpoint _remote_endpoint_;
    ip::udp::endpoint _local_endpoint;
    ip::udp::endpoint MC_address;
    char _buf[VAST_BUFSIZ];
    AbstractRLNCMsgReceiver *_msghandler = NULL;
    io_service *_io_service;
    boost::thread *_iosthread;
};
Source file:
net_udpNC_MChandler::net_udpNC_MChandler(ip::udp::endpoint local_endpoint) :
    MC_address(ip::address::from_string("239.255.0.1"), 1037)
{
    _io_service = new io_service();
    _local_endpoint = local_endpoint;
}

int net_udpNC_MChandler::open(AbstractRLNCMsgReceiver *msghandler) {
    _msghandler = msghandler;
    if (_udp == NULL) {
        _udp = new ip::udp::socket(*_io_service);
        _udp->open(ip::udp::v4());
        _udp->set_option(ip::udp::socket::reuse_address(true));
        _udp->set_option(ip::multicast::join_group(MC_address.address ()));
        boost::system::error_code ec;
        _udp->bind(MC_address, ec);
        std::cout << "net_udpnc_mchandler::open " + ec.message() << std::endl;
        if (ec)
        {
            std::cout << "net_udpnc_mchandler:: open MC address failed" << ec.message() << std::endl;
        }
        //Add async receive to io_service queue
        start_receive();
        std::cout << "net_udpnc_mchandler::open _udp->_local_endpoint: " << _udp->local_endpoint() << " _local_endpoint" << _local_endpoint << std::endl;
        //Start the thread handling async receives
        _iosthread = new boost::thread(boost::bind(&boost::asio::io_service::run, _io_service));
    }
    return 0;
}

void net_udpNC_MChandler::start_receive()
{
    _udp->async_receive_from(
        boost::asio::buffer(_buf, VAST_BUFSIZ), _remote_endpoint_,
        boost::bind(&net_udpNC_MChandler::handle_input, this,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
}

// handling incoming message
int net_udpNC_MChandler::handle_input (const boost::system::error_code& error,
                                       std::size_t bytes_transferred)
{
    RLNCHeader header;
    if (!error)
    {
        //Store UDP messages
        char *p = _buf;
        memcpy(&header, p, sizeof(RLNCHeader));
        if (RLNCHeader_factory::isRLNCHeader (header) && header.enc_packet_count > 1)
        {
            CPPDEBUG("net_udpnc_mchandler::handle_input: Encoded packet received" << std::endl);
            process_encoded (bytes_transferred);
        }
        //Restart waiting for new packets
        start_receive();
    }
    else {
        CPPDEBUG("Error on UDP socket receive: " << error.message() << std::endl;);
    }
    return -1;
}
The strangest thing is that everything works if I use a default constructor without arguments (i.e. no local_endpoint); the SIGSEGV does not appear. But as soon as I change to the current constructor, I get the SIGSEGV.
The _io_service is a class member and it is not destroyed anywhere except in the destructor, so I do not see how I can get a SIGSEGV for it...
Is there some requirement that the handler class must have a no-argument constructor?
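One thing worth ruling out (an observation from the posted code, not a confirmed diagnosis): in the endpoint-taking constructor shown above, _udp is never initialized, so the if (_udp == NULL) check in open() reads an indeterminate pointer; only _msghandler has an in-class initializer. A sketch of the private section with every pointer member defaulted, assuming the rest of the class stays as posted:
#include <boost/asio.hpp>
#include <boost/thread.hpp>

class AbstractRLNCMsgReceiver;  // forward declaration, as in the question

class net_udpNC_MChandler
{
    // ... public/protected interface as in the question ...
private:
    boost::asio::ip::udp::socket *_udp        = nullptr;  // was left uninitialized
    AbstractRLNCMsgReceiver      *_msghandler = nullptr;  // already defaulted in the original
    boost::asio::io_service      *_io_service = nullptr;
    boost::thread                *_iosthread  = nullptr;
};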