I have built a rapidjson document with all my objects and values using the usual AddMember() method. Now I want to get the string out of that document for publishing to an MQTT broker. But inside that string, some members shall have 2 decimal places, some only one, and others all decimals.
I can't find how to set the number of decimal places for a specific member after the document has been fully built.
I did manage to do it by building my JSON document with a writer, but this is not what I want to do because such a document can't easily be modified:
#include <string>
#include <iostream>
#include <sstream>
#include <rapidjson/document.h>     // rapidjson's DOM-style API
#include <rapidjson/prettywriter.h> // for stringifying JSON
#include <rapidjson/stringbuffer.h>

using namespace rapidjson;
using namespace std;

int main(int argc, char* argv[])
{
    Document doc;
    StringBuffer buffer;
    Writer<StringBuffer> writer(buffer);

    writer.StartObject();
    writer.Key("member1");
    writer.SetMaxDecimalPlaces(2);
    writer.Double(1.0000001);
    writer.Key("member2");
    writer.SetMaxDecimalPlaces(3);
    writer.Double(3.123456);
    writer.Key("member3");
    writer.SetMaxDecimalPlaces(8);
    writer.Double(2.123456);
    writer.EndObject();

    cout << buffer.GetString() << endl;
    return 0;
}
./decimal
{"member1":1.0,"member2":3.123,"member3":2.123456}
Now, this is how I build my document:
#include <string>
#include <iostream>
#include <sstream>
#include <rapidjson/document.h>     // rapidjson's DOM-style API
#include <rapidjson/prettywriter.h> // for stringifying JSON
#include <rapidjson/stringbuffer.h>

using namespace rapidjson;
using namespace std;

int main(int argc, char* argv[])
{
    Document doc;
    Document::AllocatorType& allocator = doc.GetAllocator();
    StringBuffer buffer;
    Writer<StringBuffer> writer(buffer);

    doc.SetObject();
    doc.AddMember("member1", 1.0000001, allocator);
    doc.AddMember("member2", 2.123456, allocator);
    doc.AddMember("member3", 3.123456, allocator);

    writer.SetMaxDecimalPlaces(2);
    doc.Accept(writer);

    cout << buffer.GetString() << endl;
    return 0;
}
./decimal
{"member1":1.0,"member2":2.12,"member3":3.12}
The SetMaxDecimalPlaces() setting applies to the whole document this way.
I would like to get the same output as in the first code example, but using the document built in the second one. How can I tell the writer to format each member differently?
I'm super late to the party, but you can create a second writer with different writing settings:
StringBuffer buffer;
Writer<StringBuffer> writer1(buffer); // original writer
Writer<StringBuffer> writer2(buffer); // a new second writer
writer1.SetMaxDecimalPlaces(1);
writer2.SetMaxDecimalPlaces(2);
and then use the specific writers to write directly into the buffer instead of using the doc to call the writer:
writer1.Key("member1");
writer1.Double(1.0);
writer2.Key("member2");
writer2.Double(2.12);
writer2.Key("member3");
writer2.Double(3.12);
Full example:
#include <iostream>
#include <rapidjson/writer.h>
#include <rapidjson/stringbuffer.h>

using namespace rapidjson;
using namespace std;

int main(int argc, char* argv[])
{
    StringBuffer buffer;
    Writer<StringBuffer> writer1(buffer);
    Writer<StringBuffer> writer2(buffer);

    writer1.SetMaxDecimalPlaces(2);
    writer2.SetMaxDecimalPlaces(2);

    writer1.StartObject();
    writer1.Key("member1");
    writer1.Double(1.0);
    writer2.Key("member2");
    writer2.Double(2.12);
    writer2.Key("member3");
    writer2.Double(3.12);
    writer1.EndObject();

    cout << buffer.GetString() << endl;
    return 0;
}
Running this very small snippet, to show a problem I have with much larger code:
#include <iostream>
#include <vector>
#include <memory>

using namespace std;

int main() {
    auto res = make_unique<int>();
    auto ptr = res.get();
    if (ptr) {
        *ptr = 5;
        cout << *ptr << endl;
    }
    return 0;
}
With the -fanalyzer switch, I get a warning:
warning: dereference of possibly-NULL 'operator new(4)' [CWE-690] [-Wanalyzer-possible-null-dereference]
But I clearly did all I could to avoid this warning; the problem is buried in the STL, which returns a unique_ptr with no validity check.
I understand the word "possibly", though.
Is there any way to correct this on my side?
Update:
I made a mistake in the first version, now corrected.
Update 2:
Even this code is rejected:
#include <iostream>
#include <memory>
#include <vector>

using namespace std;

int main() {
    auto i = new int(3);
    if (!i) {
        return 1;
    }
    unique_ptr<int> res(i);
    auto ptr = res.get();
    if (!ptr) {
        return 1;
    }
    *ptr = 5;
    cout << *ptr << endl;
    return 0;
}
Please see the article below.
As of now (gcc-12), the analyzer is not recommended for C++ code, although work is underway to support it.
https://developers.redhat.com/articles/2022/04/12/state-static-analysis-gcc-12-compiler#toward_support_for_c__
I want to copy a mesh with the function copy_face_graph(source, target). But the target mesh is different (it has the same number of vertices and faces, but the coordinates and the order are totally different).
The code:
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Exact_predicates_exact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/boost/graph/copy_face_graph.h>
#include <iostream>
#include <fstream>

typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef CGAL::Surface_mesh<Kernel::Point_3> Mesh;

int main(int argc, char* argv[]) {
    const char* filename1 = (argc > 1) ? argv[1] : "data/blobby.off";
    std::ifstream input(filename1);
    Mesh mesh_orig;
    if (!input || !(input >> mesh_orig))
    {
        std::cerr << "First mesh is not a valid off file." << std::endl;
        return 1;
    }
    input.close();
    std::cout << ".off loaded" << std::endl;

    // ========================================================
    Mesh mesh_copy;
    CGAL::copy_face_graph(mesh_orig, mesh_copy);
    // ========================================================

    std::ofstream mesh_cpy("CPY_ANYLYZE/mesh_copy.off");
    mesh_cpy << mesh_copy;
    mesh_cpy.close();
    return 0;
}
Does anyone know how to get an identical mesh from the original mesh? Do I need to add named parameters, or maybe use another function?
Thanks a lot.
Unless you intend to write code that works with different data structures, you can use the copy constructor of the Surface_mesh class: Mesh mesh_copy(mesh_orig). copy_face_graph() does not do a raw copy because it also works when the input and output are of different types. However, the output should be the same up to the order of the simplices.
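For example, a minimal sketch of the copy-constructor route (the reading code mirrors the question; the output filename is just a placeholder):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <fstream>
#include <iostream>

typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef CGAL::Surface_mesh<Kernel::Point_3> Mesh;

int main(int argc, char* argv[]) {
    const char* filename = (argc > 1) ? argv[1] : "data/blobby.off";
    std::ifstream input(filename);
    Mesh mesh_orig;
    if (!input || !(input >> mesh_orig)) {
        std::cerr << "Not a valid off file." << std::endl;
        return 1;
    }

    // The copy constructor copies the underlying containers as-is,
    // so the order and indices of vertices and faces are preserved.
    Mesh mesh_copy(mesh_orig);

    std::ofstream out("mesh_copy.off"); // placeholder output path
    out << mesh_copy;
    return 0;
}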
I'm using log4cplus as a logger for both CLR and non-CLR C++/CLI code and C# code, so I'm using the Unicode x64 build of log4cplus (log4cplusU.lib/dll).
If I run the following code in a non-CLR C++/CLI x64 console application, I get a memory access exception.
int _tmain(int argc, _TCHAR* argv[])
{
    std::string LogFileName = "log4cplus.log";
    auto db = log4cplus::helpers::towstring(LogFileName); // access violation here
Exception:
Unhandled exception at 0x00007FF8E4A1CDA1 (msvcr120.dll) in ConsoleApplication1.exe: 0xC0000005: Access violation reading location 0xFFFFFFFFFFFFFFFF.
What's up?
I'm using Visual Studio 2013. My call stack at the exception looks like:
> log4cplusU.dll!std::vector<wchar_t,std::allocator<wchar_t> >::vector<wchar_t,std::allocator<wchar_t> >(unsigned __int64 _Count) Line 691 C++
log4cplusU.dll!log4cplus::helpers::towstring_internal(std::basic_string<wchar_t,std::char_traits<wchar_t>,std::allocator<wchar_t> > & outstr, const char * src, unsigned __int64 size, const std::locale & loc) Line 70 C++
log4cplusU.dll!log4cplus::helpers::towstring(const std::basic_string<char,std::char_traits<char>,std::allocator<char> > & src) Line 124 C++
ConsoleApplication1.exe!wmain(int argc, wchar_t * * argv) Line 24 C++
ConsoleApplication1.exe!__tmainCRTStartup() Line 623 C
At the point where the exception fires in std::vector(size_type), _Count is a crazy number.
_Count 14757396612626683276 unsigned __int64
The reason appears to be that the string parameter gets scrambled or misinterpreted.
The same problem manifests itself on non-Unicode DEBUG MODE builds of log4cplus in unmanaged code on VS but not in release mode builds.
For example:
#include "stdafx.h"
#include <string>
#include <iostream>
#include <log4cplus/loggingmacros.h>
#include <log4cplus/configurator.h>

int _tmain(int argc, _TCHAR* argv[])
{
    std::string LogConfigFileName = "WhenLoggingCppManagedCode.properties";
    try
    {
        log4cplus::tstring cfn = LogConfigFileName;
        log4cplus::PropertyConfigurator::doConfigure(cfn);
        std::cout << "Good Deadpool." << std::endl;
    }
    catch (...)
    {
        std::cout << "BAD Deadpool." << std::endl;
    }
    std::cin.get();
    return 0;
}
Consider the following SystemC code:
#include <iostream>
#include "systemc.h"

using namespace std;

int sc_main(int argc, char* argv[])
{
    sc_bv<3> foo;
    foo = "0d6";                   // decimal 6, i.e. the bit pattern 0b110
    cout << foo.to_long() << endl; // prints -2
    return EXIT_SUCCESS;
}
This prints -2 rather than the 6 I would have expected. The apparent reason is that to_long() interprets the bit vector 0b110 as signed. However, IEEE Std 1666-2011 says in Section 7.2.9, referring to integer conversion functions such as to_long():
These member functions shall interpret the bits within a SystemC integer,
fixed-point type or vector, or any part-select or concatenation thereof,
as representing an unsigned binary value, with the exception of signed
integers and signed fixed-point types.
Do I misunderstand something or is the SystemC implementation from Accellera not adhering to the standard in this aspect?
I think you are correct; there does seem to be a discrepancy between the SystemC LRM (IEEE Std 1666-2011) and the implementation.
If you want foo to be interpreted as an unsigned value, you must use to_ulong():
#include <cstdlib>
#include <iostream>
#include <systemc>

using namespace std;
using namespace sc_dt; // sc_bv lives in this namespace when <systemc> is used

int sc_main(int argc, char* argv[]) {
    sc_bv<3> foo("0d6");
    cout << foo.to_long() << endl;  // prints -2
    cout << foo.to_ulong() << endl; // prints 6
    return EXIT_SUCCESS;
}
The .proto file:
package lm;

message helloworld
{
    required int32 id = 1;
    required string str = 2;
    optional int32 opt = 3;
}
The writer.cc file:
#include <iostream>
#include <fstream>
#include <string>
#include "lm.helloworld.pb.h"

using namespace std;

int main()
{
    lm::helloworld msg1;
    msg1.set_id(101000);
    msg1.set_str("helloworld,this is a protobuf writer");

    fstream output("log", ios::out | ios::trunc | ios::binary);

    string _data;
    msg1.SerializeToString(&_data);
    cout << _data << endl;

    if (!msg1.SerializeToOstream(&output))
    {
        cerr << "Failed to write msg" << endl;
        return -1;
    }
    return 0;
}
The reader.cc file:
#include <iostream>
#include <fstream>
#include <string>
#include "lm.helloworld.pb.h"

using namespace std;

void ListMsg(const lm::helloworld& msg)
{
    cout << msg.id() << endl;
    cout << msg.str() << endl;
}

int main(int argc, char* argv[])
{
    lm::helloworld msg1;
    {
        fstream input("log", ios::in | ios::binary);
        if (!msg1.ParseFromIstream(&input))
        {
            cerr << "Failed to parse address book." << endl;
            return -1;
        }
    }
    ListMsg(msg1);
    return 0;
}
It's a simple reader and writer model using protobuf, but what ends up in the log file is the readable string I typed in writer.cc rather than a "numeric format". Why is that?
The encoding is described here.
Without an example of what comes out the other end, that is slightly hard to answer precisely, but there are two possible explanations of what you are seeing:
you have explicitly switched into TextFormat in your code; this is very unlikely - and indeed, the primary use of TextFormat is debugging etc. (a sketch of what that explicit switch would look like follows this list)
far more likely, you're just seeing the text data from your message in the binary; text is encoded as UTF-8, so if you open a protobuf file in a text editor, pieces of it will appear readable enough to display something in the file
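For reference, producing the human-readable text format is a separate, explicit API call; a minimal sketch using the question's generated helloworld type (the TextFormat call is the standard protobuf C++ API, the rest mirrors writer.cc):

#include <iostream>
#include <string>
#include <google/protobuf/text_format.h>
#include "lm.helloworld.pb.h"

int main()
{
    lm::helloworld msg;
    msg.set_id(101000);
    msg.set_str("helloworld,this is a protobuf writer");

    // Binary wire format: this is what SerializeToString()/SerializeToOstream() emit.
    std::string binary;
    msg.SerializeToString(&binary);

    // Text format: only produced when explicitly requested, e.g. for debugging.
    std::string text;
    google::protobuf::TextFormat::PrintToString(msg, &text);
    std::cout << text << std::endl;
    return 0;
}

If you never call anything like TextFormat, the file on disk is the binary encoding described below.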
The real question is: what are the actual bytes in your output file? If it is something like:
08-88-95-06-12-24-68-65-6C-6C-6F-77-6F-72-6C-64-2C-74-68-69-73-20-69-73-20-61-20-70-72-6F-74-6F-62-75-66-20-77-72-69-74-65-72
then that is the binary format; but note that most of that is simply the UTF-8 of the string "helloworld,this is a protobuf writer" - which dominates the message by sheer size:
68-65-6C-6C-6F-77-6F-72-6C-64-2C-74-68-69-73-20-69-73-20-61-20-70-72-6F-74-6F-62-75-66-20-77-72-69-74-65-72
So if you look in any text editor, it will appear as a few garbled characters at the start followed by the clearly legible helloworld,this is a protobuf writer.
The "binary" here is the bit at the start:
08-88-95-06-12-24
This is:
08: header: field 1, varint
88-95-06: the value (decimal) 101000 as a varint (decoded in the sketch after this list)
12: header: field 2, length-prefixed
24: the value (decimal) 36 as a varint (the length of the string, in bytes)
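To make the varint arithmetic concrete, here is a small stand-alone sketch (not protobuf library code, just the base-128 rule written out) that recovers the two values above from their bytes:

#include <cstdint>
#include <initializer_list>
#include <iostream>

// Protobuf varints are little-endian base-128: each byte contributes its low
// 7 bits, and the high bit indicates whether another byte follows.
uint64_t decode_varint(std::initializer_list<uint8_t> bytes)
{
    uint64_t value = 0;
    int shift = 0;
    for (uint8_t b : bytes) {
        value |= static_cast<uint64_t>(b & 0x7F) << shift;
        shift += 7;
        if ((b & 0x80) == 0) break; // high bit clear: this was the last byte
    }
    return value;
}

int main()
{
    std::cout << decode_varint({0x88, 0x95, 0x06}) << std::endl; // prints 101000
    std::cout << decode_varint({0x24}) << std::endl;             // prints 36
    return 0;
}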
The key points to note:
if your message is dominated by text, then yes, large parts of it will look human-readable even in binary form
look at the overheads; it took 6 bytes to encode the entire rest of the message, and 3 bytes of that was data (the 101000), so only 3 bytes were actually lost as overhead; now compare and contrast with XML, JSON, etc. to understand what protobuf is doing to help you