I would really like some help with this.
I am relatively new to programming in C++. I need my code to be able to deal with ASCII and Unicode. The following code takes an input szTestString1, finds "Ni" in the input, and replaces it with "NI". szTestString1 is ASCII, so my code works fine. How can I make the code handle szTestString2 as well?
I substituted wstring for string and L"Ni" for "Ni", and used wcout instead of cout, but the output did not make much sense, and I am now lost.
Any hint would be greatly appreciated.
#include <iostream>
#include <string>
using namespace std;

class MyClass {
public:
    int getNiCount(string s1) {
        int Ni_counter = 0;
        int found, pos;
        for (pos = 0; found != -1; pos += (found + 2)) {
            found = s1.find("Ni", pos);
            Ni_counter++;
        }
        return (Ni_counter - 1);
    }

    string replaceNiWithNI(string s1) {
        int found, pos;
        do {
            found = s1.find("Ni", pos);
            if (found != -1)
                s1.replace(found, sizeof("Ni") - 1, "NI");
        } while (found != -1);
        return (s1);
    }
} obj1;

int main()
{
    const char *szTestString1 = "Ni ll Ni ll Ni nI NI nI Ni Ni";
    const wchar_t *szTestString2 = L"Ni ll Ni ll Ni nI NI nI Ni Ni";

    int Ni_occur_number;
    string new_string;

    // Invoke getNiCount(...) of class MyClass
    Ni_occur_number = obj1.getNiCount(szTestString1);
    // Invoke replaceNiWithNI(...) of class MyClass
    new_string = obj1.replaceNiWithNI(szTestString1);

    // Display on screen: "Found X occurrences of Ni. New string: Y"
    cout << "Found " << Ni_occur_number << " occurrences of Ni. "
         << "New string: " << new_string << endl;
}
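For reference, here is roughly what my wstring attempt looked like (a sketch reconstructed from memory, with the replacement loop simplified, so not the exact code I compiled):

#include <iostream>
#include <string>
using namespace std;

// Wide-string version of the replacement: same idea, but with wstring,
// L"..." literals and wcout.
wstring replaceNiWithNI_w(wstring s1) {
    size_t pos = 0;
    while (true) {
        size_t found = s1.find(L"Ni", pos);
        if (found == wstring::npos)
            break;
        s1.replace(found, 2, L"NI");
        pos = found + 2;
    }
    return s1;
}

int main()
{
    const wchar_t *szTestString2 = L"Ni ll Ni ll Ni nI NI nI Ni Ni";
    wcout << L"New string: " << replaceNiWithNI_w(szTestString2) << endl;
}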
I am new to CGAL.
I tried to modify Examples/Arrangement_on_surfaces_2 Bezier_curves.cpp to save the arrangement to a file, as shown below:
//! \file examples/Arrangement_on_surface_2/Bezier_curves.cpp
// Constructing an arrangement of Bezier curves.
#include <fstream>
#include <CGAL/basic.h>
#ifndef CGAL_USE_CORE
#include <iostream>
int main ()
{
std::cout << "Sorry, this example needs CORE ..." << std::endl;
return 0;
}
#else
#include <CGAL/Cartesian.h>
#include <CGAL/CORE_algebraic_number_traits.h>
#include <CGAL/Arr_Bezier_curve_traits_2.h>
#include <CGAL/Arrangement_2.h>
#include <CGAL/IO/Arr_iostream.h>
#include "arr_inexact_construction_segments.h"
#include "arr_print.h"
typedef CGAL::CORE_algebraic_number_traits Nt_traits;
typedef Nt_traits::Rational NT;
typedef Nt_traits::Rational Rational;
typedef Nt_traits::Algebraic Algebraic;
typedef CGAL::Cartesian<Rational> Rat_kernel;
typedef CGAL::Cartesian<Algebraic> Alg_kernel;
typedef Rat_kernel::Point_2 Rat_point_2;
typedef CGAL::Arr_Bezier_curve_traits_2<Rat_kernel, Alg_kernel, Nt_traits>
Traits_2;
typedef Traits_2::Curve_2 Bezier_curve_2;
typedef CGAL::Arrangement_2<Traits_2> Arrangement_2;
//typedef CGAL::Arrangement_2<Traits_2> Arrangement;
int main (int argc, char *argv[])
{
// Get the name of the input file from the command line, or use the default
// Bezier.dat file if no command-line parameters are given.
const char *filename = (argc > 1) ? argv[1] : "Bezier.dat";
const char *outfilename = (argc > 2) ? argv[2] : "BezierOut.dat";
// Open the input file.
std::ifstream in_file (filename);
if (! in_file.is_open()) {
std::cerr << "Failed to open " << filename << std::endl;
return 1;
}
// Read the curves from the input file.
unsigned int n_curves;
std::list<Bezier_curve_2> curves;
Bezier_curve_2 B;
unsigned int k;
in_file >> n_curves;
for (k = 0; k < n_curves; k++) {
// Read the current curve (specified by its control points).
in_file >> B;
curves.push_back (B);
std::cout << "B = {" << B << "}" << std::endl;
}
in_file.close();
// Construct the arrangement.
Arrangement_2 arr;
insert (arr, curves.begin(), curves.end());
// Print the arrangement size.
std::ofstream out_file;
out_file.open(outfilename);
out_file << "The arrangement size:" << std::endl
<< " V = " << arr.number_of_vertices()
<< ", E = " << arr.number_of_edges()
<< ", F = " << arr.number_of_faces() << std::endl;
out_file << arr;
out_file.close();
return 0;
}
#endif
If I comment out the line out_file << arr; it works fine. Otherwise it generates a C2678 error in read_x_monotone_curve in Arr_text_formatter.h.
I am using Visual Studio 15 x86.
Thank you for any help.
I solved this by modifying the print_arrangement(arr) routine in arr_print.h into save_arrangement(arr), with a std::ofstream in place of std::cout.
It appears that the << operator does not work.
If someone else has a better solution I am open to it.
Points of intersection in an arrangement of Bezier curves cannot be represented in an exact manner. Therefore, such an arrangement cannot be saved using the default export (<<) operator and the standard format.
The easiest solution is to store the curves, but this means that the arrangement must be recomputed each time the curves are read. Perhaps other solutions could be devised, but they are not implemented.
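A minimal sketch of that curve-storing approach, reusing the includes and typedefs from the example above (the output file name is my own choice, and it assumes the curve's << output matches the format its >> expects):

// Write the input curves to a file of our own format:
// the number of curves followed by one curve per line.
std::ofstream curves_out("BezierCurves.dat");
curves_out << curves.size() << std::endl;
for (std::list<Bezier_curve_2>::const_iterator it = curves.begin();
     it != curves.end(); ++it)
  curves_out << *it << std::endl;
curves_out.close();

// Later (or in another program): read the curves back and recompute the arrangement.
std::ifstream curves_in("BezierCurves.dat");
unsigned int n;
curves_in >> n;
std::list<Bezier_curve_2> loaded;
for (unsigned int i = 0; i < n; ++i) {
  Bezier_curve_2 c;
  curves_in >> c;
  loaded.push_back(c);
}
Arrangement_2 rebuilt;
insert(rebuilt, loaded.begin(), loaded.end());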
the .proto file:
package lm;
message helloworld
{
required int32 id = 1;
required string str = 2;
optional int32 opt = 3;
}
the writer.cc file:
#include <iostream>
#include <string>
#include "lm.helloworld.pb.h"
#include <fstream>
using namespace std;
int main()
{
lm::helloworld msg1;
msg1.set_id(101000);
msg1.set_str("helloworld,this is a protobuf writer");
fstream output("log", ios::out | ios::trunc | ios::binary);
string _data;
msg1.SerializeToString(&_data);
cout << _data << endl;
if(!msg1.SerializeToOstream(&output))
{
cerr << "Failed to write msg" << endl;
return -1;
}
return 0;
}
the reader.cc file:
#include <iostream>
#include <fstream>
#include <string>
#include "lm.helloworld.pb.h"
using namespace std;
void ListMsg(const lm::helloworld & msg)
{
cout << msg.id() << endl;
cout << msg.str() << endl;
}
int main(int argc, char* argv[])
{
lm::helloworld msg1;
{
fstream input("log", ios::in | ios::binary);
if (!msg1.ParseFromIstream(&input))
{
cerr << "Failed to parse address book." << endl;
return -1;
}
}
ListMsg(msg1);
return 0;
}
It's a simple reader and writer model using protobuf, but what's in the log is the readable string I typed in the writer.cc file instead of a "numeric format". Why is that?
The encoding is described here.
Without an example of what comes out the other end, that is slightly hard to answer precisely, but there are two possible explanations of what you are seeing:
1) you have explicitly switched into TextFormat in your code; this is very unlikely - and indeed, the primary use of TextFormat is debugging, etc.
2) far more likely, you're just seeing the text data from your message in the binary; text is encoded as UTF-8, so if you open a protobuf file in a text editor, pieces of it will appear readable enough to display something in the file
The real question is: what are the actual bytes in your output file? If it is something like:
08-88-95-06-12-24-68-65-6C-6C-6F-77-6F-72-6C-64-2C-74-68-69-73-20-69-73-20-61-20-70-72-6F-74-6F-62-75-66-20-77-72-69-74-65-72
then that is the binary format; but note that most of that is simply the UTF-8 of the string "helloworld,this is a protobuf writer" - which dominates the message by sheer size:
68-65-6C-6C-6F-77-6F-72-6C-64-2C-74-68-69-73-20-69-73-20-61-20-70-72-6F-74-6F-62-75-66-20-77-72-69-74-65-72
So if you look in any text editor, it will appear as a few garbled characters at the start followed by the clearly legible helloworld,this is a protobuf writer.
The "binary" here is the bit at the start:
08-88-95-06-12-24
This is:
08: header: field 1, varint
88-95-06: the value (decimal) 101000 as a varint
12: header: field 2, length-prefixed
24: the value (decimal) 36 as a varint (the length of the string, in bytes)
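As a sanity check, those three varint bytes can be decoded by hand; a small standalone decoder (a general-purpose sketch, not part of the protobuf API) recovers 101000 from them:

#include <cstddef>
#include <cstdint>
#include <iostream>

// Decode a base-128 varint: each byte contributes its low 7 bits,
// least-significant group first; a clear high bit marks the last byte.
uint64_t decode_varint(const unsigned char* bytes, std::size_t len) {
    uint64_t value = 0;
    int shift = 0;
    for (std::size_t i = 0; i < len; ++i) {
        value |= static_cast<uint64_t>(bytes[i] & 0x7F) << shift;
        shift += 7;
        if ((bytes[i] & 0x80) == 0) break; // last byte of this varint
    }
    return value;
}

int main() {
    const unsigned char v[] = {0x88, 0x95, 0x06};
    std::cout << decode_varint(v, 3) << std::endl; // prints 101000
}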
The key points to note:
1) if your message is dominated by text, then yes, large parts of it will look human-readable even in binary form
2) look at the overheads; it took 6 bytes to encode the entire rest of the message, and 3 bytes of that was data (the 101000) - so only 3 bytes were actually lost as overhead; now compare and contrast to XML, JSON, etc. to understand what protobuf is doing to help you
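If you want to check the actual bytes in your own file, a quick hex dump is enough (plain standard C++, nothing protobuf-specific; the "log" file name is taken from writer.cc):

#include <fstream>
#include <iomanip>
#include <iostream>

int main() {
    std::ifstream in("log", std::ios::binary);
    char c;
    bool first = true;
    while (in.get(c)) {
        if (!first) std::cout << '-';
        // Print each byte as two upper-case hex digits, e.g. 08-88-95-06-...
        std::cout << std::hex << std::uppercase << std::setw(2) << std::setfill('0')
                  << static_cast<int>(static_cast<unsigned char>(c));
        first = false;
    }
    std::cout << std::endl;
    return 0;
}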
I am using wcin to store a single character in a wchar_t, and then I try to print it with a wcout call using the French character 'é', but I can't see it at my console.
My compiler is g++ 4.5.4 and my OS is Ubuntu 12.10 64 bits.
Here is my attempt (wideChars.cpp) :
#include <iostream>
int main(){
using namespace std;
wchar_t aChar;
cout << "Enter your char : ";
wcin >> aChar;
wcout << L"You entered " << aChar << L" .\n";
return 0;
}
When I launch the program:
$ ./wideChars
Enter your char : é
You entered .
So, what's wrong with this code?
First, add some error checking. Test what wcin.good() returns after the input and what wcout.good() returns after the "You entered" print; I suspect one of those will return false.
What are your LANG and LC_* environment variables set to?
Then try to fix this by adding this at the top of your main(): wcin.imbue(std::locale("")); wcout.imbue(std::locale(""));
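Applied to your program, that suggestion looks roughly like this (only a sketch; note that I also use wcout for the prompt, so narrow and wide output are not mixed on the same stream):

#include <iostream>
#include <locale>

int main() {
    using namespace std;
    // Imbue both wide streams with the user's locale before any wide I/O.
    wcin.imbue(locale(""));
    wcout.imbue(locale(""));

    wchar_t aChar;
    wcout << L"Enter your char : ";
    wcin >> aChar;
    wcout << L"You entered " << aChar << L" .\n";
    return 0;
}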
I do not have my Ubuntu at hand right now, so I am flying blind here and mostly guessing.
UPDATE
If the above suggestion does not help then try to construct locale like this and imbue() this locale instead.
std::locale loc(
    std::locale(),
    new std::codecvt_byname<wchar_t, char, std::mbstate_t>(""));
UPDATE 2
Here is what works for me. The key is to set the C locale as well. IMHO, this is a bug in the GNU C++ standard library implementation. Unless I am mistaken, setting std::locale::global(std::locale("")); should also set the C library locale.
#include <iostream>
#include <locale>
#include <clocale>
#define DUMP(x) do { std::wcerr << #x ": " << x << "\n"; } while (0)
int main(){
using namespace std;
std::locale loc ("");
std::locale::global (loc);
DUMP(std::setlocale(LC_ALL, NULL));
DUMP(std::setlocale(LC_ALL, ""));
wcin.imbue (loc);
DUMP (wcin.good());
wchar_t aChar = 0;
wcin >> aChar;
DUMP (wcin.good());
DUMP ((int)aChar);
wcout << L"You entered " << aChar << L" .\n";
return 0;
}
UPDATE 3
I am confused: now I cannot reproduce it again, and setting std::locale::global(loc); seems to do the right thing with respect to the C locale as well.
So, after spending hours reading and understanding, I have finally made my first OpenCL program that actually does something: it adds two vectors and writes the result to a file.
#include <iostream>
#include <vector>
#include <cstdlib>
#include <string>
#include <fstream>
#include <iterator> // for std::istreambuf_iterator
#define __CL_ENABLE_EXCEPTIONS
#include <CL/cl.hpp>
int main(int argc, char *argv[])
{
try
{
// get platforms, devices and display their info.
std::vector<cl::Platform> platforms;
cl::Platform::get(&platforms);
std::vector<cl::Platform>::iterator i=platforms.begin();
std::cout<<"OpenCL \tPlatform : "<<i->getInfo<CL_PLATFORM_NAME>()<<std::endl;
std::cout<<"\tVendor: "<<i->getInfo<CL_PLATFORM_VENDOR>()<<std::endl;
std::cout<<"\tVersion : "<<i->getInfo<CL_PLATFORM_VERSION>()<<std::endl;
std::cout<<"\tExtensions : "<<i->getInfo<CL_PLATFORM_EXTENSIONS>()<<std::endl;
// get devices
std::vector<cl::Device> devices;
i->getDevices(CL_DEVICE_TYPE_ALL,&devices);
int o=99;
std::cout<<"\n\n";
// iterate over available devices
for(std::vector<cl::Device>::iterator j=devices.begin(); j!=devices.end(); j++)
{
std::cout<<"\tOpenCL\tDevice : " << j->getInfo<CL_DEVICE_NAME>()<<std::endl;
std::cout<<"\t\t Type : " << j->getInfo<CL_DEVICE_TYPE>()<<std::endl;
std::cout<<"\t\t Vendor : " << j->getInfo<CL_DEVICE_VENDOR>()<<std::endl;
std::cout<<"\t\t Driver : " << j->getInfo<CL_DRIVER_VERSION>()<<std::endl;
std::cout<<"\t\t Global Mem : " << j->getInfo<CL_DEVICE_GLOBAL_MEM_SIZE>()/(1024*1024)<<" MBytes"<<std::endl;
std::cout<<"\t\t Local Mem : " << j->getInfo<CL_DEVICE_LOCAL_MEM_SIZE>()/1024<<" KBbytes"<<std::endl;
std::cout<<"\t\t Compute Unit : " << j->getInfo<CL_DEVICE_MAX_COMPUTE_UNITS>()<<std::endl;
std::cout<<"\t\t Clock Rate : " << j->getInfo<CL_DEVICE_MAX_CLOCK_FREQUENCY>()<<" MHz"<<std::endl;
}
std::cout<<"\n\n\n";
//MAIN CODE BEGINS HERE
//get Kernel
std::ifstream ifs("vector_add_kernel.cl");
std::string kernelSource((std::istreambuf_iterator<char>(ifs)), std::istreambuf_iterator<char>());
std::cout<<kernelSource;
//Create context, select device and command queue.
cl::Context context(devices);
cl::Device &device=devices.front();
cl::CommandQueue cmdqueue(context,device);
// Generate Source vector and push the kernel source in it.
cl::Program::Sources sourceCode;
sourceCode.push_back(std::make_pair(kernelSource.c_str(), kernelSource.size()));
//Generate program using sourceCode
cl::Program program=cl::Program(context, sourceCode);
//Build program..
try
{
program.build(devices);
}
catch(cl::Error &err)
{
std::cerr<<"Building failed, "<<err.what()<<"("<<err.err()<<")"
<<"\nRetrieving build log"
<<"\n Build Log Follows \n"
<<program.getBuildInfo<CL_PROGRAM_BUILD_LOG>(devices.front());
}
//Declare and initialize vectors
std::vector<cl_float>B(993448,1.3);
std::vector<cl_float>C(993448,1.3);
std::vector<cl_float>A(993448,1.3);
cl_int N=A.size();
//Declare and intialize proper work group size and global size. Global size raised to the nearest multiple of workGroupSize.
int workGroupSize=128;
int GlobalSize;
if(N%workGroupSize) GlobalSize=N - N%workGroupSize + workGroupSize;
else GlobalSize=N;
//Declare buffers.
cl::Buffer vecA(context, CL_MEM_READ_WRITE, sizeof(cl_float)*N);
cl::Buffer vecB(context, CL_MEM_READ_ONLY , (B.size())*sizeof(cl_float));
cl::Buffer vecC(context, CL_MEM_READ_ONLY , (C.size())*sizeof(cl_float));
//Write vectors into buffers
cmdqueue.enqueueWriteBuffer(vecB, CL_TRUE, 0, (B.size())*sizeof(cl_float), &B[0] );
cmdqueue.enqueueWriteBuffer(vecC, CL_TRUE, 0, (C.size())*sizeof(cl_float), &C[0] );
//Executing kernel
cl::Kernel kernel(program, "vector_add");
cl::KernelFunctor kernel_func=kernel.bind(cmdqueue, cl::NDRange(GlobalSize), cl::NDRange(workGroupSize));
kernel_func(vecA, vecB, vecC, N);
//Reading back values into vector A
cmdqueue.enqueueReadBuffer(vecA,true,0,N*sizeof(cl_float), &A[0]);
cmdqueue.finish();
//Saving into file.
std::ofstream output("vectorAdd.txt");
for(int i=0;i<N;i++) output<<A[i]<<"\n";
}
catch(cl::Error& err)
{
std::cerr << "OpenCL error: " << err.what() << "(" << err.err() <<
")" << std::endl;
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
The problem is, for smaller values of N, I'm getting the correct result, which is 2.6.
But for larger values, like the one in the code above (993448), I get garbage output varying between 1 and 2.4.
Here is the Kernel code :
__kernel void vector_add(__global float *A, __global float *B, __global float *C, int N) {
// Get the index of the current element
int i = get_global_id(0);
//Do the operation
if(i<N) A[i] = C[i] + B[i];
}
UPDATE: OK, it seems the code is working now. I have fixed a few minor mistakes in my code above:
1) The part where GlobalSize is initialized has been fixed.
2) Stupid mistake in enqueueWriteBuffer (wrong parameters given).
It is now outputting the correct result for large values of N.
Try changing the data type from float to double, etc.
I want to parse a scalar as a bool.
This example works:
#include <yaml.h>
#include <iostream>
#include <sstream>
#include <string>
void operator>> (const YAML::Node & node, bool & b)
{
std::string tmp;
node >> tmp;
std::cout << tmp << std::endl;
b = (tmp == "1") || (tmp == "yes");
}
int main()
{
bool b1, b2;
std::stringstream ss("key: да\notherkey: no");
YAML::Parser parser(ss);
YAML::Node doc;
parser.GetNextDocument(doc);
doc["key"] >> b1;
doc["otherkey"] >> b2;
std::cout << b1 << std::endl;
std::cout << b2 << std::endl;
return 0;
}
But in a more complicated case the template operator is used:
YAML::operator>><bool> (node=..., value=#0x63f6e8) at /usr/include/yaml-cpp/nodeimpl.h:24
And I get 'YAML::InvalidScalar' if the string is not 'yes' or 'no'.
yaml-cpp already reads bools by default, as specified by the YAML spec.
The values y, yes, true, and on produce true, and n, no, false, and off produce false. If you want to extend or change this behavior (for example, so that "да" also produces true), as you found out, overloading operator >> in the YAML namespace works.
The reason it needs to be in the YAML namespace (but only for "more complicated examples" - meaning where you don't directly call operator >> with a bool argument) is the way C++ lookup works.
See this answer to my old question for a great explanation.
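For example, a minimal sketch of that overload placed in the YAML namespace (the body is just your version from above, extended so that "да" also maps to true):

#include <yaml.h>
#include <iostream>
#include <sstream>
#include <string>

namespace YAML {
// A non-template overload inside namespace YAML is found by yaml-cpp's
// templated extraction code, per the explanation above.
void operator>>(const Node& node, bool& b)
{
    std::string tmp;
    node >> tmp;
    b = (tmp == "1") || (tmp == "yes") || (tmp == "да");
}
}

int main()
{
    std::stringstream ss("key: да\notherkey: no");
    YAML::Parser parser(ss);
    YAML::Node doc;
    parser.GetNextDocument(doc);

    bool b1, b2;
    doc["key"] >> b1;      // true, via the custom overload
    doc["otherkey"] >> b2; // false
    std::cout << b1 << " " << b2 << std::endl;
    return 0;
}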