Save a YAML Emitter's content to a file with yaml-cpp

I just started playing around with yaml-cpp. I managed to build it properly and run some of the examples from the yaml-cpp wiki, but I can't find a way to save my emitter to a file.
Is this not possible? I mean, the PyYAML library has a dump function for this. Is there no such functionality in yaml-cpp?
Is there some workaround, such as converting a YAML emitter to an STL stream and then dumping that to a YAML file?
Please let me know
Thanks,
Adam

The function Emitter::c_str() returns a NULL-terminated C-style string (which you do not have to release), which you can then write to a file. For example:
#include <fstream>
#include "yaml-cpp/yaml.h"

YAML::Emitter emitter;
emitter << "Hello world!";

std::ofstream fout("file.yaml");
fout << emitter.c_str();
There is also Emitter::size(), which returns the number of bytes in that string, in case you want to do something more advanced and don't want to walk the string to find its length.
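For instance, if you want to write exactly that many bytes without relying on the NUL terminator, a minimal sketch (binary mode here is an assumption, not a requirement):

std::ofstream fout("file.yaml", std::ios::binary);
fout.write(emitter.c_str(), emitter.size());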
If you want to just dump a Node to a stream, there's a shortcut:
YAML::Node node = ...;
std::ofstream fout("file.yaml");
fout << node;
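Newer versions of yaml-cpp also provide YAML::Dump, which is the closest analogue to PyYAML's dump: it returns the serialized node as a std::string. A small sketch, assuming a populated node:

YAML::Node node = YAML::Load("[1, 2, 3]");
std::string text = YAML::Dump(node);  // same text operator<< would produce
fout << text;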

Related

Read file to String when a new document is created in Chromium?

I am looking to read a file into a string so that I can execute its contents (JS) whenever any new document (tab, iframe, etc.) is created.
This is not an extension but C++ code being built into a standalone binary.
I have the following code:
void InjectionRenderFrameObserver::DidCreateNewDocument() {
  std::string jq;
  base::FilePath path("jQuery.js");
  LOG(INFO) << "FilePath::" << path.value();
  if (base::PathExists(path)) {
    LOG(INFO) << "FileExists::";
  }
  if (!base::ReadFileToString(path, &jq)) {
    LOG(ERROR) << "Can't read content of '" << path.value() << "'.";
  }
}
The Console logs the following:
[967856:1:1122/1950927.423702:INFO:injection_render_frame_observer.cc(16)] FilePath::jQuery.js
[967856:1:1122/1950927.423981:INFO:injection_render_frame_observer.cc(20)] Can't read content of 'jQuery.js'.
I haven't been able to connect a debugger to it yet (I'm having some system issues), but I was wondering if there is an obvious issue I am missing that would prevent the above from working. The Chromium dev wiki/knowledge space is limited for new developers (or at least I haven't found a good source yet).
The above code is my starting example to get a file loaded. The end goal is a feature switch that allows a default or provided path to a script that will run before each document is created.
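One thing worth checking, as a hypothesis: a bare relative path like "jQuery.js" resolves against the process's current working directory, which is rarely the directory you expect, and renderer processes are normally sandboxed away from direct file access. A sketch of building an absolute path with base::PathService instead, assuming the file ships next to the executable and your standalone build permits file reads in this process:

void InjectionRenderFrameObserver::DidCreateNewDocument() {
  // Build an absolute path relative to the executable's directory
  // rather than relying on the current working directory.
  base::FilePath dir;
  if (!base::PathService::Get(base::DIR_EXE, &dir)) {
    LOG(ERROR) << "Could not resolve DIR_EXE.";
    return;
  }
  base::FilePath path = dir.Append(FILE_PATH_LITERAL("jQuery.js"));
  std::string jq;
  if (!base::ReadFileToString(path, &jq)) {
    LOG(ERROR) << "Can't read content of '" << path.value() << "'.";
  }
}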

How to do an incremental read of binary files

TL;DR: can I do an incremental read of binary files with Red or Rebol?
I would like to use Red to process some large (13MB to 2GB) structured binary files (Kurzweil synthesizer files). I've used other languages (C, Go, Tcl, Ruby, Dart) to walk through these files, and now I'd like to do the same with Red or Rebol.
Is there a way to incrementally read binary files, byte by byte? All I see is read/binary which seems to slurp the entire file at once (or a part of a file).
I'll need to jump around a little bit, too (either peek at the next byte, or skip to the end of a section, or skip past variable length strings to the start of data).
(Yes, I could make some helpers that tracked the position and used read/part/seek.)
I would like to make a call to the low level OS read/seek if that is possible - something new to learn.
This is on macos, but a portable solution would be great.
Thanks!
PS: "open/read %abc" gives an error "*** Script Error: open does not allow file! for its port argument", even though the help message say the port argument is "port [port! file! url! block!]"
Rebol has ports for that; they are planned for the 0.7.0 release of Red. So current I/O is very basic and buffer-only, and open is a preliminary stub.
I would like to make a call to the low level OS read/seek if that is possible - something new to learn.
You can leverage the Rebol or Red/System FFI as a learning exercise.
Here is how you would do it in Rebol:
>> file: open/direct/binary %file.dat
>> until [none? probe copy/part file 20]
>> close file
#{732F7072696E74657253657474696E6773312E62}
#{696E504B01022D00140006000800000021006149}
#{0910890100001103000010000000000000000000}
...
#{000000006A290000646F6350726F70732F617070}
#{2E786D6C504B0506000000000D000D0068030000}
#{292C00000000}
none
first file or pick file 1 will return the next byte value (integer!).
This even works with text files: open/lines/direct, in that case copy/part file 20 will return 20 lines, or you can use pick file 1 or first file to get the next line.
Soon this will be available on Red too.
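Until then, the helpers the question alludes to are short to write in Red; a rough sketch, assuming your Red build implements the /seek and /part refinements of read (the position is tracked manually):

reader: context [
    file: none
    pos: 0
    open-file: func [f [file!]] [file: f pos: 0]
    next-byte: func [/local b] [
        b: read/binary/seek/part file pos 1     ; one byte at offset pos
        if empty? b [return none]               ; end of file
        pos: pos + 1
        first b                                 ; integer! byte value
    ]
    skip-bytes: func [n [integer!]] [pos: pos + n]
]

For example, reader/open-file %file.dat followed by repeated reader/next-byte calls walks the file one byte at a time, and reader/skip-bytes jumps past a section.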

Does libhdfs c/c++ api support read/write compressed file

I found a claim from around 2010 that libhdfs does not support reading/writing gzip files.
I downloaded the newest Hadoop (2.0.4) and read hdfs.h; there are no compression-related arguments there either.
Now I am wondering: does it support reading compressed files these days?
If it doesn't, how can I patch libhdfs to make it work?
Thanks in advance.
Best Regards
Haiti
As far as I know, libhdfs only uses JNI to access HDFS. If you are familiar with the HDFS Java API, libhdfs is just a wrapper around org.apache.hadoop.fs.FSDataInputStream, so it cannot read compressed files directly.
I guess that you want to access a file in HDFS from C/C++. If so, you can use libhdfs to read the raw file, and use a zip/unzip C/C++ library to decompress the content; the compressed file format is the same as it would be anywhere else. For example, if the files are compressed with lzo, you can use an lzo library to decompress them.
But if the file is a sequence file, you may need to use JNI to access it, as sequence files are Hadoop-specific. I have seen Impala do similar work before, but it's not out-of-the-box.
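For example, here is a rough sketch (untested, error handling trimmed) of streaming a gzip file out of HDFS with libhdfs and inflating it chunk by chunk with zlib; "default" as the namenode and the CHUNK size are placeholders:

#include <fcntl.h>
#include <stdio.h>
#include <hdfs.h>
#include <zlib.h>

#define CHUNK 65536

/* Read a gzip file from HDFS and write the inflated bytes to stdout. */
int dump_gzipped_hdfs_file(const char *path) {
    unsigned char in[CHUNK], out[CHUNK];
    hdfsFS fs = hdfsConnect("default", 0);      /* use fs.defaultFS */
    hdfsFile f = hdfsOpenFile(fs, path, O_RDONLY, 0, 0, 0);

    z_stream zs = {0};
    inflateInit2(&zs, 16 + MAX_WBITS);          /* expect a gzip header */

    tSize n;
    while ((n = hdfsRead(fs, f, in, CHUNK)) > 0) {
        zs.next_in = in;
        zs.avail_in = (uInt)n;
        do {                                    /* drain this input chunk */
            zs.next_out = out;
            zs.avail_out = CHUNK;
            if (inflate(&zs, Z_NO_FLUSH) < 0) goto done;
            fwrite(out, 1, CHUNK - zs.avail_out, stdout);
        } while (zs.avail_out == 0);
    }
done:
    inflateEnd(&zs);
    hdfsCloseFile(fs, f);
    hdfsDisconnect(fs);
    return 0;
}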
Thanks for the reply. Using libhdfs to read the raw file and then zlib to inflate the content works. The files were gzipped. I used code like this:
// buf holds readlen compressed bytes read via libhdfs;
// buf1 is the output buffer for the inflated data.
z_stream gzip_stream;
gzip_stream.zalloc = (alloc_func)0;
gzip_stream.zfree = (free_func)0;
gzip_stream.opaque = (voidpf)0;
gzip_stream.next_in = buf;
gzip_stream.avail_in = readlen;
gzip_stream.next_out = buf1;
gzip_stream.avail_out = 4096 * 4096;
// 16 + MAX_WBITS tells zlib to expect a gzip wrapper.
ret = inflateInit2(&gzip_stream, 16 + MAX_WBITS);
if (ret != Z_OK) {
    printf("inflate init error\n");
}
ret = inflate(&gzip_stream, Z_NO_FLUSH);
ret = inflateEnd(&gzip_stream);
printf("the buf \n%s\n", buf1);
return buf1;  // return the decompressed buffer, not the raw input

What is the most portable/cross-platform way to represent a newline in go/golang?

Currently, to represent a newline in go programs, I use \n. For example:
package main

import "fmt"

func main() {
	fmt.Printf("%d is %s \n", 'U', string(85))
}
... will yield 85 is U followed by a newline.
However, this doesn't seem all that cross-platform. Looking at other languages, PHP represents this with a global constant (PHP_EOL). Is \n the right way to represent newlines in a cross-platform manner in Go?
I got curious about this, so I decided to see what exactly fmt.Println does: http://golang.org/src/pkg/fmt/print.go
If you scroll to the very bottom, you'll see an if addnewline branch where \n is always used. I can hardly speak to whether this is the most "cross-platform" way of doing it, and Go was originally tied to Linux in the early days, but that's where things stand in the standard library.
I was originally going to suggest just using fmt.Fprintln, and that may still be valid: if the current behavior isn't appropriate, a bug could be filed, and then the code would simply need to be recompiled with the latest Go toolchain.
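For completeness, the Fprintln suggestion looks like this; it writes to any io.Writer and always appends '\n':

package main

import (
	"fmt"
	"os"
)

func main() {
	fmt.Fprintln(os.Stdout, "85 is U") // appends '\n' on every platform
}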
You can always use an OS-specific file to declare certain constants. Just like _test.go files are only used when running go test, _[os].go files are only included when building for that target platform.
Basically you'll need to add the following files:
- main.go
- main_darwin.go // Mac OSX
- main_windows.go // Windows
- main_linux.go // Linux
You can declare a LineBreak constant in each of the main_[os].go files and have your logic in main.go.
The contents of your files would look something like this (note that all of the files must declare the same package, here main, for the example to compile):
main_darwin.go

package main

const LineBreak = "\n"

main_linux.go

package main

const LineBreak = "\n"

main_windows.go

package main

const LineBreak = "\r\n"

and simply in your main.go file, write the code and refer to LineBreak:

main.go

package main

import "fmt"

func main() {
	fmt.Printf("%d is %s %s", 'U', string(85), LineBreak)
}
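A lighter-weight variant of the same idea checks the platform at run time instead of build time; a minimal sketch using runtime.GOOS:

package main

import (
	"fmt"
	"runtime"
)

// lineBreak returns the platform-conventional newline at run time.
func lineBreak() string {
	if runtime.GOOS == "windows" {
		return "\r\n"
	}
	return "\n"
}

func main() {
	fmt.Printf("%d is %s%s", 'U', string(rune(85)), lineBreak())
}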
Having the OS determine the newline character turns out, in many contexts, to be wrong. What you really want to know is what the "record" separator is, and Go assumes that you, the programmer, should know that.
Even if the binary runs on Windows, it may be consuming a file from a Unix OS.
Line endings are determined by what the source of the file or document said was a line ending, not the OS the binary is running in.
You can infer the platform from os.PathSeparator:

package main

import (
	"fmt"
	"os"
)

func main() {
	// os.PathSeparator is a rune; %c formats it as a character
	// (%v would print its numeric code point instead).
	var PS = fmt.Sprintf("%c", os.PathSeparator)
	var LineBreak = "\n"
	if PS != "/" {
		LineBreak = "\r\n"
	}
	fmt.Printf("Line Break %v", LineBreak)
}
https://play.golang.com/p/UTnBbTJyL9c

What Clever Solutions are There for Including Resources in a Static Lib?

I have a static library Xcode 4 project that includes a home-brewed rendering engine, and I re-use this engine in multiple apps. This engine uses OpenGL ES 2.0, and by extension, shaders. As shaders got more complicated, I moved away from storing them as NSStrings in a source file, and now store them as standalone text files with the .vert and .frag extensions.
This works fine for apps that include the rendering engine in their own source; the shaders are simply added to the app's "Copy Bundle Resources" build phase, and loaded at runtime into NSStrings and compiled, linked, etc.
This strategy doesn't work at all if the rendering engine that loads these shaders is in a static library project; there is no bundle into which to copy resources. I'm currently forced to have every client project of the static lib rendering engine include their own copies of the shaders in their own "Copy Bundle Resources" build phase. This is a giant pain, and defeats a large part of the convenience of making the render engine into a static lib in the first place.
I suppose this is specific instance of the more general problem of "Resources in a Static Library". The best solution I can think of is copying the shader files' contents into strings in a header file, which are then included in the rendering engine's source. I may even be able to automate the conversion from .frag to .h with some "Run Scripts" build phase magic, but it seems unfortunately complicated.
Is there anything I'm missing?
For the benefit of posterity, I'll share the solution I ended up using. At a high level, the solution is to compile the resource in question into the application binary, thus obviating the need to also copy it to bundle resources.
I decided that a generic and reliable way to compile any file's data into the binary would be to store the file contents in a static byte array in a header file. Assuming there is already a header file created and added to the static lib target, I made the following bash script to read a file and write its contents as a byte array of hex literals in C syntax. I then run the script in a "Run Script" build phase before the Compile Sources and Copy Headers build phases:
#!/bin/bash
# Hexify.sh reads an input file, and hexdumps its contents to an output
# file in C-compliant syntax. The final argument is the name of the array.
infile=$1
outfile=$2
arrayName=$3

fileSize=$(stat -f "%z" "$infile")
fileHexString=$(hexdump -ve '1/1 "0x%.2x, "' "$infile")
variableName="${arrayName}Size"
nullTermination="0x00"

echo "//" > "$outfile"
echo "// This file was automatically generated by a build script." >> "$outfile"
echo "// Do not modify; the contents of this file will be overwritten on each build." >> "$outfile"
echo "//" >> "$outfile"
echo "" >> "$outfile"
echo "#ifndef some_arbitrary_include_guard" >> "$outfile"
echo "#define some_arbitrary_include_guard" >> "$outfile"
echo "" >> "$outfile"
echo "static const int $variableName = $((fileSize+1));" >> "$outfile"
echo "static const char $arrayName[$variableName] = {" >> "$outfile"
echo -e "\t$fileHexString$nullTermination" >> "$outfile"
echo "};" >> "$outfile"
echo "#endif" >> "$outfile"
So, for example, if I have a resource file example.txt:
Hello this
is a file
And if I were to run ./Hexify.sh example.txt myHeader.h exampleArray, the header would look like this:
//
// This file was automatically generated by a build script.
// Do not modify; the contents of this file will be overwritten on each build.
//
#ifndef some_arbitrary_include_guard
#define some_arbitrary_include_guard
static const int exampleArraySize = 21;
static const char exampleArray[exampleArraySize] = {
0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x20, 0x74, 0x68, 0x69, 0x73, 0x0a,
0x69, 0x73, 0x20, 0x61, 0x20, 0x66, 0x69, 0x6c, 0x65, 0x00
};
#endif
Now, at any point in time that I would have loaded said resource from the main bundle, I can instead refer to the byte array in that header file. Note that my version of the script adds a null terminated byte, which makes the data suitable for creating string objects. That may not apply in all cases.
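For example, where a shader was previously loaded from the bundle, the embedded array can feed glShaderSource directly; a sketch using the generated example header above:

#include <OpenGLES/ES2/gl.h>
#include "myHeader.h"

GLuint shader = glCreateShader(GL_VERTEX_SHADER);
const GLchar *src = exampleArray;       /* NUL-terminated per the script */
glShaderSource(shader, 1, &src, NULL);  /* length NULL => read to the NUL */
glCompileShader(shader);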
As one final addendum, I apologize if that bash script makes any real bash programmers cringe; I have almost no idea what I'm doing with bash.
I feel your pain, buddy; static libraries and resources don't go well together. I think the easiest way to do this is the one you already mentioned: write a script that reads your shaders, escapes them properly, and wraps them in C-compliant code.
I'm no expert, but maybe you could add the shader data to some section of your Mach-O executable at link time? But this eventually boils down to the same solution as mentioned above, with the only disadvantage that you're left with the ugly part of the job.
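For what it's worth, that linker route does exist: Apple's ld accepts a -sectcreate flag that copies a file into a named section of the binary, and the bytes can be retrieved at run time with getsectiondata. A hypothetical sketch (the segment/section names are placeholders, and the flag must go on the final app target, since a static library isn't linked into its own image):

/* Other Linker Flags on the app target:
     -sectcreate __DATA __shaders shader.frag */

#include <mach-o/getsect.h>
#include <mach-o/ldsyms.h>

unsigned long size = 0;
uint8_t *bytes = getsectiondata(&_mh_execute_header,
                                "__DATA", "__shaders", &size);
/* bytes/size now cover the embedded shader source (not NUL-terminated). */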
I'd go for the string constants using some shell script. PHP in my experience is pretty good at doing this kind of work. And of course bash scripts, but I'm not too good at that.
You could try to create a framework, it seems to fit your needs. There's an example on how to create such a framework for iOS on this page:
http://db-in.com/blog/2011/07/universal-framework-iphone-ios-2-0/
The guy that wrote the guide actually uses this technique to distribute his own iOS 3D engine project.
Edit: linked to newer version of the guide.