Why is there no Lua implementation of Google's Protocol Buffers? Does a better solution for Lua already exist?

I'm working on exactly that as we speak: https://github.com/haberman/upb/wiki
Also, I'm the guy who wrote the 100-line parser linked in another answer below, but my upb library is much more full-featured.

I just created a Lua implementation of Protocol Buffers, lua-pb. It dynamically loads and parses .proto files to create message objects, so there is no dependency on Google's standard .proto compiler.
It uses LPeg for parsing .proto files, and struct and Lua BitOp for encoding/decoding.

Probably because a C or C++ implementation would be faster (and easier to write), and you could then hand the data off to Lua to be used if you want.
There's a 100-line C protocol buffer parser here: http://blog.reverberate.org/2008/07/12/100-lines-of-c-that-can-parse-any-protocol-buffer/
Or you could just use Google's C++ implementation and hand your data off to Lua from there.
Lua isn't built for bit twiddling, which is perhaps why nobody has implemented Protocol Buffers in it yet. It doesn't even have bitwise operators built in: http://lua-users.org/wiki/BitwiseOperators
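To make the "bit twiddling" concrete, here is a minimal C sketch (the function name decode_varint and the sample bytes are my own, not taken from any of the projects linked above) that decodes a Protocol Buffers base-128 varint and hands the result off to Lua through the standard Lua C API:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

/* Illustrative sketch only. Decode a protobuf base-128 varint from buf
   (at most len bytes). Returns the number of bytes consumed, or 0 on
   error/truncation. */
static size_t decode_varint(const uint8_t *buf, size_t len, uint64_t *out)
{
    uint64_t value = 0;
    size_t i;
    for (i = 0; i < len && i < 10; i++) {
        value |= (uint64_t)(buf[i] & 0x7F) << (7 * i);
        if ((buf[i] & 0x80) == 0) {   /* high bit clear: last byte */
            *out = value;
            return i + 1;
        }
    }
    return 0;  /* truncated or longer than 10 bytes */
}

int main(void)
{
    /* 300 encoded as a varint: 0xAC 0x02 */
    const uint8_t wire[] = { 0xAC, 0x02 };
    uint64_t value;

    if (decode_varint(wire, sizeof wire, &value) == 0) {
        fprintf(stderr, "bad varint\n");
        return 1;
    }

    /* Hand the decoded value off to Lua. */
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);
    lua_pushinteger(L, (lua_Integer)value);
    lua_setglobal(L, "decoded");
    luaL_dostring(L, "print('decoded value:', decoded)");
    lua_close(L);
    return 0;
}

You would need to link against a Lua development package (e.g. -llua); the point is only that the wire format is simple enough for the heavy lifting to live in C while Lua consumes the result.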

In what way does GObject facilitate binding?

On the official GObject website, we can read:
GObject, and its lower-level type system, GType, are used by GTK+ and most GNOME libraries to provide:
object-oriented C-based APIs and
automatic transparent API bindings to other compiled or interpreted languages
The first part seems clear to me, but not the second one.
Indeed, when talking about GObject and bindings, the concept usually introduced is gobject-introspection, but as far as I understand, gobject-introspection can be used to create .gir and .typelib files for any documented C library, not only for GObject-based libraries.
Therefore I wonder what makes GObject particularly binding-friendly.
as far as I understand, gobject-introspection can be used to create .gir and .typelib files for any documented C library, not only for GObject-based libraries
That's not really true in practice. You can do some very basic stuff, but you have to write the GIR by hand (instead of just running a program that scans the source code). The only ones I'm aware of are those distributed with gobject-introspection (the *.gir files; the *.c files there exist to avoid cyclic dependencies), and even those generally cover only a fairly small subset of the C API.
As for other features, almost everything in GObject helps… the basic idea is that bindings often need RTTI. There are types like GValue (a simple box that stores a value plus its type information) and GClosure (for callbacks), properties and signals describe themselves with GTypes, and so on. If you use GObject (instead of creating a new fundamental type) you get run-time data about inheritance and interfaces, and GObject's odd construction scheme even allows other languages to subclass types declared in C.
The reason g-ir-scanner can't really do much with non-GObject libraries is that all of that information is missing. After scanning the source code for annotations, g-ir-scanner actually loads the compiled module and uses GObject's API to grab this information (which makes cross-compiling painful). In other words, GObject-Introspection is a much smaller project than you might think… it gets a huge percentage of the data it needs from the GObject API.
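To illustrate the kind of run-time information the answer above is talking about, here is a small C sketch (my own example, not taken from the GObject documentation) that uses GValue and GType to ask the sort of questions a binding asks; it should build against the gobject-2.0 pkg-config package:

#include <glib-object.h>
#include <stdio.h>

int main(void)
{
    /* A GValue is a boxed "value plus type information" pair; a binding
       can inspect and convert it at run time without compile-time
       knowledge of the underlying C type. */
    GValue v = G_VALUE_INIT;
    g_value_init(&v, G_TYPE_INT);
    g_value_set_int(&v, 42);
    printf("the GValue holds a %s\n", G_VALUE_TYPE_NAME(&v)); /* prints "gint" */

    /* GType exposes the inheritance graph at run time, which is what lets
       a dynamic language walk a class hierarchy that was declared in C. */
    printf("GInitiallyUnowned is a GObject: %s\n",
           g_type_is_a(G_TYPE_INITIALLY_UNOWNED, G_TYPE_OBJECT) ? "yes" : "no");

    g_value_unset(&v);
    return 0;
}

Bindings and g-ir-scanner lean on exactly these calls (plus the property and signal machinery) to discover an API at run time, which is why a plain C library without GObject gives them so little to work with.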

Is there a way to force Thrift to generate a separate file for each class?

I have a Thrift file with many structs in it. For C#, Thrift generates a bunch of files, but for Objective-C it gives me only one .h and one .m file. The Thrift version is 0.9.0.
Typically not. The way the Thrift compiler organizes generated sources is pretty much based on what the original developer considered consistent with the usual way of doing things in the particular language. The generated code is not primarily intended to be neatly organized or maintainable; the primary goal is performance (and, of course, perfectly working code).
Sorry, there is no other answer, except maybe providing a patch.

How (if possible) to use PostgreSQL's parser (in C) independently?

I need a parser (mainly for the SELECT type of queries) and want to avoid the hassle of writing one from scratch. Does anybody know how to use PostgreSQL's scan.l/gram.y for this purpose? I've looked at pgpool too, but it seems similar. For now, it would be very helpful if someone could give instructions for compiling the parser (perhaps using the provided makefile) without errors, so that it can be fed (valid?) queries and output the parse tree (in whatever form)!
You probably cannot take a single file from the PostgreSQL source tarball and compile it separately. The parser uses internal object-like structures (implemented in C). But there is a possibility, though not a simple one: the ecpg preprocessor transforms the PostgreSQL grammar file into a secondary grammar file, and you can use the same mechanism. It relies on a small utility, parse.pl, which is part of the PostgreSQL source code (src/interfaces/ecpg/preproc).
PostgreSQL builds its language parser using yacc. Presumably you could take the yacc files and create a compatible parser with very little effort. Note that you must have flex and yacc installed to do this.
Note this is not a matter of taking a .c file from the source and transplanting it into your system. All you are getting is the parser, not the planner or anything else.
Given the level of detail in the question, it is hard to be more specific. Perhaps you could start there and post another question when you get stuck.
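For completeness, here is a rough sketch of what calling the in-tree parser looks like if you go the route of building inside the PostgreSQL source tree, as a server extension that links against the backend. The function name parse_query_to_string is mine, and raw_parser()'s signature has changed across versions (newer releases take an extra RawParseMode argument), so treat this as an outline rather than a recipe:

#include "postgres.h"
#include "fmgr.h"
#include "nodes/nodes.h"       /* nodeToString() */
#include "nodes/pg_list.h"
#include "parser/parser.h"     /* raw_parser() */
#include "utils/builtins.h"    /* text_to_cstring(), cstring_to_text() */

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(parse_query_to_string);

/* Hypothetical SQL wrapper:
   CREATE FUNCTION parse_query_to_string(text) RETURNS text
       AS 'MODULE_PATHNAME' LANGUAGE C STRICT;
   Returns the raw parse tree of the given query as a string. */
Datum
parse_query_to_string(PG_FUNCTION_ARGS)
{
    char *query = text_to_cstring(PG_GETARG_TEXT_PP(0));

    /* raw_parser() runs scan.l/gram.y and returns a List of raw statement
       nodes; it needs a backend environment, which is why this lives in an
       extension rather than a standalone program. */
    List *parsetree = raw_parser(query);

    /* nodeToString() serializes any Node tree into a readable string. */
    char *dump = nodeToString(parsetree);

    PG_RETURN_TEXT_P(cstring_to_text(dump));
}

The caveat from the answers above still applies: none of this works as a standalone program outside the backend.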

In which language is the proto compiler (of Google Protocol Buffers) written?

I would like to know which language the "proto compiler" (the compiler used to generate Java, Python, or C++ source files from .proto definitions) is written in. Is it maybe a mix of languages?
Any help would be appreciated.
Thanks in Advance
Horace
It appears to be written in C++. There's also documentation on Java and Python APIs, but those don't appear to contain the compiler itself (at least I don't see anything that's obviously the compiler in either case, though I didn't spend a whole lot of time looking for it either).
That said, I'm almost tempted to vote to close -- for most practical purposes, the language used to implement the compiler is basically a trivia question, irrelevant to actual use. There is, however, an entirely legitimate exception: if you're going to download and modify the compiler, knowing the language you'd need to work with could be quite useful.
The protoc compiler is written in C or C++ (it's a native program, in any case).
When I want to process .proto files from Java, I:
1. Use the protoc command to convert them to a protocol buffer descriptor file, i.e.
protoc protofile.proto --descriptor_set_out=OutputFile
2. Read the new protocol buffer file (it's a FileDescriptorSet) and use it.
An over-complicated example is the compileProto method in
http://code.google.com/p/protobufeditor/source/browse/trunk/%20protobufeditor/Source/ProtoBufEditor/src/net/sf/RecordEditor/ProtoBuf/re/display/ProtoLayoutSelection.java
It's complicated because the protoc command and its options can be stored in a properties file.
Note: the getFileDescriptor method reads the newly created protocol buffer file.

Does Ada have a preprocessor?

To support multiple platforms in C/C++, one would use the preprocessor to enable conditional compilation. E.g.,
#ifdef _WIN32
#include <windows.h>
#endif
How can you do this in Ada? Does Ada have a preprocessor?
The answer to your question is no: Ada does not have a preprocessor built into the language. That means each compiler may or may not have one, and there is no "uniform" syntax for preprocessing and things like conditional compilation. This was intentional: preprocessing is considered "harmful" to the Ada ethos.
There are almost always ways around the lack of a preprocessor, but oftentimes the solution can be a little cumbersome. For example, you can declare the platform-specific functions as 'separate' and then use build tools to compile the correct one (either a project system, using pragma body replacement, or a very simple directory scheme: put all the Windows files in /windows/ and all the Linux files in /linux/, and include the appropriate directory for the platform).
All that being said, the GNAT developers realized that sometimes you need a preprocessor and created gnatprep. It should work regardless of the compiler (but you will need to insert it into your build process). Similarly, for simple things (like conditional compilation) you can probably just use the C preprocessor, or even roll your own very simple one.
AdaCore provides the gnatprep preprocessor, which is specialized for Ada. They state that gnatprep "does not depend on any special GNAT features", so it sounds as though it should work with non-GNAT Ada compilers. Their User Guide also provides some conditional compilation advice.
I have been on a project where m4 was used as well, with the Ada spec and body files suffixed as ".m4s" and ".m4b", respectively.
My preference is really to avoid preprocessing altogether, and just use specialized bodies, setting up CM and the build process to manage them.
No, but the C preprocessor (cpp) or m4 can be run over any file, either on the command line or via a build tool like make or ant. I suggest giving your .ada files a different extension. I have done this for some time with Java files: I name the file .m4 and use a make rule to create the .java, then build it in the normal way.
I hope that helps.
Yes, it does.
If you are using the GNAT compiler, you can use gnatprep to do the preprocessing, or, if you use GNAT Programming Studio, you can configure your project file to define conditional compilation switches like
#if SOMESWITCH then
-- Your code here is executed only if the switch SOMESWITCH is active in your build configuration
#end if;
In this case you can use gnatmake or gprbuild so you don't have to run gnatprep by hand.
That's very useful, for example, when you need to compile the same code for several different OSes, even using different cross-compilers.
Some old Ada 83-era compilers have a package called a.app that used a #-prefixed subset of Ada (interpreted at build time) as a preprocessing language for generating Ada (to be then translated to machine code at compile time). Rational's Verdix Ada Development System (VADS) appears to be the progenitor of a.app among several Ada compilers. Sun Microsystems, for example, derived its Ada SPARCompiler from VADS and thus also had a.app. This is not unlike the use of PL/I as the preprocessor of PL/I, which IBM did.
Chapter 2 of this document describes what a.app looks like: http://dlc.sun.com/pdf/802-3641/802-3641.pdf
No, it does not.
If you really want one, there are ways to get one (use C's, use a stand-alone preprocessor, etc.). However, I'd argue against it. It was a purposeful design decision not to have one. The whole idea of a preprocessor is very un-Ada.
Most of what C's preprocessor is used for can be accomplished in Ada in other more reliable ways. The only major exception is in making minor changes to a source file for cross-platform support. Given how much this gets abused in a typical cross-platform C program, I'm still happy there's no support for it in Ada. Very few C/C++ developers can control themselves enough to keep the changes "minor". The result may work, but is often nearly impossible for a human to read.
The typical Ada way to accomplish this would be to put the different code in different files and use your build system to somehow choose between them at compile time. Make is plenty powerful enough to help you do this.