Using GNU C libraries and other data-structure libraries on Mac/Windows/Linux?

I want to know whether the C libraries for system, string, data structure, database, etc. tasks are platform dependent.
What parts of these libraries depend on a specific platform?
For example, how are regular expression, string manipulation, or SQL connectivity libraries dependent on a platform?
Can I use them on any platform for file I/O, paths, etc.,
just like we do things in Python using the sys/os modules?
I want to build a program which deals with strings, databases (SQLite3, MySQL, Oracle), data structures, file I/O and system paths, and which can run on Windows, Linux and Mac when recompiled on that platform.
I also want it to be console based.
Please don't recommend that I do it in another programming language; I want the C folks to answer, please.
Thanks

Well, as long as you use the standard C library, all is well. The GNU C Library (glibc) is one implementation of the C standard library; Microsoft, for example, has its own implementation.
From a user's (your) perspective, the implementation doesn't matter. If you, for example,
#include <stdio.h>
then you can, on any standards-compliant platform, call fopen() and then use fread() for file I/O, or any other standard C function.
Linux, Mac and Windows are all standards compliant (i.e. they implement ISO C), and thus the standard functions do the same thing on all platforms. The file paths that you pass to fopen() are the same, too. The fact that Windows uses a backslash ( \ ) in file paths instead of the Unix forward slash ( / ) doesn't matter: on Windows, in your C program, you can use the Unix-style notation.
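To make that concrete, here is a minimal sketch using only ISO C file I/O; it should build unchanged on all three platforms. The file name is a hypothetical example, and note that the forward slash in it works on Windows too.
/* portable_read.c -- standard C only; no platform-specific calls. */
#include <stdio.h>

int main(void)
{
    char buf[256];
    size_t n;
    FILE *f = fopen("data/input.txt", "rb");   /* hypothetical path */
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        fwrite(buf, 1, n, stdout);             /* echo the file to stdout */
    fclose(f);
    return 0;
}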

For C library functions, I suggest you ensure that you use functions within the POSIX standard. This will (at least in theory) guarantee compatibility with other POSIX platforms.
If you are programming on Linux using glibc (i.e. the normal C library), its documentation is reasonably good at pointing out which functions are GNU extensions, but the POSIX standard (referenced above) is the gold standard.
For other libraries, you'll have to look to the library in question.
Remember that if there are specific areas of incompatibility, you can use #ifdefs around those bits and keep the machine-specific elements in separate files. For example:
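A minimal sketch of that pattern; the make_dir wrapper name is hypothetical:
/* One machine-specific call isolated behind #ifdef. */
#include <stdio.h>

#ifdef _WIN32
#include <direct.h>
#define make_dir(path) _mkdir(path)
#else
#include <sys/stat.h>
#define make_dir(path) mkdir((path), 0755)
#endif

int main(void)
{
    if (make_dir("output") != 0)   /* hypothetical directory name */
        perror("make_dir");
    return 0;
}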
To the specific points you mentioned:
* The POSIX regexp functions (regcomp, regexec, regerror, regfree) are part of the POSIX standard (a short sketch of their use follows at the end of this answer).
* Most of the string manipulation functions (e.g. strstr) are part of the POSIX standard. Some (e.g. asprintf) are GNU extensions.
* SQL connectivity is not provided by the C library. You will need a database-specific client library to deal with your SQL connection, or something like libdbi. You'll have to look at the specific library to see what support there is on other platforms.
In particular, be careful with path manipulation on Windows (think about slashes vs. backslashes, and drive letters), specifically with how paths are input by the user and what is passed to the function.
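As promised in the first bullet above, here is a minimal sketch of the POSIX regex functions; the pattern and test string are hypothetical:
/* posix_regex.c -- regcomp/regexec/regerror/regfree, all POSIX. */
#include <regex.h>
#include <stdio.h>

int main(void)
{
    regex_t re;
    char errbuf[128];
    int rc = regcomp(&re, "^[0-9]+$", REG_EXTENDED | REG_NOSUB);
    if (rc != 0) {
        regerror(rc, &re, errbuf, sizeof errbuf);
        fprintf(stderr, "regcomp: %s\n", errbuf);
        return 1;
    }
    puts(regexec(&re, "12345", 0, NULL, 0) == 0 ? "match" : "no match");
    regfree(&re);
    return 0;
}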


Difference between typeof, __typeof and __typeof__ in Objective-C

In Objective-C I often use __typeof__(obj) when dealing with blocks, etc. Why not __typeof(obj) or typeof(obj)?
When to use which?
__typeof__() and __typeof() are compiler-specific extensions to the C language, because standard C does not include such an operator. Standard C reserves identifiers beginning with a double underscore for the implementation, which is why compilers spell their extensions that way (and also why you should never use such names for your own functions, variables, etc.).
typeof() is exactly the same, but throws the underscores out the window with the understanding that every modern compiler supports it. (Actually, now that I think about it, Visual C++ might not. It does support decltype() though, which generally provides the same behaviour as typeof().)
All three mean the same thing, but none are standard C so a conforming compiler may choose to make any mean something different.
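To illustrate, a small GNU C sketch of the operator in use; the SWAP macro is a common idiom, not something from the question. Under a gnu* dialect the spellings are interchangeable.
/* typeof_demo.c -- build with e.g. gcc -std=gnu99 typeof_demo.c */
#include <stdio.h>

#define SWAP(a, b) do { __typeof__(a) tmp_ = (a); (a) = (b); (b) = tmp_; } while (0)

int main(void)
{
    int x = 1, y = 2;
    typeof(x) z = x;    /* same meaning as __typeof__(x) here */
    SWAP(x, y);
    printf("%d %d %d\n", x, y, z);   /* prints: 2 1 1 */
    return 0;
}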
As others have mentioned, typeof() is an extension of C with varying support across compilers.
If you happen to be writing Objective-C for iOS or Mac apps, chances are good that you will be compiling your app with the Clang compiler.
Clang does support the use of typeof(), but technically only when your C language dialect is set to a gnu* type. However, __typeof__() is supported in both c* and gnu* language dialects, as detailed in the Clang documentation.
Now, if you're writing your code with Xcode, the default setting for the C language dialect appears to be GNU99, and the option allowing 'asm', 'inline' and 'typeof' is set to Yes, so using typeof() won't bring you any problems.
If you want to be (arguably) safer when using the Clang compiler, use __typeof__(). That way you won't be affected if the C language dialect used for compilation changes, or if someone decides to turn off the 'typeof' allowance.
Hope this will be helpful:
-ansi and the various -std options disable certain keywords. This causes trouble when you want to use GNU C extensions, or a general-purpose header file that should be usable by all programs, including ISO C programs. The keywords asm, typeof and inline are not available in programs compiled with -ansi or -std (although inline can be used in a program compiled with -std=c99 or -std=c11). The ISO C99 keyword restrict is only available when -std=gnu99 (which will eventually be the default) or -std=c99 (or the equivalent -std=iso9899:1999), or an option for a later standard version, is used.
The way to solve these problems is to put ‘__’ at the beginning and end of each problematical keyword. For example, use __asm__ instead of asm, and __inline__ instead of inline.
http://gcc.gnu.org/onlinedocs/gcc/Alternate-Keywords.html#Alternate-Keywords
https://clang.llvm.org/docs/UsersManual.html#c-language-features

Is using .. as parent directory cross platform?

More as a curiosity: if I want to prevent some code from looking at the parent directory (contained in a list of files/directories) and I do something along the lines of (e.g. Perl) next if /^\./ to exclude . and .., is this sufficiently cross-platform? If not, which platforms are different, and how might one prevent accessing the parent in that case?
It will work on most modern platforms. (It will also exclude Unix hidden files/directories, but this is probably a good thing given the context.) Windows has a special case at the root of a drive, but it's not so much "different syntax" as "not there in any syntax"; if you have any intention of using platforms such as OpenVMS or z/OS, it won't work at all.
Note that Perl and Python ship with cross-platform path utilities that you should use instead. I couldn't tell you about PHP or Ruby, but I presume both do so as well.
Doesn't work in ZX Spectrum. :)
Seriously, pretty much all platforms in current wide use (i.e. MSDOS, Windows, *NIX including Linux) conform to that. Be aware that you will also be excluding hidden directories on UNIX-like systems.
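For the C folks, the usual way to sidestep the question entirely is to compare the names explicitly rather than pattern-match; a minimal POSIX sketch:
/* list_dir.c -- skip "." and ".." by name; hidden files are kept. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct dirent *ent;
    DIR *d = opendir(".");   /* current directory, for illustration */
    if (d == NULL) {
        perror("opendir");
        return 1;
    }
    while ((ent = readdir(d)) != NULL) {
        if (strcmp(ent->d_name, ".") == 0 || strcmp(ent->d_name, "..") == 0)
            continue;        /* skip self and parent entries */
        puts(ent->d_name);
    }
    closedir(d);
    return 0;
}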

LD_PRELOAD on AIX

Can someone here tell me if there is something similar to LD_PRELOAD on recent versions of AIX? More specifically I need to intercept calls from my binary to time(), returning a constant time, for testing purposes.
AIX 5.3 introduced the LDR_PRELOAD (for 32-bit programs) and LDR_PRELOAD64 (for 64-bit programs) variables. They are analogous to LD_PRELOAD on Linux. Both are colon-separated lists of libraries, and symbols will be pre-emptively loaded from the listed shared objects before anything else.
For example, if you have a shared object foo.so:
LDR_PRELOAD=foo.so
If you use archives, use the AIX style to specify the object within the archive:
LDR_PRELOAD="bar.a(shr.so)"
And separate multiple entries with a colon:
LDR_PRELOAD="foo.so:bar.a(shr.so)"
AIX 5L uses the LDR_PRELOAD variable.
Not that I'm aware of. The closest thing we've done (with malloc/free, for debugging) is to:
* create a new library file with just the functions desired (same names as the originals);
* place it in a different directory from the original;
* make a dependency from our library file to the original;
* change the LD_LIBRARY_PATH (or SHLIB_PATH?) to put our library first in the search chain.
That way, our functions got picked up first by the loader, and any we didn't supply were provided by the original.
This was a while ago. AIX 5L is supposed to be much more like Linux (hence the L), so it may be able to do exactly what you require.
Alternatively, if you have the source, munge the calls to time() into mytime() and provide your own function. You're not testing exactly the same software, but the differences from that sort of minimal change shouldn't matter.
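A minimal sketch of that source-level substitution, using a macro so the call sites themselves don't have to change; the header name and mytime() are hypothetical, and shadowing a library function name like this is appropriate for test builds only:
/* fake_time.h -- include after <time.h> in files under test to
   reroute time() at compile time. */
#ifndef FAKE_TIME_H
#define FAKE_TIME_H

#include <time.h>

static time_t mytime(time_t *tloc)
{
    time_t fixed = 1234567890;   /* constant test time */
    if (tloc != NULL)
        *tloc = fixed;
    return fixed;
}

#define time(tloc) mytime(tloc)
#endif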

What language is to binary, as Perl is to text?

I am looking for a scripting (or higher level programming) language (or e.g. modules for Python or similar languages) for effortlessly analyzing and manipulating binary data in files (e.g. core dumps), much like Perl allows manipulating text files very smoothly.
Things I want to do include presenting arbitrary chunks of the data in various forms (binary, decimal, hex), converting data from one endianness to another, etc. That is, things you would normally use C or assembly for, but I'm looking for a language which allows for writing tiny pieces of code for highly specific, one-time purposes very quickly.
Any suggestions?
Well, while it may seem counter-intuitive, I found Erlang extremely well-suited for this, namely due to its powerful support for pattern matching, even on bytes and bits (called the "Erlang Bit Syntax"). This makes it very easy to write even quite advanced programs that inspect and manipulate data at the byte and even the bit level:
Since 2001, the functional language Erlang has come with a byte-oriented datatype (called a binary) and with constructs to do pattern matching on a binary.
And to quote informIT.com:
(Erlang) Pattern matching really starts to get fun when combined with the binary type. Consider an application that receives packets from a network and then processes them. The four bytes in a packet might be a network byte-order packet type identifier. In Erlang, you would just need a single processPacket function that could convert this into a data structure for internal processing. It would look something like this:
processPacket(<<1:32/big, RestOfPacket/binary>>) ->
    % Process type one packets
    ...;
processPacket(<<2:32/big, RestOfPacket/binary>>) ->
    % Process type two packets
    ...
So Erlang, with its built-in support for pattern matching and its functional style, is pretty expressive; see for example this implementation of uuencode in Erlang:
uuencode(BitStr) ->
    << (X+32):8 || <<X:6>> <= BitStr >>.
uudecode(Text) ->
    << (X-32):6 || <<X:8>> <= Text >>.
For an introduction, see Bit-level Binaries and Generalized Comprehensions in Erlang. You may also want to check out some of the following pointers:
Parsing Binaries with erlang, lamers inside
More File Processing with Erlang
Learning Erlang and Adobe Flash format same time
Large Binary Data is (not) a Weakness of Erlang
Programming Efficiently with Binaries and Bit Strings
Erlang bit syntax and network programming
erlang, the language for network programming (1)
Erlang, the language for network programming Issue 2: binary pattern matching
An Erlang MIDI File Reader/Writer
Erlang Bit Syntax
Comprehending endianness
Playing with Erlang
Erlang: Pattern Matching Declarations vs Case Statements/Other
A Stream Library using Erlang Binaries
Bit-level Binaries and Generalized Comprehensions in Erlang
Applications, Implementation and Performance Evaluation of Bit Stream Programming in Erlang
Perl's pack and unpack?
Take a look at the Python bitstring module; it looks like exactly what you want :)
The Python bitstring module was written for this purpose. It lets you take arbitrary slices of binary data and offers a number of different interpretations through Python properties. It also gives plenty of tools for constructing and modifying binary data.
For example:
>>> from bitstring import BitArray, ConstBitStream
>>> s = BitArray('0x00cf') # 16 bits long
>>> print(s.hex, s.bin, s.int) # Some different views
00cf 0000000011001111 207
>>> s[2:5] = '0b001100001' # slice assignment
>>> s.replace('0b110', '0x345') # find and replace
2 # 2 replacements made
>>> s.prepend([1]) # Add 1 bit to the start
>>> s.byteswap() # Byte reversal
>>> ordinary_string = s.bytes # Back to Python string
There are also functions for bit-wise reading and navigation in the bitstring, much like in files; in fact this can be done straight from a file without reading it into memory:
>>> s = ConstBitStream(filename='somefile.ext')
>>> hex_code, a, b = s.readlist('hex:32, uint:7, uint:13')
>>> s.find('0x0001') # Seek to next occurrence, if found
True
There are also views with different endiannesses, as well as the ability to swap endianness, and much more; take a look at the manual.
I'm using 010 Editor all the time to view binary files.
It's especially geared toward working with binary files.
It has an easy-to-use C-like scripting language to parse binary files and present them in a very readable way (as a tree, fields coded by color, stuff like that).
There are some example scripts to parse zip files and bmp files.
Whenever I create a binary file format, I always make a little script for 010 Editor to view the files. If you've got some header files with the structs, making a reader for binary files is a matter of minutes.
Any high-level programming language with pack/unpack functions will do. Perl, Python and Ruby can all do it; it's a matter of personal preference. I've written a bit of binary parsing in each of these and felt that Ruby was the easiest/most elegant for this task.
Why not use a C interpreter? I've always used them to experiment with snippets, and you could use one to script something like you describe without too much trouble.
I have always liked EiC. It was dead, but the project has been resurrected lately. EiC is surprisingly capable and reasonably quick. There is also CINT. Both can be compiled for different platforms, though I think CINT needs Cygwin on Windows.
Python's standard library has some of what you require. The array module in particular lets you easily read parts of binary files, swap endianness, and so on; the struct module allows for finer-grained interpretation of binary strings. However, neither is quite as rich as you require: for example, to present the same data as bytes or halfwords, you need to copy it between two arrays (the numpy third-party add-on is much more powerful for interpreting the same area of memory in several different ways), and to display some bytes in hex there's nothing much bundled beyond a simple loop or a list comprehension such as [hex(b) for b in thebytes[start:stop]]. I suspect there are reusable third-party modules that facilitate such tasks further, but I can't point you to one...
Forth can also be pretty good at this, but it's a bit arcane.
Well, if speed is not a consideration and you want Perl, then translate each line of binary into a line of characters: 0s and 1s. Yes, I know there are no line feeds in binary :) but presumably you have some fixed size, e.g. a byte or some other unit, with which you can break up the binary blob.
Then just use Perl's string processing on that data :)
If you're doing binary level processing, it is very low level and likely needs to be very efficient and have minimal dependencies/install requirements.
So I would go with C - handles bytes well - and you can probably google for some library packages that handle bytes.
Going with something like Erlang introduces inefficiencies, dependencies, and other baggage you probably don't want with a low-level library.

Does Ada have a preprocessor?

To support multiple platforms in C/C++, one would use the preprocessor to enable conditional compilation. E.g.,
#ifdef _WIN32
#include <windows.h>
#endif
How can you do this in Ada? Does Ada have a preprocessor?
The answer to your question is no, Ada does not have a preprocessor that is built into the language. That means each compiler may or may not have one, and there is no uniform syntax for preprocessing and things like conditional compilation. This was intentional: it's considered "harmful" to the Ada ethos.
There are almost always ways around the lack of a preprocessor, but often the solution can be a little cumbersome. For example, you can declare the platform-specific subprograms as 'separate' and then use build tools to compile the correct one (either a project system, using pragma body replacement, or a very simple directory scheme: put all the Windows files in /windows/ and all the Linux files in /linux/, and include the appropriate directory for the platform).
All that being said, the GNAT developers realized that sometimes you need a preprocessor and created gnatprep. It should work regardless of the compiler (but you will need to insert it into your build process). Similarly, for simple things (like conditional compilation) you can probably just use the C preprocessor, or even roll your own very simple one.
AdaCore provides the gnatprep preprocessor, which is specialized for Ada. They state that gnatprep "does not depend on any special GNAT features", so it sounds as though it should work with non-GNAT Ada compilers. Their User Guide also provides some conditional compilation advice.
I have been on a project where m4 was used as well, with the Ada spec and body files suffixed as ".m4s" and ".m4b", respectively.
My preference is really to avoid preprocessing altogether, and just use specialized bodies, setting up CM and the build process to manage them.
No, but the C preprocessor (cpp) or m4 can be run over any file, either on the command line or via a build tool like make or ant. I suggest calling your .ada files something else. I have done this for some time with Java files: I name the source file .m4 and use a make rule to create the .java, which is then built in the normal way.
I hope that helps.
Yes, it has one.
If you are using the GNAT compiler, you can use gnatprep for the preprocessing, or, if you use GNAT Programming Studio, you can configure your project file to define conditional compilation switches like
#if SOMESWITCH then
   -- This code is compiled only if the switch SOMESWITCH is active in your build configuration.
#end if;
In this case you can use gnatmake or gprbuild, so you don't have to run gnatprep by hand.
That's very useful, for example, when you need to compile the same code for several different OSes, even using different cross-compilers.
Some old Ada 83-era compilers have a package called a.app that utilized a #-prefixed subset of Ada (interpreted at build time) as a preprocessing language for generating Ada (to be then translated to machine code at compile time). Rational's Verdix Ada Development System (VADS) appears to be the progenitor of a.app among several Ada compilers. Sun Microsystems, for example, derived its Ada SPARCompiler from VADS and thus also had a.app. This is not unlike IBM's use of PL/I as the preprocessor of PL/I.
Chapter 2 of this document shows what a.app looks like: http://dlc.sun.com/pdf/802-3641/802-3641.pdf
No, it does not.
If you really want one, there are ways to get one (use C's, use a stand-alone one, etc.). However, I'd argue against it. It was a purposeful design decision not to have one. The whole idea of a preprocessor is very un-Ada.
Most of what C's preprocessor is used for can be accomplished in Ada in other more reliable ways. The only major exception is in making minor changes to a source file for cross-platform support. Given how much this gets abused in a typical cross-platform C program, I'm still happy there's no support for it in Ada. Very few C/C++ developers can control themselves enough to keep the changes "minor". The result may work, but is often nearly impossible for a human to read.
The typical Ada way to accomplish this would be to put the different code in different files and use your build system to somehow choose between them at compile time. Make is plenty powerful enough to help you do this.