Easy way to find out whether a POSIX API is implemented by an OS

While writing my code I thought of having a common implementation for all POSIX OSes, as opposed to a separate implementation for each OS. One of the POSIX APIs I use is posix_fallocate(), and while testing I found that it is not supported by macOS.
Had I known this earlier, I would not have used this API, or I would have written separate implementations for each OS.
So my question is: what is an easy way to find out whether a particular POSIX call is supported on different OSes? Do people always have to search the documentation for each target OS?
Thanks.

Looking at the documentation is a good start, but it often won't tell you when a particular function has been implemented, which is also important. For obscure platforms, it may be difficult to tell which older versions are still relevant, which makes it even harder to decide whether a dependency on a particular POSIX feature is acceptable.
The other question is whether a feature is implemented, but with substandard quality. posix_fallocate is an interesting corner case in this regard: the glibc implementation falls back to emulation if the file system lacks support for an actual low-level fallocate operation (as NFS did until recently):
https://www.gnu.org/software/libc/manual/html_node/Storage-Allocation.html
Depending on what your application does, this behavior might not be acceptable. Just checking header files and documentation might not reveal this (the Note part in the documentation above was added only recently, for example).
In the end, there isn't a good substitute for building and testing as early as possible on all relevant targets, but I understand that this is increasingly difficult for non-Linux targets.
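To make the posix_fallocate example concrete, here is a minimal sketch of the kind of wrapper this usually ends up requiring. It is an illustration under stated assumptions, not the asker's code: the function name allocate_file is made up, and the macOS branch assumes that fcntl(F_PREALLOCATE) followed by ftruncate() is an acceptable substitute for your workload.

```c
/* Minimal sketch (illustrative only): preallocate `len` bytes for `fd`.
 * Uses posix_fallocate() where available and falls back to
 * fcntl(F_PREALLOCATE) + ftruncate() on macOS, which lacks posix_fallocate(). */
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

static int allocate_file(int fd, off_t len)
{
#if defined(__APPLE__)
    /* macOS: ask the kernel to reserve contiguous space first, then
     * fall back to a non-contiguous reservation if that fails. */
    fstore_t store = { F_ALLOCATECONTIG, F_PEOFPOSMODE, 0, len, 0 };
    if (fcntl(fd, F_PREALLOCATE, &store) == -1) {
        store.fst_flags = F_ALLOCATEALL;
        if (fcntl(fd, F_PREALLOCATE, &store) == -1)
            return errno;
    }
    /* Extend the file to the requested length. */
    return (ftruncate(fd, len) == -1) ? errno : 0;
#else
    /* Other POSIX systems: posix_fallocate() returns an error number
     * directly (it does not set errno). */
    return posix_fallocate(fd, 0, len);
#endif
}
```

Even with such a wrapper, the quality-of-implementation caveat above still applies: on file systems where glibc falls back to emulation, the call may succeed while doing something quite different from a real low-level allocation.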

Related

How to implement custom build types for Modern CMake in a generalizable (multi-compiler/platform/generator) way?

We currently have a multi-platform, multi-compiler build for a C++ program based on a bunch of custom ad hoc scripts. I'm investigating switching over to Modern CMake (with "modern" used in the sense described here), and trying to do it in a "best practices" way as much as possible. (For example, doing so in a fashion which will work with all sorts of generator types.)
The issue I'm running into is that we have more build types (build modes? configuration types?) than just the standard Release/Debug/RelWithDebInfo/MinSizeRel. I'm wondering how to implement these with Modern CMake.
From what I can tell (e.g. from the FAQ), the "traditional" way to add new build modes would be to hardcode various variables with names containing the new mode name. I'm skeptical about this approach for several reasons. First off, the list of variables which need to be set is left rather unclear. Secondly, it looks like the Modern CMake approach is to attach compiler flags to targets, rather than messing with global settings, and I'm not sure how to square that with the build type globals. Lastly, given the hope to have things working in a multi-compiler/cross platform fashion, I'm a bit concerned that Release/Debug/etc. works "automagically" on multiple platforms/compilers, but the custom build modes seem to require hard coding things for a particular platform/compiler. I realize that some platform/compiler specific code may be unavoidable, but I'm hoping to minimize that to the extent possible, and do that robustly.
What's the recommended best practices "Modern CMake" way of creating custom build/configuration types/modes? Specifically one which is most extensible for use with multiple compilers and platforms, and works robustly with both multi-configuration and single-configuration generators?

Is it possible to embed LLVM Interpreter in my software and does it make sense?

Suppose I have a piece of software and I want to make cross-platform plugins. You compile the plugin for a virtual machine, and any platform running my software would be able to run this code.
I am wondering if it is possible to use the LLVM interpreter and bitcode for this purpose. I am also wondering whether it makes sense to use LLVM for this purpose instead of something else, i.e. is this what LLVM was made for?
I'm not sure that LLVM was designed for it. However, I doubt there is anything that hasn't been done using LLVM [1].
Other virtual-machine-based script engines are created specifically for this job:
Lua is very popular
Wikipedia lists some other Extension/embeddable languages under the Scripting language entry
If you're looking for embeddable virtual machines:
IKVM supports embedding JVM and CLR in a bridged mode (interoperable)
Parrot supports embedding (and includes a Python interpreter; mind you, you can just run Python bytecode images)
Perl has similar architecture and supports embedding
JavaScript supports embedding (not sure about the architecture of V8, but I guess it would use a virtual machine)
Mono's CLR engine supports embedding: http://www.mono-project.com/Embedding_Mono
[1] including compiling C++ code to JavaScript to run in your browser...
There is VMIR (https://github.com/andoma/vmir), which is an LLVM bitcode interpreter / JIT engine that's intended to be embedded into other apps.
Disclaimer: I'm the author of it; it's still work in progress, but it works reasonably well.
In theory, there exists a limited subset of LLVM IR which can be portable across various platforms. You shall not specify alignments, you shall not bitcast pointers to integral types, you must avoid intrinsics, etc. Which means you can't immediately use code generated by a stock C compiler (llvm-gcc, Clang, whatever) unless you specify a limited target for it and implement sanitising LLVM passes. Another issue is that the bitcode format from different LLVM versions is not guaranteed to be compatible.
In practice, I would not go there. Mono is a reasonably small, embeddable, fast VM, and the entire .NET stack of tools is available for it. The VM itself is pretty low-level (as long as you do not care about verifiability).
LLVM includes an interpreter, so if you can build this interpreter for your target platforms, you can then evaluate LLVM bitcode on the fly.
It's apparently not so fast though.
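For a rough idea of what evaluating bitcode on the fly looks like, here is a minimal sketch using LLVM's C API (llvm-c). It is an illustration only: the bitcode file plugin.bc and the entry point plugin_main are made-up names, error handling is minimal, and the exact headers and link flags vary between LLVM versions.

```c
/* Minimal sketch: load an LLVM bitcode file and run a function from it with
 * LLVM's interpreter, via the llvm-c API. File and function names are
 * illustrative; details vary between LLVM versions. */
#include <stdio.h>
#include <llvm-c/Core.h>
#include <llvm-c/BitReader.h>
#include <llvm-c/ExecutionEngine.h>

int main(void)
{
    LLVMMemoryBufferRef buf;
    LLVMModuleRef module;
    LLVMExecutionEngineRef engine;
    LLVMValueRef fn;
    char *err = NULL;

    LLVMLinkInInterpreter();   /* pull in the interpreter implementation */

    if (LLVMCreateMemoryBufferWithContentsOfFile("plugin.bc", &buf, &err)) {
        fprintf(stderr, "cannot read bitcode: %s\n", err);
        return 1;
    }
    if (LLVMParseBitcode2(buf, &module)) {
        fprintf(stderr, "cannot parse bitcode\n");
        return 1;
    }
    if (LLVMCreateInterpreterForModule(&engine, module, &err)) {
        fprintf(stderr, "cannot create interpreter: %s\n", err);
        return 1;
    }
    if (LLVMFindFunction(engine, "plugin_main", &fn)) {
        fprintf(stderr, "plugin_main not found\n");
        return 1;
    }

    /* Interpret the function; here it takes no arguments and returns an int. */
    LLVMGenericValueRef result = LLVMRunFunction(engine, fn, 0, NULL);
    printf("plugin_main returned %lld\n",
           (long long)LLVMGenericValueToInt(result, 1));

    LLVMDisposeExecutionEngine(engine); /* also frees the module it owns */
    return 0;
}
```

Compile and link flags typically come from llvm-config; the exact component names to pass it depend on your LLVM version.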
In their classic discussion of LLVM vs. libJIT (one you do not want to miss if you're a fan of open source, LLVM, or compilers), which took place long before LLVM became famous and established, the author of libJIT, Rhys Weatherley, raised this particular issue: he stated that LLVM is not suitable for embedding, while Chris Lattner, the author of LLVM, stated the opposite: that it is modular and you can use it in any fashion you like, including embedding only the parts you need.

When is code considered platform-independent?

This question arose as the result of a discussion on the reply to a question about the platform-independence of the PhysicsFS library. The question is: when can particular code be considered cross-platform or platform-independent? Should the code conform to certain standards, or just run on a particular set of platforms?
That's a very good question! I am venturing to guess here so bear with me as I don't have a definite answer really.
I think "platform independent" refers to code that is run by something that hides away the infrastructure. For instance the JVM hides the platform from the language -- there is nothing in the language that gives you access to the platform -- hence the platform independence.
Cross-platform, I believe, is something that is not shielded from the details of the platform. Think JavaScript, for instance: you have access to the browser and all of its quirks. So writing JavaScript code to run on all browsers would be cross-browser, and you can extrapolate this, I think, to "cross-platform".
Platform-independent: if the compiler/system library/VM/etc. are standard-conformant for that language/library/etc., the code must compile/run on every future platform that adheres to the prescribed standard. This means that the code cannot use platform-dependent #ifdefs anywhere, and that the program does not access APIs not defined in said standard.
Cross-platform: this is kind of ambiguous, and mostly personal preference. For me, it means that it runs on at least two of the three big platforms/OSes (x86(_64) Windows, Linux, and/or Mac). In most cases it will work on many more platforms and architectures, and use some or mostly POSIX API functionality (at least for non-Windows code). It will contain a limited number of #ifdefs to call specialized APIs for platforms that require it (POSIX vs. Win32 vs. ...), as in the sketch below.
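As a tiny illustration of that "limited number of #ifdefs" style, here is a hypothetical helper; the function name and the choice of environment variables are assumptions made for the example.

```c
/* Hypothetical example of the "few #ifdefs" cross-platform style:
 * pick the user's home directory via the POSIX convention on Unix-like
 * systems and the usual environment variable on Windows. */
#include <stdlib.h>

const char *user_home_dir(void)
{
#if defined(_WIN32)
    /* Windows convention: USERPROFILE points at the profile directory. */
    const char *home = getenv("USERPROFILE");
#else
    /* POSIX convention: HOME is set by the login process. */
    const char *home = getenv("HOME");
#endif
    return home ? home : ".";
}
```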

Will static linking on one unix distribution work but not another?

If I statically link an executable on Ubuntu, is there any chance that that executable won't work within another distribution such as Mint? Or Fedora? I know processor types matter, but other than that, is there anything else I have to be wary of? Sorry if this is a dumb question. Thanks for any help.
There are a few corner cases, but for the most part, you should be in good shape with static linking. The one that comes to mind is libnss. This particular library is essentially impossible to link statically, because of the way it does its job (permissions, authentication, security tasks). As long as the glibc versions are similar, you should be OK on this issue, though.
If your program needs to work with subtle features of the kernel, like volume managers, you've got a pretty slim chance of getting your program to work, statically linked, across distros, because the kernel interfaces may change slightly.
Most typical applications (the kind for which it even makes sense to discuss portability, such as network services, GUI applications, and language tools like compilers and interpreters) won't have a problem with any of this.
If you statically link a program on one computer and then move it to another computer where the system basically runs the same way, it should work just fine. That's the point of static linking: there are no other files the program depends on. It's entirely self-contained, so as long as it can run at all, it will run the same way it does on its "host" system.
This contrasts with dynamic linking, in which the program incorporates elements of other files (libraries) at runtime. If you move a dynamically linked program to another system where the libraries it depends on are different (or nonexistent), it won't work.
In most cases, your executable will work just fine. As long as your executable doesn't depend on anything unusual being present for it to function, there will be no problem. (And, if it does depend on something unusual being present, then you'll have the same issue even if you dynamically link.)
Static linking is usually safer than dynamic linking for compatibility between different UNIX environments, as long as the same CPU architecture is in use.
To have a statically linked binary fail, again assuming the same processor architecture, you would have to do something such as link on a system using the a.out binary format and try to execute it on a system running ELF, in which case the dynamically linked version would fail just as badly.
So why do people not routinely link statically? Two reasons:
It makes the executable larger, sometimes MUCH larger, and
If bugs in the libraries are fixed, you'll have to relink your program to get access to the bug fixes. If a critical security bug is fixed in the libraries, you have to relink and redistribute your exe.
On the contrary. Whatever your chances are of getting a binary to work across distributions or even OSes, those chances are maximized by static linking. Static linking makes an executable self-contained in terms of libraries. It can still go wrong if it tries to read a file that's not there on another system.
For even better chances of portability, try linking against dietlibc or some other libc. An article at Linux Journal mentions some candidates. A smaller, simpler libc is less likely to depend on things in the filesystem that differ from distro to distro.
For the reasons noted above, I would avoid statically linking something unless you absolutely must.
That being said, it should work on any other similar kernel of the same architecture. But note, for example, that if you statically link on a machine running Linux 2.4.x, the loader VDSO is going to be different on Linux 2.6 (the VDSO, or virtual dynamic shared object, is a shared object that the kernel exposes to every process and that contains loader code).
Other pitfalls include things in /etc not being where you'd expect, logs being in different places, system utilities being absent or different (Ubuntu uses update-rc.d, RHEL uses chkconfig), etc.
Sometimes you just have no choice. I was writing a program that talked to LVM2's string-based cmdlib interface instead of using execv(). Lo and behold, 30% of the distros I needed to support did NOT include that library and offered no way of getting it, so I had to link against the static object when producing binary packages.
If you are using glibc, you can be confident that stuff like getpwnam() and friends will still work; just make sure to watch any hardcoded paths (better yet, make them configurable at run time, as in the sketch below).
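A minimal sketch of the "make hardcoded paths configurable at run time" advice; the default path and the environment variable name here are purely illustrative.

```c
/* Sketch of the "don't hardcode paths" advice: allow an environment
 * variable to override a compiled-in default, so the same static binary
 * can adapt to distros that lay out the filesystem differently. */
#include <stdlib.h>

#define DEFAULT_CONFIG_PATH "/etc/myapp/myapp.conf"   /* illustrative default */

static const char *config_path(void)
{
    const char *override = getenv("MYAPP_CONFIG");    /* illustrative variable */
    return override ? override : DEFAULT_CONFIG_PATH;
}
```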
As long as you can guarantee it will only be executed on a similar version of the OS on similar hardware, your program will work fine if it is statically linked. So, if you build for a 2.6 Linux kernel and statically link, you will be fine running on (almost) all 2.6 Linux distributions.
Be warned that you can't statically link some parts of glibc, so if you're using them you'll have to link dynamically anyway. From memory, the name service (NSS) parts required dynamic linking when I was investigating it.
You can't statically link a program for (say) Linux then expect it to run on BSD or Windows. BSD and Unix don't present or handle their system calls in the same way Linux does. I tell a slight lie because the BSDs have a Linux emulation layer that can be enabled, but out of the box it won't work.
No, it will not work. Static linking for distribution independence is a concept from the old Unix days and is not recommended. In fact, you often can't do it, as many libraries are not available as static libraries anyway.
Follow the Linux Standard Base way; this is your only chance to get as much cross-distribution portability as possible.
The LSB also works fine if you program for FreeBSD and Solaris.
There are two compatibility questions at issue here: library versions and library inventory.
You don't say what libraries you are using.
If you have no '-l' options, then the only 'library' is glibc itself, which serves as the interface to the kernel. Glibc versions are upward compatible. If you link on a glibc 2.x system you can run on a glibc 2.y, for y > x. The developers make a firm commitment to this.
If you have -l options, static linking is always safe. If you are dynamically linked, you have to ensure that (1) the library is present on the target system, and (2) it has a compatible version. Your mileage may vary as to whether the target distro has what you need.
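Since this answer rests on glibc's upward compatibility, it can help to log both the glibc version a binary was built against and the one it runs on (mostly useful for the dynamically linked case; a statically linked binary carries its build-time glibc with it). A small glibc-specific sketch:

```c
/* Sketch: report the glibc version the binary was compiled against
 * (__GLIBC__/__GLIBC_MINOR__) and the one present at run time
 * (gnu_get_libc_version()); both facilities are glibc-specific. */
#include <stdio.h>
#include <gnu/libc-version.h>

int main(void)
{
    printf("built against glibc %d.%d\n", __GLIBC__, __GLIBC_MINOR__);
    printf("running on glibc %s\n", gnu_get_libc_version());
    return 0;
}
```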

Which environment, IDE or interpreter should I use to practice Scheme?

I've been making my way through The Little Schemer and I was wondering what environment, IDE or interpreter would be best to use in order to test any of the Scheme code I jot down for myself.
Racket (formerly DrScheme) has a nice editor, several different Scheme dialects, an attempt at visual debugging, lots of libraries, and can run on most platforms. It even has some modes specifically geared around learning the language.
I would highly recommend both Chicken and Gauche for Scheme.
PLT Scheme (DrScheme) is one of the best IDEs out there, especially for Scheme. The package you get when downloading it contains all you need for developing Scheme code - libraries, documentation, examples, and so on. Highly recommended.
If you just want to test your scheme code, I would recommend PLT Scheme. It offers a very complete environment, with debugger, help, etc., and works on most platforms.
But if you also want to get an idea of how the interpreter works behind the scenes, and you have Visual Studio, I would recommend Tachy. It is a very lightweight Scheme interpreter written in C#. It allows you to debug just your Scheme code, or also step through the C# interpreter behind the scenes to see what is going on.
Just for the record I have to mention IronScheme.
IronScheme aims to be an R6RS-conforming Scheme implementation based on the Microsoft DLR.
Version 1.0 Beta 1 was just released. I think this should be good implementation for someone that is already using .NET framework.
EDIT
Current version is 1.0 RC 1 from Oct 23 2009
Google for the book's authors (Daniel Friedman and Matthias Felleisen). See whether either of them is involved with a popular, free, existing Scheme implementation.
It doesn't matter, as long as you subscribe to the mailing list (wiki/IRC/online community site) for the associated community. It's probably worth taking a look at the list description and archives to be sure you are in the right one.
Most of these are friendly and welcoming to newcomers, so don't be afraid to ask.
It's also worth searching the archives of their mailing list (or FAQ or whatever they use) when you have a question, just in case it is a frequent question.
Good Luck!
Guile running under Geiser within Emacs provides a nice, lightweight implementation for doing the exercises. Racket will also run under Geiser and Emacs, though I personally prefer Guile and Chez Scheme a bit more.
Obviously the installation of each will depend on your OS. I would recommend using Emacs version 24 or later, since this allows you to use MELPA or Marmalade to install Geiser and other Emacs extensions.
The current version of Geiser also works quite nicely with Chicken Scheme, Chez Scheme, MIT Scheme and Chibi Scheme.
LispMe works on a Palm Pilot: take it anywhere and Scheme on the go. A GREAT way to learn Scheme.
I've used PLT as mentioned in some of the other posts and it works quite nicely. One that I have read about but have not used is Allegro Common LISP Express. I read a stellar review about their database app called Allegro Cache and found that they are heavy into LISP. Like I said, I don't know if it's any good, but it might be worth a try.
I am currently working through The Little Schemer as well and use Emacs as my environment, along with Quack, which adds additional support and utilities for scheme-mode within Emacs.
If you are planning on experimenting with other Lisps (e.g. Common Lisp), Emacs has excellent support for those dialects as well (Emacs itself can be customized with its own dialect of Lisp, appropriately named Emacs Lisp).
As far as Scheme implementations go, I am currently using Petit Chez Scheme, which is an interpreted, freely distributable version of Chez Scheme (which uses a compiler and costs money to obtain a license).