Felix is yet another programming language, so what special features does it have?
Felix is a high-performance scripting language which generates efficient C++. Its motto is "hyperlight", meaning it is intended to run faster than C. This is achieved by extensive static whole-program analysis and inlining.
The language is used like Python by simply running scripts:
flx hello.flx
but underneath, your code is translated to C++, compiled to a machine binary, and then run. Extensive dependency checking and caching optimise build time automatically, and a pkgconfig-like database is used to fully automate linking of external libraries, including the Felix runtime. C, Objective C, C++, and Objective C++ can also be compiled and linked, and can also use the autolinking feature, allowing C++ code to be run like a script too: say goodbye to makefiles and build systems!
The language provides an optional garbage collector but also supports manual memory management. It conforms to your system C++ compiler's ABI and allows embedding C++ types and functions easily:
type Int = "int";
const One : Int = "1";
fun +: Int * Int -> Int = "$1+$2";
proc show: Int = "::std::cout << $1 << ::std::endl;"
requires header '#include <iostream>'
;
show (One + One + One);
Felix has a sophisticated, powerful first-order type system including explicit kinding constraints, parametric polymorphism, Haskell-style type classes, OCaml-style polymorphic variants, and support for polyadic (rank-independent) array programming using compact linear types.
Felix appears as a traditional Algol-like procedural language with a very strong functional programming subsystem, including support for monads. However, the procedural coding model is based on coroutines using channels to communicate. The resulting lightweight threading model can be elevated to true concurrency, achieving Go-level performance without sacrificing C/C++ compatibility.
Whilst many programming languages now provide operator overloading, and some even allow user-defined operators, Felix goes a lot further by placing the whole grammar in the user library. This allows the programmer to design Domain Specific Sub-Languages (DSSLs). For example, the Regular Definition DSSL allows one to write:
regdef cident = (underscore | letter) (underscore | letter | digit)*;
using a BNF-like syntax, the grammar for which is defined in user space. Similarly, bindings to Objective C can be conveniently expressed using the ObjC DSSL.
Source: https://github.com/felix-lang/felix
Homepage: http://felix-lang.org
Some docs: https://felix.readthedocs.io/en/latest/index.html
On the official website of gobject, we can read:
GObject, and its lower-level type system, GType, are used by GTK+ and most GNOME libraries to provide:
* object-oriented C-based APIs and
* automatic transparent API bindings to other compiled or interpreted languages
The first part seems clear to me but not the second one.
Indeed, when talking about gobject and bindings, the concept introduced is often gobject-introspection, but as far as I understand, gobject-introspection can be used to create .gir and .typelib for any documented C library, not only for gobject-based libraries.
Therefore I wonder what makes gobject particularly binding-friendly.
as far as I understand, gobject-introspection can be used to create .gir and .typelib for any documented C library, not only for gobject-based libraries.
That's not really true in practice. You can do some very basic stuff, but you have to write the GIR by hand (instead of just running a program which scans the source code). The only ones I'm aware of are those distributed with gobject-introspection (the *.gir files; the *.c files there are to avoid cyclical dependencies), and even those are generally only a fairly small subset of the C API.
As for other features, almost everything in GObject helps… the basic idea is that bindings often need RTTI. There are types like GValue (a simple box that stores a value plus its type information) and GClosure (for callbacks); properties and signals describe themselves with GTypes; and so on. If you use GObjects (instead of creating a new fundamental type) you get run-time data about inheritance and interfaces, and GObject's odd construction scheme even allows other languages to subclass types declared in C.
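As a rough illustration of the kind of run-time type information bindings rely on, here is a minimal GValue sketch (assuming the GLib/GObject development headers are installed; the build command shown is just one common setup):

/* A value box that carries its own GType, queryable at run time.
 * Build (one possible setup): gcc demo.c $(pkg-config --cflags --libs gobject-2.0) */
#include <glib-object.h>
#include <stdio.h>

int main(void) {
    GValue v = G_VALUE_INIT;
    g_value_init(&v, G_TYPE_INT);   /* attach type information */
    g_value_set_int(&v, 42);        /* store the value */

    /* A language binding can ask the value what it holds at run time. */
    printf("type: %s, value: %d\n",
           g_type_name(G_VALUE_TYPE(&v)),
           g_value_get_int(&v));

    g_value_unset(&v);
    return 0;
}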
The reason g-ir-scanner can't really do much on non-GObject libraries is that all that information is missing. After scanning the source code looking for annotations, g-ir-scanner will actually load the compiled module and use GObject's API to grab this information (which makes cross-compiling painful). In other words, GObject-Introspection is a much smaller project than you think… it gets a huge percentage of the data it needs from the GObject API.
I want to use several encodings in the presentation layer to encode an object/structure from the application layer independently of the encoding scheme (such as binary, XML, etc.) and programming language (Java, JavaScript, PHP, C).
An example would be to transfer an object from a producer to a consumer in a byte stream. The Java client would encode it using something like this:
Object var = new Dog();
output.writeObject(var);
The server would share the Dog class definitions and could regenerate the object doing something like this:
Object var = input.readObject();
assertTrue(var instanceof Dog); // passes
It is important to note that the producer and consumer would not share the type of var, and, therefore, the consumer would not need the type to decode var. They would only share data type definitions, if anything:
public interface Pojo {}
public class Dog implements Pojo { int i; String s; } // Generated by framework from a spec
What I found:
Java Serialization: It is language-dependent. It cannot be used with, for example, JavaScript.
Protobuf library: It is limited to a specific binary format. It is not possible to support additional binary formats. It needs the name of the class (the "class" of the message).
XStream, Simple, etc.: They are rather limited to text/XML and require the name of the class.
ASN.1: The standards are there and could be used with OBJECT IDENTIFIER and type definitions, but they are lacking in documentation and tutorials.
I prefer the 4th option because, among other things, it is a standard. Is there any active project that supports such requirements (especially something based on ASN.1)? Any usage example? Does the project include codecs (DER, BER, XER, etc.) that can be selected at runtime?
Thanks
You can find several open source and commercial implementations of tools for ASN.1. These usually include:
a compiler for the schema, which will generate code in your desired programming language
a runtime library which is used together with the generated code for encoding and decoding
ASN.1 is mainly used with the standardized communication protocols of the telecom industry, so the commercial tools have very good support for the ASN.1 standard and the various encoding rules.
Here are some starter tutorials and even free e-books:
http://www.oss.com/asn1/resources/asn1-made-simple/introduction.html
http://www.oss.com/asn1/resources/reference/asn1-reference-card.html
http://www.oss.com/asn1/resources/books-whitepapers-pubs/asn1-books.html
I know that the OSS ASN.1 commercial tools (http://www.oss.com/asn1/products/asn1-products.html) will support switching the encoding rules at runtime.
To add to bosonix's answer, there are also Objective Systems' tools at http://www.obj-sys.com/. The documentation from both OSS and Objective Systems includes many example uses.
ASN.1 is pretty much perfect for what you're looking for. I know of no other serialisation system that does this quite so thoroughly.
As well as a whole array of different binary encodings (ranging from the comprehensively tagged BER all the way down to the very packed-together PER), it does XML and now also JSON encodings too. These are well standardised by the ITU, so in theory it is fully interoperable between tool vendors, programming languages, OSes, etc.
There are other significant benefits to ASN.1. The schema language lets you define constraints on the values of message fields, or the sizes of arrays. These then get checked for you by the generated code. This is far more complete than many other serialisations. For instance, Google Protocol Buffers doesn't let you do this, meaning that you have to check the range of message fields (where applicable) in hand-written code. That's tedious, error-prone, and hard to maintain.
The only other ones that do this are XSD and JSON schemas. However, with those you're at the mercy of the varying quality of the tools used to turn them into source code - I've not yet seen any decent ones for JSON schemas. I'm not aware of whether or not Microsoft's xsd.exe honours such constraints either.
I want to know if the C libraries (system, string, data structure, database, etc.) are platform-dependent.
What parts of these libraries are dependent on a specific platform?
For example, how are regular expression, string manipulation, and SQL connectivity libraries dependent on a platform?
Can I use them on any platform for file I/O, paths, etc.,
just like we do things in Python using the sys/os modules?
I want to build a program which deals with strings, databases (SQLite3, MySQL, Oracle), data structures, file I/O, and system paths, and which can run on Windows, Linux, and Mac when recompiled on that platform.
I want it to be console-based.
Please don't recommend doing it in other programming languages; I want the C folks to answer, please.
Thanks
Well, as long as you use the standard C library, all is well. The GNU C Library (glibc) is one implementation of the C standard library, and Microsoft, for example, has its own implementation.
From a user's (your) perspective, the implementation doesn't matter. If you, for example,
#include <stdio.h>
then you can, on any standards-compliant platform, call fopen() and then use fread() for file I/O. Or any other standard C function.
Linux, Mac, and Windows are all standards-compliant (i.e. have implemented ISO C), and thus the standard functions do the same thing on all platforms. The file paths that you pass to fopen() are the same, too. The fact that Windows uses a backslash ( \ ) in file paths instead of the Unix way (forward slash, / ) doesn't matter: on Windows, in your C program, you use the Unix-style notation.
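For instance, a minimal sketch along those lines (the file name data/input.txt is just an illustrative example) compiles and behaves the same on all three platforms:

#include <stdio.h>

int main(void) {
    /* Forward-slash path works on Windows, Linux and Mac alike. */
    FILE *f = fopen("data/input.txt", "rb");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    char buf[256];
    size_t n = fread(buf, 1, sizeof buf, f);   /* standard C file I/O */
    printf("read %zu bytes\n", n);
    fclose(f);
    return 0;
}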
For C library functions, I suggest you ensure that you use functions within the POSIX standard. This will (at least in theory) guarantee compatibility with other POSIX platforms.
If you are programming on Linux using glibc (i.e. the normal C library), its documentation is reasonably good at pointing out which functions are GNU extensions, but the POSIX standard (referenced above) is the gold standard.
For other libraries, you'll have to look to the library in question.
Remember that if there are specific areas of incompatibility, you can use #ifdefs etc. around those bits, and keep the machine-specific elements in separate files.
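A minimal sketch of that pattern, using the predefined _WIN32 macro (the path-separator case is just one illustrative example):

#include <stdio.h>

/* Keep the platform-specific detail behind an #ifdef so the rest of
 * the program stays portable. _WIN32 is predefined by Windows compilers. */
#ifdef _WIN32
#define PATH_SEPARATOR '\\'
#else
#define PATH_SEPARATOR '/'
#endif

int main(void) {
    printf("path separator on this platform: %c\n", PATH_SEPARATOR);
    return 0;
}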
To the specific points you mentioned:
* The POSIX regexp functions (regcomp, regexec, regerror, regfree) are part of the POSIX standard (see the sketch after this list).
* Most of the string manipulation functions (e.g. strstr) are part of the POSIX standard. Some (e.g. asprintf) are GNU extensions.
* SQL connectivity is not provided by the C library. You will need a specific C library to deal with your SQL connection, or to use something like libdbi. You'll need to look at the specific library to see what support there is on other platforms.
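As a rough sketch of those POSIX regex calls (the pattern and the input string are arbitrary examples):

#include <regex.h>
#include <stdio.h>

int main(void) {
    regex_t re;
    /* Pattern for a C-style identifier: a letter or underscore followed
     * by letters, digits or underscores. */
    int err = regcomp(&re, "^[_A-Za-z][_A-Za-z0-9]*$", REG_EXTENDED | REG_NOSUB);
    if (err != 0) {
        char msg[128];
        regerror(err, &re, msg, sizeof msg);
        fprintf(stderr, "regcomp: %s\n", msg);
        return 1;
    }
    const char *input = "my_variable42";
    if (regexec(&re, input, 0, NULL, 0) == 0)
        printf("'%s' matches\n", input);
    else
        printf("'%s' does not match\n", input);
    regfree(&re);
    return 0;
}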
In particular, be careful with path manipulation on Windows (think about slashes vs backslashes, and drive letters), specifically how they are input by the user and what is passed to the function.
I have some questions:
Is a dynamic programming language always interpreted? I think so, but why?
Are there any dynamic languages with a static typing system?
Is a programming language with a static typing system always compiled?
In other words, is there really a link between:
Static / dynamic typing system and static / dynamic language
Static / dynamic typing system and compiler / interpreter
Static / dynamic language and compiler / interpreter
There is no inherent connection between the type system and the method of execution. Dynamic languages can be compiled and static languages can be interpreted. Arguably static type systems make a lot of sense with programs which are compiled before execution, as a method of catching certain kinds of errors before the program is ever executed. However, dynamic type systems solve different problems than static type systems, and interpreted execution solves different problems than compilation.
See What to know before debating type systems.
Is a dynamic programming language always interpreted? I think so, but why?
No. Most dynamic languages in wide use internally compile to either bytecode or machine code ("JIT"). There are also a number of ahead-of-time compilers for dynamically typed languages, including several for Scheme and Lisp, as well as other languages.
Are there any dynamic languages with a static typing system?
Yes. The terms you are looking for here are "optional typing" and "gradual typing".
Is a programming language with a static typing system always compiled?
Most are, but this isn't strictly required. Many statically typed functional languages like ML, F#, and Haskell support an interactive mode where it will interpret (or internally compile and execute) code on the fly. Go also has a command to immediately compile and run code directly from source.
In other words, is there really a link between:
Static / dynamic typing system and static / dynamic language
Static / dynamic typing system and compiler / interpreter
Static / dynamic language and compiler / interpreter
There's a soft link between the two. Most people using dynamically typed languages are using them in part because they want quick iteration while they develop. Meanwhile, most people using statically typed languages want to catch as many errors as early as they can. That means that dynamically typed languages tend to be run directly from source while statically typed languages tend to compile everything ahead of time.
But there's no technical reason preventing you from mixing it up.
I am learning DTrace, and it is a very powerful tool. But one problem is that DTrace outputs too much information, and most of it comes from NS classes.
But my question is: how can I filter out system classes if users' classes do not have a proper prefix?
(There was a similar Stack Overflow question for this topic, [How to detect without the system method or system framework with DTrace on Mac OS X?].)
DTrace uses a filename-generation-like (glob) syntax to specify probe names. E.g. you can match on the first characters of a class name by using the brackets [ and ].
E.g. if you want to filter out all NS* classes:
objc$target::[ABCDEFGHIJKLMOPQRSTUVWXYZ]*:entry (N is removed)
objc$target::N[ABCDEFGHIJKLMNOPQRTUVWXYZ]*:entry (S is removed)
But you have to repeat this for each prefix Apple uses, like CA, IK, etc.