I have a C structure that allows users to configure options in an embedded system. Currently the GUI we use for this is custom-written for every different version of this configuration structure. What I'd like is to be able to describe the structure members in some format that can be read by the client configuration application, making it universal across all of our systems.
I've experimented with describing the structure in XML and having the client read the file; this works in most cases except those where some of the fields have inter-dependencies. So the format that I use needs to have a way to specify these; for instance, member A must always be less than or equal to half of member B.
Thanks in advance for your thoughts and suggestions.
EDIT:
After reading the first reply I realized that my question is indeed a little too vague, so here's another attempt:
The embedded system needs to have access to the data as a C struct, running any other language on the processor is not an option. Basically, all I need is a way to define metadata with the structure, this metadata will be downloaded onto flash along with firmware. The client configuration utility will then read the metadata file over RS-232, CAN etc. and populate a window (a tree-view) that the user can then use to edit options.
The XML file that I mentioned tinkering with was doing exactly that: it contained the structure member name, data type, number of elements, etc. The location of the member within the XML file implicitly defined its position in the C struct. This file resides on flash and is read by the configuration program; the only thing lacking is a way to define dependencies between structure fields.
The code is generated automatically using MATLAB / Simulink so I do have access to a scripting language to help with the structure creation. For example, if I do end up using XML the structure will only be defined in the XML format and I'll use a script to create the C structure during code generation.
Hope this is clearer.
For the simple case where there is either no relationship or a relationship with a single other field, you could add two fields to the structure: the "other" field number and a pointer to a function that compares the two. Then you'd need to create functions that compare two values and return true or false depending upon whether or not the relationship is met. You'd actually need two functions per relationship: one that tests it and one that tests its inverse (i.e. if field 1 needs to be greater than field 2, then field 2 needs to be less than or equal to field 1). If you need to place more than one restriction on the range, you can store a pointer to a list of function/field pairs.
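To make that concrete, here is a rough sketch of the idea in C; all the names and the specific rule are made up for illustration, using the "member A must be at most half of member B" example from the question:

#include <stdbool.h>

/* Each field may name one "other" field and a predicate that must hold
   between the two values (illustrative names only). */
typedef bool (*relation_fn)(int this_value, int other_value);

typedef struct
{
    const char *name;
    int         other_field;   /* index of the related field, -1 if none */
    relation_fn check;         /* NULL if there is no constraint */
} field_rule;

/* member A must always be <= half of member B ... */
static bool le_half_of(int a, int b)   { return a <= b / 2; }
/* ... and the inverse rule attached to member B */
static bool ge_double_of(int b, int a) { return b >= 2 * a; }

static const field_rule rules[] =
{
    { "member_a", 1, le_half_of   },
    { "member_b", 0, ge_double_of },
};

/* Called by the configuration tool whenever a field is edited. */
bool field_is_valid(int field, const int *values)
{
    const field_rule *r = &rules[field];
    if (r->check == NULL || r->other_field < 0)
        return true;
    return r->check(values[field], values[r->other_field]);
}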
An alternative is to create a validation function for every field and call it when the field is changed. Obviously this function could be as complex as you wanted but might require more hand coding.
In theory you could generate the validation functions for either of the above techniques from the XML description that you described.
I would have expected you to get some answers by now, but let me see what I can do.
Your question is a bit vague, but it sounds like you want one of:
Code generation
An embedded extension language
A hand coded run-time mini language
Code Generation
You say that you are currently hand tooling the configuration code each time you change this. I'm willing to bet that this is a highly repetitive task, so there is no reason you can't write a program to do it for you. Your generator should consume some domain-specific language and emit C code and header files which you subsequently build into your application. An example of what I'm talking about here would be GNU gengetopt. There is nothing wrong with the idea of using XML for the input language.
Advantages:
the resulting code can be both fast and compact
there is no need for an interpreter running on the target platform
Disadvantages:
you have to write the generator
changing things requires a recompile
Extension Language
Tcl, Python and other languages work well in conjunction with C code, and will allow you to specify the configuration behavior in a dynamic language rather than mucking around with C typing and strings and so on.
Advantages:
dynamic language probably means the configuration code is simpler
change configuration options without recompiling
Disadvantages:
you need the dynamic language running on the target platform
Mini language
You could write your own embedded mini-language.
Advantages:
No need to recompile
Because you write it yourself, it will run on your target
Disadvantages:
You have to write it yourself
How much does the struct change from version to version? When I did this kind of thing I hardcoded it into the PC app, which then worked out what the packet meant from the firmware version - but the only changes were usually an extra field added onto the end every couple of months.
I suppose I would use something like the following if I wanted to go down the metadata route.
#include <string.h>   /* strcpy, memcpy */

/* The data structure being configured */
typedef struct
{
    unsigned char  field1;
    unsigned short field2;
    unsigned char  a_string[4];
} data;

/* Type codes used in the metadata (values are arbitrary) */
#define TYPE_UCHAR  0
#define TYPE_USHORT 1
#define TYPE_STRING 2

/* Per-field metadata sent to the configuration tool */
typedef struct
{
    char           name[16];
    unsigned char  type;
    unsigned short min;   /* wide enough to hold the 0xffff limit below */
    unsigned short max;
} field_info;

field_info fields[3];

void init_meta(void)
{
    strcpy(fields[0].name, "field1");
    fields[0].type = TYPE_UCHAR;
    fields[0].min  = 1;
    fields[0].max  = 250;

    strcpy(fields[1].name, "field2");
    fields[1].type = TYPE_USHORT;
    fields[1].min  = 0;
    fields[1].max  = 0xffff;

    strcpy(fields[2].name, "a_string");
    fields[2].type = TYPE_STRING;
    fields[2].min  = 0;   /* n/a */
    fields[2].max  = 0;   /* n/a */
}

/* rs232_packet and send_packet() are assumed to come from the existing comms layer */
void send_meta(void)
{
    rs232_packet packet;
    memcpy(packet.payload, fields, sizeof(fields));
    packet.length = sizeof(fields);
    send_packet(packet);
}
Related
I have the following in my constants file:
typedef enum
{
    AnimalTypeBear,
    AnimalTypeCamel,
    AnimalTypeCow,
    AnimalTypeCount
} AnimalType;
If I declare an AnimalType variable somewhere in my code like following and set it to AnimalTypeBear:
AnimalType animalType = 0;
Is there a way to somehow derive the string "Bear" from that animalType variable, or in general to access the string of its corresponding constant (in this case AnimalTypeBear)?
Enum values are constant expressions, much like #define. At compile time the enum names are translated into plain integer constants (whereas #define is expanded before compilation), so the names themselves do not exist at runtime and it is not possible to recover the string this way.
As suggested by others you can use a string array.
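For example, a minimal sketch of that approach in C; the array and helper names are just placeholders, and the entries must stay in the same order as the enum values:

static const char *AnimalTypeNames[AnimalTypeCount] =
{
    "Bear",
    "Camel",
    "Cow"
};

const char *AnimalTypeToString(AnimalType t)
{
    if ((unsigned)t >= AnimalTypeCount)
        return "Unknown";
    return AnimalTypeNames[t];
}

/* AnimalType animalType = 0;                                 */
/* printf("%s", AnimalTypeToString(animalType));  -> "Bear"   */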
You cannot do this without code in (Objective-)C. If you want to be able to use actual enumeration literals as strings in your code, or during I/O, with language support then you need to use a language with enumeration type support such as Pascal or Ada.
If you are keen to have this and don't mind work as long as it is reusable, then you need to learn about reading the symbol table structures from a binary and make sure that information is not stripped from your application. You'll see that the debugger can show the correct literals; also, if you use Xcode's "Product > Generate Output > Assembly File" menu item you'll see the literals are in there as strings. It would be a lot of work for you, but would be reusable once done.
After that give up and write some code - a simple static array of labels and an index operation. Yes, it's a maintenance headache if you ever change your enumeration.
Alternatively you can write some different code, say in Ruby... Xcode supports adding your own file "types" and running scripts to (pre-)process them. So you could define, say, a file type ".enum" and use a Ruby script to convert that into a C enumeration definition plus code to provide the strings. Apple has examples of using Ruby to pre-process files in this way. Once you have your script Xcode will do the rest: on each compilation it will run your script to convert your ".enum" into ".m" (or ".c") and then compile the result. This approach is usually best, though, for files which contain only one thing, e.g. localised string file processing; you don't usually write your enum declarations in their own files.
Using Objective-C, I have 2 classes that use hardware and are written in C + Objective-C.
The other classes in the project are Objective-C, and they create instances of those classes.
My question:
Let's say I have classA.m and classB.m. They both have an integer const that needs to be the same, say: const int numOfSamples = 7;
I am looking for the best way to create some configuration file that holds all these const variables so that both A and B can see them.
I know some ways, but I wonder what's the RIGHT thing to do:
1. Just create a configuration.m and write them into it.
2. Use a singleton that holds all the globals.
Number 1 seems the best to me, but how exactly should I do it?
Thanks.
For approach 1 to work, you need to define two files:
a header file where you declare all of your constants;
a .m file where your constants are defined and initialized.
In your example:
/* .h file */
extern const int numOfSamples;
/* .m or .c file */
const int numOfSamples = 7;
Then, you include the .h header in every other file where you need those constants. Notice the extern keyword: it declares the variable without also defining it; this way, you can include the .h file in multiple places without getting a duplicate symbol error.
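For instance, a minimal usage sketch (Constants.h is just a placeholder name for the header described above):

/* classA.m -- the same include works in classB.m or any other file */
#include "Constants.h"    /* brings in: extern const int numOfSamples; */

void process_samples(void)
{
    for (int i = 0; i < numOfSamples; i++) {
        /* both classes read the same shared constant */
    }
}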
EDIT:
The approach I suggest is the correct way to handle global variables in a C program.
Now, if global variables are a good thing or not, well, that is a longer story.
Generally speaking, global variables are tricky and go against a 40-year-long effort towards better encapsulation (a.k.a. information hiding) of data and behavior in a program (see "On the Criteria to Be Used in Decomposing Systems into Modules", David Parnas, 1972).
To further explain this, one aspect of the problem is exactly what you mention in your comment: the possibility of one module changing the value of a global variable and thus affecting the whole behavior of the program. This is widely recognized as bad and leads to uncontrollable side effects (in any non-trivially sized program).
In your case, I think things are a bit different, in that you are talking about "configuration" and "const" values. This is an altogether different case than the general one and I think you could safely use a header file of consts to that aim.
That said, you understand that configuration as a whole is a huge topic in general. E.g., you could need mechanisms to change your program configuration on the fly; in that case the constant-header approach would not be appropriate. Or, your program configuration could depend on the state of some remote system (imagine: you are logged in vs. you are not logged in).
I can't guarantee that using a header file is the best approach for your case, but I hope that the above discussion and the example I gave you can help you figure that out.
I think the best way is to use a plist file with all your configuration values.
If you have few configuration values, you can use the Info.plist file.
I'm implementing a dynamic language that will compile to C#, and it's implementing its own reflection API (.NET's is too slow, and the DLR is limited only to more recent and resourceful implementations).
For this, I've implemented a simple .GetField(string f) and .SetField(string f, object val) interface. Until recently, the implementation just switched over all possible field string values and performed the corresponding action.
Also, this dynamic language has the possibility to define anonymous objects. For those anonymous objects, at first, I had implemented a simple hash algorithm.
By now, I am looking for ways to optimize the dynamic parts of the language, and I have come to the conclusion that a hash algorithm for anonymous objects would be overkill. This is because the objects are usually small: I'd say they normally contain 2 or 3 fields, and very rarely more than 15. It would take more time to actually hash the string and perform the lookup than to simply test them all for equality (this is not tested, just theoretical).
The first thing I did was to create, at compile time, a red-black tree for each anonymous object declaration and lay it out in an array so that the object can search it in a very optimized way.
I am still undecided, though, whether that's the best way to do this. I could go for a perfect hashing function. Even more radically, I'm thinking about dropping the need for strings and actually working with a struct of 2 longs.
Those two longs will be encoded to support 10 chars (A-Za-z0-9_) each, which is mostly a good prediction of the size of the fields. For fields larger than this, a special (slower) function receiving a string will also be provided.
The result will be that strings will be inlined (not references), and their comparisons will be as cheap as a long comparison.
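A rough sketch of what that packing could look like in C; the 6-bit encoding and all the names here are my own assumptions for illustration, not an existing API:

#include <stdint.h>
#include <string.h>

/* Pack up to 10 identifier characters into one 64-bit word, 6 bits per
   character (A-Z, a-z, 0-9 and '_' map to the codes 1..63; 0 = empty slot). */
static uint64_t pack10(const char *s, size_t len)
{
    uint64_t word = 0;
    for (size_t i = 0; i < 10 && i < len; i++) {
        char c = s[i];
        uint64_t code;
        if (c >= 'A' && c <= 'Z')      code = (uint64_t)(c - 'A') + 1;
        else if (c >= 'a' && c <= 'z') code = (uint64_t)(c - 'a') + 27;
        else if (c >= '0' && c <= '9') code = (uint64_t)(c - '0') + 53;
        else                           code = 63;   /* '_' */
        word |= code << (6 * i);
    }
    return word;
}

typedef struct { uint64_t lo, hi; } field_key;

/* Build a key covering up to 20 characters; longer names would fall back
   to the slower string-based path mentioned above. */
static field_key make_key(const char *name)
{
    size_t len = strlen(name);
    field_key k;
    k.lo = pack10(name, len);
    k.hi = (len > 10) ? pack10(name + 10, len - 10) : 0;
    return k;
}

/* Comparing two field names is now just two integer comparisons. */
static int key_equal(field_key a, field_key b)
{
    return a.lo == b.lo && a.hi == b.hi;
}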
Anyway, it's a little hard to find good information about this kind of optimization, since it is normally discussed at the VM level, not for an implementation that compiles to a static language.
Does anyone have any thoughts or tips about the best data structure to handle dynamic calls?
Edit:
For now, I'm really going with the string as long representation and a linear binary tree lookup.
I don't know if this is helpful, but I'll throw it out there just in case.
If this is compiling to C#, do you know the complete list of fields at compile time? So as an idea, if your code reads
// dynamic
myObject.foo = "some value";
myObject.bar = 32;
then during the parse, your symbol table can build an int for each field name;
// parsing code
symbols[0] == "foo"
symbols[1] == "bar"
then generate code using arrays or lists;
// generated c#
runtimeObject[0] = "some value"; // assign myobject.foo
runtimeObject[1] = 32; // assign myobject.bar
and build up reflection as a separate array;
runtimeObject.FieldNames[0] == "foo"; // Dictionary<int, string>
runtimeObject.FieldIds["foo"] == 0; // Dictionary<string, int>
As I say, thrown out in the hope it'll be useful. No idea if it will!
Since you are likely to be using the same field and method names repeatedly, something like string interning would work well to quickly generate keys for your hash tables. It would also make string equality comparisons constant-time.
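A minimal sketch of interning in C, assuming a fixed-size pool and a linear scan (a real implementation would use a growable hash table, and strdup is POSIX):

#include <stdlib.h>
#include <string.h>

/* Every distinct string is stored once and the canonical pointer is returned,
   so later equality checks are just pointer comparisons. */
#define MAX_INTERNED 1024

static const char *pool[MAX_INTERNED];
static int pool_count = 0;

const char *intern(const char *s)
{
    for (int i = 0; i < pool_count; i++)
        if (strcmp(pool[i], s) == 0)
            return pool[i];              /* already interned */
    if (pool_count == MAX_INTERNED)
        abort();                         /* pool exhausted; a real table would grow */
    pool[pool_count] = strdup(s);        /* keep our own copy */
    return pool[pool_count++];
}

/* After interning, "same field name" is a pointer comparison:   */
/*   const char *a = intern("foo"), *b = intern("foo");          */
/*   a == b  evaluates to true                                   */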
For such a small data set (an expected upper bound of 15) I think almost any hashing will be more expensive than a tree or even a list lookup, but that is really dependent on your hashing algorithm.
If you want to use a dictionary/hash then you'll need to make sure the objects you use for the key return a hash code quickly (perhaps a single constant hash code that's built once). If you can prevent collisions inside of an object (sounds pretty doable) then you'll gain the speed and scalability (well for any realistic object/class size) of a hash table.
Something that comes to mind is Ruby's symbols and message passing. I believe Ruby's symbols act as constants that are little more than a memory reference. So comparison is constant-time, they are very lightweight, and you can use symbols like variables (I'm a little hazy on this and don't have a Ruby interpreter on this machine). Ruby's method "calling" really turns into message passing. Something like obj.func(arg) turns into obj.send(:func, arg) (":func" is the symbol). I would imagine that the symbol makes looking up the message handler (as I'll call it) inside the object pretty efficient, since its hash code most likely doesn't need to be calculated like most objects'.
Perhaps something similar could be done in .NET.
I'm having a hard time understanding what I'm supposed to do. The only thing I've figured out is I need to use yacc on the cminus.y file. I'm totally confused about everything after that. Can someone explain this to me differently so that I can understand what I need to do?
INTRODUCTION:
We will use lex/flex and yacc/Bison to generate an LALR parser. I will give you a file called cminus.y. This is a yacc format grammar file for a simple C-like language called C-minus, from the book Compiler Construction by Kenneth C. Louden. I think the grammar should be fairly obvious.
The Yahoo group has links to several descriptions of how to use yacc. Now that you know flex it should be fairly easy to learn yacc. The only base type is int. An int is 4 bytes. Booleans are handled as ints, as in C. (Actually the grammar allows you to declare a variable as a type void, but let's not do that.) You can have one-dimensional arrays.
There are no pointers, but references to array elements should be treated as pointers (as in C).
The language provides for assignment, IF-ELSE, WHILE, and function calls and returns.
We want our compiler to output MIPS assembly code, and then we will be able to run it on SPIM. For a simple compiler like this with no optimization, an IR should not be necessary. We can output assembly code directly in one pass. However, our first step is to generate a symbol table.
SYMBOL TABLE:
I like Dr. Barrett’s approach here, which uses a lot of pointers to handle objects of different types. In essence the elements of the symbol table are identifier, type and pointer to an attribute object. The structure of the attribute object will differ according to the type. We only have a small number of types to deal with. I suggest using a linear search to find symbols in the table, at least to start. You can change it to hashing later if you want better performance. (If you want to keep in C, you can do dynamic allocation of objects using malloc.)
First you need to make a list of all the different types of symbols that there are—there are not many—and what attributes would be necessary for each. Be sure to allow for new attributes to be added, because we have not covered all the issues yet. Looking at the grammar, the question of parameter lists for functions is a place where some thought needs to be put into the design. I suggest more symbol table entries and pointers.
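To illustrate the shape being described, here is a rough sketch in C; the type codes and attribute structs are only examples of the idea, not the required design:

#include <stdlib.h>
#include <string.h>

typedef enum { SYM_INT, SYM_INT_ARRAY, SYM_FUNCTION } sym_type;

/* One table element: identifier, type, and a pointer to a type-specific
   attribute object, kept in a linked list so lookup is a linear search. */
typedef struct symbol
{
    char          *name;
    sym_type       type;
    void          *attributes;   /* points to func_attr, array_attr, ... */
    struct symbol *next;
} symbol;

typedef struct { int num_params; sym_type *param_types; } func_attr;
typedef struct { int length; } array_attr;

static symbol *table = NULL;

symbol *lookup(const char *name)
{
    for (symbol *s = table; s != NULL; s = s->next)
        if (strcmp(s->name, name) == 0)
            return s;
    return NULL;
}

symbol *insert(const char *name, sym_type type, void *attributes)
{
    symbol *s = malloc(sizeof *s);   /* dynamic allocation, as suggested */
    s->name = strdup(name);
    s->type = type;
    s->attributes = attributes;
    s->next = table;
    table = s;
    return s;
}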
TESTING:
The grammar is correct, so if you take the existing grammar as it is and generate a parser, the parser will accept a correct C-minus program but won't produce any output, because there are no code snippets (actions) associated with the rules.
We want to add code snippets to build the symbol table and print information as it does so.
When an identifier is declared, you should print the information being entered into the symbol table. If a previous declaration of the same symbol in the same scope is found, an error message should be printed.
When an identifier is referenced, you should look it up in the table to make sure it is there. An error message should be printed if it has not been declared in the current scope.
When closing a scope, warnings should be generated for unreferenced identifiers.
Your test input should be a correctly formed C-minus program, but at this point nothing much will happen on most of the production rules.
SCOPING:
The most basic approach has a global scope and a scope for each function declared.
The language allows declarations within any compound statement, i.e. scope nesting. Implementing this will require some kind of scope numbering or stacking scheme. (Stacking works best for a one-pass compiler, which is what we are building.)
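One possible shape for the stacking scheme, again only a sketch with invented names (it pairs naturally with the symbol-table sketch above):

#include <stdio.h>
#include <string.h>

typedef struct scoped_symbol
{
    char                 *name;
    int                   referenced;   /* set when the identifier is used */
    struct scoped_symbol *next;
} scoped_symbol;

#define MAX_SCOPE_DEPTH 32

static scoped_symbol *scope_stack[MAX_SCOPE_DEPTH];  /* one symbol list per open scope */
static int            scope_top = 0;                 /* index 0 = global scope */

void open_scope(void)  { scope_stack[++scope_top] = NULL; }

void close_scope(void)
{
    /* warn about unreferenced identifiers before discarding the scope */
    for (scoped_symbol *s = scope_stack[scope_top]; s != NULL; s = s->next)
        if (!s->referenced)
            printf("warning: '%s' declared but never referenced\n", s->name);
    scope_top--;
}

/* Look a name up from the innermost scope outward. */
scoped_symbol *lookup_visible(const char *name)
{
    for (int level = scope_top; level >= 0; level--)
        for (scoped_symbol *s = scope_stack[level]; s != NULL; s = s->next)
            if (strcmp(s->name, name) == 0)
                return s;
    return NULL;
}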
(disclaimer) I don't have much experience with compiler classes (as in school courses on compilers) but here's what I understand:
1) You need to use the tools mentioned to create a parser which, when given input, will tell the user whether that input is a correct program with respect to the grammar defined in cminus.y. I've never used yacc/bison so I don't know how it is done, but this is what seems to happen:
(input) a file of some sort which contains the program to be parsed
(output) a reply of some sort which tells whether the (input) is correct with respect to the provided grammar.
2) It also seems that the parser needs to check for variable consistency (i.e. you can't use a variable you haven't declared, same as in any programming language), which is done via a symbol table. In short, every time something is declared you add it to the symbol table. When you encounter an identifier, if it is not one of the language keywords (like if or while or for), you look it up in the symbol table to determine whether it has been declared. If it is there, go on; if it's not, print some sort of error.
Note: point (2) is a simplified take on a symbol table; in reality there's more to it than I just wrote, but that should get you started.
I'd start with yacc examples - see what yacc can do and how it does it. I guess there must be some big example-complete-with-symbol-table out there which you can read to understand further.
Example:
Let's take input A:
int main()
{
    int a;
    a = 5;
    return 0;
}
And input B:
int main()
{
    int a;
    b = 5;
    return 0;
}
and assume we're using C syntax for parsing. Your parser should deem Input A all right, but should yell "b is undeclared" for Input B.
In my language I can use a class variable in my method when the definition appears below the method. It can also call methods defined below my method, etc. There are no 'headers'. Take this C# example:
class A
{
    public void callMethods() { print(); B b = new B(); b.notYetSeen(); }
    public void print() { Console.Write("v = {0}", v); }
    int v = 9;
}

class B
{
    public void notYetSeen() { Console.Write("notYetSeen()\n"); }
}
How should I compile that? What I was thinking is:
pass1: convert everything to an AST
pass2: go through all classes and build a list of defined classes/variables/etc.
pass3: go through the code and check for errors such as undefined variables, wrong usage, etc., and create my output
But it seems like for this to work I have to do passes 1 and 2 for ALL files before doing pass 3. It also feels like a lot of work to do before I find a syntax error (other than the obvious ones that can be caught at parse time, such as forgetting to close a brace or writing 0xLETTERS instead of a hex value). My gut says there is some other way.
Note: I am using bison/flex to generate my compiler.
My understanding of languages that handle forward references is that they typically just use the first pass to build a list of valid names. Something along the lines of just putting an entry in a table (without filling out the definition) so you have something to point to later when you do your real pass to generate the definitions.
If you try to actually build full definitions as you go, you would end up having to rescan repeatedly, each time saving any references to undefined things until the next pass. Even that would fail if there are circular references.
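In C-ish terms, a sketch of that "declare now, define later" table might look like this (all names here are invented for illustration):

#include <stdlib.h>
#include <string.h>

/* Pass 1 only records that a name exists; pass 2 fills in the definition,
   so forward (and even circular) references always have something to point to. */
typedef struct entry
{
    char         *name;
    int           kind;         /* class, method, field, ... (placeholder codes) */
    void         *definition;   /* NULL after pass 1, filled in during pass 2 */
    struct entry *next;
} entry;

static entry *names = NULL;

/* Pass 1: declare a name without defining it (no-op if already present). */
entry *declare(const char *name, int kind)
{
    for (entry *e = names; e != NULL; e = e->next)
        if (strcmp(e->name, name) == 0)
            return e;
    entry *e = malloc(sizeof *e);
    e->name = strdup(name);
    e->kind = kind;
    e->definition = NULL;
    e->next = names;
    names = e;
    return e;
}

/* Pass 2: a reference is an error only if the name was never declared
   anywhere in the program, no matter where its definition appears. */
entry *resolve(const char *name)
{
    for (entry *e = names; e != NULL; e = e->next)
        if (strcmp(e->name, name) == 0)
            return e;
    return NULL;
}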
I would go through on pass one and collect all of your class/method/field names and types, ignoring the method bodies. Then in pass two check the method bodies only.
I don't know that there can be any other way than traversing all the files in the source.
I think that you can get it down to two passes - on the first pass, build the AST and whenever you find a variable name, add it to a list that contains that block's symbols (it would probably be useful to add that list to the corresponding scope in the tree). Step two is to linearly traverse the tree and make sure that each symbol used references a symbol in that scope or a scope above it.
My description is oversimplified but the basic answer is -- lookahead requires at least two passes.
The usual approach is to save B as "unknown". It's probably some kind of type (because of the place where you encountered it). So you can just reserve the memory (a pointer) for it even though you have no idea what it really is.
For the method call, you can't do much. In a dynamic language, you'd just save the name of the method somewhere and check whether it exists at runtime. In a static language, you can file it under "unknown methods" somewhere in your compiler along with the unknown type B. Since method calls eventually translate to a memory address, you can again reserve the memory.
Then, when you encounter B and the method, you can clear up your unknowns. Since you know a bit about them, you can say whether they behave like they should or if the first usage is now a syntax error.
So you don't have to read all files twice, but it surely makes things simpler.
Alternatively, you can generate these header files as you encounter the sources and save them somewhere where you can find them again. This way, you can speed up the compilation (since you won't have to consider unchanged files in the next compilation run).
Lastly, if you write a new language, you shouldn't use bison and flex anymore. There are much better tools by now. ANTLR, for example, can produce a parser that can recover after an error, so you can still parse the whole file. Or check this Wikipedia article for more options.