I have designed a control system in Simulink for my project, and now I need to convert this design into C code. At present no specific hardware processor has been decided on for the code to run on, so I need to run the code from within MATLAB. I am very new to the industry, so I am unaware of the steps that are followed to turn a control design in Simulink into embedded C.
Since I have no practical experience with the workflow I am supposed to follow, could I please get some guidance on the general steps that have to be taken to achieve this?
Workflow recommendation:
Make sure your design is tested thoroughly in simulation. You don't want to be discovering simple errors once you are controlling real hardware.
Investigate/decide on target requirements. If you have limited resources (memory/speed) and must customize the generated code to fit a target interface, you should use Embedded Coder. Otherwise Simulink Coder could be enough (if you have Embedded Coder, use it anyway).
Make sure your model interfaces match what you expect on the target, considering datatypes, sizes, logged data and states. If you have special requirements for how to interface the code, you need to set storage classes on signals and other data. If you can live with the default code interface, your life will be a lot easier.
Set the proper target in Configuration Parameters / Code Generation / System target file: grt.tlc for rapid prototyping code and ert.tlc for embedded code. Then you can look through the optimization and code generation properties and set them as you would like. If your target has specific datatypes, you should also change the embedded hardware implementation settings to match the datatypes on your target.
Generate code (ctrl-b).
Integrate the code in your target project. Call _initialize once first, then in a time-based loop set the inputs, call _step, and read the outputs (a sketch is shown after these steps).
It is also possible to make your own custom target to customize the code interface and provide the desired output directly, including compiling and downloading to the target. This is mainly for rapid prototyping, and I recommend doing it manually a few times first and then deciding whether it is worth the effort to automate.
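As a rough illustration of the integration step above, here is a minimal sketch assuming a hypothetical model named "controller" generated with the default ert.tlc interface; the actual header, function and signal names follow the usual <model>_... pattern but will depend on your model name and code interface settings.

/* Sketch only: integrating code generated for a hypothetical model named
 * "controller" with the default ert.tlc interface. All names below are
 * assumptions, not taken from any particular project. */
#include "controller.h"              /* generated model header (assumed name) */

/* Hypothetical board-support helpers, not part of the generated code. */
extern void   wait_for_tick(void);   /* block until the next sample instant */
extern double read_setpoint(void);
extern double read_sensor(void);
extern void   write_actuator(double duty);

int main(void)
{
    controller_initialize();                      /* call once at startup */

    for (;;) {                                    /* time-based loop at the model's sample rate */
        wait_for_tick();

        controller_U.setpoint = read_setpoint();  /* set inputs (hypothetical signal names) */
        controller_U.feedback = read_sensor();

        controller_step();                        /* run one sample of the model */

        write_actuator(controller_Y.command);     /* read outputs */
    }
}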
You might want to start looking at some of the examples or videos of Simulink Coder and Embedded Coder. Simulink Coder is for generating C/C++ code, but not necessarily optimised for running on embedded processors (it may be for Rapid Prototyping or Hardware-in-the-Loop purposes for example). Embedded Coder is an add-on to Simulink Coder for optimising the generated code to run on embedded hardware.
You might also want to register for some of their webinars on that topic, or look at some recorded ones (there are plenty to choose from).
I want to learn test automation using metaprogramming. I googled it but could not find anything. Can anybody suggest some resources where I can get info about "how to use metaprogramming for making test automation easy"?
That's a broad topic and not a lot has been written about it, because of the "dark corners" of metaprogramming.
What do you mean by "metaprogramming"?
As background, I consider metaprogramming to be any activity in which a tool (which we call a "metaprogramming tool") is used to inspect or modify the application software to achieve some effect.
Many people consider "reflection" to be a kind of metaprogramming; others consider (C++-style) templates to be metaprogramming; some suggest aspect-oriented programming.
I sort of agree, but think these are weak versions of what you want, because each has severe limits on what it can see or do to source code. What you really want is a metaprogramming tool that has access to everything in your source program (yes, comments too!). Such tools are called Program Transformation Systems (PTS); they work by parsing the source code and operating on the parsed representation of the program. (I happen to build one of these; see my bio.) A PTS can then analyze the code accurately, and/or make reliable changes to the code and regenerate valid source with the changes. PS: a PTS can implement all those other metaprogramming techniques as special cases, so it is strictly more general.
Where can you use metaprogramming for testing?
There are at least three areas in which metaprogramming might play a role:
1) Collection of information from tests
2) Generation of tests
3) Avoidance of tests
Collection.
Collection of test results depends on the nature of the tests. Many tests are focused on "is this white/black box functioning correctly?" Assuming the tests are written somehow, they have to have access to the box under test, be able to invoke that box in realistic ways, determine whether the result is correct, and often tabulate the results so that post-testing quality assessments can be made.
Access is the first problem. The black box to be tested may not be easily accessible to a testing framework: driven by a UI event, in a non-public routine, or buried deep inside another function where it is hard to get at.
You may need metaprogramming to "temporarily" modify the program to provide access to the box that needs testing (e.g., change a Private method to Public so it can be called from outside). Such changes exist only for the duration of the test project; you throw the modified program away because nobody wants it for anything but the test results. Yes, you have to ensure that the code transformations applied to make things visible don't change the program functionality.
The second problem is exercising the targeted black box in a realistic environment. Each code module runs in a world in which it assumes data and the environment are "properly" configured. The test program can set up that world explicitly by making calls on lots of the program elements or using its own custom code; this is usually the bulk of a test routine, and this code is hard to write and fragile (the application under test keeps changing; so do its assumptions about the world). One might use metaprogramming to instrument the application to collect the environment under which a test might need to run, thus avoiding the problem of writing all the setup code.
Finally, one might want to record more than just "test failed/passed". Often it is useful to know exactly what code got tested ("test coverage"). One can instrument the application to collect what-got-executed data; here's how to do it for code blocks using a PTS: http://www.semdesigns.com/Company/Publications/TestCoverage.pdf. More sophisticated instrumentation might be used to capture information about which paths through the code have been executed. Uncovered code, and/or uncovered paths, show where tests have not been applied; for that code you arguably know nothing about what the program does, let alone whether it is buggy.
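As a hand-written illustration (a real PTS would insert these probes automatically, and every name here is invented), block instrumentation for coverage might look roughly like this in C:

/* Illustration only: the kind of probe a transformation tool might insert at
 * the start of each basic block to record coverage. All names are invented. */
#include <stdio.h>

#define NUM_BLOCKS 4
static unsigned char block_hit[NUM_BLOCKS];     /* one "was executed" flag per block */
#define COVER(id) (block_hit[(id)] = 1)         /* probe inserted at block entry */

static int clamp(int v, int lo, int hi)
{
    COVER(0);
    if (v < lo) { COVER(1); return lo; }
    if (v > hi) { COVER(2); return hi; }
    COVER(3);
    return v;
}

static void dump_coverage(void)
{
    for (int i = 0; i < NUM_BLOCKS; i++)
        printf("block %d: %s\n", i, block_hit[i] ? "covered" : "NOT covered");
}

int main(void)
{
    clamp(5, 0, 10);      /* exercises blocks 0 and 3 only */
    dump_coverage();      /* blocks 1 and 2 are reported as uncovered */
    return 0;
}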
Generation of tests
Someone or something has to produce tests; we've already discussed how to produce the set-up-the-environment part. What about the functional part?
Under the assumption that the program has been debugged (e.g., already tested by hand and fixed), one could use metaprogramming to instrument the code to capture the results of executing a black box (e.g., instance execution post-conditions). By exercising the program, one can then capture results that are (by definition) "correct", and these can be transformed into a test. In this way, one might construct a huge variety of regression tests for an existing program; these will be valuable in verifying that further enhancements to the program don't break most of its functionality.
Often a function has qualitatively different behaviors on different ranges of input (e.g., for x<10 it produces x+1, else it produces x*x). Ideally one would like to provide a test for each qualitatively different behavior (e.g., x<10 and x>=10), which means one would like to partition the input ranges. Metaprogramming can help here, too, by enumerating all (partial) paths through a module and providing the predicate that controls each path. The separate predicates then each represent an input-space partition of interest.
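To make the x<10 example concrete, the two path predicates identify two input partitions, and one representative test per partition covers each qualitatively different behavior:

/* The function from the example above, plus one test input per partition
 * derived from its path predicates. */
#include <assert.h>

static int f(int x)
{
    if (x < 10)
        return x + 1;      /* path predicate: x < 10  */
    else
        return x * x;      /* path predicate: x >= 10 */
}

int main(void)
{
    assert(f(3)  == 4);    /* representative of the partition x < 10  */
    assert(f(12) == 144);  /* representative of the partition x >= 10 */
    return 0;
}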
Avoidance of Tests
One only tests code one does not trust (surely you aren't testing the JDK?). Any code constructed by a reliable method doesn't need tests (the JDK was constructed this way, or at least Oracle is happy to have you believe it).
Metaprogramming can be used to automatically generate code from specifications or DSLs in reliable ways. Such generated code is correct-by-construction (we can argue about the degree of rigour), and doesn't need tests. You might need to test that the DSL expression achieves the functionality you desired, but you don't have to worry about whether the generated code is right.
This is about a typical problem in larger embedded system projects:
Data from a text file/database must be hard-coded into C code. The data will affect the control flow, table dimensions, etc.
What is the solution of choice?
Background:
In our (large) embedded software we have to connect hundreds of signals (outputs of finite state machines) with busses (e.g. a CAN-bus). We are using Simulink/Stateflow as model-based development tool (state machines) and auto-coding.
The connection will have to scale, do datatype conversions, etc. Usually all the information for the conversion/connection is stored in a database or file (e.g. a dbc text file).
Apparently the usual dynamic way (reading the database and dynamically connecting/converting accordingly) is not an option if hard deterministic realtime capability must be ensured. This data has to be hard-coded.
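To give a rough idea of what such hard-coded data looks like (all signal names and fields below are invented for illustration), the generated C typically boils down to constant conversion tables plus small decode routines:

/* Hypothetical sketch of hard-coded signal conversion data that a generator
 * might emit from a dbc-like description. Names and fields are invented. */
#include <stdint.h>

typedef struct {
    uint16_t can_id;        /* message carrying the signal */
    uint8_t  start_bit;     /* position of the raw value in the frame */
    uint8_t  length_bits;
    float    factor;        /* physical = raw * factor + offset */
    float    offset;
} SignalMapEntry;

/* Emitted by the code generator, never edited by hand. */
static const SignalMapEntry kSignalMap[] = {
    /* can_id, start, len, factor, offset */
    { 0x101u,      0,  16,  0.01f,   0.0f },   /* e.g. vehicle speed       */
    { 0x102u,     16,   8,  1.00f, -40.0f },   /* e.g. coolant temperature */
};

static float decode_signal(const SignalMapEntry *e, uint32_t raw)
{
    return (float)raw * e->factor + e->offset;  /* deterministic, no file I/O at runtime */
}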
We have not found a realistic and more practical way other than to use the Simulink API: write external m-code which reads the data from the file and automates "drawing a picture" of the entangled connections into the Simulink model. This is finally auto-coded to C.
Needless to say, this scripted "painting code", while effective, is not very reusable, maintainable, etc. I can't find an effective solution even taking compiler/code generation, model-driven architecture, AUTOSAR, and model transformations into consideration. Constructing a dedicated compiler for each external data document, which transforms the data to C code, seems unrealistic...
Is there a practical alternative? Is this a fundamental weakness of the Simulink "language" (i.e. that it has no underlying high-level language, unlike other model-driven embedded tools such as Modelica)?
Your actual problem isn't really well described by your question.
The way I read it is that you have a complex set of data that requires a complex code generation process. Whenever one needs a complex translation, one ends up building complicated analysis and code generation machinery. That's generally not an easy task.
You've done it in two stages: 1) read the database and build the Simulink model, 2) let MATLAB compile the Simulink model to source code. Your "database" content is in effect a specification (e.g., a DSL). You have a front end that reads the "spec" and interprets it into a Simulink model; you seem to be complaining that there is no underlying semantics for Simulink. Well... is there an underlying robust semantics for your specification DSL? That is probably part of your problem. Simulink itself has poorly defined semantics, too. The combination means that whatever transformations you are doing from your DSL to Simulink, and from Simulink to C, are in effect ad hoc and likely difficult to maintain. We can argue about whether C has cleanly defined semantics; it sort of does, but it isn't easy to see in the standards documents.
In any case, you are building a staged translator. Your first stage probably needs more structure and better analyses. Ideally you'd transform to an intermediate representation that was more formal; I always thought that Colored Petri nets were much better than Simulink, partly because they do have clear, formal semantics. (Modelica is pretty nice, too.) Ideally you'd transform the intermediate stage into C using transformation rules you can control, rather than whatever Simulink happens to do.
This is easier if you have good foundation machinery on which to build translators. The best class of machinery for this purpose, IMHO, is program transformation systems. (I happen to build one of them [see bio] that knows C and Simulink already, and could be taught Modelica.) You can read a bit more about what is needed in this SO answer on what it takes to build translators.
It's difficult to make out what your exact question is, but here are two suggestions:
If you're looking for a code generator that can generate understandable code from your state diagrams, try fsmpro.io.
For individual pieces of logic, you'll have to paste in code rather than use any inbuilt utilities to design the units.
I know this sounds a little bizarre, but there is a very simple application I want to write, a sort of unique image viewer, which requires some interactivity with the host system at the user level. Simplicity when developing is a must as this is a very small side project. The project does require some amount of graphical work and quite a bit of mouse based interactivity (as well as some keyboard shortcuts), but quite frankly, I don't want to dig my hands into OGL for something this small. I looked at the available options, and I think I've narrowed it down to two main choices: Webkit (through either QtWebkit or WebkitGtk), and the language Processing.
Since I haven't actually used Processing but do have some HTML5 canvas and JavaScript experience, I am somewhat tempted to use a Webkit-based solution. There are, however, several concerns I have.
How is Webkit's support for canvas, specifically for more graphically intensive processes?
I've heard that bridging is handled better in QtWebkit than WebkitGtk. Is this still true?
How much can bridging actually do? Can a Webkit-based application do everything that an application which interacts with files on the system needs to do?
Looking at Processing, there are similarly, a couple things I'm wondering.
Processing is known for its graphical capabilities, but how capable is it for writing a general everyday desktop application?
There are many sources that link Processing to Java, both in lineage as well as in distributing applications over the web (ie: JApplets). Is the "Application Export" similarly closely integrated with Java?
As for directly comparing the two, the main concern I do have is the overhead of each. I want the application to start up as snappy as possible, and I know that Java has a bit of an overhead regarding start up because it first has to start up the interpreter. How do Processing and QtWebkit/WebkitGtk compare for start up?
Note that I am targeting the Linux platform only.
Thanks!
It's difficult to give a specific answer, because you're actually asking a few different kinds of questions, and some of them could be more precise.
Processing is a subset or child of Java; it's really "just" a Java framework with a free IDE that hides the messy setup work of building an applet, so that a user can dive in and write something quickly without getting bogged down in widgets and UI, etc. So Processing can exist by itself, and the end user needs to know nothing of Java (except syntax: Processing is Java, so the user must learn Java syntax).
But a programmer who already knows Java can exploit the fun, quick nature of Processing and then leverage their normal Java experience for whatever else is needed; everything of Java is in Processing, just maybe slightly hidden (but only at first). It's also possible to import the Processing .jars into an existing Java program and use them there. See http://processing.org/learning/eclipse/ for more information.
"how capable is it for writing a general everyday desktop application?" - Not particularly on it's own (it's not made to be), but some things are possible and easy (i.e. file saving & loading, non-standard gui, etc.), and in some ways it's similar to old school actionscript or lingo. There is a library called controlP5 that makes gui stuff a bit easier.
Webkit is another kettle of fish, especially if you aren't making a web-based thing (it sounds like you're thinking of using the Webkit libraries as part of a larger program). I'll admit I don't have the development expertise with those specific libraries to give you the answer you really want, but I'm pretty certain that unless you have programming experience beyond HTML5/JavaScript, you'll probably get going much faster with Processing.
Good luck with whichever path you choose!
At the moment I am learning Objective-C 2. I'm aware that it's used heavily by Mac developers, but I'm more interested in learning the language at this point in time than the frameworks for developing on Mac OS X/iPhone (except for Foundation). In order to do this I want to write a few intermediate* console applications, but I'm stuck for ideas.
Most examples are something along the lines of "Write a Fraction class that has getters/setters and a print function", which isn't very challenging coming from a C++ background. I'd like some generic examples of programs, but I don't want them to include any Objective-C implementation details. I want to figure out the program structure/write my own interfaces and learn the language from there.
In summary: I am curious as to what example programs Objective-C programmers would recommend for exploring the language.
An example of an "intermediate" application would be something along the lines of "Write a program that takes a URL from the command line and returns the number of occurrences of a certain word in the data returned":
example -url www.google.com -word search
"Project Euler" is a standard response for this kind of thing, but I get the feeling that you're less interested in being told to implement algorithmic stuff (since that knowledge is easier to port between languages) and more interested in miniprojects that will familiarize you with core libraries. Is this fair?
If so, IMO, you ought to know the basics of how to do the following with the standard libraries of any language you hope to use for serious work:
Standard IO
Network IO
Disk IO and navigating the filesystem
Regexp utilities
Structured data (XML libraries and CSV libraries if they exist)
Programming problems I would recommend for those:
It sounds like you've already done this.
A very simple proxy: something like what you described in your post, but that listens on a port for a message containing a URL rather than taking it on the command line, and likewise returns the results to whatever contacted it over the network rather than outputting to stdout. [Obviously you need to have the machine behind an appropriate firewall for this!]
Something which takes a directory path and recursively tallies the number of lines its children contain. (So, get the directory's listing, open each child file and count the number of line breaks. Then open each of its child directories, get their listings, ...) Record any errors encountered (e.g., no read privileges) in a reasonable way. Write out the final results to file in the directory supplied.
Usually if I tool around in a language enough, I'll run across some problem which I just naturally find myself using regexps for. I'll assume the same is true for you and punt this element for now.
Fetch StackOverflow.com, and [by putting it into a DOM model and navigating that] determine whether this question is still on the front page.
I got the most out of Objective-C by exploring it with a testing framework. I have written a short blog post about it. You should also wrap your head around the memory management conventions employed by Objective-C; reference counting takes a little time to get used to but works very well if responsibilities are clearly segregated (I have written about that on my blog too).
By getting my hands dirty on a testing framework (GHUnit for that matter), I was able to learn far more about the language than I could have in a "traditional" way. Of course you'll need a little pet project, otherwise this approach doesn't make sense.
I don't think your example is a very good idea, as it requires you to mess with HTTP connections, resources, etc., which is a little framework-specific after all. Parsing a text file would be a little easier in this regard. Using a unit testing framework has the following advantages for you:
learn about platform specific build systems and deployment details
forced to develop components in a loosely coupled fashion from the ground up
thereby exploring unique mechanisms of the language that might require new patterns or make known patterns redundant (e.g. categories make dependency injection obsolete, etc.)
fast compile-test cycle, less time spent in front of the debugger
combined with source control: painless experiments
You should also look into the testing framework's implementation, as testing frameworks always have to work with metadata to some extent. Testing frameworks are often used together with isolation frameworks. These basically create objects at runtime that comply with certain interfaces and act as stand-ins for concrete objects. Looking at their implementation will teach you about the runtime manipulations that can be done in Objective-C (keyword: method swizzling).
Highly embedded (limited code and ram size) projects pose unique challenges for code organization.
I have seen quite a few projects with no organization at all. (Mostly by hardware engineers who, in my experience are not typically concerned with non-functional aspects of code.)
However, I have been trying to organize my code as follows:
hardware specific (drivers, initialization)
application specific (not likely to be reused)
reusable, hardware independent
For each module I try to keep the purpose to one of these three types.
Due to the limited size of embedded projects and the emphasis on performance, it is often hard to keep to this organization.
For some context, my current project is a limited DSP application on a MSP430 with 8k flash and 256 bytes ram.
I've written and maintained multiple embedded products (30+ and counting) on a variety of target micros, including MSP430's. The "rules of thumb" I have been most successful with are:
Try to modularize generic concepts as much as possible (e.g. separate driver code from application code). -- It makes for easier maintenance and reuse/porting of a project to another target micro in the future.
DO NOT start by worrying about optimized code at the very beginning. Try to solve the domain's problem first and optimize second. -- Your target micro can handle a lot more "stuff" than you might expect.
Work to ensure readability. Although most embedded projects seem to have short development-cycles, the projects often live longer than you might expect and another developer will undoubtedly have to work with your code.
I've worked on 8-bit PIC processors with similar limitations.
One restriction you don't have is on how many comments you make or what you choose to name your methods, variables, etc. Take advantage of that. Speed and size constraints do sometimes trump organization, but you can always explain.
Another tip is to break up a logical source file into even more pieces than you need, then bind them together by #include-ing them into a compilation unit. This allows you to have lots of reusable code (even one routine per file) but combine it in whatever order you need. This is useful, e.g., when trying to meet compilation-unit size restrictions, or to pick and choose which common subroutines you need on the next project.
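A rough sketch of that idea (file names and routines invented): each small routine lives in its own file, and a project's compilation unit #includes only the pieces it actually needs.

/* Sketch only. In a real project, a util.c for a given product would contain
 * just the #include lines it needs, e.g.:
 *
 *     #include "sat_add16.inc"
 *     #include "median3.inc"
 *
 * The individual pieces (one routine per file) might look like this: */
#include <stdint.h>

/* ---- file: sat_add16.inc ---- */
static int16_t sat_add16(int16_t a, int16_t b)
{
    int32_t s = (int32_t)a + (int32_t)b;     /* widen to avoid overflow       */
    if (s > INT16_MAX) return INT16_MAX;     /* saturate high                 */
    if (s < INT16_MIN) return INT16_MIN;     /* saturate low                  */
    return (int16_t)s;
}

/* ---- file: median3.inc ---- */
static int median3(int a, int b, int c)
{
    if (a > b) { int t = a; a = b; b = t; }  /* ensure a <= b                 */
    if (b > c) { b = c; }                    /* b = min(b, c)                 */
    return (a > b) ? a : b;                  /* median = max(a, min(b, c))    */
}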
I try to organize it as if I had unlimited RAM and ROM, and it usually works out fine. As mentioned elsewhere, do not try to optimize it until you absolutely need to.
If you can get a pin-compatible processor that has more resources, it's better to get it working on that, concentrating on good structure and layout, then optimize for size later when you understand the code better.
Except under exceptional circumstances (see note), the organisation of your code will have no impact on the final product. (contents of the code are obviously a different matter)
So with that in mind you should organise your code as you would any other project.
With that said, the following are fairly typical:
If this is a processor that you've worked on before, or will be working on in the future, you will usually want to keep a dedicated hardware abstraction layer that can be shared between projects in the future. Typically this module would contain items like routines for managing any uarts, timers etc.
Usually it's reasonable to maintain a set of platform specific code for initialisation and setup that performs all of the configuration and initialisation up to the point where your executive takes over and runs your application. It will also include platform specific hal routines.
The executive/application is probably maintained as a separate module. All of the hardware specific code should be hidden in the hal (as mentioned above).
By splitting your code up like this you also have the option of compiling and running your application as a simulation, on a completely different platform, just by replacing the hardware specific code with routines that mimic the hardware.
This can be good for unit testing and for debugging algorithmic problems you might have.
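A minimal sketch of that split (all names are illustrative): the application only ever includes hal.h, and the build links either the real target implementation or a desktop stand-in.

/* ---- hal.h : the only hardware interface the application sees ---- */
#ifndef HAL_H
#define HAL_H
#include <stdint.h>
void     hal_init(void);
uint16_t hal_read_adc(uint8_t channel);
void     hal_set_pwm(uint8_t channel, uint16_t duty_percent);
#endif

/* ---- hal_sim.c : desktop stand-in linked for simulation/unit tests ----
 * (a hal_msp430.c with the real register accesses would be linked instead
 * when building for the target)                                          */
static uint16_t fake_adc[8];                    /* values injected by tests */

void hal_init(void) { /* nothing to configure on the host */ }

uint16_t hal_read_adc(uint8_t channel)
{
    return fake_adc[channel & 7u];              /* return canned test data */
}

void hal_set_pwm(uint8_t channel, uint16_t duty_percent)
{
    (void)channel; (void)duty_percent;          /* log or record instead of driving hardware */
}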
Note: exceptional circumstances might be imposed by unusual compiler restrictions, e.g. I've come across some compilers that expect all interrupt service routines to be compiled within a single object file.
I've worked with some sensor nodes like the Tmote Sky. I too have seen poor organization, and I have to admit I have contributed to it. Anyway, I'd say that some compromise is unavoidable, because loading too many modules or splitting the program into too many parts will (IMHO) also eat resources, so try to be aware of the threshold between organization and usability given the low resources.
Obviously this doesn't mean letting chaos take over, but, for example, take a look at the organization of the TinyOS source code and applications; it gives an idea of what I'm trying to say.
Although it is a bit painful, one organization technique that is somewhat common with embedded C libraries is to split every single function and variable into a separate C source file, and then aggregate the resulting collection of .o files into a library file.
The motivation for doing this is that for most normal linkers the unit of linkage is an object file: for every object you either get the whole object or none of it. Since there is a 1:1 relationship between C files and object files, putting each symbol in its own C file gives each one its own object. This in turn lets the linker pull in only the subset of functions and variables that are actually used.
This sort of game doesn't help at all for headers; they can happily be left as single files.
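As a small illustration of the layout (file and library names invented): each routine gets its own C file, the objects are archived into a library, and the linker pulls in only the objects that are referenced.

/* ---- file: crc8.c : one public symbol per source file ---- */
unsigned char crc8(const unsigned char *data, unsigned int len)
{
    unsigned char crc = 0u;
    while (len--) {
        crc ^= *data++;
        for (int i = 0; i < 8; i++)            /* polynomial 0x07, MSB first */
            crc = (unsigned char)((crc & 0x80u) ? ((crc << 1) ^ 0x07u) : (crc << 1));
    }
    return crc;
}

/* ---- file: delay_ms.c, ring_put.c, ... : one routine each, omitted here ---- */

/* Build (toolchain commands vary; shown as comments):
 *   cc -c crc8.c delay_ms.c ring_put.c            one object per C file
 *   ar rcs libutil.a crc8.o delay_ms.o ring_put.o aggregate into a library
 * An application that calls only crc8() pulls in crc8.o and nothing else. */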