How can I use functions from another program (module)?

I want to create a simulation program for BUS-related simulations.
These simulations involve a lot of calculations and methods, and the number keeps growing.
First I want to write the core of the program, which can use separately coded and compiled program blocks. What kind of programming technology is good for this?
For example: I want to send a CAN frame with a checksum calculation. In the core program I want to choose, from a directory of modules, which checksum calculation suits me.
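To make the idea more concrete, here is a minimal sketch of what I imagine (the calc_checksum entry point, the plugin file name and the CAN payload are just placeholders), using dlopen() on a POSIX system to load a checksum implementation chosen at runtime:

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <stddef.h>
    #include <dlfcn.h>   /* link with -ldl on Linux */

    /* Signature every checksum plugin is assumed to export (placeholder name). */
    typedef uint8_t (*checksum_fn)(const uint8_t *data, size_t len);

    int main(int argc, char **argv)
    {
        /* Path to the plugin chosen by the core program, e.g. from a directory listing. */
        const char *plugin_path = (argc > 1) ? argv[1] : "./checksum_crc8.so";

        void *handle = dlopen(plugin_path, RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* Look up the agreed-upon entry point in the plugin. */
        checksum_fn calc_checksum = (checksum_fn)dlsym(handle, "calc_checksum");
        if (!calc_checksum) {
            fprintf(stderr, "dlsym failed: %s\n", dlerror());
            dlclose(handle);
            return 1;
        }

        uint8_t can_payload[8] = { 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x00 };
        can_payload[7] = calc_checksum(can_payload, 7);   /* last byte carries the checksum */

        printf("checksum byte: 0x%02X\n", can_payload[7]);
        dlclose(handle);
        return 0;
    }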


Analyzing bitstreams using Icestorm

I'm trying to understand the bitstreams generated by Yosys/arachne-pnr as described on http://www.clifford.at/icestorm/:
The recommended approach for learning how to use this documentation is to synthesize very simple circuits using Yosys and Arachne-pnr, run the icestorm tool icebox_explain on the resulting bitstream files, and analyze the results using the HTML export of the database mentioned above. icebox_vlog can be used to convert the bitstream to Verilog. The output file of this tool will also outline the signal paths in comments added to the generated Verilog code.
In order to understand the effect a change in the bitstream has, it would be helpful if I could change the .ex file and convert it back to an ASCII bitstream (instead of having to identify the bit manually) for uploading to the FPGA. Is there a way to do so?
I'm a bit concerned about damaging the FPGA with an invalid bitstream. Are there situations where this is known to happen? Is there a way to simulate a bitstream?
Also, it would be helpful to have some kind of “higher-level” explanation format which e.g. shows the IE/REN bits on the I/O blocks to which they correspond, not the one on which they have to be set in the bitstream. Is there such a format?
I know of the possibility to generate an equivalent Verilog circuit, but the problem with this is that it doesn't usually allow me a lossless round-trip back into a bitstream. Is there a way to generate an equivalent Verilog circuit which (e.g. by instantiating the blocks explicitly) yields the exact same bitstream when processed with Yosys/arachne-pnr?
I'm a bit concerned about damaging the FPGA with an invalid bitstream. Are there situations where this is known to happen? Is there a way to simulate a bitstream?
I have not damaged any FPGA so far. (I have, however, managed to damage the serial flash on one icestick after running some test that reprogrammed it in a loop.)
But this does not mean that you cannot damage your FPGA by programming it with an invalid bitstream. You could theoretically configure the FPGA in a way that produces a driver-driver conflict. I don't know how well the hardware deals with something like that; I have not run any experiments to find out.
Also, it would be helpful to have some kind of “higher-level” explanation format which e.g. shows the IE/REN bits on the I/O blocks to which they correspond, not the one on which they have to be set in the bitstream. Is there such a format?
icebox_vlog produces a higher-level output. But it does not output things like I/O blocks, so it might be too high-level for your needs.
I know of the possibility to generate an equivalent Verilog circuit, but the problem with this is that it doesn't usually allow me a lossless round-trip back into a bitstream. Is there a way to generate an equivalent Verilog circuit which (e.g. by instantiating the blocks explicitly) yields the exact same bitstream when processed with Yosys/arachne-pnr?
Not at the moment. But it should not be too hard to extend icebox_vlog to provide this functionality. So if you really need that, it might be something within your reach to add yourself.

MATLAB & Mex-Files: Auto-optimization of CUDA-Code depending on Input-Parameters size

Hey there,
I'm currently developing a MEX-file in MATLAB that includes CUDA computation. I wonder if there's a good way to 'automatically' optimize the program for arbitrary input parameters from the user. E.g. when the input parameters don't exceed a certain size, try to use shared and/or constant memory, which will only work up to certain limits; from there on, global memory has to be used. But such optimizations can only be made at runtime, because that is the point at which I learn the size of the user's input parameters. Any simple solution?
Thanks!
You can simply write different kernels and decide which ones to call at runtime.
You can also use the device query API or do some micro-benchmarking to figure out the sizes of shared/constant memory at runtime. This is probably necessary if you don't want to assume a particular GPU model.
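A minimal sketch of that runtime decision (the launch_* functions are hypothetical stand-ins for your own kernels, and the size threshold is just an example):

    #include <stdio.h>
    #include <stddef.h>
    #include <cuda_runtime.h>

    /* Hypothetical stand-ins for your own kernel launches; replace the bodies
       with the real <<<grid, block>>> launches in the .cu file. */
    static void launch_shared_mem_version(const float *d_in, float *d_out, size_t n)
    {
        (void)d_in; (void)d_out;
        printf("using shared-memory kernel for %zu elements\n", n);
    }

    static void launch_global_mem_version(const float *d_in, float *d_out, size_t n)
    {
        (void)d_in; (void)d_out;
        printf("using global-memory kernel for %zu elements\n", n);
    }

    /* Decide at runtime which implementation to call, based on the actual
       input size and the limits of the GPU that is present. */
    void run_best_kernel(const float *d_in, float *d_out, size_t n)
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);

        size_t bytes_needed = n * sizeof(float);
        if (bytes_needed <= prop.sharedMemPerBlock)
            launch_shared_mem_version(d_in, d_out, n);
        else
            launch_global_mem_version(d_in, d_out, n);
    }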

Creating simple waveforms with CoreAudio

I am new to CoreAudio, and I would like to output a simple sine wave and square wave with a given frequency and amplitude through the speakers using CA. I don't want to use sound files as I want to synthesize the sound.
What do I need to do this? And can you give me an example or tutorial? Thanks.
There are a number of errors in the previous answer. I, the legendary :-) James McCartney, not James Harkins, wrote the sinewavedemo. I also wrote SuperCollider, which is what the audiosynth.com website is about. I also now work at Apple on CoreAudio. The sinewavedemo DOES use CoreAudio, since it uses AudioHardware.h from CoreAudio.framework as its way to play the sound.
You should not use the sinewavedemo. It is very old code and it makes dangerous assumptions about the buffer layout of the audio hardware. The easiest way nowadays to play a sound that you are generating is to use the AudioQueue, or to use an output audio unit with a render callback set.
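For the output-unit route, the core of it is a render callback along these lines. This is only a sketch: creating the AUHAL/RemoteIO unit, setting the stream format and installing the callback are omitted, and the non-interleaved float buffer layout assumed here has to match the format you actually configure.

    #include <AudioUnit/AudioUnit.h>
    #include <math.h>

    typedef struct {
        double phase;       /* current phase in radians */
        double frequency;   /* e.g. 440.0 Hz */
        double sampleRate;  /* e.g. 44100.0 */
        float  amplitude;   /* 0.0 .. 1.0 */
    } SineState;

    /* Render callback: CoreAudio calls this whenever it needs more samples. */
    static OSStatus SineRenderProc(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
    {
        SineState *s = (SineState *)inRefCon;
        double phaseIncrement = 2.0 * M_PI * s->frequency / s->sampleRate;

        for (UInt32 buf = 0; buf < ioData->mNumberBuffers; buf++) {
            float *out = (float *)ioData->mBuffers[buf].mData;
            double phase = s->phase;
            for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
                out[frame] = s->amplitude * sinf((float)phase);
                phase += phaseIncrement;
                if (phase > 2.0 * M_PI) phase -= 2.0 * M_PI;
            }
            if (buf == ioData->mNumberBuffers - 1)
                s->phase = phase;   /* advance the stored phase once per render cycle */
        }
        return noErr;
    }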
The best and easiest way to do that without files is to prepare a single-cycle buffer containing one cycle of the wave (technically called a wavetable).
In the playback function called by the CoreAudio thread, fill the output buffer with samples read from the wave buffer.
Note, however, that you will face two problems very quickly:
- For the sine wave, if the playback (sample) rate is not an integer multiple of the desired sine frequency, you will probably need to implement an interpolator if you want good quality. Using only integer indices will generate a significant level of harmonic noise.
- For the square wave, avoid simply filling an array with +1/-1 values. Such a signal is not bandlimited and will alias a lot. Do not forget that the spectrum of a square wave is virtually infinite!
To get good algorithms for signal generation, take a look at musicdsp.org; it is probably one of the best resources for that.
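Here is a rough sketch of the wavetable idea with linear interpolation between adjacent table entries (the table size and all names are arbitrary; the band-limiting of the square wave mentioned above is not addressed here):

    #include <math.h>
    #include <stddef.h>

    #define TABLE_SIZE 2048

    static float sineTable[TABLE_SIZE];

    /* Fill one cycle of a sine into the table (do this once at startup). */
    void init_sine_table(void)
    {
        for (int i = 0; i < TABLE_SIZE; i++)
            sineTable[i] = sinf(2.0f * (float)M_PI * (float)i / (float)TABLE_SIZE);
    }

    /* Generate 'count' samples at 'freq' Hz, reading the table with linear
       interpolation; *phase is kept between 0 and TABLE_SIZE across calls. */
    void render_sine(float *out, size_t count, float freq, float sampleRate,
                     float amplitude, float *phase)
    {
        float increment = TABLE_SIZE * freq / sampleRate;  /* table steps per sample */
        float p = *phase;

        for (size_t n = 0; n < count; n++) {
            int   i0   = (int)p;
            int   i1   = (i0 + 1) % TABLE_SIZE;
            float frac = p - (float)i0;

            /* Linear interpolation between the two nearest table entries. */
            out[n] = amplitude * (sineTable[i0] + frac * (sineTable[i1] - sineTable[i0]));

            p += increment;
            if (p >= TABLE_SIZE) p -= TABLE_SIZE;
        }
        *phase = p;
    }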
Are you new to audio programming in general? As a starting point I would check out
http://www.audiosynth.com/sinewavedemo.html
This is a minimal OS X sinewave implementation by the legendary James Harkins. Note, it doesn't use CoreAudio at all.
If you specifically want to use CoreAudio for your sinewave you need to create an output unit (RemoteIO on the iPhone, AUHAL on OS X) and supply an input callback, where you can pretty much use the code from the above example. Check out
http://developer.apple.com/mac/library/technotes/tn2002/tn2091.html
The chief benefits of CoreAudio are that you can chain other effects with your sinewave, write plugins for hosts like Logic and provide the interfaces for them, or write a host (like Logic) for plugins that can be chained together.
If you don't want to write a plugin or host plugins, then CoreAudio might not actually be for you. But one of the best things about using CoreAudio is that once you get your sinewave callback working, it is easy to add effects or mix multiple sines together.
To do this you need to put your output unit in a graph, to which you can add effects, mixers, etc.
Here is some help on setting up graphs http://timbolstad.com/2010/03/16/core-audio-getting-started-pt2/
It isn't as difficult as it looks. Apple provides C++ helper classes for many things (/Developer/Examples/CoreAudio/PublicUtility) and even if you don't want to use C++ (you don't have to!) they can be a useful guide to the CoreAudio API.
If you are not doing this in realtime, using the sin() function from math.h is not a bad idea. Just fill however many samples you need with sin() beforehand; when it is time to play the sound, just send it to the audio buffer. sin() can be quite slow to call once per sample if you are doing this in realtime; using an interpolated wavetable lookup method is much faster, but the resulting sound will not be as spectrally pure.
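For the non-realtime case, the prefill can be as simple as this sketch (buffer length, frequency and amplitude are arbitrary examples):

    #include <math.h>
    #include <stdlib.h>

    /* Precompute 'count' samples of a sine at 'freq' Hz before playback starts. */
    float *prefill_sine(size_t count, double freq, double sampleRate, float amplitude)
    {
        float *buffer = malloc(count * sizeof(float));
        if (!buffer) return NULL;

        double phaseIncrement = 2.0 * M_PI * freq / sampleRate;
        for (size_t i = 0; i < count; i++)
            buffer[i] = amplitude * (float)sin(i * phaseIncrement);

        return buffer;   /* hand this to the audio buffer / queue when playing */
    }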
There is a good and well-documented sine wave player code example in Chapter 7 of the Adamson/Avila "Learning Core Audio" book, published by Addison-Wesley Professional (ISBN-10: 0-321-63684-8):
http://www.informit.com/store/learning-core-audio-a-hands-on-guide-to-audio-programming-9780321636843
It is a rather new publication (2012) and addresses precisely the issue of this question. It's only a starting point, but it's a valuable starting point.
BTW, don't jump to graphs before you have this basic lesson (which involves some math) behind you.
Concerning example code, a quick and efficient method I often use deals with a pre-filled sinewave lookup table which has as many entries as the sample rate; for 44100 Hz the table has a size of 44100. In other words, the cycle length equals the sample rate. This gives an acceptable trade-off between speed and quality in many cases. You can initialize it at program startup.
If you generate floating-point samples (which is the default in OS X) and use math functions, use sinf() rather than (float)sin(). Promotions in the inner loop of a render callback are always resource-expensive. So are repeated multiplications by constants, such as 2.0*M_PI, which can too often be found in code examples.
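A compact sketch of that table variant (44100 Hz is just the example rate above): with a table whose length equals the sample rate, an integer frequency f simply advances the read index by f entries per sample, so no interpolation is needed for integer frequencies.

    #include <math.h>

    #define SAMPLE_RATE 44100

    static float sineTable[SAMPLE_RATE];
    static const float TWO_PI = 2.0f * (float)M_PI;   /* precomputed; keep it out of the loops */

    /* Fill the table once at startup; cycle length equals the sample rate. */
    void init_table(void)
    {
        for (int i = 0; i < SAMPLE_RATE; i++)
            sineTable[i] = sinf(TWO_PI * (float)i / (float)SAMPLE_RATE);
    }

    /* For an integer frequency f, the read index advances f entries per sample. */
    float next_sample(unsigned *index, unsigned f)
    {
        float s = sineTable[*index];
        *index = (*index + f) % SAMPLE_RATE;
        return s;
    }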

When to use Simulink in an embedded processor

We are developing a motor controller on a dsPIC. We intend to use Simulink to model the motor control algorithm, with Real-Time Workshop Embedded Coder converting the Simulink model into C code.
Our firmware will have some other minor logic operations, but its main function is motor control. We are wondering if we should try to do all the firmware in Simulink or separate the logic operations into C code, while the motor control algorithm stays in Simulink.
Does anyone have a recommendation on which path we should start down?
thanks,
Brent
FYI I just built a system like yours but with a TI DSP.
I'm assuming you're doing something complex like vector control. If so, here's what you do:
- In your model, make one block for each task / each time period you need. This may just be the PWM interrupt with the control in it.
- Define all the I/O each task will need. Try to keep each signal to 16 bits, which are atomic on the dsPIC (this eliminates most rate transitions).
- Get Simulink to make each top-level block a function call.
- Do only the control inside this/these blocks and leave all hardware configuration, task scheduling and other logic to C code.
- Simulink can generate a C and an H file that you just include in the project with the other code. You'll fill out a structure of inputs, call the function, and get back a structure with the outputs (roughly as sketched below).
- Keep the model clean of all hardware dependencies.
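The hand-written C side then ends up looking roughly like this sketch (all names here - the rtU/rtY structures, motor_ctrl_step(), the signal names - are placeholders standing in for what the code generator produces; the real identifiers depend on your model name and code-generation settings):

    #include <stdint.h>

    /* Placeholder stand-ins for the structures and step function that Simulink
       generates; in the real project these come from the generated header. */
    typedef struct { int16_t phase_current_a, phase_current_b, speed_setpoint; } ExtU;
    typedef struct { int16_t duty_a, duty_b, duty_c; } ExtY;

    static ExtU rtU;   /* inputs to the generated controller   */
    static ExtY rtY;   /* outputs from the generated controller */

    static void motor_ctrl_step(void)
    {
        /* Stand-in for the generated control law; the real body is generated. */
        rtY.duty_a = rtU.speed_setpoint - rtU.phase_current_a;
        rtY.duty_b = rtU.speed_setpoint - rtU.phase_current_b;
        rtY.duty_c = 0;
    }

    /* Hand-written C: called from the PWM interrupt, fills the input structure,
       runs one controller step, and hands the outputs back to the hardware layer. */
    void motor_control_task(int16_t ia, int16_t ib, int16_t setpoint,
                            int16_t *pwm_a, int16_t *pwm_b, int16_t *pwm_c)
    {
        rtU.phase_current_a = ia;        /* values read from the ADC elsewhere */
        rtU.phase_current_b = ib;
        rtU.speed_setpoint  = setpoint;

        motor_ctrl_step();               /* one step of the generated control loop */

        *pwm_a = rtY.duty_a;             /* written to the PWM registers elsewhere */
        *pwm_b = rtY.duty_b;
        *pwm_c = rtY.duty_c;
    }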
Do not believe the MathWorks marketing folks. They want you to do everything in Simulink. Don't get me wrong, it's a great tool for certain types of things; but for stuff you can't do in the model (like hello world) they suggest using the "legacy code tool", as if anything that's not a model is "like totally old-school". Restrict your model to control loops and signal flows, which it's good for, and it'll be fine.
Do the logic operations interact with the motor control, or are they just unrelated operations? The degree of interaction could help make the decision.
If they are unrelated, then for maintainability it might be best to keep them out of the model. Then you can update the logic without having to regenerate the C for the entire Simulink model, and there would be less chance of a regression problem.
If they are related to or interact with the model, then of course there is a case for keeping them in the model, so that you don't get incompatible versions linked into a build.

Correctness testing for process modelling application

Our group is building a process modelling application that simulates an industrial process. The final output of this process is a set of numbers representing chemistry and flow rates.
This application is based on some very old software that uses the exact same underlying mathematical model to create the simulation. Thousands of variables are involved in the simulation.
Although each component has been unit tested, we now need to be able to make sure that the data output produced by our software matches that of the old simulation software. I am wondering how best to approach this issue in a formalised and rigorous manner.
The old program works by specifying the input via a text file, so I was thinking we could programmatically take each variable, adjust its value in the file (and correspondingly in our new application), then compare the outputs between the new and old applications. We would do this for every variable in the model.
We know the allowable range for each variable, so I suppose a random sample of a few values across each variable's range is enough to show correctness for that particular variable.
Any thoughts on this approach? Any other ideas?
Comparing the output of the old and new applications is definitely a good idea. This is sometimes called back-to-back testing.
Regarding test input samples, familiarize yourself with the following concepts:
- Equivalence partitioning
- Boundary-value analysis
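Here is a sketch of how those two ideas can be combined for a single variable (run_simulation is only a stub here, and the executable names, variable name and tolerance are placeholders; in practice you would loop this over every variable and diff complete output files):

    #include <stdio.h>
    #include <math.h>

    /* Stub: in the real harness this would rewrite the input text file with the
       variable overridden, invoke the given simulator executable, and parse one
       value out of its output. All names here are placeholders. */
    static double run_simulation(const char *exe, const char *variable, double value)
    {
        (void)exe; (void)variable;
        return value * 2.0;   /* dummy result so the sketch runs as-is */
    }

    /* Classic boundary-value points for a variable known to lie in [min, max]. */
    static void boundary_points(double min, double max, double eps, double out[5])
    {
        out[0] = min;
        out[1] = min + eps;
        out[2] = (min + max) / 2.0;   /* one representative interior point */
        out[3] = max - eps;
        out[4] = max;
    }

    int main(void)
    {
        double points[5];
        boundary_points(0.0, 100.0, 0.001, points);   /* example range for one variable */

        for (int i = 0; i < 5; i++) {
            double old_result = run_simulation("./old_sim", "flow_rate_1", points[i]);
            double new_result = run_simulation("./new_sim", "flow_rate_1", points[i]);

            /* Compare with a tolerance, not exact equality: the two programs may
               order floating-point operations differently. */
            if (fabs(old_result - new_result) > 1e-6 * fmax(fabs(old_result), 1.0))
                printf("MISMATCH at flow_rate_1=%g: old=%g new=%g\n",
                       points[i], old_result, new_result);
        }
        return 0;
    }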