How do I combine multiple signals for synchronous playback in NAudio without using wav files? - naudio

I want to combine multiple signals for playback without having to write to wav files first. In other words, a basic additive synthesizer.
I noticed in the "Play Sine Wave" demo (https://github.com/naudio/NAudio/blob/master/Docs/PlaySineWave.md) I could probably call the Play method on multiple signal generators in a nested using statement. However, I am not sure how many of these I can call without introducing latency between signals. This looks like a sketchy approach even with just two signals.
So do I have any other options, and how would I implement them?

You should use the MixingSampleProvider and add as many signal generators as you like as inputs (it's best to reduce their output volumes so you don't end up with clipping).
Then just play from the MixingSampleProvider directly.
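For illustration, here is a minimal sketch of that approach, assuming NAudio's SignalGenerator, MixingSampleProvider and WaveOutEvent types; the frequencies and gains are arbitrary examples:

    using System;
    using NAudio.Wave;
    using NAudio.Wave.SampleProviders;

    class AdditiveSynthDemo
    {
        static void Main()
        {
            // Two sine generators; gains are kept low so their sum stays below 1.0 (no clipping)
            var sine440 = new SignalGenerator(44100, 1) { Type = SignalGeneratorType.Sin, Frequency = 440, Gain = 0.25 };
            var sine660 = new SignalGenerator(44100, 1) { Type = SignalGeneratorType.Sin, Frequency = 660, Gain = 0.25 };

            // The mixer's format must match its inputs (IEEE float, 44.1 kHz, mono)
            var mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(44100, 1));
            mixer.AddMixerInput(sine440);
            mixer.AddMixerInput(sine660);

            // Play straight from the mixer -- no wav files involved
            using (var output = new WaveOutEvent())
            {
                output.Init(mixer);
                output.Play();
                Console.ReadKey(); // the generators run until stopped
            }
        }
    }

AddMixerInput accepts any ISampleProvider, so you can swap in your own oscillator classes later.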

Related

How can I use functions in another program

I want to create a simulation program for BUS-related simulations.
These simulations involve a lot of calculations and methods, and there are more and more of them.
First I want to write the core of the program, which can use separately coded and compiled program blocks. What kind of programming technology is good for this?
For example: I want to send a CAN frame with a checksum calculation. In the core program I choose from a directory which checksum calculation is right for me.

Can we use GPIO_PinAFConfig function to make a pin as output on STM32L1?

I'm using an STM32L100RC board. I need to switch a pin between output and input several times, so can I use the GPIO_PinAFConfig function to do that, or do I have to initialize the whole GPIO_InitTypeDef structure each time?
A GPIO can be configured as an input, an output, or one of possibly several alternate functions. As its name suggests, GPIO_PinAFConfig selects one of those alternate functions, so it would not achieve your aim at all.
If you switch between input and output using the standard peripheral library, there will be some redundancy: register writes that do not change anything, or that are relevant only to input or only to output. If you need the switch to be as fast as possible (and we are talking tens or hundreds of nanoseconds here), then direct register access can make significant savings by changing only the registers that actually need to change.
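For example, a minimal sketch of the direct-register approach using the CMSIS device header for the STM32L1; PA5 is just an illustrative pin. Each pin owns a two-bit field in the MODER register: 00 configures an input, 01 a general-purpose output.

    #include "stm32l1xx.h"

    #define PIN 5u  /* PA5, purely as an example */

    static inline void pa5_to_output(void)
    {
        /* clear the 2-bit mode field, then set it to 01 (output) */
        GPIOA->MODER = (GPIOA->MODER & ~(3u << (PIN * 2u))) | (1u << (PIN * 2u));
    }

    static inline void pa5_to_input(void)
    {
        /* 00 = input mode */
        GPIOA->MODER &= ~(3u << (PIN * 2u));
    }

A single read-modify-write like this is only a handful of cycles, far fewer than re-running GPIO_Init with a fully populated GPIO_InitTypeDef.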

Analyzing bitstreams using Icestorm

I'm trying to understand the bitstreams generated by Yosys/arachne-pnr as described on http://www.clifford.at/icestorm/:
The recommended approach for learning how to use this documentation is to synthesize very simple circuits using Yosys and Arachne-pnr, run the icestorm tool icebox_explain on the resulting bitstream files, and analyze the results using the HTML export of the database mentioned above. icebox_vlog can be used to convert the bitstream to Verilog. The output file of this tool will also outline the signal paths in comments added to the generated Verilog code.
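For reference, that workflow looks roughly like this on the command line (the file names and the -d device flag are illustrative):

    # synthesize and place-and-route a simple design for an iCE40
    yosys -p "synth_ice40 -blif example.blif" example.v
    arachne-pnr -d 1k -o example.asc example.blif

    # textual explanation of the bitstream, and a Verilog equivalent
    icebox_explain example.asc > example.ex
    icebox_vlog example.asc > example_out.v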
In order to understand the effect a change in the bitstream has, it would be helpful if I could change the .ex file and convert it back to an ASCII bitstream (instead of having to identify the bit manually) for uploading to the FPGA. Is there a way to do so?
I'm a bit concerned about damaging the FPGA with an invalid bitstream. Are there situations where this is known to happen? Is there a way to simulate a bitstream?
Also, it would be helpful to have some kind of “higher-level” explanation format which e.g. shows the IE/REN bits on the I/O blocks to which they correspond, not the one on which they have to be set in the bitstream. Is there such a format?
I know of the possibility to generate an equivalent Verilog circuit, but the problem with this is that it doesn't usually allow me a lossless round-trip back into a bitstream. Is there a way to generate an equivalent Verilog circuit which (e.g. by instantiating the blocks explicitly) yields the exact same bitstream when processed with Yosys/arachne-pnr?
I'm a bit concerned about damaging the FPGA with an invalid bitstream. Are there situations where this is known to happen? Is there a way to simulate a bitstream?
I have not damaged any FPGA so far. (I have, however, managed to damage the serial flash on one icestick after running some test that reprogrammed it in a loop.)
But this does not mean that you cannot damage your FPGA by programming it with an invalid bitstream. You could theoretically configure the FPGA in a way that produces a driver-driver conflict. I don't know how well the hardware deals with something like that. I have not run any experiments to find out.
Also, it would be helpful to have some kind of “higher-level” explanation format which e.g. shows the IE/REN bits on the I/O blocks to which they correspond, not the one on which they have to be set in the bitstream. Is there such a format?
icebox_vlog produces a higher-level output. But it does not output things like I/O blocks, so it might be too high-level for your needs.
I know of the possibility to generate an equivalent Verilog circuit, but the problem with this is that it doesn't usually allow me a lossless round-trip back into a bitstream. Is there a way to generate an equivalent Verilog circuit which (e.g. by instantiating the blocks explicitly) yields the exact same bitstream when processed with Yosys/arachne-pnr?
Not at the moment. But it should not be too hard to extend icebox_vlog to provide this functionality. So if you really need that, it might be something within your reach to add yourself.

Continuous resampling in parallel in Labview

I need to resample an arbitrary number of complex signals, perform some miscellaneous operations on them, and finally sum them and save them to a file. The length of the signals forces me to buffer the signals into chunks and operate on them as such.
Most (all that I could find) resampling VIs can operate on chunks, using a reset flag to differentiate between new and appended data. My issue is that I would like to perform resampling on my signals in parallel (or at least in an interleaved fashion), which doesn't work because the resample VI keeps its previous state. A way around this would be to resample each signal sequentially, save it to a temporary file and then operate on the new files. This is a poor solution.
Practically, what I need (I think) is to have the resampling VI be cloneable, then I could make an instance for each signal. The VI I am currently using is the "Rational Resample" VI.
Any ideas?
The Rational Resample VI is polymorphic, so you can just select the "Multi-Channel instance" to process several channels directly.
Moreover, even a single Rational Resample VI is defined as "Preallocated clones reentrant execution" (LV2014, 32-bit, Windows). So if you place several Rational Resample VIs into several different loops, each of them will maintain its own state (independent of the other instances). They will execute as parallel as the LabVIEW execution system allows. Source: http://zone.ni.com/reference/en-XX/help/371361J-01/lvconcepts/reentrancy/

Creating simple waveforms with CoreAudio

I am new to CoreAudio, and I would like to output a simple sine wave and square wave with a given frequency and amplitude through the speakers using CA. I don't want to use sound files as I want to synthesize the sound.
What do I need to do this? And can you give me an example or tutorial? Thanks.
There are a number of errors in the previous answer. I, the legendary :-) James McCartney, not James Harkins, wrote the sinewavedemo. I also wrote SuperCollider, which is what the audiosynth.com website is about. I also now work at Apple on CoreAudio. The sinewavedemo DOES use CoreAudio, since it uses AudioHardware.h from CoreAudio.framework as its way to play the sound.
You should not use the sinewavedemo. It is very old code and it makes dangerous assumptions about the buffer layout of the audio hardware. The easiest way nowadays to play a sound that you are generating is to use the AudioQueue, or to use an output audio unit with a render callback set.
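For example, here is a minimal sketch of the output-unit route on OS X (error handling omitted; the empty callback body and the choice of the default output unit are only illustrative):

    #include <AudioUnit/AudioUnit.h>

    static OSStatus renderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
    {
        /* fill ioData->mBuffers[i].mData with inNumberFrames of synthesized samples */
        return noErr;
    }

    void startOutput(void)
    {
        AudioComponentDescription desc = {0};
        desc.componentType = kAudioUnitType_Output;
        desc.componentSubType = kAudioUnitSubType_DefaultOutput; /* default hardware output */
        desc.componentManufacturer = kAudioUnitManufacturer_Apple;

        AudioComponent comp = AudioComponentFindNext(NULL, &desc);
        AudioUnit unit;
        AudioComponentInstanceNew(comp, &unit);

        /* hand CoreAudio the function it should call whenever it needs samples */
        AURenderCallbackStruct cb = { renderCallback, NULL };
        AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                             kAudioUnitScope_Input, 0, &cb, sizeof(cb));

        AudioUnitInitialize(unit);
        AudioOutputUnitStart(unit);
    }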
The best and easiest way to do that without files is to prepare a single-cycle buffer containing one cycle of the wave (technically, this is called a wavetable).
In the playback function called by the CoreAudio thread, fill the output buffer with samples read from the wave buffer (see the sketch after the list below).
Note however that you will face two problems very quickly:
- For the sine wave, if the playback frequency is not an integer multiple of the desired sine frequency, you will probably need to implement an interpolator if you want good quality. Using only integer table indices will generate a significant level of harmonic noise.
- For the square wave, avoid simply filling an array with +1/-1 values. Such a signal is not bandlimited and will alias a lot. Do not forget that the spectrum of a square wave is virtually infinite!
To get good algorithms for signal generation, take a look at musicdsp.org; it's probably one of the best resources for that.
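Here is a minimal sketch of the single-cycle-buffer idea with linear interpolation, in plain C. The table size, sample rate, and names are arbitrary choices, and the phase state would normally live in a per-voice struct rather than a global:

    #include <math.h>
    #include <stddef.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define TABLE_SIZE  1024
    #define SAMPLE_RATE 44100.0f

    static float table[TABLE_SIZE];
    static float phase = 0.0f;  /* read position, in table samples */

    void init_sine_table(void)
    {
        for (int i = 0; i < TABLE_SIZE; i++)
            table[i] = sinf(2.0f * (float)M_PI * i / TABLE_SIZE);
    }

    /* Fill out[] with n samples of a sine at freq Hz. */
    void render_sine(float *out, size_t n, float freq)
    {
        const float incr = freq * TABLE_SIZE / SAMPLE_RATE; /* table samples per output sample */
        for (size_t i = 0; i < n; i++) {
            int   i0   = (int)phase;
            int   i1   = (i0 + 1) % TABLE_SIZE;      /* wrap around the single cycle */
            float frac = phase - (float)i0;
            /* linear interpolation between the two nearest table entries */
            out[i] = table[i0] + frac * (table[i1] - table[i0]);
            phase += incr;
            if (phase >= TABLE_SIZE)
                phase -= TABLE_SIZE;
        }
    }

For a square wave, one common wavetable approach is to fill the table with a truncated sum of odd harmonics instead of raw +1/-1 values, so the result stays bandlimited.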
Are you new to audio programming in general? As a starting point I would check out
http://www.audiosynth.com/sinewavedemo.html
This is a minimal OS X sinewave implementation by the legendary James Harkins. Note, it doesn't use CoreAudio at all.
If you specifically want to use CoreAudio for your sinewave you need to create an output unit (RemoteIO on the iPhone, AUHAL on OS X) and supply an input callback, where you can pretty much use the code from the above example. Check out
http://developer.apple.com/mac/library/technotes/tn2002/tn2091.html
The chief benefits of CoreAudio are that you can chain other effects with your sinewave, write plugins for hosts like Logic (and provide the interfaces for them), or write a host (like Logic) for plugins that can be chained together.
If you don't want to write a plugin or host plugins, then CoreAudio might not actually be for you. But one of the best things about using CoreAudio is that once you get your sinewave callback working, it is easy to add effects or mix multiple sines together.
To do this you need to put your output unit in a graph, to which you can add effects, mixers, etc.
Here is some help on setting up graphs http://timbolstad.com/2010/03/16/core-audio-getting-started-pt2/
It isn't as difficult as it looks. Apple provides C++ helper classes for many things (/Developer/Examples/CoreAudio/PublicUtility) and even if you don't want to use C++ (you don't have to!) they can be a useful guide to the CoreAudio API.
If you are not doing this in realtime, using the sin() function from math.h is not a bad idea. Just fill however many samples you need with sin() beforehand; when it is time to play, send the buffer to the audio output. sin() can be quite slow to call once every sample if you are doing this in realtime; an interpolated wavetable lookup is much faster, but the resulting sound will not be as spectrally pure.
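A minimal sketch of that offline approach (the names and parameters are illustrative):

    #include <math.h>
    #include <stdlib.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Pre-fill a buffer with a sine; done once, ahead of playback. */
    float *make_sine(double freq, double sampleRate, size_t nSamples)
    {
        float *buf = malloc(nSamples * sizeof(float));
        if (!buf) return NULL;
        for (size_t i = 0; i < nSamples; i++)
            buf[i] = (float)sin(2.0 * M_PI * freq * (double)i / sampleRate);
        return buf; /* hand this to your audio queue / output buffer */
    }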
There is a good and well documented sine wave player code example in Chapter 7 of the Adamson/Avila "Learning Core Audio" book, published by Addison-Wesley Professional (ISBN-10: 0-321-63684-8):
http://www.informit.com/store/learning-core-audio-a-hands-on-guide-to-audio-programming-9780321636843
It is a rather new publication (2012) and addresses precisely the issue of this question. It's only a starting point, but it's a valuable starting point.
BTW, don't jump to graphs before having this basic lesson (which involves some math) behind you.
Concerning example code, a quick and efficient method I often use deals with a pre-filled sinewave lookup table which has as many entries as the sample rate; for 44100 Hz the table has a size of 44100. In other words, the cycle length equals the sample rate. This gives an acceptable trade-off between speed and quality in many cases. You can initialize it when the program starts.
If you generate floating-point samples (which is the default in OS X) and use math functions, use sinf() rather than (float)sin(). Promotions in the inner loop of a render callback are always resource-expensive. So are repeated multiplications of constants, such as 2.0*M_PI, which can all too often be found in code examples.
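For instance, a short sketch of the difference; the point is the hoisted constant and the single-precision sinf():

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    void fill_sine(float *out, int n, float freq, float sampleRate)
    {
        /* computed once, not per sample -- avoids 2.0*M_PI inside the loop */
        const float w = 2.0f * (float)M_PI * freq / sampleRate;
        for (int i = 0; i < n; i++)
            out[i] = sinf(w * (float)i); /* sinf: no float -> double promotion */
    }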