How to increase interrupt sampling frequency on BeagleBone Black?

Currently I'm attempting to read a 600 ppr optical encoder with a simple attachInterrupt() function through the built-in Cloud9 IDE (node.js). The issue is that if the rotary encoder is rotated too quickly, position data is lost; it appears that the frequency of the signal from the encoder exceeds the interrupt's sampling rate.
My question: is there a way to increase the sampling rate to somewhere in the range of 100 kHz? Currently it seems to sample at roughly 2 kHz.
Thank you for your help!

Related

Two analog channels affect each other in PIC

I am doing a project to recognize gestures by reading ADC values on a PIC16F73 using embedded C. Everything works fine when using a single ADC channel. When I use multiple channels, the values affect each other. Is this a hardware error or a software problem?
Probably. It's very likely to be one, or the other, or both. Split the problem in half.
Eliminate one at a time. Put a scope/meter on both analog inputs. Change one input - does the other change too? If it does, there is at least a hardware issue. If not, it's software.
This is debugging 101.
It's a hardware effect, but not an error.
From the datasheet:
11.1 A/D Acquisition Requirements
For the A/D converter to meet its specified accuracy, the charge holding capacitor (CHOLD) must be allowed to fully charge to the input channel voltage level. The analog input model is shown in Figure 11-2. The source impedance (RS) and the internal sampling switch (RSS) impedance directly affect the time required to charge the capacitor CHOLD. The sampling switch (RSS) impedance varies over the device voltage (VDD); see Figure 11-2. The source impedance affects the offset voltage at the analog input (due to pin leakage current). The maximum recommended impedance for analog sources is 10 kΩ. After the analog input channel is selected (changed), the acquisition period must pass before the conversion can be started.
To calculate the minimum acquisition time, TACQ, see the PICmicro™ Mid-Range MCU Family Reference Manual (DS33023). In general, however, given a maximum source impedance of 10 kΩ and at a temperature of 100°C, TACQ will be no more than 16 µs.
It will likely be because you have high-impedance sources driving all the ADC pins. When the multiplexer switches from one input to the next, any charge stored on the ADC's sampling capacitor from the previous input will still be there.
If you drive each input with the output of a suitable op amp, then when the ADC's multiplexer switches, the op amp can drive charge into or pull charge out of the sampling capacitor, and the time needed to settle on the new input can be significantly reduced. Plus, with this method you are not loading the voltage you want to read.
If you cannot drive the pin from a low-impedance source, then make sure you allow plenty of time for the new input's value to settle before starting the conversion, as in the sketch below.
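To make the settling-time fix concrete, here is a minimal sketch of a guarded per-channel read, assuming a PIC16F73 with XC8-style register and bit names (ADCON0, GO_nDONE, ADRES) and an assumed 4 MHz clock; check all of these against your own compiler's device header.

#include <xc.h>

#define _XTAL_FREQ 4000000UL        /* assumed 4 MHz clock; adjust to yours */

/* Read one channel, letting the acquisition time elapse after switching
   the multiplexer so CHOLD can charge from the new source. */
unsigned char adc_read(unsigned char channel)
{
    /* Select the new input channel (CHS2:CHS0, bits 5:3 of ADCON0) */
    ADCON0 = (unsigned char)((ADCON0 & 0xC7) | ((channel & 0x07) << 3));
    __delay_us(20);                 /* wait >= TACQ (~16 us worst case) */
    GO_nDONE = 1;                   /* start the conversion */
    while (GO_nDONE)                /* hardware clears the bit when done */
        ;
    return ADRES;                   /* 8-bit result on the PIC16F73 */
}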

Regeneration of sine wave using microcontroller

I don't have much knowledge about microcontrollers. In my project, I need to phase-shift a sine wave. Here, I want to know: if I feed a pure sine wave at port A pin 2, will I get a shifted version of the pure sine wave at port B pin 2? Will the following instructions work?
Initialise port A as input and port B as output
call delay
portb = porta
We can generate a sine wave using the DAC in a microcontroller, but as it is not perfect, it won't meet the required conditions.
First of all, the input needs to go to an ADC, and the output needs to come from a DAC (or a PWM with appropriate output filtering). It is not clear from your question that the pins you have chosen are appropriate for that.
If you are generating the sine wave from the DAC, why would you apply it to an input only to output it again? If you need two sine waves shifted in phase, why not simply generate calculated outputs from two DACs or PWMs, as in the sketch below? Either way you need two analogue outputs, but that way you do not need any input. A PWM will need more analogue filtering than a DAC and is likely to support lower bandwidth, but most microcontrollers have more PWMs than DACs.
You cannot simply call a delay and then copy port A to port B; that would simply be a copy of A to B after a delay. You need to take samples from A and place them in a FIFO buffer, then apply the output of the FIFO to B. The length of the FIFO determines the delay.
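As a rough illustration of the two-output suggestion, here is a minimal sketch that steps one sine lookup table with two indices offset by the desired phase. The write_dac1()/write_dac2() hooks are hypothetical stand-ins for your part's DAC registers (or PWM duty cycles), and sine_tick() is assumed to be called at a fixed sample rate from a timer interrupt.

#include <stdint.h>
#include <math.h>

#define TABLE_LEN 256   /* a power of two, so a uint8_t index wraps cleanly */

static uint8_t sine_table[TABLE_LEN];

extern void write_dac1(uint8_t value);   /* hypothetical output hooks */
extern void write_dac2(uint8_t value);

/* Fill the table once at startup with one sine cycle scaled to 0..255. */
void sine_table_init(void)
{
    for (int i = 0; i < TABLE_LEN; i++)
        sine_table[i] = (uint8_t)(127.5f +
            127.5f * sinf(2.0f * 3.14159265f * (float)i / TABLE_LEN));
}

/* Call at a fixed rate; phase_offset is in table steps,
   e.g. 64 steps = 90 degrees for a 256-entry table. */
void sine_tick(uint8_t phase_offset)
{
    static uint8_t idx = 0;
    write_dac1(sine_table[idx]);
    write_dac2(sine_table[(uint8_t)(idx + phase_offset)]);
    idx++;                           /* wraps at 256 == TABLE_LEN */
}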
A microcontroller is not an analogue device; you cannot put an analogue signal on just any old pin and transfer that signal to another pin. Most pins are digital GPIO: they accept just two states representing 0 or 1. No matter what voltage you apply, it will be interpreted as either high or low.
Rather, you will have to use an ADC input, sample at a sufficiently high frequency, delay the samples through a FIFO, then apply the delayed samples to a DAC, as sketched below. Reconstructing a "pure" sine wave from the quantised DAC output requires analogue filtering circuitry. With a filter cut-off below half the sampling rate you will recover a reasonably good representation of the original signal (which can be any signal with components below half the sampling frequency - it need not be a sine wave). If you do use a more complex signal, you will need to analogue-filter the input to remove components above half the sampling rate to avoid aliasing.
It might be possible to do all that on one chip using a Cypress PSoC, since these are hybrid chips with reconfigurable analogue elements as well as a microcontroller.
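A minimal sketch of that ADC-to-FIFO-to-DAC scheme, assuming hypothetical read_adc()/write_dac() routines for your part and a timer interrupt that calls sample_tick() once per sample period:

#include <stdint.h>

#define DELAY_SAMPLES 64   /* delay = DELAY_SAMPLES / sample rate,
                              e.g. 64 / 8000 Hz = 8 ms */

extern uint16_t read_adc(void);       /* hypothetical ADC sample read */
extern void write_dac(uint16_t v);    /* hypothetical DAC output write */

static uint16_t fifo[DELAY_SAMPLES];
static uint8_t idx = 0;

/* Once per sample period: output the sample taken DELAY_SAMPLES
   ago, then store the fresh one in its place. */
void sample_tick(void)
{
    write_dac(fifo[idx]);
    fifo[idx] = read_adc();
    idx = (uint8_t)((idx + 1) % DELAY_SAMPLES);
}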

Audio unit sample rate and buffer size

I am facing a real misunderstanding when sampling iPhone audio with RemoteIO.
On one hand, I can do this math: a 44 kHz sample rate means 44 samples per 1 ms, which means that if I set the buffer duration to 0.005 s with:
float bufferLength = 0.005;
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, sizeof(bufferLength), &bufferLength);
that is a 5 ms buffer, which means 44 * 5 = 220 samples in the buffer on each callback.
BUT I get 512 samples from inNumberFrames on each callback, and it stays fixed even when I change the buffer length.
Another thing: my callbacks arrive every 11 ms, and that is not changing! I need faster callbacks.
So, what is going on here? Who sets what?
I need to pass digital information in an FSK modulation, and have to know the exact buffer size in samples, and how much time of the signal it holds, in order to know how to FFT it correctly.
Any explanation of this?
Thanks a lot.
There is no way on all current iOS 10 devices to get RemoteIO audio recording buffer callbacks at a faster rate than every 5 to 6 milliseconds. The OS may even decide at runtime to switch to sending larger buffers at a lower callback rate. The rate you request is merely a request; the OS then decides on the actual rates that are possible for the hardware, device driver, and device state. This rate may or may not stay fixed, so your app will just have to deal with different buffer sizes and rates.
One of your options might be to concatenate each callback buffer onto your own buffer, and chop up this second buffer however you like outside the audio callback, as sketched below. But this won't reduce actual latency.
Added: some newer iOS devices allow returning audio unit buffers that are shorter than 5.x ms in duration, usually a power of 2 in size at a 48000 Hz sample rate.
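A minimal sketch of that concatenate-and-chop scheme (all names here are illustrative, not any Apple API): the audio callback appends whatever iOS delivered into a ring buffer, and the FSK/FFT code later pulls out blocks of exactly the size it wants. No locking is shown, so a real app would need a single-producer/single-consumer discipline or a lock-free FIFO.

#include <stdint.h>

#define RING_CAPACITY 8192   /* samples; a power of two, and larger
                                than the worst-case burst of callbacks */

static float ring[RING_CAPACITY];
static uint32_t head = 0, tail = 0;   /* write and read sample counters */

/* Call from the audio callback with however many frames iOS delivered. */
void ring_write(const float *src, uint32_t n)
{
    for (uint32_t i = 0; i < n; i++)
        ring[(head + i) % RING_CAPACITY] = src[i];
    head += n;
}

/* Call from the processing thread; returns 1 when a full fixed-size
   block (e.g. your FFT size) has been copied out, 0 otherwise. */
int ring_read_block(float *dst, uint32_t n)
{
    if (head - tail < n)
        return 0;                     /* not enough samples buffered yet */
    for (uint32_t i = 0; i < n; i++)
        dst[i] = ring[(tail + i) % RING_CAPACITY];
    tail += n;
    return 1;
}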
I need to pass digital information in an FSK modulation, and have to know the exact buffer size in samples, and how much time of the signal it holds, in order to know how to FFT it correctly.
It doesn't work that way - you don't get to mandate that various hosts or hardware operate in an exact manner that happens to be optimal for your processing. You can request reduced latency - but only to a point. Audio systems generally pass streaming PCM data in blocks of samples sized to a power of two for efficient real-time I/O.
You would create your own buffer for your processor, and report latency (where applicable). You can attempt to reduce wall-clock latency by choosing another sample rate, or by using a smaller N (FFT size).
The audio session property is a suggested value. You can put in a really tiny number, but it will just go to the lowest value it can. The fastest that I have seen on an iOS device when using 16-bit stereo was 0.002902 seconds (~3 ms).
That is 128 samples (L-R stereo frames) per callback, and thus 512 bytes per callback.
So 128 / 44100 = 0.002902 seconds.
You can check it with:
Float32 bufferDuration = 0;
UInt32 size = sizeof(bufferDuration);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration, &size, &bufferDuration);
Could the value 512 in the original post have meant bytes instead of samples?

Recognition of a short high-frequency sound in low-frequency noise (ObjC/C)

I am currently creating an application which signals readiness to other devices using a high-frequency sound.
(Transmitter): a device will produce a short burst of sound at around 20 kHz.
(Receiver): another device will be listening for a sound at this frequency at a small distance from the transmitter (approx. 10 m). The device receives audio data from a microphone.
The background noise will be fairly loud, varying from around 0 to 10 kHz (about the human speech range), and would be produced by a small crowd of people.
I need the receiving device to be able to detect the 20 kHz sound, separated from the noise, and to know the time at which it was received.
Any help with an appropriate algorithm, a library, or even better, code in C or ObjC to detect this high-frequency sound would be greatly appreciated.
20 kHz may be pushing it, as (a) most sound cards have low-pass (anti-aliasing) filters at 18-20 kHz and (b) most speakers and microphones tend to have a poor response at 20 kHz. You might want to consider, say, 15 kHz?
The actual detection part should be easy - just implement a narrow band-pass filter at the tone frequency, rectify the output, and low-pass filter the result (e.g. at 10 Hz), as in the sketch below.
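A minimal sketch of that band-pass / rectify / low-pass chain in plain C, using one biquad band-pass (coefficients per the well-known RBJ audio EQ cookbook) and a one-pole smoother; the detection threshold applied to the returned envelope is left to the caller and would have to be calibrated.

#include <math.h>

typedef struct { float b0, b1, b2, a1, a2, z1, z2; } Biquad;

/* Band-pass centred on f0 (Hz) at sample rate fs (Hz); Q sets the
   bandwidth, roughly f0/Q. */
void bandpass_init(Biquad *f, float fs, float f0, float Q)
{
    float w0 = 2.0f * 3.14159265f * f0 / fs;
    float alpha = sinf(w0) / (2.0f * Q);
    float a0 = 1.0f + alpha;
    f->b0 =  alpha / a0;
    f->b1 =  0.0f;
    f->b2 = -alpha / a0;
    f->a1 = -2.0f * cosf(w0) / a0;
    f->a2 = (1.0f - alpha) / a0;
    f->z1 = f->z2 = 0.0f;
}

/* Transposed direct form II; one sample in, one sample out. */
static float biquad_process(Biquad *f, float x)
{
    float y = f->b0 * x + f->z1;
    f->z1 = f->b1 * x - f->a1 * y + f->z2;
    f->z2 = f->b2 * x - f->a2 * y;
    return y;
}

/* Feed one microphone sample; returns a slowly varying envelope of
   the band-passed tone. Declare the tone present while the envelope
   stays above your threshold. */
float detect_sample(Biquad *f, float *env, float x)
{
    float y = fabsf(biquad_process(f, x));   /* rectify */
    *env += 0.0015f * (y - *env);            /* ~10 Hz one-pole LPF at 44.1 kHz */
    return *env;
}

For a 15 kHz tone at a 44.1 kHz sample rate you might call bandpass_init(&f, 44100.0f, 15000.0f, 30.0f) and run every incoming sample through detect_sample(); timing the moment the envelope crosses the threshold gives the arrival time to within a few milliseconds.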
You may want to look into FFT (Fast Fourier Transform). This algorithm will allow you to analyse a waveform and convert it to the frequency spectrum for further analysis.
If this is for Mac OS or iOS, I'd start looking into Core Audio's Audio Units.
1. Here's Apple's Core Audio Overview.
2. Some AudioUnits for Mac OS.
3. Or, for iOS, AudioUnit Hosting.
A sound at that high a frequency will not travel at all from the iPhone speaker.

Detecting heartbeat peak power using the iPhone SDK?

I want to detect heart rate using the iPhone SDK. Does someone know of any method for calculating heartbeat rate?
The Fast Fourier Transform (FFT) is a class of algorithms that can quickly turn samples into an analysis that tells you how prominently certain frequencies occur in that sample. For more, check out:
Wikipedia: FFT
Literate program example: Cooley-Tukey FFT
This is relevant to your problem because: (1) heart rate is itself a frequency, and (2) most of the sound that comes through the body that you can measure will be within a certain frequency range. Dropping frequencies outside this range means dropping all or mostly noise.
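To make point (1) concrete, here is an illustrative sketch (not from any SDK) that scans the plausible heart-rate band for the strongest frequency component in a captured buffer; with so few frequencies of interest, a naive DFT over the band is sufficient, though a full FFT would work too.

#include <math.h>

/* x: n samples of a filtered pulse signal captured at fs Hz. */
float estimate_bpm(const float *x, int n, float fs)
{
    float best_f = 0.0f, best_power = -1.0f;
    /* Scan 0.7..3.5 Hz, i.e. roughly 42..210 beats per minute. */
    for (float f = 0.7f; f <= 3.5f; f += 0.05f) {
        float re = 0.0f, im = 0.0f;
        for (int i = 0; i < n; i++) {
            float w = 2.0f * 3.14159265f * f * (float)i / fs;
            re += x[i] * cosf(w);
            im -= x[i] * sinf(w);
        }
        float power = re * re + im * im;     /* squared magnitude at f */
        if (power > best_power) { best_power = power; best_f = f; }
    }
    return best_f * 60.0f;                   /* Hz -> beats per minute */
}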
Good luck!
Well, I've seen various implementations. Some of them use the accelerometer to detect minute movements in your arm/hand when you hold the phone, some use the microphone, and you could also do a manual 'tap' interface where you tap the screen while checking your own pulse.