I am going through some documentation of a VoIP software that uses Live555 as the underlying network layer. Live555 appears to implement RTSP as per the RFC, but its jitter output is not clear to me. From the Live555 mailing-list archives, it seems that to get jitter in terms of micro- or milliseconds, I have to divide the reported jitter value by the sampling frequency. But what about the network bit-rate? Should I divide the jitter value by it instead to derive jitter in micro/milliseconds?
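To make my current understanding concrete, here is a small sketch of the conversion as I read it from the archives (the 8 kHz clock and the jitter value of 160 are just assumed examples for an audio payload):

```python
# Sketch of my understanding: RFC 3550 interarrival jitter is reported
# in RTP timestamp units, so dividing by the payload's RTP clock rate
# (the sampling frequency) should give seconds.
def jitter_to_ms(jitter_units, clock_rate_hz):
    return jitter_units / clock_rate_hz * 1000.0

# e.g. a reported jitter of 160 units on an assumed 8 kHz audio clock:
print(jitter_to_ms(160, 8000))  # -> 20.0 ms
```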
Any help is appreciated
I am trying to build an OFDM flow graph with a USRP, as shown below.
[Flowgraph screenshot]
The flowgraph runs without any errors. However, there is no received signal at the receiver end, as shown in results (1) and results (2).
It seems that the USRP does not get data/signal from the OFDM Transmitter, and the signal shown in the Rx spectrum is just noise.
Any recommendations to solve this problem?
Your sampling rate is too low. The USRPs you're using probably aren't actually running at the rate you requested; look for the warnings at the beginning of your flow graph execution about a different sampling rate being used (a sketch of how to verify the actual rate follows after these points).
The gain is probably too low for over-the-air transmission.
The analog bandwidth settings make no sense. Since this seems to be a network-connected USRP: only very few of these have adjustable front-end bandwidths anyway, and using a front-end bandwidth much larger than your sampling rate makes no sense.
Most importantly: this is communications, over a random channel, with noise, interferers and device imperfections. It's completely normal for packets to get lost.
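Here's a minimal sketch of how you could verify the first point with the UHD Python API (the empty device address and the 1 MS/s request are assumptions; adapt them to your setup):

```python
# Hypothetical check: ask UHD what rate the device actually accepted.
import uhd

usrp = uhd.usrp.MultiUSRP("")   # empty args string: first USRP found
usrp.set_tx_rate(1e6)           # request 1 MS/s (assumed)
usrp.set_rx_rate(1e6)

# Unsupported rates are coerced; read back what is really in effect:
print("actual TX rate:", usrp.get_tx_rate())
print("actual RX rate:", usrp.get_rx_rate())

# Same idea for gain: check the valid range before picking a value.
print("TX gain range:", usrp.get_tx_gain_range())
```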
I have interfaced an 8051 with an image sensor. The frame-valid signal is connected to INT1, and the image sensor is capable of delivering 60 frames per second, that is, 60 frame-valid interrupts per second. Is the 8051 capable of handling 60 interrupts per second?
Correct me if I am wrong.
Regards,
Geetha.
8051-based MCUs come in many shapes and forms. Most modern ones have much more efficient execution; the original ran at 12 clock cycles per instruction as far as I remember (I don't recall the interrupt-to-first-instruction latency, though). That being said, even the original ones are capable of 60 interrupts per second. The real question, though, is how much processing you need to actually transfer each frame and how this will be implemented in your MCU (DMA, etc.).
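To put rough numbers on that (a back-of-the-envelope sketch; the 12 MHz clock is an assumption, and 12 cycles per instruction is the classic-8051 timing mentioned above):

```python
# Back-of-the-envelope interrupt budget for a classic 8051.
clock_hz = 12_000_000                 # assumed 12 MHz crystal
cycles_per_instruction = 12           # original 8051 timing
instructions_per_second = clock_hz / cycles_per_instruction  # ~1 MIPS

frames_per_second = 60
budget = instructions_per_second / frames_per_second
print(f"~{budget:,.0f} instructions between frame interrupts")
# -> ~16,667: servicing 60 interrupts/s is trivial; moving the frame's
#    pixel data within that window is the real constraint.
```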
I am trying to set up 5 USRP1 devices and some daughterboards on 2.4 and 5 GHz.
Some of them are out of order and some work properly, but I don't know which is which. I am trying to send a QAM-modulated symbol sequence, feeding it from a file source to both a USRP sink and an FFT sink.
I am trying to find articles and tutorials on how sample rates are connected and how to set them up, but I can't understand what I am missing. Could somebody please help with the schemes?
128 MS/s is not a rate that is possible with the USRP1. The console will contain a UHD warning that a different, possible rate was chosen instead (most likely 8 MS/s).
Now, you contradict that rate by having a Throttle block in your flow graph. That block's job is to slow down the average rate at which samples are let through (and nothing more), and that is something your USRP Sink already does. In fact, modern versions of GRC will warn you that using a Throttle block in the same flow graph as a hardware sink or source is a bad idea.
Now, you'll say: "OK, if the USRP sink actually only needs to consume 8 MS/s, and my interpolator makes 128 MS/s out of my nominally 1 MS/s flow (really, signals within GNU Radio don't have a sampling rate), then that's got to be fast enough to satisfy the 8 MS/s demand!"
But the fact is that a 128x interpolator is a really CPU-intensive thing, and the resulting rate might not be that high, made even worse by the choppy nature of how Throttle works.
In fact, your interpolator is totally unnecessary. The USRP internally has proper interpolators for integer fractions of its 64 MS/s master clock rate, which means you can set the USRP Sink to a sampling rate of 1 MS/s and just connect the file source to it directly, as sketched below.
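A minimal sketch of that simplified flow graph with GNU Radio's Python API (the file name, frequency and gain are assumptions; the sink coerces the requested rate to something the hardware supports):

```python
# Hypothetical flow graph: file source -> USRP sink at 1 MS/s, with no
# Throttle and no software interpolator; the USRP does the upsampling.
from gnuradio import gr, blocks, uhd

class tx_top(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self)
        # Assumed recording of complex QAM symbols at a nominal 1 MS/s:
        src = blocks.file_source(gr.sizeof_gr_complex, "qam_symbols.dat", True)
        sink = uhd.usrp_sink(
            "",  # empty device address: first USRP found
            uhd.stream_args(cpu_format="fc32", channels=[0]),
        )
        sink.set_samp_rate(1e6)       # integer fraction of the 64 MS/s master clock
        sink.set_center_freq(2.45e9)  # assumed 2.4 GHz daughterboard
        sink.set_gain(20)             # assumed; check get_gain_range()
        self.connect(src, sink)       # the hardware sink paces the graph

tb = tx_top()
tb.start()
input("Transmitting, press Enter to stop\n")
tb.stop()
tb.wait()
```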
I want to detect heart rate using the iPhone SDK. Does someone know of any method for calculating heartbeat rate?
Fast Fourier Transform is a class of algorithms that can quickly turn samples into an analysis that tells you how prominently certain frequencies occur in those samples. For more, check out:
Wikipedia: FFT
Literate program example: Cooley-Tukey FFT
This is relevant to your problem because: (1) heart rate is itself a frequency, and (2) most of the sound that comes through the body that you can measure will be within a certain frequency range. Dropping frequencies outside this range means dropping all or mostly noise.
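As a rough illustration of that idea (numpy stands in here for whatever FFT routine you end up using on the phone, and the 0.7-3.5 Hz band is an assumed plausible range of roughly 42-210 bpm):

```python
# Hypothetical sketch: estimate heart rate as the dominant frequency
# in a plausible heart-rate band, treating everything outside as noise.
import numpy as np

def estimate_bpm(samples, sample_rate_hz):
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)

    # Keep only the band where a heart rate can plausibly live:
    band = (freqs >= 0.7) & (freqs <= 3.5)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0          # Hz -> beats per minute

# e.g. 10 s of a synthetic 1.2 Hz (72 bpm) "pulse" sampled at 50 Hz:
t = np.arange(0, 10, 1 / 50)
print(estimate_bpm(np.sin(2 * np.pi * 1.2 * t), 50))  # ~72.0
```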
Good luck!
Well, I've seen various implementations. Some of them use the accelerometer to detect minute movements in your arm/hand when you hold the phone; some use the microphone; you could also do a manual 'tap' interface where you tap the screen while checking your own pulse.
I have a ZyXEL USB Omni56K Duo modem and want to send and receive voice streams on it, but to reach adequate quality I probably need to implement some "ZyXEL ADPCM" encoding, because plain PCM provides too low a sampling rate to transmit even medium-quality voice, and it doesn't work through USB either (probably because even this bitrate is too high for the USB-serial converter in it).
This mysterious codec figures in all Microsoft WAV-related libraries as one of the many codecs theoretically supported, but I have found no implementations.
Can someone offer an implementation in any language, or maybe some documentation? Writing a custom mu-law decoding algorithm won't be a problem for me.
Thanks.
I'm not sure how ZyXEL ADPCM varies from other flavors of ADPCM, but various ADPCM implementations can be found with some Google searches.
However, the real reason for my post is the choice of ADPCM itself. ADPCM is adaptive differential pulse-code modulation. This means that the data being passed is the difference between samples, not the current value (which is also why you see such great compression). In a clean environment with no bit loss (i.e. a disk drive), this is fine. However, in a streaming environment, it's generally assumed that bits may be periodically mangled. Any bit damage to the data and you'll be hearing static or other audio artifacts very quickly and, usually, fairly badly.
ADPCM's reset mechanism isn't frame-based, which means the audio problems can go on for an extended period of time, depending on the encoder. The reset code is usually a run of 0s (16 comes to mind, but it's been years since I wrote my own ports).
ADPCM in the telephony environment usually converts a 12-bit PCM sample to a 4-bit ADPCM sample (not bad). As for audio quality... not bad for phone conversations and the spoken word, but most people, in a blind test, can easily detect the quality drop.
In your last sentence, you throw a curve ball into the question: you start mentioning mu-law. Mu-law is a PCM encoding that takes a 12-bit sample and transforms it, using a logarithmic scale, into an 8-bit sample. This is the typical compression mechanism for TDM (phone) networks in North America (most of the rest of the world uses a similar algorithm called A-law).
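Since you said writing a mu-law decoder won't be a problem, here is a minimal sketch of the classic G.711-style expansion for reference (the 0x84 bias and the bit layout are the standard textbook values):

```python
# Minimal G.711 mu-law expansion sketch: one 8-bit mu-law byte ->
# one linear PCM sample.
def mulaw_to_linear(u_val):
    u_val = ~u_val & 0xFF                 # mu-law bytes are stored inverted
    t = ((u_val & 0x0F) << 3) + 0x84      # 4-bit mantissa, plus the 0x84 bias
    t <<= (u_val & 0x70) >> 4             # scale by the 3-bit exponent
    return (0x84 - t) if (u_val & 0x80) else (t - 0x84)

# e.g. 0xFF encodes (near) zero:
print(mulaw_to_linear(0xFF))  # -> 0
```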
So, I'm confused what you are actually trying to find.
You also mentioned Microsoft and WAV implementations. You probably know, but just in case: WAV is just a wrapper around the audio data that provides format, sampling information, channel count, size and other useful information. Without WAV, AU or other wrappers involved, mu-law and ADPCM are usually presented as raw data.
One other tip if you are implementing ADPCM. As I indicated, it uses 4 bits to represent a 12-bit sample. It gets away with this because both sides keep a step-size table: your position in the table changes based on the 4-bit value (in other words, the value is both multiplied against a step size and used to figure out the new step size). I've seen a variety of algorithms use slightly different tables (no idea why, but you typically see the sent and received signals slowly stray off the bias). One of the older, popular sound packages was different from what I typically saw from the telephony hardware vendors.
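To make the table mechanism concrete, here is a minimal decoder sketch in the IMA/DVI style. Note this is explicitly the standard IMA flavor with its standard tables; ZyXEL's exact tables may differ, as discussed above.

```python
# Minimal IMA/DVI-style ADPCM nibble decoder sketch (standard IMA tables).
STEP_TABLE = [
    7, 8, 9, 10, 11, 12, 13, 14, 16, 17, 19, 21, 23, 25, 28, 31,
    34, 37, 41, 45, 50, 55, 60, 66, 73, 80, 88, 97, 107, 118, 130, 143,
    157, 173, 190, 209, 230, 253, 279, 307, 337, 371, 408, 449, 494, 544,
    598, 658, 724, 796, 876, 963, 1060, 1166, 1282, 1411, 1552, 1707,
    1878, 2066, 2272, 2499, 2749, 3024, 3327, 3660, 4026, 4428, 4871,
    5358, 5894, 6484, 7132, 7845, 8630, 9493, 10442, 11487, 12635, 13899,
    15289, 16818, 18500, 20350, 22385, 24623, 27086, 29794, 32767,
]
INDEX_ADJUST = [-1, -1, -1, -1, 2, 4, 6, 8]  # indexed by nibble & 7

def decode_nibble(nibble, predictor, index):
    """Decode one 4-bit ADPCM code into a 16-bit PCM sample."""
    step = STEP_TABLE[index]
    # Reconstruct the difference: each set bit adds a fraction of step.
    diff = step >> 3
    if nibble & 4:
        diff += step
    if nibble & 2:
        diff += step >> 1
    if nibble & 1:
        diff += step >> 2
    if nibble & 8:           # top bit is the sign
        diff = -diff
    predictor = max(-32768, min(32767, predictor + diff))
    # The same nibble also moves our position in the step table:
    index = max(0, min(88, index + INDEX_ADJUST[nibble & 7]))
    return predictor, index

# Usage: carry (predictor, index) state across the nibbles of the stream.
pred, idx = 0, 0
for code in (0x7, 0x3, 0xF, 0x0):    # made-up example nibbles
    pred, idx = decode_nibble(code, pred, idx)
    print(pred)
```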
And, for more useless trivia, there are multiple flavors of ADPCM. The variances involve the table, the source sample size and the destination sample size, but I've never had a need to work with them; they are just documented flavors that I found when searching for specifications of the various audio formats used in telephony.
Piping your PCM through ffmpeg -f u16le -i - -f wav -acodec adpcm_ms - will likely work (you may also need -ar and -ac before the -i to match your capture's sample rate and channel count).
http://ffmpeg.org/