How to set the jitter and loss rate of a channel when using PointToPoint in ns-3?

I am new to NS3 and learning it from the tutorial. The tutorial example first.cc shows how to use PointToPointHelper and UdpEchoClientHelper to build a point-to-point test, and it shows how to set the data rate and delay of the channel. But I want to set the jitter and loss rate of the channel; is there any method for that?
PointToPointHelper pointToPoint;
pointToPoint.SetDeviceAttribute ("DataRate", StringValue ("5Mbps"));
pointToPoint.SetChannelAttribute ("Delay", StringValue ("2ms"));
....

Loss Rate of Channel
The good news is that the tutorial already covers this:
ns-3 provides ErrorModel objects which can be attached to NetDevices. Here the tutorial uses the RateErrorModel, which allows us to introduce errors into the received packet stream at a given rate.
Ptr<RateErrorModel> em = CreateObject<RateErrorModel> ();
em->SetAttribute ("ErrorRate", DoubleValue (0.00001));
devices.Get (1)->SetAttribute ("ReceiveErrorModel", PointerValue (em));
The above code instantiates a RateErrorModel object and sets its "ErrorRate" attribute to the desired value. We then install the instantiated RateErrorModel as the receive error model of the point-to-point NetDevice. In the tutorial's TCP example this produces some retransmissions and makes the plot a little more interesting; with the UDP echo traffic from first.cc, the affected packets are simply lost.
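If it helps, here is a rough sketch of how that drops into first.cc; the "devices" container and the two-node indexing come from that example, while the per-packet error unit and the 1% rate are just illustrative choices (attribute defaults can differ between ns-3 releases):
Ptr<RateErrorModel> em = CreateObject<RateErrorModel> ();
// Interpret ErrorRate per packet rather than per byte/bit, so 0.01 reads as
// roughly 1% packet loss (the default unit may differ by ns-3 version).
em->SetAttribute ("ErrorUnit", StringValue ("ERROR_UNIT_PACKET"));
em->SetAttribute ("ErrorRate", DoubleValue (0.01));
// Install it as the receive error model on both NetDevices so losses are
// applied to traffic in both directions.
devices.Get (0)->SetAttribute ("ReceiveErrorModel", PointerValue (em));
devices.Get (1)->SetAttribute ("ReceiveErrorModel", PointerValue (em));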
Jitter
Jitter is a function of the processing and queuing delay encountered by packets. It's not a quantity that you set directly; rather, it is calculated from latency measurements of all packets across the life of a connection. The typical definition of jitter is the standard deviation of that per-packet latency.
So, ns-3 does not offer a way to set jitter directly (though it could since it's a simulator, but I digress).
But there is hope: if you want to change the jitter, you need to change the processing and queuing delays. Processing delay is a bit iffy, but queuing delays can easily be changed by choosing the type of Queue on the NetDevice. Refer to the TxQueue Attribute of the PointToPointNetDevice.
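As a rough, illustrative sketch (not an ns-3-documented recipe for jitter), you could deepen or shrink the transmit queue so that packets wait behind each other for varying amounts of time; the DropTailQueue<Packet> type and the MaxSize attribute below follow recent ns-3 releases, while older releases use a MaxPackets attribute on the TxQueue instead:
PointToPointHelper pointToPoint;
pointToPoint.SetDeviceAttribute ("DataRate", StringValue ("5Mbps"));
pointToPoint.SetChannelAttribute ("Delay", StringValue ("2ms"));
// A deeper drop-tail queue lets bursts pile up behind each other, so the
// per-packet delay (and therefore the measured jitter) varies more; a very
// small queue keeps delay steadier but drops packets during bursts.
pointToPoint.SetQueue ("ns3::DropTailQueue<Packet>",
                       "MaxSize", StringValue ("100p"));
You would then measure the jitter yourself on the receiving side, for example from the per-flow delay statistics that the FlowMonitor module collects.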

Related

OFDM system with USRP implementation

I am trying to build an OFDM flow graph with a USRP, as shown below:
[Flowgraph screenshot]
The flowgraph works fine without any errors. However, there is no received signal at the receiver end, as shown in results(1) and results(2).
It seems that the USRP does not get data/signal from the OFDM Transmitter, and the signal shown in the Rx spectrum is just noise.
Any recommendation to solve this problem?
Your sampling rate is too low. The USRPs you're using probably aren't actually running at that rate; look for the warnings at the beginning of your flow graph execution about a different sampling rate being used.
The gain is probably too low for over-the-air transmission.
The analog bandwidth settings make no sense. Since this seems to be a network-connected USRP: only very few of these have an adjustable frontend bandwidth anyway, and using a frontend bandwidth much larger than your sampling rate makes no sense.
Most importantly: this is communications, with a random channel, noise, interferers and device imperfections. It's very normal for packets to get lost.

Media Source Extensions JavaScript API vis-a-vis WebRTC: some questions

The closest thing I came across is this question on SO, but that is just for basic understanding.
My question is: when Media Source Extensions (MSE) is used and the media source is fetched from a remote endpoint, for example through AJAX, the Fetch API, or even a WebSocket, the media is sent over TCP.
TCP will handle packet loss and sequencing, so a protocol like RTP with RTCP is not used. Is that correct?
But this will result in delay so it cannot be truly used for real-time communication. Yes?
There is no security/encryption requirement for MSE like in WebRTC (DTLS/SRTP). Yes?
One cannot, for example, mix a remote audio source from MSE with an audio MediaStreamTrack from an RTCPeerConnection, as they do not share any common parameter like a CNAME (RTCP), nor are they part of the same MediaStream. In other words, the worlds of MSE and WebRTC cannot mix unless synchronization is not important. Correct?
TCP will handle packet loss and sequencing, so a protocol like RTP with RTCP is not used. Is that correct?
AJAX and Fetch are just JavaScript APIs for making HTTP requests. WebSocket is just an API and protocol bootstrapped from an initial HTTP request. HTTP runs over TCP, and TCP takes care of ensuring that packets arrive, and arrive in order. So yes, you won't need to worry about packet loss and such, but not because of MSE.
But this will result in delay so it cannot be truly used for real-time communication. Yes?
That depends entirely on your goals. It's a myth that TCP isn't fast, or that TCP adds latency to every packet. What is true is that the initial 3-way handshake costs a round trip before any data flows. It's also true that if a packet does actually get dropped, the application sees latency spike sharply until the lost packet is retransmitted and delivered.
If your goal is something like a telephony application, where the loss of a packet or two is meaningless overall, then UDP is more appropriate. (In voice communication we talk slowly enough that if a few milliseconds of sound go missing, we can still decipher what was said. Our spoken language is robust enough that even if entire words get garbled or go silent, we can figure out the gist from context.) It's also important that immediate continuity be kept for voice communication. The tradeoff is that being real-time matters more than accuracy at any particular instant or packet.
However, if you're doing something like a one-way stream, you might choose a protocol that runs over TCP. In that case it may be important to be as real-time as possible, but more important that the audio/video doesn't glitch out. Consider the Super Bowl, or some other large sporting event: it's a live event and it's important that it stays real-time. However, if the viewer's time reference is only 3-5 seconds behind live, it's still "live" enough for the viewer. The viewer would be far angrier if the video glitched out and they missed something happening in the game than if they were just a few seconds behind. Since it's one-way streaming and there is no communication feedback loop, trading extreme low latency for reliability and quality makes sense.
There is no security/encryption requirement for MSE like in WebRTC (DTLS/SRTP). Yes?
MSE doesn't know or care how you get your data.
One cannot, for example, mix a remote audio source from MSE with an audio MediaStreamTrack from an RTCPeerConnection, as they do not share any common parameter like a CNAME (RTCP), nor are they part of the same MediaStream. In other words, the worlds of MSE and WebRTC cannot mix unless synchronization is not important. Correct?
Mix, where? Synchronize, where? No matter what you do, if you have streams coming from different places... or even from different devices without sync/genlock, they're out of sync. However, if you can define a point of reference where you consider things "synchronized", then it's all good. You could, for example, have independent streams going into a server, and the server uses its own timestamps to line everything up and distribute it together via WebRTC.
How you do this, or what you do, depends on the specifics of your application.

USRP1 and GNU Radio

I am trying to set up 5 USRP1 and some daughterboards on 2.4 and 5 GHz.
Some of them are out of order and some work properly, but I don't know which is which. I am trying to send a symbol sequence (QAM modulation) and pass it from a file source to both a USRP sink and an FFT sink.
I have been looking for articles and tutorials on how the sample rates are related and how to set them, but I can't figure out what I am missing. Could somebody please help with the schemes?
128 MS/s is not a rate that is possible with the USRP1. The console will contain a UHD warning that a different, possible rate was chosen instead (most likely 8 MS/s).
Now, you contradict that rate by having a "Throttle" block in your flow graph - that block's job is only (and nothing more) to slow down the average rate at which samples are being let through – and that is something your "USRP Sink" already does. In fact, modern versions of GRC will warn you that using a throttle block in the same flow graph as a hardware sink or source is a bad idea.
Now, you'll say "OK, if the USRP sink actually only needs to consume 8 MS/s, and my interpolator makes 128 MS/s out of my nominally 1 MS/s flow (really, signals within GNU Radio don't have a sampling rate of their own), then that's got to be fast enough to satisfy the 8 MS/s demand!".
But the fact is that a 128x interpolator is a really CPU-intensive thing, and the resulting rate might not be that high, made even worse by the choppy way Throttle works.
In fact, your interpolator is totally unnecessary. The USRP internally has proper interpolators for integer fractions of its 64 MS/s master clock rate, which means you can set the USRP Sink to a sampling rate of 1 MS/s and connect the file source to it directly.

How can I calculate an optimal UDP packet size for a datastream?

A short radio link with a data source attached, a required throughput of 1280 Kbps over IPv6, a UDP stop-and-wait protocol, and no other clients or noticeable noise sources in the area. How on earth can I calculate the best packet size to minimise overhead?
UPDATE
I thought it would be an idea to show my working so far:
IPv6 has a 40 byte header, so including ACK responses, that's 80 bytes overhead per packet.
To meet the throughput requirement, 1280 K/p packets need to be sent per second, where p is the packet payload size.
So by my reckoning the total overhead is (1280 K/p) * 80, and throwing that into Wolfram gives a function with no minimum, so there is no 'optimal' value.
I did a lot more math trying to shoehorn bit-error-rate calculations in there, but came up against the same thing: if there's no minimum, how do I choose the optimal value?
Your best bet is to use a simulation framework for networks. This is a hard problem, and doesn't have an easy answer.
NS2 or SimPy can help you devise a discrete event simulation to find optimal conditions, if you know your model in terms of packet loss.
Always work with the largest packet size available on the network, then in deployment configure the network MTU for the most reliable setting.
Consider latency requirements: how is the payload being generated? Do you need to wait for sufficient data before sending a packet, or can you send immediately?
The radio channel is already optimized against noise at the low, per-packet level; you will usually have other demands on the implementation, such as power requirements: sending in heavy batches or under a light continuous load.
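As a rough sanity check on the working above (using the asker's own 80-byte per-packet overhead figure and the 1280 Kbps goal; the 1232-byte row is just an example payload, being what fits a UDP datagram into the IPv6 minimum MTU of 1280 bytes), a few lines of C++ show the overhead falling monotonically as the payload grows, which is why there is no interior minimum and the practical answer is the largest payload the path MTU allows:
#include <cstdio>

int main ()
{
  const double goodput = 1280000.0 / 8.0; // 1280 Kbps of payload, in bytes per second
  const double overheadPerPacket = 80.0;  // question's figure: IPv6 header + ACK

  // Overhead per second is (goodput / p) * 80, which strictly decreases in p,
  // so there is no interior minimum: bigger payloads always mean less overhead.
  for (double p : { 128.0, 256.0, 512.0, 1024.0, 1232.0 })
    {
      double packetsPerSecond = goodput / p;
      double overheadBytesPerSecond = packetsPerSecond * overheadPerPacket;
      std::printf ("payload %6.0f B: %8.1f pkt/s, overhead %9.1f B/s (%.1f%% of payload)\n",
                   p, packetsPerSecond, overheadBytesPerSecond,
                   100.0 * overheadPerPacket / p);
    }
  return 0;
}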

Detecting heartbeat peak power using the iPhone SDK?

I want to detect heart rate using the iPhone SDK. Does anyone know a method for calculating the heartbeat rate?
The Fast Fourier Transform (FFT) is a class of algorithms that can quickly turn samples into an analysis telling you how prominently certain frequencies occur in that sample. For more, check out:
Wikipedia: FFT
Literate program example: Cooley-Tukey FFT
This is relevant to your problem because: (1) heart rate is itself a frequency, and (2) most of the sound that comes through the body that you can measure will be within a certain frequency range. Dropping frequencies outside this range means dropping all or mostly noise.
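For illustration only, here is a minimal, self-contained sketch of that idea in plain C++ rather than iPhone SDK code: it searches a plausible heart-rate band (40-200 bpm) for the strongest frequency using a naive DFT. The sample rate and the synthetic 1.2 Hz input are made-up stand-ins for whatever the microphone or camera actually delivers, and a real app would use an FFT library instead of the brute-force loop.
#include <cmath>
#include <cstdio>
#include <vector>

// Return the dominant frequency (Hz) of `samples` within [loHz, hiHz],
// using a naive DFT over just the bins inside that band.
double dominantFrequency (const std::vector<double> &samples, double sampleRateHz,
                          double loHz, double hiHz)
{
  const double kPi = 3.14159265358979323846;
  const size_t n = samples.size ();
  double bestFreq = loHz;
  double bestPower = -1.0;
  for (size_t k = 1; k < n / 2; ++k)
    {
      double freq = k * sampleRateHz / n;
      if (freq < loHz || freq > hiHz)
        continue;
      double re = 0.0, im = 0.0;
      for (size_t t = 0; t < n; ++t)
        {
          double angle = 2.0 * kPi * k * t / n;
          re += samples[t] * std::cos (angle);
          im -= samples[t] * std::sin (angle);
        }
      double power = re * re + im * im;
      if (power > bestPower)
        {
          bestPower = power;
          bestFreq = freq;
        }
    }
  return bestFreq;
}

int main ()
{
  const double kPi = 3.14159265358979323846;
  const double fs = 50.0;            // made-up sample rate: 50 samples/second
  std::vector<double> samples (500); // 10 seconds of data
  // Fake input: a clean 1.2 Hz "pulse" (72 bpm); real input would come from
  // the microphone or from the camera's brightness signal.
  for (size_t t = 0; t < samples.size (); ++t)
    samples[t] = std::sin (2.0 * kPi * 1.2 * t / fs);

  // Restrict the search to plausible heart rates: 40-200 bpm = 0.67-3.33 Hz,
  // which is the "drop everything outside the band" step described above.
  double hz = dominantFrequency (samples, fs, 40.0 / 60.0, 200.0 / 60.0);
  std::printf ("Estimated heart rate: %.0f bpm\n", hz * 60.0);
  return 0;
}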
Good luck!
Well, I've seen various implementations. Some of them use the accelerometer to detect minute movements in your arm/hand when you hold the phone; some use the microphone; you could also build a manual 'tap' interface where you tap the screen while checking your own pulse.