I'm using the WebRTC data channel to send JSON data, and it seems to work fine for small packets.
However, when I try to send a larger payload (the HTML of a web page, base64 encoded, so maybe a few hundred KB), it never makes it to the other end.
Is there a max size?
I think the spec doesn't say a word about a maximum message size. In practice 16 KB is the limit. Take a look at this blog post, and especially the throughput / packet size diagram. That figure was established experimentally, and it is the one that gives the best compatibility between WebRTC implementations.
I've managed to send packets as large as 256 KB (and even larger, if memory serves me right) between two Firefox instances. That was about a year ago, and the implementation may have changed since then, and the maximum data size with it.
If you want to send messages larger than 16 KB you have to fragment them first, and the fragmentation has to be implemented as part of your application protocol.
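To make the idea concrete, here is a minimal sketch of one possible framing scheme (written in C only for illustration; the header layout, the chunk size and the function names are my own assumptions, not part of the original answer). The receiver would reassemble chunks keyed on the message id once it has seen all of them; a real protocol would also serialize the header fields explicitly to avoid struct padding and endianness issues.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define CHUNK_PAYLOAD 16000              /* stay safely under the ~16 KB limit */

/* Hypothetical header prepended to every chunk: message id, chunk index
   and total chunk count let the receiver reassemble the fragments. */
typedef struct {
    uint32_t msg_id;      /* same for every chunk of one message */
    uint16_t index;       /* 0 .. total-1                        */
    uint16_t total;       /* number of chunks in this message    */
} chunk_header_t;

/* Splits data into chunks and hands each framed chunk to send()
   (e.g. a wrapper around your data channel's send call). */
void send_fragmented(uint32_t msg_id,
                     const uint8_t *data, size_t len,
                     void (*send)(const uint8_t *buf, size_t n))
{
    uint16_t total = (uint16_t)((len + CHUNK_PAYLOAD - 1) / CHUNK_PAYLOAD);
    uint8_t frame[sizeof(chunk_header_t) + CHUNK_PAYLOAD];

    for (uint16_t i = 0; i < total; i++) {
        size_t off = (size_t)i * CHUNK_PAYLOAD;
        size_t n   = (len - off < CHUNK_PAYLOAD) ? len - off : CHUNK_PAYLOAD;

        chunk_header_t h = { msg_id, i, total };
        memcpy(frame, &h, sizeof h);
        memcpy(frame + sizeof h, data + off, n);
        send(frame, sizeof h + n);
    }
}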
I'm looking for a multi-platform video format that allows a variable bit rate for streaming over high- or low-quality links, and also a seek function that accepts a byte-range offset (as in HTTP range requests) to fetch a missing or lower-than-desired-quality portion.
I think it is worth separating encoding from streaming to give some background.
Most streaming protocols stream video in 'containers', e.g. MP4, WebM, etc. The containers include video tracks which can be encoded with different encoders, e.g. H.264, H.265, VP9, etc.
The term variable bit rate usually describes how the encoding is done - i.e. the encoder may either compress the video to a fixed bit rate with variable quality, or try to maintain a given quality level at a variable bit rate.
I suspect what you may be more interested in is what is called Adaptive Bit Rate streaming - here the video is 'transcoded' into multiple copies, each with a different bit rate. The copies are all segmented at the same points, for example every two seconds.
A client can choose which bit rate to request for the next segment of the video, i.e. the next 2 second chunk, depending on its capabilities and on the network conditions at that time. Hence, the bit rate actually being streamed to the device can vary over time. See this answer for how you can view this in action in live examples: https://stackoverflow.com/a/42365034/334402
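Purely as an illustration of that per-segment decision (this is not code from any real player; the bit-rate list and the 20% safety margin are assumptions), the rendition choice an ABR client makes before requesting the next segment can be sketched like this:

#include <stddef.h>

/* Hypothetical bit rates (bits per second) of the available renditions,
   lowest first - in a real player these come from the HLS/DASH manifest. */
static const long RENDITIONS_BPS[] = { 400000, 1200000, 2500000, 5000000 };
#define NUM_RENDITIONS (sizeof RENDITIONS_BPS / sizeof RENDITIONS_BPS[0])

/* Pick the highest rendition whose bit rate fits inside the throughput
   measured while downloading the previous segment, with a safety margin. */
size_t pick_next_rendition(double measured_throughput_bps)
{
    double budget = measured_throughput_bps * 0.8;   /* 20% head room        */
    size_t choice = 0;                               /* fall back to lowest  */
    for (size_t i = 0; i < NUM_RENDITIONS; i++) {
        if (RENDITIONS_BPS[i] <= budget)
            choice = i;
    }
    return choice;   /* index of the segment variant to request next */
}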
Assuming this meets your needs, then the two dominant ABR streaming formats at the moment are HLS and MPEG-DASH.
Traditionally HLS uses TS segments while DASH uses fragmented MP4. Both are now converging on CMAF, which means that the bulk of the media will be in a single multi-platform format, as you are looking for. There is a good overview of CMAF here at the time of writing: https://developer.apple.com/documentation/http_live_streaming/about_the_common_media_application_format_with_http_live_streaming
One caveat is that if your content is encrypted then, at the moment, different devices support different encryption modes so you may need to have separate HLS and DASH encrypted media for the medium term, until device support evolves over time.
I am trying to transfer data over USB. These are packets of at most 64 bytes which get sent at a frequency of 4KHz. This gives a bit rate of about 2Mb/s.
The software task that picks up this data runs at 2.5 KHz.
Ideally we never want packets to get there slower than 2.5 KHz (so 2 KHz isn't very good).
Is anyone aware of any generic limits on what USB can achieve?
We are running on a main board with a 1.33 GHz processor running QNX and a daughter board which is a TWR-K60F120M tower system running MQX.
Apart from the details of the system, is USB suited to this kind of data transfer, i.e., high frequency and small packet sizes?
Many Thanks for your help
MG
USB, even at its slowest spec (1.1), can transfer data at up to 12 Mbit/s, provided you use the proper transfer mode. Full-speed USB processes 1000 "frames" per second. The frames carry control and data information, and various portions of each frame are used for various purposes, so the total capacity is "multiplexed" amongst these competing requirements.
Low-speed devices use just a few bytes in a frame to send or receive their data; examples are modems, mice, keyboards, etc. The so-called full-speed devices (in USB 1.1) can approach the 12 Mbit/s rate by using isochronous transfers, meaning they get carved out a nice big chunk of each frame and can send that much data (a fixed amount) each time a frame comes along. This is the mode audio devices use to stream relatively data-intensive music to USB speakers, for example.
If you are able to do a little local buffering, you may be able to use isochronous mode to get your 64-byte packets sent at 1 kHz, but with 2 or 3 periods' worth of data (at your 2.5 kHz rate) in each USB frame transfer. You'd want to reserve 64 x 3 = 192 bytes of data (plus maybe a few extra bytes for control info, such as how many chunks are present: 2 or 3?). Then, as the USB frames come by, you'd put your 2 or 3 chunks of data onto the wire, and the receiving end would get that data, albeit in a more bursty way than smoothly at a precise 2.5 kHz rate. However, this way of transferring the data would more than keep up, even with USB 1.1, and would still use only a fraction of the total available USB bandwidth.
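To illustrate that packing scheme (the struct layout and names below are assumptions, not anything from the USB spec or the original answer), the per-frame payload might look like this, with the count byte telling the receiver whether 2 or 3 device packets arrived in that 1 ms frame:

#include <stdint.h>
#include <string.h>

#define DEV_PACKET_SIZE  64    /* one device packet, produced at 2.5 kHz      */
#define MAX_PACKETS      3     /* at most 3 packets accumulate per 1 ms frame */

/* Payload placed into each 1 ms isochronous frame. */
typedef struct {
    uint8_t count;                                   /* 2 or 3 */
    uint8_t packets[MAX_PACKETS][DEV_PACKET_SIZE];
} frame_payload_t;

/* Called once per USB frame on the device side: drain whatever packets
   have accumulated since the previous frame (2 or 3 of them). Returns
   the number of bytes that actually need to go on the wire. */
uint16_t build_frame(frame_payload_t *out,
                     const uint8_t staged[][DEV_PACKET_SIZE], uint8_t staged_count)
{
    if (staged_count > MAX_PACKETS)
        staged_count = MAX_PACKETS;
    out->count = staged_count;
    for (uint8_t i = 0; i < staged_count; i++)
        memcpy(out->packets[i], staged[i], DEV_PACKET_SIZE);

    return (uint16_t)(1 + (uint16_t)out->count * DEV_PACKET_SIZE);
}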
The problem, as I see it, is whether your system design can tolerate a data delivery rate that is "bursty"... in other words, instead of getting 64 bytes at a rate of 2.5 kHz, you'll be getting (on average) 160 bytes at a 1 kHz rate. You'll actually get something like this: roughly alternating frames carrying 3 packets (192 bytes) and 2 packets (128 bytes) each millisecond, averaging 2.5 packets (160 bytes) per frame.
So I think that, with USB, this will be the best you can do: a somewhat bursty delivery of either 2 or 3 of your device's data packets per 1 ms USB frame.
I am not an expert in USB, but I have done some work with it, including debugging a device-to-host tunneling protocol which used USB "interrupts", so I have seen this kind of implementation on other systems, to solve the problem of matching the USB frame rate to the device's data rate.
I'm using WasapiLoopbackCapture to capture the sound coming from my speakers, sending the data from OnDataAvailable to another device, and attempting to play it there using the WaveOut class and a BufferedWaveProvider, adding samples to the provider every time data arrives from my client. I'm having problems with the sound. The closest I've managed to get it working is:
Not syncing the wave format between the client and the server, just sending the data and adding it to the provider. The problem is that this stutters very badly, even though I checked the buffered size and it held 51 seconds. I even have to increase the buffer size, which eventually overflows anyway.
I tried syncing the wave format and then I just get clicks, but have no problem with the buffer size. I also tried making sure that at least a second was stored in the buffer, but that had zero effect.
If anyone could point me in the right direction that would be great.
Uncompressed audio takes up a lot of space on a network. On my machine the WasapiLoopbackCapture object produces 32-bit (IEEE float) stereo samples at 44100 samples per second, for around 2.8 Mbit/s of raw bandwidth. Once you factor in TCP packet overheads and so on, that's quite a lot of data to be transferring.
The first thing I would suggest, though, is that you plug some profiling code into each step of the process to get an idea of where your bottlenecks are happening. How fast is data arriving from the capture device? How big are your packets? How long does it take to service each call to your OnDataAvailable event handler? How much data are you sending per second across the network? How fast is the data arriving at the client? Figure out where the delays are happening and you'll have a much better idea of what needs fixing.
Try building a simulated server that reads data from a wave file in various WaveFormats (channels, bits per sample and sample rate) and simulates sending that data across the network to the client. You might find that the problem goes away at lower bandwidth. And if bandwidth is the issue, compression might be the solution.
If you're using a single-threaded model, and servicing each OnDataAvailable event takes longer than the recording interval (i.e. you can't keep up with the expected number of calls to OnDataAvailable per second), then you're going to lose data. Multiple threads can help with this - one to get the data from the audio system, another to process and send the data. But you can end up in the same position: losing data because you're not dealing with it quickly enough. When that happens it's handy to know about it, because it indicates a problem in the program. Find out when and where it happens - overflow in the input, processing or output buffers all have different potential causes and need different attention.
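As a rough sketch of that two-thread split (generic C rather than NAudio/C#, and all names and sizes here are assumptions): the capture callback only copies into a shared buffer, a separate sender thread drains it, and a drop counter makes any overflow visible instead of silent.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE ((size_t)1 << 20)  /* 1 MiB of audio backlog */

static uint8_t  ring[RING_SIZE];
static size_t   head, tail;          /* head = write position, tail = read position */
static size_t   dropped_bytes;       /* incremented when the producer overruns       */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Producer: called from the capture callback - do as little as possible here. */
void on_data_available(const uint8_t *data, size_t len)
{
    pthread_mutex_lock(&lock);
    size_t used = head - tail;
    if (used + len > RING_SIZE) {
        dropped_bytes += len;        /* record the overflow instead of blocking */
    } else {
        for (size_t i = 0; i < len; i++)
            ring[(head + i) % RING_SIZE] = data[i];
        head += len;
    }
    pthread_mutex_unlock(&lock);
}

/* Consumer: runs on a separate thread that packages and sends the audio. */
size_t read_for_send(uint8_t *out, size_t max_len)
{
    pthread_mutex_lock(&lock);
    size_t avail = head - tail;
    size_t n = (avail < max_len) ? avail : max_len;
    for (size_t i = 0; i < n; i++)
        out[i] = ring[(tail + i) % RING_SIZE];
    tail += n;
    if (dropped_bytes) {             /* surface the problem rather than hiding it */
        fprintf(stderr, "capture overran the send path: %zu bytes dropped\n", dropped_bytes);
        dropped_bytes = 0;
    }
    pthread_mutex_unlock(&lock);
    return n;
}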
(I am not a native speaker and might not be correct in terms of terminology. Sorry about that.)
I am transmitting data by radio between AVR microcontrollers for personal use and would like the clients to be able to demonstrate that transmitted data originates from one of the authorized clients. This means I do not require non-repudiation, and I am able to pre-define a shared key. I have done some research on different approaches and found that I need some assistance in choosing the one that best meets my requirements.
Please understand that I do not require maximum security. I would simply like to prevent a potential script-kiddie neighbor from breaking in within a matter of hours. If breaking in with average consumer gear would take a number of weeks as of today, I would be OK with that.
The messages I am transmitting are rather small in size (no more than 30 bytes with only a few bytes payload) and the frequency would be no more than 30 messages / min.
One use case is a motion detector sending a message over the air to a processing unit, which then sends another message over the air to a light switch. Please do not focus on the transport; this question is only about data authenticity.
I am running the client / server software (in C) on 20 MHz AVR microcontrollers with very limited Flash and RAM. So I am looking for a solution with small code size and RAM utilization while still providing a high data rate.
I did some performance testing with an MD5 implementation (in C), creating hashes from 20 bytes of data, and found that it might be too slow. I understand that an MD5 implementation by itself would not meet the requirement; I did the testing only to evaluate hashing performance.
Thanks for comments
I would use 128-bit AES to authenticate the messages, i.e. use it as a message authentication code (e.g. a CBC-MAC) rather than a signature in the public-key sense. Here is an excellent source that has already implemented AES for AVR, with full documentation of sizes and cycle counts, including different versions that trade off size against speed: http://avrcryptolib.das-labor.org/trac/wiki/AES
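As a rough sketch of how a block cipher becomes a message tag (this is a plain CBC-MAC over messages padded to a fixed maximum length; aes128_encrypt_block is a placeholder to be mapped onto whatever block-encryption call your AES library provides, and a production design would rather use CMAC):

#include <stdint.h>
#include <string.h>

#define BLOCK   16
#define MSG_MAX 32   /* messages here are <= 30 bytes, so pad to two AES blocks */

/* Placeholder: encrypt one 16-byte block in place with the shared 128-bit key.
   Map this onto whatever your AES library (e.g. AVR-Crypto-Lib) exposes. */
void aes128_encrypt_block(uint8_t block[BLOCK], const uint8_t key[BLOCK]);

/* CBC-MAC over a zero-padded, fixed-maximum-length message. Both sender and
   receiver compute this and compare the 16-byte tag (or an agreed truncation). */
void compute_tag(const uint8_t *msg, uint8_t len,
                 const uint8_t key[BLOCK], uint8_t tag[BLOCK])
{
    uint8_t padded[MSG_MAX] = {0};
    if (len > 30) len = 30;
    memcpy(padded, msg, len);
    padded[len] = 0x80;                  /* unambiguous padding marker */

    memset(tag, 0, BLOCK);               /* zero IV */
    for (uint8_t off = 0; off < MSG_MAX; off += BLOCK) {
        for (uint8_t i = 0; i < BLOCK; i++)
            tag[i] ^= padded[off + i];   /* XOR the next block into the state */
        aes128_encrypt_block(tag, key);  /* then encrypt the chaining state   */
    }
}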
If you are happy with a compromise, calculate the CRC-32 or CRC-64 of the message payload with a secret key appended to the end (of the payload, not of the CRC checksum). Both ends can do this with the same secret key and get the same result. I'm not sure of the exact hackability of this, but it sure isn't zero.
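A minimal sketch of that compromise, assuming a small pre-shared key (the key length and its contents are placeholders; as noted above, a keyed CRC is not a real cryptographic MAC):

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Placeholder pre-shared secret, provisioned on every authorized node. */
static const uint8_t SECRET_KEY[8] = { 0xDE, 0xAD, 0xBE, 0xEF, 0x01, 0x02, 0x03, 0x04 };

/* Plain bitwise CRC-32 (reflected, polynomial 0xEDB88320); no lookup table,
   to keep flash usage small on the AVR. */
static uint32_t crc32(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (uint8_t b = 0; b < 8; b++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return ~crc;
}

/* Tag = CRC-32 over (payload || secret key), as suggested above. */
uint32_t message_tag(const uint8_t *payload, size_t len)
{
    uint8_t buf[30 + sizeof SECRET_KEY];   /* payloads are <= 30 bytes */
    if (len > 30) len = 30;
    memcpy(buf, payload, len);
    memcpy(buf + len, SECRET_KEY, sizeof SECRET_KEY);
    return crc32(buf, len + sizeof SECRET_KEY);
}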
I am running into a real misunderstanding when sampling the iPhone audio with RemoteIO.
On one side, I can do this math: a 44 kHz sample rate means roughly 44 samples per millisecond, which means that if I set the preferred buffer duration to 0.005 with:
float bufferLength = 0.005;
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, sizeof(bufferLength), &bufferLength);
that is a 5 ms buffer, which means roughly 44 * 5 = 220 samples in the buffer on each callback.
But I get 512 samples in inNumberFrames on each callback, and that stays fixed even when I change the buffer length.
Another thing: my callbacks arrive every 11 ms and that does not change either! I need faster callbacks.
So, what is going on here? Who sets what?
I need to pass digital information using FSK modulation, and I have to know the exact buffer size in samples, and how much time of the signal it holds, in order to know how to FFT it correctly.
Is there any explanation for this?
Thanks a lot.
There is no way on current iOS devices (as of iOS 10) to get RemoteIO audio recording buffer callbacks at a faster rate than every 5 to 6 milliseconds. The OS may even decide at runtime to switch to sending even larger buffers at a lower callback rate. The rate you request is merely a request; the OS then decides on the actual rates that are possible for the hardware, the device driver, and the device state. That rate may or may not stay fixed, so your app will just have to deal with varying buffer sizes and rates.
One of your options might be to concatenate each callback buffer onto your own buffer, and chop up this second buffer however you like outside the audio callback. But this won't reduce actual latency.
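Purely as an illustration of that approach (the buffer sizes and names are assumptions, and a real implementation needs a thread-safe, ideally lock-free, ring buffer because the audio callback runs on a real-time thread):

#include <stdint.h>
#include <string.h>

#define ACCUM_CAPACITY  8192    /* samples of headroom between callback and processing */
#define ANALYSIS_SIZE   220     /* e.g. ~5 ms at 44.1 kHz, whatever the FFT expects    */

static float  accum[ACCUM_CAPACITY];
static size_t accum_count;      /* samples currently waiting to be processed */

/* Called from the RemoteIO input callback with whatever
   inNumberFrames the OS decided on (512, 1024, ...). */
void append_samples(const float *samples, size_t n)
{
    if (accum_count + n > ACCUM_CAPACITY)
        accum_count = 0;                       /* crude overflow policy, for the sketch only */
    memcpy(accum + accum_count, samples, n * sizeof *samples);
    accum_count += n;
}

/* Called outside the audio callback: returns 1 and fills block
   whenever a full analysis window is available, 0 otherwise. */
int next_analysis_block(float block[ANALYSIS_SIZE])
{
    if (accum_count < ANALYSIS_SIZE)
        return 0;
    memcpy(block, accum, ANALYSIS_SIZE * sizeof *block);
    accum_count -= ANALYSIS_SIZE;
    memmove(accum, accum + ANALYSIS_SIZE, accum_count * sizeof *accum);
    return 1;
}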
Added: some newer iOS devices allow returning audio unit buffers that are shorter than 5.x ms in duration, usually a power of 2 in size at a 48000 sample rate.
"I need to pass digital information using FSK modulation, and have to know the exact buffer size in samples, and how much time of the signal it holds, in order to know how to FFT it correctly."
It doesn't work that way - you can't mandate that hosts or hardware operate in the exact manner that happens to be optimal for your processing. You can request reduced latency, but only to a point. Audio systems generally pass streaming PCM data in blocks of samples whose size is a power of two, for efficient real-time I/O.
You would create your own buffer for your processing, and report latency (where applicable). You can attempt to reduce the wall-clock latency by choosing another sample rate, or by using a smaller FFT size N.
The audio session property is only a suggested value. You can put in a really tiny number, but it will just go to the lowest value it can. The fastest that I have seen on an iOS device using 16-bit stereo was 0.002902 seconds (~3 ms).
That is 128 samples (L/R stereo frames) per callback, and thus 512 bytes per callback.
So 128/44100 = 0.002902 seconds.
You can check it with:
Float32 bufferDuration; UInt32 size = sizeof(bufferDuration);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration, &size, &bufferDuration);
Could the value 512 in the original post have meant bytes instead of samples?