I have looked through many links and websites but failed to get the answer. I hate to ask this: I know JPEG compression, and it produces only compressed images. Even Motion JPEG produces compressed images (I-frames). My question is: what is the difference? I am writing an application for a camera which needs to send video, but my camera unit supports both JPEG and MJPEG. What is the advantage of Motion JPEG over JPEG? Thanks for any advice.
I found
V4L2 difference between JPEG and MJPEG pixel formats
http://www.axis.com/in/en/learning/web-articles/technical-guide-to-network-video/video-compression
The first link suggests that the capture rate can be made higher with MJPEG, but that the size of the images will be the same.
The second link confirms that there is no difference between MJPEG and JPEG compression.
If the above conclusions are true, then I should be able to open an MJPEG frame in any image viewer, but I can't, as stated in the first link.
JPEG is a single-image file format. Motion JPEG is a motion-video adaptation of the JPEG standard for still photos. MJPEG treats a video stream as a series of still photos, compressing each frame individually, and uses no interframe compression.
The advantage of JPEG compression is that it has low complexity while doing a decent job of compressing. MJPEG simply extends the single-frame format to a series of frames.
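Because every MJPEG frame is a complete JPEG, a bare stream of concatenated frames can be split apart by scanning for the JPEG markers. Here is a minimal Python sketch under that assumption (the input file name is a placeholder, and real MJPEG transports such as multipart HTTP or AVI add their own framing, which this ignores):

    # Split a raw MJPEG byte stream into individual JPEG files by scanning
    # for the JPEG start-of-image (FF D8) and end-of-image (FF D9) markers.
    data = open("stream.mjpeg", "rb").read()

    frame_no = 0
    start = data.find(b"\xff\xd8")               # SOI marker of the first frame
    while start != -1:
        end = data.find(b"\xff\xd9", start)      # EOI marker closing this frame
        if end == -1:
            break
        with open(f"frame_{frame_no:04d}.jpg", "wb") as f:
            f.write(data[start:end + 2])         # keep the 2-byte EOI marker
        frame_no += 1
        start = data.find(b"\xff\xd8", end)      # SOI of the next frame

Each extracted frame should open in an ordinary image viewer, which is exactly the "series of still photos" point above.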
I want to know why, when I normally transmit 320x240 resolution video, my default uplink traffic is about 1.5 Mbps;
when I modify the SDP bandwidth limit, for example to 500 kbps, my video width and height are still 320x240 and the frame rate is not reduced;
so what exactly accounts for the reduced uplink traffic?
WebRTC uses so-called "lossy perceptual video compression." That is, the video is capable of being compressed into bit streams of various bandwidths: in your case, 1.5 Mbps and 0.5 Mbps. It's like JPEG's quality parameter: in JPEG, adjusting that parameter changes the size of the image file. In video compression, instead of a quality parameter you request a bitrate.
When a lower-bitrate video stream is decompressed, it's a less faithful representation of the original. If you know what to look for, you can see various compression artifacts in the decompressed imagery: blockiness, "mosquitoes" around objects, and so forth.
Streaming video and DVD video programs (cinema) use high bitrates to keep these effects below the threshold of perception at 1080p or 4K resolution.
In your SIF (320x240) resolution case, your decoded 0.5 Mbps video has more artifacts in it than your 1.5 Mbps video. But because the resolution is relatively low, it will take some looking to find those artifacts. If they don't annoy you or your users, you can conclude that 0.5 Mbps is fine for your application. Long experience suggests that you should succeed just fine with that bitrate and resolution. You can even try 250 kbps.
Reducing the frame rate doesn't proportionally save bandwidth; most compressed video frames represent differences from the previous frame.
Lower bitrates are better for mobile devices; they save power and your users' data plans.
If you want to see exaggerated compression artifacts and what they look like, set the bitrate down to 125 kbps or lower.
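If you want to examine those artifacts offline, one way is to re-encode the same clip at several bitrates and compare the results. A rough sketch using Python and the ffmpeg command-line tool (assuming ffmpeg is installed; the input file name is a placeholder):

    import subprocess

    # Encode the same clip at several target bitrates with VP8, one of the
    # codecs WebRTC uses, so the outputs can be compared side by side.
    for bitrate in ("1500k", "500k", "125k"):
        subprocess.run([
            "ffmpeg",
            "-i", "input.mp4",      # hypothetical source clip
            "-c:v", "libvpx",       # VP8 encoder
            "-b:v", bitrate,        # requested target bitrate
            "-s", "320x240",        # SIF resolution, as in the question
            f"out_{bitrate}.webm",
        ], check=True)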
I am trying to work with MediaLive, and I'm supposed to send an RTMP stream from my iOS/Android devices.
I was checking the pricing model of MediaLive and I can see there are multiple kinds of input. As I am new to all this (media stuff), I am not sure what they are.
If I send RTMP stream from my iOS device, what kind of input will it be?
MPEG-2 Inputs
AVC Inputs
HEVC Inputs
These are the video compression standards you have listed, from the oldest (MPEG-2, from the mid-1990s) to the newest (HEVC, from 2013), with AVC (2003) in between.
They have different features and specifications. Most importantly, the bitrate they output at the same quality level differs significantly. HEVC is the best in terms of bitrate savings, and also the most complex in terms of hardware/software support. In practice, an RTMP stream from a mobile device almost always carries AVC (H.264) video, so it would count as an AVC input.
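To get a feel for the bitrate difference, you can encode the same clip with an AVC and an HEVC encoder at the same constant-quality setting and compare file sizes. A Python sketch using ffmpeg (file names are placeholders; CRF values are not perfectly comparable across codecs, but the size gap is illustrative):

    import os
    import subprocess

    # Encode the same clip with AVC/H.264 and HEVC/H.265 at the same
    # constant-quality (CRF) setting, then print the resulting file sizes.
    for codec in ("libx264", "libx265"):
        out = f"out_{codec}.mp4"
        subprocess.run(
            ["ffmpeg", "-i", "input.mp4", "-c:v", codec, "-crf", "23", out],
            check=True,
        )
        print(codec, os.path.getsize(out), "bytes")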
My boss handed me a challenge that is a bit out of my usual ballpark, and I am having trouble identifying which technologies/projects I should use. (I don't mind; I asked for something 'new'.)
Job: Build a .NET server-side process that can pick up a bitmap from a buffer 10 times per second and produce/serve a 10 fps video stream for display in a modern HTML5-enabled browser.
What Lego blocks should I be looking for here?
Dave
You'll want to use FFmpeg. Here's the basic flow:
Your App -> FFmpeg STDIN -> VP8 or VP9 video wrapped in WebM
If you're streaming these images in, probably the easiest thing to do is decode each bitmap into raw RGB or RGBA pixels and then write each frame to FFmpeg's STDIN. You will have to inspect the first bitmap to determine the size and color format, then execute the FFmpeg child process with the correct parameters. When you're done, close the pipe and FFmpeg will finish up your output file. If you want, you can even redirect FFmpeg's STDOUT to somewhere like blob storage on S3 or something.
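The answer is aimed at .NET, but the pipe pattern is the same in any language. A minimal Python sketch of that flow (frame size, rate, and output name are placeholder values; the test frame stands in for your real bitmaps):

    import subprocess

    WIDTH, HEIGHT, FPS = 640, 480, 10    # hypothetical frame geometry

    # Spawn FFmpeg reading raw RGBA frames from STDIN and writing VP8-in-WebM.
    ffmpeg = subprocess.Popen(
        [
            "ffmpeg",
            "-f", "rawvideo",            # input is headerless raw frames
            "-pix_fmt", "rgba",          # 4 bytes per pixel: R, G, B, A
            "-s", f"{WIDTH}x{HEIGHT}",   # frame size, declared up front
            "-r", str(FPS),              # input frame rate
            "-i", "-",                   # read the frames from STDIN
            "-c:v", "libvpx",            # VP8 encoder
            "-b:v", "1M",                # target bitrate
            "out.webm",
        ],
        stdin=subprocess.PIPE,
    )

    # Feed frames; a solid gray frame stands in for a real bitmap.
    frame = bytes([128, 128, 128, 255]) * (WIDTH * HEIGHT)
    for _ in range(FPS * 5):             # five seconds of video
        ffmpeg.stdin.write(frame)

    ffmpeg.stdin.close()                 # lets FFmpeg finalize the output
    ffmpeg.wait()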
If all the images are uploaded at once and then you create the video, it's even easier. Just make a list of the files in order and execute FFmpeg. When FFmpeg is done, you should have a video.
One additional bit of information that will help you understand how to build an FFmpeg command line: WebM is a container format. It doesn't do anything but keep track of how many video streams, audio streams, and subtitle streams there are, which codecs they use, metadata (like thumbnail images), and so on. WebM is basically Matroska (.mkv), but with some features disabled to make adopting the WebM standard easier for browser makers. Inside WebM, you'll want at least one video stream. VP8 and VP9 are widely supported codecs. If you want to add audio, Opus is a standard codec you can use.
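To make the container/codec split concrete, here is a sketch of wrapping an existing VP8 video stream and an Opus audio track in one WebM file without re-encoding (file names are placeholders):

    import subprocess

    # Mux existing streams into a WebM container; "-c copy" means the
    # streams are copied as-is, so the container only wraps them.
    subprocess.run([
        "ffmpeg",
        "-i", "video.webm",   # VP8 video stream
        "-i", "audio.opus",   # Opus audio stream
        "-c", "copy",
        "muxed.webm",
    ], check=True)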
Some resources to get you started:
FFmpeg Documentation (https://ffmpeg.org/documentation.html)
Converting raw images to video (https://superuser.com/a/469517/48624)
VP8 Encoding (http://trac.ffmpeg.org/wiki/Encode/VP8)
FFmpeg Binaries for Windows (https://ffmpeg.zeranoe.com/builds/)
I'm trying to create a basic algorithm that does packet loss concealment for Core Audio. I simply want to replace the missing data with silence. In the book Learning Core Audio, the author says that in lossless PCM, zeros mean silence. I was wondering: if I'm playing VBR (i.e., compressed) data, would putting in zeros suffice for silence as well?
In my existing code, when I plug zeros into the audio queue, it suddenly jams (i.e., it no longer frees up consumed data in the audio queue callback), and I'm wondering why.
PCM is the raw encoded sample data. All zeros (when using signed data for samples) is indeed silence. (In fact, a constant run of any value is silence, but such a DC offset has the potential to damage your amplifier and/or speakers if it isn't filtered out.)
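A minimal sketch of the "zeros are silence" point for 16-bit signed PCM (the sample rate, channel count, and file name are placeholders):

    import array
    import wave

    RATE, CHANNELS, SECONDS = 44100, 1, 1

    # One second of 16-bit signed PCM silence: every sample is 0.
    silence = array.array("h", [0] * (RATE * CHANNELS * SECONDS))

    with wave.open("silence.wav", "wb") as wav:
        wav.setnchannels(CHANNELS)
        wav.setsampwidth(2)              # 2 bytes per sample = 16-bit
        wav.setframerate(RATE)
        wav.writeframes(silence.tobytes())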
When you compress with a lossy codec, you enter a digital format where it is not trivial to just add silence. Think of trying to add null bytes to the end of a file stored inside a ZIP archive: it isn't as simple as inserting them at an arbitrary spot in the ZIP file.
If you want to add silence to a compressed file, you must encode that silence with the appropriate codec. Then you have to fit it into the bitstream, which is also not trivial. Usually the stream is broken up into frames, but in some formats you can't even split on those frames. MP3 and AAC use a bit reservoir, where unused space in prior frames can be used to encode more complicated frames later on, making splitting the file very difficult.
Is there any way to compress the video data while capturing it from the camera? There is a huge difference in size between video taken with the camera and video from the photo library. I want to save some memory while taking video with the camera. Is there any way?
Thanks
I filed a bug report with Apple on this matter; you could do the same, as it seems the more reports there are from developers, the faster they fix things up.
No matter what videoQuality level you set on the UIImagePickerController, it always defaults to High when recording from the camera. Videos chosen from the user's library respect your choice and compress really well with the hardware H.264 encoder present on the 3GS and up.
You can use FFmpeg to get video directly from the camera, compress it, and store it to a file.
Also, FFmpeg is a standalone console application, and it doesn't need any other DLLs.
Of course, this isn't Objective-C, but it can be very useful in your case.
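As a concrete illustration of that capture-and-compress flow, here is a hypothetical sketch driving ffmpeg from Python on Linux; the input format and device name are platform-specific (for example, avfoundation on macOS), so treat them as placeholders:

    import subprocess

    # Capture ten seconds from a V4L2 camera, compress with H.264,
    # and store the result in a file.
    subprocess.run([
        "ffmpeg",
        "-f", "v4l2",           # Video4Linux2 capture input
        "-i", "/dev/video0",    # camera device node
        "-t", "10",             # record ten seconds
        "-c:v", "libx264",      # H.264 software encoder
        "-b:v", "800k",         # target bitrate
        "capture.mp4",
    ], check=True)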