Strange compression ratio with x265 and JPEG2000 codecs - hevc

I've implemented a picture encoding tool using the x265 and libjasper (JasPer JPEG2000) codecs, and I'm getting strange compression ratios in lossless mode.
I define the compression ratio as
(1 - output_stream_size / initial_picture_size).
Most studies say that HEVC outperforms JPEG2000, but in my case the JPEG2000 ratio is better than the x265 ratio.
I also get the same results with the standalone jasper and x265 command-line tools, so I'm thinking maybe my input parameters for x265 are not correct.
Normally I work with monochrome pictures at 8-bit depth, but I have run the same test with a color picture and I get the same result.
The results below use a color picture from here (Original Images): http://mmspg.epfl.ch/iqa
x265 --psy-rd 1.0 --lossless --input-res 1280x1600 --input-csp i420 --fps 1 --preset veryslow --profile mainstillpicture bike_orig.yuv bike_orig.bin
jasper -f bike_orig.ppm -F bike_orig.jp2 -T jp2
HEVC encoder output trace: (screenshot omitted)
Output file sizes:
Original input: 6144017 bytes
HEVC bitstream: 5637967 bytes
JP2 bitstream: 3261791 bytes
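Plugging these into the ratio above: HEVC gives 1 - 5637967/6144017 ≈ 0.082 (about 8% saved), while JPEG2000 gives 1 - 3261791/6144017 ≈ 0.469 (about 47% saved).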
Codec versions:
Jasper:
1.900.1, libjasper 1.900.1
x265:
x265 [info]: HEVC encoder version 2.0
x265 [info]: build info [Linux][GCC 4.4.7][64 bit] 8bit
x265 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX AVX2 FMA3 LZCNT BMI2
Does anyone have an idea why I'm getting such results?
Thank you.

Thanks for your answer. I had already read this paper to justify my results, but I noticed that the article presents results for video (moving pictures). Can we extend this to still pictures, so that we can compare JPEG2000 against the HEVC Main Still Picture (MSP) profile?

JPEG2000's compression ratio is superior to HEVC's when considering lossless encoding. The main reason seems to be an advantage of the discrete wavelet transform used in JPEG2000 over the discrete cosine transform.
See this for further details (Evaluation: HEVC, H.264/AVC, JPEG 2000 and JPEG LS).
EDIT: Regarding still picture encoding, JPEG2000's advantage is probably even greater, since HEVC can't take advantage of inter-frame video compression.

Related

MPEG-2 vs AVC vs HEVC Inputs

I am trying to work with MediaLive, and I'm supposed to send an RTMP stream from my iOS/Android devices.
I was checking the pricing model of MediaLive and I can see there are multiple kinds of inputs. As I am new to all this (media stuff), I am not sure what they are.
If I send an RTMP stream from my iOS device, what kind of input will it be?
MPEG-2 Inputs
AVC Inputs
HEVC Inputs
These are video compression standards, listed from the oldest (MPEG-2, from the mid-1990s) to the newest (HEVC, from 2013).
They have different features and specifications. Most importantly, the bitrate they need for the same quality level differs significantly: HEVC offers the best bitrate savings, but is also the most complex in terms of hardware/software.
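A rough way to see the difference yourself, assuming an FFmpeg build that includes the mpeg2video, libx264 (AVC), and libx265 (HEVC) encoders; the quality settings below are only ballpark-equivalent:
ffmpeg -i input.mp4 -c:v mpeg2video -q:v 5 out_mpeg2.mpg
ffmpeg -i input.mp4 -c:v libx264 -crf 23 out_avc.mp4
ffmpeg -i input.mp4 -c:v libx265 -crf 28 out_hevc.mp4
Comparing the three output sizes at similar visual quality illustrates the generational bitrate savings.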

What technologies should I use to produce a WebM live stream from a series of in-memory bitmaps?

Boss handed me a challenge that is a bit out of my usual ballpark, and I am having trouble identifying which technologies/projects I should use. (I don't mind, I asked for something 'new'. :)
Job: Build a .NET server-side process that can pick up a bitmap from a buffer 10 times per second and produce/serve a 10 fps video stream for display in a modern HTML5-enabled browser.
What Lego blocks should I be looking for here?
Dave
You'll want to use FFmpeg. Here's the basic flow:
Your App -> FFmpeg STDIN -> VP8 or VP9 video wrapped in WebM
If you're streaming in these images, probably the easiest thing to do is decode each bitmap into raw RGB or RGBA pixels, and then write each frame to FFmpeg's STDIN. You will have to read the first bitmap to determine the size and color information, then execute the FFmpeg child process with the correct parameters. When you're done, close the pipe and FFmpeg will finish up your output file. If you want, you can even redirect FFmpeg's STDOUT to somewhere like blob storage on S3.
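A minimal C# sketch of that STDIN approach, assuming ffmpeg is on the PATH, frames arrive as raw RGB24 at 640x480, and the hypothetical GetNextFrame stands in for reading your buffer:
using System;
using System.Diagnostics;

class WebmPipe
{
    static void Main()
    {
        int width = 640, height = 480, fps = 10;
        var ffmpeg = new Process();
        ffmpeg.StartInfo.FileName = "ffmpeg";
        ffmpeg.StartInfo.Arguments =
            $"-f rawvideo -pix_fmt rgb24 -s {width}x{height} -r {fps} -i - " +
            "-c:v libvpx -b:v 1M -f webm output.webm";
        ffmpeg.StartInfo.UseShellExecute = false;
        ffmpeg.StartInfo.RedirectStandardInput = true;
        ffmpeg.Start();

        var stdin = ffmpeg.StandardInput.BaseStream;
        byte[] frame = new byte[width * height * 3]; // one raw RGB24 frame

        for (int i = 0; i < 100; i++) // e.g. 10 seconds at 10 fps
        {
            // GetNextFrame(frame); // hypothetical: fill `frame` from your buffer
            stdin.Write(frame, 0, frame.Length);
        }

        stdin.Close(); // closing STDIN tells FFmpeg to finalize the file
        ffmpeg.WaitForExit();
    }
}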
If all the images are uploaded at once and then you create the video, it's even easier. Just make a list of the files in-order and execute FFmpeg. When FFmpeg is done, you should have a video.
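For that upload-then-encode case, a single command along these lines would do it (the numbered-PNG filenames are illustrative):
ffmpeg -framerate 10 -i frame_%04d.png -c:v libvpx -b:v 1M output.webm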
One additional bit of information that will help you understand how to build an FFmpeg command line: WebM is a container format. It doesn't do anything but keep track of how many video streams, how many audio streams, what codecs to use for those streams, subtitle streams, metadata (like thumbnail images), etc. WebM is basically Matroska (.mkv), but with some features disabled to make adopting the WebM standard easier for browser makers. Inside WebM, you'll want at least one video stream. VP8 and VP9 are very compatible codecs. If you want to add audio, Opus is a standard codec you can use.
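For example, to mux an Opus audio track into an existing WebM video without re-encoding the video (filenames illustrative):
ffmpeg -i video.webm -i audio.wav -c:v copy -c:a libopus output.webm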
Some resources to get you started:
FFmpeg Documentation (https://ffmpeg.org/documentation.html)
Converting raw images to video (https://superuser.com/a/469517/48624)
VP8 Encoding (http://trac.ffmpeg.org/wiki/Encode/VP8)
FFmpeg Binaries for Windows (https://ffmpeg.zeranoe.com/builds/)

Exact difference between Motion JPEG and JPEG - embedded and multimedia codecs

I have dug through many links and websites but failed to find the answer. I hate to ask this; I know JPEG compression, and that it only makes compressed images. Even Motion JPEG makes compressed images (I-frames). My question is: what is the difference? I am writing an application for a camera that needs to send video, but my camera unit supports both JPEG and MJPEG. What is the advantage of Motion JPEG over JPEG? Thanks for any advice.
I found
V4L2 difference between JPEG and MJPEG pixel formats
http://www.axis.com/in/en/learning/web-articles/technical-guide-to-network-video/video-compression
The first link says that the capture rate can be made higher with MJPEG, but that the size of the images will be the same.
The second link confirms that there is no difference between MJPEG and JPEG compression.
If the above conclusions are true, then I should be able to open an MJPEG frame in any image viewer, but I can't, as stated in the first link.
JPEG is a single-page file format. Motion JPEG is a motion video adaptation of the JPEG standard for still photos. MJPEG treats a video stream as a series of still photos, compressing each frame individually, and uses no interframe compression.
The advantage of using JPEG compression is that it has low complexity while doing a decent job of compression. MJPEG simply extends the single-frame format to a series of frames.
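To make the "series of still photos" point concrete, here is a rough C# sketch that splits a raw MJPEG byte stream into individual JPEG files by scanning for the JPEG SOI (FF D8) and EOI (FF D9) markers. It is illustrative only; an MJPEG-over-HTTP stream also carries multipart boundary headers between frames that a real parser would skip:
using System;
using System.IO;

class MjpegSplit
{
    static void Main(string[] args)
    {
        byte[] data = File.ReadAllBytes(args[0]); // a raw MJPEG capture
        int frame = 0, start = -1;
        for (int i = 0; i + 1 < data.Length; i++)
        {
            if (data[i] == 0xFF && data[i + 1] == 0xD8) start = i;         // SOI marker
            else if (data[i] == 0xFF && data[i + 1] == 0xD9 && start >= 0) // EOI marker
            {
                int len = i + 2 - start;
                byte[] jpeg = new byte[len];
                Array.Copy(data, start, jpeg, 0, len);
                File.WriteAllBytes($"frame_{frame++:D4}.jpg", jpeg); // each frame opens in any image viewer
                start = -1;
            }
        }
        Console.WriteLine($"wrote {frame} JPEG frames");
    }
}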

How to generate progressive JPG with ImageResizer.net

I am using the following code to generate a progressive JPG:
ImageResizer.ImageBuilder.Current.Build(srcFileName, dstFileName, new ResizeSettings("progressive=true"));
I am verifying whether the JPG is progressive using the ImageMagick identify command:
identify -verbose dstfile.jpg
And I get:
Interlace: None
For progressive JPG files generated with Photoshop, identify does report:
Interlace: JPEG
Looking at the documentation, this feature was added in version 3.1 (Dec 7, 2011). I am using version 3.3.2.447.
I am not sure if I am missing a plug-in or additional commands.
Progressive jpeg encoding is only available with the FreeImageEncoder plugin installed - you must also use the &encoder=freeimage command. Neither WIC nor GDI+ offer progressive jpeg encoding, but both WIC and FreeImage support subsampling control.
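With that plugin installed, the original call would presumably become:
ImageResizer.ImageBuilder.Current.Build(srcFileName, dstFileName, new ResizeSettings("progressive=true&encoder=freeimage"));
after which identify -verbose should report Interlace: JPEG.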
Also note that progressive JPEGs require more RAM to decompress on mobile devices, and only offer a speed benefit at larger output sizes (i.e., > 600x600).

How to get face detection information from Panasonic smartHD camera embedded in video stream?

I've got a Panasonic WV SP-306 digital camera. It has a built-in face detection function, and the information can be sent via XML notifications or embedded in the video stream. I'm trying to figure out how to get this information from the MJPEG stream.
My discoveries so far:
I've found the official documentation and SDK here.
There's a PDF document describing the JPEG header format (Panasonic Camera JPEG Format).
According to the document, the header of the JPEG, after the FF FE bytes and the two length bytes, consists of sections. Each section has a 2-byte ID followed by 2 bytes indicating its length, then the body of the section. Three sections are described in the document: ID 0010 (related to motion detection), ID 0011 (time information), and ID 0012 (frame information; it has something about the time of the frame, though I'm not sure what it is for).
When I turn on the face detection feature, a fourth section appears. It has ID 000F and is not described in the documentation.
The sample programs and library reference were not useful either. All I can do with face detection is turn it on or off and set the color of the face detection rectangle. I think all the processing of the face detection data in the stream is done inside the library.
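For reference, a rough C# sketch that walks the FF FE segment described above and dumps each section's ID and bytes, assuming the per-section length field counts only the body and not its 4 header bytes:
using System;

class PanasonicSections
{
    // Walk the first FF FE (COM) segment of the JPEG and print each section.
    static void DumpSections(byte[] jpeg)
    {
        for (int i = 0; i + 3 < jpeg.Length; i++)
        {
            if (jpeg[i] != 0xFF || jpeg[i + 1] != 0xFE) continue;
            // The segment length is big-endian and includes its own 2 bytes.
            int segLen = (jpeg[i + 2] << 8) | jpeg[i + 3];
            int pos = i + 4, end = i + 2 + segLen;
            while (pos + 4 <= end)
            {
                int id = (jpeg[pos] << 8) | jpeg[pos + 1];
                int len = (jpeg[pos + 2] << 8) | jpeg[pos + 3]; // assumed: body length only
                Console.WriteLine($"section 0x{id:X4}, {len} bytes");
                if (id == 0x000F) // the undocumented face detection section
                    Console.WriteLine(BitConverter.ToString(jpeg, pos + 4,
                        Math.Min(len, end - (pos + 4))));
                pos += 4 + len;
            }
            return; // only the first COM segment
        }
    }

    static void Main(string[] args) =>
        DumpSections(System.IO.File.ReadAllBytes(args[0]));
}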
So, my question is: can anybody tell me how to get the face detection data provided by this camera from the stream?