Get the bitmap from decoded output buffers of Android MediaCodec without using any Surface output - android-bitmap

An Android MediaCodec is configured for ByteBuffer output, not Surface output.
How can I save a bitmap from the decoded frame output (the outputBuffer obtained from getOutputBuffer(output_buf_index))?
I have tried bmp.copyPixelsFromBuffer(outputBuffer) after calling outputBuffer.rewind(),
but I am getting the error "java.lang.RuntimeException: Buffer not large enough for pixels" from the copyPixelsFromBuffer() API.
Can anyone please tell me how to get a bitmap from the decoded output buffer when the MediaCodec is not configured with a Surface?
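For what it's worth, that error usually means the Bitmap expects more bytes than the decoder delivered: decoders normally emit YUV (roughly width * height * 1.5 bytes), while an ARGB_8888 Bitmap needs width * height * 4 bytes, so the raw output buffer cannot be fed to copyPixelsFromBuffer() directly. Below is a minimal sketch of one common workaround (not necessarily the route the asker had in mind), assuming API 21+ and a codec configured without a Surface so that MediaCodec.getOutputImage() is available: repack the YUV planes as NV21, compress with YuvImage to JPEG, and decode that into a Bitmap. The plane handling is simplified and ignores non-trivial row/pixel strides.

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.media.Image;
import android.media.MediaCodec;
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

// Sketch: decoded output -> Bitmap without a Surface (API 21+).
Bitmap outputToBitmap(MediaCodec codec, int outputBufIndex) {
    Image image = codec.getOutputImage(outputBufIndex); // YUV_420_888
    int width = image.getWidth();
    int height = image.getHeight();

    // Repack the planes as NV21 (Y plane, then interleaved V/U).
    // Simplified: assumes tightly packed planes; real code must honor row/pixel strides.
    ByteBuffer y = image.getPlanes()[0].getBuffer();
    ByteBuffer u = image.getPlanes()[1].getBuffer();
    ByteBuffer v = image.getPlanes()[2].getBuffer();
    byte[] nv21 = new byte[width * height * 3 / 2];
    y.get(nv21, 0, width * height);
    for (int i = 0; i < width * height / 4; i++) {
        nv21[width * height + 2 * i] = v.get(i);     // V
        nv21[width * height + 2 * i + 1] = u.get(i); // U
    }

    // YuvImage understands NV21: compress to JPEG, then decode into a Bitmap.
    YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream jpeg = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, width, height), 90, jpeg);
    byte[] bytes = jpeg.toByteArray();

    codec.releaseOutputBuffer(outputBufIndex, false);
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
}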

Related

Video creation with Microsoft Media Foundation and Desktop Duplication API

I'm using the Desktop Duplication API (DDA) to capture desktop frames and send them to a server, where those frames should be turned into video with Microsoft Media Foundation (MMF). I want to understand what needs to be done on the MMF side if I use a Source Reader and Sink Writer to render video from the captured frames.
There are two questions:
1) First of all, I can't fully understand whether there is actually a need for a Source Reader with a Media Source if I already receive the video frames from DDA. Can I just send them to the Sink Writer and render the video?
2) As far as I understand, if a Source Reader and Media Source are still needed, the first thing to do is to write my own Media Source that understands the DXGI_FORMAT_B8G8R8A8_UNORM frames captured with DDA. Then I should use the Source Reader and Sink Writer with suitable decoders/encoders and send the media data to the media sinks. Could you please explain in more detail what needs to be done in this case?
Implementing a Source Reader is not necessary in your case; you can go ahead and implement one and it will work, but it isn't required.
Instead, you can directly feed the input buffer captured through Desktop Duplication to the SinkWriter, roughly as below:
CComPtr<IMFAttributes> attribs;
CComPtr<IMFMediaSink> m_media_sink;
IMFSinkWriterPtr m_sink_writer;

MFCreateAttributes(&attribs, 0);
attribs->SetUINT32(MF_LOW_LATENCY, TRUE);
attribs->SetUINT32(MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS, TRUE);

// Output media type: H264 at the desired frame rate and bit rate.
// MediaTypeutput() is a user-defined helper that builds the type; besides the
// subtype it must also set the major type, frame size and frame rate.
IMFMediaTypePtr mediaTypeOut = MediaTypeutput(fps, bit_rate);
mediaTypeOut->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);

// Input media type: uncompressed RGB32 frames coming from Desktop Duplication.
IMFMediaTypePtr mediaTypeIn;
MFCreateMediaType(&mediaTypeIn);
mediaTypeIn->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
mediaTypeIn->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);

// 'stream' is an IMFByteStream pointing at the output (a file or a network stream).
MFCreateFMPEG4MediaSink(stream, mediaTypeOut, nullptr, &m_media_sink);
MFCreateSinkWriterFromMediaSink(m_media_sink, attribs, &m_sink_writer);
m_sink_writer->SetInputMediaType(m_stream_index, mediaTypeIn, nullptr);
m_sink_writer->BeginWriting();

// For every captured frame:
IMFSamplePtr sample;
MFCreateSample(&sample);
sample->AddBuffer(m_buffer);                 // m_buffer wraps the captured DXGI_FORMAT_B8G8R8A8_UNORM frame
sample->SetSampleTime(m_time_stamp);         // in 100-nanosecond units
sample->SetSampleDuration(m_frame_duration);
m_sink_writer->WriteSample(m_stream_index, sample);
Here is a fully working sample based on SinkWriter. It supports both a network sink and a file sink. It captures the desktop through the GDI approach rather than DDA, but DDA is almost the same, and you can indeed obtain better performance with DDA.
I have also uploaded one more sample here, which is in fact based on Desktop Duplication; it uses IMFTransform directly instead and streams the output video as an RTP stream using Live555. I'm able to achieve up to 100 FPS through this approach.
If you follow the SinkWriter approach, you don't have to worry about the color conversion part, as it is taken care of by the SinkWriter under the hood. With IMFTransform you will have to deal with the color conversion yourself, but you get fine-grained control over the encoder.
Here are some more reference links for you.
https://github.com/ashumeow/webrtc4all/blob/master/gotham/MFT_WebRTC4All/test/test_encoder.cc
DXGI Desktop Duplication: encoding frames to send them over the network
Getting green screen in ffplay: Streaming desktop (DirectX surface) as H264 video over RTP stream using Live555
Intel graphics hardware H264 MFT ProcessInput call fails after feeding few input samples, the same works fine with Nvidia hardware MFT
Color conversion from DXGI_FORMAT_B8G8R8A8_UNORM to NV12 in GPU using DirectX11 pixel shaders
GOP setting is not honored by Intel H264 hardware MFT
Encoding a D3D Surface obtained through Desktop Duplication using Media Foundation

NSData to writeImageDataToSavedPhotosAlbum: bytes not exactly the same?

I am trying to save my NSData using writeImageDataToSavedPhotosAlbum.
My NSData is 49894 bytes, and I saved it using writeImageDataToSavedPhotosAlbum. If I then read the saved image's raw data bytes back using ALAssetsLibrary, I get a size of 52161 bytes.
I was expecting both to be the same. Can somebody tell me what is going wrong?
The link below also does not provide a proper solution:
saving image using writeImageDataToSavedPhotosAlbum modifies the actual image data
You cannot and should not rely on the size: firstly because you don't know what the private implementation does, and secondly because the image data is stored together with metadata (and if you don't supply any metadata, a number of default values will be applied).
You can check what metadata the resulting asset contains and see how it differs from the metadata you supplied.
If you need to save exactly the same bytes, and/or you aren't saving valid image files, then you should not use the photo library. Instead, save the data to disk in your app sandbox, either in the Documents or Library directory.
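A minimal sketch of that fallback, assuming the raw bytes are in an NSData called imageData (the file name and variable names here are illustrative):

// Write the exact bytes into the app's Documents directory instead of the photo library.
NSString *documentsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                              NSUserDomainMask, YES) objectAtIndex:0];
NSString *path = [documentsDir stringByAppendingPathComponent:@"original.jpg"];
NSError *error = nil;
BOOL ok = [imageData writeToFile:path options:NSDataWritingAtomic error:&error];
if (!ok) {
    NSLog(@"Failed to write image data: %@", error);
}
// Reading the file back with [NSData dataWithContentsOfFile:path] returns byte-identical data.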

How to grab a video sequence from a Basler IP camera in OpenCV 2.1?

I could not stream from the IP camera because the capture returns null.
It prints this warning:
Error opening file c:\user\vp\ocv\opencv\src\highgui\cvcap_ffmpeg.cpp:452
I hope someone can help me with this.
You cannot retrieve image data from IP cameras directly with OpenCV. You need to access the raw data through the Pylon API (get the char* data) and then copy it into iplimage->imageData.
Then you can process every frame with OpenCV, but don't use ffmpeg; try Video for Windows (VfW) instead, it's easier to use and gives fewer problems with OpenCV.
The error you're getting is caused by ffmpeg failing to find the codec you selected, or by an incorrect configuration.
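A rough sketch of that Pylon-to-IplImage copy, assuming an 8-bit Mono8 frame; the exact grab calls differ between Pylon versions, so treat them as placeholders rather than the answerer's actual code:

#include <cstring>
#include <pylon/PylonIncludes.h>
#include <opencv/cv.h>
#include <opencv/highgui.h>

int main()
{
    using namespace Pylon;

    PylonInitialize();
    CInstantCamera camera(CTlFactory::GetInstance().CreateFirstDevice());
    camera.StartGrabbing();

    CGrabResultPtr result;
    camera.RetrieveResult(5000, result, TimeoutHandling_ThrowException);

    // Copy the raw Pylon buffer into an IplImage row by row (respects widthStep padding).
    int width  = (int)result->GetWidth();
    int height = (int)result->GetHeight();
    IplImage *frame = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 1); // Mono8 assumed
    const char *src = (const char *)result->GetBuffer();
    for (int row = 0; row < height; ++row)
        std::memcpy(frame->imageData + row * frame->widthStep, src + row * width, width);

    cvShowImage("basler", frame);   // frame is now a normal IplImage usable by OpenCV
    cvWaitKey(0);

    cvReleaseImage(&frame);
    camera.StopGrabbing();
    PylonTerminate();
    return 0;
}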

iPad: Captured video w/ AVCam Sample, but it is 1080x720, how do you compress it?

I captured video with the AVCam sample project, but it is huge at 1080x720 resolution. How can I compress it for saving to a web server?
I modified the sample code so that it does not save the video file to the AssetsLibrary in "AVCamCaptureManager.m" "recordingDidFinishToOutputFileURL"; instead I take that output file URL and send it to my web server using ASIHTTP. These video files are huge, and I want to reduce their resolution to 568x320 to cut the file size.
Given the uncompressed file URL, how do I compress it to a smaller file format and/or resolution?
I just saw your question; if it's still of any help, try reducing the quality for the whole session:
session.sessionPreset = AVCaptureSessionPresetMedium;
or
session.sessionPreset = AVCaptureSessionPresetLow;
The first will give smaller files while keeping decent quality, and the latter will give a very small file with the lowest quality available on your device.

How to determine WCF message size at the encoder level

I am building a custom encoder that compresses WCF responses. It is based on the Gzip encoder in Microsoft's WCF samples and this blog post:
http://frenk.wordpress.com/2009/12/04/gzip-compression-wcfsilverlight/
I've got it all working, but now I would like to apply the compression only if the reply exceeds a certain size, and I am not sure how to retrieve the total size of the actual message at the encoder level.
I would need the message size both in the WriteMessage(...) method of the encoder created by the EncoderFactory (so I know whether to compress the message) and in the BeforeSendReply(...) method of the DispatchMessageInspector (so that I can add the "gzip" Content-Encoding header to the response). Requests are always small and are not compressed, so I don't need to worry about those.
Any help appreciated.
Jon.
I think you would do this in two stages. First, write a custom MessageEncoder that encodes the message to a byte[] as normal. Once you have the encoded byte array (and this can be any message encoding format: XML, JSON, binary, whatever), you can examine its size and decide whether to produce a second, compressed byte array; a sketch of that check follows the links below.
Several resources you may find useful:
MSDN WCF Sample Code for a custom compression message encoder
Nicholas Allen's "Build a Custom Message Encoder" blog series. In this series he creates a "counting encoder" that wraps another encoder of any type and lets you know what the encoded message size is (based on the byte[] length). You could probably adapt this to create a "ThresholdCompressionEncoder".
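As a rough illustration of that two-stage check (not taken from the samples above; innerEncoder and CompressionThreshold are hypothetical members of the custom encoder, and the snippet assumes System.IO, System.IO.Compression and System.ServiceModel.Channels), the WriteMessage override could look roughly like this:

public override ArraySegment<byte> WriteMessage(Message message, int maxMessageSize,
                                                BufferManager bufferManager, int messageOffset)
{
    // Stage 1: let the inner encoder produce the bytes as normal (XML, binary, ...).
    ArraySegment<byte> plain = innerEncoder.WriteMessage(message, maxMessageSize,
                                                         bufferManager, messageOffset);

    // Stage 2: compress only when the encoded reply exceeds the threshold.
    if (plain.Count <= CompressionThreshold)
        return plain;

    using (var compressedStream = new MemoryStream())
    {
        using (var gzip = new GZipStream(compressedStream, CompressionMode.Compress, true))
            gzip.Write(plain.Array, plain.Offset, plain.Count);

        byte[] compressed = compressedStream.ToArray();
        bufferManager.ReturnBuffer(plain.Array);

        byte[] buffer = bufferManager.TakeBuffer(compressed.Length + messageOffset);
        Array.Copy(compressed, 0, buffer, messageOffset, compressed.Length);
        return new ArraySegment<byte>(buffer, messageOffset, compressed.Length);
    }
}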
Alternatively, you can try calculating it based on reply.ToString().Length and message.ToString().Length.