Streaming an HEVC Annex B format file - hevc

I have a file in HEVC Annex B format, and I want to simulate streaming it without any wrapper or container. Is that possible? Which decoder and player are needed?

Yes, it is possible. A raw HEVC Annex B file just needs an HEVC decoder.
VLC supports this format, so you can try playing the file there.
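If you want to sanity-check that your file really is a raw Annex B elementary stream before handing it to a player, you can scan it for NAL start codes. Below is a minimal Python sketch (the function name is my own, not from any library); the type values follow the HEVC spec, where 32 = VPS, 33 = SPS, 34 = PPS.

```python
def hevc_nal_unit_types(data: bytes) -> list[int]:
    """Scan an HEVC Annex B byte stream and return the NAL unit types.

    Annex B prefixes each NAL unit with a 0x000001 start code (the 4-byte
    0x00000001 form ends in the same pattern). The HEVC NAL header is two
    bytes; nal_unit_type sits in bits 1..6 of the first header byte.
    """
    types = []
    i = 0
    while i + 3 < len(data):
        if data[i] == 0 and data[i + 1] == 0 and data[i + 2] == 1:
            types.append((data[i + 3] >> 1) & 0x3F)  # nal_unit_type
            i += 4
        else:
            i += 1
    return types

# Example: a synthetic stream with VPS (32), SPS (33) and PPS (34) headers.
stream = bytes([0, 0, 0, 1, 0x40, 0x01,
                0, 0, 0, 1, 0x42, 0x01,
                0, 0, 1, 0x44, 0x01])
print(hevc_nal_unit_types(stream))  # [32, 33, 34]
```

If the scan finds VPS/SPS/PPS units near the start of the file, a decoder should have everything it needs to start playback mid-stream.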

Related

Is it possible to set Windows.Media.SpeechSynthesis stream format as in SAPI 5.3?

I'm using Windows.Media.SpeechSynthesis (C++/WinRT) to convert text to an audio file. Previously I was using SAPI, where it was possible to set the audio format when binding to a file via SPBindToFile(...) before speaking.
Is there any similar method in Windows.Media.SpeechSynthesis? It seems it is only possible to get a 16 kHz, 16-bit mono wave stream; is that right?
Does SpeechSynthesisStream already contain a real audio stream after speech synthesis, or does it hold some precalculated raw data, with the actual encoding happening when its data is accessed (playback on a device, or copying to another, non-speech-specific stream)?
Thank you!
I think it should be possible to control the speech synthesis stream format somehow.
The WinRT synthesis engines output 16 kHz, 16-bit mono data. There isn't any resampling layer to change the format.

MPEG-2 vs AVC vs HEVC Inputs

I am trying to work with MediaLive, and I'm supposed to send an RTMP stream from my iOS/Android devices.
I was checking the pricing model of MediaLive and I can see there are multiple kinds of input. As I am new to all this (media stuff), I am not sure what they are.
If I send RTMP stream from my iOS device, what kind of input will it be?
MPEG-2 Inputs
AVC Inputs
HEVC Inputs
These are the video compression standards you have listed, from the oldest (MPEG-2, standardized in the mid-1990s) to the newest (HEVC, 2013).
They have different features and specifications. Most importantly, the bitrate they need to deliver the same quality level differs significantly: HEVC is the best in terms of bitrate savings, but also the most complex in terms of hardware/software. As for your first question, an RTMP stream from an iOS or Android device will typically carry AVC (H.264) video, since that is the codec RTMP commonly transports.

What technologies should I use to produce a WebM live stream from a series of in-memory bitmaps?

Boss handed me a challenge that is a bit out of my usual ballpark, and I am having trouble identifying which technologies/projects I should use. (I don't mind, I asked for something 'new' :)
Job: Build a .NET server-side process that can pick up a bitmap from a buffer 10 times per second and produce/serve a 10fps video stream for display in a modern HTML5 enabled browser.
What Lego blocks should I be looking for here?
Dave
You'll want to use FFmpeg. Here's the basic flow:
Your App -> FFmpeg STDIN -> VP8 or VP9 video wrapped in WebM
If you're streaming these images in, probably the easiest thing to do is decode each bitmap into raw RGB or RGBA pixel data, and then write each frame to FFmpeg's STDIN. You will have to read the first bitmap to determine the size and color information, then execute the FFmpeg child process with the correct parameters. When you're done, close the pipe and FFmpeg will finish up your output file. If you want, you can even redirect FFmpeg's STDOUT to somewhere like blob storage on S3.
If all the images are uploaded at once and then you create the video, it's even easier. Just make a list of the files in-order and execute FFmpeg. When FFmpeg is done, you should have a video.
One additional bit of information that will help you understand how to build an FFmpeg command line: WebM is a container format. It doesn't do anything but keep track of how many video streams, how many audio streams, what codecs to use for those streams, subtitle streams, metadata (like thumbnail images), etc. WebM is basically Matroska (.mkv), but with some features disabled to make adopting the WebM standard easier for browser makers. Inside WebM, you'll want at least one video stream. VP8 and VP9 are very compatible codecs. If you want to add audio, Opus is a standard codec you can use.
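The flow above can be sketched as follows. This is a minimal illustration in Python (the function names are my own, and it assumes an `ffmpeg` binary on PATH); the command-line arguments are the portable piece, and a .NET implementation would launch ffmpeg as a child process with the same arguments.

```python
import subprocess

def webm_pipe_cmd(width: int, height: int, fps: int = 10) -> list[str]:
    """Build an ffmpeg command that reads raw RGB24 frames on stdin and
    emits a VP8-in-WebM stream on stdout."""
    return [
        "ffmpeg",
        "-f", "rawvideo",            # input: headerless pixel data
        "-pix_fmt", "rgb24",         # 3 bytes per pixel
        "-s", f"{width}x{height}",   # frame size must be declared up front
        "-r", str(fps),              # input frame rate
        "-i", "-",                   # read frames from stdin
        "-c:v", "libvpx",            # VP8 encoder
        "-f", "webm",                # WebM container
        "-",                         # write the muxed stream to stdout
    ]

def stream_frames(frames, width, height, fps=10, out_path="out.webm"):
    """Pipe an iterable of raw RGB24 frames (bytes) through ffmpeg.
    Assumes ffmpeg is installed and on PATH."""
    with open(out_path, "wb") as out:
        proc = subprocess.Popen(webm_pipe_cmd(width, height, fps),
                                stdin=subprocess.PIPE, stdout=out)
        for frame in frames:
            proc.stdin.write(frame)  # each frame: width*height*3 bytes
        proc.stdin.close()           # EOF tells ffmpeg to finalize the output
        proc.wait()
```

From C#, the equivalent is a `System.Diagnostics.Process` started with `RedirectStandardInput = true`, writing each frame's bytes to `process.StandardInput.BaseStream`.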
Some resources to get you started:
FFmpeg Documentation (https://ffmpeg.org/documentation.html)
Converting raw images to video (https://superuser.com/a/469517/48624)
VP8 Encoding (http://trac.ffmpeg.org/wiki/Encode/VP8)
FFmpeg Binaries for Windows (https://ffmpeg.zeranoe.com/builds/)

How to Get Audio sample data from mp3 using NAudio

I am trying to read an MP3 file into one large array of audio samples.
I want the audio samples to be floats.
NAudio.Wave.WaveStream pcm = NAudio.Wave.WaveFormatConversionStream.CreatePcmStream(new NAudio.Wave.Mp3FileReader(OFD.FileName));
So far I get the PCM stream and can play it back fine, but I don't know how to read the raw sample data out of the stream.
Use AudioFileReader. This implements ISampleProvider so the Read method allows you to read directly into a float array of samples.
Alternatively use the ToSampleProvider method after your Mp3FileReader. You don't need to use WaveFormatConversionStream, since Mp3FileReader (and AudioFileReader) already decompress the MP3 frames.

Convert from AIFF to AAC using Apple API only

I am creating a movie file using QTMovie from QTKit and everything's working nicely. The only problem I have is that the audio in the resulting MOV file is just the raw AIFF hence the file size is larger than I'd like. I've seen plenty about third party libraries capable of encoding to AAC but are there any Apple APIs which I can call to do this job? I don't mind converting the AIFF to AAC prior to adding it to my QTMovie or having the encoding done as part of writing the QTMovie to disk.
This was actually easily achievable using QTKit. I just needed to set the QTMovieExportType to 'mpg4' and QTMovieExport to be YES when calling writeToFile:withAttributes:.