Embedded ASS subtitle track in streamed video?

We'd like to build a small, specialized clone of the ill-fated Popcorn Time project, that is to say, a node-webkit frontend for peerflix. The videos we'd like to play are MKV files with embedded ASS subtitle tracks, and we can't get the embedded subtitles to show up: VLC displays them nicely, but HTML5 video players in WebKit-based environments don't, not even in Google Chrome (so it's not a matter of Chromium's reduced codec support).
Now, I'm a bit out of my depth here and don't really know much about these things, but it seems to me that the media engine underneath WebKit simply ignores the ASS subtitle track. Is it because it's ASS? Is it somehow a matter of codecs? Or is it, after all, an HTML5 thing? The HTML5 video "living standard" mentions that "captions can be provided, either embedded in the video stream or as external files using the track element", so the feature is at least planned, though I realize implementation is lacking. However, given that node-webkit uses ffmpeg as its underlying media engine, it seems strange to me that the subtitles are not picked up at all.
Could someone more knowledgeable please explain the problem? And is there anything we could do about it?
Extracting the subtitles beforehand is not an option, though I have been playing with the idea of extracting them on the fly and feeding the result back to the player. I've had some modest success with this, and it looks like it could be done with some effort (roughly along the lines of the sketch below), but I'm really out of my depth here, and the whole idea is pretty contrived anyway.
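For the record, here is a minimal sketch of that on-the-fly idea, assuming an ffmpeg binary is on the PATH: convert the embedded ASS track to WebVTT (which the track element does understand) and attach it to the video. The file names, the track label, and the stream index are placeholders of mine, and the conversion keeps only text and timing, so any ASS styling is lost.

```js
// Sketch only: extract the first embedded subtitle stream to WebVTT
// with an external ffmpeg binary, then attach it as a <track>.
// 'movie.mkv', 'subs.vtt', and the stream index are assumptions.
var spawn = require('child_process').spawn;

var ffmpeg = spawn('ffmpeg', [
  '-y',               // overwrite the output file if it already exists
  '-i', 'movie.mkv',  // with peerflix this could be the local HTTP stream URL
  '-map', '0:s:0',    // first subtitle stream in the container
  '-f', 'webvtt',
  'subs.vtt'
]);

ffmpeg.on('close', function (code) {
  if (code !== 0) {
    return console.error('ffmpeg exited with code ' + code);
  }
  var track = document.createElement('track');
  track.kind = 'subtitles';
  track.label = 'Embedded';  // assumed label
  track.src = 'subs.vtt';
  track.default = true;
  document.querySelector('video').appendChild(track);
});
```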
However, I find it improbable that nobody has run into this problem before, hence this question: is there any way to show embedded (ASS) subtitle tracks in a streamed video in node-webkit?

Not sure if this will help, but according to this page, node-webkit doesn't ship with codecs for patented media formats. They do have a few suggestions on the page, one of which is to compile your own node-webkit.
You could try using Popcorn Time's ffmpegsumo file, which is what I used when I needed MP3 support and Chrome's version didn't work. I don't know whether it supports the ASS subtitle format, though; considering its use, I would think it has to.
Note: I would have posted this as a comment, but unfortunately I don't have commenting privileges yet. A couple of upvotes sure would be nice ;)

Related

video/mp2t browser live streaming support

Is there any way we could live stream "video/mp2t" content in the browser? I'm building a live stream app where some URLs don't have any MIME type specified, but the content is "video/mp2t". I've tried the major HTML5 players (jwplayer, shaka-player, video.js) and none of them seem to support this kind of content out of the box. I've read that it might be possible to transmux on the fly to MP4; does anyone know of an example or some guidelines?
Android and iOS seem to support this, but the browser doesn't. Why is that? Do you think it's something that will be incorporated in the future?
Thanks!
I've read that it might be possible to transmux on the fly to MP4
Yes, you can write the code yourself or base it on another library like mux.js. But as you said, nothing does this out of the box.
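To give a flavour of it, here is a minimal sketch of the mux.js route, assuming a browser with Media Source Extensions: push the MPEG-TS bytes through a Transmuxer and append the resulting fragmented MP4 to a SourceBuffer. The stream URL and the codec string are assumptions, and a real live player would fetch and push chunks continuously rather than a single buffer.

```js
// Sketch only: transmux MPEG-TS to fragmented MP4 with mux.js and
// feed the result to MSE. The URL and codec string are assumptions.
var video = document.querySelector('video');
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function () {
  var sourceBuffer = mediaSource.addSourceBuffer(
    'video/mp4; codecs="avc1.64001f, mp4a.40.2"');
  var transmuxer = new muxjs.mp4.Transmuxer();

  transmuxer.on('data', function (segment) {
    // The first 'data' event carries the init segment; prepend it
    // to the media data before appending to the SourceBuffer.
    var bytes = new Uint8Array(
      segment.initSegment.byteLength + segment.data.byteLength);
    bytes.set(segment.initSegment, 0);
    bytes.set(segment.data, segment.initSegment.byteLength);
    sourceBuffer.appendBuffer(bytes);
  });

  fetch('/live/stream.ts')  // assumed endpoint
    .then(function (res) { return res.arrayBuffer(); })
    .then(function (buf) {
      transmuxer.push(new Uint8Array(buf));
      transmuxer.flush();   // triggers the 'data' event above
    });
});
```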
Android and iOS seem to support this, but the browser doesn't. Why is that?
There are dozens, if not hundreds, of container formats. Supporting them all would be ridiculous. Different companies and different standards bodies make different decisions about what they think their users will require.
Do you think it's something that will be incorporated in the future?
No, I don't.

Can I sync multiple live radio streams with video?

Is it possible to sync multiple live radio streams to a pre-recorded video simultaneously, and vary the volume at defined time indexes throughout? This would ultimately be for an embedded video player.
If so, what tools/programming languages would be best suited for doing this?
I've looked at GStreamer, WebChimera and ffmpeg, but am unsure which route to go down.
This can be done with WebChimera, as it is open source and extremely flexible.
The best possible implementation of this is in QML by modifying the .qml files from WebChimera Player directly with any text editor.
The second best implementation of this is in JavaScript with the Player JS API.
The main difference between these two methods is resource consumption.
The second method, which uses only JavaScript, would require adding one <object> tag for the video and one more for each audio stream you need to play. So for every media source you add to the page, you will spin up a new instance of the plugin.
The first method, done only in QML (mostly; knowing JavaScript is needed here too, as it handles the logic behind QML), would load all your media sources in one plugin instance, with multiple VlcVideoSurface components, each of which has its own Plugin QML API.
The biggest problem I can foresee for what you want to do is the buffering state, as all media sources need to be paused as soon as one video or audio stream starts buffering (see the sketch below). Synchronizing them by time should not be too difficult, though.
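The gist of that pause-on-buffering logic might look like the sketch below. To be clear, the player objects and their callback names here are hypothetical placeholders, not WebChimera's actual JS API; the real event names are in the Player JS API docs.

```js
// Sketch only: pause every source while any one of them is buffering,
// and resume when all are ready again. The `players` objects and their
// onBuffering/onReady callbacks are hypothetical, not WebChimera's API.
function keepInSync(players) {
  function allReady() {
    return players.every(function (p) { return !p.isBuffering; });
  }
  players.forEach(function (p) {
    p.isBuffering = false;
    p.onBuffering = function () {         // assumed "started buffering" hook
      p.isBuffering = true;
      players.forEach(function (other) { other.pause(); });
    };
    p.onReady = function () {             // assumed "buffer filled" hook
      p.isBuffering = false;
      if (allReady()) {
        players.forEach(function (other) { other.play(); });
      }
    };
  });
}
```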
The WebChimera Wiki is a great place to start; it has lots of demos and examples. And at WebChimera Questions we've helped developers modify WebChimera Player to suit even the craziest of needs. :)

Playing MIDI files in VB.NET

How can I play MIDI files with VB.NET? I tried using WAV files, but they are too big. Any help?
Look at this article; I used it before and it works:
http://www.codeproject.com/Articles/8506/Simple-VB-NET-MIDI-Wave-Play-Class
Just copy and paste that code into your project, create a variable which holds your MIDI file, and call the play method.
You can also try this (not sure about it):
http://support.microsoft.com/kb/141756
Using MIDI files can be a good idea in regard to size, but IMO a horrible idea when it comes to actual sound (or lack thereof :) ). You will find users who make music too and have a better synth set up or connected to their system, but making users fire up their MIDI instruments just to hear a MIDI track in your software is a bad idea if it's unexpected.
Most users, though, are stuck with Microsoft's built-in wavetable synth, which is a torture instrument (pun intended) and should probably not be used ;)
Why not consider compressing your wave data instead, using MP3 or some other excellent codec such as AAC or Ogg Vorbis?
This will reduce the data to at least a tenth of its original size, and unless you are providing a whole album, that should be manageable.
You can find various ways to do this, from simple approaches such as this one using the Media Player, to more low-level ones such as this one, which decodes the MP3 file itself.
Also take a look at SlimDX.

Technique to identify a video in iOS camera roll

I'm trying to solve a specific problem (though this could benefit others) which, from googling around, doesn't seem to have a definitive solution. There are probably several partial solutions out there; I'd like to find the best of those (or a combination) that does the trick most of the time.
My specific example is: users in my app can send videos to each other and I'm going to allow them to save videos they have received to their camera roll. I would like to prevent them from forwarding the video on to others. I don't need to identify a particular video, just that it was originally saved from my app.
I have achieved a pretty good solution for images by saving some EXIF metadata that I can use to identify that the image was saved from my app and reject any attempts to forward it on, but the same solution doesn't work for videos.
I'm open to any ideas. So far I've seen suggested:
- Using ALAssetRepresentation in some way to save a filename and then comparing it when reading the video back in, but I've read that upgrading iOS wipes these names out.
- Saving metadata. Not possible.
- MD5. I suspect iOS would modify the video in some way on saving, which would invalidate this.
I've also had the thought of appending a frame or two to the start of the video, perhaps an image that is a solid block of colour, magenta for example. Then, when reading the video in, grab the first frame and do some kind of processing to identify it. Is this practical, or even possible?
What are your thoughts on these, and/or can you suggest anything better?
Thanks!
Steven
There are two approaches you could try. Both solutions only work under iOS 5.
1) Save the URL returned by [ALAssetRepresentation url]. Under iOS 5 this URL contains a CoreData objectID and should be persistent.
2) Use the customMetadata property of ALAsset to append custom info to any asset you saved yourself.
Cheers,
Hendrik

Providing data as needed for QTMovie

I understand that if I wanted to provide a QTMovie with data from an arbitrary source as it is needed, I would probably have to deal with the QTDataReference class, but unfortunately my experience with anything similar is limited to an audio data callback with audio buffers on the iPhone.
If I initialized the QTDataReference with some NSMutableData, how would I know when it needs more data? And furthermore, how would I "clear" already-played movie data and provide it again when the user seeks back (I want them to be able to)?
Basically, the movie data I want to provide would ultimately come from a set of files (which are really just one movie file split up) that become available sequentially during playback. This part is crucial.
Anybody who gets me going in the right direction can get beta access to the first Mac OS X Usenet movie streamer ;)
You likely can't do this using QTKit. Start with the QuickTime Overview and go from there.
The first idea that occurs to me is creating a QuickTime reference movie that directs QuickTime to look for its media in the files you expect to download (or even directly at their remote URLs), then siccing QuickTime Player/QTMovieView on it and hoping that QuickTime's support for progressive download patches over any rough spots.