Can I sync multiple live radio streams with video? - html5-video

Is it possible to sync multiple live radio streams to a pre-recorded video simultaneously and vary the volume at defined time-indexes throughout? Ultimately for an embedded video player.
If so, what tools/programming languages would be best suited for doing this?
I've looked at Gstreamer, WebChimera and ffmpeg but am unsure which route to go down.

This can be done with WebChimera, as it is open source and extremely flexible.
The best possible implementation of this is in QML by modifying the .qml files from WebChimera Player directly with any text editor.
The second best implementation of this is in JavaScript with the Player JS API.
The main difference between these two methods is resource consumption.
The second, JavaScript-only method would require adding one <object> tag for the video, plus one more for each audio stream you need to play. So for every media source you add to the page, you will be spawning a new instance of the plugin.
The first method, done in QML (knowing some JavaScript is needed here too, as it handles the logic behind QML), would load all your media sources in one plugin instance, with multiple VlcVideoSurface components, each of which has its own Plugin QML API.
The biggest problem I can foresee for what you want to do is handling the buffering state, as all media sources need to be paused as soon as any one video/audio source starts buffering. Synchronizing them by time should not be too difficult, though.
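To make the JavaScript route more concrete, here is a rough sketch of a sync loop, assuming one <object> element per media source. The member names used here (input.time, input.state, audio.volume, playlist.pause/play) follow the classic VLC web-plugin style that WebChimera builds on; treat them as assumptions and verify the exact API against the wiki. The element ids, volume cues, and the 250 ms polling interval are made up for illustration.

var video = document.getElementById('video');        // <object> for the video
var radios = [
    document.getElementById('radio1'),               // one <object> per live stream
    document.getElementById('radio2')
];
var all = [video].concat(radios);

// volume automation: [video time in seconds, radio index, volume 0-100]
var volumeCues = [
    [10, 0, 80], [10, 1, 20],
    [45, 0, 20], [45, 1, 80]
];

setInterval(function () {
    var t = video.input.time / 1000;                 // current video time in seconds

    // apply every volume cue whose time index has been reached
    volumeCues.forEach(function (cue) {
        if (t >= cue[0]) radios[cue[1]].audio.volume = cue[2];
    });

    // pause everything while any single source is buffering (state 2 in libvlc)
    var buffering = all.some(function (p) { return p.input.state === 2; });
    all.forEach(function (p) {
        if (buffering && p.playlist.isPlaying) p.playlist.pause();
        else if (!buffering && !p.playlist.isPlaying) p.playlist.play();
    });
}, 250);

The same logic carries over almost unchanged to the QML route, where each VlcVideoSurface exposes equivalent properties.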
WebChimera Wiki is a great place to start, it has lots of demos and examples. And at WebChimera Questions we've helped developers modify WebChimera Player to suit even the craziest of needs. :)

Related

May I use video.js as a minimalistic polyfill?

I'd like a library that offers a JavaScript API to control a player and manage its events, nothing more.
The GUI would (optionally) not be part of the library. I've tried to set up a player without controls, but even in that case the GUI is created in the DOM, just not shown.
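For reference, here is roughly what I tried; the videojs() call and the controls option are standard video.js setup, and 'my-video' is the id of a <video> element in my page:

var player = videojs('my-video', { controls: false });

player.on('ended', function () {      // API-only event handling works fine
    console.log('playback finished');
});
player.play();

// ...but inspecting the DOM still shows the (hidden) control-bar markup.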
I can see two benefits: I could reuse my previous GUI more easily, and the video.js script would be smaller.
But this also raises a question about the nature of the polyfill. Adding a track to manage subtitles without an interface to render them would not make a true polyfill. There would be two kinds of polyfill: the first would just let the browser play the video, and the second would create a consistent graphical interface to manage all the player's features.
The answerable question is: does video.js offer a way to provide only a JS API (and modify the DOM when Flash is required)?
If there is no such feature, is it an option for the future (and why not)?
Thank you all!

Embedded ASS subtitle track in streamed video?

We'd like to build a small specialized clone of the ill-fated popcorn-time project, that is to say a node-webkit frontend for peerflix. The videos we'd like to play are mkv files that have embedded ASS subtitle tracks, and we can't seem to get the embedded subtitles to show up: while VLC nicely shows them, html5 video players in webkit-based things don't, not even in Google Chrome (so it's not a matter of Chromium's reduced codec support).
Now, I'm a bit out of my depth here, I don't really know much about these things, but it seems to me the media engine underneath webkit just ignores the ASS subtitle track. Is it because it's ASS? Is it a matter of codecs somehow? Or is it, after all, an html5 thing? The html5 video "living standard" mentions that "captions can be provided, either embedded in the video stream or as external files using the track element" - so the feature is at least planned, but I do realize that implementation is lacking. However, given that node-webkit uses ffmpeg as the underlying engine, it seems strange to me that the subtitles are not picked up at all.
Could someone more knowledgeable please advise us on the problem? Also, is there anything we could do about it?
Extracting the subtitles beforehand is not an option, though I have been playing with the idea of extracting them on the fly and feeding that stream back to the player (sketched below). I had some modest success with this, and it looks like it could be done with some effort, but I'm really out of my depth here, and the whole idea is pretty contrived anyway.
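The simplified, file-based core of that experiment looked roughly like this. It assumes an ffmpeg binary on the PATH, and the file names are placeholders; it converts the first subtitle stream to WebVTT, since a <track> element can consume that (the ASS styling is lost in the conversion, of course):

var spawn = require('child_process').spawn;   // node-webkit gives us require()

function extractSubtitles(videoPath, vttPath, done) {
    var ff = spawn('ffmpeg', [
        '-i', videoPath,
        '-map', '0:s:0',      // first embedded subtitle stream (the ASS track)
        '-f', 'webvtt',       // convert to WebVTT so <track> can use it
        '-y', vttPath
    ]);
    ff.on('close', function (code) {
        done(code === 0 ? null : new Error('ffmpeg exited with ' + code));
    });
}

extractSubtitles('movie.mkv', 'movie.vtt', function (err) {
    if (err) return console.error(err);
    var track = document.createElement('track');
    track.kind = 'subtitles';
    track.src = 'movie.vtt';
    track.default = true;
    document.querySelector('video').appendChild(track);
});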
However, I find it improbable that nobody has run into this problem before, hence this question: is there any way to show embedded (ASS) subtitle tracks in a streamed video in node-webkit?
Not sure if this would help, but according to this page node-webkit doesn't ship with codecs for patented media formats. They do have a few suggestions on that page, one of which is to compile your own node-webkit.
You could try using Popcorn Time's ffmpegsumo file, which is what I used when I needed mp3 support and Chrome's version didn't work. I don't know whether it supports the ASS subtitle format, though (considering its use, I would think it has to).
Note: I would have posted this as a comment, but unfortunately I don't have commenting privileges yet. A couple of upvotes sure would be nice ;)

Record sound of one application

I want to develop an application for Mac OS X to record audio from one application.
I played around with Soundflower, but it only grabs the full system audio.
I know that I have to use a HAL plug-in. This plug-in is loaded from an application that uses Core Audio and then I can communicate with the plug-in to grab the audio.
My question is: what does such a plug-in look like? Are there examples on the internet? I have not found anything on this topic.
Now that you've decided that using Cocoa injection is a feasible solution to your problem, let's start there.
What you need to do is find out how the ObjC classes in the app are setting up to play audio, and hook in to set a different AU in place of the default system out.
There are two options (besides writing your own custom AU from scratch, which you don't need to do). You can use AUHAL as the AU and capture the data from AUHAL. This is a bit easier from the point of view of hooking things up, but it means you have to write the code that renders and saves the audio. Or you can hook in a save-to-file AU, which is a bit harder to hook up, but once you do, it takes care of rendering automatically.
So, how do you hook things in? Well, most of the higher-level CA calls are written to just write to the current output. If the app is doing things that way, you just need to hook in at startup to find your replacement AU and set it as the current output, in place of the default. On the other hand, if the app is writing directly to an AU that it stores in a variable, you have to hook that code to store your AU in the variable instead. And if it's building a graph of AUs, you either replace the default output in the graph, or stick yours in front of it.
See TN2091 for sample code fragments covering most of the hard parts for most of the possibilities. It doesn't show you how to put them together, it has a lot more about setting inputs than outputs (because that's harder), and the terminology can get confusing, but if you read it carefully, you should be able to find the parts you need.
If you haven't yet built a simple AU host and AU plugin before, you really should take the time to work through the whole Audio Unit Development Fundamentals guide. (And if you don't think you really need to know all that to do something simple, you're wrong. Why CoreAudio is Hard explains half of the reason; the changes between OS X versions are the other half.)
You probably also want to look at CocoaDev's CoreAudioAndAudioUnitsTutorial page, a placeholder for a complete tutorial that nobody's ever written, with links to a lot of useful stuff.
Meanwhile, if injecting the whole MTCoreAudio framework into the app is feasible, it comes with a ton of nice, complete samples. In fact, even if you aren't going to use the framework, it's worth reading the Overview documentation, and possibly the source code.

iOS Localization of large files like movies

I am creating a new iOS project where I have to include a video in my app. The video must also be accessible in offline mode, so I need to include it as a video file in the project.
The question: what is the best practice for localizing such large files? The app will be in at least 7 languages, and I cannot decide whether to include the video in all 7 languages, which would dramatically increase the app's size, or to include it only in English and localize everything else. Perhaps someone knows: if my phone's language is, for example, Spanish and I download a localized app, does the app include the videos for all languages or only for the selected one?
Any answer will be appreciated, and thank you in advance.
You could do something nifty by bundling the video and audio separately. You could then play the correct audio track for each localization. The same could be done with subtitles.
The most optimized way would be to not bundle any video in the application at all and just let users download the video for their current localization from within the app. That's what I think.
A single video can have multiple audio streams, so you can simply create one video with several audio streams. Audio is generally much smaller than video, so it should not hurt as much. The user can then select the audio language (or the program can preselect one based on some preference) to play it out.
Almost all container formats can contain more than one audio stream (definitely mp4, which is what I guess you are using).
EDIT: You could possibly also ship two files: one with the most common half of the languages and one with the less common half, to reduce the size somewhat.

What is a good way to implement a message board or other common UI plugins

I am thinking there must be libraries out there that people have developed which can be used as "plugins", or whatever people call them, to handle simple and common UI tasks.
I am using the message board idea just as an example; I am looking for a general solution. For instance, is there a place where I can browse "gems" for RoR that just take care of some UI component?
How do people usually integrate pieces such as a message board at the bottom of every page, or some other UI tool, without writing their own or using a CMS?
Thanks,
Alex
Two good places to browse gems are http://ruby-toolbox.com/ and of course http://rubygems.org/