May I use video.js as a minimalistic polyfill?

I'd like to have a library that offers a JavaScript API to control a player and manage its events, nothing more.
The GUI would (optionally) not be part of the library at all. I've tried to set up a player without controls, but even in that case the GUI is created in the DOM, just not shown.
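For example, this is roughly what I tried (a sketch from memory; the element id is made up):

// set up the player with the control bar disabled
var player = videojs('my-video', { controls: false }, function () {
  // the pure JS API works fine from here
  this.on('play', function () { console.log('playing'); });
  this.play();
});
// ...but the control bar markup is still injected into the DOM, merely hidden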
I can see two benefits: I could reuse my previous GUI more easily, and the video.js script would be smaller.
But this also questions the nature of the polyfill. Adding a track to manage subtitles without an interface to render them would not make a true polyfill. There would be two kinds of polyfill: the first would just let the browser play the video, and the second would create a consistent graphical interface to manage all the player's features.
The answerable question is: does video.js offer a way to provide only a JS API (and modify the DOM only when Flash is required)?
If there is no such feature, is it an option for the future (and if not, why not)?
Thank you all!


Appcelerator Hyperloop vs. Plain Titanium Modules

I've started playing around with Appcelerator Hyperloop. While it seems great to access native APIs from JS from day zero, it does raise a few questions about the platform's architecture and performance.
Currently (AFAIK) a Titanium app has a main UI thread (that runs the native UI controllers) and a JS thread (that runs the JS logic). Each call from JS to native is passed through the "Bridge" (which is the expensive operation in an app).
Also, the Titanium API doesn't cover all of the native APIs and abstracts as much as it can. If new native APIs are introduced, it can take time for Appcelerator to implement them in the platform.
One of my favorite things about Titanium is the ability to extend it (using Objective-C for iOS and Java for Android), allowing the use of native APIs that are not covered by Titanium, and also developing truly native, high-performance controls in case we need to do anything that's too "heavy" for JS. And, as mentioned, such modules are developed 100% natively for each platform.
Now that Appcelerator has introduced Hyperloop, I've made a simple test app and saw that Hyperloop is not translated into native code but is just normal JS code:
// Hyperloop resolves the require path to the native UIKit class
var UILabel = require('hyperloop/uikit/uilabel');
var label = new UILabel();   // instantiates a real UILabel
label.text = "HELLO WORLD!"; // sets the native property directly
$.index.add(label);          // adds the native view to an Alloy view
Another thing about it is that you have to run it on the main thread.
So a few questions come to mind as far as Hyperloop's architecture goes:
Do we still have a bridge? If Hyperloop is JS that calls a "special" Hyperloop require, then we still have a bridge, one that now not only acts as a bridge but also needs to do some sort of reflection (which is also an expensive operation)?
Until now, JS ran in its own thread, so running on the single main thread now seems like a potential source of more UI-blocking operations.
The old-fashioned modules were truly native (not involving the bridge call), so how do Hyperloop-enabled apps compare with those?
There isn't much documentation or many articles about Hyperloop explaining its inner workings yet, so if anyone who has been trying apps with it has answers, that could be very helpful.
Answering your questions directly:
There are no Kroll proxies involved anymore, since the actual classes are generated at runtime. This is done using the hyperloop-metabase, which performs reflection (as you already said) to build an AST that grabs the actual signatures, types, classes, methods, properties, etc.
We have not seen any performance issues with running on the main thread so far. If you do, please file a JIRA ticket so we can investigate the use case.
The old modules were "less native" than now, simply because they were all wrapped by the Kroll proxy (by extending every view from TiUIView and every proxy from TiProxy / TiViewProxy). Hyperloop does not work with those, making module development much faster and also allowing developers to test their changes live in their app without the need to package and reference the module manually. Hyperloop modules are nothing other than CommonJS modules, which are already used frequently across Alloy and other Ti components.
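To illustrate that last point, a Hyperloop module is just a plain CommonJS file (a minimal sketch; the file name and helper are made up):

// label-factory.js - an ordinary CommonJS module using Hyperloop classes
var UILabel = require('hyperloop/uikit/uilabel');

exports.createLabel = function (text) {
  var label = new UILabel(); // resolved to the native UIKit class at runtime
  label.text = text;
  return label;
};

An Alloy controller could then simply do $.index.add(require('label-factory').createLabel('Hi!'));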
I hope that gives you a quick overview on how Hyperloop works. If you have further questions, let us know!
Hans
(As a detailed answer to the above comment)
So let's say you have a tableview in iOS. The native class is UITableView and the Titanium API is Ti.UI.TableView / Ti.UI.ListView.
While the ListView already provides a huge performance boost compared to the TableView by abstracting the child-API usage into templates, those child APIs (Ti.UI.Label, Ti.UI.ImageView, ...) are still custom classes that are wrapped and provide custom logic (!), e.g. keeping track of their parent references, internal data structures, and locks to jump between the threads.
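To make the contrast concrete, here is a minimal sketch of a template-based ListView in classic Titanium (hypothetical template and image names):

// every child template below is backed by a wrapped proxy class
var listView = Ti.UI.createListView({
  templates: {
    row: {
      childTemplates: [
        { type: 'Ti.UI.Label',     bindId: 'title' }, // wrapped Ti.UI.Label proxy
        { type: 'Ti.UI.ImageView', bindId: 'icon'  }  // wrapped Ti.UI.ImageView proxy
      ]
    }
  },
  defaultItemTemplate: 'row'
});

var section = Ti.UI.createListSection();
section.setItems([
  { title: { text: 'Row 1' }, icon: { image: 'row1.png' } }
]);
listView.sections = [section];
$.index.add(listView);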
If you now check the Hyperloop example of a native UITableView, you access the native APIs directly, so there is no proxy behind it that needs to manage sections, templates, items, etc. Of course we deliver that API through a Kroll proxy in order to display it in Titanium, but you don't "jump across the bridge" with every call you make from the SDK.
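Compare that with the Hyperloop route, where you instantiate the native class directly (a rough sketch following the same require style as the UILabel example above):

// no Ti.UI proxy in between - this is the native UITableView itself
var UITableView = require('hyperloop/uikit/uitableview');
var tableView = new UITableView();

// the single bridge crossing: handing the native view to a Titanium window
$.index.add(tableView);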
The easiest way to see that is to actually run some of the bigger examples like the tableview, collectionview, and view-animation ones. If you do a fast scroll through these, you can already feel the performance boost compared to "classic" Titanium APIs, simply because the only communication between your proxy and the Titanium component (like a Ti.UI.Window you want to add it to) is the .add() call that receives the native API of the type HyperloopClass.
Finally, of course it still makes sense to use Ti.UI.ListView, for example, because it comes with the built-in utilities that Titanium devs love (events, easy configuration, and layout handling). But that's also where the benefit of Hyperloop comes in, by allowing developers to access those APIs themselves.
I hope that helps a bit more to understand it.

Can I sync multiple live radio streams with video?

Is it possible to sync multiple live radio streams to a pre-recorded video simultaneously and vary the volume at defined time-indexes throughout? Ultimately for an embedded video player.
If so, what tools/programming languages would be best suited for doing this?
I've looked at GStreamer, WebChimera, and ffmpeg but am unsure which route to go down.
This can be done with WebChimera, as it is open source and extremely flexible.
The best possible implementation of this is in QML by modifying the .qml files from WebChimera Player directly with any text editor.
The second best implementation of this is in JavaScript with the Player JS API.
The difference between these two methods would firstly be resource consumption.
The second method, using only JavaScript, would require adding one <object> tag for the video, and one more for each audio file you need to play. So for every media source you add to the page, you will need to create a new instance of the plugin.
The first method, done purely in QML (mostly knowing JavaScript is needed here too, as it handles the logic behind QML), would load all your media sources in one plugin instance, with multiple VlcVideoSurface components, each with its own Plugin QML API.
The biggest problem I can foresee for what you want to do is the buffering state, as all media sources need to be paused as soon as one video/audio stream starts buffering. Synchronizing them by time should not be too difficult, though.
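As a rough JavaScript sketch of that pause-on-buffering logic (the player objects, their play/pause/setVolume methods, and the event names are assumed placeholders, not actual WebChimera API calls):

// one video player plus the synced radio streams, each assumed to expose
// play(), pause(), setVolume() and on('buffering' | 'ready' | 'time', fn)
var players = [videoPlayer, radioPlayer1, radioPlayer2];

function pauseAll() { players.forEach(function (p) { p.pause(); }); }
function playAll()  { players.forEach(function (p) { p.play();  }); }

players.forEach(function (p) {
  p.on('buffering', pauseAll); // hold everything while any source buffers
  p.on('ready', playAll);      // resume together once it has caught up
});

// hypothetical volume cues keyed to the video's clock
var cues = [{ time: 10, player: radioPlayer1, volume: 0.2 }];
videoPlayer.on('time', function (seconds) {
  cues.forEach(function (cue) {
    if (seconds >= cue.time) cue.player.setVolume(cue.volume);
  });
});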
WebChimera Wiki is a great place to start, it has lots of demos and examples. And at WebChimera Questions we've helped developers modify WebChimera Player to suit even the craziest of needs. :)

Record sound of one application

I want to develop an application for Mac OS X to record audio from one application.
I played around with Soundflower, but it only grabs the full system audio.
I know that I have to use a HAL plug-in. This plug-in is loaded from an application that uses Core Audio and then I can communicate with the plug-in to grab the audio.
My question is: What does such a plug-in look like? Are there examples on the internet? I have not found anything about this topic.
Now that you've decided that using Cocoa injection is a feasible solution to your problem, let's start there.
What you need to do is find out how the ObjC classes in the app are setting up to play audio, and hook in to set a different AU in place of the default system out.
There are two options (besides writing your own custom AU from scratch, which you don't need to do). You can use AUHAL as the AU, and capture the data from AUHAL. This is a bit easier from the point of view of hooking things up, but it means you have to write the code that renders and saves the audio. Or you can just hook in a save-to-file AU, which is a bit harder to hook up, but once you do, it takes care of rendering automatically.
So, how do you hook things in? Well, most of the higher-level CA calls are written to just write to the current output. If the app is doing things that way, you just need to hook in at startup to set up your replacement AU and make it the current output, in place of the default. On the other hand, if the app is writing directly to an AU that it stores in a variable, you have to hook it so that it stores your AU in that variable instead. And if it's building a graph of AUs, you either replace the default output, or stick yours in front of it in the graph.
See TN2091 for some sample code fragments for most of the hard parts for most of the possibilities. It doesn't show you how to put them together, and it's got a lot more about setting inputs than outputs (because that's harder), and the terminology can get confusing, but if you read it carefully, you should be able to find the parts you need.
If you haven't yet built a simple AU host and AU plugin before, you really should take the time to work through the whole Audio Unit Development Fundamentals guide. (And if you don't think you really need to know all that to do something simple, you're wrong. Why CoreAudio is Hard explains half of the reason; the changes between OS X versions are the other half.)
You probably also want to look at CocoaDev's CoreAudioAndAudioUnitsTutorial page, a placeholder for a complete tutorial that nobody's ever written, with links to a lot of useful stuff.
Meanwhile, if injecting the whole MTCoreAudio framework into the app is feasible, it comes with a ton of nice, complete samples. In fact, even if you aren't going to use the framework, it's worth reading the Overview documentation, and possibly the source code.

Global object for Javascript to interact with Safari plug-in

The issue is that I've written a Safari plug-in (Growler) that allows web applications to send Growl notifications by calling JavaScript functions. However, the way it is written at the moment, people need to use <embed> to initialise the plug-in so that JavaScript can begin using it (something I picked up from Apple's examples).
I was wondering if there was a way I could define something like window.<pluginName> so that they didn't have to embed it every time? That would allow a lot of sites to begin using it without changing any code.
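To illustrate, this is the kind of boilerplate I would like to eliminate (the MIME type and notify() method here are placeholders for my plug-in's actual ones):

<!-- every page currently has to embed the plug-in before scripting it -->
<embed type="application/x-growler" id="growler" width="0" height="0">
<script>
  var growler = document.getElementById('growler');
  growler.notify('Build finished', 'All tests passed'); // via the plug-in's scripting bridge
</script>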
I've looked at a lot of examples and documentation, and two things came up — 'WebView' and 'WebScriptObject'. I'm pretty new to this, so I'm not really sure what to do.
There's no way to write a WebKit plug-in that doesn't handle a content type. That's why so many Safari “plug-ins” or “extensions” (including GrowlSafari) are implemented as input manager hacks.
The way you've done it is the only reliable, safe, supported, and not doomed way to do it.

Beginning XUL & XPCOM development with XULRunner?

I am planning to design an application with XUL & XPCOM for a proprietary system. I have decided to use C/C++, but how can I start the development as a beginner in this field?
I cannot find a good guide to start from. It would be great if you could give some links and books. I would also like to know how to prevent the user from modifying the code, especially in the view part, since the logic can be done in XPCOM.
XUL Explorer is a tool that lets you drag and drop XUL. It's good for mocking up an interface or starting to learn about the various elements you can use.
XULRunner is Mozilla's binary that allows you to run XUL/XPCOM/JavaScript applications.
The Mozilla Developer Center is your friend.
If you use IRC, check out #xulrunner on irc.mozilla.org. They are fairly tolerant of questions from beginners.
I don't think there's going to be a way around allowing the user to see (or potentially modify) the actual XUL interface. There are some paths for trying to secure the JavaScript in some way (some surface-level, like obfuscation and minification, and then some possible secure loading methods). XPCOM components can be written in C++ or JavaScript, among other languages; if you put more of your code in XPCOM, it should be more secure, I think.
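For example, a minimal XPCOM component written in JavaScript looks roughly like this (classic XPCOMUtils style; the names, contract ID, and GUID are made up):

// helloComponent.js - would be registered in the app's components directory
Components.utils.import("resource://gre/modules/XPCOMUtils.jsm");

function HelloWorld() {}
HelloWorld.prototype = {
  classDescription: "Hello World JS XPCOM component",
  classID: Components.ID("{b6016cb0-1d12-4ca1-8f4a-1a569c18e6a2}"), // made-up GUID
  contractID: "@example.com/helloworld;1",
  QueryInterface: XPCOMUtils.generateQI([]),

  // logic placed here is at least out of the visible XUL layer
  hello: function () { return "Hello from XPCOM"; }
};

// Gecko 1.9-era entry point that handles registration
var NSGetModule = XPCOMUtils.generateNSGetModule([HelloWorld]);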
A fun start for seeing what you can do in XUL is to check out the XUL Periodic Table.
Preventing the user from modifying your code is futile, as they will always be able to do this.
You could of course ship a modified build of xulrunner (containing some required XPCOM as well) which only loads jars signed by some key, but they could trivially hack around that by modifying the binary or the image in memory.
So don't bother trying to stop people modifying your code - you can't - unless you're on a trusted platform such as a games console - and even then it's not guaranteed.
This helped me to create my first XPCOM component.