Can a UPnP control point display a UI on a Media Renderer? - upnp

I need to design a UPnP control point device to remotely control a DLNA-certified TV box.
The question is: can a UPnP control point display a UI on a Media Renderer? What I mean is: how could I create a simple control point device (like a remote control for a TV) that has no display of its own? For example, imagine a UPnP joystick.
For now, the only way I see is to send an HTML (+ JavaScript) page containing my menu to the TV box; the TV would then use JavaScript to subscribe to a "cursor move" event on my control point device, so that when I move my joystick to the left, the cursor on the TV also moves to the left. Is that a realistic scenario?
Thank you

You should have a look at the device description of the TV box (or STB). It may contain a link to a UI for controlling the device, which may or may not (more likely not) be helpful. If you want to roll your own, you are stuck with using the services exposed by the device.
That is, unless the device exposes additional services, as Samsung Smart TVs do, for example: they expose a UPnP service that allows remote control of the TV. This is not part of DLNA, though.
In essence, check out the UPnP device and service descriptions for your box and see what they offer you. You could use something like Intel Device Spy to do this. I don't think you will be able to push HTML and JavaScript to your TV, though. Instead, you should implement the event handling for user input (touch, mousemove, whatever) on, say, a mobile device and use the exposed services via SOAP/UPnP.
Depending on what platform you are targeting, you will need to bring your own UPnP library for that. Cling is a Java library that is supposed to work on Android too. It is not yet possible to use only Web technologies to implement a UPnP client (although with NetworkServiceDiscovery it may be in the future).
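To illustrate what "use the exposed services via SOAP/UPnP" can look like in practice, here is a minimal sketch of invoking a UPnP action over HTTP. It assumes you have already obtained the service's control URL and service type from the device description (discovery itself is not shown), that the box actually exposes the standard AVTransport service, and that it runs under Node 18+ where fetch is available; the address and port are placeholders.

```typescript
// Minimal sketch: invoking a UPnP SOAP action (AVTransport Play) against a
// renderer's control URL. Assumes the control URL and service type were taken
// from the device description XML; runs on Node 18+ (global fetch).

const CONTROL_URL = "http://192.168.1.50:49152/AVTransport/control"; // placeholder
const SERVICE_TYPE = "urn:schemas-upnp-org:service:AVTransport:1";

async function invokeAction(action: string, args: Record<string, string>): Promise<string> {
  const argXml = Object.entries(args)
    .map(([name, value]) => `<${name}>${value}</${name}>`)
    .join("");

  const envelope =
    `<?xml version="1.0" encoding="utf-8"?>` +
    `<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" ` +
    `s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">` +
    `<s:Body><u:${action} xmlns:u="${SERVICE_TYPE}">${argXml}</u:${action}></s:Body>` +
    `</s:Envelope>`;

  const response = await fetch(CONTROL_URL, {
    method: "POST",
    headers: {
      "Content-Type": 'text/xml; charset="utf-8"',
      // UPnP control requires the SOAPACTION header: "<serviceType>#<actionName>"
      SOAPACTION: `"${SERVICE_TYPE}#${action}"`,
    },
    body: envelope,
  });
  return response.text();
}

// Example: tell the renderer to start playback of whatever is loaded.
invokeAction("Play", { InstanceID: "0", Speed: "1" })
  .then((xml) => console.log(xml))
  .catch((err) => console.error(err));
```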

Related

How to determine which cameras are front and back facing when using HTML5 getUserMedia and enumerateDevices APIs?

When accessing the camera using the HTML5 getUserMedia APIs, you can:
Request an unspecified "user" facing camera
Request an unspecified "environment" facing camera (optionally left or right)
Request a list of cameras available
Originally we used the "facing" constraint to choose the camera. If the camera faces the "user", we show it mirrored, as is the convention.
We run into problems, however, when a user does not have exactly 1 user-facing and 1 environment-facing camera. They might be missing one of these, or have multiple. This can result in the wrong camera being used, or the camera not being mirrored appropriately.
So we are looking at enumerating the devices. However, I have not found a way to determine whether a video device is "user facing" and should be mirrored.
Is there any API available in these APIs to determine whether a video input is "user" facing?
When you enumerate devices, input devices may have a method called getCapabilities(). If this method is available, you can call it to get a MediaTrackCapabilities object. This object has a field called facingMode which lists the valid facingMode options for that device.
For me this was empty on the PC but on my Android device it populated correctly.
Here's a jsfiddle you can use to check this on your own devices: https://jsfiddle.net/juk61c07/4/
Thanks to the comment from O. Jones for setting me in the right direction.
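As a concrete sketch of that approach (not guaranteed to work everywhere, since getCapabilities() is missing on some browsers and devices, as noted above), you could enumerate the video inputs, feature-detect getCapabilities(), and mirror only the devices that report a "user" facing mode:

```typescript
// Sketch: enumerate video inputs and, where supported, read facingMode from
// getCapabilities() to decide whether to mirror the preview.
async function listCameraFacings(): Promise<void> {
  // A getUserMedia call first is often needed so labels/capabilities are populated.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });

  const devices = await navigator.mediaDevices.enumerateDevices();
  for (const device of devices) {
    if (device.kind !== "videoinput") continue;

    // getCapabilities() is not implemented everywhere; guard before calling it.
    const info = device as InputDeviceInfo;
    const capabilities: MediaTrackCapabilities =
      typeof info.getCapabilities === "function" ? info.getCapabilities() : {};

    const facing = capabilities.facingMode ?? []; // e.g. ["user"] or ["environment"]
    const mirror = facing.includes("user");       // mirror only user-facing cameras

    console.log(device.label || device.deviceId, facing, mirror ? "mirror" : "no mirror");
  }

  stream.getTracks().forEach((track) => track.stop());
}

listCameraFacings().catch((err) => console.error(err));
```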

Identify the monitor with the browser window in FireBreath

I am using FireBreath to create a cross-browser plugin which makes use of some native libraries for the respective platform (some .NET-based DLLs for Windows and Objective-C-based dylibs/frameworks for Mac). The native libraries display UI screens. To improve usability, if the user has a multi-monitor/extended-monitor setup, I would like the native UIs to appear on the same screen the browser window is currently on.
If an identifier for the monitor with the browser window can be retrieved, it can be passed down to the native components, which can be configured to display their UIs on that monitor. I have used FireBreath's getWindowPosition() method to get the rect coordinates of the plugin and used that information to identify the correct monitor on Windows.
However, the coordinates returned on Mac always seem to be 0 (or 1), irrespective of the monitor on which the browser window currently resides. I understand that we have to configure an event model and a drawing model for this to work on Mac. I have tried the following event/drawing model combinations without much success.
1) Cocoa/CoreGraphics
2) Carbon/CoreGraphics
Any help in this regard is much appreciated. Please also share if there are other approaches to achieve the same thing. What I want to achieve is to identify the monitor on which the currently active browser window resides on Mac. I am unsure at this point, but it may be possible to achieve this at the Objective-C level (without any changes at the FireBreath level). Also, please note that I want to support the Safari, Firefox and Chrome browsers.
You won't like this answer, but simply put, you can't do that on Mac. The problem is that with CoreGraphics you are only given a CGContextRef to work with, and it doesn't know where it will be drawn. It was technically possible in older browsers to get an NSWindow by exploiting some internal implementation details, but in many browsers that's no longer possible, and it was never supported.
The other drawing models are the same; with CoreAnimation you have a CALayer, but it doesn't know which screen or monitor it is drawn to. I personally find it a bit annoying as well, but I do not know of any way to find out which monitor your plugin is rendered to, particularly since most browsers actually copy the buffer to something else and render it in a different process.
I did manage to come up with a workaround, and I am just replying here for the completeness of the thread. As @taxilian explained, it is not possible to retrieve the plugin coordinates using the window reference. As an alternative approach, the JavaScript 'window' object has two properties, 'screenX' and 'screenY', that return the X and Y coordinates of the browser window relative to the screen. If the user has an extended-monitor setup, these are the absolute coordinates with respect to the full extended screen. We can use these values to determine which monitor the browser window is on (if the X coordinate is outside the bounds of the primary monitor's width, the browser must be on the extended monitor). We can retrieve DOM properties from FireBreath as explained in the following link:
http://www.firebreath.org/display/documentation/Invoking+methods+on+the+DOM
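Here is a browser-side sketch of that check. It assumes the primary monitor is the leftmost one and the extended display sits to its right, and it uses screen.width as a stand-in for the primary monitor's width (which holds when both monitors share a resolution; otherwise pass the real width down from the native side). From FireBreath you would read the same window.screenX/screenY values through the DOM API linked above.

```typescript
// Browser-side version of the workaround above: use window.screenX to guess
// whether the browser window sits on the extended monitor.
function isBrowserOnExtendedMonitor(): boolean {
  const x = window.screenX;                 // window's X offset within the virtual desktop
  const primaryWidth = window.screen.width; // assumed width of the primary monitor

  // A left edge beyond the primary monitor's width means the window is on the
  // extended display (assumed to be to the right of the primary monitor).
  return x >= primaryWidth;
}

console.log("On extended monitor:", isBrowserOnExtendedMonitor());
```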

Rally AppSDK: Is there a way to facilitate "Inter-Panel" communication between Apps in the new layout schema

So I'm just getting my arms around the new "panel-based" App scheme released with the 5/5/2012 version of Rally. At first it was a bit frustrating to lose the window real estate I'd been accustomed to with full-page iFrames.
I am curious, however: in order to make the best use of onscreen real estate for an App page, I would like to set up a multi-panel App whose components can communicate. For instance, I'd like to have one App panel display some control widgets and perhaps an AppSDK table, and a second App panel display a chart or grid that responds to events/controls in the first panel.
I've been scanning the AppSDK docs for hints as to how this might be accomplished, but I'm coming up short. Is there a way to wire up event listeners in one App panel that respond to widget controls in another?
We have not yet decided on the best way to have the Apps communicate. That is something we are still spiking out internally.
Each custom App is in an IFrame so figuring out how to make them communicate can be a bit tricky. Once we figure out a good way to do it we will be sure to let you know.
Has this topic, "App communication", been addressed yet? I would like to have one Custom Grid show user stories; when a user story is selected, another grid should show the related tasks.
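Not an official Rally mechanism (as the answer above says, none existed at the time), but since each App panel is an iframe on the same parent page, one rough workaround is to broadcast with postMessage to the sibling frames and listen in the other panel. The message shape and the loadTasksForStory helper below are made up for illustration.

```typescript
// Hedged workaround, not an official Rally mechanism: each App panel is an
// iframe on the same parent page, so a panel can broadcast to its sibling
// frames with postMessage.

// Panel A: announce that a user story was selected in its grid.
function broadcastStorySelected(storyOid: string): void {
  const parent = window.parent;
  for (let i = 0; i < parent.frames.length; i++) {
    // "*" because sibling panels may be served from a different origin.
    parent.frames[i].postMessage({ type: "storySelected", storyOid }, "*");
  }
}

// Panel B: listen for the broadcast and refresh its task grid.
declare function loadTasksForStory(storyOid: string): void; // hypothetical helper in Panel B

window.addEventListener("message", (event: MessageEvent) => {
  const data = event.data;
  if (data && data.type === "storySelected") {
    loadTasksForStory(data.storyOid);
  }
});
```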

How to compose an MMS with an audio file programmatically in iOS?

I'm interested in seeing working code for how to compose an SMS/MMS programmatically with the latest iOS so as to include a sound file. If the file is too big (I'm unsure of the maximum size at this time; any info is appreciated), an error should be displayed to the user.
I know this can be done, because the built-in recorder on the Apple iPhone allows sending audio files via a text message if they're not too big. I'd like to understand how it achieves this programmatically, what sound formats are available to me, and what the limitations are, if any.
You are not allowed to send MMS through the MessageUI framework, which is the framework iOS provides for developers to interact with the Messages interface. Apple uses private APIs in its own apps, and any use of private APIs means automatic rejection from the App Store.
Raphael is right; there is no way in the current iOS version (iOS 5) to send an MMS using the MessageUI framework.
One potential workaround we've found was to create a "send MMS" screen where a user can attach their selected audio/pxt, and then, when the user hits the send button, make a call to a third-party MMS gateway to deliver the audio/image.
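Purely as an illustration of that workaround, the sketch below posts the recorded audio to a hypothetical third-party MMS gateway; the endpoint, form fields, and size limit are placeholders, and a real gateway defines its own API and limits. It is written in TypeScript for consistency with the other sketches here; inside an iOS app you would issue the equivalent HTTP request with the platform's networking APIs.

```typescript
// Illustrative only: the gateway URL, fields, and 600 KB limit are placeholders.
const MAX_ATTACHMENT_BYTES = 600 * 1024;

async function sendAudioViaGateway(recipient: string, audio: Blob): Promise<void> {
  if (audio.size > MAX_ATTACHMENT_BYTES) {
    // Surface this to the user instead of silently failing.
    throw new Error("Audio file is too large to send as an MMS.");
  }

  const form = new FormData();
  form.append("to", recipient);
  form.append("attachment", audio, "note.m4a");

  const response = await fetch("https://mms-gateway.example.com/send", { // hypothetical endpoint
    method: "POST",
    body: form,
  });
  if (!response.ok) {
    throw new Error(`Gateway rejected the message: ${response.status}`);
  }
}
```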

How do I access native APIs with Sencha Touch?

If I wanted to create a mobile app that allows the user to take pictures with their phone, record audio notes and record video, how would I do that?
I was browsing through the Sencha Touch 2 API and while I see documentation on video and audio files, it seems like it is just providing a way for me to access files stored on the phone - not actual triggers to record, or take pictures.
Am I missing something?
How would I do what I want?
For Sencha Touch to have access to your phone's capabilities, you need to use a product like PhoneGap.
Unless there is an HTML5 API for doing those sorts of things, I don't think you can do that. I know PhoneGap has native extensions added to that platform for access to things like the microphone, camera, etc. I don't know whether Sencha Touch has added any such extensions to let you do this.
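For reference, here is roughly what the PhoneGap/Cordova route looks like once the camera plugin is installed: the webview (and therefore your Sencha Touch code) can trigger the native camera through navigator.camera.getPicture. The plugin globals are untyped here, and the component id in the usage example is hypothetical.

```typescript
// Sketch of the PhoneGap/Cordova route, assuming cordova-plugin-camera is installed.
declare const Camera: any; // provided by the camera plugin at runtime
declare const Ext: any;    // Sencha Touch global

function takePicture(onUri: (uri: string) => void): void {
  (navigator as any).camera.getPicture(
    (imageUri: string) => onUri(imageUri),                        // success: native camera returned a file URI
    (message: string) => console.error("Camera failed:", message), // failure callback
    {
      quality: 50,
      destinationType: Camera.DestinationType.FILE_URI, // return a file path rather than base64 data
    }
  );
}

// Example usage: show the captured photo in an image component (id is hypothetical).
takePicture((uri) => {
  Ext.getCmp("photo").setSrc(uri);
});
```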
Just thinking out of the box here, but you might be able to put Sencha JavaScript into a WebView from within an Android Java process. The Java code could then expose an object in its process as an extension point to the JavaScript engine for access to the camera, microphone, and so on.