I am using FireBreath to create a cross-browser plugin which makes use of some native libraries for the respective platform (some .NET-based DLLs for Windows and Objective-C-based dylibs/frameworks for Mac). These native libraries display UI screens. In order to improve usability, if the user has a multi-monitor/extended-monitor setup, I would like the native UIs to appear on the same screen the browser window is currently on.
If an identifier for the monitor hosting the browser window can be retrieved, it can be passed down to the native components, which can be configured to display their UIs on that monitor. On Windows, I have used FireBreath's getWindowPosition() method to get the rect coordinates of the plugin and used that information to identify the correct monitor.
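For reference, the Windows-side check boils down to something like this (a sketch; MonitorForRect is just an illustrative name, and the RECT is built from the coordinates that getWindowPosition() returns):

#include <windows.h>

// Sketch: identify the display hosting the plugin's rect.
// MonitorFromRect picks the display with the largest intersection and,
// with MONITOR_DEFAULTTONEAREST, falls back to the closest one.
HMONITOR MonitorForRect(const RECT &rc)
{
    return MonitorFromRect(&rc, MONITOR_DEFAULTTONEAREST);
}

The resulting HMONITOR (or the MONITORINFO obtained from it via GetMonitorInfo) is what gets passed down to the native components.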
However, the coordinates returned on Mac always seem to be 0 (or 1), irrespective of the monitor on which the browser window currently resides. I understand that we have to configure an event model and a drawing model in order for this to work on Mac. I have tried the following event/drawing model combinations without much success:
1) Cocoa/CoreGraphics
2) Carbon/CoreGraphics
Any help in this regard is much appreciated. Also, please do share if there are other approaches to achieve the same thing. What I want to achieve is to identify the monitor on which the currently active browser window resides on Mac. I am unsure at this point, but it may be possible to achieve this at the Objective-C level (without any changes at the FireBreath level). Also please note that I want to support the Safari, Firefox and Chrome browsers.
You won't like this answer, but simply put, you can't do that on Mac. The problem is that with CoreGraphics you are only given a CGContextRef to work with, and it doesn't know where it will be drawn. It was technically possible in older browsers to get an NSWindow by exploiting some internal implementation details, but in many browsers that's no longer possible, and it was never supported.
The other drawing models are the same; with CoreAnimation you have a CALayer, but it doesn't know which screen or monitor it is drawn to. I personally find this a bit annoying as well, but I do not know of any way to find out which monitor your plugin is rendered to, particularly since most browsers actually copy the buffer to something else and render it in a different process.
I did manage to come up with a workaround, and I am replying here for the completeness of the thread. As @taxilian explained, it is not possible to retrieve the plugin coordinates using the window reference. As an alternative approach, the JavaScript 'window' object has two properties, 'screenX' and 'screenY', that return the X and Y coordinates of the browser window relative to the screen. If the user has an extended-monitor setup, these are the absolute coordinates with respect to the full extended screen. We can use these values to determine which monitor the browser window is on (if the X coordinate is outside the bounds of the primary monitor's width, the browser should essentially be on the extended monitor). We can retrieve DOM properties from FireBreath as explained in the following link:
http://www.firebreath.org/display/documentation/Invoking+methods+on+the+DOM
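In FireBreath code this looks roughly as follows (a sketch; m_host is the FB::BrowserHostPtr every plugin instance holds):

// Read window.screenX / window.screenY through FireBreath's DOM wrappers.
FB::DOM::WindowPtr window = m_host->getDOMWindow();
int screenX = window->getProperty<int>("screenX");
int screenY = window->getProperty<int>("screenY");
// If screenX falls outside the primary monitor's width, the browser
// window is essentially on the extended monitor.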
I'm currently adapting my personal engine to Vulkan and I want to reimplement transparent windows, which I already had with OpenGL.
I thought that all I needed to do was to select the correct color format (with alpha channel) and to set the compositeAlpha property of VkSwapchainCreateInfoKHR to VK_COMPOSITE_ALPHA_POST_MULTIPLIED_BIT_KHR.
However, clearing the window with a fully transparent color doesn't produce the expected result: the window is fully opaque.
My window system, which hasn't changed since the OpenGL version, does support transparency. When I disable rendering entirely, I also can't click through at the supposed position of the window, which tells me that it's there.
Are there any other required changes to make this work?
Some info:
The image format is VK_FORMAT_B8G8R8A8_UNORM, and I based the Vulkan setup on Sascha Willems' examples.
That capability (like most others) has to be queried before use to see whether it is supported; otherwise it is invalid to use it.
This particular feature is queried via vkGetPhysicalDeviceSurfaceCapabilitiesKHR as pSurfaceCapabilities->supportedCompositeAlpha. It is a bitfield/flag set, so more than one mode (or none) can be supported.
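A sketch of that query, using the physical device and surface handles you already have for swapchain creation:

// Query what the surface supports before filling VkSwapchainCreateInfoKHR.
VkSurfaceCapabilitiesKHR caps;
vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physicalDevice, surface, &caps);

// supportedCompositeAlpha is a bitmask; fall back to opaque composition
// if post-multiplied alpha is not offered.
VkCompositeAlphaFlagBitsKHR compositeAlpha = VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR;
if (caps.supportedCompositeAlpha & VK_COMPOSITE_ALPHA_POST_MULTIPLIED_BIT_KHR)
    compositeAlpha = VK_COMPOSITE_ALPHA_POST_MULTIPLIED_BIT_KHR;
// swapchainCreateInfo.compositeAlpha = compositeAlpha;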
I think the feature support may be influenced by the VkSurface, that is, by how the platform window was created. Or maybe the driver maker simply has not implemented it yet (despite the feature being supportable).
Since it worked for you before in OpenGL, the latter is more likely. But it couldn't hurt to play with the platform window creation parameters...
Dunno if this is still relevant, but I got it working with transparent windows through GLFW. (If you are not using GLFW you may dismiss this answer!)
As stated here, there are two ways of obtaining window transparency: framebuffer transparency (alpha bit), and window transparency.
For window transparency it is sufficient to call glfwSetWindowOpacity(GLFWwindow*, float), where the opacity value should be in the range (0, 1].
NOTE: Since GLFW does not support using both transparency methods at the same time, we must still use VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR in the compositeAlpha field of the VkSwapchainCreateInfoKHR object.
Window transparency may not be supported on all systems, which is why GLFW provides the function glfwGetWindowOpacity(GLFWwindow*) to check whether calling the first method was successful.
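In code the window-transparency route is just (a sketch, assuming GLFW 3.3+ and an existing GLFWwindow* named window):

// Make the whole window, contents included, 50% opaque.
glfwSetWindowOpacity(window, 0.5f);

// glfwGetWindowOpacity returns 1.0f when the platform ignored the request,
// so it doubles as the support check mentioned above.
if (glfwGetWindowOpacity(window) == 1.0f) {
    // window transparency is not supported on this system
}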
I'm new to UWP (Windows 10) and working on an app for Windows Phone. I wanted to know the difference between:
Using multiple frames and navigating from one to the other.
Using a single XAML page with no frames but with multiple grids (or other panels) and, instead of navigating, just changing visibility so that only the desired grid is visible.
Which option is better, and why?
The system keeps track of the Page you are currently on. So even when your App exits, and even if it's removed from memory, the OS can tell the App to reopen on that page.
Similarly, when your App provides other Apps with the capability of calling into it to open certain file types or to perform certain actions (e.g. start navigation, etc.), pages will be used.
Lastly, if you put everything on a single page and just manipulate visibility, this will increase the memory consumption of your App (as everything needs to be loaded even if it's not visible), and it might also increase load times.
Of course, how much that impacts you depends on the type of App you are building. In general, however, I'd advise you to start out with separate pages in case your App grows. You also get a lot of stuff out of the box if you do so (e.g. animated transitions, etc.).
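For illustration, page navigation is a single call. In C++/WinRT it looks like the following (DetailPage stands in for a page of your own project; the C# equivalent is Frame.Navigate):

// From inside a Page: load DetailPage on demand and push it onto the
// navigation back stack, including the default transition animation.
Frame().Navigate(winrt::xaml_typename<MyApp::DetailPage>());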
Ok so this question is actually in two parts.
I coded a video filter for VLC and I would like to add a control to the Video Effects panel in the OS X UI. So far I've been able to link my plugin to the UI by hijacking one of the existing controls, but this isn't ideal.
Now, if I open up the Xcode project (I'm running Xcode 6.3.1) and try to open the VideoEffect.xib file, I get the following error:
I tried to google this, but it sounds like the only alternative would be to play archaeologist and dig up an old copy of Xcode 3. Is there any other way to open and edit this file? I looked at the XML code, but if I started to change that I'd do more damage than good.
The second thing I'd like to do is sending back values from the effect module to the UI. At the moment (by hijacking one of the existing sliders), all I can do is read a value from the panel with
config_ChainParse(p_filter, FILTER_PREFIX, ppsz_filter_options, p_filter->p_cfg);
p_filter->p_sys->i_factor = var_CreateGetIntegerCommand(p_filter, FILTER_PREFIX "factor");
and then, inside the callback function:
p_sys->i_factor = VLC_CLIP( newval.i_int, 0, 255 );
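For context, the callback is wired up roughly like this (a sketch following the usual VLC filter pattern; FactorCallback is my name for it):

static int FactorCallback( vlc_object_t *p_this, char const *psz_var,
                           vlc_value_t oldval, vlc_value_t newval,
                           void *p_data )
{
    filter_sys_t *p_sys = (filter_sys_t *)p_data;
    p_sys->i_factor = VLC_CLIP( newval.i_int, 0, 255 );
    return VLC_SUCCESS;
}

/* registered in the filter's Open() function: */
var_AddCallback( p_filter, FILTER_PREFIX "factor", FactorCallback, p_filter->p_sys );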
However, I haven't been able to write back the value. I'd like the filter to set p_sys->i_factor to a random value at start. This works (using var_SetInteger()), but it isn't reflected in the position of the slider in the Video Effect panel. I suspect I need to hack a bit deeper for that. Any ideas?
Regarding your first question with the xib-file. Consider downloading and using our forthcoming 3.0 code from git://git.videolan.org/vlc.git - it allows editing of said file without Xcode 3.
Regarding your second question, why would you want your video filter to interfere with the UI? This is not how the architecture of VLC works and there is no correct way to do it at this point. You would need to edit the core to do another global variable callback to ask the UI to reload the presented filter configuration.
Perhaps, if you give details about what your filter does and what you want to achieve, we can find a more supported way :)
I would like to use Google Maps in my Mac application.
I found the Google Maps SDK for iOS, but not one for OS X.
I want to show two annotations and a line connecting them on a Google Map. The coordinates of both annotations are dynamic, based on the user's selection.
Below is the approach I found that could work:
Call an API and pass the location coordinates of both annotations.
On the server side, an HTML page is generated using JavaScript, showing the two annotations and the line connecting them.
In the API response I get the URL of that HTML page.
I show this page in a UIWebView.
I want to know if there is any other way I can achieve this.
I want to distribute the application outside the Mac App Store, and to distribute outside the store I need to sign the app with a Developer ID, which does not support the Maps framework.
I didn't find anything related to this, which is why I created this thread.
Thanks in advance.
I recently ported the Mapbox iOS SDK over to OS X. It has a lot of the features of MapKit, but it’s open source and should also work in a developer-signed application such as yours. To use the Mapbox OS X SDK, download the latest release from the GitHub repository (look for releases beginning with “osx-”) and follow the instructions in README.md. An API reference is included.
I want to show two annotations and a line connecting them on a Google Map. The coordinates of both annotations are dynamic, based on the user's selection.
To display the annotations on-screen, you’ll need the MGLPointAnnotation and MGLPolyline classes. You can move the point annotations dynamically by setting their coordinate properties. The polyline, however, is immutable; to change its path, remove the existing polyline and add a new one with the new coordinates.
You will have to make it with WebKit and the Google Maps API.
MapKit is available in OS X 10.9 Mavericks: https://developer.apple.com/library/mac/documentation/MapKit/Reference/MapKit_Framework_Reference/index.html
There are, of course, many ways of hiding the fact that you're using WebKit, but if they violate Apple's or Google's TOS then submission to the App Store won't be possible.
Hope this will be helpful!
So I'm just getting used to, and getting my arms around, the new "panel-based" App scheme released with the 5/5/2012 version of Rally. At first it was a bit frustrating to lose the window real estate I'd been accustomed to with full-page iframes.
I am curious however - from a desire to optimize the way I use real estate onscreen for an App page - I would like to setup and utilize a multi-panel App whose components can communicate. For instance, I'd like to have one App panel display some control widgets and perhaps an AppSDK table, and a second App panel display a chart or grid that responds to events/controls in the first panel.
I've been scanning the AppSDK docs for hints as to how this might be accomplished, but I'm coming up short. Is there a way to wire up event listeners in one App panel that respond to widget controls in another?
We have not yet decided on the best way to have the Apps communicate. That is something we are still spiking out internally.
Each custom App is in an IFrame so figuring out how to make them communicate can be a bit tricky. Once we figure out a good way to do it we will be sure to let you know.
Has this topic, "App Communication", been addressed yet? I would like to have one Custom Grid show User Stories; when a user story is selected, another grid should show the related tasks.