OS Data Hub not working on iPad or iPhone - openlayers-6

I look after a website which displays UK Ordnance Survey maps for people going on walks in SW England. It relies on displaying a marker on an OS map, either in response to an alphanumeric OSGB grid reference appended as a query string to the name of the HTML file which displays the map, or, in the absence of a query string, in response to a click on the OS map produced by that file. This has all worked well for several years, on both Windows PCs and mobile devices, using OS OpenSpace, which is based on OpenLayers 2. Now, however, Ordnance Survey have decided to enforce a new system based on OpenLayers 6, which requires a complete re-code of the JavaScript. OS provide example code for various scenarios, from which I have re-written my code to run perfectly on a Windows PC with a mouse, but it fails on an iPad or an iPhone, particularly if the iPad is several years old.
Using a current Apple device, I cannot display a scalable vector graphic (SVG) marker on an OS map, either at hard-coded co-ordinates or in response to a tap on the map. A tap will, however, bring up a pop-up at the tapped point, and a swipe on the map will pan it. Using an iPad several years old, in addition to the above problems, a two-finger pinch will not zoom the map and a swipe will not pan it. The only things that work are the + and - buttons on the map.
I have read that I may need to include a controls section within my Map constructor, along the lines of:
controls: [
    new OpenLayers.Control.TouchNavigation({
        dragPanOptions: {
            enableKinetic: true
        }
    }),
    new OpenLayers.Control.Zoom()
],
but if I include this code, my JavaScript crashes, and the JavaScript error console within MS Edge gives the error message 'OpenLayers is not defined'.
Ordnance Survey tell me that my problem 'is an issue relating to the mapping library, OpenLayers, rather than our APIs', so my question is: how do I get code based on the OS examples to run on mobile devices of any age, in the same way my existing code based on OpenLayers 2 has run for several years?
I will gladly supply a copy of my code which works on a Windows PC, but not on an Apple device, on request.

OpenLayers needs polyfills on older devices/browsers. Try adding
<script src="https://unpkg.com/elm-pep"></script>
<script src="https://cdn.polyfill.io/v2/polyfill.min.js?features=fetch,requestAnimationFrame,Element.prototype.classList,URL"></script>
before the ol.js script. The default controls should be sufficient (the code you have used is OpenLayers 2 format and will not work with OpenLayers 6).
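For reference, here is a minimal sketch of what the OpenLayers 6 setup can look like once the polyfills are in place, using only the default controls and interactions (which already include touch drag-pan and pinch-zoom). The OS Data Hub layer name, the API key, the marker.svg file and the pinned ol version are placeholders based on the OS examples, so substitute your own:

<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/ol@6.5.0/ol.css">
<div id="map" style="width: 100%; height: 500px"></div>
<script src="https://cdn.jsdelivr.net/npm/ol@6.5.0/dist/ol.js"></script>
<script>
// Tile layer pointing at the OS Data Hub ZXY endpoint (layer name and key are placeholders).
var osLayer = new ol.layer.Tile({
    source: new ol.source.XYZ({
        url: 'https://api.os.uk/maps/raster/v1/zxy/Outdoor_3857/{z}/{x}/{y}.png?key=YOUR_API_KEY'
    })
});

// Vector layer holding a single marker styled with an SVG icon.
var markerSource = new ol.source.Vector();
var markerLayer = new ol.layer.Vector({
    source: markerSource,
    style: new ol.style.Style({
        image: new ol.style.Icon({ src: 'marker.svg', anchor: [0.5, 1] })
    })
});

// No 'controls' or 'interactions' options: the OpenLayers 6 defaults already
// include touch drag-pan and pinch-zoom, so nothing extra is needed for mobile.
var map = new ol.Map({
    target: 'map',
    layers: [osLayer, markerLayer],
    view: new ol.View({
        center: ol.proj.fromLonLat([-3.5, 50.7]),
        zoom: 12
    })
});

// Drop (or move) the marker wherever the user clicks or taps.
map.on('singleclick', function (evt) {
    markerSource.clear();
    markerSource.addFeature(new ol.Feature(new ol.geom.Point(evt.coordinate)));
});
</script>

If the marker still refuses to appear only on Apple devices, it may also be worth checking that the SVG file declares explicit width and height attributes; WebKit is known to be fussy about drawing SVG images onto a canvas without them.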

Related

How do we get Qt to render to memory rather than a device?

I have an application that uses Qt 5.6 for various purposes and that runs on an embedded device. Currently I have it rendering via eglfs to a Linux frame buffer on an attached display but I also want to be able to grab the data and send it to a single-color LED display unit (a device will either have that unit or a full video device, never both at the same time).
Based on what I've found on the net so far, the best approach is to:
turn off anti-aliasing;
set Qt up for a 1 bit/pixel display device;
select a 1bpp font, no grey-scale allowed; and
somehow capture the graphics scene that Qt produces so I can transfer it to the display unit.
It's just that last one I'm having issues with. I suspect I need to create a surface of some description and inject that into the Qt display "stack", but I cannot find any good examples on how to do this.
How does one do this and, assuming I have it right, is there a synchronisation method used to ensure I'm only getting complete buffers from the surface (i.e., no tearing)?

Editing Video Effects panel on VLC for Mac

Ok so this question is actually in two parts.
I coded a video filter for VLC and I would like to add a control to the Video Effects panel on the OS X UI. So far I've been able to link my plugin to the UI by hijacking one of the existing controls, but this isn't ideal.
Now, if I open up the Xcode project (I'm running Xcode 6.3.1) and try to open the VideoEffect.xib file, I get the following error:
I tried to google this but it sounds like the only alternative would be to play archaeologist and dig up an old copy of Xcode 3. Is there any other way to be able to open this file and edit it somehow? I tried to look at the XML code but if I started to change that I'd do more damage than good.
The second thing I'd like to do is to send values back from the effect module to the UI. At the moment (by hijacking one of the existing sliders), all I can do is read a value from the panel with
config_ChainParse(p_filter, FILTER_PREFIX, ppsz_filter_options, p_filter->p_cfg);
p_filter->p_sys->i_factor = var_CreateGetIntegerCommand(p_filter, FILTER_PREFIX "factor");
and then, inside the callback function:
p_sys->i_factor = VLC_CLIP( newval.i_int, 0, 255 );
However, I haven't been able to write back the value. I'd like the filter to set p_sys->i_factor to a random value at start. This works (using var_SetInteger()), but it isn't reflected in the position of the slider in the Video Effect panel. I suspect I need to hack a bit deeper for that. Any ideas?
Regarding your first question with the xib-file. Consider downloading and using our forthcoming 3.0 code from git://git.videolan.org/vlc.git - it allows editing of said file without Xcode 3.
Regarding your second question, why would you want your video filter to interfere with the UI? This is not how the architecture of VLC works and there is no correct way to do it at this point. You would need to edit the core to do another global variable callback to ask the UI to reload the presented filter configuration.
Perhaps, if you give details about what your filter does and what you want to achieve, we can find a more supported way :)

Google Map Integration in OS X Application

I would like to use Google Maps in my Mac application.
I found the Google Maps SDK for iOS, but not one for OS X.
I want to show two annotations and a line connecting them on a Google map. The coordinates of both annotations are dynamic, set by the user's selection.
Below is the approach I have found that could work:
Call an API and pass the location coordinates for both annotations.
On the server side, an HTML page is generated using JavaScript, showing the two annotations and the line connecting them.
In the API response I will get the URL of that HTML page.
I will show this page in a UIWebView.
I want to know whether there is any other way I can achieve this.
I want to distribute the application outside the Mac App Store, and to distribute outside the store I need to sign the app with a Developer ID, which does not support Maps.
I didn't find anything related to this, which is why I created this thread.
Thanks in advance.
I recently ported the Mapbox iOS SDK over to OS X. It has a lot of the features of MapKit, but it’s open source and should also work in a developer-signed application such as yours. To use the Mapbox OS X SDK, download the latest release from the GitHub repository (look for releases beginning with “osx-”) and follow the instructions in README.md. An API reference is included.
I want to show two annotations and a line connecting them on a Google map. The coordinates of both annotations are dynamic, set by the user's selection.
To display the annotations on-screen, you’ll need the MGLPointAnnotation and MGLPolyline classes. You can move the point annotations dynamically by setting their coordinate properties. The polyline, however, is immutable; to change its path, remove the existing polyline and add a new one with the new coordinates.
You will have to make it with WebKit and the Google Maps API.
MapKit is available in OS X 10.9 Mavericks: https://developer.apple.com/library/mac/documentation/MapKit/Reference/MapKit_Framework_Reference/index.html
There are of course many ways of hiding the fact that you're using WebKit but if they violate Apple's or Google's TOS then submission to the App Store won't be possible.
Hope this will be helpful!
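If you go the WebKit route, the page loaded into the web view can be quite small. Here is a minimal sketch using the Google Maps JavaScript API; YOUR_API_KEY and the two coordinates are placeholders, and in practice the coordinates would be injected from the user's selection:

<div id="map" style="width: 100%; height: 100%"></div>
<script>
// Draw two markers and the line connecting them.
function initMap() {
    var pointA = { lat: 50.72, lng: -3.53 };
    var pointB = { lat: 50.37, lng: -4.14 };

    var map = new google.maps.Map(document.getElementById('map'), {
        center: pointA,
        zoom: 8
    });

    new google.maps.Marker({ position: pointA, map: map });
    new google.maps.Marker({ position: pointB, map: map });

    new google.maps.Polyline({ path: [pointA, pointB], map: map });
}
</script>
<script async src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=initMap"></script>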

Codename one Camera app

I'm trying to build a sort of camera app using Codename One. Taking a picture is no problem, but I also want to stream the camera feed into the background, the way a regular camera app on a mobile phone does, so you can actually see what you're about to film or photograph.
We don't currently support some of the more elaborate AR API's introduced by Google/Apple but we do support placing a camera view finder right into your app with a new cn1lib: https://github.com/codenameone/CameraKitCodenameOne
Since this is implemented in a library you can effectively edit the native code and add functionality as needed.
The original answer is out of date by now; I'm keeping it below for reference:
You can record video or take a photo with Codename One.
However, augmented reality type applications where you can place elements on top of the camera viewfinder are currently not supported by Codename One. This functionality is somewhat platform specific and hard to implement in a portable manner.

Identify the monitor with the browser window in FireBreath

I am using FireBreath to create a cross-browser plugin which makes use of some native libraries for the respective platform (some .NET-based DLLs for Windows and Objective-C-based dylibs/frameworks for Mac). The native libraries display UI screens. In order to improve usability, if the user has a multi/extended monitor setup, I would like the native UIs to appear on the same screen the browser window is currently on.
If an identifier to the monitor with the browser window can be retrieved, that can be passed down to the native components which can be configured to display their UIs on that monitor. I have used FireBreath's getWindowPosition() method to get the rect coordinates of the plugin and used that info to identify the correct monitor in the Windows platform.
However, the coordinates returned on Mac always seem to be 0 (or 1), irrespective of the monitor on which the browser window currently resides. I understand that we have to configure an event model and a drawing model in order for this to work on Mac. I have tried the following event/drawing model combinations without much success:
1) Cocoa/CoreGraphics
2) Carbon/CoreGraphics
Any help in this regard is much appreciated. Also, please do share if there are other approaches to achieve the same. What I want to achieve is to identify the monitor on which the currently active browser window resides on Mac. I am unsure at this point, but it may be possible to achieve this at the Objective-C level (without any changes at the FireBreath level). Also, please note that I want to support the Safari, Firefox and Chrome browsers.
You won't like this answer, but simply put, you can't do that on Mac. The problem is that with CoreGraphics you are only given a CGContextRef to work with, and it doesn't know where it will be drawn. It was technically possible in older browsers to get an NSWindow by exploiting some internal implementation details, but in many browsers that's no longer possible, and it was never supported.
The other drawing models are the same; with CoreAnimation you have a CALayer, but it doesn't know which screen or monitor it is drawn to. I personally find it a bit annoying as well, but I do not know of any way to find out which monitor your plugin is rendered to, particularly since most browsers actually copy the buffer to something else and render it in a different process.
I did manage to come up with a workaround, and I am replying here for the completeness of the thread. As @taxilian explained, it is not possible to retrieve the plugin coordinates using the window reference. As an alternative approach, the JavaScript 'window' object has two properties, 'screenX' and 'screenY', that return the X and Y coordinates of the browser window relative to the screen. If the user has an extended monitor setup, these are the absolute coordinates with respect to the full extended screen. We can use these values to determine the monitor with the browser window (if the X coordinate is outside the bounds of the primary monitor's width, then the browser should essentially be on the extended monitor). We can retrieve DOM properties from FireBreath as explained in the following link:
http://www.firebreath.org/display/documentation/Invoking+methods+on+the+DOM
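To make the workaround concrete, here is a minimal sketch of the screen-detection logic on the JavaScript side. The plugin element id and the method called on it are hypothetical names for illustration; the same window properties can instead be read from the C++ side via the DOM API described in the link above. The check also assumes the primary monitor sits at the origin of the extended desktop:

function reportBrowserScreenPosition() {
    // screenX/screenY are absolute coordinates across the extended desktop.
    var x = window.screenX;
    var y = window.screenY;

    // If the window's X coordinate lies beyond the primary monitor's width,
    // the browser window is on the extended monitor (assuming the primary
    // monitor is at the origin of the virtual desktop).
    var onExtendedMonitor = x >= window.screen.width;

    // 'myFireBreathPlugin' and setBrowserScreenPosition() are hypothetical;
    // substitute your plugin's <object> id and your own JSAPI method.
    var plugin = document.getElementById('myFireBreathPlugin');
    plugin.setBrowserScreenPosition(x, y, onExtendedMonitor);
}

window.addEventListener('resize', reportBrowserScreenPosition);
reportBrowserScreenPosition();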