How to interactively create an edge on canvas between two nodes - cytoscape.js

Is it possible in cytoscape.js to create connections between nodes interactively, similar to how it is implemented in Factoid (http://webservice.baderlab.org:3000/)?
That is, can you interactively create an edge on the canvas between two nodes and then save it to the database?

The bundled edgehandles plugin is what you're looking for, though you'll also need the cxtmenu plugin or some other manual UI for starting the gesture (e.g. a mode + tap) to support touch devices. The plugins have their options documented at the top of their source, and they will be documented further in 2.1 once they are migrated to the jQuery plugin site -- those plugins are simply jQuery plugins that use the cy.js API, after all.
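As an illustration, here is a minimal sketch of wiring up edgehandles and persisting the drawn edge. How the plugin registers varies by version (the 2.x-era plugins are jQuery plugins); this assumes it registers as cy.edgehandles(), and the complete callback and /api/edges endpoint are assumptions, so check the options documented at the top of the plugin source for your version:

    // Minimal sketch: initialize cytoscape.js, enable edgehandles, and save
    // each completed edge. Callback/option names vary by plugin version.
    var cy = cytoscape({
      container: document.getElementById('cy'),
      elements: [
        { data: { id: 'a' } },
        { data: { id: 'b' } }
      ]
    });

    cy.edgehandles({
      // assumed callback fired when the user finishes drawing an edge
      complete: function (sourceNode, targetNode, addedEles) {
        // hypothetical endpoint for persisting the edge to your database
        $.post('/api/edges', {
          source: sourceNode.id(),
          target: targetNode.id()
        });
      }
    });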

Related

Different UI for Android/iOS in interpreted JS cross-platform tools

Some cross-platform tools (like Xamarin native and RubyMotion) allow the development of two separate views for Android and iOS, while keeping the business logic shared for both of them. Others (like Apache Cordova or Xamarin.Forms) share both UI and business layer, with the option to use platform-specific overrides when necessary.
What is the state of the interpreted JavaScript frameworks (NativeScript, React Native or Appcelerator)? Are they all focused on creating a single UI with platform overrides, or do they allow creating two separate views for each platform? For example, is it possible to create a view using Fragments on Android, but a different view on iOS (since Fragments do not exist there)?
Cordova uses a WebView, which means the GUI layer will be the same for both Android and iOS, but it can differ per device version. On Android, each device has its own Chromium version, which can break UI behaviour, so developers use Crosswalk to pin a fixed Chromium version (at the cost of an extra ~20 MB in your application).
By the way, Ionic, which builds on the Cordova architecture, mimics native behaviour per platform. For example, on Android tabs are located at the top, on iOS at the bottom.
On the other hand, Xamarin (C#), React Native (JS) and NativeScript (JS) call native APIs. They don't use a WebView but produce native UI.
For example, if you create a button, it will look different on each platform: the Material theme on Android, the iPhone theme on iOS.
Anyway, the bottom line is: everything depends on resources and time. If you want to build an application fast, with the same view everywhere, I would go with Ionic 2 + Angular 2 + Cordova.
If you have more time, go with React Native or NativeScript (whose documentation is still sparse) or Xamarin (C#).
React Native's slogan is "learn once, write everywhere." So you can choose what suits your needs; you can:
Share the UI between platforms.
Share only the business logic.
So the answer for React Native is yes: you can create separate UIs, or you can share one.
Since you are writing components, one way of separating this logic is to write component.android.js and component.ios.js, and the platform loads the appropriate one for you. Note that you can also do this programmatically, as in the sketch below.
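A minimal sketch of both approaches (the file names and style values here are just placeholders):

    // Platform-specific files: with component.ios.js and component.android.js
    // side by side, this import resolves to the right file on each platform.
    import Component from './component';

    // Programmatic branching with the Platform module:
    import { Platform, StyleSheet } from 'react-native';

    const styles = StyleSheet.create({
      header: Platform.select({
        ios: { paddingTop: 20 },    // placeholder iOS-only inset
        android: { paddingTop: 0 },
      }),
    });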
You can see this in action in the official F8 app, built by Facebook with React Native.

Google Map Integration in OS X Application

I would like to use Google Maps in my Mac application.
I found the Google Maps SDK for iOS, but not one for OS X.
I want to show two annotations and a line connecting them on a Google Map. The coordinates of both annotations are dynamic, based on the user's selection.
Below is the approach I have found that could work:
1. Call an API and pass the location coordinates for both annotations.
2. On the server side, an HTML page is generated using JavaScript, showing the two annotations and the line connecting them.
3. The API response returns the URL of that HTML page.
4. I show this page in a UIWebView.
I want to know if there is any other way I can achieve this.
I want to distribute the application outside the Mac App Store, and to distribute outside the store I need to sign the app with a Developer ID, which does not support Maps.
I didn't find anything related to this, which is why I created this thread.
Thanks in advance.
I recently ported the Mapbox iOS SDK over to OS X. It has a lot of the features of MapKit, but it’s open source and should also work in a developer-signed application such as yours. To use the Mapbox OS X SDK, download the latest release from the GitHub repository (look for releases beginning with “osx-”) and follow the instructions in README.md. An API reference is included.
I want to show two annotations and a line connecting them on a Google Map. The coordinates of both annotations are dynamic, based on the user's selection.
To display the annotations on-screen, you’ll need the MGLPointAnnotation and MGLPolyline classes. You can move the point annotations dynamically by setting their coordinate properties. The polyline, however, is immutable; to change its path, remove the existing polyline and add a new one with the new coordinates.
You will have to make it with WebKit and the Google Maps API.
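For example, the page loaded into the WebKit view could draw the two annotations and the connecting line with the Google Maps JavaScript API; a minimal sketch (the coordinates and element ID are placeholders, and you need your own API key):

    // Script inside the page loaded by the WebKit view; assumes the Google
    // Maps JavaScript API has been loaded with a valid API key.
    var pointA = { lat: 37.7749, lng: -122.4194 };  // placeholder coordinates
    var pointB = { lat: 34.0522, lng: -118.2437 };

    var map = new google.maps.Map(document.getElementById('map'), {
      center: pointA,
      zoom: 6
    });

    // the two annotations
    new google.maps.Marker({ position: pointA, map: map });
    new google.maps.Marker({ position: pointB, map: map });

    // the line connecting them
    new google.maps.Polyline({ path: [pointA, pointB], map: map });

Because the coordinates are dynamic, the native side can regenerate or update this page whenever the user's selection changes.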
MapKit is available in OS X 10.9 Mavericks: https://developer.apple.com/library/mac/documentation/MapKit/Reference/MapKit_Framework_Reference/index.html
There are of course many ways of hiding the fact that you're using WebKit but if they violate Apple's or Google's TOS then submission to the App Store won't be possible.
Hope this will be helpful!

How to apply page background images in Tabris, preferably using stylesheets, for iOS and/or Android devices

This is not clear to me from the documentation or from the current behavior I see in my app: the stylesheets work nicely in a web browser, but not in the mobile app.
What I am looking for is how to apply different background images (or at least colors) in our mobile app to the navigation page (the top-level pages list) and any other pages. We would like to apply different styles in place of our current, presumably default, style, but don't know how to do this. So at this point I do not know what I can ask our graphics designer to provide.
Any docs that I missed or examples I can look at?
Thanks,
Vincent
The styles you are using for the web are applied by RAP's theming. Currently, Tabris does not support theming. The only option you have at the moment is to use the SWT setBackgroundImage method on the widget itself. To behave differently than in the web, you could use RWT.getClient().getService( ClientDevice.class ).getPlatform() to distinguish between the mobile and web clients.

IBM Worklight 6.1 - Custom UI Pattern

I'm trying out the mobile patterns and have been trying to create my own custom pattern, which is now supposedly supported in Worklight 6.1.
When I tried creating a jQuery UI pattern, I ran into several issues:
1. The Rich Page Editor for pattern.html does not display the jQuery components correctly on the design page (e.g. a button is displayed as a link).
2. When I add a new page (into a jQuery hybrid app) based on the custom UI pattern, it does not create a new page. It only adds the content code into index.html, and I had to create the page myself.
Is this the correct behaviour?
I'm also having difficulty creating a Dojo UI pattern, as there are no Dojo components available on the palette when I open the dojo > pattern.html file.
Do I have to add the libraries and code manually (i.e. no drag-and-drop)?
Appreciate any pointers on this.
PS: I'm using Eclipse Juno R2
1) For jQuery-based patterns, you need to add a jQuery core file to the project alongside the jQuery Mobile ones; for example, append this one: http://code.jquery.com/jquery-1.10.2.js to your project, next to the jQuery Mobile JS file. This is simply because "UI Pattern" projects don't have this file available, but they need it to render a proper preview.
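For illustration, the includes at the top of pattern.html might then look like this (the file paths are assumptions):

    <!-- load jQuery core before the jQuery Mobile script so the Rich Page
         Editor preview can resolve jQuery widgets; paths are assumptions -->
    <script src="js/jquery-1.10.2.js"></script>
    <script src="js/jquery.mobile-1.3.1.min.js"></script>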
2) For Dojo patterns, there is still no official support (e.g. drag-and-drop), so even though you can modify pattern.html to get some "insertable" code, you may still need some additional tuning to get a valid pattern.

Identify the monitor with the browser window in FireBreath

I am using FireBreath to create a cross-browser plugin which makes use of some native libraries for the respective platforms (some .NET-based DLLs on Windows and Objective-C-based dylibs/frameworks on Mac). The native libraries display UI screens. To improve usability when the user has a multi-monitor/extended-desktop setup, I would like the native UIs to appear on the same screen the browser window is currently on.
If an identifier for the monitor containing the browser window can be retrieved, it can be passed down to the native components, which can be configured to display their UIs on that monitor. I have used FireBreath's getWindowPosition() method to get the rect coordinates of the plugin and used that information to identify the correct monitor on Windows.
However, the coordinates returned on Mac always seem to be 0 (or 1), irrespective of the monitor on which the browser window currently resides. I understand that we have to configure an event model and a drawing model for this to work on Mac. I have tried the following event/drawing model combinations without much success:
1) Cocoa/CoreGraphics
2) Carbon/CoreGraphics
Any help in this regard is much appreciated. Please also share if there are other approaches to achieving the same thing: identifying the monitor on which the currently active browser window resides on Mac. I am unsure at this point, but it may be possible to achieve this at the Objective-C level (without any changes at the FireBreath level). Also, please note that I want to support the Safari, Firefox and Chrome browsers.
You won't like this answer, but simply put, you can't do that on Mac. The problem is that with CoreGraphics you are only given a CGContextRef to work with, and it doesn't know where it will be drawn. It was technically possible in older browsers to get an NSWindow by exploiting some internal implementation details, but in many browsers that's no longer possible, and it was never supported.
The other drawing models are the same; with CoreAnimation you have a CALayer, but it doesn't know which screen or monitor it is drawn on. I personally find this a bit annoying as well, but I do not know of any way to find out which monitor your plugin is rendered on, particularly since most browsers actually copy the buffer to something else and render it in a different process.
I did manage to come up with a workaround, and I am replying here for the completeness of the thread. As @taxilian explained, it is not possible to retrieve the plugin coordinates using the window reference. As an alternative approach, the JavaScript window object has two properties, screenX and screenY, that return the X and Y coordinates of the browser window relative to the screen. If the user has an extended-monitor setup, these are absolute coordinates with respect to the full extended screen. We can use these values to determine which monitor the browser window is on (if the X coordinate is outside the bounds of the primary monitor's width, the browser should essentially be on the extended monitor). We can retrieve DOM properties from FireBreath as explained in the following link:
http://www.firebreath.org/display/documentation/Invoking+methods+on+the+DOM
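As a rough sketch of that check in page JavaScript (assuming the primary monitor is leftmost and its width is supplied, e.g. from the native side):

    // Rough sketch of the monitor check described above; primaryWidth is
    // assumed to be supplied by native code (the primary display's pixel width).
    function isOnExtendedMonitor(primaryWidth) {
      // window.screenX is the browser window's X coordinate in the full
      // virtual-screen coordinate space spanning all monitors
      return window.screenX >= primaryWidth || window.screenX < 0;
    }

The same values can be read from the plugin through FireBreath's DOM API (see the link above) and passed to the native UI code.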