How to pass data between pages through worklight client API - ibm-mobilefirst

I want to invoke a procedure on one page and use the response on another page. The response is only used by that next page, so I think JSONStore is not suited for this. Should I define a global variable?
Is there any code sample showing how to do this? Thanks for your help.

I presume by pages you mean different HTML files. If so, that is not recommended; Worklight is intended for single-page applications. There are no code samples that show how to do that.
I would recommend having a single HTML page and using something like jQuery.load to inject new HTML / DOM elements. By dynamically injecting new HTML, your single/main HTML file shouldn't get too big, and you can destroy (i.e. remove from memory / the DOM) unused DOM elements. Searching on Google for page fragments and HTML templates should turn up examples. The idea is that you don't lose the JavaScript context.
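For example, a rough sketch of that idea (the adapter name, procedure name, container id, and fragment path below are placeholders, not anything from your app): keep the procedure response in a plain JavaScript object; because the main page never reloads, the object is still there when the next fragment is shown.

var appState = {};

function invokeAndShowDetails() {
    WL.Client.invokeProcedure({
        adapter: 'MyAdapter',      // placeholder adapter name
        procedure: 'getData',      // placeholder procedure name
        parameters: []
    }, {
        onSuccess: function (response) {
            // Keep the result in memory; the JavaScript context is never lost.
            appState.lastResult = response.invocationResult;
            // Swap in the next "page" as an HTML fragment, then render the data.
            $('#pageContainer').load('fragments/details.html', function () {
                $('#details').text(JSON.stringify(appState.lastResult));
            });
        },
        onFailure: function (error) {
            console.log('Procedure invocation failed', error);
        }
    });
}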
Maybe you can get away with calling init again to re-initialize JSONStore on every new HTML page (it won't delete any of the data, it just gives you access again) and then use get to access the JSONStore collections and perform operations such as find.
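Roughly like this (the collection name and search field are placeholders), assuming the collection was already populated elsewhere in the app:

var collections = {
    people: {
        searchFields: { name: 'string' }   // placeholder search field
    }
};

// Calling init again on the new page re-opens the collection without deleting its data.
WL.JSONStore.init(collections)
    .then(function () {
        return WL.JSONStore.get('people').findAll();
    })
    .then(function (results) {
        console.log('Documents currently stored:', results);
    })
    .fail(function (error) {
        console.log('JSONStore error:', error);
    });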

Related

How to render a Jade block (section) using links?

I was hoping someone had insight on this basic approach. Sample scenario:
I have a dashboard template with menu links, a(href="/page"), and I want to click the links to render a different section/view within the template. I used block content... but does it need a specific route?
If I understand correctly, you want to update the content of the page when a link is clicked, without the page getting refreshed.
In that case, no, you can't do it using block content.
The purpose of block content is to apply inheritance in your templates.
The typical use of block content is to create a layout and then create more specific pages from that layout. This is what the official documentation says.
The reason you cannot do it is that Jade is a server-side templating library. It resolves block content on the server. Once rendered on the client, the HTML loses all the information that was specific to Jade (which is expected, because it is just HTML after all).
What you can do here is:
Create a /page.jade and make an AJAX call to a service. That service should return an already-compiled HTML string. Since you are using Jade, you can easily use jade.compile(source, options) to template / generate the HTML.
Jade API documentation here
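As a rough sketch of such a service (the route path and inline template are made up for illustration), assuming Express and Jade on the server:

var express = require('express');
var jade = require('jade');
var app = express();

// jade.compile returns a reusable function that takes locals and returns an HTML string.
var renderSection = jade.compile('h1 #{title}\np #{body}');

app.get('/page', function (req, res) {
    // Send already-rendered HTML so the client can inject it straight into the DOM.
    res.send(renderSection({ title: 'Dashboard section', body: 'Loaded via AJAX' }));
});

app.listen(3000);

The client-side click handler then fetches /page with AJAX and replaces the content area with the returned HTML, without a full page refresh.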

svg-edit usage reference: where can I find it?

I found this tool (https://code.google.com/p/svg-edit/) very useful, but is there any reference for this project that explains how to integrate it properly into your application instead of simply adding it to an iframe?
For example, I want to retrieve the SVG code and save it in a variable or something like that.
It's possible but might be a little tricky to strip out the required resources and load them in your own application while making sure that there aren't any conflicts. This is actually something I plan on doing for one of my own projects.
Do you need to do this though? You can talk to the iframe from your application pretty easily. For example, to get the SVG content you would use this (assuming you only have 1 iframe on the page):
var svgedit = window.frames[0];                          // the svg-edit iframe's window
var svgSource = svgedit.svgCanvas.svgCanvasToString();   // serialize the current drawing to an SVG string

Efficient way to create pages from multiple similar type links

I have a page with about 30 links, and those links lead to pages that are similar except for a few pieces of content (there are also pictures). Is there an efficient way to do that without repeating the code and repeatedly nesting it? Thank you.
Using plain HTML you won't be able to do this.
The most straightforward way to do it, I think, is using server-side scripting to implement a rendering template. You could then have a default "main" template with everything those 30 pages have in common and then in each of those pages use the main template and load the custom content.
So if you want to modify something in the main template you'd only have to modify the main.html (or whatever you called it) page and not each of the 30 pages.
See this.
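As an illustration of that idea (Node.js with Express and Jade are chosen arbitrarily here; PHP includes or any other server-side templating works the same way), the shared markup lives in one layout file and each of the 30 pages only supplies its own content block:

// layout.jade (the shared "main" template):
//   doctype html
//   html
//     head
//       title My site
//     body
//       block content
//
// page1.jade (one of the 30 pages; only the custom part lives here):
//   extends layout
//   block content
//     h1 Page 1
//     img(src="/img/page1.jpg")

var express = require('express');
var app = express();
app.set('view engine', 'jade');

app.get('/page1', function (req, res) {
    res.render('page1');   // Jade merges page1's block content into layout.jade
});

app.listen(3000);

Editing layout.jade then updates all 30 pages at once.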

SEO: Can dynamically generated links be crawled?

I have a page containing <div> tags with onclick="" code that makes an AJAX request for JSON data and then iterates through the results to build links (<a />) that are appended to the page. These links do not exist anywhere else on my website. How can I make these dynamically generated links crawlable?
My initial thought was to turn the <div> tags into <a> tags with href="#", but with my limited knowledge of how typical crawlers work, I don't think this would solve my problem, since the "#" would be what's recognized by the crawler, and not necessarily the dynamically generated output. That's besides the point that I don't want the scroll position to be altered at all, which would also rule out giving the <a> tag an id and having it reference itself.
Do I have any options aside from making a new page containing all of the links I need crawled? Thanks.
As a general rule, content that is created or made available through JavaScript cannot be found or indexed by search engines. Google does support crawlable AJAX, but using it as the only means of accessing your content is bad for accessibility. Also, other search engines can't get to that content, which is also not a good thing. Basically, crawlable AJAX is a bad idea.
You should always make your content available without requiring JavaScript to get it. Then you can improve your site by adding JavaScript to make getting the content faster or easier. This is called Progressive Enhancement and is how good websites are built.
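A rough sketch of that progressive-enhancement approach (the element ids and URLs are placeholders): render the links as ordinary HTML on the server so crawlers can follow them, and let JavaScript only enhance the behaviour for users.

// Server-rendered markup, crawlable without any JavaScript:
//   <ul id="product-links">
//     <li><a href="/products/1">Product 1</a></li>
//     <li><a href="/products/2">Product 2</a></li>
//   </ul>

// Enhancement layer: intercept clicks and load the target content in place.
$('#product-links a').on('click', function (event) {
    event.preventDefault();
    $('#content').load(this.href + ' #content > *');   // jQuery fragment loading
});

If JavaScript is unavailable, or the visitor is a crawler, the plain links still work.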

how to read/parse dynamically generated web content?

I need to find a way to write a program (in any language) that will connect to a website and read dynamically generated data from the website.
Note that it's dynamically generated: it's not enough to get the source HTML, because the data I'm interested in is generated via JavaScript that references back-end code. So when I view the page source, I can't see the data. (For example, go to Google and do a search. Check the source code on the search results page. Very little of the data your browser is displaying is reflected in the source; most of it is dynamically generated. I need some way to access this data.)
Pick a language and environment that includes an HTML renderer (e.g. .NET and the WebBrowser control). Use the HTML renderer to get the URL and produce an HTML DOM in memory (making sure that scripting is enabled). Read the contents of the HTML DOM after the renderer has done its work.
Example (you'll need to do this inside a System.Windows.Forms.Form-derived class):
WebBrowser browser = new WebBrowser();
// Navigate is asynchronous, so read the DOM only after the page has finished loading.
browser.DocumentCompleted += (sender, e) =>
{
    HtmlDocument document = browser.Document;
    // extract what you want from the document
};
browser.Navigate("http://www.google.com");
I used to have a Perl program that accessed Mapguide.com to get driving directions from one location to another. I parsed the returned page and saved the results to a database. If the source never changes its format, that works fine; the problem is that the source format changes often, so your parser also needs to change.
A simple thought: if we're talking about AJAX, you can instead look up the URLs the page requests for its dynamic data and fetch those directly. Then you can use the JavaScript on the page you're talking about to reformat the result.
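A rough sketch of that approach (the endpoint URL and response shape are made up): once you've spotted the request in your browser's network tools, fetch that URL directly and skip the HTML entirely.

// Node.js example; the endpoint is a placeholder.
var https = require('https');

https.get('https://example.com/api/search?q=test', function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
        var data = JSON.parse(body);   // the dynamic data, already structured
        console.log(data);
    });
});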
If you have Firefox/Greasemonkey, making a DOM dumper should be a simple matter.