How to pass data between different pages in Windows Phone 8.1 - XAML

Passing Data from Page to Page for Windows Phone 8.1
I found this great article:
http://www.windowsapptutorials.com/windows-phone/how-to-pass-data-between-different-pages-in-windows-phone-application/
and I understood it well. A few questions came up after reading it:
[1] Which method is better, and in which scenarios?
[2] What are the benefits of each of the three methods?

Small hint: Please state if you are using Silverlight or WinRT, as it makes a big difference.
I assume you are using Silverlight here.
Like demas already stated: Global variables are almost never a good idea.
Recommendation: always use the query string, and only pass IDs in it.
This means: keep your data in some kind of storage and always read it from there on any page.
If you want to pass complex objects, put them in your storage, tell the new page the ID, and on the new page load the object from the storage.
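A minimal sketch of that pattern in Silverlight (the page name, query key, and ItemStorage repository are hypothetical):

// On the source page: pass only the ID in the query string
NavigationService.Navigate(new Uri("/DetailPage.xaml?id=" + item.Id, UriKind.Relative));

// On the target page: read the ID back and load the data from storage
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);
    string id;
    if (NavigationContext.QueryString.TryGetValue("id", out id))
    {
        // ItemStorage is a hypothetical app-level repository
        DataContext = ItemStorage.Load(id);
    }
}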
If your app gets terminated (tombstoned) in the background and is relaunched on one of your detail pages, your global variables may well be empty.
It also improves maintainability: all data accessed by a page is loaded in that page's code-behind/viewmodel, so you don't have to check other parts of the app to find out where the data comes from.
Further hint:
It helped me a lot to think of a Silverlight app like a "web app": the pages are individual pages and the viewmodels are the database servers. There is no way to pass data between these pages other than the query string.

A public property in App.xaml.cs and global variables cause namespace pollution and make the application less testable, so I prefer to use the query string.
On the other hand, sometimes I need to pass a complex object or even collections of complex objects, and in that case a public property in App.xaml.cs is preferable in my opinion.
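For illustration, a sketch of that approach (the Order type and property name are hypothetical):

// In App.xaml.cs: a property holding the complex object
public partial class App : Application
{
    public Order SelectedOrder { get; set; }
}

// Before navigating:
((App)Application.Current).SelectedOrder = order;

// On the target page:
Order order = ((App)Application.Current).SelectedOrder;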


jQuery DataTables: How can I explicitly set the table instance name / table ID to use with state saving?

Background:
I'm using DataTables in conjunction with a JS library called "Turbolinks", which basically turns your application into a Single Page Application (SPA) without all the overhead of using a true client-side framework. It is extremely useful for Ruby on Rails application performance.
There are a couple of headaches it introduces, though - one is compatibility with DataTables. I've got it working pretty well by destroying any DataTable on a Turbolinks navigation and then re-initializing it on the Turbolinks page-load event. This works well and seems to be the generally accepted best practice for getting DataTables to work with Turbolinks.
Question:
One of the last features / finishing touches I'm trying to add to some of my applications is DataTable state saving. The issue I'm facing is that every time a table is destroyed/re-initialized on a page navigation, the... I'm not quite sure what to call it, but from inspecting the settings object in the stateSaveCallback it looks like it's the sInstance and/or the sTableId:
DataTables_Table_0
Then the localStorage key gets set as:
DataTables_DataTables_Table_0_/current_path: "{data: data}"
where current_path is whatever path/page you're on.
Then, when it gets re-initialized upon returning to the page, it gets set as DataTables_Table_1, and so on and so forth - so the state never gets loaded correctly.
Is there a way to override that ID (or some way to set its name in the stateSaveCallback / stateLoadCallback) so that it doesn't increment the trailing '0', '1', etc.? That way, when the table is re-initialized, it would pull the saved state from just DataTables_Table/current_path?
The answer is to simply give the table an ID! Then DataTables won't assign it its own ID with the incrementing number, and the stateSave option just works.
Also, the destroy/re-init actually causes the server to get hit twice in the case of an AJAX table.
The better way is to disable the Turbolinks cache for any index pages with DataTables; otherwise you'll end up making two requests to the server when only one is needed.

Asynchronous Pluggable Protocol for CID: (email), how to handle duplicate URLs

This is somewhat a duplicate of this question, but that question has no (valid) answer and is 1.5 years old, so I'm asking my own in the hope that people have more info now.
If you are using multiple instances of a WebBrowser control, MSHTML, IHTMLDocument, or whatever... from inside the APP instance, mostly IInternetProtocol::Start, is there a way to know which instance is loading the resource? Or is there a way to use a different APP for each instance of the control, maybe by providing one via IDocHostUIHandler or ICustomDoc or otherwise? I'm currently using IInternetSession::RegisterNameSpace to make it process wide.
Optional reading below, don't feel you need to read it unless above isn't clear.
I'm working on a legacy (Win32 C++) email client that uses the MS ActiveX WebBrowser control (MSHTML or other names it goes by) to display HTML emails. It was saving everything to temp files, updating the cid: URLs, and then having the control load that. Now I want to do it the correct way, using APP. I've got it all working with some test code that just uses static variables/globals and loads one email.
My problem now is, the app might have several instances of the control all loading different emails (and other stuff) at the same time... not really multiple threads so much, just the asynchronous nature of the control. I can give each instance of the control a unique URL to load the email, say, cid:email-GUID, and then in my APP code I can use that URL to know which email to load. However, when it comes to loading any content inside the email, like attached images using src="cid:", those will not always be unique so I will not always know which image it is, for which email. I'd like to avoid having to modify the URLs of the HTML before displaying it (I'm doing that now for the temp file thing, but want to do it a better way).
IInternetBindInfo::GetBindString can return the referrer, BINDSTRING_XDR_ORIGIN, or the root URL, BINDSTRING_ROOTDOC_URL, but those require newer versions of IE and my legacy app must support older XP installs that might even have IE6 or IE7, so I'd rather not use these.
Tagged as TWebBrowser because that is actually what I'm using (Borland Builder 6 C++), but don't need answers specific to that platform.
As the Asynchronous Pluggable Protocol handler is very low-level, you cannot attach handlers individually to different rendering controls.
Here is a way to get the referrer:
Obtain BINDSTRING_HEADERS
Extract the referrer by parsing the line Referer: http://....
See also How can I add an extra http header using IHTTPNegotiate?
Here is another crazy way:
Create another Asynchronous Pluggable Protocol Handler by calling RegisterMimeFilter.
Monitor text/plain and text/html
Scan the incoming email source (the content arrives incrementally) and parse and store all image links in a dictionary.
In your NameSpaceHandler you can then use this dictionary to resolve the reference of any image resource.

Serve a seaside page/component without creating a session

We have a Seaside Application in place that creates a session and handles user login etc. So we're happy with that.
But we'd like to have the ability to serve a few pages using a fixed url. This is not a problem using #initialRequest: and delegating to a certain component depending on the url. What I'd like to avoid, however, is that some of these pages create a new session and start up all the machinery that's coming with it.
Any ideas?
Seaside 2
You could create a WASession (or WAMain) subclass which will be used if the request was static. Then in that session (or main) you could override those methods that do too much for your liking.
Seaside 3
You could use the new filter mechanism. If I recall correctly you can take control of the request pretty much at any time. That should give you enough leverage to do what you want.
Or if you don't need session state, just subclass WARequestHandler and register an instance somewhere in your handler tree (presumably in a WADispatcher).
There's some messiness currently if you want to use a Canvas for rendering but there should be some examples in the image.

Why is my Navigation Properties empty in Entity Framework 4?

The code:
public ChatMessage[] GetAllMessages(int chatRoomId)
{
    using (ChatModelContainer context = new ChatModelContainer(CS))
    {
        //var temp = context.ChatMessages.ToArray();
        ChatRoom cr = context.ChatRooms.FirstOrDefault(c => c.Id == chatRoomId);
        if (cr == null) return null;
        return cr.ChatMessages.ToArray();
    }
}
The problem:
The method (part of a WCF service) returns an empty array. If I uncomment the commented line, it starts working as expected. I have tried turning off lazy loading, but it didn't help.
Also, when it works, I get ChatMessages with the reference to ChatRoom populated, but not the ChatParticipant. Both are referenced by the ChatMessage entity in the schema, with both an Id and a navigation property. The Ids are set and point to the right entities, but on the client side only the ChatRoom reference has been populated.
Related questions:
Is an array the preferred way to return collections of EF entities like this?
When making a change in my model (edmx), I'm required to run the "Generate Database from Model..." option before I can run context.CreateDatabase(). Why? I get an error message pointing to old SSDL, but I can't find where the SSDL is stored. Is it created when I run this "Generate Database..." option?
Is it safe to return entire entity graphs to the client? I've read some about "circular reference exceptions", but is this fixed in EF4?
How and when are references populated in EF4? If I have lazy loading turned on, I suspect only entities I touch are populated? But with lazy loading turned off, should the entire graph always be populated?
Are there any drawbacks to using self-tracking entities over ordinary entities in EF4? I don't need self-tracking right now, but I might later. Can I upgrade easily, or should I start with self-tracking from the start?
Why can't I use entity keys of type string?
Each of your questions needs a separate answer, but I'll try to answer them as briefly as possible.
First of all, in the code sample you provided, you get a ChatRoom object and then try to access a related object that is not included in your query (ChatMessages). If lazy loading is turned off as you had suggested, then you will need the Include("ChatMessages") call in your query, so your LINQ query should look like this:
ChatRoom cr = context.ChatRooms.Include("ChatMessages").FirstOrDefault(c => c.Id == chatRoomId);
Please ensure that your connection string is in your config file as well.
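Putting that together, the method from the question might look like this (a sketch, assuming lazy loading is off; it also returns a List, as discussed below):

public List<ChatMessage> GetAllMessages(int chatRoomId)
{
    using (ChatModelContainer context = new ChatModelContainer(CS))
    {
        // Eagerly load the related messages so they are populated
        // before the context is disposed
        ChatRoom cr = context.ChatRooms
            .Include("ChatMessages")
            .FirstOrDefault(c => c.Id == chatRoomId);
        return cr == null ? null : cr.ChatMessages.ToList();
    }
}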
For the related questions:
You can return collections in any way you choose - I have typically done them in a List object (and I think that's the common way), but you could use arrays if you want. To return as a list, use the .ToList() method call on your query.
I don't understand what you're trying to do here - are you using code to create your database from your EDMX file or something? I've typically used a database-first approach, so I create my tables etc., then update my EDMX from the database. Even if you generate your DB from your model, you shouldn't have to run CreateDatabase in code; you should be able to run the generated script against your DB. If you are using code-only, then you need to dump the EDMX file.
You can generally return entity graphs to the client; they should be handled OK.
EF4 should only populate what you need. If you use lazy loading, it will automatically load things that you did not include in your LINQ query when you reference them and execute the query (e.g. do a ToList() operation); see the sketch after these answers. This won't work so well if your client is across a physical boundary (e.g. a service boundary), obviously :) If you don't use lazy loading, it will load what you tell it to in your query, and that is all.
Self-tracking entities are used for n-tier apps, where objects have to be passed across physical boundaries (e.g. services). They come with an overhead of generated code for each object to keep track of its changes; they also generate POCO objects which are not dependent on EF4 (but obviously contain generated code that makes the tracked changes work with the EF4 tracker). I'd say it depends on your usage: if you're building a small app that's quite self-contained, and don't really care about separation for testability without the infrastructure in place, then you don't need self-tracking entities. Only use framework features when you need them - if you're not writing an enterprise-scale application (enterprise doesn't have to be big, but something scalable, highly testable, high quality, etc.), there's no need to go for self-tracking POCOs.
I haven't tried but you should be able to do that - that would be a candidate for a separate question if you can't get it to work :)
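To illustrate the lazy-loading point above, a sketch (assuming the generated EF4 entities support lazy loading):

using (var context = new ChatModelContainer(CS))
{
    context.ContextOptions.LazyLoadingEnabled = true;
    ChatRoom room = context.ChatRooms.First();   // query 1: loads the room only
    int count = room.ChatMessages.Count;         // query 2: fires here, on first touch

    // With lazy loading disabled, room.ChatMessages would stay empty unless
    // you used Include("ChatMessages") or context.LoadProperty(...).
}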

Class design for serialization - ideas or patterns?

Let me begin with an illustrative example (assume the implementation is in a statically typed language such as Java or C#).
Assume that you are building a content management system (CMS) or something similar. The data is hierarchically organised into Folders. Each folder has a collection of children; a child may be a Page or a Folder. All items are stored within a root folder. No cycles are allowed. We have an acyclic graph.
The system will have a remote API and instances of Folder and Page must be serialized / de-serialized across the network. With a typical implementation of folder, in which a folder's children are a List, serialization of the root node would send the entire graph. This is unacceptable for obvious reasons.
I am interested to hear how people have solved this problem in the past.
I have two potential suggestions:
Navigation by query: change the domain model so that the Folder class contains only a list of IDs, one for each child. To access a child we must query for it. Serialization is now trivial, since the graph ends at a well-defined point. The major downside is that we lose type safety - an ID could refer to something other than a folder or page. A sketch follows this list.
Stop and re-attach: during serialization, stop whenever we detect a reference to a Folder or Page and send the ID instead. When de-serializing, we must then look up the corresponding object for each ID and re-attach it at the relevant position in the nascent object.
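To make the first suggestion concrete, a minimal sketch in C# (names are hypothetical):

// Folders hold only the IDs of their children, so the graph
// ends at a well-defined point for serialization.
public class Folder
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public List<Guid> ChildIds { get; set; }
}

public class Page
{
    public Guid Id { get; set; }
    public string Title { get; set; }
}

// To descend, the client must query for each child explicitly,
// e.g. repository.GetNode(folder.ChildIds[0]); note that the ID
// alone does not say whether the result is a Folder or a Page.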
I don't know what kind of API you are trying to build, but your suggestion #1 sounds like it is close to what is recommended for REST style services and APIs. Basically, a Folder object would contain a list of URLs to its children.
The navigation-by-query solution was used for NFS. Reading through your question, it looks to me as if you're implementing a kind of file system yourself.
If you're looking specifically into sending objects over the network, there is always CORBA. Aside from that there is DCOM and the newer WCF. But wait, there is more, like RMI. Furthermore there are Web Services. I'll stop here now.
Suppose you model the whole tree with every element being a Node, with specialisations of Node being Folder and, umm, Leaf. You have a "root" Node. Nodes have the methods
canHaveChildren()
getChildren()
Leaf nodes have the obvious behaviours (they never even need to hit the network).
A Folder's getChildren() fetches the next set of nodes.
I did devise a system with Restful services along these lines. Seemed to be reasonably easy to program to.
I would not do it by the navigation-by-query method, simply because I would like to stick with the domain model where folders contain folders or pages.
Customizing the serialization might also be tricky, bug-prone, and difficult to change/understand.
I would suggest that you introduce an object like FolderBrowser in your model, which takes an ID and gives you a list of the folder's contents. That will make your service operations simpler.
Cheers,
Unmesh
The classical solution is probably to use the proxy pattern, where some of the graph is sent over the network and some of the folders are replaced by proxies that will not have their lists of children populated until they are queried. A round trip to the server takes a significant amount of time, and it will probably result in too many requests if all folders are proxies (this would yield a new request each time the contents of a folder are inspected), so you want some trade-off between the size of each chunk of data and the number of server requests needed in a typical scenario. This is of course application-specific, but sending the contents of all child folders down to, for instance, depth 2 might be a useful strategy...
Long story short: what will probably work best is your solution #1, with the exception that you want to send more than one folder at a time because of the overhead of a round trip to the server...
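A rough sketch of that proxy idea (the interfaces and the remote service are hypothetical):

public interface INode
{
    string Name { get; }
    IList<INode> GetChildren();
}

// Sent over the wire in place of a fully populated folder.
public class FolderProxy : INode
{
    private readonly Guid id;
    private readonly IRemoteService service;  // hypothetical remote API
    private IList<INode> children;            // cached after the first fetch

    public FolderProxy(Guid id, string name, IRemoteService service)
    {
        this.id = id;
        this.Name = name;
        this.service = service;
    }

    public string Name { get; private set; }

    public IList<INode> GetChildren()
    {
        // Lazily fetch the children (one round trip) on first access.
        if (children == null)
            children = service.FetchChildren(id);
        return children;
    }
}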