Sharing data between different AppDomains

I'm trying to send data from newDomain back to currentDomain.
I used DoCallBack to load a list of .dll files and extract file/assembly information into a Dictionary.
Then I tried to send that key/value data back to currentDomain.
It's my first time using AppDomains, so I only found a rough way: SetData/GetData.
Using that requires a lot of conversion, and it can throw exceptions in a variety of situations.
If I could send the Dictionary directly, that would be a much better way of doing it.
Please, let me know~
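A minimal sketch of one way to do this (the method name Inspect and the key "AssemblyInfo" are illustrative, not from the question): since Dictionary<string, string> is [Serializable], the whole dictionary can cross the AppDomain boundary in a single GetData call, with no per-entry conversion.

    using System;
    using System.Collections.Generic;

    public static class Program
    {
        // Runs inside the new AppDomain via DoCallBack (must be a static method).
        static void Inspect()
        {
            var info = new Dictionary<string, string>();
            foreach (var asm in AppDomain.CurrentDomain.GetAssemblies())
                info[asm.GetName().Name] = asm.GetName().Version.ToString();

            AppDomain.CurrentDomain.SetData("AssemblyInfo", info);
        }

        public static void Main()
        {
            var newDomain = AppDomain.CreateDomain("newDomain");
            newDomain.DoCallBack(Inspect);

            // Dictionary<string, string> is [Serializable], so GetData hands
            // the whole dictionary back across the boundary in one call.
            var info = (Dictionary<string, string>)newDomain.GetData("AssemblyInfo");
            foreach (var pair in info)
                Console.WriteLine("{0} = {1}", pair.Key, pair.Value);

            AppDomain.Unload(newDomain);
        }
    }

Alternatively, a class deriving from MarshalByRefObject can expose a method that returns the dictionary, which avoids the string-keyed property store entirely.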

Passing Data from Page to Page for Windows Phone 8.1

I found this great article:
http://www.windowsapptutorials.com/windows-phone/how-to-pass-data-between-different-pages-in-windows-phone-application/
and I understood it very well.
A few questions came up after reading it:
[1] Which method is better, and in which scenarios?
[2] What are the benefits of each of the three methods?
Small hint: Please state if you are using Silverlight or WinRT, as it makes a big difference.
I assume you are using Silverlight here.
Like demas already stated: Global variables are almost never a good idea.
Recommendation: Always use the query string, and only ever pass IDs in it.
This means: keep your data in some kind of storage and always read it from there on any page.
If you want to pass complex objects, put them into your storage, tell the new page the ID, and load them from storage on the new page.
If your app gets terminated (tombstoned) in the background and is relaunched on one of your detail pages, your global variables may well be empty.
It also improves maintainability: all data accessed by a page is loaded in that page's code-behind/viewmodel; you don't have to check other parts of the app to find out where the data comes from.
Further hint:
It helped me a lot to think of a Silverlight app like a "web app": the pages are individual pages and the viewmodels are the database servers. There is no way to pass data between these pages other than the query string.
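As a rough sketch of that approach on Windows Phone Silverlight (ItemStore is a hypothetical storage class standing in for whatever persistence you use):

    using System;
    using System.Windows.Navigation;
    using Microsoft.Phone.Controls;

    public partial class DetailPage : PhoneApplicationPage
    {
        protected override void OnNavigatedTo(NavigationEventArgs e)
        {
            base.OnNavigatedTo(e);

            // Only the ID travels in the query string; the data lives in storage.
            string id;
            if (NavigationContext.QueryString.TryGetValue("id", out id))
                DataContext = ItemStore.Load(id);
        }
    }

    // Hypothetical storage stand-in (IsolatedStorage, a database, ...).
    public static class ItemStore
    {
        public static object Load(string id) { return null; /* look up by ID */ }
    }

    // On the source page:
    //   NavigationService.Navigate(new Uri("/DetailPage.xaml?id=42", UriKind.Relative));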
Public properties in App.xaml.cs and global variables cause namespace pollution and make the application less testable, so I prefer to use the query string.
On the other hand, sometimes I need to pass a complex object or even collections of complex objects, and in that case a public property in App.xaml.cs is preferable in my opinion.

How to send objects over TCP efficiently

Okay, so my goal is to build an easy-to-use protocol for sending data over TCP. Basically, it would send a message and an object (of unknown type) over TCP. Sending should require only one method call, and receiving should require only one as well.
So this is how I was thinking of formatting the "message":
length_of_message - "A string that is a message" - length_of_object - object
length_of_message would be a fixed number of bytes, as would length_of_object.
The actual message string and the actual object would be of variable length.
If the actual class of the object isn't known in advance, could I just declare it as a "generic object" somehow, get its class name from that generic object, and have the message tell the receiver what to do with the object?
It would be simple if it were a constant object type, but I want to be able to use one send function and one receive function for every object that needs to be sent/received.
Any suggestions?
Thanks,
Andrew
Make sure you aren't reinventing the wheel (unless doing so is your primary goal).
With that in mind, consider:
• Implement and use the NSCoding protocol. It allows for the efficient archival of complexly connected object graphs, including cycles.
• Instead of raw TCP, use HTTP. While it adds a bit of overhead in the headers, the body can be straight encoded data. More importantly, HTTP is ubiquitous. It routes through just about anything, whereas other protocols might be blocked (think proxy servers).
• Via HTTP, you can leverage compression. If one side of your communication pipe is an existing web server of some kind, it probably already supports gzip'd communication. Compressing an NSData (that would be the result of NSCoding) is trivial.
• Alternatively, stick with straight plists.
Unless you truly have some requirement that makes the above unworkable, you are likely better off leveraging those technologies instead of rolling a new one.
With that said, what you propose is fine. I would add, possibly, a structure like:
[HEADER][MSGID][LEN][TYPE][DATA of len][POST]
where POST is a known sequence of bytes that the receiver can verify to make sure all the data was received correctly. Or you could go whole hog and integrate a checksum. Or sub-pieces could be repeated as needed (e.g. [LEN][TYPE][DATA] over and over).
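The question doesn't name a platform (NSCoding and plists are the Cocoa options above), so here is a minimal sketch of the [LEN][TYPE][DATA] framing in C#; the receiver dispatches on the type tag, which is the "message tells the receiver what to do with the object" idea. It works over any Stream, e.g. a NetworkStream from a TcpClient:

    using System;
    using System.IO;

    static class Framing
    {
        // Writes one frame: 4-byte big-endian length, 1-byte type tag, payload.
        public static void WriteFrame(Stream s, byte type, byte[] payload)
        {
            byte[] len = BitConverter.GetBytes(payload.Length);
            if (BitConverter.IsLittleEndian) Array.Reverse(len); // network byte order
            s.Write(len, 0, 4);
            s.WriteByte(type);
            s.Write(payload, 0, payload.Length);
        }

        // Reads one frame; returns the type tag and fills the payload.
        public static byte ReadFrame(Stream s, out byte[] payload)
        {
            byte[] len = ReadExactly(s, 4);
            if (BitConverter.IsLittleEndian) Array.Reverse(len);
            int count = BitConverter.ToInt32(len, 0);
            int type = s.ReadByte();
            if (type < 0) throw new EndOfStreamException("connection closed mid-frame");
            payload = ReadExactly(s, count);
            return (byte)type;
        }

        // TCP is a byte stream: a single Read may return fewer bytes than asked for.
        static byte[] ReadExactly(Stream s, int count)
        {
            byte[] buffer = new byte[count];
            int offset = 0;
            while (offset < count)
            {
                int read = s.Read(buffer, offset, count - offset);
                if (read == 0) throw new EndOfStreamException("connection closed mid-frame");
                offset += read;
            }
            return buffer;
        }
    }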

Using Documentum DQL to get the contents of all users' workflows

I know almost nothing about Documentum, so there are probably omissions in the information you need to answer my questions. But I'm going to try, anyway...
We use Documentum (obviously). Within Documentum, users can create workflows. These workflows contain ordered lists of services that are used to process data. So, we may have ServiceA, ServiceB, ServiceC, ServiceD, and ServiceE, and a user can create a workflow that says to process the data using, in order: ServiceC, ServiceA, and ServiceB. Another user's list might be: ServiceA, ServiceD, ServiceE.
I've been asked to find a way to get a list containing the ID/name of each user, the user's workflow ID (name?), and the items within each workflow. From what I've read here on StackOverflow and elsewhere, it looks like this is possible via DQL.
And if I have the DQL, it turns out this will be simple to do using interfaces we've already built. If it's too complex, I'll need to write Java and use the API. I'd prefer the DQL. :-)
So, can someone here provide me with a pointer to a reference on DQL, and perhaps some pointers on what to look at/for?
You may need more than one DQL query. However, I would strongly recommend writing some DFC code and iterating over the results.
I would suggest having a look at the Documentum Content Server Object Reference to find out more about the attributes of the dm_workflow type (and, of course, related types like dmi_workitem, dmc_workqueue, etc.).
These types should provide the information you are looking for and are where you might best start.

WCF Paged Results & Data Export

I've walked into a project that is using a WCF service for the data tier. Currently, when data is needed for a grid, all rows are returned, the results are bound to the grid, and the dataset is stuffed into a session variable for paging/sorting/rebinding. We've already hit the max message size limit, so I think it's time to convert from fetch-and-cache to fetching only the current page.
At face value this seems easy enough, but there's a small catch. The user is allowed to export the entire result set at any point. This means that for grid viewing purposes fetching the current page is fine, but when they want to do an export, I still need to make a call for all the data.
This puts me back into the max message size issue. What is the recommended approach for this type of setup?
We are currently using the wsHttpBinding...
Thanks for any assistance.
I think the recommended approach for large files is to use WCF streaming. I'm not sure of the exact details for your scenario, but you could take a look at this as a starting point:
http://msdn.microsoft.com/en-us/library/ms789010.aspx
I would probably do something like this in your case:
• Create a service with a "paged" GetData() method, where you pass the page index and the page size as additional parameters. This gives you a nice clean interface for "regular" use, and it should not hit the maxReceivedMessageSize limit.
• Create a second service (or method) that sends all the data; ideally, bundle it up into a ZIP file or similar on the server before sending. If that ZIP file is still too large, check out WCF streaming for handling large files, as Andy already pointed out.
The maxReceivedMessageSize limit is in place for a good reason: to avoid denial-of-service attacks where a WCF service gets flooded with large messages and brought to its knees. Keep that in mind, and don't just jack up the limit to 2 GB - it might come back to bite you :-)
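A minimal sketch of what those two operations could look like (all names here are illustrative, not from the question):

    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class RowDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    [ServiceContract]
    public interface IGridDataService
    {
        // Regular grid use: one small page per call, well under the message size limit.
        [OperationContract]
        List<RowDto> GetData(int pageIndex, int pageSize);

        // Export path: everything, zipped server-side; switch to a streamed
        // binding if even the compressed result is too large.
        [OperationContract]
        byte[] ExportAllAsZip();
    }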

Class design for serialization - ideas or patterns?

Let me begin with an illustrative example (assume the implementation is in a statically typed language such as Java or C#).
Assume that you are building a content management system (CMS) or something similar. The data is hierarchically organised into Folders. Each folder has a collection of children; a child may be a Page or a Folder. All items are stored within a root folder. No cycles are allowed. We have an acyclic graph.
The system will have a remote API and instances of Folder and Page must be serialized / de-serialized across the network. With a typical implementation of folder, in which a folder's children are a List, serialization of the root node would send the entire graph. This is unacceptable for obvious reasons.
I am interested to hear how people have solved this problem in the past.
I have two potential suggestions:
Navigation by query: Change the domain model so that the Folder class contains only a list of IDs for its children. To access a child we must query for it. Serialisation is now trivial since the graph ends at a well-defined point. The major downside is that we lose type safety - the ID could refer to something other than a folder or page.
Stop and re-attach: During serialization, whenever we detect a reference to a folder or page, stop and send the ID instead. When de-serializing, we must then look up the corresponding object for each ID and re-attach it at the relevant position in the nascent object.
I don't know what kind of API you are trying to build, but your suggestion #1 sounds like it is close to what is recommended for REST style services and APIs. Basically, a Folder object would contain a list of URLs to its children.
The navigation-by-query solution was used for NFS. Reading through your question, it looks to me as if you're trying to implement a kind of file system yourself.
If you're looking specifically into sending objects over the network, there is always CORBA. Aside from that, there are DCOM and the newer WCF, plus RMI and web services. I'll stop here now.
Suppose you model the whole tree with every element being a Node, the specialisations of Node being Folder and, umm, Leaf. You have a "root" Node. Nodes have the methods
canHaveChildren()
getChildren()
Leaf nodes have the obvious behaviours (they never even need to hit the network).
A Folder's getChildren() fetches the next set of nodes.
I did devise a system with RESTful services along these lines. It seemed reasonably easy to program to.
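In C#, that model might look roughly like this (NodeService is a hypothetical remote-call wrapper, not something from the answer):

    using System.Collections.Generic;

    public abstract class Node
    {
        public string Id { get; set; }
        public string Name { get; set; }
        public abstract bool CanHaveChildren { get; }
    }

    // A Page, in the CMS example; never needs to hit the network.
    public class Leaf : Node
    {
        public override bool CanHaveChildren { get { return false; } }
    }

    public class Folder : Node
    {
        public override bool CanHaveChildren { get { return true; } }

        // Fetches only the next level, e.g. GET /nodes/{id}/children in a
        // RESTful service; nothing deeper is serialized eagerly.
        public IList<Node> GetChildren() { return NodeService.GetChildren(Id); }
    }

    // Hypothetical stand-in for the remote call.
    public static class NodeService
    {
        public static IList<Node> GetChildren(string folderId)
        {
            /* issue the request, deserialize one level of children */
            return new List<Node>();
        }
    }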
I would not do it via the navigation-by-query method, simply because I would like to stick with the domain model where folders contain folders or pages.
Customizing the serialization might also be tricky, bug-prone, and difficult to change/understand.
I would suggest introducing an object like FolderBrowser into your model which takes an ID and gives you a list of the folder's contents. That will make your service operations simpler.
Cheers,
Unmesh
The classical solution is probably to use the proxy pattern, where some of the graph is sent over the network and some of the folders are replaced by proxies that do not have their lists of children populated until they are queried. A round trip to the server takes a significant amount of time, and making every folder a proxy would probably result in too many requests (a new request each time the contents of a folder is inspected), so you want some trade-off between the size of each chunk of data and the number of server requests needed in a typical scenario. This is of course application-specific, but sending the contents of all child folders down to, for instance, depth 2 might be a useful strategy...
Long story short: what will probably work best is your solution #1, with the exception that you want to send more than one folder at a time because of the overhead of a round trip to the server...
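A rough sketch of that proxy idea in C# (FolderService is again a hypothetical remote-call wrapper; the server decides the cut-off depth by choosing which folders to send as proxies):

    using System.Collections.Generic;

    public class Folder
    {
        public string Id { get; set; }

        // Populated by the server down to the chosen depth; null below it.
        public IList<Folder> Children { get; set; }

        public virtual IList<Folder> GetChildren() { return Children; }
    }

    // Below the cut-off depth the server substitutes proxies: same interface,
    // but the child list is fetched lazily, one round trip on first access.
    public class FolderProxy : Folder
    {
        public override IList<Folder> GetChildren()
        {
            if (Children == null)
                Children = FolderService.GetChildren(Id);
            return Children;
        }
    }

    // Hypothetical stand-in for the remote call.
    public static class FolderService
    {
        public static IList<Folder> GetChildren(string folderId)
        {
            /* fetch one more chunk of the graph from the server */
            return new List<Folder>();
        }
    }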