In my WP7 application I have a ListBox containing an image. I include several images within my application, but if an image is not found it should be retrieved from the internet and then of course be stored in Isolated Storage. Now I have two questions:
1) On initial application start, should I copy all images into Isolated Storage so that it contains every image (and the images from the application's images folder are therefore available twice: in the application image directory and in Isolated Storage)?
2) Is it possible within the ListBox to display images in one case from Isolated Storage and in the other from the application file directory?
Many thanks!
P.S. Code examples are welcome, especially in VB.NET.
1 - No. Why waste time and storage?
2 - Possible solution: write a class that implements IValueConverter. In your Convert method, if the value is a Uri with IsAbsoluteUri=true and Scheme="isostore", you read the file from isolated storage and return a BitmapImage, as described here. Otherwise, you just return the unconverted value from your Convert method. And you specify your converter in the binding.
Sorry, I don't have tested code to share, but a minimal sketch of such a converter might look like this (C#; the "isostore" scheme convention and the class name are just placeholders):
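using System;
using System.Globalization;
using System.IO;
using System.IO.IsolatedStorage;
using System.Windows.Data;
using System.Windows.Media.Imaging;

public class IsoStoreImageConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        var uri = value as Uri;
        if (uri != null && uri.IsAbsoluteUri && uri.Scheme == "isostore")
        {
            // Load the image from isolated storage and return a BitmapImage.
            using (var store = IsolatedStorageFile.GetUserStoreForApplication())
            using (var stream = store.OpenFile(uri.AbsolutePath.TrimStart('/'), FileMode.Open, FileAccess.Read))
            {
                var bitmap = new BitmapImage();
                bitmap.SetSource(stream);
                return bitmap;
            }
        }
        // Anything else (e.g. a relative URI into the XAP) passes through unchanged.
        return value;
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}

You would then declare an instance of the converter as a resource and set it on the image binding's Converter property.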
P.S. For your task, I'd recommend a third-party lib called "Kawagoe Toolkit". The only drawback is the license, which obliges you to mention them on your about page. If using Kawagoe, you could just define a property "ImageSource" returning object, and return either a Uri for images from resources/XAP, or a delay-loaded ImageSource object obtained from Kawagoe's ImageCache.Default.Get() method, which will eventually load itself from either isolated storage or the internet. They already have the downloading and caching code you need.
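If the cache API behaves as described, such a property might look roughly like this (untested; IsShippedWithXap, FileName and RemoteUri are made-up names):

public object ImageSource
{
    get
    {
        if (IsShippedWithXap)
            return new Uri("/Images/" + FileName, UriKind.Relative); // resolved from the XAP
        return ImageCache.Default.Get(RemoteUri); // delay-loaded, cached by Kawagoe
    }
}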
Yes, you'd better copy them to a unified location.
What for? Just show images from IsolatedStorage. Initial images are copied there, and new images are downloaded from the internet and put into IsolatedStorage (of course you'd have to write this code).
I'm developing an application which will have a web crawler for some sites.
The application will trigger an Azure Function by URL, where the crawler will start its work.
So far, so good, but we'll have to save some evidence that the crawler passed through the site. We're thinking of saving a PDF file with a screenshot of the pages the crawler visited, but, as Azure Functions doesn't have GDI+, this won't work with Selenium or PhantomJS.
A different approach could be to download the HTML content and somehow save this HTML string (with all its JS and CSS dependencies) into a PDF file.
I'd like a library that can work within Azure Functions to take a screenshot of some URL (or HTML string) and save it to PDF.
Thanks.
Unfortunately the App Service sandbox, whose rules Azure Functions live by, is going to block most GDI+ API calls. We have had success with one third-party library (ByteScout) for some PDF generation needs, but I think in your case that type of operation is explicitly blocked. You can find more details here: https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#win32ksys-user32gdi32-restrictions
There is no workaround that I'm aware of because at the end of the day most of these solutions are relying on GDI+ in the underlying OS (directly or indirectly).
Your only real option is to offload that workload to a virtual machine without the restriction on the API. That could take the form of a dedicated VM or something like an Azure Container Instance whose life-cycle you can manage more dynamically as needed. We do something similar today: we have a message queue being monitored on a VM, and our Azure Function drops the request into the queue for processing.
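A minimal sketch of the hand-off side, assuming the Functions v1 C# class-library model (the queue name and function name are just examples):

using System.Net;
using System.Net.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class EnqueueScreenshot
{
    [FunctionName("EnqueueScreenshot")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
        [Queue("screenshot-requests")] out string queueMessage)
    {
        // Hand the target URL to a worker outside the sandbox (VM/container)
        // that actually has GDI+ available for rendering the PDF.
        queueMessage = req.Content.ReadAsStringAsync().Result;
        return req.CreateResponse(HttpStatusCode.Accepted);
    }
}

The VM side then dequeues each request, renders it with whatever headless browser it likes, and stores the resulting PDF.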
I built a virtual filesystem (not a namespace extension) for Windows which acts as a frontend to our document management server, consisting of files and folders. In order to be able to display some metadata of the DMS objects in Windows Explorer as additional selectable columns, I successfully provided properties to the Windows Property System by implementing a COM Property Handler. Whereas normal property handlers focus on specific file types for which they feel responsible, my Property Handler adds properties to all files regardless of their type. Because Property Handlers can only be registered at the file-type level, I registered my handler for about 30 types under
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\PropertySystem\PropertyHandlers\<.Extension>
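Expressed in C# for brevity, that per-extension registration amounts to something like this (sketch; the CLSID is a placeholder for my handler's GUID):

using Microsoft.Win32;

static class PropertyHandlerRegistration
{
    const string HandlerClsid = "{00000000-0000-0000-0000-000000000000}"; // placeholder

    // Registers the handler for one extension, e.g. ".myext" (requires elevation).
    public static void RegisterFor(string extension)
    {
        string keyPath =
            @"SOFTWARE\Microsoft\Windows\CurrentVersion\PropertySystem\PropertyHandlers\" + extension;
        using (RegistryKey key = Registry.LocalMachine.CreateSubKey(keyPath))
        {
            key.SetValue(null, HandlerClsid); // the default value holds the handler CLSID
        }
    }
}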
However, I did not manage to register the Property Handler for folder objects. Since all objects in our file system are virtual, I build the property store (IPropertyStore) by implementing IInitializeWithFile instead of IInitializeWithStream. The properties are requested from our DMS with the path from IInitializeWithFile acting as key and are not read from an object's content. This concept would work for folders as well.
To get called on folders, I tried to associate the handler by registering under different well-known identifiers like Folder, Directory, AllFileSystemObjects and * instead of the file extension, without success.
I also didn’t find anything in the MSDN documentation regarding this aspect.
Is there a way to register a Windows Property Handler on folders? Or is there some other way to add custom columns to folders in Windows Explorer?
I'm not sure if it is possible to do this.
Property handlers are clearly not the right approach; they are system-wide and there can only be one per file extension. They should only be implemented by the software that "owns" the file extension and can parse the file to extract properties.
The old column handlers would have been your best bet (IMHO) but they are officially dead and you already said you can't use them.
Have you considered creating a namespace extension? Either as a root item somewhere (Desktop or My Computer) the way My Documents used to work in 2000/XP or maybe something more along the lines of how OneDrive works?
I'm not sure if desktop.ini files work in the root of a drive, but it might be worth looking into. You would then find yourself in the poorly documented land of [.ShellClassInfo] and its CLSID, CLSID2 and UICLSID members. The general idea would be to act as an IShellFolder proxy on top of the "real" IShellFolder so you could create a multiplexing property store. I think there are some (undocumented?) property keys you can override to change the folder's default columns and tooltips as well.
There is also something called a delegated folder that allows you to play with nested PIDLs but the documentation is once again pretty useless so I'm not sure if this is something worth looking into.
A 3rd option is to pretend to be a cloud storage provider. I don't know if this gets you any closer to your goal and you would still have to implement some NSE bits to get to the point where you can layer yourself on top of the underlying IShellFolder. This feature is rather new and only documented to work on Windows 10.
The inner workings of how Explorer/IShellBrowser is connected to the IShellFolder/IShellView is one of the least documented parts of Windows. There are hundreds of undocumented interfaces. Explorer gives DefView special treatment leaving other 3rd-party implementations out in the cold.
My feeling is that there is no clean solution to implement this on top of a drive letter, but you might get lucky; if Raymond Chen drops by, he might have some tips for you...
Passing Data from Page to Page for Windows Phone 8.1
I found this great article:
http://www.windowsapptutorials.com/windows-phone/how-to-pass-data-between-different-pages-in-windows-phone-application/
and I understood it very well.
A few questions came to mind after reading it:
[1] Which method is better, and in which scenarios?
[2] What are the benefits of each of the 3 methods?
Small hint: Please state if you are using Silverlight or WinRT, as it makes a big difference.
I assume you are using Silverlight here.
Like demas already stated: Global variables are almost never a good idea.
Recommendation: Always use queryString and always only pass IDs in the query.
This means, keep your data in some kind of storage and always read it from there on any page.
If you want to pass complex objects, put them to your storage, tell the new page the id and on the new page load it from the storage.
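A minimal Silverlight sketch of that pattern (the page name, query key and ItemStorage class are illustrative):

// Navigating: pass only the ID.
NavigationService.Navigate(new Uri("/DetailPage.xaml?id=42", UriKind.Relative));

// On the detail page: reload the data from storage on every visit.
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);
    string id;
    if (NavigationContext.QueryString.TryGetValue("id", out id))
    {
        DataContext = ItemStorage.Load(id); // read from your own storage, not a global
    }
}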
If your app gets terminated (tombstoned) in the background and is relaunched on one of your detail pages, your global variables may well be empty.
It also improves maintainability: all data accessed by a page is loaded in that page's code/code-behind/viewmodel; you don't have to check other parts of the app to find out where that data comes from.
Further hint:
It helped me a lot to think of a Silverlight app like a "web app": the pages are individual pages and the viewmodels are the database servers. There is no way to pass data between these pages other than the query string.
A public property in App.xaml.cs and global variables cause namespace pollution and make the application less testable, so I prefer to use the QueryString.
On the other hand, sometimes I need to pass a complex object or even collections of complex objects, and in that case a public property in App.xaml.cs is preferable, in my opinion.
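For the complex-object case, that boils down to something like this (Order is just an example type):

public partial class App : Application
{
    public Order SelectedOrder { get; set; } // shared between pages
}

// Before navigating:
((App)Application.Current).SelectedOrder = order;

// On the target page:
var order = ((App)Application.Current).SelectedOrder;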
This is somewhat a duplicate of this question, but that question has no (valid) answer and is 1.5 years old, so I'm asking my own in hopes that people have more info now.
If you are using multiple instances of a WebBrowser control, MSHTML, IHTMLDocument, or whatever... from inside the APP (Asynchronous Pluggable Protocol) instance, mostly IInternetProtocol::Start, is there a way to know which instance is loading the resource? Or is there a way to use a different APP for each instance of the control, maybe by providing one via IDocHostUIHandler or ICustomDoc or otherwise? I'm currently using IInternetSession::RegisterNameSpace to make it process-wide.
Optional reading below; don't feel you need to read it unless the above isn't clear.
I'm working on a legacy (Win32 C++) email client that uses the MS ActiveX WebBrowser control (MSHTML or other names it goes by) to display HTML emails. It was saving everything to temp files, updating the cid: URLs, and then having the control load that. Now I want to do it the correct way, using APP. I've got it all working with some test code that just uses static variables/globals and loads one email.
My problem now is, the app might have several instances of the control all loading different emails (and other stuff) at the same time... not really multiple threads so much, just the asynchronous nature of the control. I can give each instance of the control a unique URL to load the email, say, cid:email-GUID, and then in my APP code I can use that URL to know which email to load. However, when it comes to loading any content inside the email, like attached images using src="cid:", those will not always be unique so I will not always know which image it is, for which email. I'd like to avoid having to modify the URLs of the HTML before displaying it (I'm doing that now for the temp file thing, but want to do it a better way).
IInternetBindInfo::GetBindString can return the referrer, BINDSTRING_XDR_ORIGIN, or the root URL, BINDSTRING_ROOTDOC_URL, but those require newer versions of IE and my legacy app must support older XP installs that might even have IE6 or IE7, so I'd rather not use these.
Tagged as TWebBrowser because that is actually what I'm using (Borland Builder 6 C++), but don't need answers specific to that platform.
As the Asynchronous Pluggable Protocol Handler is very low level, you cannot attach handlers individually to different rendering controls.
Here is a way to get the referrer:
Obtain BINDSTRING_HEADERS
Extract the referrer by parsing the line Referer: http://....
See also How can I add an extra http header using IHTTPNegotiate?
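The parsing step itself is trivial; in C# terms (the COM plumbing around GetBindString is omitted) it would look something like:

using System;

// rawHeaders is the header block obtained via BINDSTRING_HEADERS.
static string GetReferrer(string rawHeaders)
{
    foreach (string line in rawHeaders.Split(new[] { "\r\n" }, StringSplitOptions.RemoveEmptyEntries))
    {
        if (line.StartsWith("Referer:", StringComparison.OrdinalIgnoreCase))
            return line.Substring("Referer:".Length).Trim();
    }
    return null;
}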
Here is another crazy way:
Create another Asynchronous Pluggable Protocol Handler by calling RegisterMimeFilter.
Monitor text/plain and text/html.
Scan the incoming email source (content comes incrementally), parse it, and store all image links in a dictionary.
In your namespace handler you can use this dictionary to find the reference of any image resource.
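The dictionary part might look roughly like this (C#; the names and the regex are illustrative, and real HTML parsing would need more care):

using System.Collections.Generic;
using System.Text.RegularExpressions;

static class CidMap
{
    // cid -> GUID of the email whose HTML referenced it
    static readonly Dictionary<string, string> Map = new Dictionary<string, string>();
    static readonly Regex CidLink = new Regex("src=\"cid:([^\"]+)\"", RegexOptions.IgnoreCase);

    public static void Scan(string htmlChunk, string emailGuid)
    {
        foreach (Match m in CidLink.Matches(htmlChunk))
            Map[m.Groups[1].Value] = emailGuid;
    }

    public static bool TryResolve(string cid, out string emailGuid)
    {
        return Map.TryGetValue(cid, out emailGuid);
    }
}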
Let me begin with an illustrative example (assume the implementation is in a statically typed language such as Java or C#).
Assume that you are building a content management system (CMS) or something similar. The data is hierarchically organised into Folders. Each folder has a collection of children; a child may be a Page or a Folder. All items are stored within a root folder. No cycles are allowed. We have an acyclic graph.
The system will have a remote API and instances of Folder and Page must be serialized / de-serialized across the network. With a typical implementation of folder, in which a folder's children are a List, serialization of the root node would send the entire graph. This is unacceptable for obvious reasons.
I am interested to hear how people have solved this problem in the past.
I have two potential suggestions:
Navigation by query: Change the domain model so that the folder class contains only a list of IDs for each child. To access a child we must query for it. Serialisation is now trivial since the graph ends at a well defined point. The major downside is that we lose type safety - the ID could be for something other than a folder/child.
Stop and re-attach: During serialization stop whenever we detect a reference to a folder or page, send the ID instead. When de-serializing we must then look up the corresponding object for each ID and re-attach it at the relevant position in the nascent object.
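To make suggestion #1 concrete, here is roughly the shape I have in mind (C#; the repository interface is illustrative):

using System;
using System.Collections.Generic;

public class Folder
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public List<Guid> ChildIds { get; set; } = new List<Guid>();
}

public interface IContentRepository
{
    // May return a Folder or a Page; this is where type safety is lost.
    object GetItem(Guid id);
}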
I don't know what kind of API you are trying to build, but your suggestion #1 sounds like it is close to what is recommended for REST style services and APIs. Basically, a Folder object would contain a list of URLs to its children.
The Navigation by query solution was used for NFS. Reading through your question, it looks to me as if you're trying to implement a kind of file system yourself.
If you're looking specifically into sending objects over the network, there is always CORBA. Aside from that there are DCOM and the newer WCF. But wait, there's more, like RMI. Furthermore there are Web Services. I'll stop here now.
Suppose you model the whole tree with every element being a Node, the specialisations of Node being Folder and, umm, Leaf. You have a "root" Node. Nodes have the methods
canHaveChildren()
getChildren()
Leaf nodes have the obvious behaviours (never even need to hit the network)
A Folder's getChildren() gets the next set of nodes.
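In C# the model might be sketched like this (the fetch delegate stands in for whatever RESTful call loads a folder's children):

using System;
using System.Collections.Generic;

public abstract class Node
{
    public string Name { get; set; }
    public abstract bool CanHaveChildren { get; }
    public abstract IReadOnlyList<Node> GetChildren();
}

public class Leaf : Node
{
    public override bool CanHaveChildren { get { return false; } }
    public override IReadOnlyList<Node> GetChildren() { return new Node[0]; } // never hits the network
}

public class Folder : Node
{
    private readonly Func<IReadOnlyList<Node>> fetch; // e.g. wraps a REST call
    public Folder(Func<IReadOnlyList<Node>> fetch) { this.fetch = fetch; }
    public override bool CanHaveChildren { get { return true; } }
    public override IReadOnlyList<Node> GetChildren() { return fetch(); } // the next set of nodes
}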
I did devise a system with Restful services along these lines. Seemed to be reasonably easy to program to.
I would not do it by the Navigation by query method, simply because I would like to stick with the domain model where folders contain folders or pages.
Customizing the serialization might also be tricky, bug-prone and difficult to change/understand.
I would suggest that you introduce an object like FolderBrowser in your model which takes an ID and gives you a list of the folder's contents. That will make your service operations simpler.
The classical solution is probably to use a proxy pattern, where some of the graph is sent over the network and some of the folders are replaced by proxies that will not have their lists of children populated until they are queried. A round trip to the server takes a significant amount of time, and it will probably result in too many requests if all folders are proxies (this would yield a new request each time the contents of a folder are inspected), so you want some trade-off between the size of each chunk of data and the number of server requests needed in a typical scenario. This is of course application-specific, but sending the contents of all child folders down to, for instance, depth 2 might be a useful strategy...
Long story short: What will probably work best is your solution #1 with the exception that you want to send more than one folder at a time because of the overhead of a round trip to the server...
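A bare-bones sketch of such a proxy (C#; the service interface and Item base class are illustrative):

using System.Collections.Generic;

public abstract class Item
{
    public string Name { get; set; }
}

public interface IFolderService
{
    IList<Item> FetchChildren(string folderId); // one server round trip
}

public class FolderProxy : Item
{
    private readonly string id;
    private readonly IFolderService service;
    private IList<Item> children; // null until first accessed

    public FolderProxy(string id, IFolderService service)
    {
        this.id = id;
        this.service = service;
    }

    public IList<Item> Children
    {
        get
        {
            if (children == null)
                children = service.FetchChildren(id); // fetch lazily, ideally a whole subtree at once
            return children;
        }
    }
}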