My client likes programs like Microsoft OneNote, where changes are saved automatically and he can explicitly discard them when he wants to. I will have to implement some undo functionality, but I'll figure that out some other time.
With NHibernate, I suppose I could call ISession.Update on every single property/binding change, but I can see real pain with that approach down the road. I am not a fan of timers, but maybe a 5-second timer that starts on a property/binding change and, when it elapses, uses a BackgroundWorker thread to save to the database.
What do you think?
Calling ISession.Update on every property change isn't normally a good idea. Property-change events fire quite often, and hitting the database on each one could slow down your application, which will probably lead to a bad user experience.
Our application has the same behavior. We store the changes when a view is closed (and on some other related events). For example, when the user closes a tab, the data that was displayed in that tab is saved.
An additional timer is probably a good idea to prevent data loss if the application crashes or something else unexpected happens.
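A rough sketch of the save-on-close approach (the tab event and DirtyEntities collection are made-up names, just to illustrate the flow):

    void OnTabClosed(object sender, TabClosedEventArgs e)
    {
        // Persist everything the closed view touched in one transaction.
        using (var tx = session.BeginTransaction())
        {
            foreach (var entity in e.Tab.DirtyEntities)
                session.Update(entity);
            tx.Commit();
        }
    }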
I have a question about WebKitGTK.
These days I am making a program that analyzes web pages to check whether they contain suspicious content.
When the "load-changed" signal is emitted with WEBKIT_LOAD_FINISHED (or "load-failed" fires), the program analyzes the next page by calling webkit_web_view_load_uri again and again.
(http://webkitgtk.org/reference/webkit2gtk/stable/WebKitWebView.html#webkit-web-view-load-uri)
The question I want to ask you is about a memory problem.
The more web pages the program analyzes, the bigger WebKitWebProcess gets.
The return value of webkit_back_forward_list_get_length() also increases as web pages are analyzed. Where should I free memory?
Do you know how I can solve this problem, or could you give me any advice on where I can get help?
Thank you very much :-) Have a nice day ^^
In theory, what you're doing is perfectly fine, and you shouldn't need to change your code at all. In practice, WebKit has a lot of memory leaks, and programmatically loading many new URIs in the same web view is eventually going to be problematic, as you've found.
My recommendation is to periodically, every so many page loads, create a new web view that uses a separate web process, and destroy the original web view. (That will also reset the back/forward list to stop it from growing, though I suspect the memory lost to the back/forward list is probably not significant compared to memory leaks when rendering the page.) I filed Bug 151203 - [GTK] Start a new web process when calling webkit_web_view_load functions? to consider having this happen automatically; your issue indicates we may need to bump the priority on that. In the meantime, you'll have to do it manually:
Before doing anything else in your application, set the process model to WEBKIT_PROCESS_MODEL_MULTIPLE_SECONDARY_PROCESSES using webkit_web_context_set_process_model(). (If you are not creating your own web contexts, you'll need to use the default web context webkit_web_context_get_default().)
Periodically destroy your web view with gtk_widget_destroy(), then create a new one using webkit_web_view_new() et al. and attach it somewhere in your widget hierarchy. (Be sure NOT to use webkit_web_view_new_with_related_view(), as that's how you get two web views to use the same web process. See the sketch below.)
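A rough sketch combining both steps (the threshold, the helper names, and the container handling are mine, not part of the WebKitGTK API):

    #include <webkit2/webkit2.h>

    #define LOADS_PER_PROCESS 50            /* arbitrary threshold, tune for your app */

    static GtkWidget *web_view = NULL;
    static guint loads_in_view = 0;

    static void recreate_web_view (GtkContainer *parent)
    {
        if (web_view)
            gtk_widget_destroy (web_view);  /* tears down the old web process */

        web_view = webkit_web_view_new ();  /* NOT ..._new_with_related_view() */
        gtk_container_add (parent, web_view);
        gtk_widget_show (web_view);
        /* reconnect your "load-changed" handler to the new view here */
        loads_in_view = 0;
    }

    static void load_next_page (GtkContainer *parent, const char *uri)
    {
        if (loads_in_view++ >= LOADS_PER_PROCESS)
            recreate_web_view (parent);

        webkit_web_view_load_uri (WEBKIT_WEB_VIEW (web_view), uri);
    }

    int main (int argc, char **argv)
    {
        gtk_init (&argc, &argv);

        /* Must happen before any web view is created. */
        webkit_web_context_set_process_model (
            webkit_web_context_get_default (),
            WEBKIT_PROCESS_MODEL_MULTIPLE_SECONDARY_PROCESSES);

        /* ... build the window, call recreate_web_view () once, drive
           load_next_page () from your "load-changed" handler, then gtk_main () ... */
        return 0;
    }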
If you have trouble getting that solution to work, an extreme alternative would be to periodically send SIGTERM to your web process to get a new one. Connect to WebKitWebView::web-process-crashed, and call webkit_web_view_load_uri() from there. That will result in the same web view using a new web process.
I am new to WinRT and was playing around with session state. I am navigating to a page to collect data and then want to return to the main page. Just before navigation I am using:
SuspensionManager.SessionState["CurrentState"] = someObject;
The object contains lists of other mildly complex objects, etc... All seems to be working but is this the correct way to use the Suspension Manager?
I have looked at other posts on the topic, and some people report that it is necessary to apply [DataContract] and [DataMember] attributes to all the classes that are serialized. I omitted them and it still works (the data gets across pages). So what is the recommended approach?
I may be reading too much into one aspect of your question, but the role of SuspensionManager and SessionState is to store just enough information to bring your application back to the place the user left it if the application is actually terminated while it's suspended.
In the Windows 8 application lifecycle, your app gets 'suspended' if another app comes to the foreground. While your app is suspended all of its state is retained in memory, and if reactivated (you flip back to it) everything* is restored "for free".
A suspended app could, however, also be terminated by the OS (because of memory pressure, for instance), and there is no opportunity to react to that scenario in your app. So what you are really doing with SessionState is storing what's necessary to 'recreate' the last place the user was at IF the application had actually been terminated. It's essentially an insurance policy: if the application is merely suspended, SessionState isn't really needed.
The 'what's necessary' part is the grey area: I could store all of the information about, say, a user profile that was in progress, OR I could save just the user id that indexes into my persistent storage of all the user profile data. I generally take a more minimalist view and retain as little as possible in SessionState; the analogy I make is that I don't need to remember everything, I only need to remember how and where to find everything.
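As an example of that minimalist approach (currentProfileId and ProfileStore are placeholders for whatever your own persistence layer looks like; SessionState is just a Dictionary<string, object>):

    // Store only the key needed to find the data again, not the data itself.
    SuspensionManager.SessionState["CurrentProfileId"] = currentProfileId;

    // When reactivating after an actual termination, rebuild from persistent storage.
    object savedId;
    if (SuspensionManager.SessionState.TryGetValue("CurrentProfileId", out savedId))
    {
        var profile = ProfileStore.Load((string)savedId);
        // ...repopulate the view model from the reloaded profile...
    }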
There's an implication in your question that you're using SessionState to pass information between pages in your app, and that's not really its intent. Each page of your app is typically connected with a view model; when you interact with a page, you update the view model and drive additional screens and experiences from the changes already in the view model. Leaving one screen of your app and returning to the main one would also imply to me that you've persisted whatever information you collected - certainly to the view model, but also to something persistent like a database or local storage. When you revisit that page, you'd then pull the data back out of your view model (or that persistent storage); the main page doesn't need that information, so why hold on to it?
Lastly, since you mentioned being new to WinRT, you may want to check out App Builder, which pulls together a number of resources in consumable chunks to lead you through building an app over a period of 30 days (though all the material is available up front, so you can consume it at any pace you want :)). The discussion of lifecycle management that's germane to your question comes in on Day 17 of that sequence.
*"everything is restored for free" doesn't necessarily mean you don't have any work to do when an app comes out of the suspended state. There may be stale data that requires refreshing, and connections or other transient or short-lived entities may need to be refreshed/recreated.
This is an iOS 6 question.
I have an app that calls a class (A) to check something. Then I want to call a class (B) to do something else.
Is it possible to make sure process B doesn't start before process A finishes?
At the moment, I just call one after the other in the RootVC.
Each shows a modal view, and I only get to see B.
[self performA];
[self performB];
Thanks
There are several tools available for managing the order of execution of parts of your application. However, since you are presenting view controllers, you have a couple of constraints: you don't want to block the main thread (or the app will become unresponsive), and you must perform UI actions on the main thread.
In this case the most common, and probably most appropriate, solution is to set up a callback that triggers action B when action A finishes.
The modal view controller presented as part of A might call a delegate when it has finished its task successfully. That delegate can then begin task B.
Alternatively, you might pass a block to A, which A executes when it finishes. That block can then perform task B.
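For example, a minimal block-based sketch (CheckViewController, its completionBlock property, and performAWithCompletion: are illustrative names, not from your app):

    // In the view controller that runs task A:
    - (void)performAWithCompletion:(void (^)(void))completion
    {
        CheckViewController *checker = [[CheckViewController alloc] init];
        checker.completionBlock = completion;   // checker calls this once A has finished
        [self presentViewController:checker animated:YES completion:nil];
    }

    // In the root view controller:
    [self performAWithCompletion:^{
        [self performB];   // B only starts after A has finished and been dismissed
    }];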
I took the dare and failed.
The story: my app has been giving me hell updating from an iOS 4 target to iOS 6 (with a contingent sub of code for iOS 5/3GS). It crashes unless I use @try etc. with a built-in delay interval on the reattempt (which is stupid, because I don't know how large a database the users have, nor how long it will take to load). It's a painful way to get around my real problem: the view loads before the Core Data stack (logs) can be loaded completely, and I don't see a way to make the initial view wait until its NSMutableArray (based on the Core Data database of my object) loads. Basically, I keep getting a false error about addObjectsSortedBy: the foremost attribute of my entity.
Threading does seem to be the answer, but I need to load an NSMutableArray and feed it into my initialViewController, which will be visible on every launch (excluding the first-time initial launch). My attempts (okay, 12 attempts) to use threading just made the crash occur earlier in the app launch.
The result: I bow down to those who have wrangled that bull of threads.
My solution has been to build in a notification in the AppDelegate.m; my initialViewController's viewDidLoad is told to listen for it before anything else. If it gets the notification, it skips ahead and completes the normal process up to [super viewDidLoad]; if not, it executes @try, @catch, @finally. In the @try I attempt to proceed as though the notification arrived (just a little late); in the @catch I handle the error by displaying a "Please Wait" label to the user; then I tell the app to wait .xx and repeat the original addObjectsSortedBy: command as though everything were kosher to begin with. The sweet spot for my app, with images and data in the logs, appears to be .15 for the wait interval at 50 test entries, with time to spare and no obvious lag on load. I could probably go down to .10 at 50 entries.
BUT: I don't know how to scale this without having the logs loaded enough to get an object.count! Without that, there is no way to scale my delay, which means it may (read: will) not work for large logs with many entries (200+)!
I have a work-around, but I'm going to keep trying to get a grip on threading in order to have a real solution. And to be honest, once I hit 20 entries, the notification never arrives in time for the @try to occur.
If you can, use threads. I painted myself into a corner by failing to do so early on and am paying for it: my app has been in need of an overhaul, but I need this notch in my belt before it will be worthwhile. The earlier you can implement threaded loading the better for your long-term development. In the meantime, you may be able to use my work-around to continue testing other parts of your app.
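For anyone attempting the threaded route, a generic sketch (not the poster's code) of loading Core Data results off the main thread on iOS 5/6 looks roughly like this; the "LogEntry" entity, the notification name, and the exposed persistentStoreCoordinator property are assumptions:

    NSManagedObjectContext *background = [[NSManagedObjectContext alloc]
        initWithConcurrencyType:NSPrivateQueueConcurrencyType];
    background.persistentStoreCoordinator = self.persistentStoreCoordinator;

    [background performBlock:^{
        NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"LogEntry"];
        NSArray *results = [background executeFetchRequest:request error:NULL];
        NSArray *objectIDs = [results valueForKey:@"objectID"]; // safe to hand across threads

        dispatch_async(dispatch_get_main_queue(), ^{
            // Re-fetch by objectID on the main-thread context, then refresh the UI.
            [[NSNotificationCenter defaultCenter] postNotificationName:@"LogsLoaded"
                                                                object:objectIDs];
        });
    }];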
We have an unattended app without a user interface that is run periodically.
It is a VB.NET app. Instead of it being developed as a service, or a formless Windows application, it was developed with a form and all the code was placed in the form_load logic, with an "END" statement as the last line of code to terminate the program.
Other than producing a program that uses unneeded Windows form resources, is there a compelling reason to send this code back for rework to be changed to put the start up logic in a MAIN sub of a BAS file?
If the program is to enter and exit the mix (as opposed to running continuously) is there any point in making it a service?
If the app is developed with a Form do I have to worry about a dialog box being presented that no one will respond to even if there are no MessageBox commands in the app?
I recall there used to be something in VB6 where you could check an app as running unattended, presumably to avoid dialogs.
I don't know whether there are conditions where this will not run.
However, if the code was delivered by someone you will work with going forward, I would look at this as an opportunity to help them understand best practices (which this is not), and to help them understand that you expect best-practice code to be delivered.
First of all, you don't need it to be run in a Form.
Forms are there for presentation, so this kind of work should not be done in one.
If you don't want to mess with converting the application to a Service (not difficult, but not very easy either), you should create a Console Application and then schedule it with the Windows Task Scheduler.
This way, you create a Console Application with a Main function that does exactly what you need.
In any case, the program should not show any windows, so there should not be any message boxes. Any communication should be done via logging to local files, the Windows event log, or a database.
If you want more information on any of them, ask me.
Nothing says that it has to be a Windows service. Scheduling it to run via the Task Scheduler or something similar is a valid option.
However, it does sound like the developer should have chosen a "Console App" project instead of a "Windows Forms" project to create this app.
Send it back. The application is bulkier and slower than it needs to be, although that won't be much of an issue. It is somewhat more likely to run out of resources. But the main reason: converting it to a console app is very easy.
If you'd rather not have a console window pop up, simply do the following.
Create a new class "Program.vb", add a public shared Main() method, and move the "OnLoad" logic from the form to this method.
Next, delete the form and change the project's startup object (available in the project properties window) to Program.Main instead of the form.
This will have the same effect, without the Windows Forms resources being used. You can then remove the references to System.Windows.Forms and System.Drawing.
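A minimal sketch of what that Program.vb might look like (DoTheUnattendedWork stands in for whatever was in the Form_Load handler):

    Public Class Program
        Public Shared Sub Main()
            ' The logic that used to live in the form's Load handler goes here.
            DoTheUnattendedWork()
        End Sub

        Private Shared Sub DoTheUnattendedWork()
            ' ... the original processing code ...
        End Sub
    End Class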
We are using NHibernate to manage persistence in a complex modular Windows Forms application, but one thought keeps bothering me. We currently open a session at launch and load all objects through that session. I am worried that every loaded object ends up in the NHibernate session cache, so that none of them can be garbage collected, and ultimately we will end up with the whole database in memory.
This never happens with web applications because web page requests (and, even better, Ajax requests) represent the perfect short-lived transaction, so a session can be opened and closed to handle each request.
However, if I load a tree of objects in my forms application and put them into a navigation pane on the screen, they may stay there for the life of the application - and at any point the user may click on them, requiring our code to navigate the object relationships to other objects (which only works within an NHibernate session).
What do StackOverflow readers do to keep the benefits of NHibernate without the issues I describe?
Ayende and company usually recommend using a session per "conversation". This usually makes the session lifetime last for very short operations, so it behaves more like a web app.
For your tree case, you can use Bruno's solution #2 just fine. The objects can be lazily mapped. Then, every time you need to access a child collection, you start a conversation and reconnect the parent via ISession.Lock. Then when the databinding is done, close that session. Not too much overhead to maintain, just a few lines of code in any form that needs to carry a conversation; you can extend Form and the controls you're using to do this automatically if you're feeling sassy.
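A rough sketch of that reconnect step (the names are illustrative, and error handling is omitted):

    void OnNodeExpanded(Customer customer)
    {
        using (var conversation = sessionFactory.OpenSession())
        {
            // Re-associate the detached parent without forcing an update,
            // then touch the lazy collection while the session is open.
            conversation.Lock(customer, LockMode.None);
            ordersBindingSource.DataSource = customer.Orders.ToList();
        } // session closed once data binding has what it needs
    }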
The tricky part, then, is concurrent edits from different sessions. Let's not go there!
I open a session when I need one, and I'll close it when I know that I won't need it anymore.
More specifically, if I have a form which lets me edit Customer information, for instance, I'll open a session when the form gets instantiated, and I'll close the session when the form is closed.
When I have 2 instances of that form open, I also have 2 sessions open.
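In code, that pattern is roughly (sketch only; the form and its dependencies are illustrative):

    public partial class CustomerEditForm : Form
    {
        private readonly ISession session;

        public CustomerEditForm(ISessionFactory sessionFactory)
        {
            InitializeComponent();
            session = sessionFactory.OpenSession();       // opened with the form
            FormClosed += (s, e) => session.Dispose();    // closed with the form
        }
    }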
I can see a couple of alternatives:
Eager-load the object tree (which, from what I can gather from the documentation, is the default)
Detach the objects, intercept the "click" event, and load the data from the database then, with a new session (see the sketch below). This one forces you to take care of collections yourself instead of relying on NHibernate, which may fall outside the scope of the question (which asks for the benefits of NHibernate, one of which is collection management)
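A sketch of the second alternative, assuming each tree node stores only the entity's id (the names here are placeholders):

    void OnTreeNodeClick(Guid customerId)
    {
        // Short-lived session: load fresh data only when the node is clicked.
        using (var session = sessionFactory.OpenSession())
        {
            var customer = session.Get<Customer>(customerId);
            detailView.ShowCustomer(customer);
        }
    }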
You can take a look at my posts on how to use uNHAddins to work with session-per-conversation in a Windows Forms application (uNHAddins is a project with some additions to NHibernate, led by Fabio Maulo, the current NH lead).
The first post is this:
http://gustavoringel.blogspot.com/2009/02/unhaddins-persistence-conversation-part.html
From there you have links to uNHAddins trunk.