My application creates Chromium web browser instances to load different customers. The problem is that each browser instance spawns its own render process, which consumes 80-110 MB of memory. All the customers are hosted on the same site, so I enabled "Process Per Site", whose description suggests it should fix my issue. I also tested loading three ChromiumWebBrowser instances pointed at google.com, and that still resulted in three separate render processes, each consuming 80-110 MB of memory.
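For reference, this is roughly how I enable the switch before creating any browsers (assuming the CefSharp wrapper, since ChromiumWebBrowser comes from there; "process-per-site" is the standard Chromium command-line flag, but the exact initialization API may differ in other wrappers):

var settings = new CefSettings();
// "process-per-site" is the Chromium command-line switch behind the "Process Per Site" option
settings.CefCommandLineArgs.Add("process-per-site", "1");
Cef.Initialize(settings);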
Is there anything that can be done about multiple render processes consuming this much memory, or is it unpreventable?
I am using the UI Automation COM-to-.NET Adapter to read the contents of a target Google Chrome browser that plays Flash content on Windows 7. It works.
I succeeded in getting the content and elements. Everything works fine for some time, but after a few hours the elements become inaccessible.
(AutomationElement).FindAll() returns 0 children.
Is there any internal, undocumented timeout used by UI Automation?
According to this IUIAutomation2 interface documentation, there are two timeouts, but they are not accessible from the IUIAutomation interface.
IUIAutomation2 is supported only on Windows 8 (desktop apps only).
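For illustration, on Windows 8 or later those two timeouts would be set roughly like this (a sketch only; it assumes the UIAutomationClient COM interop, and the millisecond values are arbitrary examples):

// assumes: using UIAutomationClient;  (COM interop for UIAutomationCore)
var automation = new CUIAutomation8();          // Windows 8+ coclass
var automation2 = (IUIAutomation2)automation;   // the interface that exposes the timeouts

automation2.ConnectionTimeout = 5000;    // ms UIA waits for a provider to respond
automation2.TransactionTimeout = 10000;  // ms allowed for a whole UIA transaction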
So I believe there is some timeout.
I made a workaround that restarts the searching and monitoring of elements from the top of the desktop tree, but the elements are still not available.
After some time (I am not sure how much) the elements become available again.
My requirement is to read the values continuously and as fast as possible, but this behavior damages the whole architecture.
I read somewhere that there is a timeout of 3 minutes, but I am not sure.
If there is a timeout, is it possible to change it?
Is it possible to restart something, or release/dispose something?
I can't find anything on MSDN.
Does anybody have any idea what is happening and how to resolve it?
Thanks for this nicely put question. I have a similar issue with a much different setup. I'm on Windows 7, using UIAutomationCore.dll directly from C# to test our application under development. After running my sequence of actions and event subscriptions and all the other things, I intermittently observe that the UIA interface stops working (after about 8-10 minutes in my case, but I'm using the UIA interface heavily).
Many different attempts, including re-dispatching the COM interface and sleeping at various points, failed. The funny revelation was that I managed to run AccEvent.exe (part of the SDK, like inspect.exe) during the test and saw that events stopped flowing to AccEvent too. So it wasn't my client's interface that stopped; it was the COM server (or whatever UIAutomationCore does) that stopped responding.
As a solution (that seems to work most of the time, or at least improve the situation a lot), I decided I should give the application under test some breathing room, since using UIA puts additional load on it. This could be a few smartly placed sleep points in your client, but instead of sleeping for a fixed time, I monitor the processor load of the application and wait until it settles down.
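Roughly, the "wait until it settles" idea looks like this (a sketch only; the 5% threshold and 500 ms polling interval are arbitrary choices, not values from my actual code):

using System;
using System.Diagnostics;
using System.Threading;

static void WaitUntilSettled(Process target, double cpuThresholdPercent = 5.0)
{
    TimeSpan lastCpu = target.TotalProcessorTime;
    var clock = Stopwatch.StartNew();

    while (true)
    {
        Thread.Sleep(500); // polling interval

        TimeSpan currentCpu = target.TotalProcessorTime;
        double cpuPercent = (currentCpu - lastCpu).TotalMilliseconds
                            / (clock.ElapsedMilliseconds * Environment.ProcessorCount) * 100.0;

        if (cpuPercent < cpuThresholdPercent)
            return; // the application under test has calmed down; safe to continue with UIA calls

        lastCpu = currentCpu;
        clock.Restart();
    }
}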
One of the intermittent errors I receive when the problem manifests itself is "... was unable to call any of the subscribers ...", and my search turned up an MSDN page saying they have improved things in the CUIAutomation8 interface; but as this is Windows 8 specific, I haven't had the chance to try it yet.
I should also add that I reduced the number of calls to UIA by doing more UI caching (FindAllBuildCache), since the less back-and-forth there is, the better it is for UIA. Thanks to Guy's answer to another question: UI Automation events stop being received after a while monitoring an application and then restart after some time
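For illustration, the caching pattern looks roughly like this (a sketch only; it assumes the UIAutomationClient COM interop, and the property requested is just an example):

// assumes: using System; using UIAutomationClient;
var automation = new CUIAutomation();
var cacheRequest = automation.CreateCacheRequest();
cacheRequest.AddProperty(30005); // UIA_NamePropertyId, from UIAutomationClient.h

var root = automation.GetRootElement();
var condition = automation.CreateTrueCondition();

// One cross-process call returns the children together with the prefetched properties.
var children = root.FindAllBuildCache(TreeScope.TreeScope_Children, condition, cacheRequest);
for (int i = 0; i < children.Length; i++)
{
    var child = children.GetElement(i);
    Console.WriteLine(child.CachedName); // served from the cache, no extra round trip
}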
I need to take screenshots of 1000 URLs using Parallel.ForEach in a Windows service. I tried to use the WebBrowser control, but it throws an error since it runs only in an STA thread. Kindly tell me how to achieve this task using Parallel.ForEach.
Edit: I am using a trial version of a third-party DLL in the code below to do the capturing.
Parallel.ForEach(webpages, webPage =>
{
    GetScreenShot(webPage);
});

public void GetScreenShot(string webPage)
{
    // Capture the page with the third-party WebsitesScreenshot component
    WebsitesScreenshot.WebsitesScreenshot _Obj = new WebsitesScreenshot.WebsitesScreenshot();
    WebsitesScreenshot.WebsitesScreenshot.Result _Result = _Obj.CaptureWebpage(webPage);

    if (_Result == WebsitesScreenshot.WebsitesScreenshot.Result.Captured)
    {
        // Save the captured page as a PNG
        _Obj.ImageFormat = WebsitesScreenshot.WebsitesScreenshot.ImageFormats.PNG;
        _Obj.SaveImage(somePath);
    }
    _Obj.Dispose();
}
Most of the time this code runs fine up to around 80 URLs, but after that some tasks become blocked. I don't know why.
Sometimes the error is ContextSwitchDeadlock, as given below:
ContextSwitchDeadlock was detected
Message: The CLR has been unable to transition from COM context 0x44d3a8 to COM context 0x44d5d0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.
This error indicates that a CLR thread has not pumped Windows messages for an extended period of time. If a process is resource starved, causing extended waits during processing, this error can occur.
Given that you are trying to process 1000 web pages simultaneously, it would be no surprise that at least some of the threads become resource starved. Personally, I am surprised you can hit 80 websites without seeing errors.
Back off the number of websites you are trying to process in parallel and your problems will likely disappear. Since you are running the trial version, there is little else you can do. If you licensed the commercial version you might be able to get support from the vendor, but at a guess they would simply tell you to do the same thing.
The WebsitesScreenshot library can be quite resource intensive depending on the web page, especially if the pages contain Flash. Think of it as being logically equivalent to opening 80 tabs simultaneously in a web browser.
You don't mention whether you are using the 32-bit or the 64-bit version, but the 64-bit version is likely to have fewer resource constraints, especially memory. IMHO the .NET Framework does a poor job of minimizing memory usage, so memory problems can crop up earlier than you would expect.
ADDED
Please try limiting the number of threads first, e.g.
Parallel.ForEach(
    webpages,
    new ParallelOptions { MaxDegreeOfParallelism = 10 }, // limit to 10 concurrent captures
    webPage => { GetScreenShot(webPage); }
);
Without access to the source code, you may not be able to change the threading model at all. You might also try setting the timeout to a higher value.
I don't have this control myself and am not willing to install it on my machine just to answer a question about changing the threading model. Unless it is a documented feature, you probably won't be able to do it without changing, or at least inspecting, the source.
Long polling has solved 99% of my problems. There is now just one other problem. Imagine a penny auction site where people bid. On the front page there are several auctions.
If the user opens three of these auctions, and because JavaScript is not multithreaded, how would you get the other pages to ever load? Won't they always get bogged down and fail to load because they are waiting for long polling to end? In practice I have experienced this and I can't think of a way around it. Any ideas?
There are two ways that JavaScript gets around some of this.
While JavaScript is conceptually single threaded, it does its I/O in separate threads using completion handlers. This means other pieces of JavaScript can run while you are waiting for your network request to complete.
The JavaScript for each page (or even each frame in each page) is isolated from the JavaScript on other pages/frames. This means that each copy of JavaScript can run in its own thread.
A bigger issue for you is likely to be that browsers often limit the number of concurrent connections to a given site, and it sounds like you want to make many concurrent connections to the same site. In that case you will get a lock-up.
If you control both the server and the client, you will need to combine the multiple long-poll requests from the client into a single long-poll request to the server.
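On the server side, the combined request could look something like the sketch below (hypothetical names; it assumes bids are keyed by an auction ID and that the client sends the IDs of every auction it is watching in one request):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public class AuctionUpdateHub
{
    private readonly ConcurrentDictionary<int, decimal> latestBids = new ConcurrentDictionary<int, decimal>();
    private readonly SemaphoreSlim changed = new SemaphoreSlim(0);

    // Called whenever any bid is placed.
    public void PublishBid(int auctionId, decimal amount)
    {
        latestBids[auctionId] = amount;
        changed.Release();
    }

    // One long-poll call covering many auctions: wait (up to a timeout) for any change,
    // then return the current state of every auction the client asked about.
    public async Task<Dictionary<int, decimal>> WaitForUpdatesAsync(IEnumerable<int> auctionIds, TimeSpan timeout)
    {
        await changed.WaitAsync(timeout);

        var snapshot = new Dictionary<int, decimal>();
        foreach (var id in auctionIds)
        {
            decimal bid;
            if (latestBids.TryGetValue(id, out bid))
                snapshot[id] = bid;
        }
        return snapshot;
    }
}

This is only a rough illustration of the idea; a real implementation would also need to wake every waiting client and report which auctions actually changed, but the point is that the browser holds one open connection instead of one per auction.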
The Seaside book says: "saving [an image] while processing http requests is a risk you want to avoid".
Why is this? Does it just temporarily slow down the serving of HTTP requests, or will requests get lost, or will errors occur?
Before an image is saved, registered shutdown actions are executed. This means source files are closed and web servers are shut down. After the image is saved, it executes the startup actions, which typically bring the web server up again. Depending on the server implementation, open connections might be closed.
This means that you cannot accept new connections while you save an image, and open connections might be temporarily suspended or closed. For both issues there are (at least) two easy workarounds:
Fork the image using OSProcess before you save it (DabbleDB, CmsBox).
Use multiple images behind a load balancer, so that you can remove images one at a time from the active servers before saving them.
It seems that it is just a question of slowing things down. There is a quite thorough thread on the Seaside mailing list, the most relevant post of which is this case study of an e-commerce site:
Consequently, currently this is what happens:
1. The image is saved from time to time (usually daily) and copied to a separate "backup" machine.
2. If anything bad happens, the last image is grabbed, and the orders and/or gift certificates that were issued since the last image save are simply re-entered.
And #2 has been done very rarely, maybe two or three times a year, and then it usually turns out it is because I did something stupid.
Also, one of the great things about Smalltalk is that it is so easy to run quick experiments. You can download Seaside and put a halt in a callback of one of the examples. For example:
WACounter>>renderContentOn: html
    ...
    html anchor
        callback: [
            self halt.
            self increase ];
        with: '++'.
    ...
1. Open a browser on the Seaside server (port 8080 by default)
2. Click "Counter" to go to the example app
3. Click the "++" link
4. Switch back to Seaside. You'll see the pre-debug window for the halt
5. Save the image
6. Click "Proceed"
You'll see that the counter is correctly incremented, with no apparent ill effect from the save.
I have some functions in a web application that do a lot of calculations, and as a result they cause high CPU usage, which affects the rest of the application when other users are accessing it.
I have tried BackgroundWorker to no avail; the only thing that seems to work is using another thread and setting its priority to low. Can the UI be updated from a worker thread? Specifically, I am trying to bind a grid to a DataSet processed in the worker thread.
If you call Application.DoEvents() periodically to process the Windows message queue, this will allow the UI to be updated and respond to user input.
You need to understand that many people consider DoEvents to be evil. Because the UI will respond to events such as clicks, you should beware of the issues this can cause, such as allowing many of your CPU-heavy BackgroundWorker threads to be spawned. However, used correctly, DoEvents provides a valid strategy for keeping your application responsive during processing.
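A minimal sketch of the pattern (WinForms assumed; DoHeavyCalculation is a hypothetical stand-in for your per-row work):

using System.Data;
using System.Windows.Forms;

void ProcessAllRows(DataTable table)
{
    for (int i = 0; i < table.Rows.Count; i++)
    {
        DoHeavyCalculation(table.Rows[i]); // hypothetical CPU-heavy work

        if (i % 100 == 0)           // every so often...
            Application.DoEvents(); // ...pump the message queue so the UI stays responsive
    }
}

The usual caveat applies: guard against re-entrancy (for example, disable the button that starts the processing) before relying on DoEvents.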