SharePoint 2010 Sandbox Solution Timeout

Is there a way to adjust the timeout value for a SharePoint 2010 sandbox solution? I think it defaults to 30 seconds. I have a web part that occasionally runs a little longer than that. I really would prefer not to fall back to a farm solution if I can avoid it.

Finding the documentation on this was a little difficult, but I found it here. The relevant parts are these:
Per Request, with the Request Penalized: There is a hard limit on how long a sandboxed solution can take to complete. By default, this is 30 seconds. If a sandboxed solution exceeds the limit, the application domain that handles the request (but not the sandboxed worker process) is terminated. This limit is configurable, but only through custom code against the object model. The relevant parts of the object model cannot be accessed by sandboxed solutions, so no sandboxed solution can change the limit.
CPU Execution Time: The absolute limit of this resource is not applicable as long as it is set higher than the Per Request, with the Request Penalized limit described above. Normally, administrators will want to keep it higher so that the slow request is terminated before it causes a termination of the whole sandboxed worker process, including even the well-behaved sandboxed solutions running in it.
The following code can be used to adjust the Per Request timeout:
// Raise the per-request limit (in seconds; the default is 30).
SPUserCodeService.Local.WorkerProcessExecutionTimeout = 40;
SPUserCodeService.Local.Update();
You should be able to adjust the CPU Execution Time with something like the following:
// Keep the absolute CPU limit above the per-request limit, so a slow request
// is terminated before it takes down the whole sandboxed worker process.
SPUserCodeService.Local.ResourceMeasures["CPUExecutionTime"].AbsoluteLimit = 50.0;
SPUserCodeService.Local.Update();
You have to restart the Microsoft SharePoint Foundation Sandboxed Code Service for the changes to take effect.

In PowerShell, you can adjust the timeouts using the following commands:
$uc = [Microsoft.SharePoint.Administration.SPUserCodeService]::Local
$uc.WorkerProcessExecutionTimeout = 60                        # per-request limit (seconds)
$uc.ResourceMeasures["CPUExecutionTime"].AbsoluteLimit = 120  # absolute CPU limit
$uc.Update()
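If you want to script the restart mentioned above as well, here is a minimal C# sketch. It assumes the Windows service behind the Microsoft SharePoint Foundation Sandboxed Code Service is named SPUserCodeV4; confirm the name (e.g. with Get-Service) on your farm before relying on it.

using System.ServiceProcess;

class RestartSandboxService
{
    static void Main()
    {
        // "SPUserCodeV4" is an assumption; verify the service name first.
        using (var svc = new ServiceController("SPUserCodeV4"))
        {
            svc.Stop();
            svc.WaitForStatus(ServiceControllerStatus.Stopped);
            svc.Start();
            svc.WaitForStatus(ServiceControllerStatus.Running);
        }
    }
}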

Related

Is there any internal timeout in Microsoft UIAutomation?

I am using the UI Automation COM-to-.NET Adapter to read the contents of a target Google Chrome browser that plays Flash content on Windows 7. It works.
I succeeded in getting the content and elements. Everything works fine for some time, but after a few hours the elements become inaccessible.
AutomationElement.FindAll() returns 0 children.
Is there an internal, undocumented timeout used by UIAutomation?
According to the IUIAutomation2 interface documentation, there are two timeouts, but they are not accessible from the IUIAutomation interface, and IUIAutomation2 is supported only on Windows 8 (desktop apps only).
So I believe there is some timeout.
I made a workaround that restarts the searching and monitoring of elements from the beginning of the desktop tree, but the elements are still not available.
After some time (I am not sure how much), the elements become available again.
My requirement is to read the values continuously, as fast as possible, but this behavior damages the whole architecture.
I read somewhere that there is a timeout of 3 minutes, but I am not sure.
If there is a timeout, is it possible to change it?
Is it possible to restart something, or release/dispose something?
I can't find anything on MSDN.
Does anybody have any idea what is happening and how to resolve it?
Thanks for this nicely put question. I have a similar issue with a much different setup: I'm on Windows 7, using UIAutomationCore.dll directly from C# to test our application under development. After running my sequence of actions, event subscriptions, and everything else, I intermittently observe that the UIA interface stops working (after about 8-10 minutes in my case, but I'm using the UIA interface heavily).
Many different attempts failed, including re-dispatching the COM interface and sleeping at various points. The revealing moment came when I managed to run AccEvent.exe (part of the SDK, like inspect.exe) during the test and saw that events had stopped flowing to AccEvent too. So it wasn't my client's interface that had stopped; it was the COM server (or whatever UIAutomationCore does) that had stopped responding.
As a solution (which seems to work most of the time, or at least to improve the situation a lot), I decided I should give the application under test some breathing room, since using UIA puts additional load on it. This could be smartly placed sleep points in your client, but instead of sleeping for a set time, I monitor the processor load of the application and wait until it settles down, as in the sketch below.
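A minimal sketch of that waiting strategy, assuming you already have a Process handle for the application under test; the 5% threshold and 500 ms sample window are arbitrary values to tune:

using System;
using System.Diagnostics;
using System.Threading;

static class ProcessIdleHelper
{
    // Block until the target process's CPU usage, measured over a sample
    // window, drops below maxCpuPercent.
    public static void WaitUntilSettled(Process target,
                                        double maxCpuPercent = 5.0,
                                        int sampleMs = 500)
    {
        TimeSpan last = target.TotalProcessorTime;
        while (true)
        {
            Thread.Sleep(sampleMs);
            target.Refresh();                       // re-read the process counters
            TimeSpan now = target.TotalProcessorTime;
            double cpuPercent = (now - last).TotalMilliseconds
                                / (sampleMs * Environment.ProcessorCount) * 100.0;
            if (cpuPercent < maxCpuPercent)
                return;
            last = now;
        }
    }
}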
One of the intermittent errors I receive when the problem manifests itself is "... was unable to call any of the subscribers ...", and my search turned up an MSDN page saying things have been improved in the CUIAutomation8 interface; since that is Windows 8 specific, I haven't had a chance to try it yet.
I should also add that I reduced the number of calls to UIA by using more UI caching (FindAllBuildCache), since the lower the frequency of the back-and-forth, the better it is for UIA. Thanks to Guy's answer to another question: UI Automation events stop being received after a while monitoring an application and then restart after some time
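For reference, a minimal sketch of that caching idea using the managed System.Windows.Automation wrapper (the raw COM equivalent is IUIAutomationElement::FindAllBuildCache). The point is that a single cross-process call fetches the children together with the properties you ask for:

using System;
using System.Windows.Automation;

class CachedReadExample
{
    static void Main()
    {
        var cacheRequest = new CacheRequest();
        cacheRequest.Add(AutomationElement.NameProperty);
        cacheRequest.Add(AutomationElement.ControlTypeProperty);

        using (cacheRequest.Activate())
        {
            // One round trip: children arrive with Name/ControlType pre-fetched.
            AutomationElementCollection children = AutomationElement.RootElement
                .FindAll(TreeScope.Children, Condition.TrueCondition);

            foreach (AutomationElement el in children)
            {
                Console.WriteLine(el.Cached.Name);  // served from the cache,
                                                    // no extra cross-process call
            }
        }
    }
}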

How to capture screenshots of 1000 web pages concurrently in C#

I need to capture screenshots of 1000 URLs using Parallel.ForEach in a Windows service. I tried the WebBrowser control, but it throws an error since it runs only in an STA. Kindly tell me how to achieve this task using Parallel.ForEach...
Edit: I am using a trial version of a third-party DLL in the code below to do the capturing...
Parallel.ForEach(webpages, webPage =>
{
    GetScreenShot(webPage);
});

public void GetScreenShot(string webPage)
{
    // Capture the page, save it as PNG, and release the component.
    WebsitesScreenshot.WebsitesScreenshot _Obj = new WebsitesScreenshot.WebsitesScreenshot();
    WebsitesScreenshot.WebsitesScreenshot.Result _Result = _Obj.CaptureWebpage(webPage);
    if (_Result == WebsitesScreenshot.WebsitesScreenshot.Result.Captured)
    {
        _Obj.ImageFormat = WebsitesScreenshot.WebsitesScreenshot.ImageFormats.PNG;
        _Obj.SaveImage(somePath);
    }
    _Obj.Dispose();
}
Most of the time this code runs fine for up to about 80 URLs, but after that some tasks become blocked. I don't know why...
Sometimes the error is ContextSwitchDeadlock, as given below:
ContextSwitchDeadlock was detected
Message: The CLR has been unable to transition from COM context 0x44d3a8 to COM context 0x44d5d0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.
This error indicates that a CLR thread has not pumped messages for an extended period of time. If a process is resource-starved, causing extended waits during processing, this error can occur.
Given that you are trying to process 1000 web pages simultaneously, it would be no surprise that at least some of the threads become resource-starved. Personally, I am surprised that you can hit 80 websites without seeing errors.
Back off the number of websites you are trying to process in parallel and your problems will likely disappear. Since you are running the trial version, there is little else you can do. If you licensed the commercial version, you might be able to get support from the vendor, but at a guess they would simply tell you to do the same thing.
The WebsitesScreenshot library can be quite resource-intensive depending on the web page, especially if the pages contain Flash. Think of it as logically equivalent to opening 80 tabs simultaneously in a web browser.
You don't mention whether you are using the 32-bit or the 64-bit version, but the 64-bit version is likely to have fewer resource constraints, especially around memory. IMHO, the .NET Framework does a poor job of minimizing memory usage, so memory problems can crop up earlier than you would expect.
ADDED
Please try limiting the number of threads first, e.g.
Parallel.ForEach(
    webpages,
    new ParallelOptions { MaxDegreeOfParallelism = 10 },  // 10-thread limit
    webPage => { GetScreenShot(webPage); }
);
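If ten parallel captures still stall, halve the limit and retest; the right value depends on how heavy the captured pages are and on the machine's memory, and it is usually far smaller than the number of URLs you need to process.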
Without access to the source code, you may not be able to change the threading model at all. You might also try setting the timeout to a higher value.
I don't have this control myself and am not willing to install it on my machine to answer a question about changing the threading model. Unless it is a documented feature, you probably won't be able to do it without changing, or at least inspecting, the source.

What Is Meant By Server Response Time

I'm doing website optimisations using Google's PageSpeed Insights to test improvements. Among the high-priority fix suggestions is this:
Reduce server response time
In our test, your server responded in 2.1 seconds.
I read the 'helpful' doc linked in this section, and now I'm really confused.
Is the server response time the DNS response, the time to first byte, or a combination? Is it purely a server-side thing, or could it be affected by, for example, a slow JavaScript resource or ready events in the DOM?
My first guess would have been that it's the time taken from the moment the request was issued to the first byte received from the server; however, Google's definition is not quite that:
(from this page https://developers.google.com/speed/docs/insights/Server)
Server response time measures how long it takes to load the necessary HTML to begin rendering the page from your server, subtracting out the network latency between Google and your server. There may be variance from one run to the next, but the differences should not be too large. In fact, highly variable server response time may indicate an underlying performance issue.
Taking 2.1 seconds suggests to me that your application/web server is buffering its output, so all your server-side processing happens before it sends any content. If you don't buffer, the HTML can start reaching the browser sooner, which may help; however, you lose the ability to do things like change response headers late in your logic.
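As an illustration only (not tied to the asker's particular stack), a minimal ASP.NET sketch of unbuffered, early-flushed output; the handler and the RunExpensiveWork helper are hypothetical:

using System.Web;

public class EarlyFlushHandler : IHttpHandler
{
    public bool IsReusable => false;

    public void ProcessRequest(HttpContext context)
    {
        var response = context.Response;
        response.BufferOutput = false;             // don't hold the whole response
        response.Write("<html><head>...</head>"); // send the head early
        response.Flush();                          // bytes reach the browser now

        string body = RunExpensiveWork();          // hypothetical slow processing
        response.Write("<body>" + body + "</body></html>");
    }

    private string RunExpensiveWork() => "...";    // placeholder for real work
}

The trade-off is exactly the one noted above: once Flush() has run, the status code and headers are already on the wire.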

ASP.NET MVC site, shared WCF client object, causing a single-threaded bottleneck?

I'm trying to nail down a performance issue under load in an application which I didn't build, but have become very familiar with the workings of.
The architecture is: mobile apps call an ASP.NET MVC 3 website to get data to display. The ASP.NET site calls a third-party SOAP API using WCF clients (basicHttpBinding), caching results as much as it can to minimize load on that third party.
The load from the mobile apps is in the order of 200+ requests per second at peak times, which translates to something in the order of 20 SOAP requests per second to the third-party, after caching.
Normally it runs fine but we get periods of cascading slowness where every request to the API starts taking 5 seconds.. then 10.. 15.. 20.. 25.. 30.. at which point they time out (we set the WCF client timeout to 30 seconds). Clearly there is a bottleneck somewhere which is causing an increasingly long queue until requests can't be serviced inside 30 seconds.
Now, the third-party API is out of my control but they swear that it should not be having any issues whatsoever with 20 requests per second. So I've been looking into the possibility of a bottleneck at my end.
I've read questions on StackOverflow about ServicePointManager.DefaultConnectionLimit and connectionManagement, but digging through the source, I think the problem is somewhat more fundamental. It seems that our WCF client object (which is a standard System.ServiceModel.ClientBase<T> auto-generated by "Add Service Reference") is being stored in the cache, and thus when multiple requests come in to the ASP.NET site simultaneously, they will share a single Client object.
From a quick experiment with a couple of console apps and spawning multiple threads to call a deliberately slow WCF service with a shared Client object, it seems to me that only one call will occur at a time when multiple threads use a single ClientBase. This would explain a bottleneck when e.g. 20 calls need to be made per second and each one takes more than 50ms to complete.
Can anyone confirm that this is indeed the case?
And if so, if I switched to every request creating its own WCF client object, would I just need to raise ServicePointManager.DefaultConnectionLimit to something greater than the default (which I believe is 2) before creating the client objects, in order to increase my maximum number of simultaneous connections?
(sorry for the verbose question, I figured too much information was better than too little)
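For what it's worth, a minimal sketch of the per-request pattern described above; ThirdPartyClient and GetData are hypothetical stand-ins for the generated proxy, the connection-limit value is arbitrary, and the limit must be set before the first proxy is created:

using System.Net;

// Run once at startup (e.g. in Application_Start), before any proxy exists:
ServicePointManager.DefaultConnectionLimit = 48;   // default is 2; 48 is arbitrary

// Then, per incoming request, use a fresh proxy instead of a cached one:
var client = new ThirdPartyClient();               // hypothetical generated proxy
try
{
    var result = client.GetData(id);               // hypothetical operation
    client.Close();                                // graceful close on success
}
catch
{
    client.Abort();                                // never Close a faulted channel
    throw;
}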

Page takes too long to be received

I have rewritten a web application from mod_python to mod_wsgi. The problem is that it now takes at least 15 seconds before any request is served (Firebug hints that almost all of this time is spent receiving data). Before the rewrite it took less than 1 second. I'm using Werkzeug for app development and Apache as the server. Server load seems minimal, and the same goes for memory usage. I'm using apache2-mpm-prefork.
I'm using the default settings for mod_wsgi; I think it's called 'embedded mode'.
I tested whether switching to apache2-mpm-worker would help, but it didn't.
Judging from the app log, the app is done with each request quite fast: less than 1 second.
I changed Apache logging to debug, but I can't see anything suspicious.
I moved the app to a different machine, but the result was the same.
Thanks in advance for any help.
Sounds a bit like your response Content-Length doesn't match how much data you are actually sending back, with the returned Content-Length being larger than the body. The browser then waits for more data until it possibly times out.
Use something like:
http://code.google.com/p/modwsgi/wiki/DebuggingTechniques#Tracking_Request_and_Response
to verify what data is being sent back and that things like Content-Length match the actual body.
Otherwise it is impossible to guess what the issue is if you aren't showing a small, self-contained example of code illustrating the problem.