Will creating a new app pool disrupt anything in IIS 6? - iis-6

I have a website in IIS 6, let's call it "WebSite1." I have a virtual application underneath it, let's call it "VirtualApp1." Both of these are set up to use the same application pool, "AppPool1." I want to create a new application pool, "NewAppPool," and switch VirtualApp1 to using it, while leaving the rest of WebSite1 running under AppPool1.
Will doing this cause disruption to anything in WebSite1? I know it will most likely trash VirtualApp1's appdomain, but I want to know if it's going to cause any appdomain/pool recycles that would disrupt WebSite1.

I've done this many times on a production server and never experienced any problems or hiccups.
So I'd say: no, go ahead! :)
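For what it's worth, the switch itself can also be scripted against the IIS 6 metabase. Below is a minimal C# sketch using System.DirectoryServices; the site ID of 1 and the names from the question are assumptions you would replace with your own:

using System.DirectoryServices;

class SwitchAppPool
{
    static void Main()
    {
        // IIS 6 metabase path to the virtual application (site ID 1 assumed).
        using (var app = new DirectoryEntry("IIS://localhost/W3SVC/1/ROOT/VirtualApp1"))
        {
            // Repoint just this application at the new pool; the rest of
            // WebSite1 keeps running in AppPool1.
            app.Properties["AppPoolId"].Value = "NewAppPool";
            app.CommitChanges();
        }
    }
}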

SAP B1 - using standard DLLs in a SOAP web service

I am working with the SAP Business One SDK. I inherited this project from another team: it is a SOAP web service written as ASPX pages, and the other team used the standard DI API library. That library is the main problem with this solution, because it causes a lot of memory leaks.
In the source code I force a garbage collection after every DI API call, but unfortunately that is not enough.
The web service is hosted on IIS, and I had to configure IIS to restart it periodically. I know that is not the best solution, but it works; obviously it generates problems of its own.
My question: does any reasonable solution exist, or will I have to rewrite the code to use DI Server?
I have read a lot about this and found some articles on the Internet. Please do not put links in comments, because I am quite sure that I have already read them.
Every time you use a DI API object you have to release it. Otherwise, it will stay in memory and it will cause the memory leak you mentioned.
The correct way to release them is to use ReleaseComObject. Remember that if the object is null you will get an exception, so check it first:
// Release the underlying COM object so it stops holding memory
if (oDocuments != null)
    System.Runtime.InteropServices.Marshal.ReleaseComObject(oDocuments);
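To make this systematic rather than ad hoc, one option is to wrap every DI API object in try/finally so it is released even when an exception occurs. A minimal sketch, assuming the standard SAPbobsCOM interop assembly; the invoice object type and the surrounding method are illustrative only:

using System.Runtime.InteropServices;

static void AddInvoice(SAPbobsCOM.Company oCompany)
{
    SAPbobsCOM.Documents oDocuments = null;
    try
    {
        oDocuments = (SAPbobsCOM.Documents)oCompany.GetBusinessObject(
            SAPbobsCOM.BoObjectTypes.oInvoices);
        // ... work with oDocuments ...
    }
    finally
    {
        // Runs even if the work above throws, so the COM object never leaks.
        if (oDocuments != null)
            Marshal.ReleaseComObject(oDocuments);
    }
}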

WebKitGTK about webkit_web_view_load_uri

I have a question about WebKitGTK.
These days I am writing a program that analyzes web pages for suspicious content.
When the "load-failed" or "load-changed" signal is emitted with WEBKIT_LOAD_FINISHED, the program analyzes the next page by calling webkit_web_view_load_uri again and again.
(http://webkitgtk.org/reference/webkit2gtk/stable/WebKitWebView.html#webkit-web-view-load-uri)
My question is about a memory problem: the more pages the program analyzes, the bigger WebKitWebProcess grows.
The value returned by webkit_back_forward_list_get_length() also keeps increasing as pages are analyzed. Where should I free memory?
Do you know how I can solve this problem, or could you give me any advice on where I can get advice?
Thank you very much :-) Have a nice day ^^
In theory, what you're doing is perfectly fine, and you shouldn't need to change your code at all. In practice, WebKit has a lot of memory leaks, and programmatically loading many new URIs in the same web view is eventually going to be problematic, as you've found.
My recommendation is to periodically, every so many page loads, create a new web view that uses a separate web process, and destroy the original web view. (That will also reset the back/forward list to stop it from growing, though I suspect the memory lost to the back/forward list is probably not significant compared to memory leaks when rendering the page.) I filed Bug 151203 - [GTK] Start a new web process when calling webkit_web_view_load functions? to consider having this happen automatically; your issue indicates we may need to bump the priority on that. In the meantime, you'll have to do it manually:
Before doing anything else in your application, set the process model to WEBKIT_PROCESS_MODEL_MULTIPLE_SECONDARY_PROCESSES using webkit_web_context_set_process_model(). (If you are not creating your own web contexts, you'll need to use the default web context webkit_web_context_get_default().)
Periodically destroy your web view with gtk_widget_destroy(), then create a new one using webkit_web_view_new() et al. and attach it somewhere in your widget hierarchy. (Be sure NOT to use webkit_web_view_new_with_related_view(), as that's how you get two web views to use the same web process.)
If you have trouble getting that solution to work, an extreme alternative would be to periodically send SIGTERM to your web process to get a new one. Connect to WebKitWebView::web-process-crashed, and call webkit_web_view_load_uri() from there. That will result in the same web view using a new web process.
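If it helps, here is a minimal C sketch of that recycling approach, assuming GTK 3 and WebKit2GTK; the container argument, the PAGES_PER_PROCESS threshold, and the load_next_page() driver are illustrative, not part of the WebKitGTK API:

#include <gtk/gtk.h>
#include <webkit2/webkit2.h>

#define PAGES_PER_PROCESS 50  /* assumed threshold: recycle after this many loads */

static WebKitWebView *web_view = NULL;
static guint pages_loaded = 0;

/* Destroy the old view (taking its web process with it) and create a fresh one. */
static void
recycle_web_view (GtkContainer *container)
{
    if (web_view)
        gtk_widget_destroy (GTK_WIDGET (web_view));

    /* With the process model set below, each plain webkit_web_view_new()
     * gets its own web process. */
    web_view = WEBKIT_WEB_VIEW (webkit_web_view_new ());
    gtk_container_add (container, GTK_WIDGET (web_view));
    gtk_widget_show (GTK_WIDGET (web_view));
    pages_loaded = 0;
}

static void
load_next_page (GtkContainer *container, const char *uri)
{
    if (web_view == NULL || ++pages_loaded > PAGES_PER_PROCESS)
        recycle_web_view (container);
    webkit_web_view_load_uri (web_view, uri);
}

int
main (int argc, char **argv)
{
    gtk_init (&argc, &argv);

    /* Must run before the first web view is created. */
    webkit_web_context_set_process_model (webkit_web_context_get_default (),
        WEBKIT_PROCESS_MODEL_MULTIPLE_SECONDARY_PROCESSES);

    /* ... build a window and container, drive load_next_page() from your
     * load-changed handler, then gtk_main() ... */
    return 0;
}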

Best way to run scheduled tasks in ASP.NET CORE [duplicate]

Today we have built a console application for running the scheduled tasks for our ASP.NET website, but I think this approach is a bit error prone and difficult to maintain. How do you execute your scheduled tasks (in a Windows/IIS/ASP.NET environment)?
Update:
Examples of tasks:
Sending email from an email-queue in the database
Removing outdated objects from the database
Retrieving stats from Google AdWords and filling a table in the database.
This technique by Jeff Atwood for Stack Overflow is the simplest method I've come across. It relies on the "cache item removed" callback mechanism built into ASP.NET's cache system.
Update: Stack Overflow has outgrown this method. It only works while the website is running, but it's a very simple technique that is useful for many people.
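For reference, a minimal sketch of that cache-expiration trick, assuming a Global.asax in a classic ASP.NET application; the trigger key name, the five-minute interval, and the RunTask body are placeholders:

using System;
using System.Web;
using System.Web.Caching;

public class Global : HttpApplication
{
    private const string TriggerKey = "ScheduledTaskTrigger"; // placeholder name
    private static readonly TimeSpan Interval = TimeSpan.FromMinutes(5);

    protected void Application_Start(object sender, EventArgs e)
    {
        ScheduleNextRun();
    }

    private static void ScheduleNextRun()
    {
        // Insert a throwaway item; when it expires, ASP.NET fires the callback.
        HttpRuntime.Cache.Insert(TriggerKey, new object(), null,
            DateTime.UtcNow.Add(Interval), Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable, OnCacheItemRemoved);
    }

    private static void OnCacheItemRemoved(string key, object value,
        CacheItemRemovedReason reason)
    {
        RunTask();         // do the scheduled work
        ScheduleNextRun(); // re-arm the trigger for the next interval
    }

    private static void RunTask()
    {
        // e.g. send queued email, purge stale rows, pull AdWords stats...
    }
}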
Also check out Quartz.NET
All of my tasks (which need to be scheduled) for a website are kept within the website and called from a special page. I then wrote a simple Windows service which calls this page every so often. Once the page runs it returns a value. If I know there is more work to be done, I run the page again, right away, otherwise I run it in a little while. This has worked really well for me and keeps all my task logic with the web code. Before writing the simple Windows service, I used Windows scheduler to call the page every x minutes.
Another convenient way to run this is to use a monitoring service like Pingdom. Point their http check to the page which runs your service code. Have the page return results which then can be used to trigger Pingdom to send alert messages when something isn't right.
Create a custom Windows Service.
I had some mission-critical tasks set up as scheduled console apps and found them difficult to maintain. I created a Windows Service with a 'heartbeat' that would check a schedule in my DB every couple of minutes. It's worked out really well.
Having said that, I still use scheduled console apps for most of my non-critical maintenance tasks. If it ain't broke, don't fix it.
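A skeleton of that kind of heartbeat service might look like the following sketch; the two-minute interval and the CheckSchedule body (which would query your schedule table) are placeholders:

using System.ServiceProcess;
using System.Timers;

// Hypothetical heartbeat service: polls a schedule table every two minutes
// and runs whatever tasks are due.
public class HeartbeatService : ServiceBase
{
    private Timer _timer;

    protected override void OnStart(string[] args)
    {
        _timer = new Timer(120000); // two minutes, in milliseconds
        _timer.Elapsed += (s, e) => CheckSchedule();
        _timer.AutoReset = true;
        _timer.Start();
    }

    protected override void OnStop()
    {
        _timer.Stop();
        _timer.Dispose();
    }

    private void CheckSchedule()
    {
        // Query the schedule table in the DB and execute any due tasks.
    }

    public static void Main()
    {
        ServiceBase.Run(new HeartbeatService());
    }
}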
I've found this to be easy for all involved:
Create a webservice method such as DoSuchAndSuchProcess
Create a console app that calls this webmethod.
Schedule the console app in the task scheduler.
Using this methodology all of the business logic is contained in your web app, but you have the reliability of the Windows Task Scheduler, or any other commercial task scheduler, to kick it off and record any return information such as an execution report; a sketch of such a runner follows below. Using a web service instead of posting to a page has a bit of an advantage because it's easier to get return data from a web service.
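A hypothetical console runner for this pattern, assuming the web method is exposed as an ASMX endpoint with HTTP GET enabled; the URL, service, and method names are placeholders (a generated SOAP proxy would work just as well):

using System;
using System.Net;

class TaskRunner
{
    static int Main()
    {
        try
        {
            using (var client = new WebClient())
            {
                // Call the web method and capture whatever execution
                // report it returns.
                string report = client.DownloadString(
                    "http://localhost/MyApp/Tasks.asmx/DoSuchAndSuchProcess");
                Console.WriteLine(report);
                return 0;
            }
        }
        catch (WebException ex)
        {
            // A non-zero exit code lets Task Scheduler record the failure.
            Console.Error.WriteLine(ex.Message);
            return 1;
        }
    }
}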
Why reinvent the wheel? Use the Thread and Timer classes.
protected void Application_Start()
{
    // Spawn a background thread so Application_Start returns immediately.
    Thread thread = new Thread(new ThreadStart(ThreadFunc));
    thread.IsBackground = true;
    thread.Name = "ThreadFunc";
    thread.Start();
}

// Keep the timer in a field so it is not garbage collected after ThreadFunc returns.
private static System.Timers.Timer t;

protected void ThreadFunc()
{
    t = new System.Timers.Timer();
    t.Elapsed += new System.Timers.ElapsedEventHandler(TimerWorker);
    t.Interval = 10000; // milliseconds
    t.AutoReset = true;
    t.Start();
}

protected void TimerWorker(object sender, System.Timers.ElapsedEventArgs e)
{
    // do the scheduled work here
}
Use Windows Scheduler to run a web page.
To prevent malicious users or search engine spiders from running it, when you set up the scheduled task, simply call the web page with a query string, e.g.: mypage.aspx?from=scheduledtask
Then in the page load, simply use a condition:
if (Request.QueryString["from"] == "scheduledtask")
{
    // execute the task
}
This way no search engine spider or malicious user will be able to execute your scheduled task.
This library works like a charm
http://www.codeproject.com/KB/cs/tsnewlib.aspx
It allows you to manage Windows scheduled tasks directly through your .NET code.
Additionally, if your application uses SQL Server you can use SQL Server Agent to schedule your tasks. This is where we commonly put recurring code that is data driven (email reminders, scheduled maintenance, purges, etc.). A great feature built into SQL Server Agent is its failure notification options, which can alert you if a critical task fails.
I'm not sure what kind of scheduled tasks you mean. If you mean stuff like "every hour, refresh foo.xml" type tasks, then use the Windows Scheduled Tasks system. (The "at" command, or via the control panel.) Have it either run a console app or request a special page that kicks off the process.
Edit: I should add, this is an OK way to get your IIS app running at scheduled points too. So suppose you want to check your DB every 30 minutes and email reminders to users about some data, you can use scheduled tasks to request this page and hence get IIS processing things.
If your needs are more complex, you might consider creating a Windows Service and having it run a loop to do whatever processing you need. This also has the benefit of separating out the code for scaling or management purposes. On the downside, you need to deal with Windows services.
If you own the server you should use the windows task scheduler. Use AT /? from the command line to see the options.
Otherwise, from a web based environment, you might have to do something nasty like set up a different machine to make requests to a certain page on a timed interval.
I've used Abidar successfully in an ASP.NET project (here's some background information).
The only problem with this method is that the tasks won't run if the ASP.NET web application is unloaded from memory (i.e. due to low usage). One thing I tried is creating a task to hit the web application every 5 minutes to keep it alive, but this didn't seem to work reliably, so now I'm using the Windows scheduler and a basic console application to do this instead.
The ideal solution is creating a Windows service, though this might not be possible (i.e. if you're using a shared hosting environment). Keeping things within the web application also makes things a little easier from a maintenance perspective.
Here's another way:
1) Create a "heartbeat" web script that is responsible for launching the tasks if they are due or overdue to be launched.
2) Create a scheduled process somewhere (preferably on the same web server) that hits the web script and forces it to run at a regular interval (e.g. a Windows scheduled task that quietly launches the heartbeat script using IE or what have you).
The fact that the task code is contained within a web script is purely for the sake of keeping the code within the web application code-base (the assumption is that both are dependent on each other), which is easier for web developers to manage.
The alternative approach is to create an executable server script / program that does all the schedule work itself, and to run the executable itself as a scheduled task. This allows a fundamental decoupling between the web application and the scheduled task. Hence, if you need your scheduled tasks to run even in the event that the web app / database is down or inaccessible, you should go with this approach.
You can easily create a Windows Service that runs code on an interval using the 'ThreadPool.RegisterWaitForSingleObject' method. It is really slick and quite easy to set up. This method is a more streamlined approach than using any of the Timers in the Framework.
Have a look at the link below for more information:
Running a Periodic Process in .NET using a Windows Service:
http://allen-conway-dotnet.blogspot.com/2009/12/running-periodic-process-in-net-using.html
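Beyond the linked article, the core of the approach looks roughly like this; the 60-second interval and the DoWork body are placeholders:

using System;
using System.Threading;

class PeriodicWorker
{
    static void Main()
    {
        using (var stopSignal = new ManualResetEvent(false))
        {
            // Fires DoWork every 60 seconds until stopSignal is set.
            RegisteredWaitHandle registration = ThreadPool.RegisterWaitForSingleObject(
                stopSignal,
                (state, timedOut) => { if (timedOut) DoWork(); },
                null,
                TimeSpan.FromSeconds(60),
                executeOnlyOnce: false);

            Console.ReadLine();  // in a real service this would be OnStop
            stopSignal.Set();    // wake the wait so no further ticks occur
            registration.Unregister(stopSignal);
        }
    }

    static void DoWork()
    {
        Console.WriteLine("Task ran at {0:T}", DateTime.Now);
    }
}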
We use console applications also. If you use logging tools like Log4net you can properly monitor their execution. Also, I'm not sure how they are more difficult to maintain than a web page, given you may be sharing some of the same code libraries between the two if it is designed properly.
If you are against having those tasks run on a timed basis, you could have a web page in the administrative section of your website that acts as a queue. The user puts in a request to run the task, which in turn inserts a blank datestamp record into the MyProcessQueue table, and your scheduled task checks every X minutes for a new record in MyProcessQueue. That way, it only runs when the customer wants it to run.
Hope those suggestions help.
One option would be to set up a windows service and get that to call your scheduled task.
In WinForms I've used Timers, but I don't think that would work well in ASP.NET.
A New Task Scheduler Class Library for .NET
Note: Since this library was created, Microsoft has introduced a new task scheduler (Task Scheduler 2.0) for Windows Vista. This library is a wrapper for the Task Scheduler 1.0 interface, which is still available in Vista and is compatible with Windows XP, Windows Server 2003 and Windows 2000.
http://www.codeproject.com/KB/cs/tsnewlib.aspx

MS Access crashes when trying to close down a connection to Blackbaud's Raiser's Edge API

I am the IT department of a non-profit organization. I have a question today which might be too specialized for this forum, and I hope I do not waste my time writing it up. We are using Blackbaud's 'Raiser's Edge' (RE) software (written in VB6 and VB.NET as far as I know) to keep track of our membership and donations. We have an MS Access application (we have been using it since before we got RE) to process donations, and for now I want to keep it and only make minor changes to adapt it to the new software.
The MS Access program now makes a few calls to the RE API, which work great. To log in and establish a connection I have to create a new 'REAPI' object and use it for other API calls. That REAPI object has a property called SignOutOnTerminate which needs to be set to TRUE when creating the object. It is supposed to kill all connections to RE once my application closes. There is no regular .close method.
Once I create the object I can do work as many times as I want and there is no problem at all as far as I can see.
However when trying to close the application or set the object to nothing (Set REAPI = Nothing) Access crashes immediately (It fades out and I get the message that Windows is looking for a solution to the problem. Then Access closes and restarts itself.)
It is more annoying and unprofessional than it is a hindrance to production, but I want to fix it.
The App was developed on Windows 7 64-bit with Access 2010 32-bit. It was tested on Windows XP with Office 2003 or 2007 machines (32-bit) and behaves the same way.
I have posted this problem already on two Blackbaud forums and tried a suggested workaround, which did not work (kill the process with a shell command and then set the object to nothing). Hopefully I will get more answers soon.
I also tried just excluding SignOutOnTerminate when creating the object, but got the same behavior.
I looked in the Event Manager --> Application Log and found the crash. It reported that Access crashed because of this DLL: C:\Windows\System32\MSVBVM60.dll (it is actually located in the SysWOW64 folder, as Access is a 32-bit application).
Looking up this error I found some suggestions to replace the DLL with an earlier version, the one which ships with XP. I found a file and tried the suggestion, but it still crashed. The error log reported the older version number as faulting, so it was registered correctly.
I also created a case with Blackbaud but the rep did not know what the problem is and did not have MS Access installed. He is trying to get his support team to install it for him so he can test and investigate this error.
My last suspicion is that the API is causing the error and my code is fine.
But before I make this assumption, and until I get my answer from Blackbaud, I want to do a final check; I have run out of ideas for further troubleshooting and have resorted to posing this problem in this forum.
Any ideas?
I realise that this is an old thread, and if you have solved this by now then that is great. However, this is a known issue with The Raiser's Edge API. If you use .NET with RE's API (which is COM based) there is definitely some resource that is not cleaned up properly. At one point I suspected that it was something to do with making use of RE's graphical interface, i.e. by calling the regular login method to log you into RE. However, even if you log in to RE using the "as a server" method, supplying the user name and password, it still crashes on exiting the application.
We have an installer that sets up credentials in RE. The installer is in .NET and accesses the RE API. We now show a message just before the end of the application telling users to ignore the impending crash... Not a great solution by any means.

.Net 4.0 MemoryCache Clearing

I am using a .Net 4.0 MemoryCache in my WCF service.
I originally was using the Default Cache as below:
var cache = MemoryCache.Default;
Then I did the usual pattern of trying to find something in the cache, getting it, and setting it into the cache if it was not found (code snippet / pseudo-code below):
var geoCoordinate = cache.Get(cacheKey) as GeoCoordinate;
if (geoCoordinate == null)
{
geoCoordinate = LookupGeoCoordinate(cacheKey); // hypothetical helper; fetch from the real source here
cache.Set(cacheKey, geoCoordinate, DateTimeOffset.Now.AddDays(7));
}
I was finding that my entries were disappearing after approx. 2 minutes. Even if my code placed the missing entries back into the cache, subsequent cache Gets would return null.
My WCF Service is being hosted by IIS 7.5. If I recycled the App Pool, everything would work normally for 2 minutes, but then the pattern as described above would repeat.
After doing some researching I then did the below to replace:
var cache = MemoryCache.Default;
// WITH NEW CODE TO CREATE CACHE AS BELOW:
var config = new NameValueCollection();
//Hack: set the polling interval to 10 days so it will not clear the cache.
config.Add("pollingInterval", "10:00:00:00");
config.Add("physicalMemoryLimitPercentage", "90");
config.Add("cacheMemoryLimitMegabytes", "2000");
//instantiate cache
var cache = new MemoryCache("GeneralCache", config);
It seems that nothing I put into physicalMemoryLimitPercentage or cacheMemoryLimitMegabytes helps, but setting pollingInterval to a large timespan does.
I.e., if I set it as below:
config.Add("pollingInterval", "00:00:15:00");
Then everything works fine for 15 minutes.
Note: if my WCF service is hosted by IIS Express in my dev environment, I cannot reproduce the problem.
It only seems to happen when my WCF service is hosted by IIS 7.5.
My app pool on IIS 7.5 is NOT recycling.
Has anybody experienced something like this?
I have seen the below:
MemoryCache does not obey memory limits in configuration
Thanks,
Matt
I too have seen this issue and filed a bug with MS here, with a simple reproducer project.
This has been resolved by MS in the above bug I filed, with a workaround there and an upcoming QFE for .NET 4, as well as confirmation that this isn't a problem in 4.5.
I have not yet tried the workaround.
I can, however, give some more information on the conditions required for me to recreate this. The application pool needed to be in Integrated Pipeline mode for me to see the issue; Classic mode fixes it, though it removes some of the benefits of moving to IIS 7.5.
Equally, when using Integrated mode, I did not see the issue if I used a built-in application pool identity such as ApplicationPoolIdentity. However, my app needs to run as a custom identity using a service account, and it is at that point that I see the behavior. Therefore, if you don't need Integrated mode or a custom identity, you may be able to work around this.
Perhaps the built-in accounts have permissions to do the cache memory statistics gathering initiated by the pollingInterval that my custom identity does not have; I don't know.
Hope this helps, or even that someone else can join more of the dots to figure out a better workaround.