Slow Response When IIS Targets the Project Folder as the Root Instead of the Default wwwroot - sql

Is it possible that setting the IIS root to the same directory as the project root will cause slow performance?
I have an ASP.NET Web Application that issues some SQL commands to GET/POST records on the local SQL database. Recently I came up with the idea that I no longer have to start debugging each time I want to test the code, by changing the root of IIS from the default (C:\inetpub\wwwroot) to the root of the web-application project folder.
However, since then I have encountered a problem where some operations on the web GUI, especially those which include POST requests, get extremely slow. For example, adding a new document or rewriting an existing one on the database now takes about a minute, whereas it used to take less than 20 seconds. Also, it seems that repeated POST commands make themselves slower (restarting the computer resets the situation). So I guess some read/write process may leave garbage behind that conflicts with other processes.
Could anyone suggest a core issue behind this phenomenon? Also, please let me know if my explanation isn't clear enough to show the problem.

"I have encountered a problem where some operations on the web GUI, especially those which include POST requests, get extremely slow"
Changing the root directory is very unlikely to cause this issue. Your application was already performing slowly (20 seconds is also slow).
So there is no phenomenon here, in my opinion. You have to debug your application to find out where the delay is. To find the root cause, you can use a profiler like PerfView or a tool like DebugDiag.
In the case of DebugDiag, choose the second option in the above link to capture a memory dump. Once you have a memory dump, simply double-click the dump file and DebugDiag will do an automated analysis and tell you where the problem is in your application code. For example, it can tell you that your DB call is taking a long time. If you are not able to find the cause, please update the question with the analysis result.
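Before reaching for a profiler, a crude timing check can already tell you whether the delay is in the database call itself. Below is a minimal sketch of that idea (my own illustration, not part of the original answer; the connection string, table, and column names are placeholders):
using System;
using System.Data.SqlClient;
using System.Diagnostics;

class SlowPostCheck
{
    static void Main()
    {
        var stopwatch = Stopwatch.StartNew();
        // Placeholder connection string and statement; substitute your real POST query.
        using (var connection = new SqlConnection("Server=.;Database=MyAppDb;Integrated Security=true;"))
        using (var command = new SqlCommand("INSERT INTO Documents (Title) VALUES (@title)", connection))
        {
            command.Parameters.AddWithValue("@title", "test");
            connection.Open();
            command.ExecuteNonQuery();
        }
        stopwatch.Stop();
        // If most of the minute shows up here, the problem is the query or the database,
        // not the IIS root directory change.
        Console.WriteLine("DB call took {0} ms", stopwatch.ElapsedMilliseconds);
    }
}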

Related

What's the performance impact of a large Apache access.log?

If a log file like access.log or error.log gets very large, will the large size impact the performance of Apache or of users accessing the site? From my understanding, Apache doesn't read the entire log into memory, but just uses a file handle to append. Right? If so, I don't have to remove the logs manually every time they get large, except for filesystem issues. Please help and correct me if I'm wrong. Or is there any Apache log I/O issue I'm supposed to take care of when running it?
Thanks very much.
Well, I totally agree with you. To my understanding, Apache accesses the log files using handles and just appends new messages at the end of the file. That's why a huge log file won't make a difference when it comes to writing to the file. But if you want to access the file or open it with some kind of log-monitoring tool, then the huge size will slow down the process of reading it.
So I would suggest you use log rotation (for example by piping the log through Apache's rotatelogs utility, or with an external tool such as logrotate) to get an overall better end result.
This suggestion comes directly from the Apache web site.
Log Rotation
On even a moderately busy server, the quantity of information stored in the log files is very large. The access log file typically grows 1 MB or more per 10,000 requests. It will consequently be necessary to periodically rotate the log files by moving or deleting the existing logs. This cannot be done while the server is running, because Apache will continue writing to the old log file as long as it holds the file open. Instead, the server must be restarted after the log files are moved or deleted so that it will open new log files.
From the Apache Software Foundation site

Requested registry access is not allowed on remote box

We have developed a somewhat diffuse system for handling component installation and upgrades across server environments in an automated manner. It worked happily on our development environment, but I've run into a new problem I've not seen before when attempting to deploy it to a live environment.
The environment in question comprises ten servers, five each on two different geographical sites and domains. Each server runs a WCF-based Windows service that allows it to talk to each of the other servers and thus keep track of what's installed where. To facilitate this process we make use of machine-level environment variables - and modifying these obviously means registry changes.
Having got all this set up, my first attempts to use the system to install stuff seemed to work, but on one box in particular I'm getting "Requested registry access is not allowed" errors when the code tries to modify the environment variables. I've googled this, obviously, but there seem to be a variety of different causes and I'm really not sure which are the applicable ones. It doesn't help that this is a live environment and that our system has relatively limited internal logging capability.
The only clue I've got is that the guy who did the install on the development boxes wrote a very patchy set of documentation on the process. This includes an instruction to modify the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\LocalAccountTokenFilterPolicy value in the registry and set it to 1. I skipped this during the installation as it looked like a rather dubious security risk. Reading the documentation about this key, it looks relevant, but my initial attempts at installing stuff on other boxes without this setting enabled worked fine. Sadly the author went on extended leave over the holidays yesterday and left no explanation of why this key was needed, so we're a bit in the dark.
Can anyone help us toward the light?
Cheers,
Matt
I've seen this error when code tries to write to the event log using something like EventLog.WriteEntry() with a source that is not a registered event source. When an unregistered source is specified, the call will attempt to register the source, which involves writing to the registry.
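A way around the implicit registration is to register the source ahead of time under an account that does have rights (e.g. from an installer), so that later WriteEntry calls don't touch the registry at all. A minimal sketch, with a placeholder source name:
using System.Diagnostics;

class RegisterEventSource
{
    static void Main()
    {
        const string source = "MyWcfService"; // placeholder; use your service's source name
        if (!EventLog.SourceExists(source))
        {
            // This is the call that writes to the registry and therefore needs admin rights.
            EventLog.CreateEventSource(source, "Application");
        }
        EventLog.WriteEntry(source, "Source registered.", EventLogEntryType.Information);
    }
}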
I would suggest taking a look at SysInternals Process Monitor:
http://technet.microsoft.com/en-us/sysinternals/bb896645
You can use this to monitor registry access and find out what key you're getting the access denied error on. This may give you some insight as to what is causing the problem.
Essentially he's disabling part of the Remote User Account Control. Without setting the value, Remote UAC strips administrative privileges from account tokens remotely accessing the machine. Yes, it does have security implications. See Description of User Account Control and remote restrictions in Windows Vista for an explanation.
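For reference, setting that value from code looks roughly like the sketch below (my own illustration; it has to run elevated on the target machine, and the security trade-off described above still applies):
using Microsoft.Win32;

class RemoteUacFilter
{
    static void Main()
    {
        // Creates the key if missing and opens it writable; requires administrative rights.
        using (var key = Registry.LocalMachine.CreateSubKey(
            @"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"))
        {
            // 1 = don't filter administrative privileges out of remote logon tokens.
            key.SetValue("LocalAccountTokenFilterPolicy", 1, RegistryValueKind.DWord);
        }
    }
}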

Strange SharePoint 2010 and SQL Server problem

We have a client who has just recently acquired their first SharePoint 2010 system. They have two servers: one runs SQL Server 2008 R2, and the other runs SharePoint 2010. When I restore a backup of the site collection from our development environment to the server running SharePoint, the site works fine for about a day. After that day, when we try to connect to the home page of the site, the site struggles to connect. The browser just says “Connecting…” until the page eventually times out. But you can still access the “backend” pages, like the View All Site Content page (http:///_layouts/viewlsts.aspx) and the Site Settings page (http:///_layouts/settings.aspx).
Here’s a little info about the web part that we are running on the home page. The web part checks whether the cache is empty, and if it is, it populates the cache with all the items in a specific list. The list contains a lot of items, +/- 4000. So it’s obvious that SharePoint will be retrieving a lot of data from the SQL Server.
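To illustrate, the web part logic is roughly the following (a simplified reconstruction, not the exact code; the cache key and expiry are placeholders):
using System;
using System.Web;
using System.Web.Caching;

static class ListItemCache
{
    // Returns the cached items, loading all ~4000 list items on a cache miss.
    public static object GetItems(Func<object> loadAllItems)
    {
        object items = HttpRuntime.Cache["AllListItems"];
        if (items == null)
        {
            items = loadAllItems(); // the expensive round-trip to SharePoint/SQL
            HttpRuntime.Cache.Insert("AllListItems", items, null,
                DateTime.UtcNow.AddMinutes(30), Cache.NoSlidingExpiration);
        }
        return items;
    }
}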
If I delete the web application (that includes deleting the content database and the IIS web site) and re-deploy the site collection using the backup I made on our development environment, the site works fine again for about a day.
After running into these problems we started monitoring resource usage on the SQL Server (using the Resource Monitor). If you filter the network traffic to display only the network usage of the sqlserver.exe process, it shows that it’s communicating at only +/- 30 KB/s. This is incredibly slow! When you copy a 390 MB file from the SharePoint server to the SQL Server (to test the connection speed between the two servers) it copies the file in 2 seconds.
This is a very strange problem that raises a couple of questions. First of all, our development environment is almost exactly the same as their environment (in fact, we have less RAM), so why don’t we have any problems like this in our development environment? Secondly, if you deploy the site from scratch, why does the site work for a day and only start causing problems later? And the final question: why is the communication speed between SharePoint and SQL so slow, while the connection speed between the two servers is very quick?
I really hope someone can help us with this, or give us a couple of things we can troubleshoot.
Thanks in advance!
After a very long struggle we found a solution.
We found this post:
http://trycatch.be/blogs/tom/archive/2009/04/22/never-turn-off-quot-auto-create-amp-auto-update-statistics-quot.aspx
We tested it and it worked!!!
So all we had to do was switch "Auto create statistics" and "Auto update statistics" to True, and the problem was solved.
Thanks for all the replies.
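For reference, those two options boil down to a pair of ALTER DATABASE statements. Here is a minimal ADO.NET sketch of applying them (my own, not from the post; the server and the content-database name are placeholders):
using System.Data.SqlClient;

class EnableAutoStatistics
{
    static void Main()
    {
        using (var connection = new SqlConnection("Server=SQLSERVER01;Database=master;Integrated Security=true;"))
        {
            connection.Open();
            // [WSS_Content] is a placeholder for the actual SharePoint content database.
            var sql = "ALTER DATABASE [WSS_Content] SET AUTO_CREATE_STATISTICS ON; " +
                      "ALTER DATABASE [WSS_Content] SET AUTO_UPDATE_STATISTICS ON;";
            using (var command = new SqlCommand(sql, connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }
}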
"why is the communication speed
between the SharePoint and SQL so
slow, but the connection speed between
the two servers is very quick?"
You've shown it's not the network with the file copy test, so that would indicate that either:
a) the SQL Server is overloaded and can't keep up, or
b) the WFE is overloaded and can't keep up.
Have you performed basic performance troubleshooting, looking at both servers for things like CPU/memory/disk swapping, etc.?
Also, there are lots of specific steps to troubleshoot SQL performance. This reference is for 2005, but all the same principles apply.
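For the basic CPU/memory check, you can either watch Performance Monitor interactively or sample the same counters from code. A minimal sketch (my own illustration; the counter names are the standard English ones):
using System;
using System.Diagnostics;
using System.Threading;

class BasicPerfCheck
{
    static void Main()
    {
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        var freeMemory = new PerformanceCounter("Memory", "Available MBytes");
        cpu.NextValue();     // the first CPU sample always reads 0
        Thread.Sleep(1000);  // give the counter an interval to measure
        Console.WriteLine("CPU: {0:F1}%  Available RAM: {1} MB",
            cpu.NextValue(), freeMemory.NextValue());
    }
}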

Where are the best locations to write an error log in Windows?

Where would you write an error log file, say ErrorLog.txt, in Windows? Keep in mind the path would need to be open to basic users for file write permissions.
I know the event log is a possible location for writing errors, but does it work for "user"-level permissions?
EDIT: I am targeting Windows 2003, but I was posing the question in such a way as to have a "General Guideline" for where to write error logs.
As for the EventLog, I have had issues before in an ASP.NET application where I wanted to log to the Windows event log, but I had security issues causing me heartache. (I do not recall the issues I had, but remember having them.)
Have you considered logging to the Event Viewer instead? If you want to write your own log, I suggest the user's local application data directory. Make a product directory under there. It's in a different place on different versions of Windows.
On Vista, you cannot put files like this under C:\Program Files. You will run into a lot of problems with it.
In .NET, you can find out this folder with this:
Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData)
And the Event Log is fairly simple to use too:
http://msdn.microsoft.com/en-us/library/system.diagnostics.eventlog.aspx
Text files are great for a server application (you did say Windows 2003). You should have a separate log file for each server application, the location is really a matter of convention to agree with administrators. E.g. for ASP.NET apps I've often seen them placed on a separate disk from the application under a folder structure that mimics the virtual directory structure.
For client apps, one disadvantage of text files is that a user may start multiple copies of your application (unless you've taken specific steps to prevent this). So you have the problem of contention if multiple instances attempt to write to the same log file. For this reason I would always prefer the Windows Event Log for client apps. One caveat is that you need to be an administrator to create an event log - this can be done e.g. by the setup package.
If you do use a file, I'd suggest using the folder Environment.SpecialFolder.LocalApplicationData rather than SpecialFolder.ApplicationData as suggested by others. LocalApplicationData is on the local disk: you don't want network problems to stop you from logging when the user has a roaming profile. For a WinForms application, use Application.LocalUserAppDataPath.
In either case, I would use a configuration file to decide where to log, so that you can easily change it. E.g. if you use Log4Net or a similar framework, you can easily configure whether to log to a text file, event log, both or elsewhere (e.g. a database) without changing your app.
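To make that concrete, here is a minimal sketch of resolving a per-user, local-disk log path and appending to it ("MyApp" and the file name are placeholders of mine):
using System;
using System.IO;

class LocalLogPath
{
    static void Main()
    {
        string baseDir = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
        string logDir = Path.Combine(baseDir, "MyApp", "Logs");
        Directory.CreateDirectory(logDir); // no-op if the folder already exists
        string logFile = Path.Combine(logDir, "ErrorLog.txt");
        File.AppendAllText(logFile, DateTime.Now.ToString("u") + " Something went wrong" + Environment.NewLine);
    }
}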
The standard location(s) are:
C:\Documents and Settings\All Users\Application Data\MyApp
or
C:\Documents and Settings\%Username%\Application Data\MyApp
(aka %UserProfile%\Application Data\MyApp) which would match your user level permission requirement. It also separates logs created by different users.
Using the .NET runtime, these can be built as:
AppDir = System.Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData)
or
AppDir = System.Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData)
followed by:
MyAppDir = IO.Path.Combine(AppDir, "MyApp")
(Which, hopefully, maps to Vista profiles too.)
Personally, I would suggest using the Windows event log, it's great. If you can't, then write the file to the ApplicationData directory or the ProgramData (Application Data for all users on Windows XP) directory.
The Windows event log is definitely the way to go for logging of errors. You're not limited to the "Application" log as it's possible to create a new log target (e.g. "My Application"). That may need to be done as part of setup as I'm not sure if it requires administrative privileges or not. There's a Microsoft example in C# at http://support.microsoft.com/kb/307024.
Windows 2008 also has Event Log Forwarding which can be quite handy with server applications.
I agree with Lou on this, but I prefer to set this up in a configuration file like Joe said. You can use
<file value="${APPDATA}/Test/log-file.txt" />
("Test" could be whatever you want, or removed entirely) in the configuration file, which causes the log file to be written to "/Documents and Settings/LoginUser/Application Data/Test" on Windows XP and to "/Users/LoginUser/AppData/Roaming/Test" on Windows Vista.
I am just adding this as I just spent way too much time figuring how to make this work on Windows Vista...
This works as-is with Windows applications. To use logging in web applications, I found Phil Haack's blog entry on this to be a great resource:
http://haacked.com/archive/2005/03/07/ConfiguringLog4NetForWebApplications.aspx
%TEMP% is always a good location for logs I find.
Going against the grain here - it depends on what you need to do. Sometimes you need to manipulate the results, so log.txt is the way to go. It's simple, mutable, and easy to search.
Take an example from Joel. FogBugz will send a log / dump of error messages via HTTP to their server. You could do the same and not have to worry about the user's access rights on their drive.
I personally don't like to use the Windows Event Log where I am right now, because we do not have access to the production servers, so we would need to request access every time we wanted to look at the errors. It is not a speedy process, unfortunately, so your troubleshooting is completely halted by waiting for someone else. I also don't like that the entries kind of get lost among the ones from other applications. Sure, you can sort, but it's just a bit of a nuisance scrolling down. What you use will end up being a combination of personal preference and the limitations of the environment you are working in (log file, event log, or database).
Put it in the directory of the application. The users will need access to the folder to run the application anyway, and you can check write access on application startup.
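For that startup check, a minimal sketch (my own illustration; the fallback message is just an example):
using System;
using System.IO;

class WriteAccessCheck
{
    static bool CanWriteTo(string directory)
    {
        try
        {
            // Create and immediately delete a throwaway probe file.
            string probe = Path.Combine(directory, Path.GetRandomFileName());
            using (File.Create(probe, 1, FileOptions.DeleteOnClose)) { }
            return true;
        }
        catch (UnauthorizedAccessException) { return false; }
        catch (IOException) { return false; }
    }

    static void Main()
    {
        string appDir = AppDomain.CurrentDomain.BaseDirectory;
        Console.WriteLine(CanWriteTo(appDir)
            ? "Logging to the application directory."
            : "No write access; fall back to another location.");
    }
}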
The event log is a pain to use for troubleshooting, but you should still post significant errors there.
EDIT - You should look into the MS Application Blocks for logging if you are using .NET. They really make life easy.
Jeez Karma-killers. Next time I won't even offer a suggestion when the poster puts up an incomplete post.

Issues with DB after publishing via Database Publishing Wizard from MSFT

I work on quite a few DotNetNuke sites, and occasionally (I haven't figured out the common factor yet), when I use the Database Publishing Wizard from Microsoft to create scripts for a site I've created on my dev server, after running the scripts at the host (usually GoDaddy.com) and uploading the site files, I get an error... I'm 99.9% sure that it's not file related, so I'm not sure where to begin with the DB. Unfortunately with DotNetNuke you don't get the YSOD, but a generic error, with no real way to find the actual exception that has occurred.
I'm just curious if anyone has had similar deployment issues using the Database Publishing Wizard, and if so, how they overcame them? I own the Red Gate toolset, but some hosts like GoDaddy don't allow you to connect directly to their servers...
The Database Publishing Wizard's generated scripts usually need to be tweaked, since it sometimes gets the order of table/procedure creation wrong when dealing with constraints. What I do is first back up the database, then run the script, and if I get an error, I move that query to the end of the script. Continue restoring the database and running the script until it works.
There are two areas that I would look at:
Are you running in the dbo schema, and was your scripted database using dbo?
Are you using an objectqualifier in either your dev or your production environment? (Look at your SqlDataProvider configuration settings.)
You should be able to expose the underlying error message by setting the following in the web.config (inside system.web):
<customErrors mode="Off" />
Could you elaborate on "and uploading the site files"? A new instance of DNN? Updating an existing site? Upgrading the DNN version? If an upgrade or update, what files are you adding/overwriting?
Also, when using GoDaddy, can you check to verify that the web site's identity (network service or asp.net machine account depending on your IIS version) has sufficient permissions to the website's file system? It should have modify permissions and these may need to be reapplied if you are overwriting files.
IIS6 (XP, Server 2000, 2003) = ASP.Net Machine Account
IIS7 (Vista, Server 2008) = Network Service
Test your generated scripts on a new local database (using the free SQL Express product or the full meal deal). If it runs fine locally, then you can be confident that it will run elsewhere, all things being equal.
If it bombs when you run it locally, use the process of elimination and work your way through the script execution to find the offending code.
My hunch is that the order of scripts could be off. I think I've had that happen before with the database publishing wizard.
Just read your follow up. In every case that I've had your problem, it was always something to do with the connection string in web.config. Even after hours of staring at it, it was always a connection string issue in web.config. Get up, take a walk and then come back.
If you are getting one of DNN's error pages, there is a chance it may have logged the error to the eventlog table.
Depending on exactly what is happening and what DNN is showing you, you might be able to manually look inside the EventLog table, pull out the XML data stored there, and parse it to find the stack trace and detailed information regarding the specific error at hand.
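As a hypothetical illustration of that last step (the element names below are my assumptions about the stored XML, not DNN's documented schema), parsing an exported entry could look like:
using System;
using System.Xml.Linq;

class ParseDnnLogEntry
{
    static void Main()
    {
        // Assume the XML from one EventLog row was saved to a local file first.
        XDocument doc = XDocument.Load("logentry.xml");
        foreach (XElement property in doc.Descendants("LogProperty"))
        {
            Console.WriteLine("{0} = {1}",
                (string)property.Element("PropertyName"),
                (string)property.Element("PropertyValue"));
        }
    }
}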
I have found, though, that I get a MUCH better overall experience with deployments using backups and restores of my database; that way I am 100% sure that all objects moved correctly, and honestly it works better in my experience.
With GoDaddy, I know another MAJOR common issue is incorrect file permissions, preventing DNN from modifying the web.config and other files that it needs to change.