Strange SharePoint 2010 and SQL Server problem

We have a client who has just acquired their first SharePoint 2010 system. They have two servers: one running SQL Server 2008 R2, and the other running SharePoint 2010. When I restore a backup of the site collection from our development environment to the server running SharePoint, the site works fine for about a day. After that, when we try to connect to the home page of the site, the site struggles to connect. The browser just says “Connecting…” until the page eventually times out. But you can still access the “backend” pages, like the View All Site Content page (http:///_layouts/viewlsts.aspx) and the site settings page (http:///_layouts/settings.aspx).
Here’s a little info about the web parts we are running on the home page. Each web part checks whether the cache is empty, and if it is, populates the cache with all the items in a specific list. The list contains a lot of items, roughly 4,000. So it’s obvious that SharePoint will be retrieving a lot of data from the SQL Server.
If I delete the web application (including the content database and the IIS web site) and re-deploy the site collection using the backup I made in our development environment, the site works fine again for about a day.
After running into these problems we started monitoring resource usage on the SQL Server (using Resource Monitor). If you filter the network traffic to display only the sqlserver.exe process, it shows that it’s communicating at only about 30 KB/s. This is incredibly slow! Yet when you copy a 390 MB file from the SharePoint server to the SQL Server (to test the connection speed between the two servers), it copies in 2 seconds.
This is a very strange problem that raises a couple of questions. First of all, our development environment is almost exactly the same as their environment (in fact, we have less RAM), so why don’t we have any problems like this in our development environment? Secondly, why does the site work for a day when deployed from scratch, and only start causing problems later? And finally: why is the communication speed between SharePoint and SQL Server so slow, while the file-copy speed between the two servers is very quick?
I really hope someone can help us with this, or give us a couple of things we can troubleshoot.
Thanks in advance!

After a very long struggle we found a solution.
We found this post:
http://trycatch.be/blogs/tom/archive/2009/04/22/never-turn-off-quot-auto-create-amp-auto-update-statistics-quot.aspx
We tested it and it worked!!!
So all we had to do was switch "Auto create statistics" and "Auto update statistics" to true, and the problem was solved.
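For reference, a minimal T-SQL sketch of that change; [WSS_Content] is a placeholder for the actual SharePoint content database name:

```sql
-- Re-enable automatic statistics on the content database.
-- [WSS_Content] is a placeholder; substitute your database name.
ALTER DATABASE [WSS_Content] SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE [WSS_Content] SET AUTO_UPDATE_STATISTICS ON;

-- Verify that the settings took effect.
SELECT name, is_auto_create_stats_on, is_auto_update_stats_on
FROM sys.databases
WHERE name = 'WSS_Content';
```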
Thanks for all the replies!

"why is the communication speed between the SharePoint and SQL so slow, but the connection speed between the two servers is very quick?"
You've shown it's not the network with the file copy test, so that would indicate that either
a) SQL server is overloaded and can't keep up or
b) the WFE is overloaded and can't keep up.
Have you performed basic performance troubleshooting on both servers, looking at things like CPU, memory, disk, swapping, etc.?
Also, there are lots of specific steps for troubleshooting SQL Server performance. This reference is for 2005, but all the same principles apply.
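As a starting point, a couple of illustrative DMV queries (available since SQL Server 2005) can show whether SQL Server itself is struggling; treat this as a sketch rather than a full troubleshooting methodology:

```sql
-- Top 5 statements by total CPU time since the plan was cached.
SELECT TOP 5
    qs.total_worker_time AS total_cpu_time,
    qs.execution_count,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1, 200) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;

-- What active requests are currently waiting on.
SELECT session_id, status, wait_type, wait_time, blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id > 50;  -- skip system sessions
```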

Related

How to move an MVC application from one server to another?

Currently we are subscribed to GoDaddy's dedicated server lease, and we are now considering purchasing our own server and moving off of GoDaddy.
I have no idea how to move all of my source code from one server to another, along with the database and other files. Please explain the process to follow.
I also wanted to ask whether there would be any charges for the third-party tools I have used in my application.
To migrate my web application from one server to another I would do the following:
Make a list of everything to transfer, which would be:
the latest running source code on the server (ideally located in wwwroot)
database backup files (usually .bak, or detached .mdf files)
Copy the source code and database backup to the target server. Depending on the type of server, you may need to set up the site in IIS and point it to the new directory.
Restore the database backup on the database server.
Edit web.config to point to the new database server and credentials.
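For the last step, the edit is typically a connection-string entry like the sketch below; the name, server, database, and credentials are all placeholders, so match them to your app's existing entry:

```xml
<!-- Illustrative web.config fragment; all values are placeholders. -->
<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Data Source=NewDbServer;Initial Catalog=MyAppDb;User ID=appUser;Password=secret"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```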
Another important tip: if you have kept your domain name with GoDaddy and are only changing the hosting server, you may also need to change the nameservers for your domain, without which you will not be able to point your domain to the new hosting provider!
You may not succeed on the very first attempt at copying everything to the target server. It's always better to keep a backup so that you can copy the files again if something goes wrong on the new server.
As far as third-party charges are concerned, please check with your service providers; they will be able to guide you best. For the simple steps above you do not need to worry about licensing.
All the best!

Slow response when IIS targets the project folder as the root instead of the default wwwroot

Is it possible that setting the IIS root to the project root directory will cause slow performance?
I have an ASP.NET Web Application that issues SQL commands to GET/POST records on the local SQL database. Recently I came up with the idea that I no longer have to start debugging each time I test the code, by changing the root of IIS from the default (C:\inetpub\wwwroot) to the root of the web-application project folder.
However, after that, I encountered a problem where some operations in the web GUI, especially those involving POST requests, became extremely slow. For example, adding a new document or rewriting an existing one in the database now takes about a minute, whereas it used to take less than 20 seconds. Also, repeated POST commands seem to get slower and slower (restarting the computer resets the situation). So I guess some read/write process may be leaving garbage behind that conflicts with other processes.
Could anyone suggest a root cause for this phenomenon? Also, please let me know if my explanation isn't clear enough.
"I have encountered a problem where some manipulation on the web GUI, especially which include POST requests get extremely slow"
Changing the root directory is very unlikely to cause this issue. Your application was already performing very slowly (20 seconds is also slow).
So there is no strange phenomenon, in my opinion. You have to debug your application to find out where the delay is. To find the root cause, you can use a profiler like PerfView or a tool like DebugDiag.
In the case of DebugDiag, choose the second option in the link above to capture a memory dump. Once you have a memory dump, simply double-click the dump file and DebugDiag will do an automated analysis and tell you where the problem is in your application code. For example, it can tell you that your DB call is taking time. If you are not able to find the cause, please update the question with the analysis result.

RavenDB Error connecting when trying to create indexes

I had an issue with my server that hosts RavenDB. It was running out of hard drive space. I cleared up some space by deleting a few databases that were no longer in use (through the management portal). I then shut down the RavenDB service. I deleted the data from the "PeriodicBackup-Temp" folder in the directory for one of the databases, and restarted the server. When the server restarted, I was getting errors from any site that tried to connect to any of the databases (503 server error). I debugged the error, and found that it is happening when I create the document store. Specifically:
IndexCreation.CreateIndexes(new CompositionContainer(new TypeCatalog(types)), docStore);
Now, the thing is, I haven't changed any code on these sites in a long time, and I certainly haven't changed anything to do with connecting to RavenDB or creating indexes. Here is what the error said:
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
For a while, I was also getting errors in the management portal that said that whichever database I was currently looking at did not exist (which was odd, since I was looking at the documents in the document stores while it was telling me the database did not exist...)
It's now four hours later. I have noticed all of the sites except two have started working at some point. I was hoping that Raven just needed to rebuild indexes, but when I look at the databases that correspond to the sites that are still throwing the error, Raven says that there are no stale indexes.
I am using build 2750. I have been using this build for over a year if I remember correctly.
All of my sites use the exact same code base. They are hosted on different sites and connect to different databases, but other than that they are exactly the same. I'm pretty confident that this issue has something to do with the database server, and not the web server.
Right now, the sites are not getting used, so it's not a load issue. The RavenDB process isn't even using any CPU except occasionally.
Any ideas what could be causing this? I hate to just "hope" that it's going to start working in a few hours, but it's all I have at this point.
After waiting about 8 hours, those two sites were still unable to connect. I restarted those websites, and everything started working again. This is something I had tried earlier, so I don't know exactly what needed to complete before restarting the site had an effect. I am completely up and running again. If anyone can shed some light on why this fixed it, I'm all ears.

SharePoint 2010 web frontend

I have a SharePoint 2010 farm with two web front-end servers (medium size). One of them is displaying some weird results, and I was wondering if there's a way to see which of my front-end servers is giving me that error. I mean, if I have two servers, ServerFrontEnd1 and ServerFrontEnd2, when I open the site I would like to know which of those servers the response is coming from (load balancing).
How is load balancing done in SharePoint 2010?
And also, how can I clear the cache of my farm?
Thanks.
If you have access to the file system on each front end, you could create multiple images containing the front-end server numbers. Give each image the same file name (e.g., frontend.jpg) and save it to the same path on each server:
[SharePoint Root]\TEMPLATE\IMAGES\frontend.jpg
Then you can always tell which front end your request is hitting simply by visiting the URL mycompany.com/_layouts/images/frontend.jpg.
I just blogged about this:
http://www.dannyjessee.com/blog/index.php/2011/07/which-sharepoint-front-end-server-am-i-hitting/
Best of luck!
First of all, check the log files on the main server, the one hosting the core SharePoint functionality and controlling the whole farm. There you can see which site and server is causing the problem.
I've created a solution that tackles this exact problem by taking advantage of delegate controls:
http://spservername.codeplex.com/

Issues with DB after publishing via Database Publishing Wizard from MSFT

I work on quite a few DotNetNuke sites, and occasionally (I haven't figured out the common factor yet), when I use the Database Publishing Wizard from Microsoft to create scripts for the site I've built on my dev server, after running the scripts at the host (usually GoDaddy.com) and uploading the site files, I get an error... I'm 99.9% sure that it's not file related, so I'm not sure where to begin in the DB. Unfortunately with DotNetNuke you don't get the YSOD, but a generic error, with no real way to find the actual exception that has occurred.
I'm just curious if anyone has had similar deployment issues using the Database Publishing Wizard, and if so, how they overcame them? I own the RedGate toolset, but some hosts like GoDaddy don't allow you to direct connect to their servers...
The Database Publishing Wizard's generated scripts usually need to be tweaked, since it sometimes gets the order of table/procedure creation wrong when dealing with constraints. What I do is first back up the database, then run the script; if I get an error, I move that query to the end of the script. Keep restoring the database and running the script until it works.
There are two areas that I would look at:
Are you running in the dbo schema, and was your database scripted using dbo?
Are you using an objectQualifier in either your dev or your production environment? (Look at your SqlDataProvider configuration settings.)
You should be able to expose the underlying error message by setting the following in the web.config:
<customErrors mode="Off" />
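For context, that attribute sits under system.web in web.config; remember to set it back to RemoteOnly or On once you have the error details:

```xml
<configuration>
  <system.web>
    <!-- Show full error details instead of the generic error page. -->
    <customErrors mode="Off" />
  </system.web>
</configuration>
```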
Could you elaborate on "and uploading the site files"? A new instance of DNN? Updating an existing site? Upgrading the DNN version? If it's an upgrade or update, what files are you adding/overwriting?
Also, when using GoDaddy, can you verify that the web site's identity (Network Service or the ASP.NET machine account, depending on your IIS version) has sufficient permissions to the website's file system? It should have Modify permissions, and these may need to be reapplied if you are overwriting files.
IIS6 (XP, Server 2000, 2003) = ASP.NET Machine Account
IIS7 (Vista, Server 2008) = Network Service
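As a sketch, the permissions can be reapplied from an elevated command prompt with icacls; the site path below is a placeholder, and on IIS6 you would grant to the ASP.NET machine account instead of Network Service:

```bat
:: Grant Modify (inherited by subfolders and files, recursively)
:: to Network Service on the site folder. Path is a placeholder.
icacls "C:\inetpub\wwwroot\MySite" /grant "NT AUTHORITY\NETWORK SERVICE:(OI)(CI)M" /T
```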
Test your generated scripts on a new local database (using the free SQL Express product or the full meal deal). If it runs fine locally, then you can be confident that it will run elsewhere, all things being equal.
If it bombs when you run it locally, use the process of elimination and work your way through the script execution to find the offending code.
My hunch is that the order of scripts could be off. I think I've had that happen before with the database publishing wizard.
Just read your follow-up. In every case where I've had your problem, it was always something to do with the connection string in web.config. Even after hours of staring at it, it was always a connection string issue in web.config. Get up, take a walk, and then come back.
If you are getting one of DNN's error pages, there is a chance it may have logged the error to the eventlog table.
Depending on exactly what is happening and what DNN is showing you you might be able to manually look inside the EventLog table, pull out the XML data stored there, and parse it to find the stack trace and detailed information regarding the specific error at hand.
I have found, though, that deployments using backups and restores of my database give a MUCH better overall experience; that way I am 100% sure that all objects moved correctly, and honestly it works better in my experience.
With GoDaddy, I know another MAJOR common issue is incorrect file permissions, preventing DNN from modifying web.config and the other files that it needs to.