SQL query performs slowly as users increase - sql

I have an ASP.NET web application with a SQL Server 2014 Express database. The application runs fine while there are few data entry records, but as the number of saved records grows, the page sometimes takes too long to load again. On the save button, after the insert/update query executes, I reload the page to get fresh data. Occasionally the page takes 2-3 minutes to load, and then it runs normally again with no load at all. This happens at any time, but not every time. I kept a watch on Activity Monitor, and the process peaks are high. It shows a recent expensive query, but that query hardly takes 1 second to run because it never returns more than about 120 rows. I have added all missing indexes, with no success, since my query is already optimized. What could be the cause?
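Since the slowdown is intermittent while the query itself is fast, one thing worth checking the next time a page hangs is whether the session is blocked or waiting rather than executing. A minimal diagnostic sketch using the standard DMVs (run it while a page is stuck; the interpretation hints are assumptions, not a confirmed diagnosis):

-- Snapshot of currently running requests: what they wait on and who blocks them.
-- Long lock waits with a non-zero blocking_session_id point to blocking;
-- WRITELOG or PAGEIOLATCH waits point to I/O pressure on the Express instance.
SELECT
    r.session_id,
    r.status,
    r.command,
    r.wait_type,
    r.wait_time AS wait_time_ms,
    r.blocking_session_id,
    t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.wait_time DESC;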

Related

SSAS Process Default behavior

I'm trying to make sense of Process Default behavior on SSAS 2017 Enterprise Edition.
My cube is processed daily in this standard sequence:
Loop through 30 dimensions, performing Process Add or Process Update as required.
Process approximately 80 partitions for the previous day.
Exec a Process Default as the final step.
Everything works just fine and, for the amount of data involved, performs really well. However, I have observed that if I re-run the Process Default step manually after it completes (with no other activity having occurred whatsoever), it takes exactly the same time as the first run.
My understanding was that this step basically scans the cube looking for unprocessed objects and will process any objects found to be unprocessed. Given the flow of dimension processing, and subsequent partition processing, I'd certainly expect some objects to be unprocessed on the first run - particularly aggregations and indexes.
The end-to-end processing time is around 65 minutes, and 10 minutes of that is the final Process Default step.
One explanation would be that the Process Default isn't actually finding anything to do, and the elapsed time is just the cost of scanning the metadata. But that seems like an excessive amount of time, and if I don't run the step at all, the cube doesn't come online, which suggests it is definitely doing something.
I've had a trawl through Profiler to try to find events to capture what process default is doing, but I'm not able to find anything that would capture the event specifically. I've also monitored the server performance during the step, and nothing is under any real load.
Any suggestions or clarifications?
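One way to see what the Process Default command actually spends its time on, without relying on Profiler, is to query the SSAS activity DMVs while the step runs. A minimal sketch, assuming a linked server to the SSAS instance named SSAS_OLAP has been set up (the name is a placeholder):

-- Commands currently executing on the SSAS instance, queried through a linked server.
-- $SYSTEM.DISCOVER_COMMANDS is an SSAS DMV; running this while the Process Default
-- step is in flight shows its command text and elapsed time so far.
SELECT *
FROM OPENQUERY(SSAS_OLAP,
    'SELECT * FROM $SYSTEM.DISCOVER_COMMANDS');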

ASP.NET MVC page slow Time To First Byte

I have two ASP.NET MVC views that I refresh every few seconds, because another program is updating the database. The reload time is high and it is random: sometimes the first view reloads slowly and sometimes the second one does. It is around 500 ms for view1 and 50 ms for view2, or vice versa (500 ms for view2 and 60 ms for view1). I need to reduce both load times to about 10 ms. I have set the timer interval to 30 seconds to reload the views. I ran SQL Profiler to see if the queries are taking a long time, and it does not look like it. In dev tools the TTFB is what shows as slow, and this is on my local machine with a local web server and SQL Server. Please advise!
Thanks
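To double-check that the database side really is as fast as Profiler suggests, the cached query statistics can be compared against the observed TTFB. A sketch using only the standard DMVs (no application-specific names assumed):

-- Top cached statements by average elapsed time (total_elapsed_time is in microseconds).
-- If nothing here comes anywhere near the observed TTFB, the time is being spent
-- in the MVC pipeline / view rendering rather than in SQL Server.
SELECT TOP (10)
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count / 1000 AS avg_elapsed_ms,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
        (CASE WHEN qs.statement_end_offset = -1
              THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset END
         - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_ms DESC;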

SQL Server Remote Update

We have a process in which several site servers send data to a central server (through a Linked Server). A new site has seen the job duration more than double in three weeks, and a couple of the other sites often fail due to run time overlap.
It is a two-step process:
Insert new records
Update changed records
The insert only takes a few seconds, but the update takes anywhere from 5 to 20 minutes, depending on the site. I am able to change the query that drives the update so it runs in only a couple of seconds, but when it is put into an UPDATE statement it still takes several minutes.
I am leaning towards moving the jobs to a single job on the central server, so it is a pull operation which, based on the testing I have done, should be much faster. However, my question is: What is considered "best practice" in this situation? I am going to have to change quite a bit to get this working properly, so I might as well do it right.
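As a rough illustration of the pull pattern, the central server can first copy the changed rows from the site into a local staging table and then run the UPDATE as a purely local join, which avoids the row-by-row remote update over the linked server. This is only a sketch; SiteServer1, SalesDb, dbo.Orders and the column names are placeholders:

-- 1) Pull the candidate rows from the remote site into a local staging table.
--    OPENQUERY runs the whole SELECT on the remote server and streams back the result.
SELECT *
INTO #staged
FROM OPENQUERY(SiteServer1,
    'SELECT OrderId, Status, Amount, ModifiedAt
     FROM SalesDb.dbo.Orders
     WHERE ModifiedAt >= DATEADD(DAY, -1, GETDATE())');

-- 2) Apply the changes locally; the UPDATE now touches only local tables.
UPDATE c
SET c.Status     = s.Status,
    c.Amount     = s.Amount,
    c.ModifiedAt = s.ModifiedAt
FROM dbo.Orders AS c
JOIN #staged    AS s ON s.OrderId = c.OrderId
WHERE c.ModifiedAt < s.ModifiedAt;

DROP TABLE #staged;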

How to control a SQL database growing day by day

I have an SQL database with a main Orders table that receives 2-5 new rows per day.
The other table that gets daily records is the Log table. It receives new data, including the time and IP address of the user, every time a user accesses the login page of the web site. It gets 10-15 new rows per day for now.
As I monitor the daily backup, I realized that it is growing by about 2-3 MB per day. I have enough storage, but it makes me worried. Is it the Log table causing this growth? I deleted about 150 rows, but it didn't reduce the .bak file size; it increased! I haven't shrunk the database and I don't want to.
I'm not sure what to do about it. Is there any other decent way of logging user accesses?
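To confirm whether the Log table really is what is driving the growth, its size can be checked directly first. A quick sketch using the standard DMVs (the table names in the output will show whether the Log table is the culprit):

-- Reserved space and row count per table, largest first.
-- If the Log table is near the top, it is the growth driver; if not, look at
-- index and transaction log growth rather than at the row counts.
SELECT
    t.name AS table_name,
    SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS total_rows,
    SUM(ps.reserved_page_count) * 8 / 1024.0 AS reserved_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t ON t.object_id = ps.object_id
GROUP BY t.name
ORDER BY reserved_mb DESC;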
I typically export the rows from the production server, import them into a database on a non-production server (like my local machine), and then delete those rows from the production server. I also run an optimize on the production table so its size is recalculated. This is somewhat manual, but it keeps the production table size down, and the export/import process is rather quick.
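The export-then-delete step can also be done entirely in T-SQL by moving old rows into an archive table in one statement. A sketch, assuming a dbo.Log table with a LogDate column and a dbo.LogArchive table with a matching structure (all names are placeholders); note that deleting rows will not shrink the .bak file on its own, it only slows future growth:

-- Move log rows older than 90 days into the archive table and remove them
-- from the production table in a single atomic statement.
DELETE FROM dbo.Log
OUTPUT DELETED.*
INTO dbo.LogArchive
WHERE LogDate < DATEADD(DAY, -90, GETDATE());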

Intermittent SqlException timeout expired errors

We have an app with around 200-400 users, and once a day or every other day we get the dreaded SqlException:
"Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding".
Once we get this, it happens several times for different users, and then all users are stuck: they can't perform any operations.
I don't have the full specs of the boxes right in front of me but we have:
IIS and SQL Server running on separate boxes
each box has 64gb of memory with multiple cores
We get nothing in the SQL Server logs (as would be expected), and our application catches the SqlException, so we just see the timeout error there - on an UPDATE. The database has only a few key tables, and the timeout happens on one of them, which has about 30k rows. We have run Profiler on the queries behind the UI against a copy of production (so the data is at production size) and made sure we have all of the right indexes (clustered/non-clustered). In a local environment (smaller box, same size database) everything runs fast, and for most of the day the system is fast for the users. The exact same query that hit the timeout in production ran in less than a second.
We did change our command timeout from 30 seconds to 300 seconds (I know that 0 is unlimited and I guess we could use that, but it seems like that would just mask the real problem).
We had Profiler running in production, but unfortunately it wasn't fully enabled the last time this happened. We are setting it up correctly now.
Any ideas on what this might be?
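Given that the same UPDATE runs in under a second in isolation and the failures come in bursts that lock everyone up, blocking (for example, a long-running transaction holding locks on that 30k-row table) is a likely suspect. Rather than depending on Profiler being enabled at the right moment, blocked-process reports can be captured continuously with Extended Events. A minimal sketch; the session name and the 20-second threshold are arbitrary choices:

-- Report any block that lasts longer than 20 seconds.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold (s)', 20;
RECONFIGURE;

-- Extended Events session that writes each blocked-process report to a file,
-- capturing both the blocking and the blocked statements.
CREATE EVENT SESSION [blocked_process_capture] ON SERVER
ADD EVENT sqlserver.blocked_process_report
ADD TARGET package0.event_file (SET filename = N'blocked_process_capture')
WITH (STARTUP_STATE = ON);

ALTER EVENT SESSION [blocked_process_capture] ON SERVER STATE = START;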