I am using SQL Express 2008 as the backend for a web application. The problem is that the application is only used during business hours, so during lunch or break times, when no users are logged in for a 20 minute period, SQL Express kicks into idle mode and frees its cache.
I am aware of this because it logs something like:
Server resumed execution after being idle 9709 seconds
or
Starting up database 'xxxxxxx'
in the event log
I would like to avoid this idle behavior. Is there any way to configure SQL Express to stop idling, or at least to widen the window to longer than 20 minutes? Or is my only option to write a service that polls the database every 15 minutes to keep it spooled up?
After reading articles like this it doesn't look too promising, but maybe there is a hack or registry setting someone knows about.
That behavior is not configurable.
You do have to implement a method to poll the database every so often. Also, as the article you linked to says, set the AUTO_CLOSE property to false.
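For example, assuming the database is named MyAppDb (substitute your own database name), the property can be switched off with:
-- keep the database open even when the last connection closes
ALTER DATABASE [MyAppDb] SET AUTO_CLOSE OFF;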
Just a short SQL query like this every few minutes will prevent SQL Server from going idle:
SELECT TOP 0 NULL
FROM [master].[dbo].[MSreplication_options]
GO
Write a thread that does a simple query every few minutes. Start the thread in your global.asax Application_Start and you should be done!
Here is a good explanation: https://blogs.msdn.microsoft.com/sqlexpress/2008/02/22/understanding-sql-express-behavior-idle-time-resource-usage-auto_close-and-user-instances/
For what it's worth, I do not know the exact time after which SQL Express goes idle. I suggest running the script below every 10 minutes (for example via Task Scheduler).
This will prevent SQL Server Express from going idle:
SELECT TOP 0 NULL
FROM [master].[dbo].[MSreplication_options]
GO
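If you go the Task Scheduler route, one way (assuming a default .\SQLEXPRESS instance and Windows authentication) is to schedule the sqlcmd utility that ships with Express:
sqlcmd -S .\SQLEXPRESS -E -Q "SELECT TOP 0 NULL FROM [master].[dbo].[MSreplication_options]"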
Also make sure every database's AUTO_CLOSE property is set to FALSE.
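To see which databases still have the property turned on (SQL Server 2005 and later), you can check sys.databases:
-- lists databases whose AUTO_CLOSE property is currently enabled
SELECT name, is_auto_close_on
FROM sys.databases
WHERE is_auto_close_on = 1;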
Related
Our web app automatically emails us, with timings for each SQL statement, when a page execution goes beyond a second or two. We track which page each user is browsing on every page load, and this query sometimes takes a couple of seconds to run (we get a batch of these automatic emails, all at the same time, telling us a page has taken longer than a couple of seconds).
UPDATE whosonline
SET datetime = GETDATE(),
url = '/user/thepage'
WHERE username = 'companyname\theusername (0123456789)'
Any ideas what could be causing this? Normally it runs in a split second, but roughly once a week it takes about 2 or 3 seconds, for a window of maybe 10 seconds.
This is a very broad question and there could be a number of reasons:
Is there a pattern to the day/time in the week when this happens? Maybe your DB machine has just come back up.
How many users do you have? Are there indexes on the table? (See the index sketch below.)
What about the database cache? Is it configured?
How do you know it's a database delay and not a network delay? Have you tried accessing from the local database server and seen if the delays happen there too?
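On the indexing question above: if whosonline has no index on username, every one of those updates scans the whole table. A minimal sketch, using the table and column names from the question and assuming no such index exists yet:
-- lets the UPDATE locate the row by username instead of scanning the table
CREATE INDEX IX_whosonline_username
ON whosonline (username);
Whether this helps depends on how large the table is and how the optimizer is currently resolving the WHERE clause.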
If you have access to SQL Profiler, you might want to run that on the statement to see if anything is happening on the server that might be causing issues. I'd also check the execution plan in Management Studio/Query Analyzer if you can. Otherwise, if those don't turn up anything, it probably has something to do with the web side of things, not SQL.
I have read somewhere about SQL Express running as a user instance or something, and as such, the instance/service "goes to sleep" if not used for x time (I don't know the actual timings).
So the scenario is:
If my website (in this case) doesn't have anyone using it for "a few hours", SQL Express "seems" to go to sleep.
The next time someone comes along (after the pause, however long it is), the initial response takes quite a few seconds longer than usual.
Subsequent requests directly after the initial one seem very fast, again until there is a pause "for a few hours" or whatever the timing is.
Any ideas? if so, any examples/directions of what to do?
Thanks!
David.
Yes, there is the so-called RANU instance, which is what you get when you specify User Instance=True in the connection string. Read more about this in SQL Server 2005 Express Edition User Instances. I would recommend you stay as far away as possible from anything related to User Instances. They are impossible to debug and troubleshoot when things go wrong, they can take minutes to ramp up a new instance, and they really offer no advantage in the real world. Besides, they are deprecated in SQL Server Express 2008.
If you're using SQL Express 2008 and you do not specify User Instance=True in your connection string, then you do not get a user instance, so the first-request delay probably comes from the IIS app pool warming up, as others have suggested. It may also be caused by ordinary process working-set attrition, which would cause the SQL buffer pools to go cold. You can easily tell whether it is IIS or SQL by monitoring the appropriate performance counters on your system.
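For reference, a RANU (user instance) connection string typically looks something like the following (the file name and data directory are placeholders); a plain Express connection simply omits the User Instance and AttachDbFilename parts:
Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\MyApp.mdf;Integrated Security=True;User Instance=True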
This isn't the database going to sleep; this is the application pool in IIS. If no users are connected to or using the website, the application pool will recycle and the sessions will shut down. Then, when a user comes to the website, it has to restart the site.
There is a technique called database warmup; you can find out more here, and it is probably your solution.
Are you sure it's the database going to sleep and not IIS? IIS will unload websites after a certain period of inactivity, and they can be very slow to reload.
We have a huge Oracle database and I frequently fetch data using SQL Navigator (v5.5). From time to time, I need to stop code execution by clicking on the Stop button because I realize that there are missing parts in my code. The problem is, after clicking on the Stop button, it takes a very long time to complete the stopping process (sometimes it takes hours!). The program says Stopping... at the bottom bar and I lose a lot of time till it finishes.
What is the rationale behind this? How can I speed up the stopping process? Just in case, I'm not an admin; I'm a limited user who uses some views to access the database.
Two things need to happen to stop a query:
The actual Oracle process has to be notified that you want to cancel the query
If the query has made any modification to the DB (DDL, DML), the work needs to be rolled back.
For the first point, the Oracle process that is executing the query checks from time to time whether it should cancel the query or not. Even when it is doing a long task (a big HASH JOIN, for example), I think it checks every 3 seconds or so (I'm looking for the source of this info; I'll update the answer if I find it). Now, is your software able to communicate correctly with Oracle? I'm not familiar with SQL Navigator, but I suppose the cancel mechanism should work as with any other tool, so I'm guessing you're waiting on the second point:
Once the process has been notified to stop working, it has to undo everything it has already accomplished in this query (all statements are atomic in Oracle, they can't be stopped in the middle without rolling back). Most of the time in a DML statement the rollback will take longer than the work already accomplished (I see it like this: Oracle is optimized to work forward, not backward). If you are in this case (big DML), you will have to be patient during rollback, there is not much you can do to speed up the process.
If your query is a simple SELECT and your tool won't let you cancel, you could kill your session (needs admin rights from another session) -- this should be instantaneous.
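A sketch of what that looks like from a privileged session (the sid and serial# below are placeholders you would look up first):
-- find the session to kill
SELECT sid, serial#, username, status
FROM v$session
WHERE username = 'YOUR_USER';
-- then terminate it using the values returned above
ALTER SYSTEM KILL SESSION '123,45678' IMMEDIATE;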
When you cancel a query, the Oracle client should send OCIBreak(), but this isn't implemented on a Windows server; that could be the cause.
Also, have your DBA check the value of SQLNET.EXPIRE_TIME.
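SQLNET.EXPIRE_TIME is set in sqlnet.ora on the server; for example, a value of 10 makes the server probe clients every 10 minutes to detect dead connections:
SQLNET.EXPIRE_TIME = 10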
I restored a 35Gb database on my dev machine yesterday and it was all going fine until this morning when my client app couldn't connect. So I opened SQL Management Studio to find the database 'In Recovery'.
I don't know a huge amount about this, other than that it is usually something to do with uncommitted transactions. Since I know there aren't any uncommitted transactions, it must be something else. So first off, I'd like to know under what conditions this can happen. Secondly, while this is going on I can't work, so if there is any way of stopping the recovery, speeding it up, or at least finding out roughly how long it will take, that would help.
Do not shut down SQL while recovery is in progress. Let it finish. Check the error logs. If it doesn't finish, restore from backup.
You can find out how long it's going to take by looking in the event viewer. In the Application section on the Windows Logs you should get information messages from MSSQLSERVER with EventID 3450 telling you what it's up to. Something like:
Recovery of database 'XYZ' is 10% complete (approximately 123456 seconds remain) etc etc
I'm afraid I don't know how to stop it (yet).
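If you would rather poll from a query window than watch the event log, the same progress figures are usually visible in sys.dm_exec_requests while recovery is running (SQL Server 2005 and later; the command name can vary slightly between versions):
-- recovery of a database shows up as a DB STARTUP request
SELECT session_id, command, percent_complete,
       estimated_completion_time / 60000.0 AS estimated_minutes_remaining
FROM sys.dm_exec_requests
WHERE command LIKE 'DB STARTUP%';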
I have a procedure written in PL/Java that sends out updates over JMS in my Postgres database.
What I would like to do is have that function called on an interval (every 15 seconds) internally in the database (preferably not from an outside process). Is this possible? Any ideas?
If you need no external access, you are presumably able to modify the database design so that you don't need the update at all. Can you explain more about what the update is doing?
As depesz said, you could use either cron or pgAgent, but they only go down to one-minute granularity, not 15 seconds. Sleeping inside the stored procedure until the next iteration is not an option either, because you would hold an open transaction for all that time, which is a really bad idea.
Strict answer: it is not possible. Since you don't want outside process, and PostgreSQL doesn't support jobs - you are out of luck.
If you'll reconsider using outside processes, then you most likely want something like cron, or better yet pgAgent.
On the other hand: what do you need to do that has to happen every 15 seconds? This seems like a problem with the design.
First, you'll spend the least amount of effort if you just go with a cron job.
However, if you were starting from scratch: you are trying to periodically replicate rows from your database, so I think you are looking at a replication queue.
The PGQ project (used for Londiste replication, both from Skype's SkyTools) has a queue that you can use independently. When configuring it, you set a maximum event count, and a loop delay, before batched events are generated. You can get batches spaced by no more than 15 seconds that way. You now have to produce the events that will be batched, using a trigger that calls pgq.insert_event; and consume the queues. The consumer can call your PL/Java stored proc; you'll have to rewrite the procedure to send everything in the batch instead of scanning the base table for new events.
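A minimal sketch of the producing side, assuming a queue named update_queue and a source table my_table with an integer id column (all hypothetical names; the ticker and consumer are configured separately through SkyTools):
-- create the queue once
SELECT pgq.create_queue('update_queue');
-- trigger function that pushes each change into the queue
CREATE OR REPLACE FUNCTION enqueue_update() RETURNS trigger AS $$
BEGIN
    PERFORM pgq.insert_event('update_queue', TG_TABLE_NAME, NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER my_table_enqueue
AFTER INSERT OR UPDATE ON my_table
FOR EACH ROW EXECUTE PROCEDURE enqueue_update();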
As far as I know, PostgreSQL doesn't support scheduled tasks. You'll need to use a script with cron or at (depending on your operating system).
Sounds like you're doing some sort of replication? Every 15 seconds sounds like a lot of updates. Could you set up a trigger (or a number of triggers) instead of polling?
If you are using JMS, why not just have the task wait for input on the queue?
Per your comment on depesz's answer, you have a PL/Java stored procedure that "flushes out database tables (updates) as java objects". Since you want it to run at 15 second intervals, it must be processing a batch of updates each time. Rather than processing a batch of updates in a stored procedure every 15 seconds, why not process them one at a time, when they happen, via an AFTER UPDATE trigger, and eliminate the need for a timed interval (see the sketch below)? If you are aggregating data from multiple tables to build your objects, then add the triggers to your uppermost tables only.
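A rough sketch of that trigger-based approach, assuming your PL/Java routine can be exposed as a function named notify_jms(text) and that the source table is called orders (both names are hypothetical):
-- wrapper trigger function that forwards the changed row's key to the PL/Java routine
CREATE OR REPLACE FUNCTION notify_on_update() RETURNS trigger AS $$
BEGIN
    PERFORM notify_jms(NEW.id::text);  -- notify_jms(text) is assumed to be your PL/Java function
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER orders_notify
AFTER UPDATE ON orders
FOR EACH ROW EXECUTE PROCEDURE notify_on_update();
Keep in mind that the JMS send then runs inside the originating transaction, so a slow or failing send will slow down or roll back the original UPDATE.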
In my case the problem was that the agent couldn't authenticate to the database, so after I made all connections from localhost trusted, the service started successfully and the job works fine.
For more information about the error you should look in the Windows Event Viewer (or its equivalent on a Unix-based system). See my config file: C:\Program Files\PostgreSQL\10\data\pg_hba.conf
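For reference, the kind of pg_hba.conf line that makes local TCP connections trusted looks like this (adjust the database, user, and address columns to your setup, and note that trust means no password is required at all):
# TYPE  DATABASE  USER  ADDRESS        METHOD
host    all       all   127.0.0.1/32   trust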