how does the caching process work in ASE 15.0.3

I am monitoring a Sybase server (ASE 15.0.3) for its performance. One of the things it monitors is the cached data, but I want to understand how the caching process really works in ASE 15.0.3. Can one instance of ASE 15.0.3 cache statements running in another instance, or is the caching limited to its own instance? And what are the tables used in the caching process, in both the case of self-caching and cross-caching?
NOTE: by performance I mean the full set of performance tuning as supported by ASE 15.0.3.

By default, ASE uses the 'default data cache' to cache data pages and the 'procedure cache' to cache stored procedure query plans. As of ASE 15.x (not in compatibility mode), the 'statement cache' can be used for optimization, to cache the query plans of SQL statements that are re-used a lot.
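For example, a minimal sketch of turning the statement cache on (the size is in 2K pages and 5000 is just an example value; check sp_configure output for your server):

sp_configure "statement cache size", 5000
go
set statement_cache on   -- per-session switch, in case it was turned off for the session
go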
I use sp_sysmon to collect server-wide performance measures and read the following doc to optimize my data cache:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc20020.1502/html/basics/X56939.htm
The very useful MDA tables generate too much overhead on my system, so I only use them temporarily to collect performance measures.
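For example, something like this (the interval and the 'dcache' section name follow the sp_sysmon docs; monStatementCache is an MDA table that requires MDA monitoring to be enabled, so treat this as a sketch):

sp_sysmon "00:10:00", dcache   -- 10-minute sample, data cache section only
go
select * from master..monStatementCache   -- statement cache summary via MDA
go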
Hope this helps.

Related

How can I check if SQL Server memory caching is enabled on my server?

I have a few questions about SQL Server 2008.
How can I check whether the memory caching feature in SQL Server 2008 is enabled? Is there a variable to turn memory caching on or off? "I just want to make sure it is on"
Also, when does SQL Server decide that this cached data is outdated, so that it dumps it and performs a hard disk read again?
Finally, assuming I have this query: SELECT * FROM table1 WHERE id = 10. After the record is cached in memory and a process tries to read it, does SQL Server place a shared lock on that record in memory, or are there no locks in memory?
The short answer is no, you can't turn off memory caching at the server level. The engine takes care of memory caching for you, and it is very aggressive in how it caches; you basically want all queries answered from memory and not from disk. The buffer pool is orders of magnitude faster than disk access.
Check out these articles, which explain how the caching works - known as the buffer pool in SQL Server speak.
https://dba.stackexchange.com/questions/43572/how-to-see-what-is-cached-in-memory-in-sql-server-2008
Also anything by Paul Randal is pretty definitive - http://www.sqlskills.com/blogs/paul/inside-the-storage-engine-whats-in-the-buffer-pool/
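If you want to see roughly what is sitting in the buffer pool yourself, here is a quick sketch against the buffer descriptor DMV (each buffered page is 8 KB):

SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;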
In terms of locking, this is a commonly asked question on SO. In simple terms, the default behaviour is that SQL Server uses shared locks for readers, so multiple readers can access the data at the same time. SQL Server uses a dynamic locking strategy: it picks the lock granularity (row, page, or table) per query and escalates locks as needed, typically from row or page level to table level, when a statement touches enough rows. This is done dynamically and handled automatically by the SQL Server engine. Here is an article on this topic - https://technet.microsoft.com/en-us/library/ms189286(v=sql.105).aspx
Also, this question has been asked a lot on SO before, so check out this link (and others): What are row, page and table locks? And when are they acquired?
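If you want to watch the shared locks yourself, here is a rough sketch using the query from your question (the REPEATABLEREAD hint keeps the S locks around long enough to inspect; under plain READ COMMITTED they are released as soon as the row is read):

BEGIN TRANSACTION;
SELECT * FROM table1 WITH (REPEATABLEREAD) WHERE id = 10;
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;
COMMIT;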

SQL Server - Avoiding write timeouts on logging table due to reporting queries

I have two very busy tables in an email dispatch system. One is for batching mail for dispatch, the other is used for logging. Expensive queries are run that use both of these tables to produce stats for a UI. I would like to remove the reporting overhead on these tables, as I am seeing timeouts during report generation.
My question is - what are my options for reducing the query overhead on these two tables while generating the report data.
I've considered using triggers to create exact copies of the tables. Is there any built-in functionality in SQL Server for mirroring data within a database? It would be an advantage if I could avoid growing the database unnecessarily, though. It doesn't matter if the stats are not real time.
There is built-in functionality for this scenario; it's known as a Database Snapshot.
If you run a query against a DB snapshot table, no shared locks should be created on the original database.
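A minimal sketch of creating one (the database, logical file name, and path are placeholders - the NAME must match the source database's actual logical data file name):

CREATE DATABASE MailSystem_Reporting
ON ( NAME = MailSystem_Data,
     FILENAME = 'D:\Snapshots\MailSystem_Reporting.ss' )
AS SNAPSHOT OF MailSystem;

Your reports would then query MailSystem_Reporting instead of the live database; drop and recreate the snapshot whenever the stats need refreshing.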
You can use Resource Governor for SQL Server. Unfortunately, I have only read about it and haven't used it yet. It is used to isolate workloads on SQL Server.
Please try and let us know if it helps.
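From what I've read, isolating the reporting workload could look roughly like this (all names are hypothetical, and you would also need a classifier function to route report connections into the group):

CREATE RESOURCE POOL ReportingPool WITH (MAX_CPU_PERCENT = 30);
CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;
ALTER RESOURCE GOVERNOR RECONFIGURE;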
Some helpful links: MSDN SQLBlog technet

Database caching

I have Windows Server 2008 R2 with Microsoft SQL Server installed.
In my application, I am currently designing a tool for my users that queries the database to see if a user has any notifications. Since my users can access the application multiple times in a short timespan, I was thinking about putting some kind of a cache on my query logic. But then I thought that my MS SQL Server probably does that already for me. Am I right? Or do I need to configure something to make it happen? If it does, then for how long does it keep the cache up?
It's safe to assume that MSSQL has the caching worked out pretty well =)
Don't bother trying to build anything yourself on top of it; simply make sure that the method you use to query for changes is efficient (e.g. don't query on non-indexed columns).
PS: wouldn't caching locally defeat the whole purpose of checking for changes on the database?
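For example, if the notification check is "any unread rows for this user", an index along these lines keeps it cheap (the table and column names are made up to illustrate the point):

CREATE NONCLUSTERED INDEX IX_Notifications_User_Unread
ON dbo.Notifications (UserId, IsRead);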
Internally the database does all sorts of things, including 'caching', but at all times it works incredibly hard to make sure your users see up-to-date data. So it has to do some work each time your application makes a request.
If you want to reduce the workload by keeping static data in your application then you have to implement it yourself.
The later versions of the .NET Framework have caching features built in, so you should take a look at those (building your own caching can get very complex).
SQL Server will handle caching for you, yes. When you execute a query or a stored procedure, SQL Server will cache the execution plan and reuse it accordingly. From MSDN:
SQL Server execution plans have the following main components:
Query Plan: The bulk of the execution plan is a re-entrant, read-only data structure used by any number of users. This is referred to as the query plan. No user context is stored in the query plan. There are never more than one or two copies of the query plan in memory: one copy for all serial executions and another for all parallel executions. The parallel copy covers all parallel executions, regardless of their degree of parallelism.
Execution Context: Each user that is currently executing the query has a data structure that holds the data specific to their execution, such as parameter values. This data structure is referred to as the execution context. The execution context data structures are reused. If a user executes a query and one of the structures is not being used, it is reinitialized with the context for the new user.
If you wish to clear this cache, you can execute sp_recompile or DBCC FREEPROCCACHE.
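A minimal sketch of both (dbo.table1 is just a stand-in object name; think twice before flushing the whole cache on a busy production server):

DBCC FREEPROCCACHE;               -- flushes the entire plan cache
EXEC sp_recompile N'dbo.table1';  -- marks plans referencing this object for recompilation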

Really slow schema information queries on SQL Server 2005

I have a database with a rather large number of tables, about 3500, and an application that needs to access a table list.
On a particular server this takes over 2.5 min to return.
EXEC sp_tables @table_type = "'TABLE'"
I know there are faster ways to do that but sadly I'm not in a position to modify the application and need to find a way to push it below 30 seconds so the application doesn't throw timeout errors.
So. What, if anything, can I do to improve the performance of this sp within sql server?
I have seen these stored procedures run slowly if you do not have the VIEW DEFINITION permission granted on your user account. From what I read, this causes a security check to occur per object, slowing down the query.
Maybe a SQL guru can comment on why, if this does fix your problem.
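A minimal sketch of the grant, assuming the application connects as a database user named appuser (a hypothetical name):

GRANT VIEW DEFINITION TO appuser;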
Well, sp_tables is system code and can't be changed (you could work around this in SQL Server 2000, but not in SQL Server 2005+).
Your options are
Change the SQL
Change command timeout
Bigger server
You've already said "no" to the obvious solutions...
You need to approach this just like any other performance problem. Why is it slow? Namely, where does it block? Disk IO? CPU? Network? Lock contention? The scientific method is to use a methodology like Waits and Queues, or its newer SQL 2008 equivalent Troubleshooting Performance Problems in SQL Server 2008. The lazy way is to simply check the wait_type, wait_time and wait_resource columns in sys.dm_exec_requests for the session executing the sp_tables call. Once you find out what is blocking the execution, you can proceed accordingly.
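A minimal sketch of the lazy way (53 is just an example session id - substitute the spid of the connection running sp_tables):

SELECT session_id, status, command, wait_type,
       wait_time, wait_resource, blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id = 53;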
If I'd venture a guess, you'll discover contention as the cause: other sessions are locking table metadata exclusively and thus block the execution of sp_tables, which has to wait until all operations in front of it finish.

sql server 2008 reads blocking writes

I have upgraded a set of databases from SQL Server 2000 to SQL Server 2008, and now large reads are blocking writes, whereas this wasn't a problem in SQL Server 2000 (same databases and same applications & reports). Why? What setting in 2008 is different? Did 2000 default to read uncommitted transactions?
(update)
Adding with (nolock) to the report views in question fixes the problem in the short term - in the long run we'll have to make copies of the data for reporting, either with snapshots or by hand. [sigh] I'd still like to know what it is about SQL Server 2008 that makes this necessary.
(update 2) Since the views in question are only used for reports, 'read uncommitted' should be OK for now.
SQL Server 2000 did not use READ UNCOMMITTED by default, no.
It might have something to do with changes in optimizations in the execution plan. Some indexes are likely locked in a different order from what they were in SQL Server 2000. Or SQL Server 2008 is using an index that SQL Server 2000 was ignoring altogether for that particular query.
It's hard to say exactly what's going on without more information, but read up on lock types and check the execution plans for your two conflicting queries. Here's a nice short article that walks through another example of why things can deadlock.
Read Committed is the default isolation level in SQL Server 2000, not Read Uncommitted.
http://msdn.microsoft.com/en-us/library/aa259216(SQL.80).aspx
I imagine that something in your app was setting the isolation level - perhaps via one of the connection object properties. Have a look here for the methods used to set Transaction Isolation levels via ADO, ODBC, and OLE DB.
You can do the same in SQL Server 2008, but...are you sure that your app should be running under read uncommitted? Is your app specifically designed to handle data movement and phantom reads?
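For reference, this is the difference between setting it per session and per query (ReportView is a placeholder name):

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;  -- affects the whole session
SELECT * FROM dbo.ReportView WITH (NOLOCK);        -- affects this table reference only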
I'm actually surprised that you didn't run into problems in SQL Server 2000. It seems like every week we were fixing stored procedures that were locking tables because someone forgot nolocks.
You could look into snapshot isolation; this will allow the app to read the older version of the rows whilst the writing threads are still busy updating them.
http://msdn.microsoft.com/en-us/library/ms189050.aspx
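A minimal sketch of enabling it (YourDb is a placeholder; note that switching READ_COMMITTED_SNAPSHOT on requires no other active connections in the database):

ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;  -- sessions can opt in with SET TRANSACTION ISOLATION LEVEL SNAPSHOT
ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;   -- plain READ COMMITTED reads use row versions instead of shared locks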