I appear to be getting a lot of random deadlocks when reading data from one of my tables. This table contains a lot of information and is very frequently read and updated.
I am using S#arp Architecture 1.9, which uses the Transaction attribute on all my data access / update code.
Is there anything special I need to do to avoid deadlocks? Should I update / read my data in a certain way?
Not too sure where to start on this one.
NHibernate 3
S#arpArchitecture 1.9
SQL Server 2008 R2
Thanks.
Are you getting actual deadlocks or blocked reads? If it is the former, consider rebuilding indexes and statistics.
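If it does turn out to be contention made worse by stale statistics or fragmented indexes, a minimal maintenance sketch could look like the following (dbo.MyTable is a placeholder for the busy table; run it in a maintenance window):

-- Rebuild every index on the heavily read/updated table (placeholder name)
ALTER INDEX ALL ON dbo.MyTable REBUILD;

-- Refresh statistics with a full scan so the optimizer gets current row counts
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;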
Is there a way to check if an SQL Server table (or even better, a page in that table) was modified since a certain moment? For example, SQL differential backup uses dirty flags to know which parts of the data were changed since the last backup, and resets these flags after a successful backup.
Is there any way to get this functionality from MS SQL Server? I.e. if I want to cache certain aggregate values on a database table which sometimes changes, how would I know when to invalidate the cache? Or is the only way to implement it programmatically and keep track of this while writing to the database?
I am using C# .NET 4.5 to access SQL Server 2008 R2 through NHibernate.
I suggest you think about your problem in terms of application-layer data caching instead of SQL Server low-level data pages. You can use SqlDependency or QueryNotification in your C# code to get notified of changes to the underlying data. Note that this requires Service Broker to be enabled in the SQL Server database, and there are some restrictions on the queries that qualify for notification.
See http://www.codeproject.com/Articles/529016/NHibernate-Second-Level-Caching-Implementation for an example of using it with NHibernate.
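As a rough sketch of the database-side prerequisite (MyDb is a placeholder name), Service Broker can be switched on like this; note that WITH ROLLBACK IMMEDIATE will kick existing connections off the database:

-- Enable Service Broker so query notifications (SqlDependency) can be delivered
ALTER DATABASE MyDb SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;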
I work on a project in a financial institution. In this company, databases are distributed across separate branches. We want to set up a data center and consolidate all the databases into one database. But in this situation we have a database table with more than 100 million records. I think SQL operations (e.g. insert, update, select) on this table will be too slow and costly. Which approaches can help me? We use the code first approach of Entity Framework in our project.
a) 100 million records is not too much for SQL Server. With appropriate indexes, disk topology, memory and CPU allocations, plus a good DBA to oversee things for a while, it should be fine.
b) Initial migration is NOT an EF topic. I would not recommend EF for that task.
EF can create the DB, but use dedicated tools to load the data.
Sample SO post
c) Test and/or do some research on expected insert/select times on SQL Server with 100 million rows; see the timing sketch after this list.
d) The trick to getting good performance with EF is holding as FEW records in a context as possible.
Good EF code first code is the key to it working well.
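For point c), a minimal timing sketch might look like the following; dbo.BigTable, the predicate and the column list are placeholders, the point is simply to capture CPU/elapsed time and I/O for representative operations against a table of realistic size:

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- Representative read: a typical filtered query against the 100-million-row table
SELECT COUNT(*)
FROM dbo.BigTable
WHERE BranchId = 42;          -- placeholder predicate

-- Representative write: a single-row insert
INSERT INTO dbo.BigTable (BranchId, CreatedOn)
VALUES (42, GETDATE());       -- placeholder columns

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;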
Take a look at the bulk copy commands (BULK INSERT and bcp). They are used to copy large amounts of data.
http://technet.microsoft.com/en-us/library/ms130809(v=sql.110).aspx
http://technet.microsoft.com/en-us/library/ms190923(v=sql.105).aspx
If you don't use MS SQL Server, look for the corresponding feature in your database server.
Note that 100 million records may not be a really large amount of data. I recommend you run some performance tests to see whether it will really be an issue.
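As a hedged sketch of what the initial load could look like (the file path, delimiters and table name are all assumptions about your export format), BULK INSERT pulls a flat file exported from a branch database in large batches:

-- Load a branch export into the central table (names and path are placeholders)
BULK INSERT dbo.CentralAccounts
FROM 'C:\exports\branch01_accounts.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    BATCHSIZE       = 100000,  -- commit in chunks to keep the transaction log manageable
    TABLOCK                    -- allows minimally logged inserts under the right recovery model
);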
I was wondering... Is there a way to "bind" to an Oracle SQL database and get notified of every create / update / delete operation in it, by any user?
A bit of a far-reaching demand, I know... My goal is to investigate how a specific application uses the DB. A good tool for comparing the data (not the schema) between two states of the database would also be a fair solution. A solution that does not require dumping the DB to a file every time is preferred.
Thanks in advance!
I would go with
Flashback Data Archive (Oracle Total Recall) available in Enterprise edition, and
Auditing available in any Oracle edition.
The two can be combined to suit your needs.
@a_horse_with_no_name suggested using Log Miner, and it is a nice solution. But if you are a novice DBA, you can check Oracle Flashback Transaction Query, which has a friendlier interface (though it still uses Log Miner underneath to analyze archived redo log files and retrieve transaction details).
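As a minimal sketch of what a Flashback Transaction Query looks like (MY_SCHEMA and ACCOUNTS are placeholder names, and this assumes supplemental logging is enabled and you have the SELECT ANY TRANSACTION privilege):

-- Who changed MY_SCHEMA.ACCOUNTS in the last day, and what SQL would undo it?
SELECT xid,
       logon_user,
       operation,
       start_timestamp,
       undo_sql
FROM   flashback_transaction_query
WHERE  table_owner = 'MY_SCHEMA'       -- placeholder schema
AND    table_name  = 'ACCOUNTS'        -- placeholder table
AND    start_timestamp > SYSDATE - 1;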
Some useful info on using built-in Oracle Auditing follows.
How to get index last modified time in Oracle?
Enabling and using Oracle Standard Auditing
Find who and when changed a specific value in the database – using Oracle Fine-Grained Auditing, plus some info regarding Log Miner.
I heard that SQL Server SELECT statements can cause blocking.
I have an MVC application with EF and SQL Server 2008, and it shares its DB with another application that writes data very frequently. The MVC application generates real-time reports based on the data that comes from the other application.
Given that scenario, is it possible that while generating a report it will block some tables that the other application is trying to write to?
I tried making some manual inserts and updates while a report was being generated, and they were handled fine. Have I misunderstood something?
This is one of the reasons why, in Entity Framework 6 for SQL Server, a default in database creation has changed:
EF is now aligned with a “best practice” for SQL Server databases, which is to configure the database’s READ_COMMITTED_SNAPSHOT setting to ON. This means that, by default, the database will create a snapshot of itself every time a change is made. Queries will be performed on the snapshot while updates are performed on the actual database.
So with databases created by EF 5 and lower, READ_COMMITTED_SNAPSHOT is OFF, which means that
the Database Engine uses shared locks to prevent other transactions from modifying rows while the current transaction is running a read operation.
Of course you can always change the setting yourself:
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON
READ UNCOMMITTED should help, because it doesn't issue shared locks on the data being retrieved, so it doesn't bother your other application that intensively updates the data. Another option is to use the SNAPSHOT isolation level on your long-running SELECT. This approach preserves data integrity for the selected data, but at the cost of higher CPU and heavier tempdb usage.
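A minimal sketch of the SNAPSHOT option (MyDb and the report query are placeholders): the database has to allow snapshot isolation first, then the reporting session requests it:

-- One-time, database-level setting
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- In the reporting session: read a consistent snapshot without holding shared locks
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.Orders;   -- placeholder for the long-running report query
COMMIT TRANSACTION;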
I have two very busy tables in an email dispatch system. One is for batching mail for dispatch, the other is used for logging. Expensive queries are run that use both of these tables to produce stats for a UI. I would like to remove the reporting overhead from these tables, as I am seeing timeouts during report generation.
My question is: what are my options for reducing the query overhead on these two tables while generating the report data?
I've considered using triggers to create exact copies of the tables. Is there any built-in functionality in SQL Server for mirroring data within a database? If I can avoid growing the database unnecessarily, though, that would be an advantage. It doesn't matter if the stats are not real-time.
There is built-in functionality for this scenario, and it's known as a Database Snapshot.
If you run a query against a table in a DB snapshot, no shared locks should be taken on the original database.
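A rough sketch of creating and querying one (the database name, logical file name and snapshot path are assumptions, and on SQL Server 2008 R2 this is an Enterprise Edition feature):

-- Create a point-in-time snapshot of the mail database
CREATE DATABASE MailDb_Reporting ON
(
    NAME     = MailDb_Data,                        -- logical name of the source data file
    FILENAME = 'D:\Snapshots\MailDb_Reporting.ss'  -- sparse file backing the snapshot
)
AS SNAPSHOT OF MailDb;

-- Point the report queries at the snapshot instead of the live database
SELECT COUNT(*) FROM MailDb_Reporting.dbo.DispatchLog;  -- placeholder table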
You can use Resource Governor for SQL Server. Unfortunately, I have only read about it and haven't used it yet. It is used to isolate workloads on SQL Server.
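A rough sketch of what the setup might look like (the pool, group and login names are placeholders, and the classifier function has to live in master):

-- Cap the CPU available to the reporting workload
CREATE RESOURCE POOL ReportingPool
    WITH (MAX_CPU_PERCENT = 20);

CREATE WORKLOAD GROUP ReportingGroup
    USING ReportingPool;
GO

-- Classifier routes sessions from the reporting login into the reporting group
CREATE FUNCTION dbo.fnResourceClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname = 'default';
    IF SUSER_SNAME() = 'report_user'   -- placeholder login
        SET @grp = 'ReportingGroup';
    RETURN @grp;
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnResourceClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;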
Please try and let us know if it helps.
Some helpful links: MSDN, SQLBlog, TechNet
Kind Regards,
Sumit