To prevent blocking and deadlocks, I decided to use the WITH (NOLOCK) table hint within my views.
That had a good effect and the users were happy with it.
The problem started when those views were called from inside an application (a CRM application called PIVOTAL).
When this application queries my views (which have WITH (NOLOCK) inside them), it adds a different, conflicting table hint: READCOMMITTED.
For example:
select * from MY_view WITH(READCOMMITTED)
The result is a SQL Server error message telling me about the conflicting locking hints.
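To illustrate, the setup looks roughly like this (the view body below is only a sketch; the table and column names are placeholders for my real objects):
CREATE VIEW dbo.MY_view
AS
SELECT col1, col2                        -- placeholder column list
FROM dbo.SomeBaseTable WITH (NOLOCK);    -- hint baked into the view
GO
-- The CRM application then generates something like:
SELECT * FROM dbo.MY_view WITH (READCOMMITTED);  -- conflicts with the NOLOCK inside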
Now, I cannot change the application and the way it generates its scripts.
Is there a way I could make SQL Server ignore the hint outside of My_view?
Is there a way to make the NOLOCK prevail over READCOMMITTED, so that I can keep it within My_View?
Thanks and regards.
Marcello
I don't think there is an easy solution here. SQL Server cannot honor conflicting locking hints, as it doesn't know which locks to take on the underlying tables.
Can you create a separate set of views for the application reading with READCOMMITTED?
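If that is an option, a minimal sketch of such a second, hint-free view (names are placeholders) could be:
CREATE VIEW dbo.MY_view_for_app          -- hypothetical second view, no hints inside
AS
SELECT col1, col2                        -- same column list as MY_view
FROM dbo.SomeBaseTable;
GO
The application's added WITH (READCOMMITTED) would then no longer conflict with anything inside the view.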
Related
I'm looking for a method or solution to allow a table to be updated while others are running SELECT queries against it.
We have an MS SQL Database storing tables which are linked through ODBC to an Access Database front-end.
We're trying to have a query run an update on one of these linked tables, but often it is interrupted by users running SELECT statements on the table to look at data through forms inside Access.
Is there a way to maybe create a copy of this database table for the users to look at so that the table can still be updated?
I was thinking maybe a transaction but can you perform transactions for select statements? Do they work that way?
The error we get from inside Access when we try to run the update while a user has the table open is:
Any help is much appreciated,
Cheers
As a general rule, this should not be occurring. Those reports should not take locks that prevent SQL Server from allowing inserts.
For a quick fix, you can (should) link the reports to some SQL Server views as their source, and use this for the view:
SELECT * FROM tblHotels WITH (NOLOCK)
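Wrapped up as a view the reports can be re-linked to, that could look something like this (the view name is only illustrative):
CREATE VIEW dbo.vw_HotelsReport          -- illustrative view name
AS
SELECT * FROM dbo.tblHotels WITH (NOLOCK);
GO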
In fact, in MOST cases this locking occurs due to combo boxes being driven by a larger table in SQL Server: if the query does not complete (and Access has the nasty ability to STOP the flow of data), then you get a SQL Server table lock.
You can also see the above "holding" of a lock when you launch a form with a LARGE dataset. If Access does not finish pulling the table/query from SQL Server, again a holding lock on the table can remain.
However, as a general rule I have NOT seen this occur for reports.
That said, it is not at all clear how the reports are being used and how their data sources are set up.
But, as noted, the quick fix is to create some views for the reports and use the no-lock hint as per above. That will prevent the reports from holding locks on the tables.
Another HUGE idea? If the reports often use some date range or other criteria, MAKE 100% sure that SQL Server has an index on those filter columns. If you don't, then SQL Server will scan/lock the whole table. This advice ALSO applies VERY much to, say, a form in which you filter: put indexes (SQL Server side) on those commonly used columns.
And in fact, the notes about the combo box above? We found that JUST adding an index on the sort column used in the combo box made most if not all locking issues go away.
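As a sketch, the kind of indexes meant here (the column names are only examples; substitute your real filter and sort columns):
CREATE NONCLUSTERED INDEX IX_tblHotels_BookingDate
    ON dbo.tblHotels (BookingDate);      -- a date column the reports filter on
CREATE NONCLUSTERED INDEX IX_tblHotels_HotelName
    ON dbo.tblHotels (HotelName);        -- the column a combo box sorts on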
Another fix that often works - and requires ZERO changes to the ms-access client side software?
You can change this on the server:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
The above will also, in most cases, fix the locking issue.
I always thought the writers never block the readers (and vice versa).
However, what I am seeing right now is very strange. I'm probably wrong and missing something here, so please help, as this is driving me crazy!
Today I created a very simple table:
USE [testdb]
GO
CREATE TABLE [dbo].[MyTab](
[N] [int] NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[MyTab] WITH CHECK ADD CHECK (([n]>(10)))
GO
Then I populated it with a few rows.
Next I set IMPLICIT_TRANSACTIONS to ON, inserted a row without committing, and then selected from the table in another session, in a completely separate instance of SQL Server Management Studio. The snapshot below demonstrates what happened:
You see the issue? The select query is still executing! This query never returns. It returns only after I commit or rollback the insert statement. I tested the same scenario multiple times and the same thing happened over and over again.
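In other words, the two sessions boil down to roughly this:
-- Session 1 (implicit transactions on): the INSERT opens a transaction
-- and holds an exclusive lock until COMMIT or ROLLBACK.
SET IMPLICIT_TRANSACTIONS ON;
INSERT INTO dbo.MyTab (N) VALUES (20);
-- Session 2 (separate SSMS connection, default READ COMMITTED):
SELECT * FROM dbo.MyTab;   -- blocks until session 1 commits or rolls back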
To further confirm my observation, have a look at the below report:
Can you help and let me know what I am doing wrong, or whether my assumption that readers cannot be blocked is entirely (or partially) wrong?
Thanks!
Note: Initially I was connected as the same user as the insert session when I wanted to query the table. Once I saw my select was blocked, I decided to log in and test using another user; hence the use of the 'sa' account. :)
Readers do not block writers (and visa-versa) in SQL Server if you turn on the READ_COMMITTED_SNAPSHOT database option. SQL Server will then use row versioning instead of locking to provide read consistency for READ_COMMITTED transactions, behaving similarly to the Oracle DBMS you are more familiar with.
READ_COMMITTED_SNAPSHOT is on by default with Azure SQL Database but not in on-prem SQL Server versions for backwards compatibility.
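A minimal sketch of turning it on for the testdb database from the question (the change waits for exclusive access to the database unless you force it):
ALTER DATABASE testdb SET READ_COMMITTED_SNAPSHOT ON;
-- or, if other connections are holding the database open:
-- ALTER DATABASE testdb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;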
After doing some research I just realized what the issue was. In SQL Server readers and writers actually do block each other sometimes. This is unlike Oracle where readers and writers never block each other.
Further Explanation
I am an Oracle DBA and do not know too much about SQL Server databases. My observation today came as a surprise for me because in Oracle I had never seen a select query get blocked by an insert statement, which is because as per Oracle's documentation:
Readers and writers do not block one another in Oracle Database.
Therefore, while queries still see consistent data, both read
committed and serializable isolation provide a high level of
concurrency for high performance, without the need for reading
uncommitted data.
This is entirely different from SQL Server where read queries may be blocked under certain circumstances.
I am facing a problem with a particular table in my database. Rows are being deleted for no apparent reason (I have some procedures and triggers that modify the information inside the table, but they have already been tested).
So I need to see which DML statements are executed against the table.
I have already tried some methods, like using this query:
select SQL_FULLTEXT, FIRST_LOAD_TIME, ROWS_PROCESSED, PARSING_SCHEMA_NAME from v$sql;
filtering by the name of my table, and I have tried the SQL log.
Neither method shows me the complete history of SQL executed (for example, I can't see the statements executed by the procedures).
Can anyone give me some advice on where I can see ALL the DML executed in the database?
You're using a few terms that aren't defined within the context of Oracle Database, both 'sentence' and 'register.'
However.
If you want to see WHO is touching your data in a bad place, causing it to be deleted or changed, then you have 2 options.
Immediately, check your REDO logs. We have a package, dbms_logmnr, that will allow you to see what activity has been logged. Assuming that your tables weren't created with NOLOGGING clause, those UPDATEs and DELETEs should be recorded.
Tim has a nice article on this feature here.
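As a rough sketch only (the redo log path and the table name are placeholders; Tim's article covers the full procedure):
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/app/oracle/oradata/redo01.log',  -- placeholder path
                          OPTIONS     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/
SELECT username, timestamp, operation, sql_redo
FROM   v$logmnr_contents
WHERE  table_name = 'MY_TABLE'           -- placeholder table name
AND    operation IN ('DELETE', 'UPDATE');
BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/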
The better solution going forward is AUDITING. You'll want to enable auditing in the database to record WHO is doing WHAT to your tables/data. This is included as part of the Enterprise Edition of the database. There is a performance hit: the more you decide to record, the more resources it will require. But it will probably be worth paying that price. And of course you'll have to manage the space required to maintain those logs.
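A minimal sketch, assuming traditional auditing with the AUDIT_TRAIL initialization parameter set to DB (the schema and table names are placeholders):
AUDIT INSERT, UPDATE, DELETE ON my_schema.my_table BY ACCESS;
-- later, to review what happened and by whom:
SELECT username, timestamp, action_name, obj_name
FROM   dba_audit_trail
WHERE  obj_name = 'MY_TABLE'
ORDER  BY timestamp;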
Now, as to SQL Developer and its HISTORY feature: it ONLY records what you are executing in a SQL Worksheet. It won't see what others are doing, so it can't help you here - unless this is a one-man database and you're only making changes with SQL Developer. Even then, it wouldn't be reliable, as the history has a size limit and only records changes done via the Worksheet.
So I'm about to create a simple site where users can input their own SQL queries, which I will be running on the server side.
I'm aware of SQL injection attacks and assume this could be a fairly risky thing to do.
But (if there is any) what would be a safe way to allow this feature?
e.g. I can think of the following rules I can enforce.
Allow users to only "SELECT" - never allow UPDATE, DELETE (or anything else).
Allow users to only access certain tables (if I know them).
Are there any other security measures I should take?
As well as security issues, performance might be a problem as pointed out by ElectricLlama. You might want to look into getting the query's execution plan in advance, and refusing to run if it looks like the query would be too expensive:
How do I obtain a Query Execution Plan?
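One possible sketch, using SET SHOWPLAN_XML so the submitted query is compiled but never executed (the query itself is just a placeholder):
SET SHOWPLAN_XML ON;
GO
-- the user-supplied query goes here (placeholder):
SELECT * FROM dbo.SomeTable WHERE SomeColumn = 1;
GO
SET SHOWPLAN_XML OFF;
GO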
Further to the above, make sure the user is added to the db_datareader role, and that will make them read-only. I guess you will be using a single user to perform the database actions.
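A minimal sketch of such a single read-only user (all names are illustrative; ALTER ROLE ... ADD MEMBER needs SQL Server 2012 or later, use sp_addrolemember on older versions):
CREATE LOGIN report_reader WITH PASSWORD = 'UseAStrongPasswordHere1!';
GO
CREATE USER report_reader FOR LOGIN report_reader;
ALTER ROLE db_datareader ADD MEMBER report_reader;   -- SELECT on all user tables
-- or, to restrict to specific tables instead:
-- GRANT SELECT ON dbo.Table1 TO report_reader;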
You can still cause a database failure if the user accidentally does a couple of cross joins:
SELECT *
FROM Table1,Table2,Table3
WHERE Table1.Field1=Table2.Field2
-- Oops I forgot to enter the Table3 Join condition
-- so now I get a cross join which will cause havoc
-- if the tables are of appreciable size
But anyway it's still risky! Someone will find a security hole!
I would like to know if there is an inherent flaw with the following way of using a database...
I want to create a reporting system with a web front end, whereby I query a database for the relevant data, and send the results of the query to a new data table using "SELECT INTO". Then the program would make a query from that table to show a "page" of the report. This has the advantage that if there is a lot of data, this can be presented a little at a time to the user as pages. The same data table can be accessed over and over while the user requests different pages of the report. When the web session ends, the tables can be dropped.
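Concretely, the flow I have in mind is something like this (all names here are just illustrative):
SELECT o.OrderID, o.CustomerName, o.Total
INTO   dbo.Report_Session123             -- one work table per web session
FROM   dbo.Orders AS o
WHERE  o.OrderDate >= '20230101';
-- serve one page at a time while the session is alive (SQL Server 2012+ paging syntax):
SELECT OrderID, CustomerName, Total
FROM   dbo.Report_Session123
ORDER  BY OrderID
OFFSET 0 ROWS FETCH NEXT 50 ROWS ONLY;
-- when the web session ends:
DROP TABLE dbo.Report_Session123;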
I am prepared to program around issues such as tracking the tables and ensuring they are dropped when needed.
I have a vague concern that over a long period of time, the database itself might develop some form of maintenance problem, due to having created and dropped so many tables. Even day by day, let's say perhaps 1000 such tables are created and dropped.
Does anyone see any cause for concern?
Thanks for any suggestions/concerns.
Before you start implementing your solution consider using SSAS or simply SQL Server with a good model and properly indexed tables. SQL Server, IIS and the OS all perform caching operations that will be hard to beat.
The cause for concern is that you're trying to write code that will outperform SQL Server and IIS... This is a classic example of premature optimization. Thousands and thousands of programmer hours have been spent on making sure that SQL Server and IIS are as fast and efficient as possible, and it's not likely that your strategy will get better performance.
First of all: +1 to @Paul Sasik's answer.
Now, to answer your question (if you still want to go with your approach).
Possible causes of concern if you use VARBINARY(MAX) columns (from the MSDN)
If you drop a table that contains a VARBINARY(MAX) column with the
FILESTREAM attribute, any data stored in the file system will not be
removed.
If you do decide to go with your approach, I would use global temporary tables. They should get DROPped automatically when there are no more connections using them, but you can still DROP them explicitly.
In your query you can check if they exist or not and create them if they don't exist (any longer).
IF OBJECT_ID('tempdb..##temp') IS NULL   -- global temp tables live in tempdb, not in your database
-- create the temp table and perform your query
This way, you have most of the logic to perform your queries and manage the temporary tables together, which should make it more maintainable. Plus, they're built to be created and dropped, so it's quite safe to assume SQL Server would not be impacted in any way by creating and dropping a lot of them.
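A slightly fuller sketch of that pattern (the table and column names are only illustrative):
IF OBJECT_ID('tempdb..##ReportData') IS NULL
    SELECT OrderID, Total
    INTO   ##ReportData
    FROM   dbo.Orders;
-- page from the already-materialized table:
SELECT OrderID, Total
FROM   ##ReportData
ORDER  BY OrderID;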
1000 per day should not be a concern if you're talking about small tables.
I don't know SQL Server, but in Oracle you have the concept of a temporary table (small article and another). The data inserted in this type of table is available only to the current session; when the session ends, the data "disappears". In this case you don't need to drop anything. Every user inserts into the same table, and their data is not visible to others. Advantage: less maintenance.
You may check whether you have something similar in SQL Server.
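For reference, an Oracle sketch of such a table (names are illustrative):
CREATE GLOBAL TEMPORARY TABLE report_work
(
  order_id NUMBER,
  total    NUMBER
)
ON COMMIT PRESERVE ROWS;   -- rows survive commits but disappear when the session ends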