I'm using SQL Server 2012 and have a database already created in which I have four main tables. They are linked through a view that fires one trigger after another, leading from one table to the next; the triggers move the needed info from one table to another.
As you may imagine, the main issue is the number of triggers the database fires: in the end, a pop-up window shows a time-out warning related to the database. What could be the solution for this?
I've thought of creating a block that contains a cursor and, within it, another cursor, instead of creating a chain of triggers. How could I do that with cursors in the database, and what's the correct syntax?
Is there any better idea than that one? I haven't tried it, as I don't know how, and maybe I'd keep having the same time-out issue, but I don't have any other idea and wanted to give it a try.
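For reference, a minimal nested-cursor sketch in T-SQL looks like this (the table and column names here are hypothetical placeholders, not from the actual database):
DECLARE @id int, @line int;
DECLARE outer_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT OrderId FROM dbo.Orders;
OPEN outer_cur;
FETCH NEXT FROM outer_cur INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    DECLARE inner_cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT LineId FROM dbo.OrderLines WHERE OrderId = @id;
    OPEN inner_cur;
    FETCH NEXT FROM inner_cur INTO @line;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- move the needed info here instead of relying on chained triggers
        FETCH NEXT FROM inner_cur INTO @line;
    END
    CLOSE inner_cur;
    DEALLOCATE inner_cur;
    FETCH NEXT FROM outer_cur INTO @id;
END
CLOSE outer_cur;
DEALLOCATE outer_cur;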
I'm looking for a method or solution that allows a table to be updated while others are running select queries against it.
We have an MS SQL Database storing tables which are linked through ODBC to an Access Database front-end.
We're trying to run an update on one of these linked tables, but it is often interrupted by users running select statements against the table to look at data through forms inside Access.
Is there a way to maybe create a copy of this database table for the users to look at so that the table can still be updated?
I was thinking maybe a transaction but can you perform transactions for select statements? Do they work that way?
The error we get inside Access when we try to run the update while a user has the table open is:
Any help is much appreciated,
Cheers
As a general rule, this should not be occurring. Those reports should not lock the table nor prevent the SQL engine from allowing inserts.
For a quick fix, you can (should) link the reports to some SQL Server views as their source. And use this for the view:
SELECT * from tblHotels WITH (NOLOCK)
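Wrapping that in a server-side view might look like this (the view name is my own placeholder):
CREATE VIEW dbo.vHotels
AS
SELECT * FROM dbo.tblHotels WITH (NOLOCK);
Then re-link the Access reports to dbo.vHotels instead of the table. Keep in mind that NOLOCK allows dirty reads, which is usually acceptable for reporting.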
In fact, in MOST cases this locking occurs because combo boxes are driven by a larger table from SQL Server - if the query does not complete (and Access has the nasty ability to STOP the flow of data), then you get a SQL Server table lock.
You can also see the above "holding" of a lock when you launch a form with a LARGE dataset. If Access does not finish pulling the table/query from SQL Server - again, a holding lock on the table can remain.
However, as a general rule I have NOT seen this occur for reports.
That said, it is not at all clear how the reports are being used and how their data sources are set up.
But, as noted, the quick fix is to create some views for the reports, and use the no-lock hint as per above. That will prevent the tables from holding locks.
Another HUGE idea? If the reports often use some date range or other criteria, MAKE 100% sure that SQL Server has an index on the filter criteria. If you don't, then SQL Server will scan/lock the whole table. This advice ALSO applies VERY much to, say, a form in which you filter - put indexing (SQL Server side) on those commonly used columns.
And in fact, as for the notes about the combo box above? We found that JUST adding an index on the sort column used in the combo box made most if not all locking issues go away.
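A sketch of what that indexing might look like, reusing the hotel table from above (the column names are placeholders):
CREATE NONCLUSTERED INDEX IX_tblHotels_City ON dbo.tblHotels (City);
CREATE NONCLUSTERED INDEX IX_tblHotels_BookingDate ON dbo.tblHotels (BookingDate);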
Another fix that often works - and requires ZERO changes to the MS Access client-side software?
You can change this on the server:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
The above will also, in most cases, fix the locking issue.
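Worth noting: that statement takes effect per session, so a sketch of its use (reusing the hotel table from above) would be:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
-- subsequent SELECTs on this connection take no shared locks (dirty reads are possible)
SELECT * FROM tblHotels;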
I am facing a problem with a particular table in my database. Rows are being deleted for no apparent reason (I have some procedures and triggers that modify the information inside the table, but they are already tested).
So I need to see which DML statements are executed against the table.
I have already tried some methods, like using this query:
select SQL_FULLTEXT, FIRST_LOAD_TIME, ROWS_PROCESSED, PARSING_SCHEMA_NAME from v$sql;
filtering by the name of my table, and I have also tried the SQL log.
Neither method shows me the complete history of SQL executed (for example, I can't see the statements executed by the procedures).
Can anyone give me some advice on where I can see ALL the DML executed against the database?
You're using a few terms that aren't defined within the context of Oracle Database, both 'sentence' and 'register.'
However.
If you want to see WHO is touching your data in a bad place, causing it to be deleted or changed, then you have 2 options.
Immediately, check your REDO logs. We have a package, dbms_logmnr, that will allow you to see what activity has been logged. Assuming your tables weren't created with the NOLOGGING clause, those UPDATEs and DELETEs should be recorded.
Tim has a nice article on this feature here.
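A minimal LogMiner sketch, assuming the online catalog is usable as the dictionary (the redo log path and table name are placeholders):
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/app/oracle/redo01.log', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
-- SQL_REDO shows the statement that made each change; filter by your table
SELECT username, operation, sql_redo FROM v$logmnr_contents WHERE table_name = 'YOUR_TABLE';
EXECUTE DBMS_LOGMNR.END_LOGMNR;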
The better solution going forward is AUDITING. You'll want to enable auditing in the database to record WHO is doing WHAT to your tables/data. This is included as part of the Enterprise Edition of the database. There is a performance hit: the more you decide to record, the more resources it will require. But it will probably be worth paying that price. And of course you'll have to manage the space required to maintain those logs.
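A minimal sketch of classic object auditing, assuming the AUDIT_TRAIL initialization parameter is set to DB (schema and table names are placeholders):
AUDIT UPDATE, DELETE ON your_schema.your_table BY ACCESS;
-- then review what was recorded
SELECT username, action_name, timestamp FROM dba_audit_trail WHERE obj_name = 'YOUR_TABLE';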
Now, as to 'SQL Developer' and its HISTORY feature: it ONLY records what you are executing in a SQL Worksheet. It won't see what others are doing, so it can't help you here - unless this is a one-man database and you're only making changes with SQL Developer. Even then, it wouldn't be reliable, as it has a limit and only records changes made via the Worksheet.
I am attempting to use SQL Schema Compare in Visual Studio 2013/15 and am running into the problem that excluding tables from the delete removes them from being processed at all.
The issue is that the tables it is trying to delete are customer-made tables, so when we sync our version against their databases it asks to delete them. We do not want to delete them, but some of their tables have constraints on ours, so when it attempts to CCDR it fails due to table constraints. Is there a way to add the table to be re-created like the rest of them, without writing scripts for each client to do what SQL Schema Compare already does, just for those few tables?
Red Gate's SQL Compare does this somehow, but it's hidden from us, so we're not quite sure how it's achieved. Excluding doesn't delete, but doesn't error on the script either.
UPDATE:
The option "Drop constraints not in source" does not appear to work correctly. It does drop some, however there are others that it just does not drop the constraints. In red-gate's tool, when we compared I found how to get the SQL from it, and their product doesn't say the table needs to be updated at all, while Visual Studio's does. They seem to work almost identical, but the tables that fail are the ones that shouldn't be update at all (read below)
Update 2:
Another problem I've found is that "Ignore column collation" also doesn't work correctly: tables that shouldn't be getting dropped are being flagged as needing updates even though it's only the order of the columns that changed, not the actual columns or data, which makes this feel more like a bug report than anything.
My suggestion with these types of advanced data operations is to not use Visual Studio. Put the logic on the SQL engine and write the code for this in SQL. Because of the multi-user locking issues of a SQL engine, these types of processes are prone to fail when the wrong combinations of user actions happen at the same time. The Visual Studio tool cannot deal with the data-locking issues caused by changing records the way the SQL engine can. Even if you get this to work, it will only be safe to run in single-user mode.
It is a nice tool to use, easier than writing SQL, but there are huge reliability and consistency risks in going down this path.
I don't know if this will help, but I've found this paragraph on the following page:
https://msdn.microsoft.com/en-us/library/hh272690(v=vs.103).aspx
The update will fail because our change involves changing a column from NOT NULL to NULL and as a result causes data loss. If you want to proceed with the update, click on the Options button (the fifth one from the left) on the toolbar for the Schema Compare and uncheck the block incremental deployment if data loss option.
I am trying to find an ideal way to automatically copy new records from one database to another. The databases have different structures! I achieved it by writing VBS scripts that copy the data from one to the other, triggered from another application that passes arguments to the scripts. But I ran into issues at points where there were more than 100 triggers, i.e. 100 wscript processes trying to access the database, and they couldn't complete the task.
I want to find a simpler solution inside SQL Server. I have read about triggers, stored procedures run from SQL Server Agent, replication, etc. The requirement is that I have to copy records to the other database periodically, or whenever a new record is inserted.
Which method will suit me the best?
You can use CDC to do this. Create an SSIS package using CDC and run that package periodically through a SQL Server Agent job. CDC will store all the changes to that table and apply them to the destination table when you run the package. Please follow the link below.
http://sqlmag.com/sql-server-integration-services/combining-cdc-and-ssis-incremental-data-loads
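For reference, enabling CDC is a couple of system procedure calls run in the source database (the schema and table names here are placeholders; SQL Server Agent must be running for the capture job):
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = NULL;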
The word "periodically" in your question suggests that you should go for jobs. You can schedule jobs in SQL Server using SQL Server Agent and assign a frequency; the job will then run your script at that frequency.
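A sketch of creating such a job in T-SQL (the job, step, schedule, and procedure names are placeholders; this can also be done through the SSMS UI):
EXEC msdb.dbo.sp_add_job @job_name = N'CopyNewRecords';
EXEC msdb.dbo.sp_add_jobstep @job_name = N'CopyNewRecords',
    @step_name = N'Copy step', @subsystem = N'TSQL',
    @command = N'EXEC dbo.usp_CopyNewRecords;', @database_name = N'SourceDb';
EXEC msdb.dbo.sp_add_schedule @schedule_name = N'Every15Minutes',
    @freq_type = 4, @freq_interval = 1,
    @freq_subday_type = 4, @freq_subday_interval = 15;
EXEC msdb.dbo.sp_attach_schedule @job_name = N'CopyNewRecords', @schedule_name = N'Every15Minutes';
EXEC msdb.dbo.sp_add_jobserver @job_name = N'CopyNewRecords';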
PrabirS: Change Data Capture
This is a good option, because it uses the transaction log to create something similar to Command Query Responsibility Segregation (CQRS).
Alok Gupta: A SQL Job that runs in the SQL Agent
This too is a good option, given that you have something like a modified-date column so that you can filter for altered data. You can create a stored procedure and let it run regularly via SQL Agent, as sketched below.
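A minimal sketch of that stored procedure, assuming a ModifiedDate column on the source and a one-row watermark table to remember the last run (all names are placeholders):
CREATE PROCEDURE dbo.usp_CopyNewRecords
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @lastRun datetime2 = (SELECT LastRun FROM dbo.SyncWatermark);
    INSERT INTO TargetDb.dbo.Customers (Id, Name, ModifiedDate)
    SELECT Id, Name, ModifiedDate
    FROM SourceDb.dbo.Customers
    WHERE ModifiedDate > @lastRun;
    UPDATE dbo.SyncWatermark SET LastRun = SYSUTCDATETIME();
END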
A third option could be triggers (the change will happen in the same transaction).
This option is useful for auditing and logging, but you should definitely avoid writing business logic in triggers, as triggers are more or less hidden and fire without being called directly (similar to CDC, actually). I actually created a trigger about half a year ago that captured data and inserted it elsewhere in XML format, because the columns in the original table could change over time (multiple projects using the same database(s)).
-Edit-
By the way, your question more or less suggests a lack of a clear design pattern, and that the technique used is not the main problem. You could try reading up on how an ETL layer is built, or try to implement a "separation of concerns". Note: it is hard to tell whether this is the case, but given how you formulated your question, an unclear design pops into my mind as a possible problem.
I am writing a trigger to audit updates and deletes in tables. I am using SQL Server 2008.
My questions are,
Is there a way to find out what action is being taken on a record without having to select from the deleted and inserted tables?
Another question is: if the record is being deleted, how do I record in the audit table the user who is performing the delete? (NOTE: the connection to the database uses a general connection string with a set user; I need the user who is logged into either a web app or a Windows app.)
Please help?
For part one, you can either set up separate triggers or have one trigger that checks the special tables INSERTED and DELETED to discriminate between updates and deletes.
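A minimal single-trigger sketch of the second approach (the table and audit-table names are placeholders): in an UPDATE both pseudo-tables have rows, while in a DELETE only DELETED does.
CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT 1 FROM inserted)
        -- rows in both inserted and deleted: this is an UPDATE
        INSERT INTO dbo.OrdersAudit (OrderId, AuditAction, AuditDate)
        SELECT OrderId, 'UPDATE', GETDATE() FROM deleted;
    ELSE
        -- rows only in deleted: this is a DELETE
        INSERT INTO dbo.OrdersAudit (OrderId, AuditAction, AuditDate)
        SELECT OrderId, 'DELETE', GETDATE() FROM deleted;
END
Because it selects from the whole DELETED set, this also stays correct for multi-row statements.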
For part two, there's no way around it in this case: you're going to have to get that username to the database somehow via your web/Windows app. Unfortunately you can't communicate with the trigger itself, and with a generic connection string the DB doesn't have any idea who it's dealing with.
I've found that it can be helpful to add a "LastModifiedBy" column to the tables that you plan to audit so that you can store that info on the original tables themselves. Then your trigger just copies that info into the audit table. This is also nice because if you only need to know who the last person to touch something was you don't have to look in the audit table at all, just check that one column.
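For instance, that might look like this (the table and column names are placeholders):
ALTER TABLE dbo.Orders ADD LastModifiedBy nvarchar(128) NULL;
-- the audit trigger can then carry the value across from the DELETED pseudo-table:
-- INSERT INTO dbo.OrdersAudit (OrderId, ChangedBy) SELECT OrderId, LastModifiedBy FROM deleted;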
Consider this: if you don't actually delete records but instead add a field to mark them as deleted, you can get the user from the last-modified information. If you want to actually delete records, you can have a nightly job that deletes them in a batch, not one at a time. This could even be set up to flag if too many records are being deleted, and not run.
The easiest way to do this so that nothing breaks is to rename the table, add an IsDeleted column as a bit field, and then create a view with the name the table was originally called. The view will select all the records where IsDeleted is null.
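A sketch of that rename-and-view trick (all names are placeholders):
EXEC sp_rename 'dbo.Orders', 'Orders_Data';
ALTER TABLE dbo.Orders_Data ADD IsDeleted bit NULL;
GO
-- the view keeps the old table name, so existing queries continue to work
CREATE VIEW dbo.Orders
AS
SELECT OrderId, CustomerId, OrderDate
FROM dbo.Orders_Data
WHERE IsDeleted IS NULL;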
Don't let anyone talk you out of using triggers for this. You don't want people who are making unauthorized changes to be able to escape the auditing. With a trigger (and no rights for anyone except a production DBA to alter the table in any way), no one except the DBA can delete without being audited. In a typical system with no stored procedures limiting direct table access, far too many people can directly affect a table, opening it wide for fraud. People committing fraud do not typically use the application they are supposed to use to change the data. You must protect data at the database level.
When you write your triggers, make sure they can handle multi-row inserts/updates/deletes. Triggers operate on the whole set of changed data, not one row at a time.
As roufamatic said, you can either set up triggers specific to each action or you can check against the INSERTED and DELETED tables.
As for the deleting user, it is possible to pass that information into the trigger as long as the code in your application handles it. I encountered this requirement about a year ago with a client, and the solution I came up with was to use SET CONTEXT_INFO and CONTEXT_INFO() to pass the user name along. All of our database access was through stored procedures, so I just needed to add a line or two of code to the delete stored procedures to SET CONTEXT_INFO, then I changed the delete triggers to get the user from CONTEXT_INFO(). The user name had to be passed as a parameter from the application, of course. If you aren't using stored procedures, you might be able to just do the SET CONTEXT_INFO in the application, though I don't know how connection pooling might affect that method. Obviously, if someone does a delete outside of the application there wouldn't be a record of that unless you also separately captured USER_NAME() in your trigger (probably a good idea, although it wasn't necessary for our audit log, which was more for reporting than security).
There was a little bit of trickiness because CONTEXT_INFO is a binary string, but it didn't take long to get that all sorted out.
I'm afraid that I don't have any of the code handy since it was for a past client. If you run into any problems after going through the help for CONTEXT_INFO and SET CONTEXT_INFO then feel free to post here and I'll see what I can remember.
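From memory, the pattern looked roughly like this (the parameter and variable names are placeholders):
-- in the delete stored procedure, before the DELETE:
DECLARE @ctx varbinary(128) = CAST(@AppUserName AS varbinary(128));
SET CONTEXT_INFO @ctx;
-- in the delete trigger, read it back:
DECLARE @who nvarchar(64) = CAST(CONTEXT_INFO() AS nvarchar(64));
-- CONTEXT_INFO is zero-padded to 128 bytes, so trailing NCHAR(0) padding may need trimming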
To find out what action is being taken, you can use the INSERTED and DELETED tables to compare before and after values. There is no magic way to tell which user of a web app has made a change; the usual method is to have a modified-by column in your table and have the web app code populate it with the relevant username.