Does SET IDENTITY_INSERT [Table] ON persist beyond the scope of a SQL script? I'm wondering if I need to explicitly set it to OFF, or if SQL Server knows that it should only use that setting for the current script.
Thanks!
Yes, it does persist beyond the current batch.
It doesn't, however, persist beyond the current session, so if you disconnect immediately after running it, you don't need to change it.
As an aside, it can only be ON for one table at a time per session, and, as Aaron points out in a comment below, SQL Server will throw an error if you try to turn it on for a second table in the same session.
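A quick T-SQL sketch of that behavior; the table names here are made up, and both tables are assumed to have identity columns:
SET IDENTITY_INSERT dbo.Orders ON;
GO
-- Still in effect in this new batch (same session), so an explicit identity value is accepted:
INSERT INTO dbo.Orders (OrderID, OrderDate) VALUES (1000, GETDATE());
GO
-- This fails, because IDENTITY_INSERT is already ON for dbo.Orders in this session:
SET IDENTITY_INSERT dbo.Customers ON;
GO
-- Turn it off explicitly rather than relying on the session ending:
SET IDENTITY_INSERT dbo.Orders OFF;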
Is there any way to undo an update in PostgreSQL?
I have used this query to update a column
UPDATE dashboard.inventory
SET address = 'adf'
WHERE address @@ to_tsquery('makati')
But I made a huge, stupid mistake, because it ran against the wrong column.
If you are inside a transaction block you can use ROLLBACK.
If you have already committed or did it in autocommit mode, then no.
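For next time, a minimal sketch of the safer pattern, using the statement from the question: run the UPDATE inside an explicit transaction, check the result, and only then commit or roll back.
BEGIN;
UPDATE dashboard.inventory
SET address = 'adf'
WHERE address @@ to_tsquery('makati');
-- Inspect what changed before making it permanent:
SELECT * FROM dashboard.inventory WHERE address = 'adf';
ROLLBACK;  -- or COMMIT once you are sure the change is correct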
The data is perhaps still in your database, just not visible. But autovacuum may soon clear it out, if it hasn't already. To best preserve your options, immediately stop your database in immediate mode and take a complete file-level backup. You could then hire a specialist firm to recover it from that backup, if you decide to go that route.
If you use WAL archiving, you could set up a copy of the database using point-in-time recovery, restored to just before the error, then use that copy to extract the lost column to a file, and then use that file to repopulate the column in your real database.
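Once you have that point-in-time copy, the extract-and-repopulate step could look roughly like this; the id key column and the file path are assumptions, not from the question:
-- On the restored copy, export the good values with psql:
--   \copy (SELECT id, address FROM dashboard.inventory) TO 'inventory_address.csv' CSV
-- On the live database, load them into a staging table and join back:
CREATE TEMP TABLE inventory_fix (id integer, address text);
-- \copy inventory_fix FROM 'inventory_address.csv' CSV
UPDATE dashboard.inventory AS i
SET address = f.address
FROM inventory_fix AS f
WHERE i.id = f.id;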
What is the appropriate syntax to set max_string_size = 'EXTENDED' in v$parameter?
I tried:
ALTER SYSTEM set value='EXTENDED',display_value='EXTENDED'
WHERE NAME='max_string_size';
But I get:
ORA-02065: illegal option for ALTER SYSTEM
Thanks.
UPDATE:
After this change, we get errors on the Concurrent Request form when we go to View Details: FRM-41072: Cannot create Group job_notify and FRM-41076: Error populating Group. Has anyone else seen this and resolved it? Per the Metalink ticket the change is irreversible; the only way to fix it is to restore from backup.
You are mixing SQL query syntax into the ALTER SYSTEM command; you need to use this format:
alter system set max_string_size='EXTENDED';
See https://docs.oracle.com/database/121/SQLRF/statements_2017.htm#i2282157
Adding a note from William's comment: this is a fundamental change to the database, so you need to test it thoroughly, and a full backup before making the change is important. This is also why you cannot make the setting take effect immediately. PL/SQL code such as triggers may need to be reviewed as well.
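For reference, a rough sketch of the documented sequence for a non-CDB database (multitenant and RAC environments have extra steps); as noted above, take a full backup and test on a non-production system first:
SHUTDOWN IMMEDIATE
STARTUP UPGRADE
ALTER SYSTEM SET max_string_size = EXTENDED;
-- utl32k.sql adjusts existing dictionary objects for the extended limit:
@?/rdbms/admin/utl32k.sql
SHUTDOWN IMMEDIATE
STARTUP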
For anyone else considering this change, know that the option is not compatible with EBS. It causes some odd behavior, which does not go away even after setting max_string_size back to STANDARD.
If you use EBS, as others have advised, do not apply this change to your system.
We were not able to find a way to eradicate the problem this change caused and ended up restoring the test system from backup.
I've got some EF/LINQ statements that need to be case-insensitive text searches, but our Oracle database is case-sensitive. How can I execute the necessary ALTER SESSION statement at the connection/command level so that it will affect the subsequent same-context calls?
The command I think I need to run (from an OTN thread):
ALTER SESSION SET NLS_SORT=BINARY_CI
I'm aware of both Database.ExecuteSqlCommand and Database.Connection.CreateCommand as methods, but I can't figure out the 'when'. If I try to do this manually on the context after creation but before the LINQ query, I have to open and close the connection myself, and then it seems to run in a different transaction from the LINQ query, so the setting doesn't seem to apply.
Technically this is not a solution to your question of how to inject ALTER SESSION SET NLS_SORT=BINARY_CI into a query, but it may help with your case-insensitive search: just use .ToLower().
The first option would be to ask the DBA to add a logon trigger to the DB account you are using. The disadvantage there is twofold: one, it will apply to every session on that account, and two, the DBA will laugh at you for not simply using the de facto Oracle approach of UPPER() on everything.
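If you do go the logon-trigger route, here is a sketch of what the DBA might create; the APP_USER schema name is a placeholder, and NLS_COMP=LINGUISTIC is included because comparisons (not just sorts) need it to be case-insensitive:
CREATE OR REPLACE TRIGGER app_user_logon
AFTER LOGON ON app_user.SCHEMA
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET NLS_SORT = BINARY_CI';
  EXECUTE IMMEDIATE 'ALTER SESSION SET NLS_COMP = LINGUISTIC';
END;
/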
These guys seemed to pull it off using ExecuteStoreCommand off of the context. I'm not a fan of EF so I can't help much here, but I'd guess you'd need to execute your LINQ query inside that same context?
http://blogs.planetsoftware.com.au/paul/archive/2012/07/31/ef4-part-10-database-agnostic-linq-to-entities-part-2.aspx
You may be able to use one of the "executing" methods in the command interception feature in EF:
http://msdn.microsoft.com/en-us/data/dn469464#BuildingBlocks
I have a T-SQL script that is used to set up a database as part of my product's installation. It takes a number of steps which altogether take five minutes or so. Sometimes this script fails on the last step because the user running the script does not have sufficient rights to the database. In this case I would like the script to fail straight away. To do this I want the script to test what rights it has up front. Can anyone point me at a general-purpose way of testing whether the script is running with a particular security permission?
Edit: In the particular case I am looking at it is trying to do a backup, but I have had other things go wrong and was hoping for a general purpose solution.
select * from fn_my_permissions(NULL, 'SERVER')
This gives you a list of permissions the current session has on the server
select * from fn_my_permissions(NULL, 'DATABASE')
This gives you a list of permissions for the current session on the current database.
See the documentation for fn_my_permissions for more information.
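For the specific backup failure mentioned in the edit, you could also probe a single permission up front with HAS_PERMS_BY_NAME and stop the script early; the message text and the use of SET NOEXEC here are just one way to do it:
IF HAS_PERMS_BY_NAME(DB_NAME(), 'DATABASE', 'BACKUP DATABASE') = 0
BEGIN
    RAISERROR('Current login lacks BACKUP DATABASE permission; aborting setup.', 16, 1);
    SET NOEXEC ON;  -- remaining statements are compiled but not executed
END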
I assume it is failing on an update or insert after a long series of selects.
Just try a simple update or insert inside a transaction. Hard-code the row ID, or whatever makes it simple and fast.
Don't commit the transaction; roll it back instead.
If you don't have rights to do the insert or update, this should fail. If you DO, it will roll back and not cause a permanent change.
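A sketch of that probe, with a hypothetical table and key:
BEGIN TRANSACTION;
    -- A no-op style update against a known row; this fails fast if the script lacks UPDATE rights:
    UPDATE dbo.TargetTable SET SomeColumn = SomeColumn WHERE Id = 1;
ROLLBACK TRANSACTION;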
Try the last insert/update up front with a WHERE condition that matches no rows, for example (the table and column names here are placeholders):
UPDATE dbo.TargetTable SET SomeColumn = SomeColumn WHERE 1 = 2;
IF @@ERROR <> 0
    RAISERROR('no permissions', 16, 1);
This would not cause any harm, but it would raise a flag up front about the lack of rights.
For a large database (thousands of stored procedures) running on a dedicated SQL Server, is it better to include SET NOCOUNT ON at the top of every stored procedure, or to set that option at the server level (Properties -> Connections -> "no count" checkbox)? It sounds like the DRY Principle ("Don't Repeat Yourself") applies, and the option should be set in just one place. If the SQL Server also hosted other databases, that would argue against setting it at the server level because other applications might depend on it. Where's the best place to SET NOCOUNT?
Make it the default for the server (which it would be except for historical reasons). I do this for all servers from the start. Ever wonder why it's SET NOCOUNT ON instead of SET COUNT OFF? It's because way back in the Sybase days the only UI was the CLI, and it was natural to show the count when a query might return no results and therefore give no other indication that it had completed.
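If you do make it the server default, NOCOUNT is the 512 bit of the 'user options' setting; here is a sketch that preserves whatever other option bits are already set:
DECLARE @opts int;
SELECT @opts = CAST(value_in_use AS int) FROM sys.configurations WHERE name = 'user options';
SET @opts = @opts | 512;   -- 512 is the NOCOUNT bit
EXEC sp_configure 'user options', @opts;
RECONFIGURE;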
Since it is a dedicated server I would set it at the server level to avoid having to add it to every stored procedure.
The only issue that would come up is if you wanted a stored procedure that did not use NOCOUNT.