What is the appropriate syntax to set max_string_size = 'EXTENDED' in v$parameter?
I tried:
ALTER SYSTEM set value='EXTENDED',display_value='EXTENDED'
WHERE NAME='max_string_size';
But I get:
ORA-02065: illegal option for ALTER SYSTEM
Thanks.
UPDATE:
After this change, we get errors on the Concurrent Request form when we go to View Details: FRM-41072: Cannot create Group job_notify and FRM-41076: Error populating Group. Has anyone else seen this and resolved it? Per a Metalink ticket the change is irreversible; the only way to fix it is to restore from backup.
You are mixing SQL query syntax into the ALTER SYSTEM command; you need to use this format:
alter system set max_string_size='EXTENDED';
See https://docs.oracle.com/database/121/SQLRF/statements_2017.htm#i2282157
Adding a note from William's comment: this is a fundamental change to the database, so you need to test it thoroughly. A full backup before making the change is important, and this is why you cannot make the setting effective immediately. There may be PL/SQL code, such as triggers, that needs to be reviewed.
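For reference, EXTENDED cannot simply be switched on in a normally open database; a rough sketch of the documented 12c procedure (run as SYSDBA; exact steps may vary by version, so test on a disposable system and take a full backup first, since the change is one-way):
SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;
ALTER SYSTEM SET max_string_size = EXTENDED;
@?/rdbms/admin/utl32k.sql   -- converts affected dictionary objects; may take a while
SHUTDOWN IMMEDIATE;
STARTUP;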
For anyone else considering this change, know that the option is not compatible with EBS. It causes some odd behavior, which does not go away even after setting max_string_size back to STANDARD.
If you use EBS, as others have advised, do not apply this change to your system.
We were not able to find a way to eradicate the problem this change caused and ended up restoring the test system from backup.
Related
Is there any way to undo an UPDATE in PostgreSQL?
I have used this query to update a column
UPDATE dashboard.inventory
SET address = 'adf'
WHERE address @@ to_tsquery('makati')
but I made a huge, stupid mistake because it was on the wrong column.
If you are inside a transaction block, you can use ROLLBACK.
If you have already committed or did it in autocommit mode, then no.
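A minimal sketch of the transaction-block case:
BEGIN;
UPDATE dashboard.inventory
SET address = 'adf'
WHERE address @@ to_tsquery('makati');
-- spotted the mistake before committing, so:
ROLLBACK;   -- the UPDATE is undone; nothing was made permanent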
The data is perhaps still in your database, just not visible, but autovacuum may soon clear it out if it hasn't already. To best preserve your options, immediately stop your database in immediate mode and take a complete file-level backup. You could then hire a specialist firm to recover the data from that backup, if you decide to go that route.
If you use WAL archiving, you could set up a copy of the database using point-in-time recovery, restored to just before the error, then use that copy to extract the lost column to a file and repopulate the column in your real database.
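A sketch of those last two steps, assuming the PITR copy has been restored and the table has a primary key column named id (hypothetical):
-- on the restored copy, export the lost column from psql (file path is made up):
\copy (SELECT id, address FROM dashboard.inventory) TO '/tmp/address_backup.csv' CSV
-- on the real database, load the file and repopulate the column:
CREATE TEMP TABLE address_backup (id integer, address text);
\copy address_backup FROM '/tmp/address_backup.csv' CSV
UPDATE dashboard.inventory AS i
SET address = b.address
FROM address_backup AS b
WHERE i.id = b.id;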
I have a really strange problem on my SQL Server.
Every night 2 tables, that I have recently created, are being automatically truncated...
I am quite sure, that it is truncate, as my ON DELETE Trigger does not log any delete transactions.
Additionally, using some logging procedures, I found out, that this happens between 01:50 and 01:52 at night. So I checked the scheduled Jobs on the server and did not find anything.
I have this problem only on our production server. That is why it is very critical. On the cloned test server everything works fine.
I have checked the transaction log entries (fn_dblog) but didn't find any truncate logs there.
I would appreciate any help or hints that will help me to find out process/job/user who truncates the table.
Thanks
From personal experience of this, as a first step I would look to determine whether this is occurring due to a DROP statement or a TRUNCATE statement.
To provide a possible answer, using SSMS, right click the DB name in Object Explorer, mouse over Reports >> Standard Reports and click Schema Changes History.
This will open up a simple report with the object name and type columns. Find the name of the table(s), click the + sign to expand, and it will provide you history of what has happened at the object level for that table.
If you find the DROP statement in there, then at least you know what you are hunting for, likewise if there is no DROP statement, you are likely looking for a TRUNCATE.
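Under the hood, that report reads the default trace, which you can also query directly; a sketch (the table name is a placeholder). Note that the default trace records object created/altered/deleted events but not TRUNCATE, which fits the reasoning above:
DECLARE @path nvarchar(260);
SELECT @path = [path] FROM sys.traces WHERE is_default = 1;

SELECT t.StartTime, t.LoginName, t.HostName, t.ApplicationName,
       t.ObjectName, e.name AS EventName
FROM fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS e ON t.EventClass = e.trace_event_id
WHERE t.ObjectName = N'YourTableName'   -- substitute the affected table
ORDER BY t.StartTime;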
Check with the query below:
declare @var as varchar(max) = 'tblname'
EXEC sp_depends @objname = @var;
It will return the names of the stored procedures that use your table; search them for any TRUNCATE statement you may have written by mistake.
Thanks a lot to everyone who has helped!
I've found the cause of the truncation: it was an external application.
So if you experience the same problem, my hint is to check your applications that could access the data.
I don't know if this can help you resolve the question, but I often encounter the following situation, where a value is silently truncated to the declared length. Look at this example:
declare @t varchar(5)
set @t = '123456'
select @t as output
Output: 12345
Is there some configuration option for MS SQL Server which enables more verbose error messages?
Specific example: I would like to see the actual field values of the inserted record which violates a constraint during an insert, to help track down a bug in stored procedures which I haven't been able to reproduce.
I don't believe there is any such option. There are trace flags that give more information about deadlocks, but I've never heard of one that gives more information on a constraint violation.
If you control the application that is causing the crash, then extend its handling (as Jenn suggested) to include parameter values etc. Once you have the parameter values, you can get a copy of the live setup on a non-live server and start debugging the issue.
For more options, can any of the users affected reliably reproduce the issue? If they can then you might be able to run a profiler trace to capture the actual statements / parameter values being sent to the database. Of course, if you can figure out the steps to reproduce the issue then you can probably use more traditional debugging methods...
You don't say what the constraint is, I'm assuming it is a fairly complex constraint. If so, could it be broken down into several constraints so you can get more of a hint about the problem with the data?
You could also re-write the constraint as a trigger which could then include more information in the error that it raises. Although this would obviously need testing before being deployed to a production server!
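A hedged sketch of that trigger idea, with a made-up table dbo.Orders and the rule Qty >= 0 standing in for the real constraint:
CREATE TRIGGER dbo.trg_Orders_QtyCheck
ON dbo.Orders
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT 1 FROM inserted WHERE Qty < 0)
    BEGIN
        DECLARE @msg nvarchar(2048);
        SELECT TOP (1) @msg = CONCAT('Qty check violated: OrderId=', OrderId, ', Qty=', Qty)
        FROM inserted
        WHERE Qty < 0;
        ROLLBACK TRANSACTION;
        RAISERROR(@msg, 16, 1);   -- the error message now names the offending values
    END
END;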
Personally, I would go with changing the error handling of the application. It is probably the less risky change.
PS The application that I helped write, and now spend my time supporting, logs quite a lot of data when an unhandled exception occurs. If it is during a save then our data access layer attaches the complete list of all commands that were being run as part of the save transaction including parameter values. This has proved to be invaluable on many occasions, including some when tracking down constraint violations.
In a stored proc, what I do to get better information about errors in a complex SP is take advantage of the fact that table variables are not affected by a rollback. So I put the information I want for troubleshooting into table variables at the time I create it, and then, if I hit the CATCH block and roll back, after the rollback I insert the data from the table variable into an exception table along with some metadata such as the datetime.
With some thought you can design an exception table that will capture what you need from just about any proc. For instance, you could concatenate all the input variables into one field, you could record the step number that failed (which of course means assigning step numbers to a variable), or you could log every step along the way so that the last one logged is the one it failed on. Believe me, when you are troubleshooting a 100-line SP, this can come in handy. If I have dynamic SQL in the proc, I can log the SQL variable that contains the dynamic code that was run.
The beauty of this is that you no longer have to reproduce the error: you know what the input parameters were and any other information you find useful. Yes, it can take a bit of time to set up once, but once you do, it is relatively easy to get in the habit of putting it into any complex proc whose errors you will want to log.
You might also want to set up a non-generalized one if you want to return specific data values of a SELECT used for an INSERT, or the result set of a SELECT that would tell you what would have been updated or deleted; then you would have that only if the proc failed. This is a little more work than the general exception table but may be needed in some complex cases.
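A minimal sketch of the pattern; the proc's parameters and the dbo.ExceptionLog table are made up for illustration:
CREATE PROCEDURE dbo.usp_Example
    @param1 int,
    @param2 nvarchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @debug TABLE (StepNo int, Info nvarchar(4000));

    BEGIN TRY
        BEGIN TRANSACTION;

        INSERT INTO @debug VALUES (1, CONCAT('@param1=', @param1, ', @param2=', @param2));
        -- ... the real work goes here, logging a row per step ...

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;   -- @debug keeps its rows through the rollback

        INSERT INTO dbo.ExceptionLog (LoggedAt, ErrorMessage, StepNo, Info)
        SELECT SYSDATETIME(), ERROR_MESSAGE(), StepNo, Info
        FROM @debug;
    END CATCH
END;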
I've got some EF/LINQ statements that I need to be case-insensitive text searches, but our oracle database is case sensitive. How can I execute the necessary ALTER SESSION statement at the connection/command level so that it will affect the subsequent same-context calls?
The command I think I need to run (from an OTN thread):
ALTER SESSION SET NLS_SORT=BINARY_CI
I'm aware of both Database.ExecuteSqlCommand and Database.Connection.CreateCommand as methods, but I can't figure out the 'when'. If I manually try to do this on the context after creation but before the LINQ query, I have to manually open and close the connection, and then it seems to run in a different transaction than the LINQ query and doesn't seem to apply.
Technically this is not a solution to your question of how to inject ALTER SESSION SET NLS_SORT=BINARY_CI into a query, but it may help with your case-insensitive search: just use .ToLower().
The first option would be to ask the DBA to add a login trigger to the DB account you are using. The disadvantage there is twofold: one, it will be set for every command, and two, the DBA will laugh at you for not simply doing the Oracle de facto standard of UPPER() on everything.
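For what it's worth, a sketch of that login-trigger option (the account name app_user is made up); note that NLS_COMP usually also needs to be LINGUISTIC before NLS_SORT affects ordinary equality comparisons:
CREATE OR REPLACE TRIGGER app_user_logon
AFTER LOGON ON app_user.SCHEMA
BEGIN
    EXECUTE IMMEDIATE 'ALTER SESSION SET NLS_COMP = LINGUISTIC';
    EXECUTE IMMEDIATE 'ALTER SESSION SET NLS_SORT = BINARY_CI';
END;
/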
These guys seem to have pulled it off using ExecuteStoreCommand on the context. I'm not a fan of EF, so I can't help much here, but I'd guess you'd need to execute your LINQ query inside that same context?
http://blogs.planetsoftware.com.au/paul/archive/2012/07/31/ef4-part-10-database-agnostic-linq-to-entities-part-2.aspx
You may be able to use one of the "executing" methods in the command interception feature in EF:
http://msdn.microsoft.com/en-us/data/dn469464#BuildingBlocks
Does SET IDENTITY_INSERT [Table] ON persist beyond the scope of a SQL Script? I'm wondering if I need to explicitly set it to "OFF" or if SQL Server knows that it should only use that setting for the current script.
Thanks!
Yes, it does persist beyond the current batch.
It doesn't, however, persist beyond the current session, so if you disconnect immediately after running it, you don't need to change it.
As an aside, it may only be ON for one table at a time per session and, as pointed out by Aaron in a comment below, it will throw an error if you try setting it for a second table in the same session.
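A quick illustration of the batch vs. session behavior (table names made up):
CREATE TABLE dbo.T1 (id int IDENTITY PRIMARY KEY, v int);
CREATE TABLE dbo.T2 (id int IDENTITY PRIMARY KEY, v int);

SET IDENTITY_INSERT dbo.T1 ON;
GO   -- a new batch, same session: the setting is still in effect

INSERT INTO dbo.T1 (id, v) VALUES (100, 1);   -- succeeds without re-enabling

SET IDENTITY_INSERT dbo.T2 ON;   -- fails with error 8107 while it is still ON for dbo.T1

SET IDENTITY_INSERT dbo.T1 OFF;  -- now dbo.T2 could be switched ON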