I am using an ORACLE db with SAS/connect.
I recently implemented a change in my libname statement (a week ago) in which I added the following (I don't know if this is related to the issue):
insertbuff=10000 updatebuff=10000 readbuff=10000
Starting yesterday, I have been getting an Oracle error when, after running
proc sql;
drop table oralib.mytable;
quit;
data oralib.mytable;
set work.mytable;
run;
I get the following error:
ERROR: ORACLE execute error: ORA-04031: unable to allocate 4160 bytes of shared memory ("shared pool","unknown object","sga heap(1,0)","modification ").
With the occurrence of the above ERROR, the error limit of 1 set by the ERRLIMIT= option has been reached. ROLLBACK has been issued (any rows processed after the last COMMIT are lost).
Total rows processed: 1001
Rows failed: 1
It seems to happen randomly on any table of any size. Sometimes it will go through; sometimes (most of the time) it won't. Is there a shared pool release I should do from SAS?
Thanks for your help!
The shared pool is a memory structure in Oracle which holds the following:
data dictionary cache
SQL query and PL/SQL function result caches
storage for recently executed code in its parsed form
It is possible to flush the shared pool, but this is not a good idea and I would not recommend it. What you have to do is size the shared pool of the database properly. Note that the shared pool is shared by the entire Oracle instance - it is not allocated per user. So, if there are other users of the database, they might be contributing to the problem. I doubt that any particular query is the cause; my guess is that the shared pool is simply undersized.
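For completeness, flushing is done with the statement below (it requires the ALTER SYSTEM privilege). As said, it only buys time until the pool fills up again, so treat it as an emergency measure rather than a fix:

```sql
-- Emergency only: empties the shared pool; requires ALTER SYSTEM privilege.
-- Expect a temporary performance dip while cached SQL is re-parsed.
ALTER SYSTEM FLUSH SHARED_POOL;
```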
In case you have some DBA privileges granted for your user, you can check the current shared pool size by running the following query:
SELECT * FROM v$sgainfo;
You can increase the size of the shared pool with the following statement:
ALTER SYSTEM SET SHARED_POOL_SIZE = 200M;
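Before resizing, it can also help to see how much of the shared pool is actually free. Assuming your user can read the V$ performance views, something like this shows the free memory in the pool:

```sql
-- How much of the shared pool is currently unused
SELECT pool, name, ROUND(bytes / 1024 / 1024) AS mb
FROM v$sgastat
WHERE pool = 'shared pool'
  AND name = 'free memory';
```

If free memory is consistently near zero under load, that supports the undersized-pool theory.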
Nevertheless, the best solution is to turn to the DBA managing the database (if there is one).
I'm not a SAS guy, so, I'll answer your question from the POV of an Oracle DBA, which is what I am.
ORA-04031 means you're running out of space in the shared pool. With a product like SAS, I'm sure they have a recommended minimum size for the shared pool. So, you should check the SAS installation documentation and confirm whether your database has a large enough shared pool size set. (Use show parameter shared_pool_size to see what size is set in your database.)
Second, I'm not familiar with the changes you made, and I'm not sure if that would have an effect on the shared pool utilization. Perhaps check the SAS documentation on that one.
Third, it could be an Oracle bug. You should check My Oracle Support for your version of Oracle, and do a search on ORA-04031, with those specific arguments you are seeing in your error message. If it's a known bug, there may be a patch already available.
If it's none of the above, you may need to open an SR with Oracle.
Hope that helps.
Related
I was trying to create a sequence in SQL Server 2012 using below T-SQL script:
Create Sequence ConteoCycling
AS int
START WITH 2
MINVALUE 0
MaxVALUE 4
INCREMENT BY 1
GO
I gave it a min (0) and max (4) range so that it recycles itself once the given range is consumed. After successful creation, I saw the following message in the Messages tab of the output window of SQL Server Management Studio. This message is the source of my confusion:
cache size is greater than the number of available values; the cache
size has been automatically set to accommodate the remaining sequence
values.
So the message says that it was able to set the cache size automatically. But when I went into the properties window of the sequence from Object Explorer, it didn't show any specific cache size; instead, the Default size cache option was selected, as shown in the snapshot below:
My question is:
If, based on the range I've selected for my sequence, the SQL Server engine is somehow able to work out the cache size required for running through this sequence, why does it not set that specific size?
Why does it still show the Default Size option selected? Is the cache size somehow dynamic when it is set to Default Size? It would be great if someone could enlighten me on what the SQL Server engine is doing behind the scenes with the sequence's cache size during its creation and recycling.
This message is my point of confusion :
cache size is greater than the number of available values; the cache
size has been automatically set to accommodate the remaining sequence values
The message (in italics) is not correct. See the Connect item raised for this; I am no longer able to reproduce the issue on SQL Server 2016 Developer Edition.
Relevant items from Connect item:
Thanks for reporting this. The behavior is overall correct. The message is meant as a hint that the cache size is greater than the number of remaining sequence values. In fact we are not storing a different value for the cache size than the one specified in the alter/create sequence statement.
The actual bug here is the message stating something else. We will fix this for a future release.
Posted by Jan [MSFT] on 4/27/2012 at 9:41 AM
Thanks again for reporting this issue. We have fixed the message and the fix will be available in a future update of SQL Server.
The message I get on SQL Server 2016 Developer Edition for your test scenario:
The sequence object 'ConteoCycling' cache size is greater than the number of available values.
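If you want to confirm that no different cache size was stored, you can check the catalog view yourself; sys.sequences exposes the stored cache settings:

```sql
-- cache_size is NULL when the default cache is in effect;
-- is_cached shows whether caching is enabled at all.
SELECT name, cache_size, is_cached
FROM sys.sequences
WHERE name = 'ConteoCycling';
```

A NULL cache_size here matches the Connect answer: the engine did not persist an adjusted value.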
When I execute a query for the first time in DBeaver it can take up to 10-15 seconds to display the result. In SQLDeveloper those queries only take a fraction of that time.
For example:
Simple "select column1 from table1" statement
DBeaver: 2006ms,
SQLDeveloper: 306ms
Example 2 (the other way around, so there's no server-side caching):
Simple "select column1 from table2" statement
SQLDeveloper: 252ms,
DBeaver: 1933ms
DBeaver's status box says:
Fetch resultset
Discover attribute column1
Find attribute column1
Late bind attribute column1
Steps 2, 3 and 4 use most of the query execution time.
I'm using Oracle 11g, SQLDeveloper 4.1.1.19 and DBeaver 3.5.8.
See http://dbeaver.jkiss.org/forum/viewtopic.php?f=2&t=1870
What could be the cause?
DBeaver looks up some metadata related to objects in your query.
On an Oracle DB, it queries catalog tables such as
SYS.ALL_ALL_TABLES / SYS.ALL_OBJECTS - only once after connection, for the first query you execute
SYS.ALL_TAB_COLS / SYS.ALL_INDEXES / SYS.ALL_CONSTRAINTS / ... - I believe each time you query a table not used before.
Version 3.6.10 introduced an option to enable/disable a hint used in those queries. Disabling the hint made a huge difference for me. The option is in the Oracle Properties tab of the connection edit dialog. Have a look at issue 360 on DBeaver's GitHub for more info.
The best way to get insight is to perform a database trace.
Run the query a few times first to eliminate caching effects.
Then repeat the following steps in both IDEs:
activate the trace
ALTER SESSION SET tracefile_identifier = test_IDE_xxxx;
alter session set events '10046 trace name context forever, level 12'; /* binds + waits */
Provide the xxxx part to identify the test; you will see this string as part of the trace file name.
Use level 12 to see the wait events and bind variables.
run the query
close the connection
This is important so that nothing else gets traced.
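To locate the resulting trace file, you can ask the database itself. Assuming Oracle 11g (as stated in the question), V$DIAG_INFO reports the trace file of the current session:

```sql
-- Run in the traced session before closing the connection;
-- returns the full path of this session's trace file (11g+).
SELECT value
FROM v$diag_info
WHERE name = 'Default Trace File';
```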
Examine the two trace files to see:
which statements were executed
how many rows were fetched
how much time elapsed in the database
the client (IDE) is responsible for the rest of the time
This should give you enough evidence to decide whether one IDE behaves differently from the other, or whether the DB statements issued are simply different.
We are getting the following error using Oracle:
[Oracle JDBC Driver]Application failover does not support non-single-SELECT statement
The error occurs when we try to make a delete or insert over a large number of rows (tens of millions of rows).
I know that the script works, because it had been working for almost a year before these error messages started to pop up.
We know that no one changed any database configuration, so we figured the problem must be the volume of processed data (the row count grows as time goes by...).
But we have never seen this kind of error before! What does it mean? It seems that a failover engine tries to recover from an error, but once Oracle is 'taken over' by this engine, it enters a more restricted state in which some kinds of queries do not work (like Windows Safe Mode...).
Well, if this is what is happening, how can I get the real error message? The one that triggered the failover mechanism?
BTW, below is one of the deletes that triggers the error:
delete from odf_ca_rnv_av_snapshot_week
(we tried this one just to test the simplest delete we could think of... a truncate won't help us with the real deal :) )
Check this link.
The error seems to come not from Oracle or the JDBC driver itself, but from the Progress DataDirect driver. It means the driver can only fail over transparently for SELECT statements, not for DML.
You'll have to figure out why the failover occurs in the first place.
I have a big database on an MSSQL server that contains data indexed by a web crawler.
Every day I want to update the SOLR search engine index using the DataImportHandler, which is located on another server in another network.
Solr DataImportHandler uses query to get data from SQL. For example this query
SELECT * FROM DB.Table WHERE DateModified > Config.LastUpdateDate
The ImportHandler does 8 selects of this type. Each select will get around 1000 rows from the database.
To connect to SQL Server I am using com.microsoft.sqlserver.jdbc.SQLServerDriver.
The parameters I can add for connection are:
responseBuffering="adaptive/all"
batchSize="integer"
So my questions are:
What can go wrong while running these queries every day? (except network errors)
I want to know how SQL Server works in this context.
Furthermore, I have to make a decision regarding how I will implement this import and how to handle errors, but first I need to know what errors can arise.
Thanks!
Later edit
My problem is that I don't know how these SQL queries can fail. When I call this importer every day, it runs 10 queries against the database. If the 5th query fails I have two options:
roll back the entire transaction and do it again, or commit the data I got from the first 4 queries and somehow redo queries 5 to 10. But if these queries always fail because of some other problem, I need to think of another way to import the data.
Can these SQL queries over the internet fail because of timeouts or something like that?
The only problem I identified after working with this type of import is:
Network problems - if the network connection fails, SOLR rolls back any changes and the commit doesn't take place. In my program I identify this as an error and don't log the changes in the database.
Thanks @GuidEmpty for providing his comment and clarifying this for me.
There could be issues with permissions (not sure if you control these).
Might be a good idea to catch exceptions you can think of and include a catch all (Exception exp).
Then take the overall one as a worst case and roll-back (where you can) and log the exception to include later on.
You don't say what types you are selecting either; keep in mind that text/blob columns can take a lot more space and could cause issues internally if you buffer any data, etc.
Though on a quick re-read, you don't need to roll back if you are only selecting.
I think you would be better having a think about what you are hoping to achieve and whether knowing all possible problems will help?
HTH
On our Oracle server (10g), we sometimes get an ORA-4030 error.
ORA-04030: out of process memory when trying to allocate nn bytes
We understand it is related to memory size settings, and we are trying some memory adjustments. Other than this, we wanted to know:
(1) whether any specific SQL query patterns can cause this kind of error
(2) whether any Oracle SQL query tuning can be applied to avoid it
Your replies will help.
Thanks in advance.
1) Sorts, DISTINCT, GROUP BY and hash joins are the operations most likely to give you this error!
2) What OS do you use? On Linux you can see the resource limits for your users with ulimit -a.
You should increase the memory available per process for the PGA.
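As a sketch of what that involves (the 2G value below is only an example; size it for your workload), you can check the current PGA target and usage, then raise the target:

```sql
-- Current PGA target and actual usage
SELECT name, value
FROM v$pgastat
WHERE name IN ('aggregate PGA target parameter',
               'total PGA allocated',
               'maximum PGA allocated');

-- Raise the target (requires ALTER SYSTEM privilege; example size)
ALTER SYSTEM SET pga_aggregate_target = 2G;
```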
Regards
One thing which could be contributing to the error is not freeing cursors. In .NET, a SQL statement corresponds to a DB cursor. Make sure that the applications close (and dispose of) the SQL statements they use.