ORA-30009: Not enough memory for CONNECT BY operation

I have a Linux VM (with CentOS 64 bit) where I installed Oracle Database 11g Express Edition. The database is running, I can use sqlplus and I can create tables and stuff. However, when I run a certain SQL script which inserts a huge amount of random data (~2 million rows) I get this error:
ORA-30009: Not enough memory for CONNECT BY operation
I already tried to increase PGA_AGGREGATE_TARGET with the command below. As far as I have read, this should solve the problem (but it doesn't) since it increases the available memory.
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 40M scope = both;
However, the problem is that I cannot set PGA_AGGREGATE_TARGET higher than ~40M (which seems to be insufficient). If I try to set it to 100M or more, I get another error:
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-47500: XE edition memory parameter invalid or not specified
Any idea how I can solve this problem? It is perfectly fine for me to re-install the whole database or whatever.
PS.: I asked the same question on dba.stackexchange.com since I wasn't sure whether it is programming or database administration.

You are using Oracle 11g Express Edition, which has hard limits: the database cannot grow beyond 11 GB and total memory (SGA + PGA) cannot exceed 1 GB. You are running into that memory limit. If you cannot expand the PGA, decrease the SGA size first and then try again; that should let you succeed.
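A rough sketch of that suggestion, assuming the instance is managed with SGA_TARGET / PGA_AGGREGATE_TARGET rather than MEMORY_TARGET (the values below are only illustrative; keep SGA + PGA under the 1 GB XE limit):
ALTER SYSTEM SET SGA_TARGET = 512M SCOPE = SPFILE;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 400M SCOPE = SPFILE;
-- Restart the instance so the spfile changes take effect
SHUTDOWN IMMEDIATE
STARTUP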

Related

Undo tablespace is growing after oracle version updated from 11.2.0.1 to 11.2.0.4

I have recently upgraded the Oracle database from 11.2.0.1 to 11.2.0.4 in a RAC environment. The installation was successful and I checked the logs on both instances. Services were fine until ORA-30036 occurred due to the undo tablespace. I have the following in the spfile:
*undo_retention =108000
node11.undo_tablespace='UNDOTBS1'
node12.undo_tablespace='UNDOTBS2'
I have the following long-running queries:
1. EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS()
2. #This query is for AWR auto workload repository
select result_key_value from mgmt_policy_assoc_eval_details........
( select column_value from table cast ...........)
Undo tablespace 2 is growing at a rate of 1700 MB/hour. I am afraid that I'll run short of space. I have the following questions in mind:
Q1. What is the maximum undo tablespace size for Oracle 11.2.0.4 running in RAC?
Q2. Why is the retention period set to 30 hours?
Q3. Why is tablespace 2 growing so fast?
Q4. What could be a possible workaround?
It is important to highlight that we resumed services after a few days due to the planned upgrade. I have also stopped the Enterprise Manager console, as it was eating a lot of space. SYSAUX is at 99%.
Q1. What is the maximum undo tablespace size for Oracle 11.2.0.4 running in RAC?
Please refer to https://docs.oracle.com/cloud/latest/db112/REFRN/limits002.htm#REFRN0042 for details.
Q2. Why is the retention period set to 30 hours?
Was the parameter value ever changed before the upgrade? It is configured manually.
Q3. Why is tablespace 2 growing so fast?
Check whether there is a long-running transaction in gv$transaction or x$ktuxe that is consuming too many undo blocks (a sample query is shown after Q4). If not, the undo retention may need to be reduced.
Q4. What could be a possible workaround?
My advice is not to leave the undo datafile set to autoextend, to avoid running the disk out of space. Also refer to Q3; I think the business workload has also changed since the upgrade.
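For the Q3 check, a query along these lines (a sketch; adjust for your version) lists the transactions currently holding the most undo blocks:
-- Transactions ordered by undo blocks consumed (USED_UBLK)
SELECT s.inst_id, s.sid, s.serial#, s.username, t.used_ublk, t.start_time
FROM   gv$transaction t
JOIN   gv$session s ON s.taddr = t.addr AND s.inst_id = t.inst_id
ORDER  BY t.used_ublk DESC;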

System.OutOfMemoryException when querying large SQL table

I've written a SQL query that looks like this:
SELECT * FROM MY_TABLE WHERE ID=123456789;
When I run it in the Query Analyzer in SQL Server Management Studio, the query never returns; instead, after about ten minutes, I get the following error: System.OutOfMemoryException
My server is Microsoft SQL Server (not sure what version).
SELECT * FROM MY_TABLE; -- returns 44258086 rows
SELECT * FROM MY_TABLE WHERE ID=123456789; -- returns 5 rows
The table has over forty million rows! However, I need to fetch five specific rows!
How can I work around this frustrating error?
Edit: The server suddenly started working fine for no discernible reason, but I'll leave this question open for anyone who wants to suggest troubleshooting steps for anyone else with this problem.
According to http://support.microsoft.com/kb/2874903:
This issue occurs because SSMS has insufficient memory to allocate for
large results.
Note SSMS is a 32-bit process. Therefore, it is limited to 2 GB of
memory.
The article suggests trying one of the following:
Output the results as text
Output the results to a file
Use sqlcmd
You may also want to check the server to see if it needs a service restart; perhaps it has gobbled up all the available memory?
Another suggestion would be to select a smaller subset of columns (if the table has many columns or includes large blob columns).
If you need specific data, use an appropriate WHERE clause. Add more details if you are stuck with this.
Alternatively, write a small application that reads the results through a cursor and does not try to load them completely into memory.
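As a sketch of the sqlcmd / output-to-file suggestions (server, database, and file names are placeholders), this streams the result straight to disk instead of into the 32-bit SSMS process:
sqlcmd -S MyServer -d MyDatabase -E -Q "SELECT * FROM MY_TABLE WHERE ID=123456789;" -o results.txt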

Is it possible for SQL Server to report an incorrect number of rows affected?

I have a utility which:
grabs sql commands from a flat text file
executes them sequentially against SQL Server located on the same machine
and reports an error if an UPDATE command affects ZERO ROWS (there should never be an update command in this file that doesn't affect a record, hence it being recorded as an error)
The utility also logs any failed commands.
Yet the final data in the database seems to be wrong/stale, even though my utility is reporting no failed updates and no failed commands.
I know the first and most obvious culprit is some kind of logic or runtime error in my programming of the utility itself, but I just need to know if it's THEORETICALLY possible for SQL Server to report that at least one row was affected and yet not apply the change.
If it helps, the utility always seems to correctly execute the same number of commands, and the final stale/wrong data is always the same; i.e., it seems to correctly execute a certain number of commands against the database and then fail.
Thanks.
EDIT:
I should also note that this utility is exhibiting this behavior across 4 different production servers each with their own dedicated local database server, and that these are beefy machines with 8-16 GB RAM each that are managed by a professional sysadmin.
Based on what you say...
It is possible for the "xx rows affected" count to be misleading if you have a trigger firing; you may be reading the count from the trigger. If so, add SET NOCOUNT ON to the trigger.
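A minimal sketch of that fix (the table and trigger names here are made up for illustration):
CREATE TRIGGER trg_MyTable_Audit ON MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;  -- keep the trigger's own statements from adding to the reported row count
    -- ... existing trigger logic ...
END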
Alternatively, the new values may be the same as the existing data, so you are actually doing a dummy update with the same values; add a WHERE clause that tests for differences. For example, the rows-affected count is reported even when the update is later rolled back:
BEGIN TRANSACTION
UPDATE MyTable
SET Message = ''
WHERE ID = 2
ROLLBACK TRANSACTION
Messages:
(1 row(s) affected)

Retrieving billions of rows from remote server?

I am trying to retrieve around 200 billion rows from a remote SQL Server. To optimize this, I have limited my query to use only an indexed column as a filter and am selecting only a subset of columns to make the query look like this:
SELECT ColA, ColB, ColC FROM <Database> WHERE RecordDate BETWEEN '' AND ''
But it looks like unless I limit my query to a time window of a few hours, the query fails in all cases with the following error:
OLE DB provider "SQLNCLI10" for linked server "<>" returned message "Query timeout expired".
Msg 7399, Level 16, State 1, Server M<, Line 1
The OLE DB provider "SQLNCLI10" for linked server "<>" reported an error. Execution terminated by the provider because a resource limit was reached.
Msg 7421, Level 16, State 2, Server <>, Line 1
Cannot fetch the rowset from OLE DB provider "SQLNCLI10" for linked server "<>".
The timeout is probably an issue because of the time it takes to execute the query plan. As I do not have control over the server, I was wondering if there is a good way of retrieving this data beyond the simple SELECT I am using. Are there any SQL Server specific tricks that I can use? Perhaps tell the remote server to paginate the data instead of issuing multiple queries or something else? Any suggestions on how I could improve this?
This is more of the kind of job SSIS is suited for. Even a simple flow like ReadFromOleDbSource->WriteToOleDbSource would handle this, creating the necessary batching for you.
Why read 200 billion rows all at once?
You should page them, reading, say, a few thousand rows at a time.
Even if you genuinely need to read all 200 billion rows, you should still consider using paging to break the read into shorter queries - that way, if a failure happens, you can just continue reading where you left off.
See "Efficient way to implement paging" for at least one method of implementing paging using ROW_NUMBER.
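A sketch of that approach using the column names from the question (the table name and the @-variables are placeholders you would substitute or declare yourself):
WITH Numbered AS (
    SELECT ColA, ColB, ColC,
           ROW_NUMBER() OVER (ORDER BY RecordDate) AS rn
    FROM   MyRemoteTable
    WHERE  RecordDate BETWEEN @StartDate AND @EndDate
)
SELECT ColA, ColB, ColC
FROM   Numbered
WHERE  rn BETWEEN @PageStart AND @PageStart + @PageSize - 1;   -- fetch one page per round trip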
If you are doing data analysis then I suspect you are either using the wrong storage (SQL Server isn't really designed for processing of large data sets), or you need to alter your queries so that the analysis is done on the Server using SQL.
Update: I think the last paragraph was somewhat misinterpreted.
Storage in SQL Server is primarily designed for online transaction processing (OLTP) - efficient querying of massive datasets in massively concurrent environments (for example reading / updating a single customer record in a database of billions, at the same time that thousands of other users are doing the same for other records). Typically the goal is to minimise the amount of data read, reducing the amount of IO needed and also reducing contention.
The analysis you are talking about is almost the exact opposite of this - a single client actively trying to read pretty much all records in order to perform some statistical analysis.
Yes SQL Server will manage this, but you have to bear in mind that it is optimised for a completely different scenario. For example data is read from disk a page (8 KB) at a time, despite the fact that your statistical processing is probably only based on 2 or 3 columns. Depending on row density and column width you may only be using a tiny fraction of the data stored on an 8 KB page - most of the data that SQL Server had to read and allocate memory for wasn't even used. (Remember that SQL Server also had to lock that page to prevent other users from messing with the data while it was being read).
If you are serious about processing / analysis of massive datasets then there are storage formats that are optimised for exactly this sort of thing - SQL Server also has an add on service called Microsoft Analysis Services that adds additional online analytical processing (OLAP) and data mining capabilities, using storage modes more suited to this sort of processing.
Personally I would use a data extraction tool such as BCP to get the data to a local file before trying to manipulate it if I was trying to pull that much data at once.
http://msdn.microsoft.com/en-us/library/ms162802.aspx
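As a sketch (server, database, table, and file names below are placeholders), a BCP query-out export streams the rows to a flat file without holding them in memory:
bcp "SELECT ColA, ColB, ColC FROM MyDatabase.dbo.MyRemoteTable WHERE RecordDate BETWEEN '2012-01-01' AND '2012-01-31'" queryout extract.dat -S RemoteServer -T -c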
This isn't a SQL Server-specific answer, but even when the RDBMS supports server-side cursors, it's considered poor form to use them. Doing so means you are consuming resources on the server while it waits for you to request more data.
Instead you should reformulate your query usage so that the server can transmit the entire result set as soon as it can, and then completely forget about you and your query to make way for the next one. When the result set is too large for you to process all in one go, keep track of the last row returned by the current batch so that you can fetch the next batch starting at that position.
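A sketch of that batching pattern, assuming the indexed RecordDate column from the question (or any other indexed, monotonically increasing key) can anchor each batch:
-- Fetch the next batch strictly after the last row already retrieved
SELECT TOP (10000) ColA, ColB, ColC, RecordDate
FROM   MyRemoteTable
WHERE  RecordDate > @LastRecordDateSeen
ORDER  BY RecordDate;
-- After processing the batch, set @LastRecordDateSeen to the largest RecordDate returned and repeat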
Odds are the remote server has the "Remote Query Timeout" set. How long does it take for the query to fail?
I just ran into the same problem; I also got the message about 10 minutes after running the query.
Check this link. There's a remote query timeout setting under Connections that is set to 600 seconds by default, and you need to change it to zero (unlimited) or another value you think is right.
Try changing the remote query timeout property.
To do that, open SSMS, connect to the server, right-click the server's name in Object Explorer, select Properties -> Connections, and change the value in the Remote query timeout (in seconds, 0 = no timeout) text box.
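The same setting can also be changed with T-SQL; sp_configure matches the option by name, and 0 means no timeout:
EXEC sp_configure 'remote query timeout', 0;
RECONFIGURE;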

Suspended status in SQL Activity Monitor

What would cause a query being done in Management Studio to get suspended?
I perform a simple select top 60000 from a table (which has 11 million rows) and the results come back within a sec or two.
I change the query to top 70000 and the results take up to 40 min.
From doing a bit of searching on another but related issue I came across someone using DBCC FREEPROCCACHE to fix it.
I run DBCC FREEPROCCACHE and then redo the query for 70000, and it seemed to work.
However, the issue still occurs with a different query.
If I increase to, say, 90000, or if I try to open the table using [Right-click -> Open Table], it pulls about 8000 records and stops.
Checking the activity monitor when I do the Open Table shows the session has been suspended with a wait type of "Async_Network_IO". For the session running the select of 90000, the status is "Sleeping"; this is the same status for the above select 70000 query, which did return, but in 45 minutes. It is strange to me that the status shows "Sleeping" and does not appear to be changing to "Runnable" (I have the activity monitor refreshing every 30 seconds).
Additional notes:
I am not running both the Open Table and select 90000 at the same time. All queries are done one at a time.
I am running 32-bit SQL Server 2005 SP2 CU9. I tried upgrading to SP3 but ran into install failures. The issue was occurring prior to me trying this upgrade.
The server setup is an Active/Active cluster; the issue occurs on either node, and the other instance does not have this issue.
I have ~20 other databases on this same server instance, but only this one DB is seeing the issue.
This database gets fairly large. It is currently at 76756.19MB. Data file is 11,513MB.
I am logged in locally on the Server box using Remote Desktop.
The wait type "Async_Network_IO" means that it's waiting for the client to retrieve the result set because SQL Server's network buffer is full. Why your client isn't picking up the data in a timely manner I can't say.
The other case it can happen is with linked servers when SQL Server is querying a remote table, in this case SQL Server is waiting for the remote server to respond.
Something worth looking at is virus scanners; if they are monitoring network connections they can sometimes lag, which is often apparent from them hogging all the CPU.
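If you want to see what a session is waiting on without relying on Activity Monitor, a query against the dynamic management views (available in SQL Server 2005 and later) such as this sketch can help:
SELECT r.session_id, r.status, r.wait_type, r.wait_time, r.last_wait_type, t.text
FROM   sys.dm_exec_requests r
CROSS  APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE  r.session_id > 50;   -- filter out most system sessions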
Suspended means it is waiting on a resource and will resume when it gets its resource. Judging from the sizes you are pulling back, it seems you are in an OLAP type of query.
Try the following things:
Use NOLOCK or set the TRANSACTION ISOLATION LEVEL at the top of the query (see the sketch after this list)
Check your execution plan and tune the query to be more efficient
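A minimal sketch of the first suggestion (the table name is a placeholder; note that NOLOCK / READ UNCOMMITTED accepts dirty reads, so only use it if that is acceptable):
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT TOP 90000 * FROM MyLargeTable;
-- or, equivalently for a single table:
SELECT TOP 90000 * FROM MyLargeTable WITH (NOLOCK);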