Oracle simple SQL query result in: ORA-08103: object no longer exists - sql

Please help with a query on Oracle. I'm using SQL*Plus (the behaviour is the same in SQL Developer) to run a simple query like:
select count(*) from ARCHIT_D_CC where (TYP_ID=22 OR TYP_ID=23) and SUBTM like '%SEP%' and CONS=1234
This very simple query works perfectly until I run it on a big table that contains tons of data. After a few minutes I get:
ERROR at line 1: ORA-08103: object no longer exists
This is because the database is partitioned and, due to the large amount of data in the table, the Oracle BT mechanism rotates the table partitions before my query finishes. That's why I get the message.
Now, is there a way to avoid this error? Maybe specify the partition or something like that (see the sketch below). As already written, on other tables with less data it works perfectly.
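Something like the following, perhaps? Oracle accepts a partition-extended table name, but P_SEP is only a guess at a partition name; the real one would have to come from the table definition:
select count(*)
from ARCHIT_D_CC partition (P_SEP)  -- P_SEP is a hypothetical partition name
where (TYP_ID=22 or TYP_ID=23) and SUBTM like '%SEP%' and CONS=1234;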
Thanks
Lucas

Related

Executed SQL Query not showing in GV$SQL

I am running an SQL query from a Java application. The query runs successfully and fetches the data needed to perform the required action.
However, when I look for the SQL ID in GV$SQL and V$SQLAREA, the query does not show up. I have tried all combinations of my query's keywords in the LIKE clause.
SQL Query:
select * from GV$SQL where UPPER(sql_text) like UPPER('%{query part here}%');
This gives no results. Any suggestions or ideas on where to look for the SQL ID of my query?
By default SQL_TEXT only contains the first 1000 characters, so it's possible that the part of the query you are searching for lies past that point. You can guard against this by using the SQL_FULLTEXT column, which is a CLOB.
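A sketch of that approach, searching the full statement text instead ('{query part here}' is the same placeholder used in the question):
-- DBMS_LOB.INSTR searches the CLOB; a nonzero result means the text was found
select sql_id, child_number
from gv$sql
where dbms_lob.instr(sql_fulltext, '{query part here}') > 0;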
There is also a chance that the query has been aged out of the shared pool and is thus no longer visible there. You can also query V$SQLSTATS, which typically has a longer retention period.
Also, double-check that something else is not perturbing your SQL on the way into the database, e.g. cursor sharing: if you are searching for a literal, it may have been stripped from the SQL.
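To illustrate the cursor-sharing point (the EMP table here is just a made-up example): with CURSOR_SHARING=FORCE, a statement such as
select * from emp where deptno = 10
ends up in the shared pool with the literal replaced by a system-generated bind, roughly
select * from emp where deptno = :"SYS_B_0"
so a LIKE search for the literal 10 will never match.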

Out of Memory error while creating table but SELECT works

I'm trying to CREATE a table using CREATE TABLE AS, which gives the following error:
[Amazon](500310) Invalid operation: Out Of Memory:
Details:
-----------------------------------------------
error: Out Of Memory:
code: 1004
context: alloc(524288,MtPool)
query: 6041453
location: alloc.cpp:405
process: query2_98_6041453 [pid=8414]
-----------------------------------------------;
I'm getting this error every time I execute the query, but executing just the SELECT part (without the CREATE TABLE AS) works fine. The result has around 38k rows. However, I see a drastic difference in bytes returned by the sequential scan on one table between the two plans:
[Query plan screenshots omitted: one for the plain SELECT, one for CREATE TABLE AS SELECT.]
I fail to understand why there is so much difference between these two scenarios and what can be done to mitigate it. I also tried to create a TEMP TABLE, but that results in the same memory error.
I'm not great at understanding query plans (I never found a detailed guide to them for Redshift, so a link to some resource would be a bonus).
Update: I also tried creating the table first and then INSERTing the data using SELECT; that gives the same error.
Update 2: I tried set wlm_query_slot_count to 40; and even 50, but still got the same error.
We ran into a similar issue after our clusters got updated to the latest release (1.0.10694).
Two things that helped:
1. Changing your WLM to allocate more memory to your query (in our case, we switched to WLM Auto).
2. Allocating a higher query slot count to your query, e.g. set wlm_query_slot_count to 2; to allocate 2 query slots.
We suspect that AWS may have changed something about memory management in the most recent updates. I'll update this answer once we hear back.
As a workaround, you could try inserting the records in batches.
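A minimal sketch of that workaround, assuming a numeric id column exists to split the load on (the table and column names here are made up):
-- load the rows in ranges instead of all at once
insert into target_table select * from source_table where id between 1 and 100000;
insert into target_table select * from source_table where id between 100001 and 200000;
-- ...and so on for the remaining ranges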
Solved this by implementing manual WLM.

SQL multiple tables - very slow

I am trying to speed up a SQL Server report that pulls data from the IBM OS/400 operating system for my sales department.
A colleague of mine (who has since left the company) built this report and used a ton of subselects.
The report usually takes about 30 minutes to process and often simply fails to display. I already tried to cut out some tables/rows in hopes of speeding up the process, without success (all of it is needed by the sales department).
It works over all relevant data (orders, customers, articles, our order at the manufacturer, the manufacturer, and so on). Any ideas?
I can't add indexes because of the OS/400 system; I guess that would be a new programming task for our contractor, which leads to costs.
Can I use some clever joins, or somehow reduce the number of subselects?
Are you using four-part names in your query? That's probably your problem. Compare these two statements, run from SQL Server:
-- Pull all rows from the table(s) back to MS SQL server and do the where locally on the MS SQL server
select * from LINKEDSVR.MYIBMI.MYLIB.MYTBL where locnbr = '00335';
-- Sends the statement to IBM i server for processing, only results are returned..
select * from openquery(LINKEDSVR, 'select * from MYTBL where locnbr = ''00335''');
Try running the subselects first, sending the output of each to its own table.
Update statistics on those tables. Then run the rest of the SQL, replacing what were originally subselects with the tables created in the first step.
Treat multiple layers of nesting the same way: each layer gets its own insert into another table.
I've found that query optimizers have a hard time with complex SQL. Breaking out the subqueries into separate steps often resolves this.
Between runs, my preference is to leave the data intact as a reference in case debugging is needed, then truncate the tables as the first step of the next run.
Responding to eraser's comments
Assuming your original query takes this general form:
select [columns] from
( -- subquery
  select [columns] from TableA
) as SubQuery, TableB
where mainquery_where_clause
Re-write:
-- Create a table to hold the results of your subquery:
create table SubQTable ( [columns] ) ;
-- Run the subquery, sending its output to the new table:
insert into SubQTable select [columns] from TableA ;
-- Update the data distribution statistics:
update statistics SubQTable ;
-- Now run the re-written main query against the pre-built table:
select [columns]
from SubQTable, TableB
where SubQTable.joincol = TableB.joincol
and mainquery_where_clause ;
I noticed some syntax issues with the SQL you posted; it looks like something got left out. But the principle of my answer remains the same. Please note that applying my suggestion may not help, as there are potentially many variables in your scenario; you mentioned subqueries, so I chose to address that.
Halfer's suggestion is a great one: edit your original question to add the SQL code, formatting it with the "{}" button supplied by the text editing tool.
I strongly suggest that you obtain the SQL execution plan and post the results.
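If you are not sure how to capture a plan, here is one minimal way from SQL Server itself (SSMS also has an "Include Actual Execution Plan" button); this returns the estimated plan instead of executing the query:
SET SHOWPLAN_TEXT ON;
GO
-- run the report query here; SQL Server prints the plan rather than the results
SET SHOWPLAN_TEXT OFF;
GO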

BigQuery data using SQL "INSERT INTO" is gone after some time

Today I noticed another strange behaviour of BigQuery.
I ran a standard SQL statement with a UDF in the BigQuery web UI:
CREATE TEMPORARY FUNCTION ...
INSERT INTO projectid.dataset.inserttable...
All seems good; the results of the UDF SQL are inserted into the target table correctly, as far as I can tell from "Number of rows". But the table size is not correct; it still shows the size from before the insert query ran. Furthermore, all the inserted rows seem to be gone an hour later.
Some more info: when I run a DELETE FROM inserttable WHERE true or a SELECT ..., the number of deleted rows and the table size do match the inserted data. I just cannot preview the table correctly in the web UI.
So I am guessing the "Details" or "Preview" info for the table is delayed? Do you have any idea about this behaviour?
The preview may have a delay, so SELECT * FROM YourTable; will give the most up-to-date results, or you can use COUNT(*) just to verify that the number of rows is correct. You can think of it as similar to streaming, if you have tried that, where some rows may sit in the streaming buffer for a while before they make it into regular storage.
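A quick verification sketch, using the table path from the question:
-- standard SQL; counts committed rows regardless of what the preview shows
SELECT COUNT(*) FROM `projectid.dataset.inserttable`;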

How can I stop my SQL from returning empty data results?

I usually use Toad to work with my Oracle databases, but I even tried SQL Manager for this one and it still would not work. I have a table with a few hundred records, and even running a simple
SELECT * FROM customer
will not work. There are no errors, and the data grid that displays pulls all the correct column names, but no records are shown. What could be causing this?
Does your login schema own the table? If not, verify that any synonym is actually pointing at the object you think it is. Prefix the table name with its owning schema to rule out any conflicts.
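A sketch of both checks; APPSCHEMA is a placeholder for whatever schema actually owns the table:
-- query the table through its owning schema explicitly
SELECT COUNT(*) FROM APPSCHEMA.customer;
-- see where a synonym named CUSTOMER really points
SELECT owner, table_owner, table_name
FROM all_synonyms
WHERE synonym_name = 'CUSTOMER';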