Oracle output results of query to file - sql

Currently I am running multiple queries whose output is spooled to a file, and this can be a lengthy process. Below are my current settings in SQL*Plus:
set feedback off
set heading off
set echo off
set termout OFF
set linesize 150
set long 1999999
set longchunk 1999999
set pagesize 0
spool results.sql
@queries.sql
spool off
set termout on
set echo on
set heading on
set feedback on
I was wondering if there is any way I can speed this process up? Or is there a faster way of sending the output of the queries to a file using SQL*Plus?
Thanks

Clarifications
Can you please clarify the following
Is the SQL*Plus client on the same physical box (or virtual machine) as your database (more specifically, your ORA files and datafiles)?
Can you test your query after decreasing the size of the result set using FETCH FIRST n ROWS ONLY?
Time Complexity and Specifics
Why do you believe the bottleneck is spooling? Before jumping to any conclusions about the causes of poor performance, you may want to try to gather some timing statistics on your query. I would modify your query to return only a subset of rows:
SELECT * FROM scott.dept FETCH FIRST 10000 ROWS ONLY;
Optimizing your queries depends heavily on how your data is structured and how your routines run, so in order to optimize you need to be able to benchmark your performance changes.
You will then want to check the following two parameters:
SHOW PARAMETER statistics_level
SHOW PARAMETER timed_statistics
And you will also want to time your queries
SET TIMING ON
Remember, all of this is simply to benchmark performance; you will need to revert these settings in production.
Keep in mind that running the same query multiple times will give inconsistent performance metrics because of pooling / buffering / caching / etc. You should check V$STATISTICS_LEVEL and investigate counters such as the following (see the sketch after this list):
Rows processed
Sorts (memory)
Sorts (disk)
Physical reads
Consistent gets
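A minimal SQL*Plus sketch for surfacing those counters, assuming your user has the autotrace (PLUSTRACE) privileges; the row limit is only an example:
SET AUTOTRACE TRACEONLY STATISTICS
SELECT * FROM scott.dept FETCH FIRST 10000 ROWS ONLY;
SET AUTOTRACE OFF
The statistics section of the autotrace output reports rows processed, sorts (memory/disk), physical reads and consistent gets for that execution.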
I would SHUTDOWN IMMEDIATE, STARTUP the database again, and then run an explain on your query multiple times.
EXPLAIN PLAN FOR SELECT * FROM SCOTT.DEPT;
Check if the explain plan is changing on your subsequent calls.
Additionally, in certain circumstances you can encourage caching of a table by running
SELECT /*+ CACHE(dept) */ * FROM scott.dept;
and then comparing it to the performance metrics of
SELECT /*+ NOCACHE(dept) */ * FROM scott.dept;
You may also want to keep the table in memory indefinitely:
ALTER TABLE scott.dept STORAGE (BUFFER_POOL KEEP);
If you don't see anything glaring, then run an explain plan for your query. Run the explain multiple times in succession to see if anything changes.
Beyond that, remember that running your query multiple times without flushing your buffers / clearing the cache is going to give you unreliable readings. If you aren't sure that spooling is the bottleneck, that is another reason to run an explain plan for your query.
There are also some tricks for increasing performance over time (assuming this routine is run regularly). If you run ALTER TABLE table_name CACHE, execute the query once without spooling to a file, then run ALTER TABLE table_name NOCACHE and spool on the second execution, you may get a more performant run; whether this is feasible depends entirely on your use case.
Settings
While I doubt your issue here is going to be solved by any combination of settings, below are a few that may provide marginal benefits:
1. set trimspool on
Removes the trailing whitespace that would otherwise pad each spooled line out to linesize.
2. set echo off
This may be pointless depending on whether your routine is fully automated or interactive.
3. set verify off
This stops substitution-variable old/new lines being written to your spool, and can speed up your process considerably... again, depending on your data.
4. set autoprint off
This is another variable-related setting. Off is the default, but for assurance I would set it explicitly.
5. set serveroutput off
6. set trimout on
Removes the trailing whitespace that pads terminal output lines out to linesize.
7. set arraysize N
You are definitely going to want to tinker with this one. N is the number of rows fetched at a time, and an appropriate value depends on the structure of your table.
While these settings may increase the speed of your routines, performance is generally specific to your architecture and data set.
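Putting those together, a rough sketch of your spool script with the suggested settings added (the arraysize of 500 is only a starting point to experiment with):
set feedback off
set heading off
set echo off
set termout off
set verify off
set trimspool on
set trimout on
set pagesize 0
set linesize 150
set long 1999999
set longchunksize 1999999
set arraysize 500
spool results.sql
@queries.sql
spool off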

Related

limit query time in the client side of PostgreSQL

I'm trying to query the PostgreSQL database, but this is a public server and I really don't want to waste a lot of CPU for a long time.
So I wonder if there is some way to limit my query to a specific duration, for example 3/5/10 minutes.
I assume there is syntax like LIMIT, but for query duration rather than for the number of results.
Thanks for any kind of help.
Set statement_timeout, then your query will be terminated with an error if it exceeds that limit.
According to this documentation:
statement_timeout (integer)
Abort any statement that takes more than
the specified number of milliseconds, starting from the time the
command arrives at the server from the client. If
log_min_error_statement is set to ERROR or lower, the statement that
timed out will also be logged. A value of zero (the default) turns
this off.
Setting statement_timeout in postgresql.conf is not recommended
because it would affect all sessions.
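For example, a minimal sketch of setting it for the current session only (the five-minute value is purely illustrative):
SET statement_timeout = '5min';  -- or 300000, in milliseconds
-- run the long query here; it is cancelled with an error if it exceeds the limit
RESET statement_timeout;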
Here is a potential catch with using LIMIT to control how long a query might run. Rightfully, you ought to also be using ORDER BY with your query, to tell Postgres how exactly it should limit the size of the result set. But the caveat here is that Postgres would typically have to materialize the entire result set when using LIMIT with ORDER BY, and then also possibly sort, which might take longer than just reading in the entire result set.
One workaround to this might be to just use LIMIT without ORDER BY. If the execution plan does not include reading the entire table and sorting, it might be one way to do what you want. However, keep in mind that if you go this route, Postgres would have free license to return any records from the table it wishes, in any order. This is probably not what you want from a business or reporting point of view.
But a much better approach here would be to just tune your query using things like indexes, making it fast enough that you don't need to resort to a LIMIT trick.

Azure SQL server deletes

I have a SQL Server table with 16,130,000 rows. I need to delete around 20% of them. When I do a simple:
delete from items where jobid=12
it takes forever.
I stopped the query after 11 minutes. Selecting data is pretty fast, so why is delete so slow? Selecting 850,000 rows takes around 30 seconds.
Is it because of table locks? And can you do anything about it? I would expect deleting rows to be faster, because you don't transfer data over the wire.
Best R, Thomas
Without knowing what reservation size you are using, it is hard to say whether X records in Y seconds is expected or not. I can tell you how the system works, however, so that you can make this determination yourself with a bit more investigation. The log commit rate is limited by the reservation size you purchase. Deletes are fundamentally limited by the ability to write out log records (and replicate them to multiple machines in case your main machine dies). When you select records, you don't have to go over the network to N machines, and you may not even need to go to the local disk if the records are already in memory, so selects are generally expected to be faster than inserts/updates/deletes, which have to harden the log.
You can read about the specific limits for the different reservation sizes here:
DTU Limits and vCore Limits
One common problem customers hit is doing individual operations in a loop (like a cursor, or driven from the client). This means each statement updates a single row and thus has to harden each log record serially, because the app has to wait for the statement to return before submitting the next one. You are not hitting that, since you are running a big delete as a single statement. That could be slow for other reasons, such as:
Locking - if you have other users doing operations on the table, they could block the progress of the delete statement. You can potentially see this by looking at sys.dm_exec_requests to check whether your statement is blocked on other locks (see the sketch after this list).
Query plan choice - if you have to scan a lot of rows to delete a small fraction, you could be bottlenecked on the IO needed to find them. Looking at the query plan shape will help here, as will SET STATISTICS TIME ON (I suggest you change the query to a TOP 100 or similar to get a sense of whether you are doing lots of logical read IOs vs. actual logical writes). This could imply that your on-disk layout is suboptimal for this problem. The general solutions would be either to pick a better indexing strategy or to use partitioning so you can quickly drop groups of rows instead of having to delete every row explicitly.
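A quick sketch of the blocking check mentioned above (run it from a second session while the delete is executing):
select session_id, status, command, wait_type, blocking_session_id
from sys.dm_exec_requests
where blocking_session_id <> 0;
Any row returned means that session is currently waiting on a lock held by blocking_session_id.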
Try to use batching techniques to improve performance, minimize log usage and avoid consuming database space.
declare
@batch_size int,
@del_rowcount int = 1
set @batch_size = 100
set nocount on;
while @del_rowcount > 0
begin
begin tran
delete top (@batch_size)
from dbo.LargeDeleteTest
set @del_rowcount = @@rowcount
print 'Delete row count: ' + cast(@del_rowcount as nvarchar(32))
commit tran
end
Dropping any foreign keys, deleting the rows, and then recreating the foreign keys can also speed things up.
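A rough sketch of that approach; the constraint, table and column names here are placeholders:
-- drop the referencing constraint before the big delete
alter table dbo.OrderLines drop constraint FK_OrderLines_Items;
-- ... run the (batched) delete ...
-- recreate the constraint afterwards
alter table dbo.OrderLines add constraint FK_OrderLines_Items
foreign key (ItemId) references dbo.Items (Id);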

Oracle Sequences , altering and viewing

I want to update the cache size of an existing sequence, and I want to describe a sequence in Oracle like a table. How do I do that?
Also, what are the drawbacks of increasing the cache value of a sequence?
Alter sequence seq_name cache 20;
See the docs.
To get the DDL you may use the DBMS_METADATA package, which can be used for any object:
select dbms_metadata.get_ddl('SEQUENCE','SEQ_NAME') from dual;
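If you only want to "describe" the sequence's attributes (including its current cache size) rather than get the full DDL, a quick sketch against the data dictionary (the sequence name is a placeholder):
select sequence_name, min_value, max_value, increment_by, cycle_flag, cache_size, last_number
from user_sequences
where sequence_name = 'SEQ_NAME';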
Increasing the cache size is useful when you have massive fetches from the sequence. It has no real drawback provided you actually use the cached values.
But if you generate 1 million values at a time and use only 10, maybe it is not such a good idea, because 999,990 values are lost; the next session will generate another 1,000,000 values.
The engine does work to generate and allocate those values for your session.
For example, in my opinion, a cache around 10 times smaller than what you normally use in a session is fine.
UPDATE: Adding David Aldridge's comment:
The usefulness of a large cache is really related to the rate at which it is used in general, so not just for large selects but also for systems with many sessions each using one value at a time. As background, the performance problem with a small cache is caused by the need for the SEQ$ system table to be modified whenever the cache is exhausted. It's a small operation, but obviously you don't want to be doing it 100 times a second.
So, by increasing the cache you'll have fewer concurrent sessions contending for the same resource.

Running Updates on a large, heavily used table

I have a large table (~170 million rows, 2 nvarchar and 7 int columns) in SQL Server 2005 that is constantly being inserted into. Everything works ok with it from a performance perspective, but every once in a while I have to update a set of rows in the table which causes problems. It works fine if I update a small set of data, but if I have to update a set of 40,000 records or so it takes around 3 minutes and blocks on the table which causes problems since the inserts start failing.
If I just run a select to get back the data that needs to be updated, I get the 40k records back in about 2 seconds. It's just the updates that take forever. This is reflected in the execution plan for the update, where the clustered index update takes up 90% of the cost and the index seek and top operator to get the rows take up 10%. The column I'm updating is not part of any index key, so it's not like it's reorganizing anything.
Does anyone have any ideas on how this could be sped up? My thought now is to write a service that will just see when these updates have to happen, pull back the records that have to be updated, and then loop through and update them one by one. This will satisfy my business needs but it's another module to maintain and I would love if I could fix this from just a DBA side of things.
Thanks for any thoughts!
Actually, it might reorganise pages if you update the nvarchar columns.
Depending on what the update does to these columns, it might cause the record to grow bigger than the space reserved for it before the update.
(See an explanation of how nvarchar is stored at http://www.databasejournal.com/features/mssql/physical-database-design-consideration.html.)
So say a record has a string of 20 characters saved in the nvarchar column - this takes 20*2+2 bytes of space (2 for the pointer). This is written at the initial insert into your table (based on the index structure). SQL Server only uses as much space as your nvarchar really needs.
Now comes the update and inserts a string of 40 characters. And oops, the space for the record within the leaf structure of your index is suddenly too small, so off goes the record to a different physical place, with a pointer in the old place pointing to the actual location of the updated record.
This then causes your index to go stale, and because the whole physical structure requires changing, you see a lot of index work going on behind the scenes, very likely causing an exclusive table lock escalation.
Not sure how best to deal with this. Personally, if possible, I take an exclusive table lock, drop the index, do the updates, and reindex. Because your updates sometimes cause the index to go stale, this might be the fastest option. However, it requires a maintenance window.
You should batch up your update into several smaller updates (say 10,000 rows at a time - TEST!) rather than one large update of 40k rows.
This way you will avoid a table lock: SQL Server will only take out about 5,000 locks (page or row) before escalating to a table lock, and even that is not very predictable (memory pressure etc.). Smaller updates made in this fashion will at least avoid the concurrency issues you are experiencing.
You can batch the updates using a service or a firehose cursor; a rough sketch follows the link below.
Read this for more info:
http://msdn.microsoft.com/en-us/library/ms184286.aspx
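A rough sketch of that batching approach; the table name, column names and batch size are placeholders to test with:
declare @rows int
set @rows = 1
while @rows > 0
begin
-- each small batch is its own transaction, keeping lock counts below the escalation threshold
update top (10000) dbo.YourLargeTable
set YourColumn = 'new value', NeedsUpdate = 0   -- clearing the flag keeps the loop finite
where NeedsUpdate = 1
set @rows = @@rowcount
end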
Hope this helps
Robert
The most brute-force (and simplest) way is to have a basic service, as you mentioned. That has the advantage of being able to scale with the load on the server and/or the data load.
For example, if you have a set of updates that must happen ASAP, then you could turn up the batch size. Conversely, for less important updates, you could have the update "server" slow down if each update is taking "too long" to relieve some of the pressure on the DB.
This sort of "heartbeat" process is rather common in systems and can be very powerful in the right situations.
It's odd that your analyzer says the time goes into updating the clustered index. Did the size of the data change when you updated? It seems like the nvarchar is driving the data to be reorganized, which might require updates to index pointers (as KMB already pointed out). In that case you might want to increase the free-space percentage on the data and index pages so that they can grow without relinking/reallocation. Since an update is an IO-intensive operation (unlike a read, which can be buffered), performance also depends on several factors:
1) Are your tables partitioned by data?
2) Does the entire table lie on the same SAN disk (or is the SAN striped well)?
3) How verbose is the transaction logging? Can the buffer size of the transaction logging be increased to support larger writes to the log for massive inserts?
It is also important which API/language you are using; e.g., JDBC supports a batch update feature which makes multiple updates a bit more efficient.

Fastest way to do mass update

Let’s say you have a table with about 5 million records and a nvarchar(max) column populated with large text data. You want to set this column to NULL if SomeOtherColumn = 1 in the fastest possible way.
The brute-force UPDATE does not work very well here because it creates a large implicit transaction and takes forever.
Doing the updates in small batches of 50K records at a time works, but it's still taking 47 hours to complete on a beefy 32-core/64GB server.
Is there any way to do this update faster? Are there any magic query hints / table options that sacrifices something else (like concurrency) in exchange for speed?
NOTE: Creating temp table or temp column is not an option because this nvarchar(max) column involves lots of data and so consumes lots of space!
PS: Yes, SomeOtherColumn is already indexed.
From everything I can see it does not look like your problems are related to indexes.
The key seems to be in the fact that your nvarchar(max) field contains "lots" of data. Think about what SQL has to do in order to perform this update.
Since the column you are updating is likely more than 8000 characters it is stored off-page, which implies additional effort in reading this column when it is not NULL.
When you run a batch of 50000 updates SQL has to place this in an implicit transaction in order to make it possible to roll back in case of any problems. In order to roll back it has to store the original value of the column in the transaction log.
Assuming (for simplicity's sake) that each column value contains on average 10,000 bytes of data, 50,000 rows will contain around 500MB of data, which has to be stored temporarily (in simple recovery mode) or permanently (in full recovery mode).
There is no way to disable the logs as it will compromise the database integrity.
I ran a quick test on my dog slow desktop, and running batches of even 10,000 becomes prohibitively slow, but bringing the size down to 1000 rows, which implies a temporary log size of around 10MB, worked just nicely.
I loaded a table with 350,000 rows and marked 50,000 of them for update. This completed in around 4 minutes, and since it scales linearly you should be able to update your entire 5 million rows in around 6 hours on my 1-processor, 2GB desktop, so I would expect something much better on your beefy server backed by a SAN or something.
You may want to run your update statement as a select, selecting only the primary key and the large nvarchar column, and ensure this runs as fast as you expect.
Of course the bottleneck may be other users locking things or contention on your storage or memory on the server, but since you did not mention other users I will assume you have the DB in single user mode for this.
As an optimization you should ensure that the transaction logs are on a different physical disk /disk group than the data to minimize seek times.
Hopefully you have already dropped any indexes on the column you are setting to null, including full-text indexes. As said before, turning off transactions and the log file temporarily would do the trick. Backing up your data will usually truncate your log files too.
You could set the database recovery mode to Simple to reduce logging, BUT do not do this without considering the full implications for a production environment.
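For reference, a minimal sketch of switching recovery models (the database name is a placeholder; note that switching back to FULL breaks the log backup chain, so take a full or differential backup afterwards):
ALTER DATABASE YourDb SET RECOVERY SIMPLE;
-- ... run the batched update ...
ALTER DATABASE YourDb SET RECOVERY FULL;
-- take a full (or differential) backup here to restart the log backup chain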
What indexes are in place on the table? Given that batch updates of approx. 50,000 rows take so long, I would say you require an index.
Have you tried placing an index or statistics on someOtherColumn?
This really helped me. I went from 2 hours to 20 minutes with this.
/* I set the database recovery mode to Simple */
/* Update table statistics */
set transaction isolation level read uncommitted
/* Your 50k update goes here, just to get a measure of the time it will take */
set transaction isolation level read committed
In my experience working with MSSQL 2005, moving 4 million 46-byte records (no nvarchar(max), though) every day, automatically, from one table in a database to another table in a different database takes around 20 minutes on a quad-core, 8GB, 2GHz server, and it doesn't hurt application performance. By moving I mean INSERT INTO SELECT and then DELETE. CPU usage never goes over 30%, even when the table being deleted from has 28M records and constantly receives around 4K inserts per minute (but no updates). Well, that's my case; it may vary depending on your server load.
READ UNCOMMITTED
"Specifies that statements (your updates) can read rows that have been modified by other transactions but not yet committed." In my case, the records are readonly.
I don't know what rg-tsql means but here you'll find info about transaction isolation levels in MSSQL.
Try indexing 'SomeOtherColumn'... 50K records should update in a snap. If there is already an index in place, see if it needs to be reorganized and whether statistics have been collected for it.
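A minimal sketch of that index; the table name is a placeholder:
create nonclustered index IX_YourTable_SomeOtherColumn
on dbo.YourTable (SomeOtherColumn);
-- refresh statistics on an existing index
update statistics dbo.YourTable IX_YourTable_SomeOtherColumn;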
If you are running a production environment without enough space to duplicate all your tables, I believe you are looking for trouble sooner or later.
If you provide some info about the number of rows with SomeOtherColumn = 1, perhaps we can think of another way, but I suggest:
0) Back up your table
1) Index the flag column
2) Set the table option to "no log transactions"... if possible
3) Write a stored procedure to run the updates in batches (see the sketch below)
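A rough sketch of such a procedure; the table and column names are placeholders, and the batch size of 1000 is only a starting point:
create procedure dbo.NullOutLargeColumn
as
begin
set nocount on;
declare @rows int
set @rows = 1
while @rows > 0
begin
-- null the large column in small batches so each implicit transaction (and its log usage) stays small
update top (1000) dbo.BigTable
set LargeTextColumn = null
where SomeOtherColumn = 1
and LargeTextColumn is not null
set @rows = @@rowcount
end
end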