Tempdb becomes too big [closed] - sql

I was executing a normal stored procedure, and afterwards my tempdb occupied 80 GB of disk space where it had been only 8 MB before. How can I overcome this, and why did it happen?

It happened because you did something in the SP that needed tempdb: sorting under certain conditions, or the dreaded DISTINCT that needs to see all the data, for example.
You can overcome this by rewriting your SQL so it does not rely on tempdb. The current oversized files you just fix: redefine the size and restart the server, and tempdb is recreated.
Depending on the database, by the way, I would NOT consider 80 GB to be excessive on a decent modern server. It depends on WHAT you do, obviously.
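For the resize itself, a minimal T-SQL sketch, assuming the default logical file name tempdev; the new initial size only takes effect once the instance restarts and tempdb is recreated:

    USE tempdb;
    -- current file sizes (size is stored in 8 KB pages)
    SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files;

    -- try to release space immediately; target size in MB
    DBCC SHRINKFILE (tempdev, 8);

    -- redefine the initial size used when tempdb is recreated at the next restart
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 8MB);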

Related

Size of database after being restored from a SQL dump [closed]

I have a 5 GB PostgreSQL dump file. I will restore it with the psql command, but I have very little free space on my computer (about 1 GB). I want to know: will the database take up 5 GB or more?
A SQL dump is typically a lot smaller than the restored database, because it only contains the definitions of indexes, not the actual index data. So you should expect the database to need at least 5 GB after being restored; if it contains a lot of indexes it might be substantially bigger.
The only situation where a SQL dump might be bigger than the restored size is when the dump contains a lot of text values longer than approximately 2 KB, because any text value exceeding that size is automatically compressed in the database. But even then it is very unlikely that the restored size will be five times smaller than the dump.
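If you do manage to restore it and want to see where the space actually went, a quick check, assuming a database named mydb:

    -- total size of the restored database
    SELECT pg_size_pretty(pg_database_size('mydb'));

    -- table data vs. index data for the largest relations
    SELECT relname,
           pg_size_pretty(pg_table_size(oid))   AS table_size,
           pg_size_pretty(pg_indexes_size(oid)) AS index_size
    FROM   pg_class
    WHERE  relkind = 'r'
    ORDER  BY pg_total_relation_size(oid) DESC
    LIMIT  10;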

How to improve the performance of the package [closed]

I have been asked to improve the package's performance without affecting its functionality. How should I start with the optimisation? Any suggestions?
In order to optimize PL/SQL programs you need to know where they spend time during execution.
Oracle provides two tools for profiling PL/SQL. The first one is DBMS_PROFILER. Running a packaged procedure in a profiler session gives us a breakdown of each program line executed and how much time was spent on each line. This gives us an indication of where the bottlenecks are: we need to focus on the lines which consume the most time. We can only use this on our own packages, but it writes to database tables so it is easy to use. Find out more.
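A minimal sketch of a profiler run; my_pkg.my_procedure stands in for your own code, and the plsql_profiler_* tables are the ones created by proftab.sql:

    BEGIN
      DBMS_PROFILER.START_PROFILER('my_pkg tuning run');
      my_pkg.my_procedure;            -- the code you want to profile
      DBMS_PROFILER.STOP_PROFILER;
    END;
    /

    -- lines ordered by time spent
    SELECT u.unit_name, d.line#, d.total_occur, d.total_time
    FROM   plsql_profiler_units u
    JOIN   plsql_profiler_data  d
           ON d.runid = u.runid AND d.unit_number = u.unit_number
    ORDER  BY d.total_time DESC;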
In 11g Oracle also gave us the Hierarchical Profiler, DBMS_HPROF. This does something similar, but it allows us to drill down into the performance of dependencies in other schemas; this can be very useful if your application has lots of schemas. The snag is that the Hierarchical Profiler writes to files and uses external tables, and some places are funny about the database application writing to the OS file system. Anyway, find out more.
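A sketch of a Hierarchical Profiler run, assuming a directory object called HPROF_DIR already exists and that the DBMS_HPROF analysis tables have been installed:

    BEGIN
      DBMS_HPROF.START_PROFILING(location => 'HPROF_DIR', filename => 'my_pkg.trc');
      my_pkg.my_procedure;            -- the code you want to profile
      DBMS_HPROF.STOP_PROFILING;
    END;
    /

    -- load the trace file into the analysis tables and get a run id back
    DECLARE
      l_runid NUMBER;
    BEGIN
      l_runid := DBMS_HPROF.ANALYZE(location => 'HPROF_DIR', filename => 'my_pkg.trc');
      DBMS_OUTPUT.PUT_LINE('runid = ' || l_runid);
    END;
    /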
Once you have your profiles you know where you need to start tuning. The PL/SQL Guide has a whole chapter on Tuning and Optimization. Check it out.
" without affecting the functionality."
Depending on what bottlenecks you have you may need to rewrite some code. To safely change the internal workings of PL/SQL without affecting the external functionality(same outcome for same input) you need a comprehensive set of unit tests. If you don't have these already you will need to write them first.
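If you are free to pick a framework, a minimal utPLSQL v3 test might look like this; my_pkg.calculate_total and the expected value are placeholders for your own code:

    CREATE OR REPLACE PACKAGE test_my_pkg AS
      --%suite(my_pkg regression tests)

      --%test(calculate_total returns the same result after the rewrite)
      PROCEDURE calc_total_unchanged;
    END;
    /

    CREATE OR REPLACE PACKAGE BODY test_my_pkg AS
      PROCEDURE calc_total_unchanged IS
      BEGIN
        ut.expect(my_pkg.calculate_total(p_order_id => 42)).to_equal(199.99);
      END;
    END;
    /

    -- run the suite
    EXEC ut.run('test_my_pkg');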

What reasons can cause an index to go into an unusable state in Oracle? [closed]

We are using an Oracle DB. In the ALL_INDEXES view, the status value for some of the indexes shows as UNUSABLE. We have observed this when we move tables from compressed to uncompressed or vice versa, but we have not performed any table moves and it is still showing some indexes as unusable. Can someone explain what the possible reasons are?
We are not creating lists on SO, but let's say the question is "What may have caused the index to go unusable?". The idea is that anything that touches the table (or partition) segment with a "bulk" DDL operation invalidates the index.
For example, if you truncate a partition or drop it, the global index will be set unusable.
(Edit: here we can also count loading with SQL*Loader in direct path mode, as discussed here.)
Another reason would be that someone explicitly set it unusable.
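A small sketch to illustrate; the table, partition and index names are made up:

    -- find the indexes that need attention
    SELECT owner, index_name, status
    FROM   all_indexes
    WHERE  status = 'UNUSABLE';

    -- example: truncating a partition marks the global index unusable
    ALTER TABLE sales TRUNCATE PARTITION p_2014;

    -- someone may also have done it explicitly
    ALTER INDEX sales_global_ix UNUSABLE;

    -- either way, rebuild to make it usable again
    ALTER INDEX sales_global_ix REBUILD;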

Can we prevent our database from auto-growing? [closed]

Can we make our database (whatever its size) not auto-grow at all (data and log files)?
If we proceed with this choice, maybe we will face problems when the database fills up during business hours.
Typically the way you prevent growth events from occurring during business hours is by pre-allocating the data and log files to a large enough size to minimize or completely eliminate auto-growth events in the first place. This may mean making the files larger than they need to be right now, but large enough to handle all of the data and/or your largest transactions across some time period x.
Other things you can do to minimize the impact of growth events:
balance the growth size so that growth events are rare, but still don't take a lot of time individually. You don't want the default of 10% and 1MB that come from the model database; but there is no one-size-fits-all answer for what your settings should be.
ensure you are in the right recovery model. If you don't need point-in-time recovery, put your database in SIMPLE. If you do, put it in FULL, but make sure you are taking frequent log backups.
ensure you have instant file initialization enabled. This won't help with log files, but when your data file grows, it should be near instantaneous, up to a certain size (again, no one-size-fits-all here).
get off of slow storage.
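Tying the pre-sizing and growth advice above together, a minimal T-SQL sketch; the database name, logical file names, sizes and growth increments are placeholders you would tune for your own workload:

    -- pre-allocate the data and log files so auto-growth events become rare
    ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_data, SIZE = 50GB);
    ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log,  SIZE = 10GB);

    -- use fixed growth increments instead of the model database defaults
    ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_data, FILEGROWTH = 512MB);
    ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log,  FILEGROWTH = 512MB);

    -- pick the recovery model consciously
    ALTER DATABASE MyDb SET RECOVERY SIMPLE;   -- or FULL, with frequent log backups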
Much more info here:
How do you clear the SQL Server transaction log?

Getting intermediate spool output [closed]

I am using Oracle 11g, and I have a SQL file with spool on which runs for at least 7+ hours, as it has to spool a huge amount of data. But the spool output is only dumped when the whole SQL has finished. Is there any other way to track the progress of my SQL, or to see the data spooled up to that point in time, so that I can be sure my SQL is running properly as expected? Please help with your inputs.
Sounds like you are using DBMS_OUTPUT, which only starts to actually output the results after the procedure completes.
If you want real-time or near-real-time monitoring of progress you have 3 options:
Use UTL_FILE to write to an OS file. You will need access to the database server's OS file system for this.
Write to a table and use PRAGMA AUTONOMOUS_TRANSACTION so you can commit the log-table entries without impacting your main processing (see the sketch at the end of this answer). This is easy to implement and readily accessible. Implemented in a good way, this can become a de facto standard for all your procedures. You may then need to implement some sort of housekeeping to avoid the log table getting too big and unwieldy.
A quick and dirty option, which is transient, is to use DBMS_APPLICATION_INFO.SET_CLIENT_INFO and then query V$SESSION.CLIENT_INFO. This works well, is good for keeping track of things, fairly unobtrusive and, because it is a memory structure, fast.
DBMS_OUTPUT really is limited.
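A sketch of options 2 and 3 above; the progress_log table and the batch user name are assumptions, while DBMS_APPLICATION_INFO itself is a standard package:

    -- option 2: an autonomous logging procedure (assumes a PROGRESS_LOG table exists)
    CREATE OR REPLACE PROCEDURE log_progress (p_msg IN VARCHAR2) AS
      PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
      INSERT INTO progress_log (log_time, msg) VALUES (SYSTIMESTAMP, p_msg);
      COMMIT;   -- commits only the log row, not the caller's work
    END;
    /

    -- option 3: transient, in-memory progress info
    BEGIN
      DBMS_APPLICATION_INFO.SET_CLIENT_INFO('spooled 100000 of 5000000 rows');
    END;
    /

    -- monitor from another session
    SELECT client_info FROM v$session WHERE username = 'MY_BATCH_USER';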