Getting intermediate spool output [closed] - sql

I am using Oracle 11g, and I have a SQL file with 'spool on' which runs for at least 7+ hours, as it has to spool a huge amount of data. The spool output is only written out once the whole SQL finishes, but I would like to know whether there is any way to track the progress of my SQL, or to see the data spooled up to a given point in time, so that I can be sure my SQL is running as expected. Please help with your inputs.

It sounds like you are using DBMS_OUTPUT, which only starts to actually output results after the procedure completes.
If you want real-time or near-real-time monitoring of progress, you have three options:
1. Use UTL_FILE to write to an OS file. You will need access to the database server's OS file system for this.
2. Write to a table, and use PRAGMA AUTONOMOUS_TRANSACTION so you can commit the log table entries without impacting your main processing. This is easy to implement and readily accessible, and implemented well it can become a de facto standard for all your procedures. You may then need some sort of housekeeping to stop the log table getting too big and unwieldy (a sketch combining this with option 3 follows below).
3. A quick-and-dirty option, which is transient, is to use DBMS_APPLICATION_INFO.SET_CLIENT_INFO and then query v$session.client_info. This works well and is good for keeping track of things; it is fairly unobtrusive and, because it is a memory structure, fast.
DBMS_OUTPUT really is limited.
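For example, options 2 and 3 can be combined in one small logging procedure along these lines. This is a rough sketch: the progress_log table and the log_progress name are made up for illustration.

    -- Hypothetical log table; name and columns are illustrative only.
    CREATE TABLE progress_log (
      logged_at  TIMESTAMP DEFAULT SYSTIMESTAMP,
      message    VARCHAR2(4000)
    );

    CREATE OR REPLACE PROCEDURE log_progress (p_message IN VARCHAR2) IS
      PRAGMA AUTONOMOUS_TRANSACTION;  -- commits independently of the caller
    BEGIN
      INSERT INTO progress_log (message) VALUES (p_message);
      COMMIT;
      -- v$session.client_info holds at most 64 characters
      DBMS_APPLICATION_INFO.SET_CLIENT_INFO(SUBSTR(p_message, 1, 64));
    END log_progress;
    /

Call log_progress('Processed row ' || i) at sensible intervals inside the long-running job, then query progress_log (or v$session.client_info) from a second session to watch it advance.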

Related

Maintaining Someone's Stored Procedures [closed]

I have this question: a developer who wrote very complex stored procedures has left the organization. Now you are taking over his work and have to make those same stored procedures run fast, or at least work as they did before and return the same results. What steps do we need to follow? In other words, it is like picking up someone's unfinished work.
Stored procedures are notoriously hard to maintain. I would start by writing unit tests - this could involve setting up a dedicated test environment, with "known good" data. Figure out the major logic branches in the procs, and write unit tests to cover those cases. This should make you more familiar with the code.
Once you have unit tests, you can work on optimization (if I've understood your question, you're trying to improve performance). If your performance optimization involves changing the procs, the unit tests will tell you if you've changed the behaviour of the code.
Make sure you keep the unit tests up to date, so that when you leave, the next person doesn't face the same challenge!
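A minimal sketch of such a test, with entirely made-up procedure and table names (usp_calc_totals, orders, customer_totals) standing in for whatever the inherited procedure actually touches; SQL Server syntax is assumed here:

    -- 1. Arrange: load "known good" input data into the test environment.
    INSERT INTO orders (order_id, customer_id, amount)
    VALUES (1, 100, 25.00), (2, 100, 75.00);

    -- 2. Act: run the procedure under test.
    EXEC usp_calc_totals @customer_id = 100;

    -- 3. Assert: output should match hand-verified expected rows. Both
    --    EXCEPT queries returning zero rows means behaviour is unchanged.
    SELECT customer_id, total FROM customer_totals
    EXCEPT
    SELECT 100, 100.00;

    SELECT 100, 100.00
    EXCEPT
    SELECT customer_id, total FROM customer_totals;

Run the whole script before and after each optimization; any rows coming back from the final two queries flag a behaviour change.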
First, look at the execution plan of the stored procedure. Make sure you understand why the SQL Server optimizer chose that plan over another execution plan, which indexes it used and why, how the statistics work, and so on.
Then, make it better.
These are the steps you need to follow:
1. Understand what's being done.
2. Make it better.
3. Repeat.
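As a concrete starting point for step 1, SQL Server can report per-statement I/O and timing for a single call (usp_calc_totals is again a hypothetical procedure name):

    SET STATISTICS IO ON;    -- logical reads per table
    SET STATISTICS TIME ON;  -- parse/compile and execution times

    EXEC usp_calc_totals @customer_id = 100;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;

    -- For the estimated plan as XML, run SET SHOWPLAN_XML ON by itself
    -- in its own batch, then execute the statement in the next batch.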

How to improve the performance of the package [closed]

I have been asked to improve the package's performance without affecting its functionality. How should I start with optimisation? Any suggestions?
In order to optimize PL/SQL programs, you need to know where they spend their time during execution.
Oracle provides two tools for profiling PL/SQL. The first is DBMS_PROFILER. Running a packaged procedure in a profiler session gives us a breakdown of each program line executed and how much time was spent on each line. This indicates where the bottlenecks are: we need to focus on the lines which consume the most time. We can only use it on our own packages, but it writes to database tables, so it is easy to use.
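A sketch of a DBMS_PROFILER session, assuming the profiler tables (plsql_profiler_runs, plsql_profiler_units, plsql_profiler_data) have already been created with Oracle's proftab.sql script; my_pkg.my_proc is a stand-in for the code under investigation:

    DECLARE
      l_result BINARY_INTEGER;
    BEGIN
      l_result := DBMS_PROFILER.START_PROFILER(run_comment => 'tuning run 1');
      my_pkg.my_proc;  -- the code being profiled
      l_result := DBMS_PROFILER.STOP_PROFILER;
    END;
    /

    -- Lines ranked by total execution time:
    SELECT u.unit_name, d.line#, d.total_occur, d.total_time
    FROM   plsql_profiler_units u
    JOIN   plsql_profiler_data  d
    ON     d.runid = u.runid AND d.unit_number = u.unit_number
    ORDER  BY d.total_time DESC;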
In 11g Oracle also gave us the Hierarchical Profiler, DBMS_HPROF. This does something similar, but it allows us to drill down into the performance of dependencies in other schemas, which can be very useful if your application has lots of schemas. The snag is that the Hierarchical Profiler writes to files and uses external tables, and some sites are funny about database applications writing to the OS file system.
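The Hierarchical Profiler equivalent looks something like this; PROF_DIR is a hypothetical directory object your DBA would need to create and grant, and the DBMSHP_* analysis tables come from Oracle's dbmshptab.sql script:

    BEGIN
      DBMS_HPROF.START_PROFILING(location => 'PROF_DIR',
                                 filename => 'my_proc.trc');
      my_pkg.my_proc;  -- the code being profiled
      DBMS_HPROF.STOP_PROFILING;
    END;
    /

    -- Load the trace file into the DBMSHP_* tables and report the run id:
    DECLARE
      l_runid NUMBER;
    BEGIN
      l_runid := DBMS_HPROF.ANALYZE(location => 'PROF_DIR',
                                    filename => 'my_proc.trc');
      DBMS_OUTPUT.PUT_LINE('runid = ' || l_runid);
    END;
    /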
Once you have your profiles, you know where you need to start tuning. The PL/SQL Guide has a whole chapter on Tuning and Optimization. Check it out.
" without affecting the functionality."
Depending on what bottlenecks you have you may need to rewrite some code. To safely change the internal workings of PL/SQL without affecting the external functionality(same outcome for same input) you need a comprehensive set of unit tests. If you don't have these already you will need to write them first.

Is it correct to use raw SQL requests in some cases? [closed]

When I filled my database with about 25K records, I noticed that my application started working slowly. I checked the logs and realized that instead of one SQL request, ActiveRecord was performing more than eight. I rewrote the code to use one SQL request, and it sped my application up by at least a factor of two.
So, is it correct to write raw SQL requests in the parts of an application that are heavily loaded?
Sometimes you need to eager load your data. Other times you really need to write raw SQL queries.
It is sometimes correct to use raw SQL, as ActiveRecord and Arel do not easily allow the full SQL syntax to be used, and sometimes it is helpful to express a scope as a raw SQL fragment. However, the first response to a performance problem should not be to reach for raw SQL.
It would be better to explore eager loading, joining methods, and other options before using raw SQL, as raw SQL may make your application less flexible to future changes.
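To make the trade-off concrete, here is roughly what the two approaches look like at the SQL level; posts and comments are hypothetical tables standing in for whatever your models map to:

    -- The N+1 pattern: one query for the parent rows, then one query
    -- per parent for the children, each a separate round trip.
    SELECT * FROM posts;
    SELECT * FROM comments WHERE post_id = 1;
    SELECT * FROM comments WHERE post_id = 2;
    -- ...and so on, once per post...

    -- What eager loading or a joining method collapses this into:
    -- a single round trip.
    SELECT posts.*, comments.*
    FROM   posts
    LEFT JOIN comments ON comments.post_id = posts.id;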
If you post the code that is causing the problem and the SQL being generated by it, then you may get useful advice on how to avoid raw SQL.

Does EF not use the same old concept of creating large queries that degrades performance? [closed]

I know this could be a stupid question, but as a beginner I must ask it of the experts to clear my doubt.
When we use Entity Framework to query data from the database by joining multiple tables, it creates a SQL query, and this query is then sent to the database to fetch the records.
"We know that if we execute a large query from .NET code it will increase network traffic and performance will suffer. So instead of writing a large query we create and execute a stored procedure, and that significantly increases performance."
My question is: is EF not using the same old concept of creating a large query, which degrades performance?
Experts please clear my doubts.
Thanks.
Contrary to popular myth, stored procedures are not any faster than a regular query. There are some slight possible direct performance improvements when using stored procedures (execution plan caching, precompilation), but with a modern caching environment and newer query optimizers and performance analysis engines, the benefits are small at best. Combine this with the fact that these potential optimizations were always just a small part of the query-results generation process (the most time-intensive parts being the actual collection, seeking, sorting, and merging of data), and these stored procedure advantages are downright irrelevant.
Now, one other point. There is absolutely no way, ever, that sending 500 bytes for the text of a query versus 50 bytes for the name of a stored procedure is going to have any noticeable effect on a 100 Mb/s link.
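To put a rough number on that (my arithmetic, not a figure from the answer): the extra 450 bytes of query text cost about

    \frac{450 \text{ bytes} \times 8 \text{ bits/byte}}{100 \times 10^{6} \text{ bits/s}} = 3.6 \times 10^{-5}\ \text{s} = 36\ \mu\text{s}

per call on a 100 Mb/s link, which is noise next to any real query's execution time.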

DB copy within MySQL that is faster than `mysqldump`? [closed]

I have a production db that I'd like to copy to dev. Unfortunately it takes about an hour to do this operation via mysqldump | mysql, and I am curious whether there is a faster way to do this via direct SQL commands within MySQL, since the copy is going into the same DBMS and not moving to another DBMS elsewhere.
Any thoughts / ideas on a streamlined process to perform this inside the DBMS so as to eliminate the long wait time?
NOTE: The primary goal here is to avoid hour-long copies, as we need some data very quickly from production in the dev db. This is not a question about locking or replication. I wanted to clarify this based on some comments, since I included more info / ancillary remarks than I should have initially.
You could set up a slave to replicate the production db, then take dumps from the slave. This would allow your production database to continue operating normally.
After the slave is done performing a backup, it will catch back up with the master.
http://dev.mysql.com/doc/refman/5.0/en/replication-solutions-backups-mysqldump.html
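If the goal is to stay entirely inside the server, one approach is to copy the schema table by table with plain SQL; prod, dev, and orders are placeholder names here, and the pattern has to be repeated (or generated from information_schema.tables) for every table:

    CREATE DATABASE IF NOT EXISTS dev;

    -- Copies the table structure, including indexes:
    CREATE TABLE dev.orders LIKE prod.orders;

    -- Copies the rows; this still reads and writes every row, but it
    -- skips the dump, transfer, and re-parse steps of mysqldump | mysql.
    INSERT INTO dev.orders SELECT * FROM prod.orders;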