How to improve the performance of the package [closed] - sql

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I have been asked to improve the package's performance without affecting its functionality. How should I start with optimisation? Any suggestions?

In order to optimize PL/SQL programs you need to know where they spend time during execution.
Oracle provides two tools for profiling PL/SQL. The first is DBMS_PROFILER. Running a packaged procedure in a profiler session gives us a breakdown of each program line executed and how much time was spent on each line. This indicates where the bottlenecks are: we need to focus on the lines which consume the most time. We can only use it on our own packages, but it writes to database tables so it is easy to use. Find out more.
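A minimal DBMS_PROFILER session might look like the sketch below. It assumes the profiler tables have already been created (typically with the `proftab.sql` script) and that `MY_PKG.MY_PROC` stands in for whatever procedure you are investigating:

```sql
-- Start a profiler run, execute the code under test, then stop the run.
DECLARE
  l_result BINARY_INTEGER;
BEGIN
  l_result := DBMS_PROFILER.START_PROFILER(run_comment => 'Tuning MY_PKG');
  my_pkg.my_proc;                      -- hypothetical procedure being profiled
  l_result := DBMS_PROFILER.STOP_PROFILER;
END;
/

-- Find the slowest lines from the most recent run
-- (total_time is recorded in nanoseconds).
SELECT u.unit_name, d.line#, d.total_occur,
       ROUND(d.total_time / 1e9, 3) AS seconds
FROM   plsql_profiler_units u
       JOIN plsql_profiler_data d
         ON d.runid = u.runid AND d.unit_number = u.unit_number
WHERE  u.runid = (SELECT MAX(runid) FROM plsql_profiler_runs)
ORDER  BY d.total_time DESC;
```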
In 11g Oracle also gave us the Hierarchical Profiler, DBMS_HPROF. This does something similar, but it allows us to drill down into the performance of dependencies in other schemas; this can be very useful if your application has many schemas. The snag is that the hierarchical profiler writes to files and uses external tables, and some sites are funny about database applications writing to the OS file system. Anyway, find out more.
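A hierarchical profiler run follows the same start/run/stop pattern. The sketch below assumes a directory object named `PLSHPROF_DIR` exists and that you have execute rights on DBMS_HPROF; the procedure name is illustrative:

```sql
-- Profile one execution, writing the raw trace to an OS file.
BEGIN
  DBMS_HPROF.START_PROFILING(location => 'PLSHPROF_DIR',
                             filename => 'my_pkg.trc');
  my_pkg.my_proc;   -- hypothetical procedure under test
  DBMS_HPROF.STOP_PROFILING;
END;
/

-- Load the raw trace into the DBMSHP_* analysis tables for querying.
DECLARE
  l_runid NUMBER;
BEGIN
  l_runid := DBMS_HPROF.ANALYZE(location => 'PLSHPROF_DIR',
                                filename => 'my_pkg.trc');
  DBMS_OUTPUT.PUT_LINE('runid = ' || l_runid);
END;
/
```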
Once you have your profiles you know where you need to start tuning. The PL/SQL Guide has a whole chapter on Tuning and Optimization. Check it out.
"without affecting the functionality"
Depending on what bottlenecks you have, you may need to rewrite some code. To safely change the internal workings of PL/SQL without affecting the external functionality (the same outcome for the same input) you need a comprehensive set of unit tests. If you don't have these already, you will need to write them first.
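If you use a framework such as utPLSQL, a regression test that pins down the current behaviour before any tuning might look like this (the package, function, and values are all illustrative):

```sql
CREATE OR REPLACE PACKAGE test_my_pkg AS
  --%suite(MY_PKG regression tests)

  --%test(my_func returns the same result for a known input)
  PROCEDURE known_input_gives_known_output;
END test_my_pkg;
/

CREATE OR REPLACE PACKAGE BODY test_my_pkg AS
  PROCEDURE known_input_gives_known_output IS
  BEGIN
    -- Pin the current (pre-tuning) behaviour: same input, same outcome.
    ut.expect(my_pkg.my_func(42)).to_equal(4242);
  END;
END test_my_pkg;
/
```

Run the suite before and after each change; a failing expectation tells you the rewrite altered the external behaviour.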

Related

Maintaining Someones Stored Procedures [closed]

Closed 2 years ago.
I have this question: a developer who wrote very complex stored procedures has left the organization. Now you are taking over his work and must make those same stored procedures run very fast, or at least work as before and return the same results. What steps do we need to follow? In other words, it is like picking up someone else's unfinished work.
Stored procedures are notoriously hard to maintain. I would start by writing unit tests - this could involve setting up a dedicated test environment, with "known good" data. Figure out the major logic branches in the procs, and write unit tests to cover those cases. This should make you more familiar with the code.
Once you have unit tests, you can work on optimization (if I've understood your question, you're trying to improve performance). If your performance optimization involves changing the procs, the unit tests will tell you if you've changed the behaviour of the code.
Make sure you keep the unit tests up to date, so that when you leave, the next person doesn't face the same challenge!
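For example, a bare-bones regression check in T-SQL, without any framework, might seed known-good data, run the proc, and compare against the result it produced before your changes (table, proc, and values are illustrative):

```sql
-- Seed a known input, run the proc, and assert the pre-change result.
BEGIN TRAN;

INSERT INTO dbo.Orders (OrderId, Amount) VALUES (1, 100.00);

DECLARE @actual MONEY;
EXEC dbo.usp_CalcOrderTotal @OrderId = 1, @Total = @actual OUTPUT;

IF @actual <> 100.00   -- expected value captured from the original proc
    RAISERROR('usp_CalcOrderTotal: behaviour changed', 16, 1);

ROLLBACK TRAN;         -- leave the test environment clean
```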
First, look at the execution plan of the stored procedure. Make sure you understand why the SQL Server optimization engine chose this plan over another, which indexes it used and why, how the statistics work, and so on.
Then, make it better.
These are the steps you need to follow:
Understand what's being done.
Make it better.
Repeat.
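In SQL Server, the "understand" step can start with the actual execution plan and runtime statistics for a single run (the procedure name is hypothetical):

```sql
-- Show I/O and timing statistics, plus the actual execution plan,
-- for one run of the procedure under investigation.
SET STATISTICS IO, TIME ON;
SET STATISTICS XML ON;       -- returns the actual plan as XML

EXEC dbo.usp_SlowProcedure;  -- hypothetical proc being tuned

SET STATISTICS XML OFF;
SET STATISTICS IO, TIME OFF;
```

The plan output shows which indexes were used, where scans replaced seeks, and where estimated versus actual row counts diverge, which usually points at stale statistics.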

Why would a Scripting language be made 'purposefully Turing non-complete'? [closed]

Closed 7 years ago.
So, I was reading about Bitcoin Script in the official documentation and found this line: "Script is simple, stack-based, and processed from left to right. It is purposefully not Turing-complete, with no loops." I tried hard but couldn't understand why someone would make a language "purposefully non-Turing-complete". What is the reason for this? What happens if a language becomes Turing-complete?
And, extending further, does "with no loops" have anything to do with the script being non-Turing-complete?
Possible reasons:
Security: if there are no loops, the program will always terminate, so the user can't hang the interpreter. If, in addition, there is a limit on the size of the script, you can enforce quite restrictive time constraints. Another example of a language without loops is Google search queries: if Google allowed loops in queries, users would be able to kill their servers.
Simplicity: having no loops makes the language much easier for non-programmers to read and write.
No need: if there is no business need for it, why bother?
The main reason is because Bitcoin scripts are executed by all miners when processing/validating transactions, and we don't want them to get stuck in an infinite loop.
Another reason is that, according to this message from Mike Hearn, Bitcoin Script was an afterthought of Satoshi's to try to incorporate a few types of transactions he had had in mind. This might explain the fact that it is not so well designed and has little expressiveness.
Ethereum has a different approach by allowing arbitrary loops but making the user pay for execution steps.

Does EF not using the same old concept of creating large query that leads to degrade performance? [closed]

Closed 8 years ago.
I know this could be a stupid question, but as a beginner I must ask it of the experts to clear my doubt.
When we use Entity Framework to query data from database by joining multiple tables it creates a sql query and this query is then fired to database to fetch records.
"We know that if we execute a large query from .NET code it will increase network traffic and performance will suffer. So instead of writing a large query we create and execute a stored procedure, and that significantly increases the performance."
My question is: does EF not use the same old concept of creating a large query, which degrades performance?
Experts please clear my doubts.
Thanks.
Contrary to popular myth, stored procedures are not any faster than a regular query. There are some slight possible direct performance improvements when using stored procedures (execution plan caching, precompilation), but with a modern caching environment and newer query optimizers and performance analysis engines the benefits are small at best. Moreover, those potential optimizations were always only a small part of the query results generation process; the most time-intensive part is the actual collection, seeking, sorting, and merging of data, so these stored procedure advantages are downright irrelevant.
Now, one other point. There is absolutely no way, ever, that sending 500 bytes for the text of a query versus 50 bytes for the name of a stored procedure is going to have any noticeable effect on a 100 Mb/s link.

Getting intermediate spool output [closed]

Closed 10 years ago.
I am using Oracle 11g, and I have a SQL file with spooling enabled which runs for at least 7+ hours, as it has to spool a huge amount of data. The spool output is dumped only when the whole SQL finishes, but I would like to know if there is any other way to monitor the progress of my SQL, or to see the data spooled up to that point in time, so that I can be sure my SQL is running as expected. Please help with your inputs.
Sounds like you are using DBMS_OUTPUT, which always only starts to actually output the results after the procedure completes.
If you want to have real/near time monitoring of progress you have 3 options:
Use utl_file to write to an OS file. You will need access to the database server's OS file system for this.
Write to a table and use PRAGMA AUTONOMOUS_TRANSACTION so you can commit the log table entries without impacting your main processing. This is easy to implement and readily accessible. Implemented well, this can become a de facto standard for all your procedures. You may then need to implement some housekeeping to stop the log table getting too big and unwieldy.
A quick and dirty option, which is transient, is to use DBMS_APPLICATION_INFO.SET_CLIENT_INFO and then query v$session.client_info. This works well, is good for keeping track of things, is fairly unobtrusive, and is fast because it is a memory structure.
DBMS_OUTPUT really is limited.
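The second option above can be sketched like this (the table and procedure names are illustrative):

```sql
CREATE TABLE app_log (
  log_time  TIMESTAMP DEFAULT SYSTIMESTAMP,
  message   VARCHAR2(4000)
);

CREATE OR REPLACE PROCEDURE log_msg (p_message IN VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;   -- commits independently of the caller
BEGIN
  INSERT INTO app_log (message) VALUES (p_message);
  COMMIT;                          -- does not commit the caller's work
END log_msg;
/

-- Inside the long-running job, call log_msg periodically, e.g.:
--   log_msg('Processed ' || l_rows || ' rows so far');
-- then monitor progress from another session:
--   SELECT * FROM app_log ORDER BY log_time;
```

Because the logging procedure runs in its own transaction, the log rows are visible to other sessions immediately, even while the main job's work remains uncommitted.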

What is the dif between Software testing and Software inspection? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 6 years ago.
Improve this question
So I've read some reports about both of these methods but I can't really grasp the difference between the two.
If anyone could sum it up for me or try to explain it I'd be ever so grateful!
BR, Fredrik
It's similar to a car. If you test it, you usually drive it around or at least turn it on. If you inspect it, you usually check the fluids, maybe pull a spark plug, connect it to a computer and check its settings, and fiddle with buttons and switches to make sure there is connectivity. During an inspection you may test the vehicle, but during a test you do not always inspect the vehicle.
Software testing is useful because it lets you run the system in a mock-up of the production environment to see whether there are bugs or errors, which either throw exceptions or cause logical faults such as putting relationships into an invalid state.
Software inspection is more involved. It can include testing, but it can also include code review to make sure that an efficient process is used and that readability and maintainability are adequate. It helps ensure that features are properly decoupled, that the program runs as fast as possible, and that nothing undesirable is going on behind the scenes.