Maintaining Someone's Stored Procedures [closed] - sql

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I have this question: a developer wrote very complex stored procedures and then left the organization. Now you're taking over his work and have to make those same stored procedures run fast, or at least work as they did before and return the same results. What steps should we follow? In other words, it's like taking over someone's unfinished work.

Stored procedures are notoriously hard to maintain. I would start by writing unit tests - this could involve setting up a dedicated test environment, with "known good" data. Figure out the major logic branches in the procs, and write unit tests to cover those cases. This should make you more familiar with the code.
Once you have unit tests, you can work on optimization (if I've understood your question, you're trying to improve performance). If your performance optimization involves changing the procs, the unit tests will tell you if you've changed the behaviour of the code.
Make sure you keep the unit tests up to date, so that when you leave, the next person doesn't face the same challenge!
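On SQL Server, one way to sketch such a unit test is with the open-source tSQLt framework; here `dbo.usp_GetOrderTotal` and `dbo.Orders` are hypothetical names standing in for your real proc and table:

```sql
-- Minimal tSQLt sketch; dbo.usp_GetOrderTotal and dbo.Orders are hypothetical.
EXEC tSQLt.NewTestClass 'ProcTests';
GO
CREATE PROCEDURE ProcTests.[test usp_GetOrderTotal returns known-good total]
AS
BEGIN
    -- Isolate the proc from real data by faking the table it reads.
    EXEC tSQLt.FakeTable 'dbo.Orders';
    INSERT INTO dbo.Orders (OrderId, Amount) VALUES (1, 10.00), (2, 15.50);

    DECLARE @actual MONEY;
    EXEC dbo.usp_GetOrderTotal @Total = @actual OUTPUT;

    EXEC tSQLt.AssertEquals @Expected = 25.50, @Actual = @actual;
END;
GO
EXEC tSQLt.Run 'ProcTests';
```

Each test fakes out the tables it needs, so it runs against "known good" data regardless of what is in the dev database.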

First, look at the execution plan of the stored procedure. Make sure you understand why the SQL Server query optimizer chose this plan over another, which indexes it used and why, how statistics work, and so on.
Then, make it better.
These are the steps you need to follow:
Understand what's being done
Make it better
Repeat.
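As a sketch of step one on SQL Server, you can pull the cached plan and aggregate runtime stats for a procedure from the DMVs (the proc name below is hypothetical):

```sql
-- Cached plan and aggregate stats for one procedure (name is hypothetical).
SELECT p.name,
       s.execution_count,
       s.total_worker_time,
       s.total_logical_reads,
       qp.query_plan               -- XML plan; open it in SSMS to inspect operators
FROM sys.procedures AS p
JOIN sys.dm_exec_procedure_stats AS s
  ON s.object_id = p.object_id
CROSS APPLY sys.dm_exec_query_plan(s.plan_handle) AS qp
WHERE p.name = 'usp_MySlowProc';

-- Or measure a single run interactively:
SET STATISTICS IO, TIME ON;
EXEC dbo.usp_MySlowProc;
```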

Related

How to improve the performance of the package [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I have been asked to improve the package performance without affecting its functionality. How do I start with optimisation? Any suggestions?
In order to optimize PL/SQL programs you need to know where they spend time during execution.
Oracle provides two tools for profiling PL/SQL. The first one is DBMS_PROFILER. Running a packaged procedure in a profiler session gives us a breakdown of each program line executed and how much time was spent on each line. This gives us an indication of where the bottlenecks are: we need to focus on the lines which consume the most time. We can only use this on our own packages, but it writes to database tables, so it is easy to use.
In 11g Oracle also gave us the Hierarchical Profiler, DBMS_HPROF. This does something similar, but it allows us to drill down into the performance of dependencies in other schemas; this can be very useful if your application has lots of schemas. The snag is that the Hierarchical Profiler writes to files and uses external tables, and some places are funny about the database application writing to the OS file system.
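A minimal DBMS_PROFILER session might look like this, where `my_pkg.slow_proc` is a hypothetical procedure under test:

```sql
-- Profile one run of a (hypothetical) packaged procedure.
BEGIN
  DBMS_PROFILER.start_profiler('tuning my_pkg');
  my_pkg.slow_proc;                 -- hypothetical procedure under test
  DBMS_PROFILER.stop_profiler;
END;
/

-- Lines ranked by time spent (total_time is reported in nanoseconds).
SELECT u.unit_name, d.line#, d.total_occur, d.total_time
FROM   plsql_profiler_units u
JOIN   plsql_profiler_data  d
       ON d.runid = u.runid AND d.unit_number = u.unit_number
ORDER  BY d.total_time DESC;
```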
Once you have your profiles you know where you need to start tuning. The PL/SQL Guide has a whole chapter on Tuning and Optimization. Check it out.
" without affecting the functionality."
Depending on what bottlenecks you have, you may need to rewrite some code. To safely change the internal workings of PL/SQL without affecting the external functionality (same outcome for same input) you need a comprehensive set of unit tests. If you don't have these already, you will need to write them first.
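For PL/SQL, the open-source utPLSQL framework is one option for such tests; a sketch, where `my_pkg.add_numbers` is a hypothetical function under test:

```sql
-- utPLSQL v3 sketch; my_pkg.add_numbers is a hypothetical function under test.
CREATE OR REPLACE PACKAGE test_my_pkg AS
  --%suite(my_pkg behaviour)

  --%test(add_numbers returns the same result as before the rewrite)
  PROCEDURE add_numbers_known_good;
END test_my_pkg;
/

CREATE OR REPLACE PACKAGE BODY test_my_pkg AS
  PROCEDURE add_numbers_known_good IS
  BEGIN
    ut.expect(my_pkg.add_numbers(2, 3)).to_equal(5);
  END;
END test_my_pkg;
/

BEGIN
  ut.run('test_my_pkg');   -- run the suite and report results
END;
/
```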

Should I create a counter column? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
Optimization was never one of my areas of expertise. I have a users table, and every user has many followers. So now I'm wondering if I should use a counter column in case some user has a million followers. Instead of counting over the whole relations table, shouldn't I use a counter?
I'm working with SQL database.
Update 1
Right now I'm only planning how I should build my site. I haven't written the code yet. I don't know if I'll have slow performance; that's why I'm asking you.
You should certainly not introduce a counter right away. The counter is redundant data and it will complicate everything. You will have to master the additional complexity and it'll slow down the development process.
Better to start with a normalized model and see how it works. If you really run into performance problems, solve them then.
Remember: premature optimization is the root of all evil.
It's generally good practice to avoid duplication of data, such as storing a summary of one table's data in another table.
It depends on what this is for. If this is for reporting, speed is usually not an issue and you can use a join.
If it has to do with the application and you're running into performance issues with a join or computed column, you may want to consider a summary table generated on a schedule.
If you're not seeing a performance issue, leave it alone.
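To make the normalized starting point concrete: with an index whose leading column is the followed user, counting followers is an index range scan rather than a scan of the whole table. All names below are hypothetical:

```sql
-- Hypothetical normalized schema: one row per follow relationship.
CREATE TABLE followers (
    follower_id  INT NOT NULL,
    followee_id  INT NOT NULL,
    PRIMARY KEY (follower_id, followee_id)
);

-- An index with followee_id leading makes the count below an index range scan.
CREATE INDEX ix_followers_followee ON followers (followee_id);

-- Count a single user's followers; only the index needs to be read.
SELECT COUNT(*) AS follower_count
FROM followers
WHERE followee_id = 42;
```

Even a million matching rows is a narrow index range, so this often stays fast enough that no counter column is ever needed.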

Why would a Scripting language be made 'purposefully Turing non-complete'? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
So, I was reading about Bitcoin Script in the official documentation and found this line: "Script is simple, stack-based, and processed from left to right. It is purposefully not Turing-complete, with no loops." I tried hard but couldn't understand why someone would make a language "purposefully non-Turing-complete". What is the reason for this? What happens if a language becomes Turing-complete?
And extending further: does "with no loops" have anything to do with the script being non-Turing-complete?
Possible reasons:
Security: if there are no loops, a program will always terminate, so a user can't hang the interpreter. If, in addition, there is a limit on the size of the script, you can enforce pretty restrictive time constraints. Another example of a language without loops is Google search queries: if Google allowed loops in queries, users would be able to kill their servers.
Simplicity: having no loops makes the language much easier to read and write for non-programmers.
No need: if there is no business need for it, then why bother?
The main reason is because Bitcoin scripts are executed by all miners when processing/validating transactions, and we don't want them to get stuck in an infinite loop.
Another reason is that, according to this message from Mike Hearn, Bitcoin Script was an afterthought of Satoshi's, an attempt to incorporate a few types of transactions he had in mind. This might explain why it is not so well designed and has little expressiveness.
Ethereum has a different approach by allowing arbitrary loops but making the user pay for execution steps.
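For context, the standard pay-to-public-key-hash locking script illustrates the style: a straight-line stack program that never branches backwards, so evaluation always terminates after one left-to-right pass (`<pubKeyHash>` is the usual placeholder for the 20-byte hash pushed onto the stack):

```text
OP_DUP OP_HASH160 <pubKeyHash> OP_EQUALVERIFY OP_CHECKSIG
```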

Does EF not using the same old concept of creating large query that leads to degrade performance? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I know this could be a stupid question, but as a beginner I must ask it of experts to clear my doubt.
When we use Entity Framework to query data from database by joining multiple tables it creates a sql query and this query is then fired to database to fetch records.
"We know that if we execute large query from .net code it will
increase network traffic and performance will be down. So instead of
writing large query we create and execute stored procedure and that
significantly increases the performance."
My question is: doesn't EF use the same old approach of creating a large query, which would degrade performance?
Experts please clear my doubts.
Thanks.
Contrary to popular myth, stored procedures are not any faster than a regular query. There are some slight possible direct performance improvements when using stored procedures (execution plan caching, precompilation), but with a modern caching environment and newer query optimizers and performance analysis engines, the benefits are small at best. These potential optimizations were already just a small part of the query-results generation process anyway; the most time-intensive part is the actual collection, seeking, sorting, and merging of data. That makes these stored-procedure advantages downright irrelevant.
Now, one other point. There is absolutely no way, ever, that sending 500 bytes of query text instead of 50 bytes for the name of a stored procedure is going to have any effect on a 100 Mb/s link.
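For illustration, EF typically sends its generated SQL as a parameterized batch via sp_executesql, which SQL Server plan-caches much like a stored procedure call; the query below is a hypothetical example of the shape, not captured EF output:

```sql
-- Shape of what EF sends over the wire (hypothetical example, not actual EF output).
-- Parameterized batches like this get cached plans, so repeated executions
-- reuse the compiled plan much as a stored procedure call would.
EXEC sp_executesql
     N'SELECT c.CustomerId, c.Name, o.OrderId, o.Total
       FROM Customers AS c
       JOIN Orders AS o ON o.CustomerId = c.CustomerId
       WHERE c.CustomerId = @p0',
     N'@p0 int',
     @p0 = 42;
```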

Best way to improve the database performance [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I was recently in an interview and I was asked the following question:
After one year of publishing your application, the data in the database has become massive. What is the best way to optimize the DB performance on the database side, not the coding side, whether the database is Oracle or SQL Server? I just want to know the best answer to this question.
I can give you an answer, but can't guarantee that an interviewer would like it.
The best way to optimise the performance is to understand what your application does, and the data structures that the system provides. You must understand the business so that you can understand the data, and when you do that you'll know whether the SQL submitted to the system is "asking the correct question", and doing so in a way that makes sense for the data and its distribution.
Furthermore, you should measure and document what the normal behaviour of the system is, and the cycles it might go through on a daily, weekly, monthly, quarterly and annual basis. You should be prepared to be able to quantify any deviation from normal performance.
You must understand the database technology itself. The concepts, the memory structures and processing, REDO, UNDO, index and table types, and maybe partitioning, parallelism, and RAC. The upsides and the downsides.
You must know SQL extremely well, and be completely up to date on its capabilities in your DB version, and any new ones now available. You must be able to read a raw execution plan straight from DBMS_XPlan(). Tracing query execution must be within your skill set.
You must understand query transformation and optimisation, the use of bind variables, and statistics.
If I had to choose only one of the above, it would be that you must have measured and documented historical performance, and be able to quantify deviations from it, because without that you will never know where to start.
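As a sketch of the plan-reading skill mentioned above, on Oracle you can pull a raw plan with DBMS_XPLAN (the table and predicate here are hypothetical):

```sql
-- Estimated plan for a hypothetical query.
EXPLAIN PLAN FOR
SELECT * FROM orders WHERE customer_id = 42;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Actual plan of the last statement run in this session, including
-- runtime row counts (needs statistics_level = ALL or the hint below).
SELECT /*+ GATHER_PLAN_STATISTICS */ * FROM orders WHERE customer_id = 42;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => 'ALLSTATS LAST'));
```

Comparing estimated versus actual row counts in the second output is a common way to spot stale statistics.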
I'm pretty sure the point of the question was to see how you deal with vague, overly broad questions. One thing you did that was pretty positive, was to seek out authoritative answers on SO. Don't know if that's going to help you now that the interview is done.
So - how do you respond to such a question? An "I have no way of knowing" is probably not the approach to take - even if it is the correct answer.
Maybe something like, "I'm not entirely sure what you're asking - so let me try to understand with a couple of questions. Are we talking about query performance or update performance? Are there indexes to support the workload? What makes you feel optimization is necessary?"
I think it is as much about your approach to problem solving as any particular tech.
But, then on the other hand, maybe I'm wrong. Maybe the first answer is always "Index the hell out of it!" :-D
Interviewing is a nightmare, isn't it?