Can early testing lead to premature optimization? [closed]

One school of thought I often hear is "test early, test often", whether it is usability testing or any other kind.
Another statement that is also generally believed to be true: "Premature optimization is the root of all evil."
That leaves me a bit confused. Should I test early? If I find and fix a problem, is that optimization? Also, is it premature? Should I just use early testing to identify the problems, and then fix them later on?
Please give some guidance regarding these statements.
How do I know if I'm optimizing prematurely?

Fixing design and coding 'issues' at an early stage is not premature optimization. If you wait, more code will be built on top of those issues, and it will become more complicated (and at the very least more time-consuming) to fix them.
As a positive side effect, you also reduce the number of bugs, both now and later.
You are refactoring, not optimizing prematurely, when you improve the structure (of the code or the design) without changing functionality. Regression testing supports this:
Run tests, all should be ok
Improve code/design
Run tests, all should be ok
Of course, this only works when you have very good regression tests.
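As one minimal sketch of that run-improve-run loop, here is what a regression test could look like in tSQLt, a unit-testing framework for SQL Server; dbo.GetOrderTotal and its expected value are hypothetical stand-ins for whatever behaviour must not change:

```sql
-- Hypothetical example: pin down current behaviour before restructuring.
EXEC tSQLt.NewTestClass 'RegressionTests';
GO
CREATE PROCEDURE RegressionTests.[test GetOrderTotal is unchanged]
AS
BEGIN
    -- dbo.GetOrderTotal is a placeholder for the code being refactored.
    DECLARE @expected MONEY = 99.90;
    DECLARE @actual   MONEY = dbo.GetOrderTotal(1);
    EXEC tSQLt.AssertEquals @Expected = @expected, @Actual = @actual;
END;
GO
-- Run before and after every structural change; it should pass both times.
EXEC tSQLt.Run 'RegressionTests';
```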
If you mean performance optimization, that is a completely different matter, and I would advise against any performance optimization unless you run into performance problems, or you know beforehand that performance might be an issue. In the latter case, it should be part of the design.

Related

Maintaining Someone's Stored Procedures [closed]

Here is my situation: a developer who wrote very complex stored procedures has left the organization. I am now taking over his work, and I have to make those same stored procedures run fast, or at least work as before and return the same results. What steps do we need to follow? In other words, it is like picking up someone's unfinished work.
Stored procedures are notoriously hard to maintain. I would start by writing unit tests - this could involve setting up a dedicated test environment, with "known good" data. Figure out the major logic branches in the procs, and write unit tests to cover those cases. This should make you more familiar with the code.
Once you have unit tests, you can work on optimization (if I've understood your question, you're trying to improve performance). If your performance optimization involves changing the procs, the unit tests will tell you if you've changed the behaviour of the code.
Make sure you keep the unit tests up to date, so that when you leave, the next person doesn't face the same challenge!
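A minimal sketch of one such test in plain T-SQL, assuming a dedicated test database, a hypothetical dbo.usp_MonthlyTotals procedure, and a hand-computed expected result:

```sql
-- Seed the test environment with a small, known-good data set (hypothetical schema).
INSERT INTO dbo.Orders (OrderId, CustomerId, Amount, OrderDate)
VALUES (1, 100, 50.00, '2020-01-15'),
       (2, 100, 25.00, '2020-01-20'),
       (3, 200, 10.00, '2020-02-01');

-- Capture the procedure's output.
CREATE TABLE #actual (CustomerId INT, MonthTotal MONEY);
INSERT INTO #actual
EXEC dbo.usp_MonthlyTotals @Year = 2020, @Month = 1;

-- Compare against the expected results: both EXCEPT queries should return no rows.
CREATE TABLE #expected (CustomerId INT, MonthTotal MONEY);
INSERT INTO #expected VALUES (100, 75.00);

SELECT * FROM #expected EXCEPT SELECT * FROM #actual;
SELECT * FROM #actual   EXCEPT SELECT * FROM #expected;
```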
First, look at the execution plan of the stored procedure. Make sure you understand why the SQL Server query optimizer chose this plan over another, which indexes it used and why, how statistics work, and so on (a sketch of capturing the plans follows the steps below).
Then, make it better.
These are the steps you need to follow:
Understand what's being done
Make it better
Repeat.
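For example, in SQL Server you can capture both the estimated and the actual plan from a query window; dbo.usp_ComplexProc is a hypothetical stand-in for the inherited procedure:

```sql
-- Estimated plan: compiles the batch and returns the plan without executing it.
SET SHOWPLAN_XML ON;
GO
EXEC dbo.usp_ComplexProc @SomeParam = 42;
GO
SET SHOWPLAN_XML OFF;
GO

-- Actual plan plus runtime I/O statistics: this executes the procedure.
SET STATISTICS XML ON;
SET STATISTICS IO ON;
EXEC dbo.usp_ComplexProc @SomeParam = 42;
SET STATISTICS XML OFF;
SET STATISTICS IO OFF;
```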

How to improve the performance of the package [closed]

I have been asked to improve a package's performance without affecting its functionality. How do I start with optimisation? Any suggestions?
In order to optimize PL/SQL programs you need to know where they spend time during execution.
Oracle provides two tools for profiling PL/SQL. The first is DBMS_PROFILER. Running a packaged procedure in a profiler session gives us a breakdown of each program line executed and how much time was spent on each line. This indicates where the bottlenecks are: we need to focus on the lines which consume the most time. We can only use it on our own packages, but it writes to database tables, so it is easy to use.
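A minimal profiling session might look like this; my_pkg.slow_proc is a hypothetical placeholder, and the plsql_profiler_* tables must already exist (they are typically created by running proftab.sql):

```sql
-- Profile one run of the procedure under investigation.
BEGIN
  DBMS_PROFILER.START_PROFILER(run_comment => 'baseline run');
  my_pkg.slow_proc;  -- hypothetical: the code being profiled
  DBMS_PROFILER.STOP_PROFILER;
END;
/

-- The ten lines that consumed the most time (TOTAL_TIME is in nanoseconds).
SELECT *
FROM  (SELECT u.unit_name, d.line#, d.total_occur,
              ROUND(d.total_time / 1e9, 3) AS seconds
       FROM   plsql_profiler_units u
       JOIN   plsql_profiler_data  d
              ON  d.runid = u.runid
              AND d.unit_number = u.unit_number
       ORDER  BY d.total_time DESC)
WHERE ROWNUM <= 10;
```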
In 11g, Oracle also gave us the Hierarchical Profiler, DBMS_HPROF. It does something similar, but it allows us to drill down into the performance of dependencies in other schemas, which can be very useful if your application has lots of schemas. The snag is that the Hierarchical Profiler writes to files and uses external tables; some places are funny about database applications writing to the OS file system.
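A comparable Hierarchical Profiler run, assuming an Oracle directory object PLSHPROF_DIR and the analysis tables created by dbmshptab.sql, with the same hypothetical procedure:

```sql
DECLARE
  l_runid NUMBER;
BEGIN
  DBMS_HPROF.START_PROFILING(location => 'PLSHPROF_DIR', filename => 'run.trc');
  my_pkg.slow_proc;  -- hypothetical: the code being profiled
  DBMS_HPROF.STOP_PROFILING;
  -- Load the trace file into the dbmshp_* tables for querying.
  l_runid := DBMS_HPROF.ANALYZE(location => 'PLSHPROF_DIR', filename => 'run.trc');
  DBMS_OUTPUT.PUT_LINE('run id: ' || l_runid);
END;
/
```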
Once you have your profiles you know where you need to start tuning. The PL/SQL Guide has a whole chapter on Tuning and Optimization. Check it out.
" without affecting the functionality."
Depending on what the bottlenecks are, you may need to rewrite some code. To safely change the internal workings of PL/SQL without affecting the external functionality (same outcome for the same input), you need a comprehensive set of unit tests. If you don't have these already, you will need to write them first.
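One cheap way to bootstrap those tests is to snapshot the current output and diff against it after every change; here report_results is a hypothetical table populated by the procedure under test:

```sql
-- Snapshot today's ("known good") output before touching the code.
CREATE TABLE report_baseline AS SELECT * FROM report_results;

-- After each rewrite, re-run the procedure and check both directions;
-- each MINUS should return zero rows if the behaviour is unchanged.
SELECT * FROM report_results  MINUS SELECT * FROM report_baseline;
SELECT * FROM report_baseline MINUS SELECT * FROM report_results;
```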

Should I create a counter column? [closed]

Optimization was never one of my areas of expertise. I have a users table, and every user has many followers. Now I'm wondering whether I should use a counter column, in case some user has a million followers. Instead of counting over a whole table of relations, shouldn't I just read a counter?
I'm working with SQL database.
Update 1
Right now I'm only planning how I should build my site; I haven't written the code yet. I don't know whether I'll have slow performance, and that's why I'm asking.
You should certainly not introduce a counter right away. The counter is redundant data and it will complicate everything. You will have to master the additional complexity and it'll slow down the development process.
Better to start with a normalized model and see how it works. If you really run into performance problems, solve them then.
Remember: premature optimization is the root of all evil.
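A sketch of that normalized starting point (table and column names are illustrative); with an index on the followed side, the count is an index range scan and stays fast even for a user with a million followers:

```sql
-- One row per follow relationship: the single source of truth.
CREATE TABLE follows (
  follower_id BIGINT NOT NULL,
  followee_id BIGINT NOT NULL,
  PRIMARY KEY (follower_id, followee_id)
);

-- Lets the count below be answered from the index alone.
CREATE INDEX ix_follows_followee ON follows (followee_id);

-- How many followers does user 42 have?
SELECT COUNT(*) FROM follows WHERE followee_id = 42;
```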
It's generally good practice to avoid duplicating data, such as storing a summary of one table's data in another table.
It depends on what this is for. If this is for reporting, speed is usually not an issue and you can use a join.
If it has to do with the application and you're running into performance issues with a join or a computed column, you may want to consider a summary table generated on a schedule.
If you're not seeing a performance issue, leave it alone.
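If measurement ever shows the live COUNT(*) to be a real bottleneck, a scheduled summary table is one option (names again illustrative, reusing the follows table from the sketch above):

```sql
-- Denormalized summary, rebuilt by a scheduled job (e.g. nightly).
CREATE TABLE follower_counts (
  followee_id    BIGINT PRIMARY KEY,
  follower_count BIGINT NOT NULL
);

-- The refresh step: recompute everything from the source of truth.
TRUNCATE TABLE follower_counts;
INSERT INTO follower_counts (followee_id, follower_count)
SELECT followee_id, COUNT(*)
FROM   follows
GROUP  BY followee_id;
```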

Why would a Scripting language be made 'purposefully Turing non-complete'? [closed]

So, I was reading about Bitcoin Script in the official documentation and found this line: "Script is simple, stack-based, and processed from left to right. It is purposefully not Turing-complete, with no loops." I tried hard to reason about it, but couldn't understand why someone would make a language "purposefully not Turing-complete". What is the reason for this? What happens if a language becomes Turing-complete?
And, extending further, does "with no loops" have anything to do with the script being non-Turing-complete?
Possible reasons:
Security: with no loops, a program will always terminate, so a user can't hang the interpreter. If, in addition, there is a limit on the size of a script, you can enforce pretty restrictive time constraints. Another example of a language without loops is Google search queries: if Google allowed loops in queries, users would be able to kill their servers.
Simplicity: no loops makes a language much easier to read and write for non-programmers.
No need: if there is no business need for it, why bother?
The main reason is that Bitcoin scripts are executed by all miners when processing/validating transactions, and we don't want them to get stuck in an infinite loop.
Another reason is that, according to this message from Mike Hearn, Bitcoin Script was an afterthought of Satoshi's, an attempt to incorporate a few types of transactions he had had in mind. This might explain why it is not so well designed and has little expressiveness.
Ethereum has a different approach by allowing arbitrary loops but making the user pay for execution steps.

What is the difference between software testing and software inspection? [closed]

So I've read some reports about both of these methods, but I can't really grasp the difference between the two.
If anyone could sum it up for me or try to explain it I'd be ever so grateful!
BR, Fredrik
It is similar to a car: if you test it, you usually drive it around or at least turn it on. If you inspect it, you usually check the fluids, maybe pull a spark plug, connect it to a computer to check its settings, and fiddle with buttons and switches to make sure everything is connected. During an inspection you may test the vehicle, but during a test you do not always inspect it.
Software testing is useful because it lets you use a mock-up of a production environment to see whether there are bugs or errors, which either throw exceptions or cause logical errors such as leaving relationships in an inconsistent state.
Software inspection is more involved. It can include testing, but it can also include code review to make sure that efficient processes are used and that readability and maintainability are adequate. It helps ensure that features are properly decoupled, that the program runs as fast as possible, and that nothing undesirable is going on behind the scenes.