What methodology would you use with a static code analysis tool?
When and where would you run the analysis? How frequently?
How would you integrate it into a continuous build environment: on every daily build, or only nightly?
If I am using them on a new code base, I set them up exactly how I want up front. If I am using them on an existing code base, I enable messages in stages, so that only a particular category of issue is reported on. Once that type of message is cleaned up, I add the next category.
I treat static analysis tools as if they were part of the compiler. Each developer runs them each time they do a build. If possible I would also treat them as I do compiler warnings - as errors. That way code with warnings does not make it onto the build server at all. This has issues if you cannot turn warnings off in specific cases... and warnings should only be turned off by agreement.
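To make the "warnings as errors, turned off only by agreement" idea concrete, here is a minimal Java sketch (the class and ticket reference are invented for this example): the build promotes warnings to errors (for instance with javac -Xlint:all -Werror), and any exception to that rule is a narrow, documented suppression rather than a global setting.

    import java.util.ArrayList;
    import java.util.List;

    public class LegacyAdapter {

        // Agreed with the team in review (hypothetical ticket TOOL-42):
        // the legacy API returns a raw List, so this one cast is suppressed
        // here rather than disabling the "unchecked" warning project-wide.
        @SuppressWarnings("unchecked")
        public List<String> namesFromLegacyApi(Object rawResult) {
            return (List<String>) rawResult;
        }

        public static void main(String[] args) {
            List<String> names = new ArrayList<>();
            names.add("example");
            System.out.println(new LegacyAdapter().namesFromLegacyApi(names));
        }
    }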
My experience is that, in general, static analysis should be used early in the development process, ideally before unit testing and code check-in. Reports from static analysis can also be used during the code review process. This enables development of robust code by the software developer and in some cases leads to writing code that can be analyzed more accurately by static analysis tools.
The challenge with early use is that software developers must be adequately trained to use static analysis tools and be able to effectively triage the results obtained. That way, they can take concrete steps to improve the quality of the software. Otherwise, flagged issues are ignored and use of static analysis diminishes over time.
In practice, most development organizations use static analysis late in the development process. In these phases, the static analysis tools are used by quality or test engineers. In many cases they are coupled with build systems to produce quality metrics and provide guidance about the safety and reliability of the software. However, if identified issues accumulate and span multiple code components, the probability that all issues will be fixed decreases. Therefore, late use of static analysis generally requires more time and resources to address the identified issues.
It can also be a good idea to establish a code review task (peer code review by another developer) alongside the use of a static-analysis tool, before the source code is checked in to the server. This helps to increase code quality and prevents useless lines of code from becoming useless legacy code one day.
I read a research paper published by IEEE which said that libraries don't change often and hence don't need much regression testing. I wanted someone to verify that statement.
Also, it said that Randoop was earlier developed and evaluated on libraries. Can someone verify that?
[The paper] said that Randoop was earlier developed and evaluated on libraries.
This is largely true not just of Randoop, but of other test generation tools such as ARTOO, Check 'n' Crash, EvoSuite, GRT, QuickCheck, etc.
The paper is "Scaling up automated test generation: Automatically generating maintainable regression unit tests for programs" (ASE 2011). Its problem statement is that test generation tools have often been applied to libraries, which are easier to handle than programs. Its contribution is showing how to extend a test generation tool (Randoop) to programs.
An example of an earlier paper that applied Randoop to libraries is "Feedback-directed random test generation" (ICSE 2007). It reports finding dozens of important, previously-unknown errors.
I read a research paper published by IEEE which said that libraries don't change often and hence don't need much regression testing.
The paper does not say libraries "don't need much regression testing". It actually says, "A library is less likely to be in need of a regression test suite. Libraries rarely change, and a library is likely to already have some tests." The main point is that the Randoop tool generates tests, and such a tool is more needed for components that don't have tests. As a general rule, libraries usually already have a human-written test suite. The library is also exercised by every program that uses it. By contrast, many programs exist that don't have a test suite, or whose test suite omits large parts of the program's behavior. Test generation is more needed for such components.
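For illustration, this is the flavour of JUnit regression test such generation tools emit: it simply pins down currently observed behaviour so that a later change which alters it makes the test fail. The class name and scenario below are invented for this sketch, not actual Randoop output.

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import java.util.ArrayList;
    import org.junit.Test;

    public class RegressionTest0 {

        // Captures the currently observed behaviour of java.util.ArrayList
        // so that a future change which alters it is flagged by the suite.
        @Test
        public void test001() {
            ArrayList<String> list = new ArrayList<>();
            boolean added = list.add("hi");
            assertTrue(added);
            assertEquals(1, list.size());
            assertEquals("hi", list.get(0));
        }
    }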
This is point #5 near the end of a list of 6 reasons to motivate extending Randoop to programs. The comment makes sense in that context but not when taken out of context or misquoted. The list starts with,
Randoop was originally targeted toward detecting existing bugs in data structure libraries such as the JDK's java.util. Instead, we want to extend Randoop to generate maintainable regression tests for complex industrial software systems.
Data structure libraries tend to be easier for tools to handle in several ways. ...
Returning to one of your questions, every software component -- whether a program or a library -- needs a regression test suite to be run when it changes. Running the tests gives you confidence that your changes have not broken its functionality. If you never change a component, then you don't need a regression test suite for it.
Some libraries never change (because of policy, or because there is no need to change them), and others are being constantly updated.
We are a small company and I am a test coordinator appointed to put a testing process in place for the company.
We don't have a testing process at the moment. Development, deployment and testing happen almost daily, and communication takes place over Skype or email.
How do I start to put a testing process in place?
We have operations running in 8 different countries and we don't have a dedicated testing team; the business users are the only testers we have.
It is crucial for me to get them all testing when required.
So how do I bring about that change in the way they work?
Any suggestions or help would be appreciated.
I think the best approach to this change is to show the value of testing to your managers.
Without a well-organized test process, bugs are found only eventually, by chance. A single crucial issue found by your customer rather than by you can have a huge impact on the company's business. You can either wait for that to happen, or start building the test group now.
It is also well established that finding bugs as early as possible saves the organization a lot of money, mostly because fixing an issue close to the time it was introduced takes much less time.
I would recommend Jira as a tool that allows you to organize bug tracking and also supports an agile development process.
I would suggest considering Comindware Tracker, a workflow automation tool. It executes the processes you create automatically by assigning tasks to the right team member, only after the previous step in the workflow is completed. Furthermore, you can create forms visually, set your own workflow rules and have your data processed automatically. You can configure Comindware Tracker to send e-mail notifications to users when a particular event occurs with a task or document, or to send scheduled e-mail reports. Discussion threads are available within every task. You can share a document with a team and it will be stored within the task; document versioning is supported.
Perhaps the key reason why a small company just starting to optimize workflows should consider Comindware Tracker is its ability to change workflows in real time during process execution, without the need to interrupt it. As you are likely to make plenty of changes during your starting phase, this solution is worth your attention. This product review might be useful - http://www.brighthubpm.com/software-reviews-tips/127913-comindware-tracker-review/
Disclaimer – I work at Comindware. We use Comindware Tracker to manage workflows within our company. I will be glad to answer any questions about the solution, should any arise.
If you are looking to release frequently then you should consider using automated regression testing.
This would involve having an automated test for every bit of significant functionality in your applications. In addition, when new functionality is being developed an automated regression test would be written at the same time.
The benefit of the automated regression test approach is that you can get the regression tests running in continuous integration. This allows you to continuously regression test and uncover any regression bugs soon after the code is written.
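As a sketch of what one such automated regression test could look like with JUnit and Selenium WebDriver (the URL, element ids and credentials below are hypothetical):

    import static org.junit.Assert.assertEquals;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class LoginRegressionTest {
        private WebDriver driver;

        @Before
        public void setUp() {
            driver = new ChromeDriver();
        }

        // Exercises one piece of significant functionality end to end,
        // so CI catches a regression soon after the code is written.
        @Test
        public void loginShowsDashboard() {
            driver.get("https://test.example.com/login"); // hypothetical test URL
            driver.findElement(By.id("username")).sendKeys("qa.user");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("login-button")).click();
            assertEquals("Dashboard", driver.getTitle());
        }

        @After
        public void tearDown() {
            driver.quit();
        }
    }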
Manual regression testing is very difficult to sustain. As you add more and more functionality to the applications the manual regression testing takes longer and makes it very difficult to release frequently. It also means the time spent testing will continually increase.
If your organisation decides not to go with test automation then I would suggest you need to create a delivery pipeline that includes a manual regression testing phase. You might want to consider using an agile framework such as Kanban for this (which typically works well with frequent releases).
This is an interview question:
Software crashes in production environment, no access to debugger. What steps would you do to solve the problem short term? Long term? What would you do to prevent it from happening? What tools would you use?
My ideas:
Short term:
Check the program's log file generated by the OS, which may give some signals about the crash.
Narrow down the file where the program crashes by adding some print statements.
Add try-catch blocks in the likely locations.
Find the reason.
Long-term:
Check the whole program design idea, algorithm/data structure usage, to make sure that they are used correctly and suitably.
Test it with different cases that have caused crashes to find the underlying causes.
Tools: GDB, the Valgrind family, gprof.
Any better ideas or solutions?
Short Term
1. The absolute first thing to do is work out what was done to generate the problem and try and reproduce it. If you can do that, you can now track it down in a debugged environment.
2. If it is not reproducible, you need to look through all the information you collected in step one (which will include any logging) and see if you can see a possible problem.
3. If the problem has not been found, you will need to add logging, and lots of it. This is where a "DEBUG" logging setting comes in handy (a minimal sketch follows this list). It will probably slow down the system, and may even mask the problem (which itself tells you something about the nature of the problem).
4. With the new logging information you can go back to step one. Repeat this until the problem is solved!
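Here is the minimal sketch referred to in step 3: the "DEBUG" logging idea using the JDK's built-in java.util.logging. The service class and message are hypothetical.

    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class OrderService {
        private static final Logger LOG = Logger.getLogger(OrderService.class.getName());

        public void placeOrder(String customerId, double amount) {
            // Cheap guard so the message is only built when the
            // DEBUG/FINE level is actually enabled in production.
            if (LOG.isLoggable(Level.FINE)) {
                LOG.fine("placeOrder called: customerId=" + customerId + ", amount=" + amount);
            }
            // ... business logic ...
        }
    }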
In the long term the most obvious thing to do is make sure you have sufficient logging in place, even if it has to be turned on and off, to catch problems. As well as this, you need to try to beef up the testing effort.
When you have tracked down a problem, it is worth noting the type of problem (race condition, scalability, database access, etc.). This gives you an area to apply more automated and manual tests.
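For example, if a tracked-down problem turned out to be a race condition that lost updates under load, a targeted automated test for that area might look like the following sketch (the counter class stands in for the real component):

    import static org.junit.Assert.assertEquals;

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import org.junit.Test;

    public class HitCounterRaceTest {

        // Stand-in for the component that previously lost updates under load.
        static class HitCounter {
            private int hits;
            synchronized void increment() { hits++; }
            synchronized int total() { return hits; }
        }

        @Test
        public void parallelIncrementsAreNotLost() throws InterruptedException {
            HitCounter counter = new HitCounter();
            ExecutorService pool = Executors.newFixedThreadPool(8);
            for (int i = 0; i < 10_000; i++) {
                pool.execute(counter::increment);
            }
            pool.shutdown();
            pool.awaitTermination(30, TimeUnit.SECONDS);
            // Fails if the fix regresses and increments are lost again.
            assertEquals(10_000, counter.total());
        }
    }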
You have some good initial ideas, here are my comments:
Add logging to your code - you will get very little information from the operating system about your code.
If exceptions can be thrown by methods that you call, you should catch them. Don't let them bubble up to the end user! (See the sketch after this list.)
Run valgrind now, not later
Set up a test environment that simulates your production environment. Start simple, and increase the complexity until you are able to reproduce your issue. You do have a test environment, right?
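Regarding the point above about not letting exceptions bubble up to the end user, here is a minimal Java sketch that installs a last-resort handler and logs the failure instead; the entry point is hypothetical.

    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class Main {
        private static final Logger LOG = Logger.getLogger(Main.class.getName());

        public static void main(String[] args) {
            // Last line of defence: anything that would otherwise reach the
            // user is logged with its stack trace, so the crash leaves a trace.
            Thread.setDefaultUncaughtExceptionHandler((thread, error) ->
                    LOG.log(Level.SEVERE, "Uncaught exception on " + thread.getName(), error));

            runApplication(); // hypothetical entry point
        }

        private static void runApplication() {
            // ... application logic ...
        }
    }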
The very first thing you should do is determine the severity of the problem. This will help to devise your short-term strategy. You will need to have some brief discussions with the major stakeholders in the software (such as the client), or have a project manager do this and report back to you.
In the heat of the moment, this is often the bit overlooked, and rushing a short-term fix almost always means wasting a lot of time not really understanding what needs to be done.
After this, your actual strategy, both long term and short term, is rather dependent on the technology you are using and how it is deployed.
Short term
It is absolutely vital to grab some preliminary information about the crash before attempting to resolve the problem: grab log files, take screenshots, note down system info such as memory/CPU usage, and archive any temporary data that might be useful.
The short-term action should be to get the system up-and-running again, quickly. Some common approaches to short-term solutions:
Try turning it off and on again... Seriously, 90% of the time this will get production running again in the short term, at least until the bug manifests itself again.
Revert to a previous production release, preferably the latest version that was known to work fairly reliably.
Run a second instance on another machine and fail over if the problem occurs again. This has the added bonus that logs and system state are preserved after the last crash occurred.
Long term
In the long term, you will want to properly analyse the information you gathered at the time of failure. Where possible, try to reproduce the problem as closely as you can. Revert your code to the version being deployed (you do use version control tools, right?), and check high-level factors as well as low-level configuration ones, e.g. who was using the system when it crashed? Can they show you what they did?
Debugging and logging may be useful at this stage, and all the usual developer tools such as functional tests and memory profiling tools. A crash could come from a number of sources, from memory protection faults to an unexpected state of a resource. You should compile a list of candidate problems, and cross them off as you gain confidence that they aren't the cause of the crash.
Apart from logging, you can enable the creation of .mdmp files (Windows) or core dumps (Linux) and examine them later. One downside of this approach is that core dumps can be pretty big. Both .mdmp files and core dumps contain the context of the application at the moment the crash occurred.
We are building a large CRM system based on the SalesForce.com cloud. I am trying to put together a test plan for the system but I am unsure how to create system-wide tests. I want to use some behaviour-driven testing techniques for this, but I am not sure how I should apply them to the platform.
For the custom parts we will build in the system, I plan to approach this with either Cucumber or SpecFlow driving Selenium actions on the UI. But for the SalesForce UI customisations, I am not sure how deep to go in testing. Customisations such as Workflows and Validation Rules can encapsulate a lot of complex logic that I feel should be tested.
Writing Selenium tests for this out-of-box functionality in SalesForce seems overly burdensome for the value. Can you share your experiences on System testing with the SalesForce.com platform and how should we approach this?
That is the problem with a detailed test plan up front: you are trying to guess what kinds of errors you will get, how many, and in what areas. This may be tricky.
Maybe you should have an overall master test plan specifying only the test strategy, the main tool set, risks, and the relative amount of testing you want to put into given areas (based on risk).
Then, when you start to work on a given piece of functionality or an iteration (I hope you are doing this in iterations, not waterfall), you prepare a detailed test plan for that set of work. You adjust your tools, estimates and test coverage based on experience from previous parts.
This way you can state your general approach and priorities at the beginning, but let yourself adapt later as the project progresses.
The question of how much testing you need to put into COTS is the same as with any software: you need to evaluate the risk.
If your software needs to be validated because of external regulations (FDA, DoD, ...), you will need to go deep with your tests and test almost the entire app. One problem here may be convincing the external regulator that the tools you used for validation are themselves validated (and that is troublesome).
If your application is mission-critical for your company, then you still need to do a lot of testing based on extensive risk analysis.
If your application is not covered by either of the above, you can go with lighter testing. You can probably skip functionality that was tested by the platform manufacturer and focus on your customisations. On the other hand, I would still write tests (at least happy paths) for the workflows you will be using in your business processes.
When we started learning Selenium testing in 2008, we created the Recruiting application from the SalesForce handbook, built a suite of tests, and described our path step by step on our blog. It may help you get started if you decide to write Selenium code to test your app.
I believe the problem with SalesForce is you have Unit and UI testing, but no Service-level testing. The SpecFlow I've seen which drives Selenium UI is brittle and doesn't encapsulate what I'm after in engineering a service-level test solution:
When I navigate to "/Selenium-Testing-Cookbook-Gundecha-Unmesh/dp/1849515743"
And I click the 'buy now' button
And then I click the 'proceed to checkout' button
That is not the spirit or intent of Specflow.
Given I have not selected a product
When I select Proceed to Checkout
Then ensure I am presented with a message
In order to test that with Selenium, you essentially have to translate it into clicks and typing, whereas in the .NET realm you can instantiate objects in the middle tier and run hundreds of instances and derivations against the same Background (mock setup).
I'm told that you can expose SF through an API at some security risk. I'd love to find out more about THAT.
How are you tracking changes and testing effort for bugs that impact multiple artifacts released separately?
Code sharing is good because it reduces the total number of paths through the code, which means more impact for fewer changes and fewer bugs (or more bugs addressed with fewer changes). For example, we may build a search tool and an indexer that use the same file handling package or model package.
We need to be able to ensure that changes get tested in all the right components and track which changes were included with which released tools. We also don't want to be forced to release the change in all applications at the same time.
Goal: one bug to be tested, scheduled and tracked independently against each released application, with automated systems that understand the architecture guiding us to make the right choices.
Bug Split Release Scenario:
We may release a patch of the search tool that contains a performance fix in a util library. Critical for the search tool, the fix is less visible in the indexer so it can wait until the next maintenance release. We want the one bug to be scheduled-tracked-released with the search patch and deferred for the indexer's next maintenance release.
So, when I create a bug in our tracking system (JIRA) I want it to magically become multiple objects.
a primary issue describing the problem and tracking the development work
a set of tasks that let me track the testing effort and how the issue has been released for each application it impacts.
How can we make the user experience of code sharing low effort to encourage more of it without becoming blind to what changes impacted which releases or forcing people to enter many duplicate bugs?
I'm sure that large-scale projects from Eclipse to Linux distros have faced this kind of problem, and I wonder how they have solved it (I'm going to poke around in them next).
Do any of you have experience with this kind of situation and how have you approached it?
In Jira you can enable sub-tasks and assign them to the main task. You can also enable time tracking on issues so you know how much time each task is taking and what the difference between estimated and actual is.
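If you want the bug to "magically become multiple objects", a small script against Jira's REST API can fan a bug out into one sub-task per affected application. A rough sketch follows, assuming Jira's issue-creation endpoint; the instance URL, project key, parent key and authentication scheme are all hypothetical, and exact field names can differ between Jira versions.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;

    public class SubTaskFanOut {
        // Hypothetical Jira instance and parent bug key.
        private static final String JIRA = "https://jira.example.com";
        private static final String PARENT_KEY = "CORE-123";

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // One testing/release sub-task per application that ships the shared library.
            for (String app : List.of("Search Tool", "Indexer")) {
                String body = """
                    {"fields": {
                       "project":   {"key": "CORE"},
                       "parent":    {"key": "%s"},
                       "issuetype": {"name": "Sub-task"},
                       "summary":   "Verify util fix in %s"
                     }}""".formatted(PARENT_KEY, app);
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create(JIRA + "/rest/api/2/issue"))
                        .header("Content-Type", "application/json")
                        .header("Authorization", "Bearer " + System.getenv("JIRA_TOKEN"))
                        .POST(HttpRequest.BodyPublishers.ofString(body))
                        .build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(app + " -> " + response.statusCode());
            }
        }
    }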
You can also enable versioning so you have a road map of what is being done in the next release with a change log. The problem with the road map is that it is only for one project so you can't have a road map that covers all of your projects.
Finally, you can create your own custom workflows to do almost anything you want to do. I've never tried this because we'd have to learn a new language to do it and the reason we got Jira was to decrease development overhead, not increase it by having to customise our bug tracker - but it is possible.
For Jira, make use of the Affects Version/s and Fix Version/s fields (plus you can add multiple custom fields, such as "verified by QA in versions").