Guidelines for GIS Application Testing

I am a software tester by profession and have worked with various technologies to date. I have just been given a new assignment: a GIS application. I am not aware of how to test a GIS application, what parameters should be considered while testing, etc.
I would really appreciate it if anyone could help me out with some guidelines for testing GIS applications.
Thank you in advance. :)

Ashok, you may well have turned into a GIS testing expert in the time since this question was asked, but let me try to answer anyway. :)
I would focus on what the app should do with geometries:
Does it take the correct types of geometry into account, and does it ignore the incorrect ones?
If the app builds its own geometries based on the original geometries, I would try topologies that make this problematic. Say the app should draw a geometry 5 px to the left of some original geometry, parallel to it. I would try a loop less than 10 px in diameter, so that there is no room 5 px to the left. And so on; a sketch of such fixtures follows below.
I would also test with huge volumes of data: what happens if the app tries to consume a worldwide network of such geometries?
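To make those geometry cases concrete, here is a minimal Python sketch using the shapely library (the 5-unit offset and the fixture geometries are invented for illustration, not taken from any particular app):

```python
# Minimal GIS edge-case fixtures using shapely (pip install shapely).
# The offset distance and geometries are illustrative assumptions.
from shapely.geometry import LineString, Point, Polygon

OFFSET = 5.0  # the "5 px left" from the example above

fixtures = {
    "valid_line": LineString([(0, 0), (100, 0)]),
    "wrong_type": Point(0, 0),                                      # app should ignore
    "self_intersecting": Polygon([(0, 0), (10, 10), (10, 0), (0, 10)]),  # bow-tie
    # A loop smaller than 2 * OFFSET in diameter: no room for a parallel
    # geometry 5 units inside it.
    "tiny_loop": Point(0, 0).buffer(4.0).exterior,
}

print(fixtures["self_intersecting"].is_valid)  # False: app should reject or repair

# Offsetting the tiny loop inward by OFFSET collapses it to nothing --
# exactly the degenerate case worth testing.
shrunk = Polygon(fixtures["tiny_loop"]).buffer(-OFFSET)
print(shrunk.is_empty)  # True: the app must handle this without crashing
```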

Related

Regrounding Zero Based ColumnSeries in Apache/Adobe Flex

I have tweeted an image illustrating the problem with Flex ColumnSeries on a PlotChart when trying to overlay one on top of another.
Essentially, it can display one series all right, and two or more are OK on initialization, but after a bit of manipulation in the user session, the columns lose their sense of where zero is and begin to float (these series have no minField, so zero is their starting point). FWIW: the axis for these columns is on the right, but that can change depending on the type of data displayed.
The app this is for allows users to turn multiple series of multiple plotting styles on and off, change visual parameters, and even the order in which the series stack on top of each other -- just to give you an idea of what's going on.
Due to how dynamic this all is, I am doing most of the code in ActionScript.
So the questions are:
Is this fixable? Googling around has provided no insights, no matter how I phrase the inquiry.
Is there a refresh function or equivalent within PlotChart/CartesianCharts that may help?
Might this be a problem not with the chart canvas, but with the axis the series points to? Or with the series itself?
If it has not been made clear already: I am lost on this. The issue, which I have known about for about a year now, was first discovered in a beta version of the app I am working on; back then it took a while to surface in an average user session. As the complexity of the app has grown (by client demand), the issue takes a lot less time to surface.
The issue also occurs on every version of Flex I have used: 4.5, 4.6, 4.9, etc.
Please help, or offer pointers. Thanks!

region monitoring accuracy in iOS 5/6, late 2012

I am trying to use region monitoring in my app, with accuracy on the order of knowing which building in a given city area the user is in.
Reading through other articles here on region monitoring, I have found a bunch of conflicting claims about the accuracy of the system. Now, at the end of 2012, what is the accuracy like?
From my own testing, it seems to be checking me into locations that are a few dozen meters away from where I actually am, which is too coarse for my needs. I need to know whether this is an issue with region monitoring itself or just with my implementation.
Thanks, and I hope this question isn't too much of a repetition of other ones, but the dates of those questions and responses make getting a current answer confusing.

HTML5 Canvas for falling word game

I want to develop a game with the following content:
1. The user will log in.
2. The user will be presented with the letters of a word falling from the sky, and will be required to complete the word before they hit the bottom.
3. The words will be pulled from a database.
4. The reward points gathered by the user on completion of the task will be converted to a corresponding "mobile recharge top-up" and sent to the user's mobile.
I was planning to do this in HTML5 using the Canvas element. Could you let me know if this is possible?
I have studied 5 mobile recharge API services, but none of them are satisfactory so far. Any direction there?
To give you an idea of my expertise with this, I am totally new to web programming. I have been a systems programmer before, and need to develop this to assist in a research project studying the economic incentives of attracting low-income workers to spend time on the web, if enough incentive is provided.
I sincerely appreciate your time and help.
Thank you,
Mrunal
This is what I have found.
It is a half-baked script:
http://www.javascriptsource.com/games/falling-by-tim-withers-120409100502.html

UV-vis detector [Hardware]

This is not a hundred percent programming-related question, but I was not able to find an answer on the net.
Is there some kind of detector to record the frequency/intensity of a light radiation source? Something like a spectroscopy detector, but instead of an actual machine, just a module that can be integrated into a project. I have tried searching on Google, but I do not even know what such a device is called.
If you know a more appropriate place to ask, could you let me know, please?
Thank you
As far as I know, no sensor exists to directly measure the frequency of a visible light source.
The final detector of automated spectroscopes is generally a (typically linear) CCD.
The other parts of the spectroscope disperse the light into a rainbow-like spectrum, so reddish photons hit pixels towards one end of the CCD, and bluish photons hit pixels towards the other end of the CCD.
If you only want to discriminate a few frequency bands (rather than a high-resolution spectrum of hundreds of frequency bands), then things are much simpler -- you can either use a few color filters, or you can use a few LEDs of different colors.
"Think Small Revisited: Handheld Spectroscopy" by John Coates 2007
"Handheld Spectroscopy" from Ocean Optics
Andrey, did you consider asking the group of W.S. Jenks at ISU? They might have a portable Ocean Optics spectrometer or know somebody who has one.
To get back on topic: these nice devices can be controlled through a Java-based framework.

What are the most useful software development metrics? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
I would like to track metrics that can be used to improve my team’s software development process, improve time estimates, and detect special case variations that need to be addressed during the project execution.
Please limit each answer to a single metric, describe how to use it, and vote up the good answers.
ROI.
The total amount of revenue brought in by the software minus the total cost of producing the software. Break down the costs by percentage of total cost and isolate your poorest-performing and most expensive area in terms of return on investment. Improve, automate, or eliminate that problem area if possible. Conversely, find your highest return-on-investment area and find ways to amplify its effects even further. If 80% of your ROI comes from 20% of your cost or effort, expand that particular area and minimize the rest by comparison.
Costs will include payroll, licenses, legal fees, hardware, office equipment, marketing, production, distribution, and support. This can be done on a macro level for a company as a whole or on a micro level for a team or individual. It can also be applied to time, tasks, and methods in addition to revenue.
This doesn't mean ignore all the details, but find a way to quantify everything and then concentrate on the areas that yield the best (objective) results.
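As a toy illustration of that breakdown (all figures invented, and attributing revenue to a single cost area is a simplification), a sketch in Python:

```python
# Toy ROI breakdown; every number here is made up for illustration.
costs = {"payroll": 500_000, "licenses": 40_000, "marketing": 120_000,
         "support": 90_000, "hardware": 50_000}
revenue_by_area = {"payroll": 900_000, "licenses": 30_000, "marketing": 200_000,
                   "support": 150_000, "hardware": 60_000}

total_cost = sum(costs.values())
for area, cost in sorted(costs.items(), key=lambda kv: kv[1], reverse=True):
    roi = (revenue_by_area[area] - cost) / cost
    print(f"{area:10s} {cost / total_cost:6.1%} of cost, ROI {roi:+.0%}")
```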
Inverse code coverage
Get the percentage of code not executed during a test. This is similar to what Shafa mentioned, but the usage is different. If a line of code is run during testing, then we know it might be tested. But if a line of code has not been run, then we know for sure that it has not been tested. Targeting these areas for unit testing will improve quality and takes less time than auditing the code that has been covered. Ideally you can do both, but that never seems to happen.
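A minimal sketch of this inversion, assuming coverage.py's JSON report format (produced by `coverage json`); the field names are from that format and worth double-checking against your version:

```python
# Invert a coverage.py JSON report to list files with the most UNTESTED
# lines first. Field names assume the coverage.py 5.x+ JSON format.
import json

with open("coverage.json") as f:
    report = json.load(f)

worst = []
for path, data in report["files"].items():
    missing = len(data["missing_lines"])
    total = missing + len(data["executed_lines"])
    if total:
        worst.append((missing / total, missing, path))

for frac, missing, path in sorted(worst, reverse=True)[:10]:
    print(f"{frac:6.1%} untested ({missing} lines): {path}")
```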
"improve my team’s software development process": Defect Find and Fix Rates
This relates to the number of defects or bugs raised against the number of fixes which have been committed or verified.
I'd have to say this is one of the really important metrics because it gives you two things:
1. Code churn. How much code is being changed on a daily/weekly basis (which is important when you are trying to stabilize for a release), and,
2. Shows you whether defects are ahead of fixes or vice-versa. This shows you how well the development team is responding to defects raised by the QA/testers.
A low fix rate indicates the team is busy working on other things (features perhaps). If the bug count is high, you might need to get developers to address some of the defects.
A low find rate indicates either your solution is brilliant and almost bug free, or the QA team have been blocked or have another focus.
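A sketch of how the find and fix rates might be tallied week by week; the event records are invented stand-ins for whatever your bug tracker exports:

```python
# Weekly defect find vs. fix rates from dated events (invented data).
from collections import Counter
from datetime import date

events = [  # (date, "found" | "fixed")
    (date(2024, 1, 1), "found"), (date(2024, 1, 2), "found"),
    (date(2024, 1, 3), "fixed"), (date(2024, 1, 9), "found"),
    (date(2024, 1, 10), "fixed"), (date(2024, 1, 11), "fixed"),
]

weekly = Counter((d.isocalendar()[:2], kind) for d, kind in events)
for (year, week) in sorted({wk for (wk, _) in weekly}):
    found = weekly[((year, week), "found")]
    fixed = weekly[((year, week), "fixed")]
    trend = "fixes keeping up" if fixed >= found else "defects pulling ahead"
    print(f"{year}-W{week:02d}: found {found}, fixed {fixed} -> {trend}")
```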
Track how long it takes to do a task that has an estimate against it. If it comes in well under the estimate, question why. If it runs well over, question why.
Don't make it a negative thing; it's fine if tasks blow out or are way underestimated. Your goal is to continually improve your estimation process.
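For example, a small sketch that flags tasks whose actuals diverge sharply from their estimates (the 1.5x/0.5x thresholds and the task data are arbitrary):

```python
# Flag tasks whose actual hours diverge sharply from the estimate.
tasks = [("login page", 8, 7), ("report export", 5, 14), ("db migration", 16, 6)]

for name, estimated_h, actual_h in tasks:
    ratio = actual_h / estimated_h
    if ratio > 1.5 or ratio < 0.5:
        print(f"{name}: estimated {estimated_h}h, took {actual_h}h "
              f"(x{ratio:.1f}) -- worth asking why")
```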
Track the source and type of bugs that you find.
The bug source represents the phase of development in which the bug was introduced (e.g. specification, design, implementation, etc.).
The bug type is the broad style of bug, e.g. memory allocation, incorrect conditional.
This should allow you to alter the procedures you follow in that phase of development and to tune your coding style guide to try to eliminate over represented bug types.
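A sketch of the tally, with an invented bug list; in practice you would record these two fields in your tracker:

```python
# Tally bugs by the phase that introduced them and by broad type.
from collections import Counter

bugs = [("design", "incorrect conditional"), ("implementation", "memory allocation"),
        ("implementation", "incorrect conditional"), ("specification", "missing case")]

by_source = Counter(src for src, _ in bugs)
by_type = Counter(kind for _, kind in bugs)
print("By source:", by_source.most_common())
print("By type:  ", by_type.most_common())
```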
Velocity: the number of features per given unit time.
Up to you to determine how you define features, but they should be roughly the same order of magnitude otherwise velocity is less useful. For instance, you may classify your features by stories or use cases. These should be broken down so that they are all roughly the same size. Every iteration, figure out how many stories (use-cases) got implemented (completed). The average number of features/iteration is your velocity. Once you know your velocity based on your feature unit you can use it to help estimate how long it will take to complete new projects based on their features.
[EDIT] Alternatively, you can assign a weight like function points or story points to each story as a measure of complexity, then add up the points for each completed feature and compute velocity in points/iteration.
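A sketch of the velocity calculation and the naive forecast it enables (numbers invented):

```python
# Velocity in story points per iteration, plus a naive backlog forecast.
completed_points = [21, 18, 25, 20]  # points finished in each past iteration

velocity = sum(completed_points) / len(completed_points)
backlog_points = 160
print(f"velocity: {velocity:.1f} points/iteration")
print(f"estimated iterations to clear backlog: {backlog_points / velocity:.1f}")
```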
Track the number of clones (similar code snippets) in the source code.
Get rid of clones by refactoring the code as soon as you spot the clones.
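A crude sketch of clone detection: hash every window of N normalized lines and report windows that occur in more than one place. The window size, the normalization, and the `src` directory are arbitrary choices:

```python
# Crude clone detector over a Python source tree.
from collections import defaultdict
from pathlib import Path

WINDOW = 6  # minimum clone size, in lines

def normalized_lines(path):
    # Strip indentation and blank lines so formatting differences don't hide clones.
    return [ln.strip() for ln in path.read_text().splitlines() if ln.strip()]

seen = defaultdict(list)
for path in Path("src").rglob("*.py"):
    lines = normalized_lines(path)
    for i in range(len(lines) - WINDOW + 1):
        key = hash(tuple(lines[i:i + WINDOW]))
        seen[key].append((str(path), i + 1))

# Overlapping windows of one long clone will each be reported; good enough
# for a first pass.
for locations in seen.values():
    if len(locations) > 1:
        print("possible clone:", locations)
```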
Average function length, or possibly a histogram of function lengths to get a better feel.
The longer a function is, the less obvious its correctness. If the code contains lots of long functions, it's probably a safe bet that there are a few bugs hiding in there.
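A sketch that builds such a histogram for a Python codebase with the standard `ast` module (the bucket width and the `src` directory are arbitrary; `end_lineno` needs Python 3.8+):

```python
# Histogram of function lengths, bucketed by tens of lines.
import ast
from collections import Counter
from pathlib import Path

buckets = Counter()
for path in Path("src").rglob("*.py"):
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            buckets[min(length // 10 * 10, 100)] += 1  # 0-9, 10-19, ..., 100+

for low in sorted(buckets):
    label = f"{low}-{low + 9}" if low < 100 else "100+"
    print(f"{label:>7} lines: {'#' * buckets[low]}")
```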
Number of failing tests or broken builds per commit.
Interdependency between classes: how tightly your code is coupled.
Track whether a piece of source has undergone review and, if so, what type. And later, track the number of bugs found in reviewed vs. unreviewed code.
This will allow you to determine how effective your code review process(es) are at finding bugs.
If you're using Scrum, the backlog. How big is it after each sprint? Is it shrinking at a consistent rate? Or is stuff being pushed into the backlog because of (a) stuff that wasn't thought of to begin with ("We need another use case for an audit report that no one thought of, I'll just add it to the backlog.") or (b) not getting stuff done and pushing it into the backlog to meet the date instead of the promised features.
http://cccc.sourceforge.net/
Fan in and Fan out are my favorites.
Fan in:
How many other modules/classes use/know this module
Fan out:
How many other modules does this module use/know
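A sketch with a hand-listed dependency graph; in practice you would extract the edges from imports or your build tool:

```python
# Fan-in and fan-out from a module dependency graph (invented example).
from collections import Counter

depends_on = {  # module -> modules it uses
    "ui": ["core", "net"],
    "net": ["core"],
    "reports": ["core", "net"],
    "core": [],
}

fan_out = {m: len(deps) for m, deps in depends_on.items()}
fan_in = Counter(dep for deps in depends_on.values() for dep in deps)
for m in depends_on:
    print(f"{m:8s} fan-in {fan_in[m]:2d}  fan-out {fan_out[m]:2d}")
```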
improve time estimates
While Joel Spolsky's Evidence-based Scheduling isn't per se a metric, it sounds like exactly what you want. See http://www.joelonsoftware.com/items/2007/10/26.html
I especially like and use the system that Mary Poppendieck recommends. This system is based on three holistic measurements that must be taken as a package (so no, I'm not going to provide 3 answers):
Cycle time
From product concept to first release or
From feature request to feature deployment or
From bug detection to resolution
Business Case Realization (without this, everything else is irrelevant)
P&L or
ROI or
Goal of investment
Customer Satisfaction
e.g. Net Promoter Score
I don't need anything more to know whether we are in phase with the ultimate goal: providing value to users, and fast.
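A sketch of two of these measurements, cycle time and Net Promoter Score, over invented data (the NPS formula, promoters minus detractors, is the standard one):

```python
# Cycle time (request -> deployment) and Net Promoter Score; data invented.
from datetime import date
from statistics import median

cycles = [  # (feature requested, feature deployed)
    (date(2024, 1, 2), date(2024, 1, 20)),
    (date(2024, 1, 5), date(2024, 2, 15)),
    (date(2024, 1, 9), date(2024, 1, 25)),
]
days = [(done - start).days for start, done in cycles]
print(f"median cycle time: {median(days)} days (worst: {max(days)})")

scores = [10, 9, 8, 6, 9, 3, 10, 7]  # 0-10 survey answers
promoters = sum(s >= 9 for s in scores) / len(scores)   # 9s and 10s
detractors = sum(s <= 6 for s in scores) / len(scores)  # 0 through 6
print(f"NPS: {100 * (promoters - detractors):+.0f}")
```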
Number of similar lines (copy/pasted code).
improve my team’s software development process
It is important to understand that metrics can do nothing to improve your team's software development process. All they can be used for is measuring how well you are advancing toward improving your development process with regard to the particular metric you are using. Perhaps I am quibbling over semantics, but the way you are expressing it is why most developers hate metrics. It sounds like you are trying to use metrics to drive a result instead of using metrics to measure the result.
To put it another way, would you rather have 100% code coverage and lousy unit tests or fantastic unit tests and < 80% coverage?
Your answer should be the latter. You could even want the perfect world and have both but you better focus on the unit tests first and let the coverage get there when it does.
Most of the aforementioned metrics are interesting but won't help you improve team performance. The problem is that you're asking a management question in a development forum.
Here are a few metrics: estimates vs. actuals at the project-schedule level and the personal level (see the previous link to Joel's evidence-based method), % defects removed at release (see my blog: http://redrockresearch.org/?p=58), scope creep per month, and overall productivity rating (Putnam's productivity index). Developers' bandwidth is also good to measure.
Every time a bug is reported by the QA team, analyze why that defect escaped the developers' unit testing.
Consider this a perpetual self-improvement exercise.
I like the Defect Resolution Efficiency metric. DRE is the ratio of defects resolved prior to software release to all defects found. I suggest tracking this metric for each release of your software into production.
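As a formula it is tiny; a sketch with invented counts:

```python
# Defect Resolution Efficiency for one release, as defined above.
resolved_before_release = 86
found_after_release = 9

dre = resolved_before_release / (resolved_before_release + found_after_release)
print(f"DRE: {dre:.1%}")  # the closer to 100%, the better
```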
Tracking metrics in QA has been a fundamental activity for quite some time now. But often, development teams do not fully consider how relevant these metrics are to the business as a whole. For example, typical tracked metrics such as defect ratios, validity, test productivity, code coverage, etc. are usually evaluated in terms of the functional aspects of the software, but few pay attention to how they matter to the business aspects of the software.
There are also other metrics that can add much value to the business aspects of the software, which is very important when taking an overall quality view of the software. These can be broadly classified into:
Needs of the beta users captured by business analysts, marketing and sales folks
End-user requirements defined by the product management team
Ensuring availability of the software at peak loads and ability of the software to integrate with enterprise IT systems
Support for high-volume transactions
Security aspects depending on the industry that the software serves
Availability of must-have and nice-to-have features in comparison to the competition
And a few more….
Code coverage percentage
If you're using Scrum, you want to know how each day's Scrum went. Are people getting done what they said they'd get done?
Personally, I'm bad at it. I chronically run over on my dailies.
Perhaps you can test CodeHealer
CodeHealer performs an in-depth analysis of source code, looking for problems in the following areas:
Audits: quality control rules such as unused or unreachable code, use of directive names and keywords as identifiers, identifiers hiding others of the same name at a higher scope, and more.
Checks: potential errors such as uninitialised or unreferenced identifiers, dangerous type casting, automatic type conversions, undefined function return values, unused assigned values, and more.
Metrics: quantification of code properties such as cyclomatic complexity, coupling between objects (Data Abstraction Coupling), comment ratio, number of classes, lines of code, and more.
Size and frequency of source control commits.
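A sketch of extracting both from `git log --numstat` (assumes git is on the PATH and the script runs inside the repository):

```python
# Commit frequency and average size from "git log --numstat".
import subprocess
from collections import Counter

log = subprocess.run(
    ["git", "log", "--numstat", "--pretty=format:@%ad", "--date=short"],
    capture_output=True, text=True, check=True,
).stdout

commits_per_day = Counter()
lines_changed = 0
for line in log.splitlines():
    if line.startswith("@"):            # one "@YYYY-MM-DD" line per commit
        commits_per_day[line[1:]] += 1
    elif line.strip():                  # "added<TAB>deleted<TAB>path" lines
        added, deleted, _ = line.split("\t", 2)
        if added != "-":                # binary files report "-"
            lines_changed += int(added) + int(deleted)

total = sum(commits_per_day.values())
print(f"{total} commits over {len(commits_per_day)} active days, "
      f"~{lines_changed / total:.0f} lines changed per commit")
```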