Time estimate for ABAP development

I'm looking for a table or list of standard time estimations for developments in ABAP, something customizable via a few variables according to the development team, the complexity of the project, etc...
Something similar to:
Simple Module Pool -> 10 hours
Complex Module Pool -> 30 hours
Definition of Dictionary -> (0.4 * number_of_tables * average_fields) hours
ALV Report -> (2 * number_of_parameters) hours
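(To make the idea concrete, here is a sketch of the kind of parameterizable table I mean, in Python purely for illustration; the object types, base hours and factors are invented:)

    # Hypothetical estimation table: base hours per development object,
    # scaled by team- and project-level factors. All numbers are invented.
    BASE_HOURS = {
        "simple_module_pool": 10,
        "complex_module_pool": 30,
    }

    def dictionary_hours(number_of_tables, average_fields):
        # Mirrors the made-up formula above.
        return 0.4 * number_of_tables * average_fields

    def alv_report_hours(number_of_parameters):
        return 2 * number_of_parameters

    def estimate(base_hours, team_factor=1.0, complexity_factor=1.0):
        # team_factor < 1.0 for an experienced team, > 1.0 for a new one;
        # complexity_factor scales with project complexity.
        return base_hours * team_factor * complexity_factor

    # e.g. a complex module pool, new team, complex project:
    print(round(estimate(BASE_HOURS["complex_module_pool"], 1.3, 1.5), 1))  # 58.5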
I've searched but haven't found anything yet. I found AboveSoft Adaptive Estimator, which looks like a software tool to do what I need, but I'd prefer something... manual, an official or standard table.
Do you know anything like that?
Thank you in advance.
Updated, as requested in comments by Rob S., to provide more information for future similar questions:
What I'm looking for is a bunch of formulas, any metric system that can be applied to (or even created for) time estimation of SAP development.
I'm looking for a technique/tool/method to estimate SAP work, duration and cost; something similar to COCOMO II, function points, ESTIMACS or SLIM, but for SAP development.
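(For reference, Basic COCOMO, the predecessor of COCOMO II, boils down to a single published formula; a minimal sketch with the standard organic/semi-detached/embedded coefficients, nothing SAP-specific:)

    # Basic COCOMO: effort in person-months = a * KLOC^b, using the
    # published coefficients per project mode (Boehm, 1981).
    COEFFICIENTS = {
        "organic":       (2.4, 1.05),
        "semi-detached": (3.0, 1.12),
        "embedded":      (3.6, 1.20),
    }

    def cocomo_effort(kloc, mode="organic"):
        a, b = COEFFICIENTS[mode]
        return a * kloc ** b

    print(round(cocomo_effort(32, "semi-detached")))  # ~146 person-months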

If I am reading this right, you are looking for something to estimate how long it would take someone to program an application. I doubt an official table actually exists.
Development time is highly variable. Programmer experience, complexity of requirements, clarity of requirements, and dozens of other factors affect how much time development takes. So even if an official table exists, it may not be accurate.

The formulas you made up for illustration purposes in your question are as good as any others - in other words, you are asking for something that is pointless.
The reason is that no formula can account for the truly important variables:
- your team
- your customer
- your environment
- your standards and best practices
all of which will have a much larger drag coefficient than any other terms.
If you want accurate estimates, ask your developers, and track their accuracy.
If you truly think that this sort of thing can be reduced to formulas, please resign as a project manager immediately.

I'm not a project manager, I'm only an intern on an SAP team. Given my experience with other languages, I DO know that there are so many variables that it's impossible to automate an estimation of development time.
But I've been given the task of searching for a "standard table of estimated times" for SAP/ABAP developments and, being a newbie in SAP, I imagined that some metric standardization might exist.
I think my manager has played a rough joke on me...
Sorry for the inconvenience of my question.

You can use Excel, Numbers, or Gantt charts to do it manually, but you won't be able to find ANY automated thing for that; you'll have to do it yourself!

Let me guess... you're a project manager?
There is no "one way" in programming, especially not in the highly specialized world of ABAP.

Hi, I understand your need...
I'm a project manager and estimation specialist, and what you're looking for is a table to estimate the effort of developing ABAP components...
You need a tabulated table where, based on the complexity of the component and the complexity of the change, you can get an effort estimate. (This is based on an estimation method called Object Points: http://yunus.hacettepe.edu.tr/~sencer/objectp.html)
This effort is only for Coding & Unit Testing; you as project manager (or your project manager!) must take this estimate as input, but you need to estimate other project factors to get the complete "project estimation"...
I didn't find this table, or any standard table to benchmark against, so I'm working on a project within my software factory to build our own table...
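As an illustration of the idea only (the component types, complexity levels and hour values below are invented placeholders, not taken from the Object Points material), such a tabulated lookup might look like:

    # Hypothetical effort table: hours for Coding & Unit Testing, indexed
    # by component type and complexity. All values are invented.
    EFFORT_TABLE = {
        ("report", "simple"):       8,
        ("report", "medium"):      16,
        ("report", "complex"):     32,
        ("interface", "simple"):   12,
        ("interface", "medium"):   24,
        ("interface", "complex"):  48,
    }

    def coding_effort(component_type, complexity):
        # Returns only the Coding & Unit Testing slice; management,
        # specifications, integration testing etc. go on top of this.
        return EFFORT_TABLE[(component_type, complexity)]

    print(coding_effort("interface", "medium"))  # 24 hours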
I hope this will be helpful for you...
Regards,

This simply doesn't exist. General metrics are so general that they are useless. Project managers should find other ways to make their lives easier, like resigning, but not try to quantify development as if peeling potatoes.

Try AboveSoft's new tool, named AboveSoft Predictor. You can download it here: www.abovesoft.com
It connects to an SAP system and you can easily (graphically) generate your own estimation templates which are saved in SAP.
Dave.

Related

Tool to track requirements and assumptions

I am leading a fairly massive project right now that is in its idea phase (just getting off the ground) and has more questions than answers today. With all of the uncertainty, our standard methodology for tracking requirements and gathering estimates won't cut it. However, I still have to build a model and get the data that management needs for corporate accounting and budgeting purposes.
I've been asked to simply document the assumptions we're making as a project team, so that the developers and application owners can provide a very high-level estimate for the work, as needed by the business for budgeting purposes...
I need a tool that will also allow assumptions to be tied to high level requirements in a 1 to many relationship so that any changes to an assumption will allow us to identify where more estimation work is required.
Example...
Assumption:
We will operate with a single facility responsible for x, y, and z.
Requirement/Scope:
- This system will need to have an additional facility added.
- This other system will need to be capable of processing x, y, and z.
So at the end of the day, if my assumption changes I want to quickly see that there is an impact on at least 2 of my requirements/scope lines...
I need a tool that will also allow assumptions to be tied to high level requirements in a 1 to many relationship so that any changes to an assumption will allow us to identify where more estimation work is required.
I think this is called "traceability" (for example, Requirements traceability), so include that word in your search terms.
When things are ill-structured, you don't need much of a tool.
http://www.w3.org/2001/tag/doc/leastPower.html
You need a lot of patience and clarity to get from what you have to more formal requirements.
Plain-old word-processing is often best.
Since you want to do estimating, a spreadsheet is about all the structure that the problem can stand right now.
A big old matrix with requirements on one axis and assumptions on the other will allow you to tweak, adjust and assess impact.
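A minimal sketch of that matrix as a data structure, with the 1-to-many mapping from assumptions to requirements, so that a changed assumption immediately flags what needs re-estimation (the IDs and wording are adapted from your example):

    # Hypothetical 1-to-many traceability: assumption -> affected requirements.
    TRACE = {
        "A1: a single facility is responsible for x, y and z": [
            "R1: this system will need to have an additional facility added",
            "R2: this other system must be capable of processing x, y and z",
        ],
    }

    def impacted_requirements(assumption):
        # A changed assumption returns every requirement to re-estimate.
        return TRACE.get(assumption, [])

    for req in impacted_requirements("A1: a single facility is responsible for x, y and z"):
        print("re-estimate:", req)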
If you spend time loading all your questions and answers into some tool, you spend a lot of time playing with the tool -- not the issues. Also, as ideas come and go, you hate to delete the really EPIC FAIL ideas from the tool.
Often, you should feel free to start again "from scratch", discarding the bad ideas.
Write, write and rewrite until the questions, answers and requirements get to a level of manageability.
Then migrate the easy stuff into a more rigid and formal tool. Leave the complex, ill-defined and unfocused stuff in a word processor.
Try out Ultimate Trace. It is free and it provides two-way n-to-m association between any traceables.
I take it you need a cheap and quick solution. There are lots of tools to do this that can cost lots of $$$$. One that I liked was Compuware's test and requirements management suite; TrackRecord, I think, was the name.
A cheaper solution could be MindMapping the requirements. You could tie in the requirements to the many parts of the solution etc.
Another thing you could look into are UML tools.
To clarify: When you start gathering requirements you have stated req'ts and assumptions about various things, including what some requirements should be. At that point, yes, I track them in the same artifact for convenience.
Now, as I said, some of the assumptions are about req'ts, but some, or many, are not. These other assumptions may be about design or they may be about dependencies, to give two examples. I expect all assumptions that are about req'ts to resolve by the time the req'ts are approved. The others I will roll forward into the next artifact where they get resolved in design.
The exception to resolving assumptions is if the "scope" of the assumption goes beyond the project. I've seen one or two assumptions that were so basic and/or difficult to prove that the assumption was an underpinning of the project.
Assumptions don't exist in tandem with specific requirements. Once an assumption has been confirmed and becomes a requirement, the assumption goes away.
I always put assumptions in the same artifact with the requirements. So any tool that tracks requirements can be used to track related assumptions. I've put them in BRDs (Business Requirements Documents), use cases, IBM's RequisitePro...

How do I systematically test and think like a real tester

My friend asked me this question today: how would you test a vending machine, and what are its test cases? I am able to give some test cases, but those are just random thoughts. I want to know how to systematically test a product or a piece of software. There are lots of kinds of tests: unit testing, functional testing, integration testing, stress testing, etc. But I would like to know: how do I systematically test and think like a real tester? Can someone please explain to me how all these kinds of testing can be differentiated and which one can be applied in a real scenario? For example: test a file system.
Even long-time, well respected, professional testers will tell you: It is an art more than a science.
My trick to designing new test cases starts with the various types of tests you mention, and it must include all those to be thorough, but I try to find a list of all the ways I can interact with the code/product.
For the vending machine example, there are tons of parts, inside and out.
Simple testing, as the product is designed to work, gives plenty of cases:
Does it give the correct change?
How fast can it process the request?
What if an item is out of stock?
What if it is overfilled?
What if the change drawer is full?
What if the items are too big, or badly racked?
What if the user puts in too little money?
What if it is out of change?
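To make a few of these concrete, here is a minimal sketch of how they could look as unit tests, assuming a hypothetical, drastically simplified VendingMachine class (none of this is a real API):

    import unittest

    class VendingMachine:
        # Hypothetical machine, reduced to the bare minimum for illustration.
        def __init__(self, stock, change_float):
            self.stock = dict(stock)          # item -> quantity
            self.change_float = change_float  # money available for change

        def buy(self, item, price, paid):
            if self.stock.get(item, 0) == 0:
                raise LookupError("out of stock")
            if paid < price:
                raise ValueError("insufficient money")
            change = paid - price
            if change > self.change_float:
                raise RuntimeError("out of change")
            self.stock[item] -= 1
            self.change_float -= change
            return change

    class VendingMachineTests(unittest.TestCase):
        def test_correct_change(self):
            m = VendingMachine({"cola": 1}, change_float=100)
            self.assertEqual(m.buy("cola", price=80, paid=100), 20)

        def test_out_of_stock(self):
            m = VendingMachine({"cola": 0}, change_float=100)
            self.assertRaises(LookupError, m.buy, "cola", 80, 100)

        def test_too_little_money(self):
            m = VendingMachine({"cola": 1}, change_float=100)
            self.assertRaises(ValueError, m.buy, "cola", 80, 50)

        def test_out_of_change(self):
            m = VendingMachine({"cola": 1}, change_float=10)
            self.assertRaises(RuntimeError, m.buy, "cola", 80, 100)

    if __name__ == "__main__":
        unittest.main()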
Then there are the interesting cases, which normal users wouldn't think about:
What if you try to tip it over
Give it a fake coin
Steal from it
Put a coin in with a string
Give it funny amounts of change
Give it half-ripped bills
Pry it open with a crow-bar
Feed it bad power/brownout
Turn it off in the middle of various operations
The way to think like a tester is to figure out every possible way you can attack it, from all the "funny cases" in usual scenarios to all the methods that are completely outside of how it should be used. Any point of input, including ones you might think the developers/owners have control over, is fair game.
You can also use many automated test tools, such as pairwise test selection, model-based test toolkits, or for software, various stress/load and security tools.
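For pairwise selection specifically, here is a toy sketch of what those tools do: greedily keep test cases until every pair of parameter values appears in at least one case (the parameter model is invented for the vending machine example):

    from itertools import combinations, product

    # Toy parameter model for the vending machine.
    PARAMS = {
        "payment": ["coins", "bill", "exact"],
        "item":    ["in_stock", "out_of_stock"],
        "power":   ["normal", "brownout"],
    }

    def pairwise_suite(params):
        names = list(params)
        # Every (parameter, value) pair combination that must be covered.
        uncovered = {
            ((a, va), (b, vb))
            for a, b in combinations(names, 2)
            for va, vb in product(params[a], params[b])
        }
        suite = []
        for case in product(*params.values()):       # all full combinations
            assignment = dict(zip(names, case))
            covered = {
                ((a, assignment[a]), (b, assignment[b]))
                for a, b in combinations(names, 2)
            }
            if covered & uncovered:                  # keep only useful cases
                suite.append(assignment)
                uncovered -= covered
            if not uncovered:
                break
        return suite

    for case in pairwise_suite(PARAMS):
        print(case)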
I feel like this answer was a good start, but I now realize it was only half of the story.
Coming up with every single way you can possibly test the system is important. You need to learn to stretch the limits of your imagination, your problem decomposition skills, your understanding of chains of functionality/failure, and your domain knowledge about the thing you are testing. This is the point I was attempting to make above. With the right mindset, and with enough vigilance, these skills will start to improve very quickly - within a year, or within a few years (depending on the complexity of the domain).
The second level of becoming a very competent tester is to determine which tests you should care about. You will always be able to break every system, in a ton of different ways. Whether those failures are important or not is a more interesting question, and is often much more difficult to answer. The benefit to answering this question, though, is two-fold.
First, if you know why it is important to fix pieces of the system that break (or to skip fixing them!), then you can understand where you should focus your efforts. You know what you can afford to spend less time testing, and what you must spend more time on.
Second, and more importantly, you will help your team expose where they should be focusing their efforts. You will start to uncover things that are called "second-order unknowns". Your team doesn't know what it doesn't know.
The primary trick that helps you accomplish this is to always ask "why?", until whoever you are asking is stumped.
An example:
Q: Why this test?
A: Because I want to exercise all functionality in the system.
Q: Why does this system function this way?
A: Because of the decisions that the programmer made, based on the product specifications.
Q: Why did our product specifications ask for this?
A: Because the company that we are writing the software for had a requirement that the software works this way.
Q: Why did that company we are contracting for add that as a requirement?
A: Because their users need to do :thing:
Q: Why do the users need to do :thing:?
A: Because they are trying to accomplish :xyz:
Q: Why do they need to accomplish :xyz:?
A: Because they save money by doing :abc:
Q: Why did they choose :xyz: to solve :abc:?
A: ... good question.
Q: What could they do instead?
A: ... now that I think about it, there's a ton of options! Maybe one of them works better?
With practice, you will start knowing which specific "why" questions to ask, and which to focus on. You will also learn to start deeper down the chain, and be less mechanical in your approach.
This is no longer just about ensuring that the product matches the specifications that the dev, pm, customer, or end user provided. It also helps determine if the solution you are providing is the highest quality solution that your team could provide.
A hidden requirement of this is that you must learn that half your job as a tester is to ask questions all the time. You might think that your team mates will be annoyed at this, but hopefully I've shown that it is both crucial to your development, and the quality of the product you are testing. Smart and curious teammates who care about the product (who aren't busy and frustrated) will love your questions.
@brett:
Suppose you have the system you want to test. The main thing that comes into the picture is making sure you have a test scenario or test plan. Once you have that, it becomes very clear how and what to test about the system.
Once you have a test plan, your vision becomes clear regarding what is expected and what is unexpected. For unexpected behavior you can recheck once and file an issue if you think it is not correct. I have given you an answer for the general case; if you have a real-world scenario, it may be really helpful to provide guidelines for that.

SAP Business Objects

I have been offered by my employer to work on SAP Business Objects to analyse a large amount of data they have.
I have the following doubts before I accept:
a. I love programming and do not want to lose touch with it. Do you think working on this tool would excite a person who loves building software? Or is most of the tool configurable through a wizard-like interface?
b. Is this tool capable of working on data collected for research and testing purposes?
I tried googling, but all I could get was some videos which mention "Business Intelligence" more than 12 times a minute. Any suggestion, or even links to help me make the preliminary analysis, would be helpful. Thanks...
Business Objects is not rocket science. A competent developer should be able to figure out how to build a universe in a few days. My first experience took me about two days to figure out how to build a universe and another two days or so to get some analytic reports out of it.
However, 'research data' suggests that the actual structure of the data will vary depending on the nature of the survey so you will probably find yourself constantly making ad-hoc changes or new bespoke universes for each job. Business Objects is probably a reasonably flexible way to do this (a custom universe for a tabular set of research data could probably be set up in a few hours). However, the job would basically devolve to a reporting analyst position.
If you're not a 'tools guy' by nature you will probably find this sort of work unsatisfying. I do full life-cycle work on data warehouse systems and from time to time this involves developing front ends using Business Objects. I'm quite happy to work with it casually as part of a larger job but I wouldn't want a job solely working with just one reporting tool.
If you think of yourself as a programmer I would recommend against accepting the job if it was limited to just working with Business Objects.
I have experience working with Designer and reporting in Business Objects... Honestly, it's quite easy. I have to say I'm a total programmer at heart, and absolutely hate working with it, but that's what possessed me to write a program that uses the DLLs to automate everything. I enjoyed automating it, and ended up making a program that did in about 5 minutes what previously took me weeks to do. Now all the BO developers use it, and I mostly spend my time updating that.
In summary... It sucks to work with when it's +60% of your job, but you don't have to lose out on Programming. If anything, I think I've improved my programming. Now I barely do the crappy side of the work. I just run my program, and everything works out.
I'm not sure what you are asking in question "B".

What kind of stats does your company collect to define code / software product quality

Most programming houses/managers I know of can only define quality in terms of the number of bugs made/resolved in retrospect.
However, most good programmers can innately sense quality once they start meddling with the code. (Right?)
Have any programming houses that you know of successfully translated this information into metrics that organizations can measure and track to ensure quality?
I ask since I very often hear rants from disgruntled managers who just cannot put their finger on what quality really is. But some organizations, like Honeywell, I hear, have lots of numbers to track programmer performance, all of which translate to numbers and can be ticked off during appraisals. Hence my question to the community at large, to bring out the stats they know of.
Suggestions about tools that can do a good job of measuring messy code will help too.
At one customer site we used the CRAP metric which is defined as:
CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)
Where comp(m) is the cyclomatic complexity of a given method and cov(m) is the level of unit test coverage for that method. We used NDepend and NCover to provide the raw information to calculate the metric. It was useful for finding particular areas of the code base where attention should be paid. Also, rather than specifying a particular value as a target, we aimed for improvement over time.
Not perfect by any stretch, but still useful.
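Once NDepend and NCover have produced the raw numbers, the calculation itself is trivial; a minimal sketch:

    def crap(complexity, coverage_percent):
        # CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)
        return complexity ** 2 * (1 - coverage_percent / 100) ** 3 + complexity

    # A complex, untested method scores badly; covering it with tests
    # drives the score down towards its plain complexity.
    print(crap(10, 0))             # 110.0
    print(crap(10, 100))           # 10.0
    print(round(crap(10, 80), 2))  # 10.8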
Just a quick reminder:
Code quality is:
- not defined by a single criterion: there are several groups of people involved in code quality (developers, project managers and stakeholders), and they all need to see code quality represented differently.
- not defined by one number coming from one formula, but rather by the trend of that number: a "bad" score in itself does not mean anything, especially if it is legacy code, but a bad score which keeps getting worse... that is worrisome ;)

Understanding code metrics

I recently installed the Eclipse Metrics Plugin and have exported the data for one of our projects.
It's all very good having these nice graphs, but I'd really like to understand in more depth what they all mean. The definitions of the metrics only go so far in telling you what they really mean.
Does anyone know of any good resources, books, websites, etc, that can help me better understand what all the data means and give an understanding of how to improve the code where necessary?
I'm interested in things like Efferent Coupling, and Cyclomatic Complexity, etc, rather than lines of code or lines per method.
I don't think that code metrics (sometimes referred to as software metrics) provide valuable data in terms of where you can improve.
With code metrics it is sort of nice to see how much code you write in an hour etc., but beyond that they tell you nothing about the quality of the code written, its documentation and code coverage. They are pretty much a weak attempt to measure where you cannot really measure.
Code metrics also discriminate against the programmers who solve the harder problems, because they obviously managed to code less. Yet they solved the hard issues, while a junior programmer whipping out lots of crap code looks good.
Another example of using metrics is the very popular Ohloh. They employ metrics to put a price tag on an open-source project (using number of lines, etc.), which is in itself an attempt that is flawed as hell, as you can imagine.
Having said all that, the Wikipedia entry provides some overall insight on the topic. Sorry not to answer your question in a more supportive way with a really great website or book, but I bet you got the drift that I am not a huge fan. :)
Something to employ to help you improve would be continuous integration and adherence to some sort of standard for code, documentation and so on. That is how you can improve. Metrics are just eye candy for meetings - "look, we coded that much already".
Update
Ok, well, my point is that efferent coupling or even cyclomatic complexity can indicate something is wrong - it doesn't have to be wrong, though. It can be an indicator to refactor a class, but there is no rule of thumb that tells you when.
IMHO a rule such as "500+ lines of code, refactor" or the DRY principle is more applicable in most cases. Sometimes it's as simple as that.
I'll give you this much: since cyclomatic complexity is graphed as a flow chart, it can be an eye opener. But again, use it carefully.
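To show what the number actually counts: McCabe's cyclomatic complexity is 1 plus the number of decision points. A toy approximation for Python source using the ast module (real tools, such as the Eclipse plugin for Java, count language constructs much more carefully):

    import ast

    def approx_cyclomatic_complexity(source):
        # 1 + one per branch point; a simplification of McCabe's metric.
        tree = ast.parse(source)
        decisions = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                     ast.BoolOp, ast.IfExp)
        return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

    print(approx_cyclomatic_complexity('''
    def grade(score):
        if score >= 90:
            return "A"
        elif score >= 75:
            return "B"
        return "C"
    '''))  # 3: two branch points plus one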
In my opinion metrics are an excellent way to find pain points in your codebase. They are also very useful to show your manager why you should spend time improving it.
This is a post I wrote about it: http://blog.jorgef.net/2011/12/metrics-in-brownfield-applications.html
I hope it helps