I'm 2 weeks into learning Objective-C (though I have previous experience with Java and Python), and after the initial hurdle I'm starting to get the hang of things like pointers, ARC, and delegates. However, on more than one occasion I've come across a problem/bug that I have no idea how to solve, and worse, no idea how to approach a solution.
My general troubleshooting strategy, for when things aren't working as expected, is as follows:
Reread the relevant section of my code to make sure the general logic and flow make sense
Look at any self-defined methods and make sure they're working properly
Put a few NSLog statements in the code to see where it is doing unexpected things
Once I've identified the troublesome part(s), search the Apple docs/Stack Overflow/Google to see if anyone has experienced the same problems.
I might be omitting a step, but this general process works for the most part. However, there are some problems I come across for which this process does not work, and I get totally stuck. A few examples:
I was trying to load an NSWindowController using the proper method called in the proper location, but it wasn't displaying. After a long time of aimless experimenting, it turned out to be an issue with pointers.
I'm testing an example pulled directly from an Apple Technical Q&A for creating a clickable hyperlink in an NSAttributedString, but when you hover over said link the pointing-hand cursor does not appear. See my thread for more info.
I've created an NSButton and want the pointing hand cursor to appear when I hover over the button. I'm using the code in the accepted answer here, but it doesn't work and I don't know why. Also, subclassing every button I want to use would be tedious and messy.
Clearly my troubleshooting strategy is not working. Once I exhaust all 4 steps to no avail, I have to just try random fixes until one, seemingly magically, works. This is neither time efficient nor good for helping me understand the language better. Are there steps that I can add to my troubleshooting strategy, and if so, what are they?
For example, when I asked a friend the same question, they suggested using breakpoints to monitor instance variables and objects.
When you come across a bug,
the first thing you need to do is read the error: see what the error is and where it occurred.
Go to the specific location of the error and see if there is an obvious mistake.
If not, set a breakpoint at the suspect location, then step through and check whether each value on the stack is what you expected it to be.
You may also want to write some tests that consider all the possibilities, such as: what if this value is null? What if it's 0? (A sketch follows below.)
Sometimes bugs are tricky, but it is not really recommended to reread all of your code every time you see an error.
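As a minimal illustration of making those edge cases explicit (plain Java here rather than Objective-C, and all the names are invented), checks like these make a bug fail fast at the suspect location instead of propagating downstream:

    import java.util.Objects;

    // Hypothetical sketch: validate the suspicious values explicitly so a
    // null or a zero fails immediately with a clear message, rather than
    // causing a confusing failure somewhere else later.
    class DebugChecksExample {

        static double unitPrice(double total, int quantity) {
            if (quantity == 0) {
                throw new IllegalArgumentException("quantity is 0; cannot divide");
            }
            return total / quantity;
        }

        static String label(Object value) {
            Objects.requireNonNull(value, "value must not be null");
            return value.toString();
        }

        public static void main(String[] args) {
            System.out.println(unitPrice(10.0, 4)); // prints 2.5
            System.out.println(label("ok"));        // prints ok
        }
    }

Breakpoints then become a complement: when a check like this fires, you already know which value to watch while stepping through.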
This is not a specific question about a certain piece of code, and apologies if my question is not valid on StackOverflow. With such a big community, I am just curious to find a common approach for best practices.
It's not that I don't know how to code the OOP way, but as my code keeps growing I lose track and, bluntly stated, it gets "ugly". What happens is that it then takes a lot of time to rewrite my code until it is correct again.
Don't get me wrong: I really put time and dedication into adapting my code the best way I can, and I rewrite parts that can be rewritten. But I actually want to avoid rewriting, and I know that takes a lot of practice. Any guidelines/tips on taking on a project are highly appreciated.
I have already written code in an OOP way, but only for small projects; now that I am starting to take on bigger projects, I lose track.
My question is simply: am I the only one who has this problem? And how do I keep my code neat all the way to the end?
Thanks in advance for any tips regarding this situation. I just want to write better and cleaner code.
In practice, you often need to update your existing code to meet new requirements. But if this is something you are doing on a regular basis, then it is possible that your development process is not good enough.
You may lack an up-front thinking and planning process.
What do you do when you have a new development task? Do you just go on and write some code?
I use a process like this:
1) User story.
Describe the user story; it often comes from the customer or the users. This can be something like "I want to have a new beautiful chart of the recent data on the dashboard".
2) Requirements specification. Start asking questions and add details. You may need to create a separate document just to describe all the details - what data exactly should be shown, should it be line or bar or other kind of chart, where exactly should the chart be placed, etc, etc...
The result is a detailed requirements specification. It should be clear enough that you could hand it to another developer, who would be able to continue with the following steps without asking questions.
Also check the article 10 Typical Mistakes in Specs.
3) Implementation details. Think about how to implement the requirements, describe the class structure and object interactions, think about extensibility and flexibility, and plan the unit tests for your code.
The basic idea is that you start writing and rewriting your code even before you write the real code.
Imagine how your classes and objects will be used, and try to write some tests before actually writing the code (a sketch follows after this step).
Here is where SOLID principles, design patterns and UML diagrams can be useful to design your code in a good way.
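As a rough, hypothetical sketch of that test-first habit (DashboardChart is an invented name, echoing the chart user story above), the test is written against the API you wish you had, and the class is only written afterwards to satisfy it:

    import java.util.List;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical sketch of designing by writing the test first: the test
    // below defined the desired API before DashboardChart existed, and the
    // minimal class was then written to make it pass.
    public class DashboardChartTest {

        static class DashboardChart {
            private final List<Double> values;
            DashboardChart(List<Double> values) { this.values = values; }
            int pointCount() { return values.size(); }
            double lastValue() { return values.get(values.size() - 1); }
        }

        @Test
        public void chartShowsTheMostRecentValues() {
            DashboardChart chart = new DashboardChart(List.of(1.0, 2.0, 3.0));
            assertEquals(3, chart.pointCount());
            assertEquals(3.0, chart.lastValue(), 0.0001);
        }
    }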
4) Estimation. If you did well at point (3), it will be trivial to split your implementation into small steps and estimate how much time you will need for each step, like:
Database migration - add new tables - 1h
Implement ClassA, ClassB, ClassC - 1h
Implement ClassD, with a special calculation algorithm - 4h
Unit tests - 2h
Testing, fixing bugs found - 20% of the above
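For the list above, that works out to 1 + 1 + 4 + 2 = 8 hours of work, plus 20% (about 1.6 hours) for testing and bug fixing: roughly 10 hours in total.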
I usually use this scheme of estimation (along with 20% for unexpected things), and it is possible to get very accurate estimates this way. For a 40-hour task the error can be 1-2 hours; it should never be anything like a 50% error.
5) Implementation. Here you just go on and do the task.
Everything is planned, everything is absolutely clear.
No thinking about design and strategy, only local decisions (like: do I use a for or a while loop here?).
This way you minimize the number of "surprises" during the implementation and reduce the need to rewrite the code later.
Always keep SOLID in mind. That will help you stay on the right track. Here is a good starting point: http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod
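For instance, here is a minimal, hypothetical sketch of the "D" in SOLID (dependency inversion), reworking the invented DashboardChart from the earlier sketch: the high-level class depends on an abstraction, so the data source can change without touching the chart.

    import java.util.List;

    // Hypothetical sketch of dependency inversion: DashboardChart depends
    // on the ChartDataSource abstraction, not on any concrete data source.
    interface ChartDataSource {
        List<Double> recentValues();
    }

    class DatabaseChartSource implements ChartDataSource {
        public List<Double> recentValues() {
            return List.of(1.0, 2.0, 3.0); // stand-in for a real database query
        }
    }

    class DashboardChart {
        private final ChartDataSource source;

        DashboardChart(ChartDataSource source) {
            this.source = source;
        }

        int pointCount() {
            // the chart never knows (or cares) where the data comes from
            return source.recentValues().size();
        }
    }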
I've been wanting to help out with an open source project for some time now. I want to do this for two main reasons: I am a strong supporter of FOSS, and I want to gain experience.
I've struggled to find a project that I would want to help with. There are many I would like to help with, but they are far too complex for what I would want to start with. However, I recently found a browser, uzbl, which seems complex enough for me to learn from, yet simple enough for me to wrap my head around.
It is a simple, lightweight browser that uses WebKit and GTK.
Now that you are familiar with my position, here is my question.
What is the best way to go about understanding the code base?
Now I realize this is a subjective question. However, I feel if I have the suggestions of more experienced developers, I may be able to cut down on the wrong turns I make.
Should I familiarize myself with the libraries used first? Or should I look through the core code and concern myself with what goes in under the hood after I have a basic understanding of the global picture?
How do you guys go about understanding the code base of a new project?
Thank you for your time and effort.
First, compile and run it. With many projects, simply accomplishing that task will give you some understanding :-)
Second, it's good to have some problem to solve - some small task, like fixing a minor bug somewhere. Just debug it, search for the bug's origin, and try to fix it. Then run the unit tests (if any) and see what else you broke with your fix. Investigate that, too.
Then repeat. A dozen bugs, preferably in different areas, give you a much better understanding of the codebase.
After that, I usually start wandering around the product, just looking at how it works and playing with it. Natural curiosity takes over sooner or later, and I find a feature that interests me, in a "wow, I wonder how that works" kind of way. Then I try to sketch a high-level design for that feature myself, along the lines of "what would I do if I had to create this feature?" Very briefly, just in my mind, no fancy diagrams and whatnot.
In doing so, I inevitably find myself in need to know more about the system: after all, you can't design a fully isolated feature, can you? You'll have to interact with the rest of the system, and you need to know how. So I go into code and find out.
Then a moment may come when I realize that I really don't know how to design some little part of that feature. I think, and I think, and I think, and I just don't know. So then comes the sweetest part: I go into the code and look at how it was designed in the first place. Sometimes I would be disappointed in the project. But more often, I would be disappointed in myself. And that's exactly what's precious: that's how I learn.
I have found two things (in addition to the answer by Fyodor Soikin) that really help. One is getting doxygen to document everything for me. It makes it a lot easier when I can go through the function declarations and click on the data structure that is supposed to be passed in, so I can see all the members it contains (if there is no existing documentation, you can just set EXTRACT_ALL = YES). I also find it very useful to compile the project with debugging symbols and then run it in gdb, so that you can follow it from start to finish.
I would look for their tests first, and look at those. If there aren't tests, I probably wouldn't bother with the project. If there are tests, they should be instructive as to how the software works. Step through the tests in the debugger to understand how the high level components fit together.
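As a hypothetical illustration of tests doubling as documentation (the names here are invented), even a single small test tells you what a component is for, how it is used, and gives you a natural place to set a breakpoint and step in:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical sketch: reading (and stepping through) a test is often the
    // quickest way to learn what a component does and how it is meant to be used.
    public class TitleParserTest {

        // minimal stand-in for a component you are trying to understand
        static class TitleParser {
            String titleOf(String html) {
                int start = html.indexOf("<title>") + "<title>".length();
                int end = html.indexOf("</title>");
                return html.substring(start, end);
            }
        }

        @Test
        public void extractsTheTitleFromAPage() {
            // set a breakpoint here and step in to watch the component work
            TitleParser parser = new TitleParser();
            assertEquals("Example", parser.titleOf("<html><title>Example</title></html>"));
        }
    }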
Not long ago, a project was handed over from the onshore team to our team (offshore). However, we had difficulties with the hand-over process.
We couldn't think of any questions to ask during their design walk-through, because we were overwhelmed by the sheer amount of information. We wanted to ask, but we didn't know what to ask. Since they got no questions from us, management thinks the hand-over was done successfully.
We tried to go through all the documentation on our company wiki before attending the handover presentation, but there are too many documents and we didn't even know where to start.
I wonder: are there any rules or best practices we can follow to ensure a successful project hand-over, either from us or to us?
Thanks.
In terms of reading the documentation, personally I'd go for this order:
Get a short overview of the basic function of the application - what it is meant to achieve. The business case is probably the best document that will already exist.
Then the functional specification. At this point you're not trying to understand any sort of how or technology, just what the app is meant to do. If it's massive, ask them what the key business processes are and focus on those.
Then the high level technical overview. This should include an architecture diagram, required platforms, versions, config and so on. List any questions you have.
Then skim any other useful-looking technical documents - certainly a FAQ if there is one; test scripts can be good too, as they outline detailed "how to" scenarios. Maybe it's just me, but I find reading technical documents before I've seen the system a waste - it's too academic, and they're normally shockingly written. It's certainly an area where I'd limit the time spent if I didn't feel I was getting a reasonable return.
If there are several of you, arrange structured reviews among yourselves and discuss the documents you've read, making sure you got what you needed out of them. If the system is big, each of you can take an area and present it to the others - give yourselves a reason to learn as much as possible; knowing you're going to be quizzed is a good motivator. Make a list of questions where you don't understand something. Structured reviews will focus your minds and make the work more interactive, rather than just trawling through page after page of tedious documents.
Once you get face to face with them:
Start with a full system demo. Ask questions as they come up, don't let them fob you off with unclear answers - if they can't answer something have it written down and task them with getting the answer.
Now get the code checked out and running on your machines. Do this on at least two machines - one they lead, one you lead. Document the whole process - this is the most important step. If you can't get the code running you're screwed.
Go through the build process. Ensure that you can build the app (including any automated build and unit tests they may have). Note that all unit tests should pass - if they don't or if they say "oh, that one always fails" then they need to fix that before final acceptance.
Go through the install process. Do this at least twice, once they lead, once you lead. Make sure that it's documented.
Now come up with a set of common business functions carried out with the application. Use this to walk the code with them. The code base will be too big to cover the whole thing but make sure you cover a representative sample.
If there is a database or an API do a similar exercise. Come up with some standard data you might need to extract or some basic tasks you might need to carry out using the API and spend some time working through these with them.
Ask them if there's anything they think you should know.
Make sure that any questions you've written down anywhere else are answered.
You may consider it worth going through the bug list (open and closed) - start with the high-priority ones and talk through anything that looks particularly worrying. Even if they've fixed it, it may point at a bit of code that is troublesome.
And finally if the opportunity exists - if there are any outstanding bugs or changes, see if you can pair program a couple.
Do not finally accept the app unless you are 100% sure you can:
Get the code to compile
Get the code to build (including the database)
Get the application installed
Do not accept that the handover is complete until they have:
Documented anything you picked up on that wasn't covered to your satisfaction
Answered ALL of your questions - a question they won't answer after being asked repeatedly screams of something they're hiding
And grab their e-mail addresses and phone numbers. Even if it's only informal they'll probably be willing to help out if the shit really hits the fan...
Good luck.
My basic process for receiving a handover would be:
Get a general overview of the app, document it
Get a list of all future work that the client expects
... all known issues
... any implementation specifics
As much up-to-date documentation as they have
If possible, have them write some tests for critical components of the system (or at least get them thoroughly documented)
If there is too much documentation (which is possible), just confirm that it is all up to date, and make sure you find out from them where to start if that is not clear.
Ask as many questions as possible - anything that comes to mind, because you may not have the chance again.
Most handovers, perhaps all of them, cause a lot of information to be lost. The only effective way to perform a handover that I have seen is to do it gradually. One way is to have a few key people from Phase One stay on the project well into Phase Two.
The extreme solution is to get rid of all handovers, and start using an Agile mindset.
As a start, define the exit criteria for the handover. These should be discussed, negotiated and agreed upon by both parties, and higher management should be made aware of them. Then write up a checklist of everything needed to meet the exit criteria, and chase it.
Check out "Software Requirements" and Software Requirement Patterns for ideas on questions to ask when gathering information about a project. I think that just as they would work for new development, they would also help you to come to terms with an existing project.
Code can be perfect, and also perfectly useless at the same time. Getting requirements right is as important as making sure that requirements are implemented correctly.
How do you verify that users' requirements are addressed in the code you're working on?
You show it to the users as early and as often as possible.
Chances are that what they've asked for isn't actually what they want - and the best way of discovering that is to show them what you've got, even before it's finished.
EDIT: And yes, this is also an approach to answering questions on StackOverflow :)
You write tests that assert that the behavior the user requires exists. And, as was mentioned in another answer, you get feedback from the users early and often.
Even if you talk with the user and get everything right, the user might have gotten it wrong: they won't know until they use the software that they didn't want what they asked for. The surest way is to build some sort of prototype that allows the user to "try it out" before you write the code. You could try something like paper prototyping.
If possible, get your users to write your acceptance tests. This will help them think through what it means for the application to work correctly. Break the development down into small increments that build on each other. Expose these to the customer early (and often), getting them to use it, as others have said, but also have them run their acceptance tests. These should also be developed in tandem with the code under test. Passing the test won't mean that you have completely fulfilled the requirements (the tests themselves may be lacking), but it will give you and the customer some confidence that you are on the right track.
This is just one example of where heavy customer interaction pays off when developing code. The way to get the most assurance that you are developing the right code is having the customer participating in the development effort.
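As a hypothetical sketch of such an acceptance test (the Order class and the discount rule are invented for illustration), the test can be written so the customer can read it almost as prose:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical acceptance-test sketch, readable as the requirement itself:
    // "orders over $100 get a 10% discount; smaller orders pay full price".
    public class DiscountAcceptanceTest {

        // minimal stand-in for the application code under test
        static class Order {
            private final double amount;
            Order(double amount) { this.amount = amount; }
            double total() { return amount > 100 ? amount * 0.9 : amount; }
        }

        @Test
        public void ordersOverOneHundredDollarsGetTenPercentOff() {
            assertEquals(180.0, new Order(200.0).total(), 0.0001);
        }

        @Test
        public void smallOrdersPayFullPrice() {
            assertEquals(50.0, new Order(50.0).total(), 0.0001);
        }
    }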
How do you verify that users' requirements are addressed in the code you're working on?
For a question put in this form the answer is "You can't".
The best way is to work with users from the very first days, show them prototypes and incorporate their feedback continuously.
Even so, at the end of the road, there will likely be nothing resembling what was originally discussed and agreed on.
Ask them what they want you to build before you build it.
Write that down and show them the list of requirements you have written down.
Get them to sign off on the functional design.
Build a mock up and confirm that it does what they want it to.
Show them the features as they are being implemented, to confirm that they are correct.
Show them the application when it's finished and allow them to go through acceptance testing.
They still won't be happy, but you will have done everything you can.
Any features that are not in the document they signed off on can be considered change requests, for which you can charge them extra. Get them to sign off on everything you show them, to limit your liability.
By using a development method that frequently checks the alignment between the implementation and the requirements.
For me, the best way is to involve an "expert customer" who validates and tests the implementation iteratively, as often as possible.
If you don't, you risk ending up with, as you said, a very beautiful piece of software that is perfectly useless.
You can try personas: a cohort of example users who use the system.
Quantify their needs and wants, and make up scenarios of what is important to them and what they need to get done with the software.
Most importantly, make sure that the users' (the personas') goals are met.
Here's a post I wrote that explains it in more detail.
You write unit tests that expect an answer that supports the requirements. If the requirement is to sum a set of numbers, you write:
@Test
public void testSumInvoice()
{
    // create an invoice with 3 lines of $1, $2, and $3 respectively
    Invoice myInvoice = new Invoice().addLine(1).addLine(2).addLine(3);
    // assertEquals(expected, actual): the three lines should sum to 6
    assertEquals(6, myInvoice.getSum());
}
If the unit test fails, either your code is wrong or it was possibly changed due to some other requirement. Now you know that there is a conflict between the two requirements that needs to be resolved. It could be as simple as updating the test code, or as complex as going back to the customer with a newly discovered edge case that isn't covered by the requirements.
The beauty of writing unit tests is that they force you to understand what the program should do; if you have trouble writing a unit test, you should revisit your requirements.
I don't really agree that code can be perfect... but that's outside the real question. Before any design or coding is done, you need to find out from the users what they want: ask them "what does success look like?", "what do you expect when the system is complete?", "how do you expect to use it?" - and videotape the responses, mind-map them, or wireframe them, and then review the result with the users to ensure you captured the most important aspects. You can then use those items to verify the iterative deliveries. Expect the users to change their minds and needs over time, especially once they have it in their hands (IKIWISI - I Know It When I See It), and record any change requests in the same fashion.
AlbertoPL is right: "Most of the time even the users don't know what they want!"
And if they do know, they have a solution in mind and specify aspects of that solution instead of just telling you the problem.
And if they do tell you a problem, they may have other problems without being aware that these are related through a common cause or a common solution.
Thus, before you implement mockups and prototypes, go and watch the use of what the customer already has or what the staff is still doing by hand.
On our Scrum board, tasks start at 'To Do', go to 'In Progress', and when you're done with a task, they move to 'To Verify' before ending up in 'Done'. The 'To Verify' column is when you're done with a task and someone else can have a look at it, test it, and comment on it.
This has proven helpful for errors, better code, etc.
To people who have a similar practice: after the developer has addressed the comments/errors, do you verify it again, or do you assume the issues have been addressed and move the task to 'Done'?
I hope this is clear, and would like to hear your thoughts.
This question is not specific to Scrum, I've seen this problem outside agile processes too.
The answer turns out to be: it depends on the issues raised in verification. If there are minor issues raised, and the responsible developer is senior enough, then trust him to fix things the first time around. But if the person doing the verification considers the items too complex, or the Scrum Master lacks the confidence to trust the developer to get it right the second time around, then you move the post-it back to In Progress.
A good example of the kind of error you don't bother checking is a simple typo. A good example of something you would check again is an error in a boundary condition, when there are many interdependent boundary conditions.
In my experience, bug fixes have a 50-75% chance of introducing new bugs, especially if the code is not covered by test cases. I would certainly verify it again.
Never assume the issue as being addressed until it is independently (i.e. not by the person who fixed it) verified.
We have no 'To Verify' column. A task is in progress until it has been implemented and tested. An untested task cannot be Done - and why should someone else test it, report back to the programmer, and then have the programmer fix it? That only adds latency to the workflow. The programmer should test his own code, write unit tests for it if possible, and integrate it into the app and test it there as part of the natural workflow. That way he finds his own bugs and can fix them immediately. When he sets the task to Done, he's not only convinced that he has fully implemented the task, but also that it is bug-free.
Okay, we all know that means little. Sometimes bugs are found much later, but those are not the obvious bugs, and usually their fix becomes a task of its own.
In the projects I have been in (both agile and non-agile), bug fixes were always verified by someone else. Quite often new bugs are introduced, so a bit of exploring around the fix is needed. I have even seen debug code forgotten in the build - everything works fine, but extra files appear from nowhere.
It's also possible that the developer did not find all the paths to the bug, or that the bug report was so unclear that the developer made the wrong fix - e.g. if something was misunderstood and correct functionality was reported as a bug.
To ensure things stay done when they are done, tests for the fix should also be added to your automated tests - otherwise some embarrassing corner-case bug will re-appear months later.
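As a hypothetical sketch of such a regression test (the off-by-one paging bug and all the names are invented for illustration), the fixed boundary conditions get locked into the automated suite so the corner case cannot silently come back:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical regression test: after fixing an off-by-one in a paging
    // calculation, the exact boundary values are pinned down in the suite.
    public class PagingRegressionTest {

        // minimal stand-in for the code under test (the fixed calculation)
        static int pageCount(int items, int pageSize) {
            return (items + pageSize - 1) / pageSize;
        }

        @Test
        public void boundaryConditionsStayFixed() {
            assertEquals(0, pageCount(0, 10));  // no items -> no pages
            assertEquals(1, pageCount(10, 10)); // exact fit -> one page, not two
            assertEquals(2, pageCount(11, 10)); // one item over -> a second page
        }
    }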