What were the (then) unpublished optimizations that Steve Yegge referred to in "Dynamic Languages Strike Back"?

I was reading the transcription of Steve Yegge's Dynamic Languages Strike Back presentation, when I noticed this comment when he begins to discuss trace trees:
I'll be honest with you, I actually have two optimizations that couldn't go into this talk that are even cooler than this because they haven't published yet. And I didn't want to let the cat out of the bag before they published. So this is actually just the tip of the iceberg.
What are the optimizations he was referring to?
Update
Several days ago, I asked this question in a comment on the article. However, comment moderation is turned on (for good reasons), so it hasn't appeared yet.
Update
It has been a couple weeks since I first tried to reach the author. Does anyone else know another way to contact him?

Take a look at this: https://blog.stackoverflow.com/2009/04/podcast-50/
EDIT: It is difficult to find specific (confirmed) references; however, this paper perhaps gives some information on the subject: http://people.mozilla.org/~dmandelin/tracemonkey-pldi-09.pdf
and this blog post, which appears related: http://andreasgal.wordpress.com/2008/08/22/tracing-the-web/
This one might not be related, as it is a Microsoft Research paper from March 2010: http://research.microsoft.com/pubs/121449/techreport2.pdf
This is pure speculation on my part, but it appears (at least to me) that there are two major forms of performance: that at the developer level (the IDE) and that at the compiler level, which this subject of trace trees addresses, hence the "continuous optimization" during execution to get the trace inline for the hot spots. This quickly leads me to areas of optimization related to multi-cores and how the trace tree might somehow be utilized in that regard (multi-core environments). Interesting stuff, considering the currently theoretical speed speculation for non-static typing as compared to the speed winners of static typing used in current C, and the performance potential to be gained. I recall a discussion I had with a hardware engineer years ago (1979) where we speculated that if we could just capture the 'hot' execution paths, we could get a huge gain in performance by keeping them "ready to run" in situ somehow. This was well before the work at HP in this regard (1999?), and unfortunately we did not get further than the discussion stage due to other commitments. (I am rambling here, I think... :)
Or was this just related to the Go language? Hard to tell in some respects.

You can watch the video on YouTube, on the StanfordUniversity channel: http://www.youtube.com/watch?v=tz-Bb-D6teE
You can add comments there too. Maybe someone will come to your rescue.

Related

Do you rely on your memory, or consult references and use a lot of Intellisense? [closed]

I have noticed I do not code as much as I used to. Today I dedicate more time to analysis and design; then I communicate that design to programmers, and they do the coding. This has affected my coding productivity, because I must consult references and rely on Intellisense. Things are becoming more complex every day.
Now, here is the irony. If I were to hire a programmer and sit him or her in front of a computer, I might ask for some coding so I could check their abilities. I would evaluate them based on their use of memory versus consulting references. Maybe I would prefer the programmer who did not consult references too much, but who knows what they are doing.
What is your opinion and experience?
I would say that a developer who knows how to find the answers is better than one who already has overall good knowledge. I find that Intellisense is a good tool for finding answers; besides, it is too much to remember all the method names, arguments, overloads, etc.
I use memory to get me into the right general area (e.g. knowing which classes to use or at least which namespace they'll be in) and then often Intellisense/MSDN for the exact method name or arguments to use.
Having said that, Stack Overflow is improving my ability to code without any references (or even compilation) - I'm sure code will just work out of the box for me more often now than it used to. (I tend to post and then check the code works, add links to MSDN etc - assuming I'm reasonably confident in the approach.)
Someone knowing what resources are available, and how to find the answers, and how to effectively debug - these are qualities I look for now in prospective employees.
I used to consult my memory only, but two things have happened:
Class libraries have gotten larger, and so has the number of languages available
The ratio of programming-related memory to personal-life-related memory has shifted away from code
Programming today is also eight times harder than it was when I started. I used to work on 8-bit machines, now I'm working on 64-bit ones. :)
I was once at a job interview with the CTO of a company. He asked a question based on a real-life problem the company had had a while back and solved. It was a multi-step problem.
I was standing in front of a whiteboard working through my solution and struggling through a particular part, a part I would have used Google for before even attempting it, had I been tasked with solving this problem for real instead of for an interview. At that point he asked, "Would you do anything differently if this wasn't an interview question?" I responded, "Yes. I would exhaust all possibilities of using a third-party component for this part of the task, and look up the solution, because it is a well-defined problem that's been solved several times." There was a bit more discussion where I justified my answer, explained exactly what I would research, and solved some other parts of the question. In the end I was offered and accepted the job, partly because of knowing how to find out what I didn't know.
Being able to use references is as important as being able to code from memory. Obviously, if you are a one-language shop and want people proficient in that language, the person should be able to write a complete hello-world app in Notepad. Interview problems should focus on small problems, and one should not worry about small syntax errors. This is why a whiteboard is the best IDE for interview questions.
Unless you demand all your coders use Notepad and don't give them internet access, don't be too concerned about the syntax. If you do sit them down in front of a computer, worry about the finished product as well as the technique used to get there.
I'm a PHP programmer in my early 30's. I rely on PHP's excellent documentation, for several reasons:
Programming concepts don't change. If I know what my object models are and how I want to manipulate data, then there are dozens of ways to implement the details. The details are important, but a better grasp of the design and structure is more important.
PHP has notoriously inconsistent functions. One string function might use ($needle, $haystack) as parameters, and another might use ($haystack, $needle). Trying to keep them straight isn't worth the hassle when you can just type php.net/function_name and get the reference; see the snippet below.
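For instance (a minimal sketch using two real built-ins; note how the argument order flips between the string and array functions):

    <?php
    // String search: haystack first, needle second.
    $pos = strpos('the quick brown fox', 'brown');        // strpos($haystack, $needle)
    // Array search: needle first, haystack second.
    $ok = in_array('brown', ['quick', 'brown', 'fox']);   // in_array($needle, $haystack)
    var_dump($pos, $ok); // int(10), bool(true)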
I don't rely on intellisense, simply because I haven't found a decent IDE for PHP that does it well. Eclipse is ok, but it's not fantastic. Netbeans gives me 'PHPDoc not found' for all the built-in PHP functions whenever I install it. There's nothing that I've found so far that beats out the documentation.
The bottom line is that the ability to memorize functions isn't indicative of coding ability. Obviously there's a key set of basic functions that a good programmer will know just from extensive usage over time, but I wouldn't base a hiring decision on whether someone knows substr_replace vs. str_replace from memory.
Because I've read either the documentation, or articles, or a book on a subject, the things I learn on a topic are organized. The result is that, if I can't bring something up from memory, I can probably find it quickly through IntelliSense or the Object Browser.
If worst comes to worst, I can pick up the book again; something these youngsters are not being taught to do.
John Saunders
Age 51
Pretty much Google + Old Projects + my memory (of course)
References will not solve your problems, though; they're only for the nuts and bolts. The higher level of problem solving is the actual "programming" part, IMHO.
I tend to use Intellisense and Resharper much more than I used to before, but this has helped my overall productivity. If I can get the idea of how I want to solve something and then use tools to get the more boring parts like class names and function signatures, why shouldn't I use the tools I have? I feel relieved that Jon Skeet has a similar approach it seems.
I rely on my bookmarks and books... and my ability to use them effectively. I have multiple books above my desk, including a copy of the ISO C90 standard. Moreover, I use Xmarks to have access to my bookmarks wherever I go. Sometimes, I make a pdf out of a particular page and upload it to my web-site if it is important enough.
Sometimes the information provided by the resources I use makes its way into my terrible memory... maybe.

Low Friction Minimal Requirements Gathering [closed]

How can our team gather requirements from our "Product Owner" in as low-friction yet usable a way as possible?
Now, here are the guidelines: no posts saying that it can't be done, or that the business needs to decide that it cares about quality, yada yada. The group I work in is small and has been successful for years. I just want to help them step it up a notch.
Basically, I'm on a 6- or 7-person team with one Product Owner. She does a great job but is juggling a few different roles (as I believe is common on extremely small teams). Usually requirements are given at sporadic times (email convos, face-to-face discussions, meetings, etc.). They are never entered into a system, and sometimes this results in features missing a release, or in the release getting pushed back because everyone forgot about a necessary feature.
If you're in a similar situation but you found a way to overcome this, I'd love to hear it. I'm happy to write code to help ease this situation but it can't be a web site that the Product Owner has to go to in order to get anything done. She is extremely busy and we need some way of working together as a team in order to gather these requirements.
I'm currently thinking of something like this: developers and team members gather the requirements discussed in face-to-face meetings and write some quick notes on the features on a wiki page. The Product Owner is notified whenever these pages are updated, and it then becomes her responsibility to ensure accuracy.
Pros: we'll have some record of the features. Cons: the developers are taking responsibility for something they ordinarily wouldn't. I'm okay with that here; I think in this situation it's teamwork.
Of course once we do this, then we're going to see that the product owner probably doesn't have enough time to ensure feature accuracy. Ultimately she is overburdened and I think this will help showcase that fact, but I just need to be able to draw attention to that first.
So any suggestions?
P.S. Her time is extremely limited, so it is considered unreasonable to expect her to type in the requirements after a discussion. She only has time to discuss them once and move on.
Although the concept of a "product owner" is a little ambiguous to me, I think I am working in very similar circumstances: the customer is extremely busy and is always the bottleneck in developing requirements.
On the surface, what we try to do in this situation is quite obvious and seemingly simple: we try to make sure that the customer is involved in "read-only / talk-only" mode. No writing. Minimum reading. Mostly talking.
The devil, of course, is in the details. So, here are some specifics about our process (in no particular order):
We often start from recording problem statements, which are the ultimate sources of requirements. In fact, sometimes a problem statement is all that we record initially, just to make sure it does not get lost.
NB: It is important to distinguish problem statements from requirements. Although a problem statement sometimes clearly implies some requirement, in general a single problem statement may yield a whole bunch of requirements (each having its own severity and priority); moreover, sometimes a given requirement may define a solution (usually just a partial one) to multiple problems.
One of the main reasons for recording problem statements (and this is very relevant to your question!) is that semantically they are somewhat "closer to the customer's skin" and more stable than the requirements derived from them. I believe those problem statements make it much easier and quicker to put the customer into the proper context whenever he has time to provide feedback to the development team.
We record all the requirements (and trace them back to problem statements), regardless of when we are going to implement them. Priorities govern the order in which requirements get implemented. Of course, they also govern the order in which the customer reviews unfinished requirements.
NB: A single fat document containing all requirements is an absolute no-no! All the requirements are placed in "problem tracking database", along with bug reports. (A bug is just a special case of a problem in our book.)
We always try to do our best to minimize the number of iterations necessary to "finalize" each requirement (or a group of related requirements). Ideally, a customer should have to review a requirement only once.
Whenever the first review turns out to be insufficient (happens all the time), and the requirement in question is complex enough to require a lot of text and/or illustrations, we make sure that the customer does not have to re-read everything from scratch. All the important changes/additions/deletions since the previously reviewed version are highlighted.
While a problem or requirement remains in an unfinished state, all the open issues (mostly questions for the customer) are embedded in the document and highlighted. As a result, whenever the customer has time to review requirements, he does not have to call a meeting and solicit questions from the team; instead he can open any unfinished document, see what exactly is expected from him, and then decide the best way and time (for him) to address any of the open issues. Sometimes the customer chooses to write an email or add a comment directly to the problem document.
We try our best to establish and maintain an official domain vocabulary (even if it gets scattered across the documentation). Most importantly, we practically force the customer to stick to that vocabulary.
NB: This is one of the most difficult parts of the process, and the customer tries to "rebel" from time to time. However, at the end of the day everybody agrees that it is the only way to make precious meetings with the customer as efficient as possible. If you have ever attended one-hour meetings where 30 minutes were spent just getting everybody on the same page (again), I'm sure you would appreciate having a vocabulary.
NB: Whenever possible, any changes in the official vocabulary get reflected in the very next release of the software.
Sometimes, a given problem can be solved in multiple ways, and the right choice is not obvious without consulting with the customer. It means that there will be a "menu of requirements" for the customer to pick from. We document such "menus", not just the finally chosen requirement.
This may seem controversial and look like unnecessary overhead. However, this approach saves a lot of time whenever the customer (usually a few weeks or months down the road) suddenly jumps in with a question like "why the heck did we do it this way and not that way?" Also, it is not such a big deal to hide "rejected branches" using proper organization/formatting of the requirements documentation. Boring but doable. :-)
NB: When preparing "menus of requirements", it is very important not to overdo them. Too many choices or too many levels of choice nesting, and the next review may require much more of the customer's time than is really necessary. Needless to say, the time spent elaborating rejected branches may be totally wasted. Yes, it is difficult to find the balance here (it greatly depends on the always-in-a-hurry customer's ability to think two or more steps ahead and do it quickly). But, what can I say? If you really want to do your job well, I am sure that after some time you will find the right balance. :-)
Our customer is a very "visual" guy. Therefore, whenever we discuss any significant user interface elements, screen mockups (or even lightweight prototypes) often are extremely helpful. Real time savers sometimes!
NB: We do screen mockups exclusively for the customer, only in order to facilitate discussions. They may be used by developers too, but in no way do they substitute for user interface specifications! More often than not, there are some very important UI details that get specified in writing (primarily for developers).
We are lucky enough to have a customer with a very technical background, so we do not hesitate to use UML diagrams as a discussion aid. All kinds of UML diagrams, as long as they help the customer get into the proper context quicker and stay there.
I am talking about requirements-level UML diagrams, of course. Not about implementation-level ones. I believe that even not very technical people can start digging requirements-level UML diagrams sooner or later; you just have to be patient and know what to put on a diagram.
Obviously, the cost of such a process greatly depends on the analytical and writing skills of the team, and of course on the tools you have at your disposal. And I must admit that in our case this process appears to be quite expensive and slow. But, taking into account the very low rate of bugs and the low rate of "vapor-features"... I think, in the long run, we get a very good payback.
FWIW: According to Joel's nice classification of software products, this project is an "internal" one. So we can afford to be as agile as our customer can handle. :-)
"Developers and team members gather requirements discussed in face to face meetings and write some quick notes"
Start with that. If you aren't taking notes, just make one small change. Take Notes. Later, you might post them to a wiki or create a feature backlog or start using Scrum or bugzilla or something.
First, however, make small changes. "Write stuff down" sounds like something you're not doing, so just do that and see what improves and what you can do next. Be Agile. Work Incrementally.
You might want to be careful of the HiPPO in the room. The Highest Paid Person's Opinion is not always a good one. We've tended to focus more on providing great tools and support for developers. These things, done right, take some of the hassle out of development, so that it becomes faster and more fun. Developers are then more flexible in terms of their workload, and more amenable to late-breaking changes.
One-Click testing and deployment are a couple of good ones to start with; make sure every developer can run up their own software stack in a few seconds and try out ideas directly. Developers are then able to make revisions quickly or run down side paths they find interesting, and these paths are often the most successful. And by successful I mean measured success based on real metrics gathered right in the system and made readily available to all concerned. The owner is then able to set the metrics, which they probably care about, rather than the requirements, which they either don't care about or have no experience in defining.
Of course it depends on the owner and your particular situation, but we've found that metrics are easier to discuss than requirements, and that developers are pretty good at interpreting them too. A typical problem might be that customers seem to spend a long time filling their shopping carts but don't go on to checkout.
1) A marketing requirement might be to make the checkout button bigger and redder.
2) The CEO's requirement might be to take the customer straight to checkout, as the CEO only ever buys one item at a time anyway.
3) The UI designer's requirement might be to place a second checkout button at the top of the cart as well as the existing one at the bottom.
4) The developer's requirement might be some Web 2.0 AJAX widget that follows the mouse pointer around the screen.
Who's right?
Who cares... the customer probably saw the ridiculous cost of delivery and ran away. But redefine the problem as a metric instead of a requirement, and suddenly the developer becomes interested. The developer doesn't have to do 10 rounds with the CMO over what shade of red the button should be. He can play with his Web 2.0 thing all week, and then rush out the other 3 solutions on Monday morning. Each one gets deployed live for 48 hours, and the cart-to-checkout rate gets measured and reported instantly. None of it makes any difference, but the developer got to do their job, and the business shifts its focus onto the crappy products they sell and the prices they gouge on delivery.
Well, ok, so the example is contrived. There's a lot of work in there to make sure that the project is small, the team is experienced, hot deployment is simple, instant rollback is provided, and everyone's on board. What we wanted to get to is a state where the developer's full potential is not wasted, so that's why they're involved not just from the start, but also in the success. We started out with an issue like "the number of clicks during registration is too high", ran it through a design committee, and found that the number of clicks actually went up in the design specification. That was our experience, anyway. But leave the developer some freedom to simply reduce the number of clicks, and you might actually end up with a patented solution, as we did. Not that the developer cares about patents, but it had merit - and no clicks!
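To make the metric concrete, here is a hypothetical sketch of the cart-to-checkout calculation (the variant names and counts are made up; it assumes each 48-hour deployment logs raw cart and checkout counts somewhere queryable):

    <?php
    // Cart-to-checkout conversion: the one number everybody watches.
    function cartToCheckoutRate(int $carts, int $checkouts): float
    {
        return $carts > 0 ? $checkouts / $carts : 0.0;
    }

    // 48-hour totals per deployed variant (made-up numbers).
    $variants = [
        'bigger-red-button'    => [4200, 310],
        'straight-to-checkout' => [4050, 298],
        'second-top-button'    => [4150, 305],
        'ajax-widget'          => [3990, 301],
    ];
    foreach ($variants as $name => [$carts, $checkouts]) {
        printf("%-22s %.1f%%\n", $name, 100 * cartToCheckoutRate($carts, $checkouts));
    }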

What are effective ways to log and track programming mistakes?

On occasion I've heard people discuss the benefits of keeping track of programming mistakes, if for no other reason than it increases awareness of common errors. I've started to keep a list of bugs that I find in my code, along with what could have led to them. The main question I have is this:
What information related to my mistakes should I be keeping track of so that I can improve as a programmer?
And a couple other questions related to this:
How do I use this information once I start logging my mistakes?
Is tracking mistakes truly beneficial?
This is only useful if you are actually vigilant about tracking and reviewing. When I was working on a team, no matter how well we documented (for example) that our servers in the production environment were NATed and would not be able to resolve their own domain names or public IP addresses, every 6 months I'd get a call at 4 AM from the deployment team, or from a dev team that a new developer had joined, because they either forgot or were unaware.
I remember a particular engineer who was responsible for deploying. He had paper checklists, and we built him deployment tools that forced him to record his checklist, yet he would always forget to set the connection string (resulting in the 4 AM phone call). The point is, it's only worth it if you're going to use it vigilantly.
I've found the best way to use this information is by implementing your rules in a code analyzer like FxCop.
I think what is more useful than keeping a log of individual mistakes is coming away with a real understanding of why each was a mistake in the first place. Most mistakes stem from a lack of understanding about one thing or another; correct that understanding and you eliminate an entire set of potential mistakes in the future. If I were to log anything, it would be what I learned from the experience that I didn't know before, not the specifics of the mistake that was made, which are likely to be of limited usefulness when you look back later.
what was the mistake
how can it be avoided
add the latter to an appropriate checklist, and refer to it as often as appropriate
I think tracking mistakes can be worthwhile, but in my experience it helps a lot to categorize them at some level.
Every programmer is going to make enough mistakes over the course of their career to fill an encyclopaedia. If you make a huge checklist out of all of them then you're never going to get any coding done because you'll eventually be spending all of your time going over your checklist. So: categorize your mistakes in some way that makes sense to you so you can rifle through your list looking at the most important mistakes for the sort of code you're currently working on.
Also, to add to the above as far as what to collect:
what are the symptoms of the mistake (so you can find it later)
how you actually solved it
Yes, tracking your personal mistakes is beneficial. Refer to the SEI for numerous data points (here's one at random). One such methodology is the Personal Software Process (PSP). It's too long to go into here, but here's a book about it. There's also this free SEI publication on PSP.
If you balk at SEI and think Agile is the way to go, you'll probably get better mileage out of a book like Clean Code: A Handbook of Agile Software Craftsmanship (publisher website).
Bottom line: disciplined developer = good, undisciplined developer = bad.
I would also want to ask the question of how much time would be required to accurately track the mistakes, and if that time could be better spent directly on improving the software instead. If you can do this in a minimal amount of time and are able to refer back to your records to prevent future mistakes, it may be valuable. In the long run though, I think it will be better to stick with an absolutely high level list of common mistakes you make.
I'd look for trends or similar types of errors that tend to recur. Once you're aware of the types of errors you make, you can be alert for them, or you can change your coding style to avoid them.
As an example, I worked very closely with my office-mate in a previous life. At one point I was halfway through my first sentence explaining some odd behavior, and he interrupted with, "increment your counter." I still double-check my loop counters today!
Instead of keeping a log of future mistakes, maybe you could review your bug tracker history and try to find common types of errors which keep occurring.
I'm so inspired, I think I'll try this tomorrow.
"What information related to my mistakes should I be keeping track of so that I can improve as a programmer?"
Read other people's blogs. Write your own. You don't have to publish it. But you do have to turn each mistake into a story.
Don't drown this in metadata. This isn't amenable to a database or even a bug tracker. A mistake is a story where someone made a bad choice and recovered from it.
Here's your outline.
Context. What was going on.
Situation. The problem you were trying to solve.
What you did. Why it was bad.
What you should have done. Why it was better.
What changed. What you learned.
You should also record your successes in almost the same form.
Context. What was going on.
Situation. The problem you were trying to solve.
What you did. Why it was good.
What you learned.
When in doubt, watch a movie. Seriously. Characters are confronted with choices, make mistakes, and recover from those mistakes. That story line is the essence of drama. A mistake is the same thing.
Be careful what you log. Not every bad situation is a mistake. Some situations are inevitable or so far outside your control that nothing can be done about it.
Bad planning on someone else's part which puts you into panic-mode isn't your mistake. You can meticulously log everyone else's mistakes, but what's the point? If you're tracking what other people are doing, you should seek therapy.
Bad planning which provides inadequate budget, schedule or communication about changes may be a compound problem.
You didn't provide enough planning information up front.
In the scramble to cope, you made another mistake, which led to problem resolution and recovery.
In hindsight (which is always 20/20) you can see a decision which was wrong. However, at the time, it was based on best available information. That's not a mistake. Any sentence that begins "If we'd known that X..." is useless for mistake analysis. You can try to write a checklist of all the things you need to know next time you make that decision. But next time, you won't be making exactly that decision and some other piece of information will turn up missing.
Never call other people's actions mistakes.
Find the root causes. It's a root cause when it's something you can't control.
Don't retroactively call information that turned up missing a mistake.
I have been thinking the same thing. For the time being I keep a little spreadsheet with bugs I correct. The purpose of this is to make myself aware of the more common errors being made.
Hopefully I will start to see patterns and avoid making common errors over and over again.
I find that this makes my job more interesting, probably since I am an analytical person who likes to collect data and information in a structured manner.
You should try to relate the errors to the use of inadequate tools, habits, methods, or knowledge. In most cases you will find there would have been ways to catch the error earlier. Things like type safety, unit testing, code review, coding standards, better API design, better documentation, and the working environment come to mind.
I use JIRA with a custom workflow. For a bug, the last step before an issue is closed is a screen that allows me to select a "root cause". I've got a history of the root cause of every bug that's occurred in the last 3 systems I've built.

Coming up to speed on the programming environment [closed]

I'm not a full-time software guy. In fact, in the last ten years, 90% of my work was either on the hardware or doing low-level (embedded) code.
But the other 10% involves writing shell scripts for development tools, making kernel changes to add special features, and writing GUI applications for end-users.
The problem is that I find myself facing significant holes in my knowledge, often because it's been years since I've done "X", and I've either forgotten, or the environment has changed.
Every so often, there are threads on TheDailyWTF.com along the lines of "WTF: the guy spent all day writing tons of code, when he could have called foobar() in library baz". I've been there myself, because I don't remember much beyond the #include <stdio.h> stuff (for example), and my quick search somehow missed the right library.
What methods have you found effective to crash-learn and/or crash-refresh yourself in programming environments that you rarely touch?
Ask developers you know that work in the environment that you are interested in.
Search the web a lot.
Ask specific questions in relevant IRC channels (Freenode is great).
Ask specific questions on StackOverflow and other sites.
There really isn't any substitute for being "in the daily flow" of the programming environment in question. Having a good feel for the current state of the art is something you only get from experience, as I'm sure you can verify in your own areas of expertise.
I try to keep up with general news about languages I'm interested in but am not necessarily using at the moment. Being able to follow the general changes helps a lot when you have to pick a language up again.
Beyond that, I personally find it easiest to grab an up-to-date reference book and code a few basic things to get used to the environment again; e.g., as a web programmer I'd make a simple CRUD app or a quick web service/client.
For frameworks/APIs (such as a JavaScript framework or a widget library):
Quickly scan through the entire API documentation; get a glimpse of all that's out there instead of picking the first method that seems to fit your needs.
If available, glance at the source code of the framework to see how the API was intended to be used. Seeing what's behind the curtain helps. Also, some of the methods will have been used internally, showcasing their true intent.
Don't necessarily always trust existing code (Googled, from co-workers, from books) since not everyone does the due diligence to find out the most proper way to use an API. Sometimes even samples in API documentation can be out-of-date.
In newer full-featured environments like Java, .NET, and Python, there are library solutions to almost every common problem. Don't think "how can I program this in plain C", but "which library solves this problem for me?" It's an attitude shift. As far as resources, the library documentation for the three environments I mentioned are all good.
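For example (a tiny sketch in PHP, since that's the language discussed concretely elsewhere on this page; the same attitude applies in any of those environments):

    <?php
    // "Plain C" thinking: hand-roll a CSV splitter and mishandle quoted commas.
    // Library thinking: the problem is already solved for you.
    $row = str_getcsv('widget,"3,50",in stock');
    print_r($row); // Array ( [0] => widget [1] => 3,50 [2] => in stock )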
The best solution I think is to get a book on the topic / environment you need to catch up on.
Ask questions from developers who you know who have the experience in that area.
You can also check out newsgroups (Google Groups makes this easy) and forums. You can ask questions, but even reading 10 minutes of the latest popular questions for a particular topic/environment will keep you a little bit "in the know".
The same thing can go for blogs too if you can find a focussed blog. These are pretty rare though and I personally don't look to blogs to keep me "in the know" on a particular environment. (I personally find blogs most popular and interesting in the "here's something neat" or "here's how I failed and you can avoid it" or "general practice" areas.)
In addition to the answers above, I think what you are asking for will take a significant amount of your time, and you must be willing to spend that time to achieve your goals. My method would be pretty much the same as Owen's answer: get a reference book or tutorial and work through the examples, hacking in changes as you go to experiment with how any given thing works. I'd say, as a bare minimum, allocate an hour to do this every other day, at a time when you know you won't be interrupted. Any less, and you'll probably continue to struggle.
The best way to crash-learn is simple: just do it. Use Google to search for an X tutorial, open your favorite browser, and start typing away. Once you've reached a certain level of comfort with X, look at other people's work; there is lots of open source out there, and there must be somebody who has used X before. Look at how they solved certain problems and learn from that. It is an easy way to verify that you are 'on the right track', and that you're doing things or thinking in patterns that other people would consider 'common sense'.
Crash-refreshing something is much easier, since you have already been through the learning curve once. The way I do this is to keep some of the examples and projects I wrote while learning; then I can easily refresh using my own examples.
As for the library issue you mention: only improving your search skills will help with that one (although looking at how others solved the problem will help as well).
Don't try and pick up every environment.
Focus on the one that's useful and/or interesting, and then pick a few quality blogs to regularly read or podcasts to listen to. You'll pick up the current state of the environment fairly quickly.
Concrete example: I've been out of the Java world for a long time, but I've been put on a Java project in the last few months. Since then I've listened to the Java Posse podcast and read a few blogs, and although I'm far from a Java guru I've got back up to speed without too much trouble.
Just a thought. While we are working on our code, we know that we need to work very hard to optimize the critical path, but on the non-critical path we usually don't spend too much effort on optimization.
From your description you are working 90% on embedded and 10% on the rest; let's assume that in 50% of the rest you are spending more time than needed. So according to my calculation you would be optimizing about 5% of your workflow...
Of course the usual Google/SO/forums search is useful before doing something new, but investing more than that is a waste of time in my opinion, unless you want to spend some time just for fun or general education... :) But that is another story.
By the way, I'm in the same position, and the last time I needed a GUI I used MFC (because I had used it some 10 years ago :)). I perfectly understand that I would probably get better results with C# and friends, but the learning curve just doesn't justify it, especially knowing that I need to mix C code with the GUI.

Do you follow the Personal Software Process? Does your organization/team follow the Team Software Process? [closed]

For more information, see Personal Software Process on Wikipedia and Team Software Process on Wikipedia.
I have two questions:
What benefits have you seen from these processes?
What tools and/or methods do you use to follow these processes?
I went through the training and then my company paid for me to go to Carnegie Mellon and go through the PSP instructor training course to get certified as an instructor. I think the goal was to use this as part of our company's CMM/CMMI effort. I met Watts Humphrey and found him to be a kind, gentle soul with some deeply held ideas about process. I read several of his books as well.
Here's my take on it in a nutshell - it is too structured for most people to follow, assuming you follow things to the letter. The idea of estimation based on historic info is OK, particularly in the classroom setting, but in the real world where estimates are undone in a day due to the changing tide of requirements and direction, it is far less useful. I've also done Wide Band Delphi estimation and that was OK but honestly wasn't necessarily any better than the 'best guess' I'd make.
My team was less than enthusiastic about PSP and that is part of the problem - developer buy-in. My company was doing it for the wrong reason - simply to say "hey, look we use PSP and have some certified instructors!".
In the end, I've found using an 'agile' approach to be better. I have a backlog of work to do and can generally estimate it pretty well. I've been doing it long enough that I can make pretty good rough estimates on time, and frankly I don't think that the time tracking really improves things much. Perhaps in some environments it would work well, but at my place we'll keep pumping out quality software without all the process hoops that yield questionable benefits.
Just my two cents.
I got into this once, even tried using PSP Dashboard.
It's just too hard to keep up with. Who wants to use a stopwatch for all their activities? Follow Joel's advice on Painless Scheduling and Evidence Based Scheduling.
+1 this question, -1 to PSP.
I used the PSP and TSP processes by heart for 4 years (though it was at the beginning of my software career). As an idealist you will love what you are doing, and of course, yes, there are amazing results as well.
Though PSP advocates recording your defects down to the smallest detail (such as missing semicolons or typos), I was in a conversation with Mr. Watts Humphrey where a lot of people asked him about the advancement of compilers and the supposed lack of object orientation (which puzzled me - how is it lacking? - as I was an OO programmer and was using the process successfully). He gave a very good answer. It went something like, "PSP, or as a matter of fact any process methodology, is not a concept that's stuck on a single idea. The core idea is to introduce people to quality methods and analysis.
"It is always adaptive. You can tailor it to fit your needs. If you feel like you will go with the Function Point methodology, you are alright to go ahead with it. The same goes for any estimation technique. But you should do it constantly and repetitively.
"The same goes for the advancement of compilers. If you feel the WBS in the structure of PSP won't fit your development, do modify it and use it, but again, do so continuously.
"As you do this continuously, you will have collected your own historical data and will statistically be able to make predictable and accurate estimates on all the parameters."
Maybe I am giving this answer late, but when I read all the replies, I felt I wanted to share this.
As for tools, we have the Process Dashboard, the PSP Excel sheet, and so on.
For the PSP, I have seen the Software Process Dashboard, but it seems awfully difficult to use.
I learned it just this last semester in college and it worked great for me. I know that by following it to the letter, I can be confident that when I hit Compile I won't have any errors, and that when I hit Run I won't have to spend time fixing and re-compiling the program again and again until the mess is sorted out.
People complain about having to record the "missing semicolons" and such, but by the time you're on program 7, you're no longer making such trivial mistakes; instead, your defects are found in the important bits of your program. I have not had the opportunity to apply it to a real scenario yet, but I'm really looking forward to it!
I try to follow the PSP 2.1 process when possible. It really helps me keep a focus on not skipping important, but less exciting, portions of a project. Usually this is design and design review for small projects.
To keep track of time you can use the PSP Dashboard, which has a bunch of built in features and scripts that help you follow the process.
If you are just looking for a time-tracking tool, I also like http://slimtimer.com. It can also do some decent reports.
I've been using PSP for the last six months.
It is time-consuming: for my estimations, I had to spend 7% of my time filling in forms.
It is frustrating to have to record the mistake "missing semicolon" over and over again.
But on the other hand, as I got used to the process, it became valuable: I started to see which errors I was mainly making, and I started "naturally" avoiding them.
It also makes you "review" your code so you can see if there's any problem before hitting the compile button.
For tools I recommend using Timetracker: http://0xff.net/
I recommend at least trying PSP for a couple of months, because you will create some habits that help reduce the time you spend compiling and correcting minor bugs.
I have completed the PSP course; the next one is supposed to be TSP, which is meant for team dynamics, as others have said. I have mixed feelings about PSP (mostly negative, though the results were interesting), and I arrived at the following conclusions:
First of all, my main source of frustration is that the design templates are way too tedious and impractical. Change them for UML and BPMN; tell your instructors from the start, and IMPOSE IF NECESSARY. The book itself says that the design templates are for people who don't know UML or don't want to learn it.
Secondly, estimations were the only valuable part for me. The book itself says that you can use other proxies apart from lines of code, and it even tells you how to know how statistically relevant they are. My take on counting lines of code is that a tool/plugin connecting to your VCS (git, mercurial) should exist to automate the building of your personal database; otherwise it is too tedious to track base/added/reused parts. See the sketch below.
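No such plugin is named in the book, so the following is only a sketch of what I mean (the author filter and CSV file name are placeholders): pull added/removed line counts out of git and append them to a personal size log.

    <?php
    // Hypothetical sketch: accumulate added/removed line counts from git
    // instead of counting lines by hand for the PSP size database.
    // `git log --numstat` prints "<added>\t<removed>\t<path>" per file per commit.
    $log = shell_exec('git log --author="Your Name" --numstat --pretty=format:%H');
    $added = $removed = 0;
    foreach (explode("\n", (string) $log) as $line) {
        if (preg_match('/^(\d+)\t(\d+)\t/', $line, $m)) {
            $added   += (int) $m[1];
            $removed += (int) $m[2];
        }
    }
    // Append today's totals to a CSV acting as the personal database.
    file_put_contents('psp-size.csv', date('Y-m-d') . ",$added,$removed\n", FILE_APPEND);
    echo "Added: $added, removed: $removed\n";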
The process itself is nice, but not applicable to big projects. Why? Because it just doesn't cope with iterations. In the real world, due to requirement changes, you will always have to reiterate on a project. You can still apply the discipline to small programmatic tasks, that is: plan, design, review your design (have design standards and a small checklist that you can memorize), code, review your code (have clear coding standards and a small mental checklist you can memorize), test, and ponder your mistakes. Any experienced programmer will recognize these as eventually intuitive steps. My recommendation in real practice: follow the process, but don't document anything other than your design, and if you do implement unit tests, document them well.
This process might actually be worth following, and practical, for real-time systems programming where there is absolutely no room for mistakes; otherwise it doesn't feel worth it.
If you are seeking a methodology to organize your work and improve focus, try GTD (Getting Things Done) and the Pomodoro Technique first.
If you have obsessive-compulsive disorder you might actually enjoy PSP =).
My final recommendation, learn from it as a reference, might lead to better and more practical stuff. This thing is just too academic.
P.S.: R.I.P. Watts Humphrey
I followed the PSP for a few weeks some years ago, because my group wanted to experiment with it. I found it very disappointing and even irritating to work with. It exhausted my patience. My main negative points are:
Ridiculous emphasis on things like typos or missing semicolons.
Impractical forms that you have to fill in by hand.
Focus on procedural programming instead of OO.
Estimating involves counting the number of loops, functions, etc.
I found it a massive waste of time. I'd rather choose to leave this profession than to be forced to follow the PSP.
Related material: My answer about a PSP book in the "What programming book would you NOT recommend to developers" question.
I used it during university but at work we really don't have a process at all. Only recently have we started using version control.
My experience with it was that it seemed far too tedious to be useful. If it's not automated, then it can go away.