Anyone tried submitting hs_err files to Sun?

I run a lot of java code on my servers, and occasionally I get a JVM crash, accompanied by a crash dump hs_err_pid file.
Lately I've decided to try to be a better netizen, so I examined the latest crash log, made sure it was indeed the latest JVM, and that the crash was not caused by an external library, and then I tried submitting the file to Sun's bug database.
However, it seems that the entire submitting process is geared towards preventing you from submitting bug reports, making sure you check the box to verify you understand that THIS IS NOT A PLACE TO RECEIVE SUPPORT, and forcing you to fill in all this information that they could actually get from the crash file, and some more completely irrelevant information like my company name and Sun account ID.
So my question is twofold:
Is there some backdoor through which I can just say, "look, here's a crash log, do what you like with it"?
Should I even bother? It seems that Sun is hinting that they have enough bug reports as it is, and they don't really need people sending them more crash log files.
What should I do?
EDIT: In case I wasn't clear enough, I'm just trying to help Sun and the community. I don't expect any fixes, explanations or support.
On the other hand, I don't care enough about these crashes to put any effort into reproducing them or investigating them at all.

I do, but only with the understanding (hope?) that this is only for the greater good. I have never received a response or follow-up.
"However, it seems that the entire submitting process is geared towards preventing you from submitting bug reports, making sure you check the box to verify you understand that THIS IS NOT A PLACE TO RECEIVE SUPPORT, and forcing you to fill in all this information that they could actually get from the crash file, and some more completely irrelevant information like my company name and Sun account ID."
My read is that it is geared toward 'setting expectations'. I'm sure there are, despite the process you complain about, zillions of folks who submit a report and then expect a personal email from James Gosling himself in under two hours with a patch or workaround.
The contact info stuff is probably for a hypothetical Sun engineer who actually wants to contact you about the report with other questions.
Don't forget that Sun has paid support programs, and that this crash report system is not where Sun customers are supposed to get support.
So, to directly answer your questions:
Use the front door. Why wouldn't you?
Yes, you should bother. Just don't expect any feedback.


Hardware Serial Number Discussion. Licence protection

I am working on an application which will get the HDD serial number, and then I will use that serial number for licence (CD-key) registration with the product. Now, the problems I can see coming:
The user has 2 HDDs, and once my application gets its serial from the first HDD, it will register with it. So what if the user later changes the order of the HDDs? What if the second HDD becomes the master and the first one becomes the slave? This could be solved by getting both and combining them, but what if he later removes one? :D
What if the user's HDD dies and he buys a new one? It's still the same PC, only with another HDD, so the licence won't be valid anymore just because there's a different HDD.
Is it possible to fake it? For example, I am using VB.NET 2010 and the application runs on the .NET Framework, so there is some DLL which is responsible for getting the HDD serial. Would it be possible to replace this DLL (crack it) so it returns some hardcoded serial?!?
Would it be possible to get the processor serial? That would be much better, but can it be done? Does the processor even have a serial? It probably does, but is it possible to get it? And the same question as above: could it be faked by changing a DLL or something?
Any other suggestions or experiences?
I've seen there are more questions like this, but I couldn't find any answers, so now I'm asking here!
------ EDITED/ADDED: -------
As discussed below, I forgot that all .NET code can be decompiled in a few seconds! So...
Making my own installer. Why?
If I make an installer in which you enter a serial, and the software is only installed if the serial is OK, what does that accomplish? It extracts my software to your computer, and again you have a .NET exe which you can easily decompile and crack, so where is the point in making an installation with a serial?! And if my software is "protected" with some obfuscator, then an installation with a serial is unneeded; I could simply include serial registration in my software and store registered = 1||0 in some boolean.
I got an email from one person here (BTW, dunno where you got my mail :)) and he says some smart things about why some of you people don't respond to my question and this discussion: "scared that others will see my code and how bad it is." So people just don't want to spend time on this. Well, that's not the problem. I know my code is a big "minestrone", a big mess, lots of variables, some in English and some in Croatian, and so on. Well, my software works, that's what's important, and I know I suck; we all suck; everyone knows something (more or better) than the other one. Anyway, that's not the problem. The problem is that I don't want the software to be open source. Let's say my software is "Photoshop", and now someone downloads it, clicks there and there, has the whole code, and can easily copy, paste, change a few things and, no problem, he's made a good application :)
A custom compiler? Anyone have experience with that? Would it be OK for some time? :)
What other solution or language would be good to use in the future to avoid this "open source" .NET? I've been looking around, and VB.NET, C#, and managed C++ are all based on .NET, so it's all the same. VB6, which I love, is the same thing again. They can all easily be decompiled! What language would not be so easy to decompile? Should I switch to assembler? :D I joke, I hope! :P
Maybe I'm just too stressed, too much work! Dunno, you decide :)
PLEASE READ MY QUESTION, AND PLEASE DON'T ANSWER WITH SOMETHING LIKE "PIRACY CANNOT BE STOPPED, BLAH BLAH" AND THINGS LIKE THAT. THAT WASN'T MY QUESTION! THANK YOU!
Sorry about the big bold letters, but some people read just the title and then answer with stupidities. If you want to talk about it, then read the question and write; otherwise, please don't post stupidities.
Let me first answer your questions:
If the order of the HDDs changes, your application could still find that serial number within the system. In either case, though, I would resort to a scheme that uses the device holding the system partition, or similar.
If the HDD dies, the user will be in trouble. There is no good solution to that as long as you insist on the HDD serial as the source of uniqueness for the user's system.
It's absolutely possible, yes. At different levels, though. A cracker would always choose the simplest method.
Yes. I'm afraid that will only work with unmanaged code, though. See Wikipedia. And yes, this could be circumvented again by DLL placement (see my comment on the question).
Now let me give you an advice that worked fine for me. Use the SID of the machine account (not to be confused with SYSTEM, which has a well-known SID). And before you counter with NewSid (which, by the way has been retired by MS), this is much more effort to change, especially in domain environments and can have very nasty and unforeseen effects. Therefore if you want to tie your application to a Windows installation, the SID will be sufficient. The SID has the same advantages as a UUID you could create, but it's not as easy to manipulate as a UUID that you store in the registry or a file.
Oh, and before I forget to mention it. Yes, even using the SID can be "cracked" in various ways. But it balances convenience for the user with your demand for security.
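A minimal C# sketch of reading it, for illustration (the question uses VB.NET, but the same System.Security.Principal types exist there). One caveat: for a domain account, AccountDomainSid yields the domain SID rather than the machine SID, so in a domain environment you may want to resolve a local account instead.

    using System;
    using System.Security.Principal;

    class MachineSidDemo
    {
        static void Main()
        {
            // SID of the account the process runs under.
            SecurityIdentifier user = WindowsIdentity.GetCurrent().User;

            // AccountDomainSid drops the trailing RID, leaving the SID of the
            // issuing authority: the machine SID for local accounts, the
            // domain SID for domain accounts.
            SecurityIdentifier machineOrDomainSid = user.AccountDomainSid;

            Console.WriteLine("Binding identifier: " + machineOrDomainSid);
        }
    }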
Yes, you have to be aware of that. You'll need several fallback methods to take care of this.
You have to be aware of that as well.
Everything is fakeable with some energy behind it. However, why fake such an ID if you can simply manipulate the program itself? All .NET code can be disassembled and manipulated.
I think this is possible as well, but would have the same problem behind it.
Other suggestion:
Just because there is piracy, don't make the experience bad for your customer. Use something that is reusable (like a serial number or keyfile), invest in a good obfuscator to make it harder for somebody to inspect your code, but above all: make your application stand out so people buy it. And even though you didn't ask for it, I have to say it: you can't stop piracy by enforcing Orwellian surveillance of your program. That will drive customers away, as it is a pain in the *ss to work with such an application. With a serial or keyfile you still have some sort of protection, and the customer likes it because it is easy to use; he doesn't have to call you or write a support ticket if his computer fails or the stars align unfavourably. Pirates will break it eventually, but your customer is happy, and that is what counts.
Anything you rely on which is in userland can and will be spoofed if it is worthwhile to the end user/attacker. So locking the licence to an HDD serial number will not put off attackers, but it will seriously upset your customers.
The same goes for processor serial numbers - it is too easy to pop some code inline to change what your application will read.
Your only reasonable bet will be dongles, i.e. specific hardware, or a way to get users to register and run with an online connection, so you can validate them using elements you control (although in saying that, if your app is high-value enough, expect the dongle to be hacked/replicated too!)
Your biggest problem may be overdoing the security - if you get it wrong in any way you will alienate your customer base.
People regularly upgrade failed hard drives, or those which are too small, as well as most other components in their computers. If you stop them using your product, even for a couple of days, they are likely to look elsewhere!
You can do what you are suggesting, but there are issues. What you are suggesting is called "machine binding" in the licensing world. There are commercial tools that do this for you (disclaimer: I work for one such provider Wibu-Systems). What YOU are proposing has some pros and cons:
Pros: requires no separate hardware (dongle), you can roll your own, easy solution to implement at a basic level.
Cons: can be cracked in a matter of minutes, will create problems for users when they change the HW config or move the app to a new PC, and rolling your own will introduce the opportunity for new bugs in an area you apparently have no prior experience with.
Why not use a commercial solution? Would you write your own setup program, too? How about your own compiler, linker, and debugger?

How do you solve a problem that is unreproducible, random, and where changes are not immediately testable?

Thought I would throw this one out there and see what other people's experiences have been like.
I'm experiencing an issue with a system at work where it stops processing jobs in a queue and 'jams' so to speak. Once the services are restarted the software processes the queue and everything returns to normal.
In my experience so far, I cannot for the life of me figure out what is causing these stoppages. That, and I cannot reproduce the stoppage myself. The queue fails at all different intervals, sometimes running for a month straight, other times failing as close together as twice in 1 day. I have since involved two different vendors and various colleagues within the department and everyone is stumped, and has been for several months.
Since I started, we've isolated the processing to a single server and cranked up the logging, which we've sent to the vendors. Neither has any idea what the problem is.
We've updated a few settings here and there and upgraded client and server pieces, but we have no idea if the things we are doing are contributing to an overall solution.
So I have a problem that appears to be unreproducible, random and untestable.
Has anyone been involved with any similar situations?
What are some of the ways to solve a situation like this?
Any shared input or experiences would be great.
Cheers,
EDIT:: Cranked up the logging, updated all of the components to the latest version, and made sure proper anti-virus exclusions were done and so far it has not jammed in over a month!
Use a logging framework that can be turned on in production. You might have too much logging initially, but it should help narrow down the problem, and as you get closer you can narrow the scope of the logging and at the same time increase the verbosity (is that a word?) of the remaining log statements.
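To sketch what "can be turned on in production" looks like in practice, here's a hedged C# example using System.Diagnostics.TraceSwitch; the switch level is read from the config file, so verbosity can be raised on the affected server without redeploying (the switch name and messages are made up):

    using System.Diagnostics;

    class QueueWorker
    {
        // The level comes from App.config, e.g.:
        //   <system.diagnostics>
        //     <switches>
        //       <add name="QueueTrace" value="4" />  <!-- 4 = Verbose -->
        //     </switches>
        //   </system.diagnostics>
        static readonly TraceSwitch QueueTrace =
            new TraceSwitch("QueueTrace", "Queue processing diagnostics");

        public void ProcessJob(int jobId)
        {
            Trace.WriteLineIf(QueueTrace.TraceInfo, "Dequeued job " + jobId);

            // ... actual job processing ...

            Trace.WriteLineIf(QueueTrace.TraceVerbose, "Finished job " + jobId);
        }
    }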
In addition to the logging pointed out by Kelly, there is the possibility of a deadlock taking place, since things seem to stop. One option, if this is a Java application, is to use jconsole and connect to the JVM instance. jconsole has a detect-deadlock option which can provide very valuable information when the hangup occurs.
If this is not a Java application but perhaps a .NET application, you could make use of this technique.
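Whatever the platform, another cheap way to catch a jam in the act is a watchdog thread that notices when the queue stops making progress and logs loudly at that moment; that tells you exactly when to grab thread dumps or other state. A minimal C# sketch, all names illustrative:

    using System;
    using System.Diagnostics;
    using System.Threading;

    static class QueueWatchdog
    {
        // Updated by the worker each time a job completes.
        static long _lastProgressTicks = DateTime.UtcNow.Ticks;

        public static void JobCompleted()
        {
            Interlocked.Exchange(ref _lastProgressTicks, DateTime.UtcNow.Ticks);
        }

        public static void Start(TimeSpan stallThreshold)
        {
            var watchdog = new Thread(() =>
            {
                while (true)
                {
                    Thread.Sleep(TimeSpan.FromSeconds(30));
                    var last = new DateTime(Interlocked.Read(ref _lastProgressTicks));
                    if (DateTime.UtcNow - last > stallThreshold)
                    {
                        // The moment to capture state: thread dumps, queue
                        // depth, perf counters, whatever you can get.
                        Trace.TraceWarning("No job has completed since {0:u}", last);
                    }
                }
            }) { IsBackground = true };
            watchdog.Start();
        }
    }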

What should a tester report?

I have a web site I am building for a client. I now have a tester on the project with me.
I feel testers are needed. REALLY! I cannot test my own code. I also appreciate the value of a new set of eyes. But what deserves reporting?
It is easy to say everything should be reported, but I don't have someone between me and the tester to filter out the unimportant requests. The tester does not know the system nor the target user well. She is assigning me tasks directly, rather than going through the project manager. I think this will change soon, but until it does, what do you recommend? There seems to be a belief that our users have NEVER used the internet before at all, and that they are as dumb as rocks.
The problem I am having is that EVERYTHING the tester suggests is being accepted automatically and assigned to me.
I have many cases that make me drop my jaw and say, "Really? Are you serious? This deserves to be an issue?"
Ex: Need to add text at top of page that says "* = Required" for required fields.
Have you ever felt this way? How did you deal with it?
For now, I am just doing as I am told, but I am making it clear I do not agree.
It sounds to me like your tester is doing the right thing. You can't assume any level of user expertise when testing an application. If a user can break something, they will.
You and your tester need to work out a severity scale. The outliers (those that anybody with Internet experience could probably work around/would never hit) would be considered low priority and sit on the back burner until you knock out the high priority items.
...nevertheless, those outliers should still be logged, because they can definitely come back to bite you in the ass in the end.
You need to add priorities for your issues. This will allow you to do the important issues first and the cosmetic issues last. Here are example priorities from Jira:
Priority 1 - a reproducible crash; issue blocking any further testing or development of a specific feature; loss of user's persistent data; huge memory leak
Priority 2 - a major issue that must be fixed before the product is released; prevents users from using a feature; negatively affects partner; significant memory leak in frequently used functionality
Priority 3 - a minor issue that should be fixed before a product is released; does not prevent users from using a product; highly visible usability issue; small memory leak in rarely used functionality
Priority 4 - a purely cosmetic issue; doesn't affect functionality
Actually it sounds like your tester is doing the right thing (and the text for "* = required" is a very good idea).
In addition to the suggestions about prioritizing reports, I would suggest that you categorize the reports as to whether they refer to user experience or functionality.
You and the tester will never exactly agree on what "needs" to be reported. Just set the priority on issues correctly, and get on with fixing the high-priority stuff first.
One thing you absolutely do not want to do is to discourage the tester from filing bugs. That'll come back to bite you when something ships totally broken, and they say "I thought that was just how it worked".
Do make sure that you're communicating the development schedule and status properly, so they don't waste time testing features that aren't sufficiently complete.
I would report to the client what each change will cost in terms of time and money. Things that are legitimate bugs you'll probably need to fix on your own time (unless your contract says otherwise). Things that are design / subjective issues you should be able to assign a cost to. Let the client know what it is going to cost them and they can decide if they want to proceed or not.
Hopefully you've got some sort of a project specification that the client has signed off on so that you know when the project is complete and what sort of things are not included in the project scope. If not, you might have a bit of a fight on your hands. For changes that you think are outside of the project scope, you might need to compromise - maybe bill them at a cheaper rate or split the cost with them. If you're in that situation it's a good learning experience to get everything documented in the project specification so that there is no question about what falls outside of the project scope. I've been there - one experience like this is enough to teach you to put more work into your specifications.
Report everything and triage. After a bit of time, she'll start to understand what gets past triage and what doesn't. Humans can learn; teach.

How to get users to pay attention to problems?

We occasionally need to notify users about warnings or problems. But often times, especially if it's a common problem, users will just dismiss the warning and continue. Often times users won't even remember seeing the warning, but we check their logs and see that several were displayed. So, how do you get users to pay attention when you're trying to tell them something important?
This isn't as simple as forcing users to resolve all problems before allowing them to save. They often need to save data that isn't strictly okay by our business rules for various reasons (usually for problems that can't be solved right away, or at all).
We've got a better warning/error handling system in mind that I think will help a lot, but I want to see what others have done.
If you want users to pay attention to warnings, use them in moderation!
The big problem with the UAC in Vista is that people get so many notifications that they stop reading who exactly is requesting access to what; they just give permission without thinking.
Another example is the delete confirmation in explorer when sending files to the recycle bin. I got so used to just hitting 'Ok' immediately after pressing 'delete', that I missed the fact that the dialog was telling me that the file would not be moved to the bin, but deleted immediately, for whatever reason.
My personal fix: I disabled the delete confirmation for the recycle bin. If something can not be moved to the bin, I still get a message, and this time I know that it might be important, so I pay attention.
Conclusion: Don't spam the user with messages, or the important warnings will get lost in the noise.
The quality of your warning will not prevent users from submitting invalid data. If you allow invalid data to be submitted, it will be.
If you have data that must be submitted to a rules system, then that data must be valid before it is submitted. However, allowing users to save their work is a separate issue. You should allow users to save their work, then submit the data to the rules engine when it is valid.
The fundamental problem is that users don't like to read, they just want to be left alone to do their work :).
The best way to combat this is the following:
Don't pop up a window unless absolutely necessary
If you do, make the error or warning message as short and succinct as you possibly can
Long error/warning messages simply won't get read. The user will get to about the fifth word and think "this is taking too much time, I just want to get back to work".
My advice boils down to three things.
Reevaluate what you think is important for the user to know.
Don't be lazy and ask the user to resolve what your program can resolve for itself.
Don't interrupt what the user is doing with stupid (and yes, they are stupid) messages.
If you have a form with required data, then color-code the field as red or highlight it with an asterisk to indicate it's required. Disable the "OK" or "Confirm" button until they fill out all required fields.
For fields with incomplete or inconsistent data, bring up a tooltip or color-code the field so the user knows that something may be wrong. You could also display the list of warnings prominently somewhere on your form. But don't stop the data entry. You'll just frustrate and anger your users.
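A hedged WinForms sketch of that pattern (the fields and layout are made up; the same idea applies to web forms):

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    class RequiredFieldsForm : Form
    {
        TextBox name = new TextBox { Location = new Point(10, 10) };
        TextBox email = new TextBox { Location = new Point(10, 40) };
        Button ok = new Button { Text = "OK", Location = new Point(10, 70), Enabled = false };

        public RequiredFieldsForm()
        {
            name.TextChanged += Revalidate;
            email.TextChanged += Revalidate;
            Controls.AddRange(new Control[] { name, email, ok });
        }

        void Revalidate(object sender, EventArgs e)
        {
            // Tint empty required fields and gate the OK button on validity.
            name.BackColor = name.Text.Length == 0 ? Color.MistyRose : SystemColors.Window;
            email.BackColor = email.Text.Length == 0 ? Color.MistyRose : SystemColors.Window;
            ok.Enabled = name.Text.Length > 0 && email.Text.Length > 0;
        }

        [STAThread]
        static void Main() { Application.Run(new RequiredFieldsForm()); }
    }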
I must admit that I too often click on "OK" or whatever I'm conditioned to do for a dialog to go away without thinking. Usually this occurs when there are just too many of them.
Without claiming to be a psychologist of any kind, I think it is natural to pay attention to unusual things and filter away repetitive things.
With that in mind, it may be worth making less important dialogues less intrusive so that the really important ones get more attention.
I think toaster messages, and the way Google handles messages in its online apps, are really nice examples of how to notify a user of something inessential.
--EDIT--
Now that I re-read my post, I remember reading this in "Don't Make Me Think".
A brilliant little book (a few tens of pages) that's full of nice and easy-to-understand usability advice. Somewhat focused on online usability, but definitely applicable to offline applications too.
This is what we've got planned. Essentially, create something Bugzilla-ish for storing these errors/warnings/whatever. But it also goes hand in hand with some of the other answers.
Instead of using a simple MessageBox, display warnings/errors in a Visual Studio-like error window. As long as there are problems, they'll be displayed in this window.
If the data is saved, save all warnings/errors to the database. Now anyone can see what the current issues are - bonus! Also, those problems can be loaded from the database instead of detecting them in the app all the time, which will help a lot - some problems are not trivial to detect.
Allow users to perform several actions, like:
Acknowledge the problem, so it is no longer displayed.
Assign the problem to another user
Flag the problem as "not really a problem"
Set a "must be solved by" date
(probably others, the design hasn't been fully thought out yet)
Log all of these actions to the database, so we have accountability
That's it in a nutshell. Now problems stick around, so they're in the users' faces until they're solved. The problems can be tracked, so we can tell where the ball was dropped if we get bitten. I hope it works!
Though I never got around to implementing this at a previous site, I wanted to create a custom dialog box where users would have to check a box stating that they have read and acknowledged the message (and then log that response). This was for an ISO-xxxx company so this kind of bureaucracy was a logical response to these types of mistakes.
My other, much more sinister, idea was to make "No" or "Cancel" the default options. Eventually they would get the Tab-Enter keystrokes down pat and then you would just switch it back.
Break the system!
It has honestly been my experience that if you don't want an end user to do something without explicitly understanding it, stop them from doing it...
As seriously annoying as the whole "Windows error/warning messages" thing gets, I never take notice until a program tells me I can't do something... then I am forced to ask myself "Why not?"
Time to google the answer... or RTFM
I know that it is not always feasible to use this approach, but if you can... they will listen!
I like programs that hint that there's a problem while ignoring it as long as possible - which sounds very like what you're striving for. One thing I've been thinking about (but vaguely, since I haven't had a use for it) is putting a status indicator for errors/warnings (a bit like the omnipresent throbber of a web-browser, but for errors). This icon would change state, a bit like a traffic light, to show that the program has problems that will have to be addressed sooner or later - perhaps yellow for warnings if the problem with the data could be corrected later and isn't going to cause any major problems, red for any problem that is going to have to be fixed before they complete the current job (for form data, that would mean the whole transaction, not the current form). Obviously the colours wouldn't be enough, there would have to be some support for colour-blind people, but you get the idea. Clicking the indicator would bring up a list of the problems (and perhaps explanations as to why that is a problem - so that people can point out when the code's assumptions are unhelpful or wrong), and selecting a problem would allow you to jump to the field where it can be fixed.
One thing you should probably do, whatever method you go with in the end, is to look through your warnings and work out whether they're actually necessary. I've seen far too many programs that warn me about perfectly reasonable input that is then accepted, or warn me about the usual behaviour of the program. That's the sort of thing that helps condition people to click through warnings. If you have logs of the warnings, you might start there - Why are people clicking through them? They might be conditioned, or it might be that there genuinely isn't a problem, and someone hasn't told you that things have changed.
I quite like the Firefox method when installing plugins: the OK button is disabled and displays a countdown for 5 seconds. After that the user can choose to ignore it.
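That pattern is easy to reproduce. A minimal WinForms sketch in C# (the 5-second delay mimics Firefox; everything else is illustrative):

    using System;
    using System.Windows.Forms;

    class DelayedConfirmDialog : Form
    {
        Button ok = new Button { Enabled = false };
        Timer countdown = new Timer { Interval = 1000 };  // one tick per second
        int secondsLeft = 5;

        public DelayedConfirmDialog()
        {
            Text = "Are you sure?";
            UpdateButton();
            countdown.Tick += (s, e) =>
            {
                if (--secondsLeft <= 0)
                {
                    countdown.Stop();
                    ok.Enabled = true;  // only now can the user confirm
                }
                UpdateButton();
            };
            Controls.Add(ok);
            countdown.Start();
        }

        void UpdateButton()
        {
            ok.Text = secondsLeft > 0 ? "OK (" + secondsLeft + ")" : "OK";
        }

        [STAThread]
        static void Main() { Application.Run(new DelayedConfirmDialog()); }
    }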
For web applications, the alert() and confirm() javascript methods, while somewhat basic, achieve the effect of either preventing users from doing something or making sure that they clearly agree to something that they have been warned about.
For other situations, where the action will not cause considerable disruption in the business processes, we often display a small warning box at the top of the page after, say, a form is submitted.
For example, our applications require location validation in several places (valid city/state/zip).
If the location is absolutely critical, we will make it required on the form.
If the location is required for some aspects of the application, we will use the confirm() to make sure they understand that they will not be able to use certain features without a valid location.
In some cases, we use a default location. In that case, we provide the message/warning box at the top of the next page indicating that a default location is being used.
I've found that if you're producing log messages for your own use (even if that use directly benefits the users themselves), the only way to get users to report problems is to have the application do it for them.
In the case of dealing with user input that might be wrong, have you considered using something like the red squigglies used by spell checking or some kind of highlighting of the problem areas as the user does their work? Most users have been trained to ignore dialogs by using buggy software, but that kind of message might make it clear that the error is the user's to fix.
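In WinForms, for instance, the built-in ErrorProvider component gives you exactly that kind of inline, non-blocking cue: a small icon next to the offending field instead of a dialog. A minimal sketch (the validation rule is made up):

    using System;
    using System.Windows.Forms;

    class InlineValidationForm : Form
    {
        TextBox zip = new TextBox();
        ErrorProvider errors = new ErrorProvider();

        public InlineValidationForm()
        {
            zip.Validating += (s, e) =>
            {
                // Show or clear the error icon; no modal dialog, no
                // interruption, but the problem stays visible.
                bool looksValid = zip.Text.Length == 5;
                errors.SetError(zip, looksValid ? "" : "ZIP code must be 5 digits");
            };
            Controls.Add(zip);
        }

        [STAThread]
        static void Main() { Application.Run(new InlineValidationForm()); }
    }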
Do you have a good idea why each of the exceptional situations occurs? What do you try to achieve with each of these messages:
make user review the data for obvious typos or mistakes
make someone else review the data at a later stage when more information is available
inform this user and anyone else looking at the data at a later stage about any assumptions made
make sure user understands the consequences of their actions
Can any of these goals be achieved more effectively in a different way?
Some ideas (none of them automatically qualify as a silver bullet):
Keep messages short and relevant, exclude any language that does not contribute additional info (such as "please" etc), tell users what is expected of them (i.e. instead of "Post code is empty" use "Enter post code".).
Use language that is understood by the users, always give sufficient information, try to be as specific as possible.
Use different looking messages for different types of warnings and errors (use font, colour, imaging, possibly animation and sound).
Revisit the entire process, so that someone has to process any info submitted with warnings later on.
Visualise warnings the next time the info is brought to the screen (i.e. highlight problematic areas) so that they can be resolved later, when more info is available.
Add a sign off to the warnings, for instance request a user to enter their password each time they need to dismiss a warning.
Make actions undo-able, so you don't really need the warnings
Don't try to solve this with programming alone. See if you can change the data input process.
Use colors and icons.
Green - everything is ok (or confirmation something happened as expected)
Yellow - Warning. Your user may or may not want to look into the issue.
Red - Error. Something that requires user interaction to resolve.
I would also suggest (as others have on this thread) using these sparingly.

What do you do with a developer who does not test his code? [closed]

One of our developers is continually writing code and putting it into version control without testing it. The quality of our code is suffering as a result.
Besides getting rid of the developer, how can I solve this problem?
EDIT
I have talked to him about it a number of times and even given him a written warning.
If you can do code reviews -- that's a perfect place to catch it.
We require reviews prior to merging to iteration trunk, so typically everything is caught then.
If you systematically perform code reviews before allowing a developer to commit the code, well, your problem is mostly solved. But this doesn't seem to be your case, so this is what I recommend:
Talk to the developer. Discuss the consequences for others on the team. Most developers want to be recognized by their peers, so this might be enough. Also point out that it is much easier to fix bugs in code that's fresh in your mind than in weeks-old code. This part makes sense if you have some form of code ownership in place.
If this doesn't work after some time, try to put in place a policy that will make committing buggy code unpleasant for the author. One popular way is to make the person who broke the build responsible for the chores of creating the next one. If your build process is fully automated, look for another menial task for them to take care of instead. This approach has the added benefit of not pinpointing anyone in particular, making it more acceptable for everybody.
Use disciplinary measures. Depending on the size of your team and of your company, those can take many forms.
Fire the developer. There is a cost associated with keeping bad apples. When you get this far, the developer doesn't care about his fellow developers, and you've got a people problem on your hands already. If the work environment becomes poisoned, you might lose far more - productivity-wise and people-wise - than this single bad developer.
As a developer who rarely tests his own code, I can tell you the one thing that's made me slowly shift my behavior...
Visibility
If the environment allows pushing code out, waiting for users to find problems, and then essentially asking "How about now?" after making a change to the code, there's no real incentive to test your own stuff.
Code reviews and collaboration encourage you to work towards making a quality product much more than if you were just delivering 'Widget X' while your coworkers work on 'Widget Y' and 'Widget Z'
The more visible your work is, the more likely you are to care about how well it works.
Code review. Stick all of your devs in a room every Monday morning and ask them to bring their proudest code-based accomplishment from the previous week along with them to the meeting.
Let them take the spotlight and get excited about explaining what they did. Have them bring copies of the code so other devs can see what they're talking about.
We started this process a few months ago, and it's astonishing to see the amount of subconscious quality checking that takes place. After all, if the devs are simply asked to talk about what they're most excited about, they'll be totally stoked to show people their code. Then, other devs will see the quality errors and publicly discuss why they're wrong and how the code should really be written instead.
If this doesn't get your dev to write quality code, he's probably not a good fit for your team.
Make it part of his Annual Review objectives. If he doesn't achieve it, no pay rise.
Sometimes, though, you do just have to accept that someone is not right for your team/environment. It should be a last resort and can be tough to handle, but if you have exhausted all other options it may be the best thing in the long run.
Tell the developer you would like to see a change in their practices within 2 weeks or you will begin your company's disciplinary procedure. Offer as much help and assistance as you can, but if you can't change this person, he's not right for your company.
Using Cruise Control or a similar tool, you can make checkins automatically trigger a build and unit tests. You would still need to ensure that there are unit tests for any new functionality he adds, which you can do by looking at his checkins.
However, this is a human problem, so a technical solution can only go so far.
Why not just talk to him? He probably won't actually bite you.
Make him "babysit" the build, and become the build manager. This will give him less time to develop code (thus increasing everyone's performance) and teach him why a good build is so necessary.
Enforce test cases - code cannot be submitted without unit test cases. Modify the build system so that if the test cases don't compile and run correctly, or don't exist, then the entire task checkin is denied.
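The gate only works if "has tests" is mechanically checkable, e.g. the CI step runs the NUnit console runner over the test assembly and a nonzero exit code rejects the integration. A hedged sketch of what the required artifact looks like (class and test names invented; the trivial class under test is included so it compiles on its own):

    using NUnit.Framework;

    // Trivial class under test, included so the sketch is self-contained.
    public class InvoiceCalculator
    {
        private decimal _total;
        public void AddLine(decimal amount) { _total += amount; }
        public decimal Total { get { return _total; } }
    }

    [TestFixture]
    public class InvoiceCalculatorTests
    {
        [Test]
        public void Total_AddsLineItems()
        {
            var calc = new InvoiceCalculator();
            calc.AddLine(10m);
            calc.AddLine(5m);
            Assert.AreEqual(15m, calc.Total);
        }
    }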
-Adam
Publish stats on test code coverage per developer; do this after talking to him.
Here are some ideas from a sea shanty.
Intro
What shall we do with a drunken sailor, (3×)
Early in the morning?
Chorus
Wey–hey and up she rises, (3×)
Early in the morning!
Verses
Stick him in a bag and beat him senseless, (3×)
Early in the morning!
Put him in the longboat till he’s sober, (3×)
Early in the morning!
etc. Replace "drunken sailor" with a "sloppy developer".
Depending on the type of version control system you are using, you could set up check-in policies that force the code to pass certain requirements before a check-in is allowed. If you are using a system like Team Foundation Server, it gives you the ability to specify code-coverage and unit-testing requirements for check-ins.
You know, this is a perfect opportunity to avoid singling him out (though I agree you need to talk with him) and implement a Test-first process in-house. If the rules aren't clear and the expectations are known to all, I've found that what you describe isn't all that uncommon. I find that doing the test-first development scheme works well for me and improves the code quality.
They may be overly focused on speed rather than quality.
This can tempt some people into rushing through issues to clear their list and see what comes back in bug reports later.
To rectify this balance:
assign only a couple of items at a time in your issue tracking system,
code review and test anything they have "completed" as soon as possible so it will be back with them immediately if there are any problems
talk to them about your expectations about how long an item will take to do properly
Pair programming is another possibility. If he is paired with another skilled developer on the team who does meet quality standards and knows the procedure, then this has a few benefits:
With an experienced developer over his shoulder he will learn what is expected of him and see the difference between his code and code that meets expectations
The other developer can enforce a test first policy: not allowing code to be written until tests have been written for it
Similarly, the other developer can verify that the code is up to standard before it is checked in, reducing the number of bad check-ins
All of this of course requires the company and developers to be receptive to this process which they may not be.
It seems that people have come up with a lot of imaginative and devious answers to this problem. But the fact is that this isn't a game. Devising elaborate peer pressure systems to "name and shame" him is not going to get to the root of the problem, i.e. why is he not writing tests?
I think you should be direct. I know you say that you've talked to him, but have you tried to find out why he isn't writing tests? Clearly at this point he knows that he should be, so surely there must be some reason why he isn't doing what he's been told to do. Is it laziness? Procrastination? Programmers are famous for their egos and strong opinions - perhaps he's convinced for some reason that testing is a waste of time, or that his code is always perfect and doesn't need testing. If he's an immature programmer, he might not fully understand the implications of his actions. If he's "too mature" he might be too set in his ways. Whatever the reason, address it.
If it does come down to a matter of opinion, you need to make him understand that he needs to set his own personal opinion aside and just follow the rules. Make it clear that if he can't be trusted to follow the rules then he will be replaced. If he still doesn't, do just that.
One last thing - document all of your discussions along with any problems that occur as a result of his changes. If it comes to the worst you may be forced to justify your decisions, in which case, having documentary evidence will surely be invaluable.
Stick him on his own development branch, and only bring his stuff into the trunk when you know it's thoroughly tested. This might be a place where a distributed source control management tool like Git or Mercurial would excel. Although with the increased branching/merging support in SVN, you might not have too much trouble managing it.
EDIT
This is only if you can't get rid of him or get him to change his ways. If you simply can't get this behaviour to stop (by changing or firing), then the best you can do is buffer the rest of the team from the bad effects of his coding.
If you are at a place where you can affect the policies, make some changes. Do code reviews before check ins and make testing part of the development cycle.
It seems pretty simple. Make it a requirement and if he can't do it, replace him. Why would you keep him?
I usually don't advocate this unless all else fails...
Sometimes, a publicly-displayed chart of bug-count-by-developer can apply enough peer pressure to get favorable results.
Try the Carrot, make it a fun game.
E.g. the Continuous Integration Game plugin for Hudson:
http://wiki.hudson-ci.org/display/HUDSON/The+Continuous+Integration+Game+plugin
Put your developers on branches of your code, based on some logic like, per feature, per bug fix, per dev team, whatever. Then bad check-ins are isolated to those branches. When it comes time to do a build, merge to a testing branch, find problems, resolve, and then merge your release back to a main branch.
Or remove commit rights for that developer and have them send their code to a younger developer for review and testing before it can be committed. That might motivate a change in procedure.
You could put together a report of errors found in the code, with the name of the programmer who was responsible for that piece of software.
If he's a reasonable person, discuss the report with him.
If he cares about his "reputation", publish the report regularly and make it available to all his peers.
If he only listens to "authority", do the report and escalate the issue to his manager.
Anyway, I've often seen that when people are made aware of how bad they look from the outside, they change their behaviour.
Hey this reminds me of something I read on xkcd :)
Are you referring to writing automated unit test or manually unit testing prior to check-in?
If your shop does not write automated tests then his checking in of code that does not work is reckless. Is it impacting the team? Do you have a formalized QA department?
If you are all creating automated unit tests then I would suggest that part of your code review process include the unit tests as well. It will become obvious that the code is not acceptable per your standards during your review.
Your question is rather broad but I hope I provided some direction.
I would agree with Phil that the first step is to individually talk to him and explain the importance of quality. Poor quality can often be linked to the culture of the team, department and company.
Make executed test cases one of the deliverables before something is considered "done."
If you don't have executed test cases, then the work is not complete, and if the deadline passes before you have the documented test case execution, then he has not delivered on time, and the consequences would be the same as if he had not completed the development.
If your company's culture would not allow for this, and it values speed over accuracy, then that's probably the root of the problem, and the developer is simply responding to the incentives that are in place -- he is being rewarded for doing a lot of things half-assed rather than fewer things correctly.
Make the person clean latrines. Worked in the Army. And if you work in a group with individuals who eat a lot of Indian food, it won't take long for them to fall in line.
But that's just me...
Every time a developer checks something in that does not compile, put some money in a jar. You'll think twice before checking in then.
Unfortunately if you have already spoken to him many times and given him written warnings I would say it is about time to eliminate him from the team.
You might find some helpful answers here: How to make junior programmers write tests?
I'd be tempted to suggest elaborating a bit on what you've tried and what results you got, but here are my initial suggestions:
Is it no tests at all, or just not comprehensive tests? Some may code blindly and do zero tests, but this is rather rare, IME. Usually some tests are done, but not enough to cover most of the cases that comprehensive testing would.
Group dynamics may help. I'd assume he is part of a team and that the team's view may be of some help here. In a way this is trying to get peer pressure which is usually a bad thing but sometimes it can be used in good ways.
How well spelled out were the warnings? In a way this can seem childish but there is a chance that what you think of as testing may not be the same as his. Do you want nUnit tests, an excel spreadsheet, logs from his computer, or something else as proof of the existence and use of tests? From what you've described there isn't anything to confirm that he did understand what you meant, was going to use tests and provide evidence of doing so.
Check-in policy question. Some places, such as my current workplace, encourage committing often which can mean that one does commit code without tests. Is there a known, accepted and well-followed policy where you are? That's another aspect here.