Does SAT have any advantages over ST12?

I use ST12 in my daily work, but could I benefit from learning SAT too?
Does it provide anything useful that cannot be found in ST12?

In general, as a developer you want to understand all the tools available to you. Then you can decide for yourself when each tool is most appropriate.
It is true that you could probably do 95% of your analysis using ST12 alone. I find SAT useful when the bottleneck might not be the database, or when you just want an overview of a program you may not have written yourself. At some clients, SAT was also the only tool I had authorization for, and I had to ask Basis to run ST12 on my behalf.
I find SAT my first port of call for performance analysis, as it generally gives me a better idea of what to focus on in the more detailed ST12 trace. SAT has no "Display Execution Plan" button, though, so if you want to know exactly what your code is doing on the database, you'll need ST12.

SAT can show memory usage; for everything else, ST12 is better.
ST12 has many advantages:
it shows the program flow better, with "Bottom up call hierarchy"
it shows the children of a module better with "Top-down call tree"
it provides better details of SQL statements
it even links the STAD records
SAT has one advantage:
it can show you where the memory usage has huge jumps

Related

Fastest way to understand business logic in a new project

I would like to know the fastest/best way to learn the business logic in a new project.
Most projects have been running for years, some of them are poorly documented, but you still need to know how to work with them. What is the best way to do this? (Use Case Diagram / support from colleagues / code analysis etc.)
The problem with verbose logging is that you may be overloaded with details that do not help, and even if you have the right details, you may misunderstand the big picture.
Moreover, if projects run for years, and are poorly documented, chances are the little available documentation is already obsolete. And chances are, the team did not invest heavily in logging either.
Reverse engineering the code is another approach, but where do you start if there are millions of lines of code in the legacy system? Some things can be easily read in code, but much of the more complex, emergent behavior comes from the interactions between many classes, and this kind of knowledge is the most difficult to extract.
So here is the way to go:
Talk to colleagues. The best way to move knowledge from one brain to another is direct conversation. It works much better than any formal diagram or documentation. Unfortunately, this is not always possible (e.g. the team has left).
If 1 is not possible, understand the business user's point of view. Maybe there is a user manual? Maybe some colleagues in user support can help? If none of those are available, the ultimate way is to spend some time observing a day in the user's life. You will not understand how the system works, but at least you'll get a quick intro to what the system is supposed to do, what matters to the users, and maybe some business rules.
Check for automated test cases. Such test cases are a hidden and up-to-date documentation resource (sketched below).
Check for non-automated test cases, in particular user acceptance tests and integration tests. If these are not automated, chances are they are already obsolete. But it's better than nothing.
Reverse engineer the code. Identify the main classes and how they interact. Some simplified class diagrams will help you understand how classes are related (no need to document properties and methods: these can be found in the code), and some sequence diagrams will help you picture the more complex interactions.
Run server, log verbose everywhere. Follow code flow and dig in.
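To illustrate the point above about automated tests as documentation: a unit test states a business rule in executable form, and unlike a wiki page it fails the moment the rule changes. A minimal sketch in Python, where both the rule and all the names are invented for illustration:

    import unittest

    # Hypothetical rule for illustration only: orders over 100.00 get a
    # 10% loyalty discount. Neither the rule nor the names come from any
    # real project discussed above.
    def apply_discount(total: float) -> float:
        return total * 0.9 if total > 100.0 else total

    class ApplyDiscountTest(unittest.TestCase):
        # The test names alone document the business rule.
        def test_orders_over_threshold_get_ten_percent_off(self):
            self.assertAlmostEqual(apply_discount(200.0), 180.0)

        def test_orders_at_or_below_threshold_pay_full_price(self):
            self.assertAlmostEqual(apply_discount(100.0), 100.0)

    if __name__ == "__main__":
        unittest.main()

Reading the two test names tells you the rule faster than digging through the implementation, which is exactly why tests make good documentation.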

How do you respond to the argument "No time to test/develop clean code, because of the deadline"?

Ok, I think this question is at the wrong place and I'll head over to https://softwareengineering.stackexchange.com/ to read/ask about it. Thanks all for your answers up to this point. :)
apologies ;) I'm sorry if this question is a little bit subjective, but I cannot come up with a better title. I'll correct it if you know something better.
In my organization there is a lot of buzz about this whole automated testing and continuous integration thing, but one argument I constantly hear is this:
How should I develop good, clean, easy-to-maintain code and write unit tests, if the deadline is already set and it is only half of my estimate?
I'm a developer myself, so I can understand this. But I always try to respond that not only the developers need a paradigm shift; the management does too.
If you are a developer and your estimates are cut in half, you are not going to get anywhere, no matter how complex or trivial your problems are. You need the backing of the management guys, the one guy who controls the money.
Conclusion?
Can you give me some help, may it be a good URL to read about this development/management conflict, a book or maybe a personal insight? Did you survive a large process shift like this in a Waterfall company that is now doing Lean development? Or do you know this argument and have a clever answer to it?
And please, help me rename or move this question. :-)
Update
Thanks for all the answers already! :) I think I have to make clear that my point wasn't the "do it twice as fast" statement from management. It's about the negative point of view that comes with this statement from a developer.
Is there anything I can do to help people understand that this is not the default in software development? That the PM is not actively preventing good code, and that maybe both sides need a bit more education about the pros/cons of clean code bases, good coverage, and lots of automated tests?
One good example is technical debt. It's manager-friendly. Imagine your credit card: if you accrue debt for a few weeks, that can be helpful. You don't need to carry around cash for daily purchases, and you pay it off at the end of the month.
This is like a crunch before a release: you take on some debt and then pay it back soon. If you keep charging things and never pay off that debt, it starts to compound. That new feature you want is more difficult to build because the foundation you're building on is unsound. The debt you've accumulated keeps you from acting quickly. If you're over your limit, even typical small purchases won't go through.
You might also want to take a look at Facts and Fallacies of Software Engineering. It talks about estimates and the trouble they can cause when they're not revisited as the project evolves.
It may sound defeatist to say this, but I've worked in a few shops that had this issue, and they never changed. Or, more accurately, I found that it was not possible to change the system from within.
The issue is that, from the perspective of the management that insists on this type of development, as long as the product is being released approximately on time and the customers are buying it, the goal is accomplished. To put it another way: as long as you are making money, quality does not matter.
Now, you, I, and experienced management understand the long-term cost of technical debt. It may be possible to explain to a rational manager the cost of technical debt and the compounding reduction in return on investment in programmer time (by far the most expensive part of a software project). A clean, well-designed, well-tested code base means that new features can be implemented more quickly, and that more time can be spent on new features instead of fixing bugs, leading to a long-term improvement in the mean time between releases.
It may be possible to explain this to your management, but every place I've worked that had these issues required a critical failure before they wised up. This usually involved a large portion of the team quitting in frustration, or a large drop in sales as quality diminished due to unrealistic scheduling (in turn leading to massive layoffs). Either way, I have only ever heard of organizations changing after the fact.
In short, try to explain the cost of technical debt, and the benefit of a clean codebase. Explain it in terms of sales, releases, and customer satisfaction, instead of from a technical perspective. If that doesn't work, start looking for a new job, because poor management leads to a poor product, and a poor product reflects poorly on you as a developer.
What do you mean your "estimates are cut half"? Do you mean you give an estimate, and management says, "No, do it in half that time"? That is unacceptable.
Someone must push back against management. (I say "someone" because I don't know your hierarchy.) There is no such thing as a free lunch. If they want it sooner, then they must make hard, painful tradeoffs. They must prioritize and drop lower-priority features.
If they say, "No. We need it all now. Do it sooner or else," hold the line. They may be surprised, and they may be upset, but you'll earn their respect. The changes will come when they start listening.
There doesn't need to be a conflict between management and development. The conflict is between management and time. It's not your fault it takes time to do things. It's their job to make the hard decisions to get the products out on time without overworking developers until they quit in exhaustion. Just saying "Wrong, do it in half that time" is not management. It's fantasy.
In reality, your management will probably continue to be foolish. If so, you can try to play their game: come up with an estimate that you feel is very safe, including the automated testing, and then double it. Complain loudly when they cut the hours in half, then sigh in resignation. Allow them to feel they are doing their job. Mission accomplished!
A good PM does not estimate. Ever! A good PM will get an estimate from the person who is going to do the work. They will not change it. They may try to coax the worker to change it but, since the worker is the one doing the job, they should be controlling the estimate.
If you have a PM who cuts your estimate in half, make sure your estimate was in writing, and then use that to explain to him (sorry for the gender bias, English doesn't really have a good neutral pronoun), and hopefully his boss, that the reason your work only seems half finished is that he was screwing around with your estimates.
Tactfully point out that, if they're not going to take your estimates seriously, they should leave you alone and just pluck any old number out of their derrière. That will have the same effect of missed deadlines and unhappy customers but without you wasting time providing numbers that are just going to be ignored anyway.
In any case, the cold war between bad PMs and smart developers will naturally lead to the situation where you should initially double your estimates so that the halving will have little effect :-)
It may sound obvious, but my answer to this question is:
Writing unit tests before the code will allow you to develop good, clean and easy to maintain code
I had a problem with my management, who were concerned that developers would not be able to complete their tasks on time if they were required to write unit tests; this is a common concern when trying to introduce TDD in a Waterfall company. So I made this statement, and we had to prove it, by writing tests before code and not missing the deadline :) Actually, once you get used to it, it allows you to write even more code.
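To make the tests-before-code workflow concrete, here is a minimal Python sketch; the function and its behavior are invented for illustration. The test is written first and fails, then the simplest implementation makes it pass:

    import unittest

    # Step 1: write the test first. Running it now fails, because slugify
    # does not exist yet. (slugify is a made-up example, not from the
    # answer above.)
    class SlugifyTest(unittest.TestCase):
        def test_lowercases_and_joins_words_with_hyphens(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_ignores_surrounding_whitespace(self):
            self.assertEqual(slugify("  Trimmed  "), "trimmed")

    # Step 2: write the simplest code that makes both tests pass.
    def slugify(title: str) -> str:
        return "-".join(title.split()).lower()

    if __name__ == "__main__":
        unittest.main()

Once the tests pass, the implementation can be refactored freely, because the tests pin the behavior down.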
Typically, if you improve your programming practices and code quality, you'll almost certainly speed up your development: you'll save more time on debugging than you spend writing unit tests and getting everything right to begin with. Very few shops are in the position of spending more time on code quality than it saves.
Another danger is if management meddles in the process, rather than just serves up impossible deadlines. If they expect you to cowboy code in order to make the deadline, you really won't be able to use good practices.

When you write your code, do you deal with errors proactively or reactively?

In other words, do you spend time anticipating errors and writing code to get around these potential issues, or do you write the code as you see fit and then work through any errors on an issue-by-issue basis?
I've been thinking a lot about this lately, and I'm very much a reactive person. I write my code, give it a whirl, go back and correct errors, and repeat until the application works as expected. However, a friend of mine suggested that he spends time thinking about how each line will be interpreted and fixes errors before they occur.
I must point out that my reactive approach is purely pre-live: I definitely make sure my application is working before it goes live.
There should always be a balance.
Too much error checking is slow and leads to garbage code. Not enough error checking makes your program crash on edge cases, which is not something you want to discover after it has shipped.
So you decide how reliable a given piece of code should be and implement error checking accordingly. A test utility may not need to be very reliable: less error checking. A COM server meant to be used by a third-party search service deep in the background should be super reliable: much more error checking.
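A rough Python sketch of this tiering, with invented functions: a throwaway test helper gets a bare assert, while a public-facing entry point validates everything and degrades gracefully.

    # Internal test utility: low reliability required, so a fail-fast
    # assert during development is enough.
    def load_fixture(lines):
        assert lines, "fixture must not be empty"
        return [line.strip() for line in lines]

    # Public-facing entry point: must never crash on bad input, so it
    # validates everything and returns a clear, recoverable error.
    def handle_request(payload):
        if not isinstance(payload, dict):
            return {"error": "payload must be an object"}
        query = payload.get("query")
        if not isinstance(query, str) or not query.strip():
            return {"error": "missing or empty 'query' field"}
        if len(query) > 1024:
            return {"error": "'query' exceeds 1024 characters"}
        return {"result": query.strip().lower()}  # stand-in for the real work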
I think asking this in isolation is kinda weird, and very subjective, however there are obviously a bunch of techniques that permit you to do each. I tend to use these two:
Test-driven development (this would seem to be proactive)
Strong, static typing (reactive, but part of a tight iterative development cycle, as in, it's enforced by my ML compiler, and I compile a lot)
Very occasionally I swerve into the world of formal verification of programs. That's definitely "reactive", but if you think a little more up-front, it tends to make the verification easier.
I must also say that I value a lot of up-front thought in programming. The easiest way to avoid bugs is to not write them in the first place. Sometimes it's inevitable, but often a little more time spent thinking about the problem can lead to better-quality solutions, and then the rest can be taken care of using the kinds of automated methods I talked about above.
I usually ask myself a bunch of what-ifs when coding, like
The user clicks the button, what if they didn't select a date?
The user is typing in the search box, what if they try to type html in there?
My label text depends on a value from a shared drive, what if it's not mapped?
and so on. By doing this, I've found that when the application does go live there are a ton fewer errors, and I can focus on fixing more obscure bugs instead of adding conditions that should have been in place to begin with.
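Translated into code, that habit might look like the following Python sketch; the form fields and the function are invented for illustration:

    import html
    from datetime import date
    from typing import Optional

    def submit_booking(selected_date: Optional[date], search_text: str) -> str:
        # What if the user didn't select a date?
        if selected_date is None:
            return "Please select a date before submitting."
        # What if the selected date has already passed?
        if selected_date < date.today():
            return "The selected date is in the past."
        # What if they typed HTML into the search box? Escape it so it is
        # treated as plain text rather than markup.
        safe_text = html.escape(search_text.strip())
        return f"Searching bookings on {selected_date:%Y-%m-%d} for '{safe_text}'"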
I live by a simple principle when considering error-handling: garbage in, garbage out. If you don't want any garbage (e.g. invalid input) messing up your software, you have to find all the points in your software where it can get in and handle it. Of course, the more complicated your software is, the harder it is to find every point of entry, but I feel that the more you do up front the less reactive you will need to be later on.
I advocate the proactive approach.
I try to write code in a style that results in maintainable and reliable software
I use defensive programming techniques to prevent silly errors caused by lapses of attention and the like
I design the database model according to the fortress principle, with SQL code checking the result of each individual operation (sketched below)
I think about the potential problems that could occur in that part of the code and account for them: not every possibility, but the major ones I can think of
This usually results in software operating rather smoothly. At times it even surprises me, but that was the intended goal, so here we are.
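A minimal sketch of the "check after each individual operation" idea, using Python with sqlite3 and an invented accounts table; the same shape applies to any database API:

    import sqlite3

    def transfer(conn: sqlite3.Connection, src: int, dst: int, amount: int) -> bool:
        try:
            cur = conn.execute(
                "UPDATE accounts SET balance = balance - ? "
                "WHERE id = ? AND balance >= ?",
                (amount, src, amount),
            )
            if cur.rowcount != 1:   # missing account or insufficient funds
                conn.rollback()
                return False
            cur = conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE id = ?",
                (amount, dst),
            )
            if cur.rowcount != 1:   # destination account missing
                conn.rollback()
                return False
            conn.commit()
            return True
        except sqlite3.Error:
            conn.rollback()
            return False

Nothing is assumed to have worked: every statement's row count is inspected, and any surprise rolls the whole transaction back.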
IMHO, the word "error" (or its loose synonym "bug") itself denotes program behavior that was not foreseen.
I usually try to design with all possible scenarios in mind. Of course, it is usually not possible to think of every case, but thinking through and allowing for as many scenarios as possible is usually better than just getting something working as soon as possible. This saves a lot of time and effort debugging and redesigning the code. I often sit down with pen and paper for even the smallest of programming tasks before actually typing any code into my editor.
As I said, this will not eliminate all errors, but for me it pays off many times over in time spent debugging. Another benefit is that it results in a more solid and maintainable design, with fewer bug-fixing hacks and special cases added on later. But in any case, you will still have to do a lot of debugging after the code is done.
This does not apply when all you want is a mockup or rapid prototype. Also practical constraints such as deadlines often makes a thorough evaluation difficult or impossible.
What kind of programming? It's impossible to answer this in any general way. (It's like asking "do you wear a helmet when playing?" -- well, playing what?)
At work, I'm working on a database-backed website. The requirements are strict, and if I don't anticipate how users will screw it up, I'm going to get a call at some odd hour of the day to fix it.
At home, I'm working on a program ... I don't even know what it'll do yet. I can't deal with 'errors' because I don't know what 'an error' is in this context, because I don't know what correct behavior is going to be. The entire purpose of the program can and frequently does change on a timescale of minutes to hours, so even a couple minutes spent thinking about errors this early is a complete waste of time. (It's even worse than browsing SO, since error-handling adds lines of code.)
I guess the only general answer is "I do what makes sense in terms of saving time in the long term", which is, after all, the whole reason to use machines to do work for us.

SQL 2005: should I roll my own log shipping?

I'm looking into using log shipping for disaster recovery and I'm getting mixed messages about whether to use the built-in stuff or roll my own. Which do you recommend, please, and if you favour rolling your own what's wrong with the built-in stuff? If I'm going to reinvent the wheel I don't want to make the same mistakes! (We have the Workgroup edition.) Thanks in advance.
There are really two parts to your question:
Is native log shipping good enough?
If not, whose log shipping should I use?
Here's my two cents, but like you're already discovering, a lot of this is based on opinions.
About the first question - native log shipping is fine for small implementations - say, 1-2 servers, a handful of databases, and a full-time DBA. In environments like this, native log shipping's lack of monitoring, alerting, and management isn't a problem. If it breaks, you don't sweat bullets because it's relatively easy to repair. When would it break? For example, if someone accidentally deletes the transaction log backup file before it's restored on the disaster recovery server. (Happens all the time with automated processes.)
When you grow beyond a couple of servers, the lack of management automation starts to become a problem. You want better automated email alerting, alerts when the log shipping gets more than X minutes/hours behind, alerts when the file copying is taking too long, easier handling of multiple secondary servers, etc. That's when people turn to alternate solutions.
About the second question - I'll put it this way. I work for Quest Software, the makers of LiteSpeed, a SQL Server backup & recovery product. I regularly talk to database administrators who use our product and other products like Idera SQLSafe and Red Gate SQL Backup to make their backup management easier. We build GUI tools to automate the log shipping process, give you a nice graphical dashboard showing exactly where your bottlenecks are, and help make sure your butt is covered when your primary datacenter goes down. We sell a lot of licenses. :-)
If you roll your own scripts - and you certainly can - you will be completely alone when your datacenter goes down. You won't have a support line to call, you won't have tools to help you, and you won't be able to tell your coworkers, "Open this GUI and click here to fail over." You'll be trying to walk them through T-SQL scripts in the middle of a disaster. Expert DBAs who have a lot of time on their hands sometimes prefer writing their own scripts, and it does give you a lot of control, but you have to make sure you've got enough time to build them and test them before you bank your job on it.
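For a sense of the moving parts you would own, here is a rough sketch of one roll-your-own shipping cycle. This is Python with pyodbc as an assumed driver, and every name, path, and connection string is a placeholder; a production script would also need sequence tracking, retention cleanup, alerting, and retry logic:

    import shutil

    import pyodbc  # assumed: the pyodbc package plus a SQL Server ODBC driver

    DB = "SalesDB"  # placeholder database name

    def ship_one_log(primary_conn_str, standby_conn_str, local_path, remote_path):
        # 1. Back up the transaction log on the primary. autocommit is
        #    required because BACKUP cannot run inside a transaction.
        #    (Paths here are trusted constants; real code must never build
        #    SQL from untrusted input.)
        primary = pyodbc.connect(primary_conn_str, autocommit=True)
        try:
            primary.execute(f"BACKUP LOG [{DB}] TO DISK = N'{local_path}'")
        finally:
            primary.close()

        # 2. Copy the backup file to a share the standby can read.
        shutil.copy2(local_path, remote_path)

        # 3. Restore on the standby WITH NORECOVERY so the database keeps
        #    accepting subsequent log restores.
        standby = pyodbc.connect(standby_conn_str, autocommit=True)
        try:
            standby.execute(
                f"RESTORE LOG [{DB}] FROM DISK = N'{remote_path}' WITH NORECOVERY"
            )
        finally:
            standby.close()

Each of those three steps is a distinct failure mode (backup fails, copy fails, restore falls behind), which is exactly where the monitoring and alerting work piles up.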
Have you considered mirroring instead? Here is some documentation to help you determine whether that would work for you.
If you decide to roll your own, here's a nice guide.
I'm assuming you're going this route because Enterprise Edition is so costly?
If you don't need a "live-backup", but really just want a frequently updated backup, I think this approach makes a lot of sense.
One more thing:
Make sure you regularly verify that your backup strategy is working.
I'm pretty sure it's available in Standard, since we're doing some shipping, but I'm not sure about the Workgroup edition - it's pretty stripped down.
I'm always in favor of the packaged solution, mostly because I trust a whole team of MSFT developers more than I trust myself, but that certainly comes at a price. I'd second that any solution you roll on your own has to come with a lag-notification piece so that you'll know immediately if it isn't working; how many times do we only find out backup solutions aren't working when somebody needs a backup? Also, think honestly about how much time it will take you to design and roll your own solution, including bug fixes and maintenance. Can you really do it more cheaply? Maybe you can, but maybe not.
Also, one problem we ran into with the Workgroup edition is that it only supports 5 connections at once, and it seems to start dropping connections if you get more users than that, so we had to upgrade to Standard. We were getting ASP.NET errors saying our connections were closed if we left them unattended for even a few seconds, which caused us all kinds of problems.
I would expect this to be close to the last place you'd want to save a few bucks, especially given the likely consequences if you screw up. Would you rather have your job on the line? I don't think I'd even attempt it unless I felt I had a real chance of getting this one right.
What's your personal upside benefit in this?
I tried the built-in log shipping and found some real problems with it so I developed my own. I blogged about it here.
PS: And just for the record, you definitely get log shipping in the Workgroup edition. I don't know where this Enterprise-only idea started.

What do you do with a developer who does not test his code? [closed]

One of our developers is continually writing code and putting it into version control without testing it. The quality of our code is suffering as a result.
Besides getting rid of the developer, how can I solve this problem?
EDIT
I have talked to him about it a number of times and have even given him a written warning.
If you can do code reviews -- that's a perfect place to catch it.
We require reviews prior to merging to iteration trunk, so typically everything is caught then.
If you systematically perform code reviews before allowing a developer to commit the code, well, your problem is mostly solved. But this doesn't seem to be your case, so this is what I recommend:
Talk to the developer and discuss the consequences for others on the team. Most developers want to be recognized by their peers, so this might be enough. Also point out that it is much easier to fix bugs in code that's fresh in your mind than in weeks-old code. This part makes sense if you have some form of code ownership in place.
If this doesn't work after some time, try to put in place a policy that makes committing buggy code unpleasant for the author. One popular way is to make the person who broke the build responsible for the chore of creating the next one. If your build process is fully automated, look for another menial task for them to take care of instead. This approach has the added benefit of not singling out anyone in particular, making it more acceptable to everybody.
Use disciplinary measures. Depending on the size of your team and of your company, those can take many forms.
Fire the developer. There is a cost associated with keeping bad apples. When you get this far, the developer doesn't care about his fellow developers, and you've got a people problem on your hands already. If the work environment becomes poisoned, you might lose far more - productivity-wise and people-wise - than this single bad developer.
As a developer who rarely tests his own code, I can tell you the one thing that's made me slowly shift my behavior...
Visibility
If the environment allows pushing code out, waiting for users to find problems, and then essentially asking "How about now?" after making a change to the code, there's no real incentive to test your own stuff.
Code reviews and collaboration encourage you to work towards making a quality product far more than if you were just delivering 'Widget X' while your coworkers work on 'Widget Y' and 'Widget Z'.
The more visible your work is, the more likely you are to care about how well it works.
Code review. Stick all of your devs in a room every Monday morning and ask them to bring their proudest code-based accomplishment from the previous week along to the meeting.
Let them take the spotlight and get excited about explaining what they did. Have them bring copies of the code so other dev's can see what they're talking about.
We started this process a few months ago, and it's astonishing to see the amount of subconscious quality checking that takes place. After all, if the devs are simply asked to talk about what they're most excited about, they'll be totally stoked to show people their code. Then other devs will see the quality errors and openly discuss why they're wrong and how the code should really be written instead.
If this doesn't get your dev to write quality code, he's probably not a good fit for your team.
Make it part of his Annual Review objectives. If he doesn't achieve it, no pay rise.
Sometimes, though, you do just have to accept that someone is not right for your team or environment. It should be a last resort and can be tough to handle, but if you have exhausted all other options, it may be the best thing in the long run.
Tell the developer you would like to see a change in their practices within 2 weeks or you will begin your company's disciplinary procedure. Offer as much help and assistance as you can, but if you can't change this person, he's not right for your company.
Using Cruise Control or a similar tool, you can make checkins automatically trigger a build and unit tests. You would still need to ensure that there are unit tests for any new functionality he adds, which you can do by looking at his checkins.
However, this is a human problem, so a technical solution can only go so far.
Why not just talk to him? He probably won't actually bite you.
Make him "babysit" the build, and become the build manager. This will give him less time to develop code (thus increasing everyone's performance) and teach him why a good build is so necessary.
Enforce test cases: code cannot be submitted without unit test cases. Modify the build system so that if the test cases don't compile and run correctly, or don't exist, the entire check-in for the task is denied (a sketch of such a gate follows below).
-Adam
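A sketch of the kind of gate Adam describes, assuming a Git-style pre-commit hook and a Python test suite under tests/; the hook mechanics differ per version control system:

    #!/usr/bin/env python3
    # Pre-commit hook sketch: reject the commit when no tests exist or
    # when the suite fails. (Matches the shape of a Git
    # .git/hooks/pre-commit script; adapt for your VCS.)
    import pathlib
    import subprocess
    import sys

    def main() -> int:
        # Refuse to commit if there are no test files at all.
        if not list(pathlib.Path("tests").glob("test_*.py")):
            print("commit rejected: no test cases found under tests/")
            return 1
        # Run the suite; a non-zero exit code blocks the commit.
        result = subprocess.run(
            [sys.executable, "-m", "unittest", "discover", "tests"]
        )
        if result.returncode != 0:
            print("commit rejected: unit tests failed")
        return result.returncode

    if __name__ == "__main__":
        sys.exit(main())

The same check can run server-side in CI so it cannot be bypassed by skipping the local hook.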
Publish stats on test code coverage per developer (after talking to him first, of course).
Here are some ideas from a sea shanty.
Intro
What shall we do with a drunken sailor, (3×)
Early in the morning?
Chorus
Wey–hey and up she rises, (3×)
Early in the morning!
Verses
Stick him in a bag and beat him senseless, (3×)
Early in the morning!
Put him in the longboat till he’s sober, (3×)
Early in the morning!
etc. Replace "drunken sailor" with a "sloppy developer".
Depending on the type of version control system you are using, you could set up check-in policies that force the code to pass certain requirements before it is allowed in. If you are using a system like Team Foundation Server, it gives you the ability to specify code-coverage and unit-testing requirements for check-ins.
You know, this is a perfect opportunity to avoid singling him out (though I agree you need to talk with him) and implement a test-first process in-house. If the rules aren't clear and the expectations aren't known to all, I've found that what you describe isn't all that uncommon. I find that test-first development works well for me and improves the code quality.
They may be overly focused on speed rather than quality.
This can tempt some people into rushing through issues to clear their list and see what comes back in bug reports later.
To rectify this balance:
assign only a couple of items at a time in your issue tracking system,
code review and test anything they have "completed" as soon as possible so it will be back with them immediately if there are any problems
talk to them about your expectations about how long an item will take to do properly
Pair programming is another possibility. If he is paired with another skilled developer on the team who does meet quality standards and knows the procedures, this has a few benefits:
With an experienced developer over his shoulder he will learn what is expected of him and see the difference between his code and code that meets expectations
The other developer can enforce a test first policy: not allowing code to be written until tests have been written for it
Similarly, the other developer can verify that the code is up to standard before it is checked in, reducing the number of bad check-ins
All of this of course requires the company and developers to be receptive to this process which they may not be.
It seems that people have come up with a lot of imaginative and devious answers to this problem, but the fact is that this isn't a game. Devising elaborate peer-pressure systems to "name and shame" him is not going to get to the root of the problem, i.e. why is he not writing tests?
I think you should be direct. I know you say that you've talked to him, but have you tried to find out why he isn't writing tests? Clearly at this point he knows that he should be, so surely there must be some reason why he isn't doing what he's been told to do. Is it laziness? Procrastination? Programmers are famous for their egos and strong opinions - perhaps he's convinced for some reason that testing is a waste of time, or that his code is always perfect and doesn't need testing. If he's an immature programmer, he might not fully understand the implications of his actions. If he's "too mature" he might be too set in his ways. Whatever the reason, address it.
If it does come down to a matter of opinion, you need to make him understand that he needs to set his own personal opinion aside and just follow the rules. Make it clear that if he can't be trusted to follow the rules then he will be replaced. If he still doesn't, do just that.
One last thing - document all of your discussions along with any problems that occur as a result of his changes. If it comes to the worst you may be forced to justify your decisions, in which case, having documentary evidence will surely be invaluable.
Stick him on his own development branch, and only bring his stuff into the trunk when you know it's thoroughly tested. This might be a place where a distributed source control tool like Git or Mercurial would excel, although with the improved branching/merging support in SVN you might not have too much trouble managing it.
EDIT
This is only if you can't get rid of him or get him to change his ways. If you simply can't get this behaviour to stop (by changing him or firing him), then the best you can do is buffer the rest of the team from the bad effects of his coding.
If you are at a place where you can affect the policies, make some changes. Do code reviews before check ins and make testing part of the development cycle.
It seems pretty simple. Make it a requirement and if he can't do it, replace him. Why would you keep him?
I usually don't advocate this unless all else fails...
Sometimes, a publicly-displayed chart of bug-count-by-developer can apply enough peer pressure to get favorable results.
Try the Carrot, make it a fun game.
E.g. the Continuous Integration Game plugin for Hudson:
http://wiki.hudson-ci.org/display/HUDSON/The+Continuous+Integration+Game+plugin
Put your developers on branches of your code, based on some logic like, per feature, per bug fix, per dev team, whatever. Then bad check-ins are isolated to those branches. When it comes time to do a build, merge to a testing branch, find problems, resolve, and then merge your release back to a main branch.
Or remove commit rights for that developer and have them send their code to a younger developer for review and testing before it can be committed. That might motivate a change in procedure.
You could put together a report with errors found in the code with the name of the programmer that was responsible for that piece of software.
If he's a reasonable person, discuss the report with him.
If he cares for his "reputation" publish the report regularly and make it available to all his peers.
If he only listens to the "authority", do the report and escalate the issue to his manager.
Anyway, I've often seen that when people are made aware of how bad they seem from the outside, they change their behaviour.
Hey this reminds me of something I read on xkcd :)
Are you referring to writing automated unit tests, or to manually unit testing prior to check-in?
If your shop does not write automated tests then his checking in of code that does not work is reckless. Is it impacting the team? Do you have a formalized QA department?
If you are all creating automated unit tests then I would suggest that part of your code review process include the unit tests as well. It will become obvious that the code is not acceptable per your standards during your review.
Your question is rather broad but I hope I provided some direction.
I would agree with Phil that the first step is to individually talk to him and explain the importance of quality. Poor quality can often be linked to the culture of the team, department and company.
Make executed test cases one of the deliverables before something is considered "done."
If you don't have executed test cases, then the work is not complete, and if the deadline passes before you have the documented test case execution, then he has not delivered on time, and the consequences would be the same as if he had not completed the development.
If your company's culture would not allow for this, and it values speed over accuracy, then that's probably the root of the problem, and the developer is simply responding to the incentives that are in place -- he is being rewarded for doing a lot of things half-assed rather than fewer things correctly.
Make the person clean latrines. Worked in the Army. And if you work in a group with individuals who eat a lot of Indian food, it won't take long for them to fall in line.
But that's just me...
Every time a developer checks in something that does not compile, put some money in a jar. You'll think twice before checking in then.
Unfortunately, if you have already spoken to him many times and given him written warnings, I would say it is about time to remove him from the team.
You might find some helpful answers here: How to make junior programmers write tests?
I'd be tempted to suggest elaborating a bit on what you've tried and what results you got as this may have changed a bit but here are my initial suggestions:
Is it a lack of any tests, or of comprehensive tests? Some may code blindly and do zero testing, but this is rather rare, IME. Usually some tests are done, just not enough to cover most of the cases that comprehensive testing would.
Group dynamics may help. I'd assume he is part of a team and that the team's view may be of some help here. In a way, this is applying peer pressure, which is usually a bad thing, but sometimes it can be used in good ways.
How well spelled out were the warnings? This can seem childish, but there is a chance that what you think of as testing may not be the same as what he does. Do you want nUnit tests, an Excel spreadsheet, logs from his machine, or something else as proof of the existence and use of tests? From what you've described, there isn't anything to confirm that he understood what you meant, was going to use tests, and would provide evidence of doing so.
Check-in policy question. Some places, such as my current workplace, encourage committing often, which can mean that one commits code without tests. Is there a known, accepted, and well-followed policy where you are? That's another aspect to consider here.