How can I decide when to use linear programming?

When I look at optimization problems I see a lot of options. One is linear programming. I understand in abstract terms how LP works, but I find it difficult to see whether a particular problem is suitable for LP or not. Are there any heuristics that can help guide this decision?
For example, the work described in "Is there a good way to do this type of mining?" took weeks before I saw how to structure the problem correctly. Is it possible to know "in advance" that a problem could be solved by LP, without first seeing "how to phrase it"?
Is there a checklist I can use to decide whether a problem is suitable for LP? Is there a standard (readable) reference for this topic?

Heuristics (and/or checklists) to decide if the problem at hand is really a Linear Program.
Here's my attempt at answering, and I have also tried to outline how I'd approach this problem.
Questions that indicate that a given problem is suitable to be formulated as an LP/IP:
Are there decisions that need to be taken regularly, at different time intervals?
Are there a number of resources (workers, machines, vehicles) that need to be assigned to tasks (hours, jobs, destinations)?
Is this a routing problem, where different "points" have to be visited?
Is this a location or a "layout" problem? (The whole class of stock-cutting problems falls into this group.)
Answering yes to these questions means that an LP formulation might work.
Commonly encountered LPs include: resource allocation (assignment, transportation, trans-shipment, knapsack), portfolio allocation, job scheduling, and network flow problems.
Here's a good list of LP Applications for anyone new to LPs or IPs.
That said, there are literally thousands of different types of problems that can be formulated as LPs/IPs. The people I've worked with (researchers, colleagues) develop an intuition. They are good at recognizing that a problem is a certain type of Integer Program, even if they don't remember the details, which they can then look up.
Why this question is tricky to answer:
There are many reasons why it is not always straightforward to know if an LP formulation will cut it.
There is a lot of "art" (subjectivity) in the approach to modeling/formulation.
Experience helps a lot. People get good at recognizing that this problem can be "likened" to another known formulation.
Even if a problem is not a straight LP, there are many clever master-slave techniques (sub-problems), or nesting techniques that make the overall formulation work.
What looks like multiple objectives can be combined into one objective function, with an appropriate set of weights attached.
Experienced modelers employ decomposition and constraint-relaxation techniques, and later compensate for them.
How to Proceed to get the basic formulation done?
The following has always steered me in the right direction. I typically start by listing the Decision Variables, Constraints, and the Objective Function. I then usually iterate among these three to make sure that everything "fits."
So, if you have a problem at hand, ask yourself:
What are the Decision Variables (DVs)? I find that this is always a good place to start the process of formulation. How many types of DVs are there? (Which resource gets which task, and when should it start?)
What are the Constraints?
Some constraints are very readily visible. Others take a little bit of teasing out. The constraints have to be written in terms of your decision variables, and any constants/limits that are imposed.
What is the Objective Function?
What are the quantities that need to be maximized or minimized? Note: Sometimes, it is not clear what the objective function is. That is okay, because it could well be a constraint-satisfaction problem.
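To make the iterate-among-the-three loop concrete, here is a minimal sketch of a toy product-mix LP in Python. The choice of the PuLP library and all of the numbers are mine, purely for illustration:

    # A toy product-mix LP (illustrative data, not from the question above).
    from pulp import LpMaximize, LpProblem, LpStatus, LpVariable, value

    prob = LpProblem("product_mix", LpMaximize)

    # Decision Variables: how much of each product to make.
    x = LpVariable("x", lowBound=0)
    y = LpVariable("y", lowBound=0)

    # Objective Function: profit to maximize.
    prob += 3 * x + 5 * y

    # Constraints: written in terms of the DVs and the imposed limits.
    prob += 2 * x + 1 * y <= 100  # machine-hours available
    prob += 1 * x + 3 * y <= 90   # labor-hours available

    prob.solve()
    print(LpStatus[prob.status], value(x), value(y))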
A couple of quick Sanity Checks once you think your LP formulation is done:
I always check whether a trivial solution (all 0s, or all very large numbers) is part of the solution set. If it is, then the formulation is most probably not correct; some constraint is missing.
Make sure that each and every constraint is "related" to the Decision Variables. (I occasionally find constraints that are just "hanging out there." This means that a "bookkeeping constraint" has been missed.)
In my experience, people who keep at it almost always develop the needed intuition. Hope this helps.

What OOP coding practices should you always make time for?

I tend to do a lot of projects on short deadlines and with lots of code that will never be used again, so there's always pressure/temptation to cut corners. One rule I always stick to is encapsulation/loose coupling, so I have lots of small classes rather than one giant God class. But what else should I never compromise on?
Update - thanks for the great response. Lots of people have suggested unit testing, but I don't think that's really appropriate to the kind of UI coding I do. Usability / user acceptance testing seems much more important. To reiterate, I'm talking about the BARE MINIMUM of coding standards for impossible-deadline projects.
Not OOP, but a practice that helps in both the short and long run is DRY, Don't Repeat Yourself. Don't use copy/paste inheritance.
Not an OOP practice, but common sense ;-).
If you are in a hurry and have to write a hack, always add a comment with the reasons, so you can trace it back and put in a good solution later.
If you never find the time to come back, you at least have the comment, so you know why that solution was chosen at the time.
Use Source control.
No matter how long it takes to set up (seconds...), it will always make your life easier! (Still, it's not OOP related.)
Naming. Under pressure you'll write horrible code that you won't have time to document or even comment. Naming variables, methods and classes as explicitly as possible takes almost no additional time and will make the mess readable when you must fix it. From an OOP point of view, using nouns for classes and verbs for methods naturally helps encapsulation and modularity.
Unit tests - helps you sleep at night :-)
This is rather obvious (I hope), but at the very least I always make sure my public interface is as correct as possible. The internals of a class can always be refactored later on.
no public class with mutable public variables (struct-like).
Before you know it, you refer to this public variable all over your code, and the day you decide this field is a computed one and must have some logic in it... the refactoring gets messy.
If that day is before your release date, it gets messier.
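To illustrate the "field becomes a computed one" scenario, here is a hypothetical Python sketch; in languages without properties, this same risk is exactly why you hide fields behind accessors from day one:

    # Day 1: 'total' could have been a plain public attribute referenced
    # all over the codebase. Day 30: it must be computed from line items.
    # A property keeps every existing 'order.total' call site working.
    class Order:
        def __init__(self, items):
            self.items = items  # list of (name, price) pairs

        @property
        def total(self):
            # Now computed with logic instead of stored directly.
            return sum(price for _, price in self.items)

    order = Order([("book", 12.0), ("pen", 3.0)])
    print(order.total)  # 15.0 -- callers are unaffected by the change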
Think about the people (may even be your future self) who have to read and understand the code at some point.
Application of the single responsibility principle. Effectively applying this principle generates a lot of positive externalities.
Like everyone else's, these are not so much OOP practices as general coding practices that apply to OOP.
Unit test, unit test, unit test. Defined unit tests have a habit of keeping people on task and not "wandering" aimlessly between objects.
Define and document all hierarchical information (namespaces, packages, folder structures, etc.) prior to writing production code. This helps to flesh out object relations and expose flaws in assumptions related to relationships of objects.
Define and document all applicable interfaces prior to writing production code. If done by a lead or an architect, this practice can additionally help keep more junior-level developers on task.
There are probably countless other "shoulds", but if I had to pick my top three, that would be the list.
Edit in response to comment:
This is precisely why you need to do these things up front. All of these sorts of practices make continued maintenance easier. As you assume more risk in the kickoff of a project, the more likely it is that you will spend more and more time maintaining the code. Granted, there is a larger upfront cost, but building on a solid foundation pays for itself. Is your obstacle lack of time (i.e. having to maintain other applications) or a decision from higher up? I have had to fight both of those fronts to be able to adopt these kinds of practices, and it isn't a pleasant situation to be in.
Of course everything should be Unit tested, well designed, commented, checked into source control and free of bugs. But life is not like that.
My personal ranking is this:
Use source control and actually write commit comments. This way you have a tiny bit of documentation should you ever wonder "what the heck did I think when I wrote this?"
Write clean code or document. Clean, well-written code should need little documentation, as its meaning can be grasped from reading it. Hacks are a lot different: write why you did it, what it does, and what you'd like to do if you had the time/knowledge/motivation/... to do it right.
Unit Test. Yes, it's down at number three. Not because it's unimportant but because it's useless if you don't have the other two at least halfway in place. Writing unit tests is another level of documentation of what your code should be doing (among other things).
Refactor before you add something. This might sound like a typical "but we don't have time for it" point. But as with many of these points, it usually saves more time than it costs, at least once you have some experience with it.
I'm aware that much of this has already been mentioned, but since it's a rather subjective matter, I wanted to add my ranking.
[insert boilerplate not-OOP specific caveat here]
Separation of concerns, unit tests, and that feeling that if something is too complex it's probably not conceptualised quite right yet.
UML sketching: this has clarified my thinking and saved untold wasted effort so many times. Pictures are great, aren't they? :)
Really thinking about is-a's and has-a's. Getting this right the first time is so important.
No matter how fast a company wants it, I pretty much always try to write code to the best of my ability.
I don't find it takes any longer and usually saves a lot of time, even in the short-term.
I can't remember ever writing code and never looking at it again; I always make a few passes over it to test and debug it, and even in those few passes, practices like refactoring to keep my code DRY, documentation (to some degree), separation of concerns, and cohesion all seem to save time.
This includes creating many more small classes than most people (one concern per class, please) and often extracting initialization data into external files (or arrays) and writing little parsers for that data... Sometimes even writing little GUIs instead of editing data by hand.
Coding itself is pretty quick and easy, debugging crap someone wrote when they were "Under pressure" is what takes all the time!
At almost a year into my current project I finally set up an automated build that pushes any new commits to the test server, and man, I wish I had done that on day one. The biggest mistake I made early-on was going dark. With every feature, enhancement, bug-fix etc, I had a bad case of the "just one mores" before I would let anyone see the product, and it literally spiraled into a six month cycle. If every reasonable change had been automatically pushed out it would have been harder for me to hide, and I would have been more on-track with regard to the stakeholders' involvement.
Go back to code you wrote a few days/weeks ago and spend 20 minutes reviewing your own code. With the passage of time, you will be able to determine whether your "off-the-cuff" code is organized well enough for future maintenance efforts. While you're in there, look for refactoring and renaming opportunities.
I sometimes find that the name I chose for a function at the outset doesn't perfectly fit the function in its final form. With refactoring tools, you can easily change the name early before it goes into widespread use.
Just like everybody else has suggested these recommendations aren't specific to OOP:
Ensure that you comment your code and use sensibly named variables. If you ever have to look back upon the quick and dirty code you've written, you should be able to understand it easily. A general rule that I follow is: if you deleted all of the code and only had the comments left, you should still be able to understand the program flow.
Hacks usually tend to be convoluted and un-intuitive, so some good commenting is essential.
I'd also recommend that if you usually have to work to tight deadlines, get yourself a code library built up based upon your most common tasks. This will allow you to "join the dots" rather than reinvent the wheel each time you have a project.
An actual OOP practice I always make time for is the Single Responsibility Principle, because it becomes so much harder to properly refactor the code later on when the project is "live".
By sticking to this principle I find that the code I write is easily re-used, replaced or rewritten if it fails to match the functional or non-functional requirements. When you end up with classes that have multiple responsibilities, some of them may fulfill the requirements, some may not, and the whole may be entirely unclear.
These kinds of classes are stressful to maintain because you are never sure what your "fix" will break.
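A hypothetical before/after sketch of what the principle buys you (all names invented for illustration):

    # Before: one class with two responsibilities (formatting AND delivery).
    # A change to either concern risks breaking the other.
    class ReportEmailer:
        def build_and_send(self, data, address):
            body = "\n".join(f"{k}: {v}" for k, v in data.items())
            print(f"sending to {address}:\n{body}")  # stand-in for real SMTP

    # After: each class has a single reason to change, so each piece can be
    # reused, replaced, or rewritten without touching the other.
    class ReportFormatter:
        def format(self, data):
            return "\n".join(f"{k}: {v}" for k, v in data.items())

    class Emailer:
        def send(self, body, address):
            print(f"sending to {address}:\n{body}")  # stand-in for real SMTP

    report = ReportFormatter().format({"revenue": 100})
    Emailer().send(report, "boss@example.com")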
For this special case (short deadlines and lots of code that will never be used again), I suggest you look at embedding a script engine into your OOP code.
Learn to "refactor as-you-go". Mainly from an "extract method" standpoint. When you start to write a block of sequential code, take a few seconds to decide if this block could stand-alone as a reusable method and, if so, make that method immediately. I recommend it even for throw-away projects (especially if you can go back later and compile such methods into your personal toolbox API). It doesn't take long before you do it almost without thinking.
Hopefully you do this already and I'm preaching to the choir.

What are some techniques for understanding object interaction?

When starting a new application what are some ways to decide what objects you will need, what they should do, and how they should interact with each other?
Is this a job for a white board, or is it easier to just start coding and move stuff around as needed?
CRC Cards, or Class-Responsibility-Collaboration Cards, are a good way to do this - see the Wikipedia page.
They're a good way to brainstorm (as a team or just by yourself) which classes you'll need, what each class will be responsible for, and how they'll interact with each other.
You might want to try:
CRC Cards
The cards will help define the object, its responsibilities, and its collaborations.
There's an entire process that you go through when creating the CRC cards. That process defines the rules that help make each class concise and really only perform the operations it needs.
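For anyone who hasn't seen one, a CRC card is just an index card with three sections; a made-up example:

    Class: Invoice
    Responsibilities:                 Collaborators:
      - know its line items            - LineItem
      - compute its total              - Customer
      - render itself for printing     - Printer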
This should fall more or less 'naturally' from the requirements.
Do you have a solid understanding of what the app is supposed to do, its scope, et al?
It really depends on what the size of the project is you're talking about: the approach and rigor one must apply is different for a team building commercial software than a personal project.
That said, some general tips are:
1) On your medium of choice (I like whiteboards) start enumerating the use cases or user stories. Keep going until you feel like you've gotten the most important/encompassing 80% covered.
2) When you're satisfied that you have the "WHAT" (use cases) succinctly and more-or-less sufficiently defined, you can start working out the "HOW" (objects, algorithms, et al). I would suggest a bias towards less complexity: you do not want a complicated and multi-layered object hierarchy unless you really, really need it (and even then, you probably don't).
3) I tend to enforce a "no-coding" rule until #1 and #2 are done, throw-away prototypes or explorations of particular approaches notwithstanding. It's very very easy to start slinging code and discover you're "trapped" by the object model you came up with before you fully understood what it is you're building.
4) Once you're done with your requirements gathering you can use any # of approaches to start breaking out functional units/objects/roles/etc. CRC cards are one approach; I've also had success with UML class & sequence diagrams.
I personally like to do lots of whiteboarding in UML; I take pictures with a digital camera frequently to archive thinking/approaches. These are much better than nothing when it comes to poor-man's documentation or answering the question "why did/didn't we..." 2 months down the road.
I think the answer depends on a couple of factors:
what is the size of the project? For a small project it's fairly easy to have a good idea of the three or four objects that might be required and just start coding. For bigger projects it's important to start listing them beforehand so that you can develop good object names and hierarchy.
how many people are on the project? If there is more than just you, you should sit down and plan who is going to be working on what and that requires a list of the classes that are going to be needed.
For anything bigger than a small (one or two day) project, it's worth putting some time into design before you start coding. White boards are fine for this but make sure you take a picture when you're done!
Sequence and class diagrams. These go a long way when you're in design. The last thing you want to do, unless you're unconstrained in terms of time & resources, is just start coding.
Probably best to start on a whiteboard or some design overview. It all depends on what your application needs to do, really. Figure out what actions your application needs to do. Associate objects accordingly with the methods you come up with after breaking your application down into appropriate pieces.

What techniques do you use when you are designing an Object Model alone?

So, no doubt, building a domain model is something that I think happens best when you approach it as a team, even going so far as to involve someone who is not technical, but a member of the 'business', in the modeling sessions. So much can get done quickly when you put the right people in a room and hammer things out on a whiteboard. But what about the times when you don't have that luxury? What about when you have to build a complex domain model alone? I have been doing this for the past month or so and have done the following:
Start off with noun identification, then use Class-Responsibility-Collaboration cards to analyze relationships
Look for analysis patterns that can be used to refine the model (Party, etc.)
As soon as I have a handle on the basics, I'll bust out an IDE and start writing XUnit tests to show that the model lets me do the things that I want
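As an illustration of that last step, here is what such a test might look like in Python (the Order rule is entirely made up):

    # A made-up domain rule: an Order cannot ship until it has been paid.
    class Order:
        def __init__(self):
            self.paid = False
            self.shipped = False

        def pay(self):
            self.paid = True

        def ship(self):
            if not self.paid:
                raise ValueError("cannot ship an unpaid order")
            self.shipped = True

    def test_unpaid_order_cannot_ship():
        order = Order()
        try:
            order.ship()
            assert False, "expected ValueError"
        except ValueError:
            pass

    def test_paid_order_ships():
        order = Order()
        order.pay()
        order.ship()
        assert order.shipped

    test_unpaid_order_cannot_ship()
    test_paid_order_ships()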
While these techniques have worked well, I'm not sure they are as efficient as a truly collaborative effort. I think it is easy to get carried away with a concept, only to realize later that it violates x or y requirement. What techniques have you used when working in isolation to ensure that your object/domain model is on target?
Everyone does it differently, I think, but...
I almost always start with a Class diagram (usually UML-like and on paper), paying special attention to relationships between classes and their arity. Validation at this stage is mostly trying to understand if the high-level semantics of the entities make sense together.
Then start sketching in the key functions, especially those involved in collaborations. Make sure objects in a collaboration can reach each other through the relationships. At this stage I'll be using a drawing tool (StarUML).
Then come the gedanken experiments. I mentally walk through the trickiest use cases I can think of and see if I can envision a way to address them with the given design. This isn't pseudocode, just stepping through each of the major tasks/functions and following the lines of the diagram to make sure I'm not missing callbacks, circular dependencies, etc.
I think one key is to not get too married to any particular aspect of the design until you've satisfied yourself that it will probably work reasonably well. In my mind, if you can't step through a design mentally to evaluate/validate it you either lack some understanding of the problem, or the design on paper isn't complete enough...
Then, time permitting, set that one aside and see if you can come up with something really different...
If you're building it all on your own, just make sure it's adaptable, because there's no way you'll think of everything on the first shot.
Get some big paper. Draw everything out, and be messy. Don't worry about making it perfect. Put everything down that you think of, cross out stuff as it proves to not be useful. The paper will look like your mind threw up pieces of an object model all over the place. As you think of things that have already been written down, make those things stand out. At the end of this process, you'll have a mess, but for sure you'll have a lot of good ideas. At this point, I would recommend showing this to people, but since you said that's out of the question, we'll move on.
Now sit down in front of a computer with a UML tool and map out something that resembles the highlights of your brain dump. Think of the major pieces of the object model and then think of the more minor things that enable those pieces to work together. Once you have settled on something, turn that UML into code and go about writing some tests to see if it works. Rinse and repeat.

What are effective ways to log and track programming mistakes?

On occasion I've heard people discuss the benefits of keeping track of programming mistakes, if for no other reason than it increases awareness of common errors. I've started to keep a list of bugs that I find in my code, along with what could have led to them. The main question I have is this:
What information related to my mistakes should I be keeping track of so that I can improve as a programmer?
And a couple other questions related to this:
How do I use this information once I start logging my mistakes?
Is tracking mistakes truly beneficial?
This is only useful if you are actually vigilant with tracking and reviewing. When I was working on a team, no matter how thoroughly we documented that, for example, our servers in the production environment were NATted and would not be able to resolve their own domain names or public IP addresses, every 6 months I'd get a call at 4 AM from the deployment team or dev team because a new developer either forgot or was unaware.
I remember a particular engineer who was responsible for deploying; he had paper checklists, and we built him deployment tools that forced him to record his checklist, yet he would always forget to set the connection string (resulting in the 4 AM phone call). The point is, it's only worth it if you're going to use it vigilantly.
I've found the best way to use this is by implementing your rules into a code analyzer like fxcop.
I think what is more useful than keeping a log of individual mistakes is making sure you come away with a real understanding of why it was a mistake in the first place. Most mistakes stem from a lack of understanding about one thing or another; correct that understanding and you eliminate an entire set of potential mistakes in the future. If I were to log anything, it would be what I learned from the experience that I didn't know before, not the specifics of the mistake that was made, which is likely to be of limited usefulness when you look back at it later.
what was the mistake
how can it be avoided
add the latter to an appropriate checklist, and refer to it as often as appropriate
I think tracking mistakes can be worthwhile, but in my experience it helps a lot to categorize them at some level.
Every programmer is going to make enough mistakes over the course of their career to fill an encyclopaedia. If you make a huge checklist out of all of them then you're never going to get any coding done because you'll eventually be spending all of your time going over your checklist. So: categorize your mistakes in some way that makes sense to you so you can rifle through your list looking at the most important mistakes for the sort of code you're currently working on.
Also, to add to the above as far as what to collect:
what are the symptoms of the mistake (so you can find it later)
how you actually solved it
Yes, tracking your personal mistakes is beneficial. Refer to the SEI for numerous data points (here's one at random). One such methodology is the Personal Software Process (PSP). It's too long to go into here, but here's a book about it. There's also this free SEI publication on PSP.
If you balk at SEI and think Agile is the way to go, you'll probably get better mileage out of a book like Clean Code: A Handbook of Agile Software Craftsmanship (publisher website).
Bottom line: disciplined developer = good, undisciplined developer = bad.
I would also want to ask the question of how much time would be required to accurately track the mistakes, and if that time could be better spent directly on improving the software instead. If you can do this in a minimal amount of time and are able to refer back to your records to prevent future mistakes, it may be valuable. In the long run though, I think it will be better to stick with an absolutely high level list of common mistakes you make.
I'd look for trends or similar types of errors that tend to recur. Once you're aware of the types of errors you make, you can be alert for them, or you can change your coding style to avoid them.
As an example, I worked very closely with my office-mate in a previous life. At one point I was halfway through my first sentence explaining some odd behavior, and he interrupted with, "increment your counter." I still double-check my loop counters today!
Instead of keeping a log of future mistakes, maybe you could review your bug tracker history and try to find common types of errors which keep occurring.
I'm so inspired I think I'll try this tomorrow.
"What information related to my mistakes should I be keeping track of so that I can improve as a programmer?"
Read other people blogs. Write your own. You don't have to publish it. But you do have to turn each mistake into a story.
Don't drown this in metadata. This isn't amenable to a database or even a bug tracker. A mistake is a story where someone made a bad choice and recovered from it.
Here's your outline.
Context. What was going on.
Situation. The problem you were trying to solve.
What you did. Why it was bad.
What you should have done. Why it was better.
What changed. What you learned.
You should also record your successes in almost the same form.
Context. What was going on.
Situation. The problem you were trying to solve.
What you did. Why it was good.
What you learned.
When in doubt, watch a movie. Seriously. Characters are confronted with choices, make mistakes, and recover from those mistakes. That story line is the essence of drama. A mistake is the same thing.
Be careful what you log. Not every bad situation is a mistake. Some situations are inevitable or so far outside your control that nothing can be done about it.
Bad planning on someone else's part which puts you into panic-mode isn't your mistake. You can meticulously log everyone else's mistakes, but what's the point? If you're tracking what other people are doing, you should seek therapy.
Bad planning which provides inadequate budget, schedule or communication about changes may be a compound problem.
You didn't provide enough planning information up front.
In the scramble to cope, you made another mistake, which led to problem resolution and recovery.
In hindsight (which is always 20/20) you can see a decision which was wrong. However, at the time, it was based on best available information. That's not a mistake. Any sentence that begins "If we'd known that X..." is useless for mistake analysis. You can try to write a checklist of all the things you need to know next time you make that decision. But next time, you won't be making exactly that decision and some other piece of information will turn up missing.
Never call other people's actions mistakes.
Find the root causes. It's a root cause when it's something you can't control.
Don't retroactively call information that turned up missing a mistake.
I have been thinking the same thing. For the time being I keep a little spreadsheet with bugs I correct. The purpose of this is to make myself aware of the more common errors being made.
Hopefully I will start to see patterns and avoid making common errors over and over again.
I find that this makes my job more interesting, probably since I am an analytical person who likes to collect data and information in a structured manner.
You should try to relate the errors to the use of inadequate tools, habits, methods, or knowledge. In most cases you will find there would have been ways to catch the error earlier. Things like type safety, unit testing, code review, coding standards, better API design, better documentation, and a better working environment come to mind.
I use JIRA with a custom workflow. For a bug, the last step before an issue is closed is a screen that allows me to select a "root cause". I've got a history of the root cause of every bug that's occurred in the last 3 systems I've built.

OOP Problems to use for Coding Tests during interviews

As a second interview, I get people to sit down and write code... I try to make the problem really technology-independent.
The programming problems I have don't really exercise people's OO abilities. I tend to try to keep the coding problem solvable within 2 hours-ish. So I've struggled to find a problem small enough, and involved enough, that it exposes people's OO design skills.
Any suggestions?
This is a problem that I use in some training sessions; it looks simple but is tricky OOP-wise:
Create model classes that will properly represent the following constructs:
Define a Shape object, where the object is any two dimensional figure, and has the following characteristics: a name, a perimeter, and a surface area.
Define a Circle, retaining and accurately outputting the values of the aforementioned characteristics of a Shape.
Define a Triangle. This time, the name of the triangle should take into account if it is equilateral (all 3 sides are the same length), isosceles (only 2 sides are the same length), or scalene (no 2 sides are the same).
You can go on and on with quadrilaterals (which include squares, rectangles, rhombi, etc.) and other polygons.
The way that they would solve the above problems would reveal the people who understand OOP apart from those who don't.
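For reference, a bare-bones Python sketch of the kind of answer that separates the two groups (the exercise itself is language-neutral; this is just my illustration):

    import math

    # The base class fixes the interface: every Shape reports a name,
    # a perimeter, and a surface area.
    class Shape:
        @property
        def name(self):
            raise NotImplementedError

        def perimeter(self):
            raise NotImplementedError

        def area(self):
            raise NotImplementedError

    class Circle(Shape):
        def __init__(self, radius):
            self.radius = radius

        @property
        def name(self):
            return "circle"

        def perimeter(self):
            return 2 * math.pi * self.radius

        def area(self):
            return math.pi * self.radius ** 2

    class Triangle(Shape):
        def __init__(self, a, b, c):
            self.sides = (a, b, c)

        @property
        def name(self):
            # The name reflects how many distinct side lengths there are.
            distinct = len(set(self.sides))
            return {1: "equilateral", 2: "isosceles", 3: "scalene"}[distinct]

        def perimeter(self):
            return sum(self.sides)

        def area(self):
            s = sum(self.sides) / 2  # Heron's formula
            a, b, c = self.sides
            return math.sqrt(s * (s - a) * (s - b) * (s - c))

    print(Triangle(3, 3, 3).name)      # equilateral
    print(round(Circle(1).area(), 2))  # 3.14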
ideally, you want to present a problem that appears difficult, but has a simple, elegant, obvious solution if you think in OO terms
perhaps:
we need to control access to a customer web site
each customer may have one or more people to access the site
different people from different customers may be able to view different parts of the site
the same person may work for more than one customer
customers want to manage permissions based on the person, department, team, or project
design a solution for this using object-oriented techniques
one OO solution is to have a Person, a Customer, an Account, and AccountPermissions, where the Account specifies a Person and a Customer and an optional Parent Account. the use of a recursive Account object collapses the otherwise cumbersome person/team/department/project structure a direct ERD solution might yield
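a rough python sketch of that shape (the class names come from the answer above; everything else is invented):

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Person:
        name: str

    @dataclass
    class Customer:
        name: str

    @dataclass
    class Account:
        person: Person
        customer: Customer
        parent: Optional["Account"] = None  # recursion replaces dept/team/project layers
        permissions: List[str] = field(default_factory=list)

        def can(self, permission):
            # a permission holds if granted here or anywhere up the parent chain
            if permission in self.permissions:
                return True
            return self.parent.can(permission) if self.parent else False

    alice = Person("alice")
    acme = Customer("acme")
    dept = Account(alice, acme, permissions=["view-reports"])
    proj = Account(alice, acme, parent=dept)
    print(proj.can("view-reports"))  # True -- inherited from the parent account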
I have used the FizzBuzz Programming Test, and I can shockingly corroborate the claims made by the article. As a second follow-up, I have asked candidates to compute the angle(s) between the hands on an analog clock. We set up a laptop with VS 2008 installed and the stub in place; all they have to do is fill in the implementation.
I am always stunned at how poorly candidates do on these two questions. I really am.
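For reference, the clock problem reduces to two facts: the minute hand moves 6° per minute, and the hour hand moves 30° per hour plus 0.5° per minute of drift. A sketch (my own, not the actual VS 2008 stub):

    def clock_angle(hour, minute):
        """Smaller angle in degrees between the hands at hour:minute."""
        hour_angle = (hour % 12) * 30 + minute * 0.5  # 30 deg/hour + drift
        minute_angle = minute * 6                     # 6 deg/minute
        diff = abs(hour_angle - minute_angle)
        return min(diff, 360 - diff)

    print(clock_angle(3, 0))   # 90.0
    print(clock_angle(3, 30))  # 75.0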
Designing a Social Security application is something I ask a lot of people about during interviews.
The nice thing about this is everyone is aware of how it works and what things to keep track of.
They also have to justify their design and this really helps me get inside their head :)
(As there is lots of flexibility here)
Whether or not people do some coding in the interview, I make it a point to ask this:
Tell me about a problem you solved recently using object oriented programming. You'd be surprised how often people cannot answer that simple question. A lot of times I get a blank stare, or they say something like "what do you mean? I program in .NET, which is all object oriented."
These aren't specifically OO Questions, but check out the other questions tagged interview-questions
Edit: What about implementing some design patterns? I don't have the best knowledge in the area, but it seems as if you would be getting two questions for the price of one. You can test for both OO and design patterns in the one question.
How about some sort of simple GUI? It's got inheritance, overriding, possibly events. If you mean for them to actually implement it as part of the test, then you could hand them a blank Windows Form with an OnPaint() and tell them to get to it.
You could do worse than ask them to design a MapReduce library with a single-process implementation. Will the interface still work for a distributed implementation? What's the exception-handling policy? Should there be special support for chaining MapReduce jobs in a pipeline? What's the interface to the inputs and outputs? How are inputs chunked up? Can different inputs in one job go to different mappers? What defaults are reasonable?
A good solution in Python takes about a page of code.
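To give a feel for the scale, here is a deliberately naive single-process sketch (my own illustrative code; it dodges all of the interface questions above):

    from itertools import groupby
    from operator import itemgetter

    def map_reduce(inputs, mapper, reducer):
        """Single-process MapReduce: mapper yields (key, value) pairs;
        reducer receives (key, [values]) and yields results."""
        # Map phase: apply the mapper to every input record.
        intermediate = []
        for record in inputs:
            intermediate.extend(mapper(record))
        # Shuffle phase: group the intermediate pairs by key.
        intermediate.sort(key=itemgetter(0))
        results = []
        for key, group in groupby(intermediate, key=itemgetter(0)):
            values = [value for _, value in group]
            results.extend(reducer(key, values))
        return results

    # Usage: the classic word count.
    def mapper(line):
        return [(word, 1) for word in line.split()]

    def reducer(word, counts):
        return [(word, sum(counts))]

    lines = ["the quick brown fox", "the lazy dog"]
    print(map_reduce(lines, mapper, reducer))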
I've got a super simple set. The idea is mainly to use them to filter out people who really don't know their stuff rather than filtering in the rock stars.
These are all 5 minute white-board type questions, so they are really not that hard. But the act of writing up code, and talking through it reveals a lot about a candidate - and is brilliant for exposing those that can otherwise BS through the talk.
Write a method that takes a radius of a circle as an argument, and returns the area of the circle (You would be amazed how many people struggle on this one!)
Write a program that accepts a series of numbers as arguments from the command line. Add them up, and print the sum
Write a class that acts as a keyed counter (basically a map that keeps track of how many times each key is "counted")
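For the third one, the kind of answer that takes five minutes on a whiteboard (a Python sketch; any language works):

    from collections import defaultdict

    class KeyedCounter:
        """A map that tracks how many times each key has been counted."""
        def __init__(self):
            self._counts = defaultdict(int)

        def count(self, key):
            self._counts[key] += 1

        def get(self, key):
            return self._counts[key]

    counter = KeyedCounter()
    counter.count("apple")
    counter.count("apple")
    print(counter.get("apple"))  # 2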