I would like to know the fastest/best way to learn the business logic in a new project.
Most projects have been running for years; some are poorly documented, but you still need to know how to work with them. What is the best way to do this? (Use case diagrams / support from colleagues / code analysis, etc.)
The problem with verbose logging is that you may be overloaded with details that do not help, and even if you have the right details, you may misunderstand the big picture.
Moreover, if projects run for years, and are poorly documented, chances are the little available documentation is already obsolete. And chances are, the team did not invest heavily in logging either.
Reverse engineering the code is another approach, but where do you start when there are millions of lines of code in the legacy system? Some things can be easily read in the code, but much of the more complex, emergent behavior arises from the interactions between many classes, and this kind of knowledge is the most difficult to extract.
So here is the way to go:
1. Talk to colleagues. The best way to move knowledge from one brain to another is direct conversation; it works much better than any formal diagram or documentation. Unfortunately this is not always possible (e.g. the team has left).
2. If 1 is not possible, understand the business user's point of view. Maybe there is a user manual? Maybe some colleagues in user support can help? If none of those are available, the last resort is to spend some time in a day of the user's life. You will not understand how the system works, but at least you'll get a quick introduction to what the system is supposed to do, what matters to the users, and maybe some business rules.
3. Check for automated test cases. Such test cases are in fact a hidden and up-to-date documentation resource (see the sketch after this list).
4. Check for non-automated test cases, in particular user acceptance tests and integration tests. If these are not automated, chances are they are already obsolete, but it's better than nothing.
5. Reverse engineer the code. Identify the main classes and how they interact. Some simplified class diagrams will help you understand how classes are related (no need to document properties and methods: those can be found in the code). And a few sequence diagrams will help you get a picture of the more complex interactions.
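To illustrate point 3: even a tiny automated test can record a business rule more durably than a wiki page, because it fails the moment the rule changes. A minimal sketch in Python (Customer, Order, and the free-shipping rule are all invented for illustration, not taken from any real project):

```python
from dataclasses import dataclass

@dataclass
class Customer:
    tier: str

@dataclass
class Order:
    customer: Customer
    total: float

    def shipping_cost(self) -> float:
        # Hypothetical business rule: gold-tier customers ship free.
        return 0.0 if self.customer.tier == "gold" else 4.95

def test_gold_customers_get_free_shipping():
    # The test name and body document the rule, and they stay current:
    # if the rule ever changes, this test fails.
    assert Order(Customer("gold"), total=12.0).shipping_cost() == 0.0
```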
Run the server, log verbosely everywhere, follow the code flow, and dig in.
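A minimal sketch of that verbose-logging approach using Python's stdlib logging (the logger name and place_order function are made up for illustration; a real legacy system would use whatever logging framework it already has):

```python
import logging

# Turn on DEBUG-level output everywhere, with timestamps and logger names,
# so you can follow the code flow while clicking through the application.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)
log = logging.getLogger("orders")

def place_order(order_id: int) -> None:
    log.debug("entering place_order(order_id=%s)", order_id)
    # ... existing business logic ...
    log.debug("leaving place_order(order_id=%s)", order_id)

place_order(42)
```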
My friend asked me this question today: how would you test a vending machine, and what would its test cases be? I was able to give some test cases, but they were just random thoughts. I want to know how to systematically test a product or a piece of software. There are lots of kinds of testing: unit testing, functional testing, integration testing, stress testing, etc. But how do I test systematically and think like a real tester? Can someone please explain how all these kinds of testing differ and which ones apply in a real scenario, for example testing a file system?
Even long-time, well-respected, professional testers will tell you: it is an art more than a science.
My trick for designing new test cases starts with the various types of tests you mention (a thorough plan must include them all), but beyond that I try to build a list of all the ways I can interact with the code/product.
For the vending machine example, there are tons of parts, inside and out.
Simple testing, using the product as it is designed to work, gives plenty of cases (two of them are sketched in code after this list):
Does it give the correct change
How fast can it process the request
What if an item is out of stock
What if it is overfilled
What if the change drawer is full
What if the items are too big, or badly racked
What if the user puts in too little money
What if it is out of change
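As a sketch of how two of these cases might become automated tests (make_change is a toy stand-in for the machine's change logic, not a real vending-machine API):

```python
import pytest

def make_change(price_cents: int, paid_cents: int) -> int:
    """Toy model of the machine's change logic, for illustration only."""
    if paid_cents < price_cents:
        raise ValueError("insufficient payment")
    return paid_cents - price_cents

@pytest.mark.parametrize("price, paid, change", [
    (75, 100, 25),  # ordinary overpayment: a quarter back
    (75, 75, 0),    # exact payment: no change due
])
def test_correct_change(price, paid, change):
    assert make_change(price, paid) == change

def test_too_little_money_is_rejected():
    with pytest.raises(ValueError):
        make_change(75, 50)
```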
Then there are the interesting cases, which normal users wouldn't think about.
What if you try to tip it over
Give it a fake coin
Steal from it
Put a coin in with a string
Give it funny amounts of change
Give it half-ripped bills
Pry it open with a crow-bar
Feed it bad power/brownout
Turn it off in the middle of various operations
The way to think like a tester is to figure out every possible way you can attack it, from all the "funny cases" in usual scenarios to all the methods that are completely outside how it should be used. Any point of input, including ones you might think the developers/owners have control over, is fair game.
You can also use many automated test tools, such as pairwise test selection, model-based test toolkits, or for software, various stress/load and security tools.
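To give a feel for pairwise selection, here is a small greedy sketch in Python (the parameters and their values are invented for the vending-machine example; dedicated tools like PICT do this job far better):

```python
from itertools import combinations, product

# Hypothetical test parameters for the vending machine.
params = {
    "payment": ["coins", "bill", "exact_change"],
    "stock": ["in_stock", "out_of_stock"],
    "change_drawer": ["empty", "full"],
}
names = list(params)
values = list(params.values())

def pairs_of(row):
    """All (parameter, value) pairs a single test row exercises."""
    return {(i, row[i], j, row[j]) for i, j in combinations(range(len(names)), 2)}

# Start with every value pair across every two parameters uncovered.
uncovered = set().union(*(pairs_of(row) for row in product(*values)))

# Greedy set cover: repeatedly pick the row covering the most new pairs.
tests = []
while uncovered:
    best = max(product(*values), key=lambda row: len(pairs_of(row) & uncovered))
    tests.append(dict(zip(names, best)))
    uncovered -= pairs_of(best)

for t in tests:
    print(t)
```

This usually selects fewer rows than the full 3 x 2 x 2 = 12 combinations while still exercising every pair of parameter values at least once; for bigger models the savings grow dramatically.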
I feel like this answer was a good start, but I now realize it was only half of the story.
Coming up with every single way you can possibly test the system is important. You need to learn to stretch the limits of your imagination, your problem decomposition skills, your understanding of chains of functionality/failure, and your domain knowledge about the thing you are testing. This is the point I was attempting to make above. With the right mindset, and with enough vigilance, these skills will start to improve very quickly - within a year, or within a few years (depending on the complexity of the domain).
The second level of becoming a very competent tester is to determine which tests you should care about. You will always be able to break every system, in a ton of different ways. Whether those failures are important or not is a more interesting question, and is often much more difficult to answer. The benefit to answering this question, though, is two-fold.
First, if you know why it is important to fix pieces of the system that break (or to skip fixing them!), then you can understand where you should focus your efforts. You know what you can afford to spend less time testing, and what you must spend more time on.
Second, and more importantly, you will help your team expose where they should be focusing their efforts. You will start to uncover things that are called "second-order unknowns". Your team doesn't know what it doesn't know.
The primary trick that helps you accomplish this is to always ask "why?", until whoever you are asking is stumped.
An example:
Q: Why this test?
A: Because I want to exercise all functionality in the system.
Q: Why does this system function this way?
A: Because of the decisions that the programmer made, based on the product specifications.
Q: Why did our product specifications ask for this?
A: Because the company that we are writing the software for had a requirement that the software works this way.
Q: Why did that company we are contracting for add that as a requirement?
A: Because their users need to do :thing:
Q: Why do the users need to do :thing:?
A: Because they are trying to accomplish :xyz:
Q: Why do they need to accomplish :xyz:?
A: Because they save money by doing :abc:
Q: Why did they choose :xyz: to solve :abc:?
A: ... good question.
Q: What could they do instead?
A: ... now that I think about it, there's a ton of options! Maybe one of them works better?
With practice, you will start knowing which specific "why" questions to ask, and which to focus on. You will also learn to start deeper down the chain, and be less mechanical in your approach.
This is no longer just about ensuring that the product matches the specifications that the dev, PM, customer, or end user provided. It also helps determine whether the solution you are providing is the highest-quality solution your team could provide.
A hidden requirement of this is that you must learn that half your job as a tester is to ask questions all the time. You might think that your teammates will be annoyed at this, but hopefully I've shown that it is crucial to both your development and the quality of the product you are testing. Smart and curious teammates who care about the product (and who aren't busy and frustrated) will love your questions.
@brett:
Suppose you have the system you want to test. The main thing is to make sure you have a test scenario or test plan. Once you have that, it becomes much clearer how and what to test about the system.
Once you have a test plan, your vision becomes clear about what is expected and what is unexpected. For unexpected behavior, you can recheck and file an issue if you think it is not correct. I have given you an answer for the general case; if you have a real-world scenario, it may be really helpful to provide guidelines for that.
All,
I am a developer but would like to know more about testing processes and methods. I believe this helps me write more solid code, as it improves the cases I can cover with my unit tests before delivering the product to the test team. I have recently started looking at Test-Driven Development and the exploratory testing approach to software projects.
Now it's easier for me to find test cases for the code that I have written. But I am curious to know how to discover test cases when I am not the developer of the functionality under test.
Say, for example, we have a basic user registration form like those we see on various websites. Assuming the person testing it is not the developer of the form, how should one go about testing its input fields? What would your strategy be? How would you discover test cases? I believe this kind of testing benefits from an exploratory testing approach; I may be wrong here though.
I would appreciate your views on this.
Thanks,
Byte
Bugs! One of my favorite starting places on a project for adding new test cases is to take a look at the bug tracking system. The existing bugs are test cases in their own right, but they also can steer you towards new test cases. If a particular module is buggy, it can lead you to develop more test cases in that area. If a particular developer seems to introduce a certain class of bugs, it can guide testing of future projects by that developer.
Another useful consideration is to look more at testing techniques, than test cases. In your example of a registration form, how would you attack it from a business requirements perspective? Security? Concurrency? Valid/invalid input?
Testing Computer Software is a good book on how to do all kinds of testing: black box, white box, test case design, planning, managing a testing project, and probably a lot more I missed.
For the example you give, I would do something like this:
For each field, I would think about the possible values you can enter, both valid and invalid. I would look for boundary cases; if a field is numeric, what happens if I enter a value one less than the lower bound? What happens if I enter the lower bound as a value? Etc.
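For instance, a parametrized boundary-value test might look like this (the age field, its bounds, and validate_age are all assumptions for the sake of the example):

```python
import pytest

AGE_MIN, AGE_MAX = 18, 120  # assumed bounds for a numeric "age" field

def validate_age(age: int) -> bool:
    """Stand-in for the form's real numeric validator."""
    return AGE_MIN <= age <= AGE_MAX

@pytest.mark.parametrize("age, expected", [
    (AGE_MIN - 1, False),  # one below the lower bound
    (AGE_MIN, True),       # exactly the lower bound
    (AGE_MAX, True),       # exactly the upper bound
    (AGE_MAX + 1, False),  # one above the upper bound
])
def test_age_boundaries(age, expected):
    assert validate_age(age) == expected
```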
I would then use a tool like Microsoft's Pairwise Independent Combinatorial Testing (PICT) Tool to generate as few test scenarios as I could across the cases for all input fields.
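A PICT model for the registration form could look something like this (the field names and value classes are invented; save it as registration.model and run `pict registration.model` to get a pairwise-covering set of test rows):

```
# registration.model -- hypothetical value classes for each field
Username: valid, empty, too_long
Email:    valid, missing_at_sign, empty
Password: valid, too_short, empty
```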
I would also write an automated test to pound away on the form using random input, capture the results and see if the responses made sense (virtual monkeys at a keyboard).
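A sketch of that virtual-monkey idea (submit_form here is a stub standing in for the real form handler, which is why this snippet runs on its own):

```python
import random
import string

def submit_form(username: str, email: str) -> int:
    """Stub for the real form handler; returns an HTTP-like status."""
    if not username or "@" not in email:
        return 400
    return 200

def random_field(max_len: int = 64) -> str:
    """Random printable junk: sometimes empty, sometimes oversized."""
    n = random.choice([0, 1, max_len, max_len * 4])
    return "".join(random.choices(string.printable, k=n))

def test_monkey_always_gets_a_coherent_answer():
    for _ in range(1000):
        status = submit_form(random_field(), random_field())
        # Whatever the monkey types, the form should respond sensibly,
        # never hang or raise an unhandled exception.
        assert status in (200, 400)

test_monkey_always_gets_a_coherent_answer()
```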
Ask questions. Keep a list of question words and force yourself to come up with questions about the product or a feature. Lists like this can help you get out of the proverbial box or rut. Don't spend too much time on a question word if nothing comes to you.
Who
Whose
What
Where
When
Why
How
How much
Then, when you answer them, ask "else" questions. This forces you to distrust, for a moment at least, your initial conclusions.
Who else
Whose else
etc..
Then, ask the "not" questions--negate or refute your assumptions, and challenge them.
Who not (e.g., Who might not need access to this secure feature, and why?)
What not (what data will the user not care about? What will the user not put in this text box? Are you sure?)
etc...
Other modifiers to the questions could be:
W else
W not
W risks
W different
Combine two question words, e.g., Who and When.
In the case of the form, I'd look at what I can enter into it and test various boundary conditions there, e.g. what happens if no username is supplied? I'm reminded that there are a few different forms of testing:
Black box testing - This is where you test without looking inside what is being tested. The challenge here is that not being able to see inside makes it harder to judge which tests are useful and how many different tests are worthwhile. This is, of course, what some default testing looks like.
White box testing - This is where you can look at the code and use metrics like code coverage to ensure that you are covering a given percentage of the code base. This is generally better, as in this case you know more about what is being done.
There are also performance tests, as opposed to logic tests, that are worth noting somewhere, e.g. how fast the form validates my input rather than just whether the form does this.
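As a small illustration of the black box stance for the no-username case (the validator below is a made-up stub, not any real form's API):

```python
# Black box: drive only the public interface and assert on what comes back.
def validate_registration(form: dict) -> dict:
    """Hypothetical stand-in for the form's server-side validation."""
    errors = {}
    if not form.get("username"):
        errors["username"] = "required"
    return {"ok": not errors, "errors": errors}

def test_missing_username_is_rejected():
    result = validate_registration({"username": "", "password": "x"})
    assert result["ok"] is False
    assert "username" in result["errors"]
```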
Identify your assumptions from different perspectives:
How can users possibly misunderstand this?
Why do I think it acts or should act this way?
What biases might I have about how this software should work?
How do I know the requirements/design/implementation is what's needed?
What other perspectives (users, administrators, managers, developers, legal) might exist on priority, importance, goals, etc, of this software?
Is the right software being built?
Do I really know what a valid name/phone number/ID number/address/etc looks like?
What am I missing?
How might I be mistaken about (insert noun here)?
Also, use any of the mnemonics and testing lists noted here:
http://www.qualityperspectives.ca/resources_mnemonics.html
Discussing test ideas with others. When you explain your ideas to someone else, you tend to see ways to refine or expand on them.
Group brainstorming sessions. (or informally in pairs when necessary)
see these brainstorming techniques
Make data tables with major features listed across the top and side, and consider possible interactions between each pair. Doing this in three dimensions can get unwieldy.
Keep test catalogs with common questions and problem types for different kinds of tasks, such as integer validation, workflow steps, etc.
Make use of Exploratory Testing Dynamics and the Satisfice Heuristic Test Strategy Model by James Bach. Both offer general ways to start thinking more broadly or differently about the product, which can help you switch between boxes and heuristics in testing.
We've all been there. You have written some code and unit tests, the tests all pass, and the code is decent (nothing's perfect, right?). Then, someone who is sure that they know better than you comes along and decides to change your code or the interfaces to your code just because he/she does not like the variable/class names that you used. No "real" refactorings, no real optimizations, no real improvement -- just different words -- not necessarily better words, just different.
My real problem with this is that (a) it's a waste of time and (b) it shows a blatant disrespect for the fellow developer who wrote the code in the first place.
My visceral response is to lash out, but that's counterproductive. Instead, I thought that I might write a paragraph or two as a sort of "Charter" or "Philosophy" to be adopted for the project. I'm wondering if anyone else has tried this, and if so, was it successful?
After looking at the initial comments below (which are appreciated), I think that my problem is primarily that this change broke the build for code that was already working. So time needed to be spent fixing the code for what was (in my opinion) a non-value-added change.
-- Thanks
...decides to change your code or the interfaces to your code just because he/she does not like the variable/class names that you used.

My real problem with this is that (a) it's a waste of time and (b) it shows a blatant disrespect for the fellow developer who wrote the code in the first place.

My visceral response is to lash out...
I see some VERY CONCERNING things in those statements.
Naming is REALLY, REALLY important. It is worth rewriting code to get it right.
It is not YOUR code
How is it disrespectful?
You are taking it too personally.
I once worked with someone who freaked out when I made changes to "his" code. His code was horrible; it was buggy and unmaintainable. He was always staying late, fighting fires and breaking things - basically a negative contributor. One weekend I rewrote all his bad code for a big piece of functionality in a project, and when he came back in on Monday he had a hissy fit. I am not saying your stuff is horrible, but maybe you need to calm down and be more objective about it.
Don't take it so personally. Step back and think about it - maybe "your" code needed fixing
We might be able to give better answers if you posted the code and changes, or at least some better idea of the situation with an example or two.
EDIT:
After seeing the code change and finding out that the build was broken, I am going to have to change the tone of this answer. I understand Steve's frustration - and I agree - that is not a good change. It makes a specific typedef more general and no longer very descriptive.
While I think some of my points are valid, in this case it looks like the changes were not appropriate.
The issue of code "ownership" is irrelevant. If the code changes are useless, then no one on the team should be happy about them. If they are good changes, then everyone should be happy about them. If there is a difference of opinion, then you all need to find common ground.
Breaking the build is not a good thing.
Steve, sorry if I came down harsh - it looks like in this instance you are justified in your frustration, but not because it is "your" code.
One thing that might help in this sort of situation is to require code reviews for all changes. People are less likely to make pointless changes if someone else has to review them first. If they can actually convince another developer that their change should go in, then maybe it isn't so pointless after all.
Heey,
Guys! WE all need TO TALK!
Just sit together and TALK! There are always reasons to change and there are always reasons NOT to.
Decide together!
Don't just go to Stack Overflow or a forum and ask this kind of question.
The new dev does it - he gets responses from the community, probably positive (yeah, bad code should be refactored).
The current dev does it - he gets responses from the community too: "What kind of idiot would make changes like that!"
And the result is: Counterproductive, destructive, offensive environment for a long time.
Who wants it?
Just put your arguments on the table and that's it.
New dev needs some introduction too.
Old dev needs to be listened to TOO.
This should be collaborative work AND not pissing each other off.
Decide together, talk, as THE TEAM.
And... better ask questions like "How is it better to refactor this?"
Cheers.
In any software development team with a size > 1, standardization is key. Not only for each developer to understand the other's code, but so that the people that come along in 2 years, 5 years and 10 years can look at any part of the code and see a clear and consistent pattern. Will you... and the rest of the team... seriously still be there, working on this project, years down the line?
If you both "just have your way of doing things" and there is no formal standard for the project/company, I suggest you work with the team and/or your boss to suggest a formal standard be adopted. There are many standards already published for the various environments that you can use either as the standard, or as a starting point.
If there is a formal standard, everyone on the team is obligated to follow it... no matter how much "better" they think their way is.
So much for the hard skills.
Now on the soft skills side...
You and your colleague need to develop a healthy relationship, or decide to work in different places. Tit-for-tats that result in people feeling that they want to lash out will make everyone unhappy, not to mention gravely jeopardize the project everyone is being paid to complete. Look for a person you both respect (maybe your boss, maybe a respected and level-headed senior member of the team, maybe HR if you have a good HR department). Explain to that person what the problem is and that it makes you feel unvalued and disrespected. Ask for help talking through the situation with your colleague and agreeing to a better way of working together.
Finally, you need to be open to the possibility that your colleague may be making objectively correct changes, even if the manner he's making them in offends you. Separate correct coding from correct interpersonal interactions. Do the right thing for the project.
Well if that guy is going to maintain your code, let him do whatever he wants to.
Just remember that it is not "your" code. The code belongs to the company you work for. You wrote the code and you got paid for it. Let management do whatever they want with it.
Don't take things personally, move on.
Sometimes, changing names might be justified. It can be confusing if half the project refers to a person's sex, and then you check in some new code that refers to gender or something. Okay, this might be a bad example as technically they are two different things and their meaning is most likely still obvious. But if a project's code uses two different terms to refer to the same concept, it can be confusing.
Usually I try to leave people's code alone, unless I have some justification for refactoring. Luckily the same seems to go for my colleagues, so no, I have not had the need for writing such a charter yet.
How about using an automated build system, so that when this person changes the code and breaks something, the team will get an alert about it? This solves your problem of having to waste your time fixing something broken by someone else's change to your code. This way everyone will know that so-and-so made a change and broke the build, and can see it for themselves. The rule is "don't break the build".
You should be discussing this with the person who did it, in a non-threatening manner.
I believe every developer should take responsibility and hence own some of the code, but not all of the code. I understand the code that I've written better (irrespective of how good/bad it is) than any other guy that has ever seen it. Therefore the changes I make will be faster and less prone to error.
I don't mind anybody changing the code I've written later on, but I have a couple of conditions:
If you change the code and that causes something else to break, you are responsible for fixing it, not me.
If I don't agree with the changes you made I will change it back to the way I want it since I have to take responsibility for this piece of code in future.
Not all developers should be making changes to all the code, all the time. Only some of the time, for the purpose of getting to know the code (sharing knowledge).
I've never worked for an employer that endorses an "everyone can change anything at any time" policy. Developers own certain parts of the code, and they are asked specifically to make changes/refactor based on a development democracy.
You touch my code and break something, (1) you better have a good reason for the change that all developers agree with and (2) you better not leave broken things broken or ask me to go do the clean-up for you UNLESS you're my superior. I will humbly submit if that's the case.
I agree with Laurence that code reviews might help. This is an issue of how your team should work together. What might help is the notion of Egoless Programming - in a nutshell, considering the code as a joint product of the team, and trying to make decisions for the sake of the code rather than because of the programmer's ego. Your teammate violated the fourth commandment of egoless programming - "Don't rewrite code without consultation."
Maybe if your team is made aware of these principles, things will improve. I would try this.
Perhaps not completely on topic, but .... If you have developers who have the time to make changes to code just because they don't like the variable names used, then maybe the conversation should be about whether you have too many developers and which ones should be shown the door ... or how you're going to justify to management the bloated staff you have, especially in the current economic circumstances!
When hiring a .NET web developer, I give the candidate a coding test.
I tend to limit the candidate to the MSDN documentation installed on the test server - I think it holds everything the candidate needs to complete the task.
I admit this is not the normal situation, as I don't expect the candidate to do his day-to-day work without use of the web.
On the other hand, I don't want the candidate to google for a complete example and copy-paste it; I want to evaluate his skills.
The question is: do I need to allow free use of the web during the test?
If you think the whole coding test is wrong - I would like to hear alternatives you may have for me.
As you say 'I don't expect the candidate to do his work without use of the web', why not allow it during the test too? And what if he does copy and paste? I do that too. Surely the key is to know where to look, to be discerning with what you find, and to apply it intelligently. Do you want to hire someone with a terrific memory, or someone who can develop software for you?
When I was at school, calculators were just becoming affordable. As their use was seen as unavoidable, the exams were changed. Simple number-crunching was no longer tested in the way it was before (it was important then). Rather problem-solving techniques were to be tested.
I usually allow candidates to use whatever resources they want. After they're done, I sit down with them and go through their code together, ask questions like why they chose that particular approach etc.
If a couple of minutes of googling was enough not just to copy-paste some code but to learn enough about it to be able to defend the decisions within it, then he's intelligent enough!
There are tests where web access can be given, and there are tests where it doesn't really make sense.
Cases where it's fine to allow web access:
When it's unlikely the candidate could find even 60 percent of the code on the net
When you will ask the candidate to explain the code after he/she has completed it
A very specific solution using an SQL query, which is unlikely to be found on the web
Cases where it's fine not to allow web access:
Basic programs like recursion, Fibonacci, factorial, string manipulation, small trick programs, etc. There is no need for a computer at all in some of these cases.
I'm very sceptical about coding tests during interviews. A lot of the tests I have seen represent very specific (artificial, non-real-world) problems that you would use the internet to solve.
I think it's not really important to know how to solve such problems by heart - oftentimes it is much more important that you know how and where to search for answers.
If you want to test people during the interview, I think it is better to ask them some conceptual questions instead of a specific programming problem, e.g. questions about object orientation, polymorphism, the design of n-tier applications, etc.
Or, as an example from the ASP.NET world, ask the interviewee questions such as: What is ViewState? What is a postback? What is session/application state?
If you want to get an idea of how a candidate will perform in a job, I think it's best to try and make the conditions of the test as close as possible to the actual working conditions.
It should be pretty easy to prevent copy-and-pasters from slipping through the cracks by asking the candidate to explain his/her code.
Well, one thing you want to be aware of is that the developer you hire might not know everything that he will be thrown during the time he is working for you. If you ask him a question that he doesn't know off the top of his head you would want and expect him to research it and come back to you with proof that he understood the concepts that he just learned.
I say let them use the web - but ask them to explain in their own words how their code works. Most of my knowledge comes from online resources. However, I make sure that every line of code I write I understand.
There is a baseline of knowledge that developers in a particular field should have, but you also want to figure out how quickly he can learn new things. A good test, IMO, is to throw him a question you know he can't answer offhand and see how long it takes him to figure it out using the resources he would have as an employee of your company.
If your goal is to see what basic knowledge the candidate has and whether he can code without copying solutions from the web, then don't allow internet access. If you want to see what strategies he employs to get to a solution, let him use the web if he wants to.
I personally find it more interesting if a candidate can solve problems on a larger scale than just solving a simple programming problem. So I tend to ask him about the methods he uses when programming (Unit testing? Ever worked with it? What do you think of it?). This gives me a better picture than coding in an interview situation.
Sometimes it helps if you ask the candidates beforehand to bring a one-page coding sample to take a look at their coding style. This also saves you time during the interview.
It's important to make sure a candidate is resourceful - you don't want your programmer sitting there stuck, not moving forward; you want them to use whatever resources are at hand - be it MSDN, picking someone else's brains, using the web, etc. - to get the job done. Cut-and-paste from the web does seem like cheating, but (a) if you design your task carefully, it will be unique enough that there is no standard answer to copy from the web, and (b) isn't reusing existing code a key part of building software? It's not much different from using third-party libraries to avoid reinventing the wheel. On the downside, of course, you also want them to show they can develop algorithms, so the unique task needs to include some element that requires that, without the solution already being on the web. The trouble is, forums are the Achilles heel of all that, since candidates can simply ask for the solution and someone, somewhere, will hand over the answer unwittingly!
Allow the candidate to use the web, but tell him beforehand that if he uses it, you will have to evaluate HOW he solved the problem.
If he used the web for something simple, such as finding syntax or parameters he forgot, don't mark him down. This is normal.
If he used the web for something like looking at how a specific function is used, don't mark him down. This is normal.
If he searched for specific code and then copy-pasted it, ask him how the code works. If he can explain it, then there's no reason to mark him down. If he can't explain it without looking at the site where he got the code, you have to mark him down.
If he used stackoverflow.com, check his profile for questions, answers and badges. From there, you can check how good a programmer he is.
It all depends on what you want out of your successful candidate. I contest the view that knowing how to google makes you a good programmer, because the simple fact is that the internet is full of bad examples as well as good ones. You don't want your codebase to reflect how lucky your googler was on the day he cut and pasted all his code off the web. You want it to demonstrate sound practices, proven methodologies and elegant, efficient solutions that your team understands and is enthusiastic about - not a jumble of styles that bear no resemblance to each other. There's a wealth of good to be gotten from knowing how to get help from the interweb, but real knowledge and ancient wisdom is lost every day that people who don't really understand what they are doing are given jobs because they appear to solve problems with their ability to "google it".
If you really want to give your candidates access to the web then by all means do, but make the questions hard and scrutinise the results to see if they've picked the first solution they found or if they've picked the best solution to the problem.
Like many other respondents, I'd rather employ a resourceful developer who knows how to use the web to the fullest to draw on others' experiences and previous work than a developer who limits himself and his applications to the MSDN way of doing things.
I copy other people's code all the time - daily, in fact. The knack lies in finding the right solution quickly and integrating it into your existing work.
So let your candidate use the web, and ask him how he came to his solutions. You might learn more about him from his methods than from how well he can remember previous solutions.
Three things I'd do:
1. Let applicants send in a coding example along with their CV.
2. Let applicants produce some real-life code (maybe even pair-programming with a developer on your team); this will show you whether they can actually use the tools. The internet is a tool too, so they should be able to use it.
3. Let applicants solve a problem in pseudocode on a blackboard during the interview. In this case you can be their "internet" by helping them.
These three approaches will show you different things. The first is a good early-warning mechanism but can easily be faked (they could just download OSS code from the web somewhere). The second is good for seeing whether they can actually code, though they might score badly if they're unfamiliar with the tools you use. The third will show you whether they can solve theoretical problems, but won't show you whether they are good team players or write maintainable code.
I recently had a friend start talking to me on IM; he was in the middle of a coding test for a job interview. He had a couple of SQL questions. At first I thought: hell, you've got to do this yourself. I'm not going to help you cheat during an interview.
Then I thought about it again. I've been answering questions and talking to him about various technical issues on IM for years as part of his work. So when he encounters problems in the real world, if he gets the job, he'll do the same thing.
We don't talk about it much, but having a good network of friends to ask questions, and knowing how to search out relevant answers on the net are a big part of being an effective programmer or sysadmin. I've met people who were super smart programmers, but didn't really know how to find information online. They missed a lot, were kind of out of the loop. Knowing how to use resources should be important.
When I do interviews, I often ask people what websites they read, what development tools they use, and why. It's a similar thing. Sure, it's not about how they write x lines of code, but it is about how they work.
Now, how do you get around somebody just copying and pasting "answers"? Well, first, don't ask questions that have pat answers. Second, when I'm interviewing I like to give people some code, ask them to refactor it, and have them talk through what they are thinking. Then I ask them to write some new code that implements a feature, and I pair-program with them. It's hard to hide an inability to code when pair programming. While they are pairing, it totally makes sense to say, "Let's go look up the API on the date-time library."