What is CMS testing anyway?

My company recently did a POC and has decided to use a commercial CMS. It is being implemented and we have been asked to test it. What is there to test in a third-party CMS that has already been tested and is being sold on the market? Any direction would be great!

I would recommend adjusting your mind-set. What do you know about the test regime of this product? Starting from the point of view that "it's a commercially shipped product, it must have been tested, so I don't need to test it" is deeply flawed thinking.
First, all software has bugs.
Second, in testing the product you could reasonably focus on your proposed usage scenarios. You may choose patterns of use that were not anticipated by the development and test teams. At the very least you gain experience of the product's capabilities and limitations.
Third, installation into your environment may impact the system in unexpected ways. So at the very least the product must be exercised in your environment before you start to trust it. You need to explore the operational aspects, backups and restores for example. Now, before the system is live, is the time to find out how to recover from a disk crash.
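To make that last point concrete, here is a minimal sketch of a backup-and-restore smoke check, assuming a PostgreSQL-backed CMS and the standard pg_dump/pg_restore/psql tools; the database and table names are placeholders for whatever your product actually uses.

# Hypothetical backup/restore round-trip check for a PostgreSQL-backed CMS.
# Database names, the table checked and the tools used are assumptions --
# substitute whatever your environment actually runs on.
import subprocess

def backup(db: str, dump_file: str) -> None:
    # Custom-format dump so pg_restore can replay it
    subprocess.run(["pg_dump", "--format=custom", "--file", dump_file, db], check=True)

def restore(db: str, dump_file: str) -> None:
    # Assumes an empty target database has already been created
    subprocess.run(["pg_restore", "--dbname", db, dump_file], check=True)

def row_count(db: str, table: str) -> int:
    out = subprocess.run(
        ["psql", "--tuples-only", "--command", f"SELECT count(*) FROM {table};", db],
        check=True, capture_output=True, text=True,
    )
    return int(out.stdout.strip())

if __name__ == "__main__":
    backup("cms_copy", "/tmp/cms.dump")            # never point this at live data
    restore("cms_restore_test", "/tmp/cms.dump")
    src_rows = row_count("cms_copy", "content_items")          # placeholder table
    restored_rows = row_count("cms_restore_test", "content_items")
    assert src_rows == restored_rows, "restore lost rows"
    print("backup/restore round trip OK")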
I would ask the vendor if they have a regression suite you can run in your environment. If not, I would devise a quick checklist of my own, trying to think about corner cases. Then also start to explore how your teams will use the product. Presumably there will be a "Build Master" role? Work with the people in that role and walk through some common scenarios. The likelihood is that you will uncover some ways of working that are better than others.
Summary: testing isn't just about finding bugs (though you may well find some); it's also about understanding the product better and learning how best to use it.


Starting Testing department

I am joining a company that doesn't have any formal testing setup. They expect me to start a testing department. I have a good understanding of manual and automated testing, but I'm not sure how to start or which tools to use for document sharing and bug tracking.
Please give as much guidance as you can.
Thanks
This is a very broad question and almost impossible to answer without significantly more knowledge of your company's products, quality goals and existing tooling... But I've got some Opinions :tm: that might help, starting with some philosophy (sorry).
What You're For
The function of a testing department isn't to test; the goal is to help the company be confident in its delivery of products. Your customers want to know that your software is accurate and stable. Your Operations team wants to avoid Production going down. Your Developers want to feel confident that their changes work and don't have any negative side effects.
I personally feel that the best way for a testing team to provide that confidence is not by writing tests; it's by editing them. The testing team provides the tooling, guidelines and expertise to help the rest of the Engineering departments make testing an integral part of the process.
It's like cooking. You don't make a well seasoned meal by chopping and sautéing and stirring and then giving it to a head chef to taste. You taste continually while you go because you're the one who knows what the food should be like. The head chef trains you and provides feedback on the final dish so that you learn how to season correctly.
Choosing Tools
Irrelevant. Mostly.
Your tools need to give you what you're after and then get out of your way. At the moment, the company barely knows what it's after, so you could even use a Google Doc to track defects.
You don't want to get in anyone's way to begin with, or they'll start to resent you. Your team needs to provide value and start to earn the social capital to change the Engineering processes to help deliver your goals.
So, use whatever document-sharing tools are already in use, whether that's a wiki, Google Docs, Dropbox, etc. If you're choosing a new one because there's no collaboration, I'm partial to Notion.
If your team already has a collaborative build tool (eg Jenkins, Travis) it's probably best to stick with that, adding in testing steps. Again, the less friction you introduce, the better your initial outcomes.
I wouldn't bother building and maintaining a test grid; Instead, lean on a vendor like Sauce Labs for infrastructure and expertise. That way you've got easy parallelisation, wide platform coverage, test asset collection, insights, as well as their experience in supporting Testing teams. Disclaimer: I'm the Manager of Developer Relations at Sauce Labs, so I'm probably biased ;)
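For illustration, here is a minimal sketch of pointing a Selenium test at a vendor-hosted grid via RemoteWebDriver. The endpoint URL and capability values are placeholders, and authentication is vendor-specific, so check your vendor's documentation for the real details.

# Sketch: run one browser test on a remote, vendor-hosted grid.
# The grid URL, platform/browser values and test name are illustrative only.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.set_capability("browserVersion", "latest")
options.set_capability("platformName", "Windows 11")
# Vendor-specific settings usually ride along in a namespaced capability;
# the key name below is an assumption, not a documented contract.
options.set_capability("vendor:options", {"name": "smoke: home page loads"})

driver = webdriver.Remote(
    command_executor="https://example-grid.invalid/wd/hub",  # placeholder endpoint
    options=options,
)
try:
    driver.get("https://www.example.com/")  # placeholder app under test
    assert "Example" in driver.title
finally:
    driver.quit()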
As for testing tools: if you want your engineering teams to collaborate on test production, you need to stick with an ecosystem they can use. That likely means whatever they're already using.
How To Start Testing
Selecting What To Test
Your organisation wants testing so badly that they're hiring you. That implies there was a traumatic event that they want to avoid happening again. So, start there. Find out what it is, and create a test for it.
If Black Friday overwhelmed their site, do Load testing. If their build is always breaking, concentrate on unit testing. If functionality doesn't work in Prod, add an integration test.
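As a concrete example of "test the trauma first", here is a sketch of a tiny integration test for a checkout endpoint that once broke in production. The base URL, payload and response fields are hypothetical; adapt them to your own application and run it with pytest.

# One targeted integration test covering a past production failure.
# Endpoint, payload and expected fields are placeholders.
import requests

BASE_URL = "https://staging.example.com"  # placeholder staging environment

def test_checkout_accepts_a_minimal_order():
    response = requests.post(
        f"{BASE_URL}/api/checkout",
        json={"sku": "WIDGET-1", "quantity": 1},
        timeout=10,
    )
    assert response.status_code == 200
    assert response.json().get("order_id"), "checkout did not return an order id"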
Test Coverage
There's a trap for new players, and you're likely to hear this from your devs:
We're so far behind on test coverage we'll never catch up
That is absolutely true... if you never start! Add the tests that prevent the trauma that brought you on board and you're already adding value; you'll catch that problem next time.
Another trap is setting test coverage goals. Test coverage is a great way to monitor your process but a terrible way to improve it. Force your teams to increase test coverage (or not let it slip) and they'll start to resent the process... and write crap tests just to boost the percentage.
Instead, use coverage for feedback. If coverage goes down during a commit, discuss why and talk about how to improve it. If it drops way down you might want to do something, but a little dip while you're getting started is A-OK.
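One way to wire that up, sketched below, is a script that compares coverage between the main branch and the current commit and prints a note instead of failing the build. The file names are assumptions, and it reads the JSON report layout produced by coverage.py; adjust it for whatever coverage tool you actually use.

# Coverage as feedback, not a gate: report the delta, never fail the build.
# Assumes two JSON reports generated by coverage.py ("coverage json").
import json
import sys

def total_percent(path: str) -> float:
    # coverage.py's JSON report keeps the overall figure under totals.percent_covered
    with open(path) as f:
        return json.load(f)["totals"]["percent_covered"]

def main() -> int:
    base = total_percent("coverage-main.json")      # produced on the main branch
    branch = total_percent("coverage-branch.json")  # produced on this commit
    drop = base - branch
    if drop > 0:
        print(f"NOTE: coverage dropped {drop:.1f} points "
              f"({base:.1f}% -> {branch:.1f}%); worth a conversation, not a build failure.")
    return 0  # always succeed: this is feedback, not a gate

if __name__ == "__main__":
    sys.exit(main())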
Assuming you've covered the trauma that got you hired, increasing test coverage is best done on an as-worked basis. If a developer is writing new code, it gets tests. If a developer is modifying old code, it gets tests to (at least) prove that the modifications work, and ideally to prove that they don't break the old functionality either.
You may come across old code that literally can't be tested. That's a good time to refactor that code. If people are scared of refactoring because it might break, point out that that's exactly what tests are for. Try to pull out to a level where you can test. If you can't test a unit, test the class. If you can't test the class, test the package. Then, go back in and start re-working. You have to do it some day.
Oh, no, we'll be replacing the Fizzwangle with a new Buzzshooper implementation soon; There's no need to take the risk of refactoring for testability.
This is a lie. Even if they mean it truthfully, it's a lie. Buzzshooper isn't coming any time soon. Refactor that shit.
Tests Are Code, Code Is Tests
Your tests need to be treated like high quality code. Use all the abstractions you use when writing code, like inheritance, polymorphism, modularisation, composability.
Look at techniques like the Page Object Model for front-end testing. Your test code should confine implementation-detail knowledge (e.g. element locators) to the fewest possible places, so that changes are easy to make.
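Here is a minimal sketch of that idea, assuming a Selenium/Python stack; the URL, element IDs and post-login page title are hypothetical.

# Page Object sketch: locators live in one class, so a UI change means
# editing one place rather than every test that touches the login screen.
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    URL = "https://www.example.com/login"  # placeholder
    USERNAME = (By.ID, "username")         # unique, simple IDs keep locators trivial
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "login-submit")

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
        return self

def test_login_shows_dashboard():
    driver = webdriver.Chrome()
    try:
        LoginPage(driver).open().log_in("test-user", "secret")
        assert "Dashboard" in driver.title  # hypothetical post-login title
    finally:
        driver.quit()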
Oh, and also, your Code is Code. Learn about, then help your teams write, code for testability and tests for code-ability. Structure your tests and app so you can test in parallel, reliably, as fast as possible (a short sketch of a couple of these follows the list below):
Give HTML elements unique, simple IDs
Write tests that test a single thing
Bypass complicated test setup by doing things like pre-populating databases
Log in once, then use session management to avoid doing it again
Use data generators to create unique test data (including logins)
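To ground a couple of those items, here is a sketch of a unique-data generator plus a log-in-once pytest fixture; the base URL, endpoints and registration flow are assumptions about a hypothetical app.

# Unique test data per run plus a session-scoped login, so tests can run
# in parallel without colliding and without logging in over and over.
import uuid
import pytest
import requests

BASE_URL = "https://staging.example.com"  # placeholder

def unique_user() -> dict:
    # Unique credentials per run avoid collisions between parallel test jobs
    suffix = uuid.uuid4().hex[:8]
    return {"email": f"qa+{suffix}@example.com", "password": f"pw-{suffix}"}

@pytest.fixture(scope="session")
def authed_session() -> requests.Session:
    # Log in once; every test reuses the same session and its cookies
    user = unique_user()
    session = requests.Session()
    session.post(f"{BASE_URL}/api/register", json=user, timeout=10)
    session.post(f"{BASE_URL}/api/login", json=user, timeout=10)
    return session

def test_profile_page_loads(authed_session):
    response = authed_session.get(f"{BASE_URL}/api/profile", timeout=10)
    assert response.status_code == 200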
Other Resources
Check out past conference talks like SauceCon Online.
Testing Talks Online has some great discussions and is the closest thing I've found to a real-life meetup during Covid.
There's also a lot of great content over at Ministry of Testing.

What are the common methods for having external companies testing your software?

We are a couple of entrepreneurs who have developed a cross-browser app and a backend administration system for the app. Or actually, we paid a company to develop it. Now we want it tested professionally, but we don't want to use the same company for this purpose.
The tests may involve
Integration Testing
Functional Testing
System Testing
Stress Testing
Performance Testing
Usability Testing
For some of the tests, we think the actual source code is required. We don't feel comfortable giving our source code away "just like that" to unknown parties, so what are the common methods for having external companies test one's software?
You don't really need to give out the source to perform the tests you mention. You need to provide working environments, or provide binaries and instructions on how to deploy them. That seems sufficient for the integration, functional, stress and performance testing (I don't know what "system testing" means to you here). It's also rather late for usability testing; that should have been done during the UI design phase (how do you want to test it right now?).
But those tests are not sufficient. You forgot about penetration testing. And the tests above are black-box tests: they can show you how the application works, but not how it is built.
If you have any real plans for this application, you must be sure it's maintainable, and for that you need white-box testing: you have to analyze the code.
You can start with automated analysis to check the overall quality of the code, but in the end you will still need good programmers to perform a code review. Even then you don't really have to hand over the code: you can invite them to your office and let them review it on your workstations, unless your idea is so simple and brilliant that one look at the code is enough to reproduce it. In that case you will need an NDA, or to give some shares to the expert who will take care of quality.

What process does professional website building follow?

I've searched for a while, but I can't find anything related on Google or here.
Some friends and I were debating starting a company, so I figured it might be good to do a quick pilot project to see how well we can work together. We have a designer who can do HTML, CSS and Flash, enjoys doing art, but doesn't like doing HTML and CSS... and two programmers who are willing to do anything.
My question is, from an experienced site builder's perspective, what steps do we do - in chronological order - to properly handle a website? Does the designer design the look and feel of the site, then the programmers fill in the gaps with functionality? Or do the programmers create a "mock-up" of the site with most of the functionality, then the designer spices it up? Or is it more of a back-and-forth process?
I just want to know how a professional normally handles it.
Update:
A recap taking some of the notes from each post.
Step 1: Define requirements. What will your site/application do?
Step 2: Use cases. Who will use the application, and what will they do with it? This doesn't have to be done with a bunch of crazy UML diagrams, just use whatever visual aids you think work best for you. Find a CMS vendor, or a search vendor, or both. While planning, maybe do some competitor analysis, and see how those in similar fields have done theirs.
Step 3: Visual proof-of-concept. This is done by your designer, NOT your programmers... Programmers are notoriously bad at UI. Use an image program like Photoshop, not an HTML editor. Leave it fluid and simple at first. Select the three-color theme for the site (two primaries and an accent). Get a sense of how you want to lay things out, keeping in mind the chosen CMS and/or search functionality. Focus hard on usability, add pizzazz later. Turn the created concept into JPEG mock-ups, or create a staging site to allow the client to view the work. A staging site will allow future releases to be tested before moving them to production.
Step 4: Once the site is conceptualized by your designers, have your HTML/CSS developer turn it into markup. He/she should shoot for XHTML compliance and test on as many major browsers as you can. Also a good time to set up versioning/bug tracking/management systems, to keep track of changes, bugs, and feedback.
Step 5: Have your programmers start turning your requirements into software. This can and should be done in parallel with Step 4; there's no reason they can't be coding up the major pieces and writing tests while the UI is designed and developed.
Step 6: Marry up the final UI design with the code. Test, Test, Test!!
Step 7: Display end result to client, and get client sign-off.
Step 8: Deploy the site to production.
Rinse, Repeat...
Step 1: Define requirements. What will your site/application do?
Step 2: Use cases. Who will use the application, and what will they do with it? This doesn't have to be done with a bunch of crazy UML diagrams, just use whatever visual aids you think work best for you.
Step 3: Visual proof-of-concept. This is done by your designer, NOT your programmers. Use an image program like Photoshop, not an HTML editor. Leave it fluid and simple at first. Select the three-color theme for the site (two primaries and an accent). Get a sense of how you want to lay things out. Focus hard on usability, add pizzazz later.
Step 4: Once the site is conceptualized by your designers, have your HTML/CSS developer turn it into markup. He/she should shoot for XHTML compliance and test on as many major browsers as you can.
Step 5: Have your programmers start turning your requirements into software. This can and should be done in parallel with Step 4; there's no reason they can't be coding up the major pieces and writing tests while the UI is designed and developed.
Step 6: Marry up the final UI design with the code. Test, Test, Test!!
Rinse, Repeat...
There is no one universal way. Every shop does it differently. Hence, a warning: gross generalizations follow.
Web development typically consists of much shorter release cycles, because it's so simple to push out a release, compared to client-side software. Thus the more "agile" methods are more frequently used than the "waterfall" models encountered in developing client software.
Figure out what, exactly, you're building.
Take care of all the legal stuff (e.g. what business entity you'll be forming, how each team member will be compensated for their work, whether there will be health benefits, etc.).
Mockups. I suggest having the designers do the mockups since programmers are notoriously bad at UI design.
Set up some sort of bug tracking / case management system so that you have a centralized place for all your feature requests and bug reports.
Start coding.
Once you have a simple version of your app, get some people to test it out to make sure you're on the right path.
???
Profit!
As a first step, I'd recommend doing a bit of up-front design using an approach such as paper prototyping, to lock down what it is you want your website to do, and roughly how you want it to look.
Next up, read up on the Agile approach to software development and see if you like the sound of what it suggests. It tends to work best with smaller, well-motivated teams.
Figure out the minimum amount of functionality you can create and deliver as a product, so that you can get user feedback as soon as possible. Then expect to iteratively add functionality to the product over time.
The Web Style Guide provides a pretty detailed overview of the process.
You should mix and match the lists provided here for your needs.
I just want to make sure you know one thing...
Customers are "stoopid" when it comes to web design.
You will have to claw, scrape, drag, gnash, rip, and extricate every requirement from their naive little souls. If you fail to do so? Guess who gets the blame?
The road you now look down is a hard one filled with competition, stress, and risk. It requires endurance, faith, patience, and the ability to eat ramen 5 of 7 days a week.
To add to (or repeat) Dave Swersky's list:
1. Gather requirements from clients.
2. Do some competitor analysis. Gather screenshots of competitor sites.
3. Build a sitemap/wireframe. What is the structure/content of the site?
4. Get designers to create JPEG mockups. They may use the screenshots for "inspiration".
5. Get feedback from clients based on the JPEGs.
6. Create HTML mockups from the JPEGs.
7. Get feedback from clients. Go back to step 4 if necessary.
8. Implement the HTML using the technology of choice.
9. Unit test the site.
10. UAT and obtain sign-off.
11. Deploy to live.
Client feedback is critical; clients should be involved in every step to ensure a successful implementation.
Hope this helps
In addition to the steps outlined in other answers, I'd add this (to be added somewhere near the end of the "cycle"):
x. Once you have a more or less end to end solution, set up a staging site.
y. Get client sign off on staging site.
z. Deploy to production site.
Celebrate! But not too hard; there are almost always going to be a few iterations of changes, because users rarely know exactly what they really want the first time around.
So, when (not if) the client asks for changes, you can work on the changes and promote them to the staging site first. This is important because (a) it gives clients a chance to preview changes before the whole world sees them, and (b) if the integrity of the data on the production site is important, you can hopefully weed out any issues on the staging site before they impact production data.
Just to give something on the other side of the coin: where I work, we have spent the past couple of years working on a redesign of the company's website. Here are some highlights of the process:
Identify vendors for various functions that will be needed. In this case that meant finding a Content Management System vendor as well as a Search vendor.
Get a new design for the site that can be applied to what was selected in the first step.
Using system integrators and in-house developers, start to build some of the functionality for the site, taking the flexible, customizable software selected in the first step and making it useful for the organization. Note that this is where a couple of years have been spent getting things working and getting some business decisions ironed out.
Release a preview site to verify functionality and fix bugs, add enhancements as needed.
Note that in your case you may not have the same budget, but there are various CMS frameworks out there to select from. Also consider how much integration you want for the site: does it have to talk to a half-dozen different systems? In the case I mentioned above there are CRM integrations, ESB integrations, search integrations, and translation integrations, to give a few examples of where things had to be wired up correctly.
In response to the comment: be sure you and the client know what is meant by "simple". E-commerce functionality, forums, and personalization are all examples where it is important to know what is needed now and to have an idea of what is needed down the road, as there will likely be a ton of things customers want, and you will have to figure out some of the nitty-gritty details at points in the future. For example, some people may think Google is simple, and from an end-user perspective it is, but how many computers does Google have running how many different applications doing how much processing 24/7? Quite a bit, I'd imagine. Simple is good, but sometimes making something look simple can be incredibly hard to do.

Requirements or Testing? [closed]

If you had to do without one or the other in a software project, which would you pick?
I've had plenty of projects in which the client or PM thought they could get away without one or the other. We always paid the price.
Turn this around and repeat after me: "Tests are requirements." :-)
If you mean "formal requirements", I can easily do without those. I would much prefer a living, breathing customer who can tell me what they want over a rigid, out-of-date document. Having switched to TDD, I wouldn't ever want to go back to a "no test" environment. I choose informal requirements -- stories, on-site customer, and customer-written acceptance tests -- over formal requirements and no tests.
I'd say you could go without Testing rather than Requirements. If you don't have requirements, how do you know what you're developing?
If the programmers are good enough, they should be able to catch most of the egregious errors that testing would find.
You have to test against the requirements, so if you don't have requirements you can't do testing. So if you have to pick one, you can only pick requirements.
But not doing testing is a path to failure. Guaranteed.
If I had to pick one, it would be requirements.
It doesn't have to be a formal, excruciatingly detailed document with twenty signatures, but you have to know exactly what the customer wants and more importantly what the customer needs.
The requirements are also your first communication to the development team. How will they know what you're asking if you're not asking it clearly? At best you're at grave risk of building the wrong thing right. I'd rather have the right thing built slightly wrong.
If I were asked to choose between requirements and testing, I would choose to polish up my resume. You really can't do without either in any project, because the basic project lifecycle is:
Define Needs/Goals (AKA Requirements)
Design & Build to the requirements
Verify that you built to spec (to the requirements).
If you don't have success criteria and goals that are verifiable (and then verified), how can you ensure that you are going to succeed? And if you don't have a chance to succeed, why start the project?
I would say requirements because there always seems to be some level of "feature creep" from the client when you are developing software. Testing is one of the crucial pieces in the SDLC.
Requirements and testing are important for most projects, but if you really have to pick, you should go with requirements. One advantage of picking requirements over testing is that you might save some development time, since the developers know what they have to build; and if development finishes with extra time in hand, you can allocate that time to testing :)
Tests (feature and integration) are more important than requirements; if you can specify the tests then you have also specified the requirements, at least implicitly.
Comments are also the developer documentation, with unit tests being the how-to 'quickstart' examples ;-)
I'm not sure whether "requirements" here refers to requirements as an artefact or as a process. Although it is possible to skip requirements as an artefact, especially for smaller teams, and still deliver a product, skipping requirements as a process is out of the question. Requirements as an artefact let you model the system at a cost lower than building the entire thing, do feasibility studies and estimates, and, for a larger and more dispersed team, cut communication overheads and establish common ground under everyone's feet. Neglect the requirements and you get lousy estimates (regardless of whether you plan a lot up front or just do a short sprint), a poor idea of feasibility, and possibly very inefficient communication and a lot of miscommunication.
Requirements as a process, on the other hand, is going to exist regardless of whether it is formally acknowledged or not. You cannot really exclude it; you can only pretend the requirements process does not exist, or fold it into design, coding, testing, or stages as late as pilot and maintenance. Obviously, treating the process this way means it will not get a fair amount of attention and resources. The consequences normally range from delivering something that is ultimately useless, to having to fix the now-obvious shortcomings of the product later in the development cycle, to discovering the real requirements only once the product fails in the field: increasing the cost of development, defaulting on deadlines, ruining the team's good name, destroying user confidence, etc.
Testing usually boils down to validation and verification; more recently, improvements in testing technology have let automated testing become a solid tool for achieving greater efficiency in debugging and reducing the time needed for regression testing. Validation is making sure that the team has built the right product, i.e. that the scoped requirements are correct, not contradictory, and have no gaps. Verification, on the other hand, is making sure that the product is built right: no technical defects, accidental errors, etc.
As we can see, testing provides a safety net in the scenario where requirements were neglected. Normally, as the team starts testing, they need to refine their understanding of the requirements and, as a result, modify the software. Since requirement artefacts and the software itself just represent different levels of fidelity in modelling a solution to a real-life problem, and software as a model is an order of magnitude more precise, testing the application evaluates the requirements as well (regardless of whether they are implicit or explicit, formally analysed or informally communicated).
Normally the alternative to testing is to let users report a substantially larger number of defects and shortcomings and to try to fix them as part of maintenance (meaning later in the product lifecycle), increasing the cost of every fix.
So, requirements versus testing? Fire the manager. OK: skip requirements if you want the project schedule to slip during the testing phase and to land in the mess of building something users don't need; skip testing if you just want to show utter disrespect to your users.
Without requirements you don't need testing since what you end up with is exactly what was spec'd
There are categories of software that can be developed perfectly well without requirements, at least anything more than a vaguely expressed idea the length of an email.
Thing is, if you have a specific client and a project manager, it is unlikely your software is in one of those categories. It's unlikely someone is specifically paying you to, say, 'make me a fun game involving a juggling monkey'.
The only category of software that can be developed without testing is failware: where your company has managed to sucker some customer into paying whether or not the software works (or if you have a really dumb customer, pay more if it doesn't work, in support and maintenance).
That's probably more likely: contracts structured so that success is less profitable than failure are still fairly common. If you think that's the case, and you want to develop working software, then consider switching to a job where your interests and your bosses' are less opposed.
Without requirements, can we even make a test plan? No, so we can't do testing even if we pick testing instead of requirements.
So requirements should be the priority, even in an agile testing environment.

How much a tester should know about internal details of code?

How useful, if at all, is it for the testers on a product team to know about the internal code details of a product? This does not mean they need to know every line of code, but a good idea of how the code is structured, what the object model is, how the various modules are inter-linked, what the inter-dependencies between various features are, etc. This can arguably help them find related issues or defects once they hit one. On the other hand, it can potentially 'bias' their "user-centric" approach towards evaluating and certifying the product, and can affect the testing results in the end.
I have not heard of any specific model for such interaction. (Let's assume a product that users, potentially non-technical ones, consume, and not a framework or API that the testers are testing; in the latter case the testers may need to understand the code, because the user is another programmer.)
That entirely depends upon the type of testing being done.
For functional system testing, the testers can and probably should be oblivious to the details of the implementation -- if they know the details they may inadvertently account for that in their test strategy and not properly test the product.
For performance and scalability testing it's often helpful for the testers to have some high-level knowledge of the structure of the codebase, as that helps in identifying potential performance hotspots and therefore in writing targeted test cases. The reason this is important is that performance testing is generally a broad, open-ended process, so anything that can be done to focus the testing is beneficial to everybody.
This sounds similar to this previous question: Should QA test from a strictly black-box perspective?
I've never seen a circumstance where a tester who knew a lot about the internals of a system was disadvantaged.
I would assert that there are self-justifying myths that an uninformed tester is as adequate as, or even better than, a deeply technical one, because:
It allows project managers to use 'random or low-quality resources' for testing. The 'as uninformed as the user' myth. If you want this type of testing, get some 'real' users to test your stuff.
Testers are still often seen as cheaper and less valuable than developers. The 'anybody can do black-box testing' myth.
Development can defer proper testing to the test team. Two myths in one: 'we don't need to train testers' and 'only the test team does testing'.
What you are looking at here is the difference between black box (no knowledge of the internals), white box (all knowledge) and grey box (some select knowledge).
The answer really depends upon the purpose of the code. For integration-heavy projects, knowing where and how the pieces communicate, even if it is entirely behind the scenes, allows testers to produce appropriate non-functional test cases.
These test cases are determining whether or not a component will gracefully handle the lack of availability of a dependency. It can also be used to identify performance related issues.
For example: as a tester, if I know that the Web UI component defers a request to an orchestration service that does the real work, then I can construct a scenario where the orchestration takes a long time (high load). If the user then performs another request (simulating user impatience), the web service will receive a second request while the first is still going. If we continually repeat this, the web service will eventually die from the stress. Without knowing the underlying model, it would not be easy to find the problem.
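A rough sketch of that scenario, assuming a hypothetical slow endpoint: fire overlapping requests from a pool of "impatient users" and count the failures. This is an illustration of the idea, not a real load-testing tool.

# Sketch of the "impatient user" scenario: keep issuing new requests before
# the previous ones have finished and watch how the service degrades.
# The endpoint and iteration counts are placeholders.
import concurrent.futures
import requests

ENDPOINT = "https://staging.example.com/api/search?q=slow"  # placeholder slow call

def one_request(i: int) -> int:
    try:
        return requests.get(ENDPOINT, timeout=30).status_code
    except requests.RequestException:
        return -1  # record the failure instead of crashing the driver script

with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    # 200 overlapping requests from 20 "impatient users"
    codes = list(pool.map(one_request, range(200)))

failures = [c for c in codes if c != 200]
print(f"{len(failures)} of {len(codes)} requests failed or errored")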
In most cases black box is preferred for functionality testing; as soon as you move towards non-functional or system-integration testing, understanding the interactions can help ensure appropriate test coverage.
Not all testers are skilled at, or comfortable with, working with and understanding component interactions or internals, so whether it is appropriate is a per-tester, per-system decision.
In almost all cases we start with black box and head towards white box as the need arises.
A tester does not need to know internal details.
The application should be tested without any knowledge of the internal structure, development problems, or external dependencies.
If you encumber the tester with that additional information, you push them into a particular testing scheme; a tester should never be pushed in a direction and should just test from a non-coder's point of view.
There are multiple testing methodologies that require code reviewing, and also those that don't.
The advantage of white-box testing (i.e. reading the code) is that you can tailor your testing to only test areas that you know (from reading the code) will fail.
The disadvantages include the time taken away from actual testing to understand the code.
Black-box testing (i.e. not reading the code) can be just as good (or better?) at finding bugs than white-box.
Normally both types of testing can happen on one project, developers white-box unit testing, and testers black-box integration testing.
I prefer Black Box testing for final test regimes
In an ideal world...
Testers should know nothing about the internals of the code
They should know everything the customer will, i.e. have the documents/help required to use the system/application (this definitely includes the API description/documents if it's some sort of code deliverable).
If the testers can't manage to find the defects with these limitations, you haven't documented your API/application enough.
If they are dedicated testers (Only thing they do) then I think they should know as little about the code as possible that they are attempting to test.
Too often they try to determine why it's failing; that is the responsibility of the developer, not the tester.
That said I think developers make great testers, because we tend to know the edge cases for certain types of functionality.
Here's an example of a bug which you can't find if you don't know the code internals, because you simply can't test all inputs:
long long int increment(long long int l) {
    if (l == 475636294934LL) return 3;
    return l + 1;
}
However, in this case it would be found if the tester had 100% code coverage as a target, and looked at only enough of the internals to write tests to achieve that.
Here's an example of a bug which you quite likely won't find if you do know the code internals, because false confidence is contagious. In particular, it is usually not possible for the author of the code to write a test which catches this bug:
int MyConnect(socket *sock) {
    /* socket must have been bound already, but that's OK */
    return RealConnect(sock);
}
If the documentation of MyConnect fails to mention that the socket must be bound, then something unexpected will happen some day (someone will call it unbound, and presumably the socket implementation will select an arbitrary local address). But a tester who can see the code often doesn't have the mindset of "testing" the documentation. Unless they're really on form, they won't notice that there's an assumption in the code not mentioned in the docs, and will just accept the assumption. In contrast, a tester writing from the docs could easily spot the bug, because they'll think "what possible states can a socket be in? I'll do a test for each". Since no constraints are mentioned, there's no reason they won't try the case that fails.
Answer: do both. One way to do this is to write a test suite before you see/write the code, and then add more tests to cover any special cases you introduce in your implementation. This applies whether or not the tester is the same person as the programmer, although obviously if the programmer writes the second kind of test, then only one person in the organisation has to understand the code. It's arguable whether it's a good long-term strategy to have code only one person has ever understood, but it's widespread, because it certainly saves time getting something out the door.
[Edit: I decline to say how these bugs came about. Maybe the programmer of the first one was clinically insane, and for the second one there are some restrictions on the port used, in order to workaround some weird network setup known to occur, and the socket is supposed to have been created via some de-weirdifying API whose existence is mentioned in the general sockets docs, but they neglect to require its use. Clearly in both these cases the programmer has been very careless. But that doesn't affect the point: the examples don't need to be realistic, since if you don't catch bugs that only a very careless programmer would make, then you won't catch all the actual bugs in your code unless you never have a bad day, make a crazy typo, etc.]
I guess it depends how good you want the testing to be. If you just want to sanity-check the common scenarios, then by all means, just give the testers / pizza-eaters the application and tell them to go crazy.
However, if you'd like to have a chance at finding edge cases, performance or load issues, or a whole lot of other issues that hide in the depths of your code, you'd probably be better off hiring testers who know how and when to use white box techniques.
Your call.
IMHO, I think the industry view of testers is completely wrong.
Think about it... you have two plumbers: one is extremely experienced, knows all the rules and the building codes, and can quickly look at something and know whether the work is done right or not. The other plumber is good, and gets the job done reliably.
Which one would you want to do the final inspection to make sure you don't come home to a flooded house? In fact, in what other industry do they allow someone who knows hardly anything about the system they are inspecting to actually do the inspection?
I have seen the bar for QA go up over the years, and that makes me happy. In time, QA may become something that devs aspire to be.
In short, not only should they be familiar with the code being tested, but they should have an understanding that rivals the architects of the product, as well as be able to effectively interface with the product owner(s) / customers to ensure that what is being created is actually what they want. But now I am going into a whole separate conversation...
Will it happen? Probably sooner than you think. I have been able to reduce the number of people needed to do QA, increase the overall effectiveness of the team, and increase the quality of the product simply by hiring very skilled people with dev / architect backgrounds with a strong aptitude for QA. I have lower operating costs, and since the software going out is higher quality, I end up with lower support costs. FWIW ... I have found that while I can backfill the QA guys effectively into a dev role when needed, the opposite is almost always not true.
If there is time, a tester should definitely go through a developers code. This way, you can improve your tests to get better coverage.
So, if you write your black-box tests by looking at the spec and think you'll have time left over after executing all of them, going through the code can't be a bad idea.
Basically it all depends on how much time you have. Another thing you can do to improve coverage is look at the developers' design documents. Those should give you a good idea of what the code is going to look like...
Testers have the advantage of being familiar with both the dev code and the test code!
I would say they don't need to know the internal code details at all. However they do need to know the required functionality and system rules in full detail - like an analyst. Otherwise they won't test all the functionality, or won't realise when the system misbehaves.
For user acceptance testing the tester does not need to know the internal code details of the app. They only need to know the expected functionality and the business rules. When a bug is reported, whoever is fixing the bug should know the inter-dependencies between the various features.