What is the best method for Software/Web Application Testing?

Does anyone know how to manage the Software/Web Application Testing process?

The Hexawise test case generator can dramatically reduce your test design time (e.g., figuring out what combinations of browsers, configurations and functions should be tested so as to achieve maximum coverage in a manageable number of test cases).
Free trials are available at: http://hexawise.com/users/new
Disclosure: I created the tool based on the applied statistics-based Design of Experiments principles my dad, William G. Hunter, used to teach about when he was a professor at the University of Wisconsin.
/Slight tangent: it is surprising to me that Design of Experiments methods are widely used in manufacturing today, but rarely used in software testing. The same types of combinatorial explosions exist in both manufacturing (e.g., think about how many billions of ways there could be to manufacture a radial tire) and software testing (where billions of potential use cases exist in most non-trivial software applications, particularly when browser types and/or configuration options are thrown into the mix). These Design of Experiments methods dramatically improve efficiency in both situations (and are extremely easy to prove out in software testing projects by simple "bake-offs"); despite this they remain unused in the vast majority of testing organizations, even at Fortune 100 firms. I'm hoping to raise awareness and help change this. End of tangent/
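To make the combinatorial point concrete, here is a minimal sketch in Python of a greedy pairwise (2-way) reducer. It is a toy illustration of the general idea only, not Hexawise's actual algorithm, and the parameter names and values are invented:

```python
import itertools

def pairwise_cases(params):
    """Greedily keep full combinations until every 2-way pairing of
    parameter values is covered at least once."""
    names = list(params)
    # Every value pair, across every pair of parameters, that must appear.
    uncovered = set()
    for i, j in itertools.combinations(range(len(names)), 2):
        for va, vb in itertools.product(params[names[i]], params[names[j]]):
            uncovered.add((i, va, j, vb))
    cases = []
    for combo in itertools.product(*params.values()):
        pairs = {(i, combo[i], j, combo[j])
                 for i, j in itertools.combinations(range(len(names)), 2)}
        if pairs & uncovered:              # keep combos that add new coverage
            cases.append(dict(zip(names, combo)))
            uncovered -= pairs
    return cases

params = {                                 # invented example parameters
    "browser": ["Chrome", "Firefox", "IE"],
    "os": ["Windows", "macOS", "Linux"],
    "locale": ["en", "de", "fr"],
    "account": ["free", "paid", "admin"],
}
print(len(list(itertools.product(*params.values()))), "exhaustive cases")
print(len(pairwise_cases(params)), "cases with every pair still covered")
```

A dedicated tool will produce a considerably smaller set than this naive scan, but even the toy version shows the order-of-magnitude reduction that 2-way coverage buys over the exhaustive product.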
Justin

There are lots of aspects to it, but one tool that is quickly becoming important in our arsenal is Selenium.

You need to test multiple layers of your software. Unit tests cover the smallest segments of code, like a specific method's output, rather than the way methods interact. Then at the opposite end of the spectrum there is integrated user-experience testing, which mimics a user pressing buttons and doing things with your application (using a tool like Selenium if you're writing web applications).
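For the web-application end of that spectrum, here is a minimal sketch of what a Selenium-driven test can look like using Selenium's Python bindings; the URL and element IDs are hypothetical:

```python
import unittest
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginFlowTest(unittest.TestCase):
    def setUp(self):
        # Assumes a Chrome driver is available on this machine.
        self.driver = webdriver.Chrome()

    def tearDown(self):
        self.driver.quit()

    def test_login_shows_welcome_banner(self):
        d = self.driver
        d.get("https://example.com/login")                    # hypothetical URL
        d.find_element(By.ID, "username").send_keys("alice")  # hypothetical IDs
        d.find_element(By.ID, "password").send_keys("secret")
        d.find_element(By.ID, "submit").click()
        self.assertIn("Welcome", d.find_element(By.ID, "welcome").text)

if __name__ == "__main__":
    unittest.main()
```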

Related

What types of architecture or architecture layers are not suitable for automated testing?

I was recently tasked with developing automated build and release pipelines for one of my company's legacy applications. After some investigation, I keep hearing from managers and other devs that certain application layers and architectures don't lend themselves to automation, particularly automated testing. Therefore, it's often suggested I shouldn't bother trying to apply DevOps principles and AT unless I want to re-architect the whole app.
The commonly cited examples are PL/SQL backends and monolithic architectures. I asked why these were not suitable, but never got a really clear answer. Does anyone have any insight into when automated testing should not be used, in favor of dumping the old architecture and starting fresh?
Short answer - ones that suffer from testability issues.
For a more in-depth answer, let's first admit that many software systems are untestable, or not immediately testable, so that the effort of
trying to apply DevOps principles and AT
is far greater than the ROI. One notorious example is Google's ReCAPTCHA, which causes some pain for the automation testing folks (like me). The devs are actually right to say that it will be a
re-architect the whole app
journey, as testability is highly related to other key software qualities such as encapsulation, coupling, cohesion, and redundancy.
commonly cited examples are PL/SQL backends and monolithic architectures
Now, that is totally not the case. The first one is more data-centric and requires a deeper understanding, but there are solutions to that as well. As for single-tiered software applications, one can argue that in contrast to mSOA, monolithic applications are much easier to debug and test: since a monolithic app is a single indivisible unit, you can run end-to-end testing much faster and more easily.
Put simply: if your app is highly testable, it is highly usable. If the architecture and design were aligned to very, very specific company needs, it is no wonder the app is usable (and testable) only up to a point.

What kinds of tests are there?

I've always worked alone, and my method of testing is usually compiling very often, making sure the changes I made work well, and fixing them if they don't. However, I'm starting to feel that that is not enough, and I'm curious about the standard kinds of tests there are.
Can someone please tell me about the basic tests, a simple example of each, and why it is used/what it tests?
Thanks.
Different people have slightly different ideas about what constitutes what kind of test, but here are a few ideas of what I happen to think each term means. Note that this is heavily biased towards server-side coding, as that's what I tend to do :)
Unit test
A unit test should only test one logical unit of code - typically one class for the whole test case, and a small number of methods within each test. Unit tests are (ideally) small and cheap to run. Interactions with dependencies are usually isolated with a test double such as a mock, fake or stub.
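As a minimal sketch of that isolation (in Python, with invented names), the dependency is replaced by a mock so the test exercises only the one unit:

```python
import unittest
from unittest.mock import Mock

class PriceService:
    """The unit under test; its rate provider is injected."""
    def __init__(self, rates):
        self.rates = rates

    def in_euros(self, dollars):
        return round(dollars * self.rates.usd_to_eur(), 2)

class PriceServiceTest(unittest.TestCase):
    def test_converts_using_current_rate(self):
        rates = Mock()
        rates.usd_to_eur.return_value = 0.9   # canned answer, no network call
        service = PriceService(rates)
        self.assertEqual(service.in_euros(10), 9.0)
        rates.usd_to_eur.assert_called_once()

if __name__ == "__main__":
    unittest.main()
```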
Integration test
An integration test will test how different components work together. External services (ones not part of the project scope) may still be faked out to give more control, but all the components within the project itself should be the real thing. An integration test may test the whole system or some subset.
System test
A system test is like an integration test but with real external services as well. If this is automated, typically the system is set up into a known state, and then the test client runs independently, making requests (or whatever) like a real client would, and observing the effects. The external services may be production ones, or ones set up in just a test environment.
Probing test
This is like a system test, but using the production services for everything. These run periodically to keep track of the health of your system.
Acceptance test
This is probably the least well-defined term - at least in my mind; it can vary significantly. It will typically be fairly high level, like a system test or an integration test. Acceptance tests may be specified by an external entity (a standard specification or a customer).
Black box or white box?
Tests can also be "black box" tests, which only ever touch the public API, or "white box" tests which take advantage of some extra knowledge to make testing easier. For example, in a white box test you may know that a particular internal method is used by all the public API methods, but is easier to test. You can test lots of corner cases by calling that method directly, and then do fewer tests with the public API. Of course, if you're designing the public API you should probably design it to be easily testable to start with - but it doesn't always work out that way. Often it's nice to be able to test one small aspect in isolation of the rest of the class.
On the other hand, black box testing is generally less brittle than white box testing: by definition, if you're only testing what the API guarantees in its contracts, then the implementation can change as much as it wants without the tests changing. White box tests, on the other hand, are sensitive to implementation changes: if the internal method changes subtly - or gains an extra parameter, for example - then you'll need to change the tests to reflect that.
It all boils down to balance, in the end - the higher the level of the test, the more likely it is to be black box. Unit tests, on the other hand, may well include an element of white box testing... at least in my experience. There are plenty of people who would refuse to use white box testing at all, only ever testing the public API. That feels more dogmatic than pragmatic to me, but I can see the benefits too.
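As a small sketch of the white-box trick described above (Python, invented names): every public method funnels through one internal helper, so the corner cases are hammered on the helper directly, while the public API gets a single contract-level test:

```python
import unittest

class Router:
    def _normalize(self, path):     # internal detail: the white-box target
        return "/" + path.strip("/").lower()

    def lookup(self, path):         # public API: the black-box target
        return "handler:" + self._normalize(path)

class WhiteBoxTest(unittest.TestCase):
    def test_normalize_corner_cases(self):
        r = Router()
        for raw, want in [("A/", "/a"), ("//x//", "/x"), ("", "/")]:
            self.assertEqual(r._normalize(raw), want)

class BlackBoxTest(unittest.TestCase):
    def test_lookup_contract_only(self):
        self.assertEqual(Router().lookup("Docs/"), "handler:/docs")

if __name__ == "__main__":
    unittest.main()
```

Note the trade-off discussed above: if `_normalize` gains an extra parameter, the white-box tests break; the black-box test survives any internal change that preserves the contract.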
Starting out
Now, as for where you should go next - unit testing is probably the best thing to start with. You may choose to write the tests before you've designed your class (test-driven development) or at roughly the same time, or even months afterwards (not ideal, but there's a lot of code which doesn't have tests but should). You'll find that some of your code is more amenable to testing than others... the two crucial concepts which make testing feasible (IMO) are dependency injection (coding to interfaces and providing dependencies to your class rather than letting them instantiate those dependencies themselves) and test doubles (e.g. mocking frameworks which let you test interaction, or fake implementations which do everything a simple way in memory).
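To illustrate both concepts together, a minimal sketch (Python, invented names): the service receives its store rather than constructing it, so the test can hand it a simple in-memory fake instead of a real database:

```python
import unittest

class UserService:
    """Coded against whatever store is injected, not a concrete database."""
    def __init__(self, store):
        self.store = store

    def register(self, name):
        if self.store.get(name) is not None:
            raise ValueError("duplicate user")
        self.store.put(name, {"name": name})

class FakeStore:
    """In-memory test double standing in for a real database client."""
    def __init__(self):
        self.data = {}
    def get(self, key):
        return self.data.get(key)
    def put(self, key, value):
        self.data[key] = value

class UserServiceTest(unittest.TestCase):
    def test_register_rejects_duplicates(self):
        service = UserService(FakeStore())   # no real database needed
        service.register("alice")
        with self.assertRaises(ValueError):
            service.register("alice")

if __name__ == "__main__":
    unittest.main()
```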
I would suggest reading at least one book about this, since the domain is quite huge and books tend to synthesize such concepts better.
E.g., a very good basis might be: Software Testing: Testing Across the Entire Software Development Life Cycle (2007)
I think such a book can explain everything better than some out-of-context examples we could post here.
I would like to add on to Jon Skeet's answer.
Based on white box testing (or structural testing) and black box testing (or functional testing), the following are further testing techniques under each respective category:
STRUCTURAL TESTING Techniques
Stress Testing
This is used to test bulk volumes of data on the system, more than what the system normally takes. If a system can stand these volumes, it can surely handle normal values well.
E.g.
Consider system overflow conditions: trying to withdraw more than is available in your bank balance shouldn't work, while withdrawing up to the maximum threshold should work (see the sketch after this entry).
Used when:
This is mainly used when you are unsure about the volumes your system can handle.
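A toy sketch of the bank-balance example above (Python, invented names), checking both sides of the boundary:

```python
import unittest

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class WithdrawalBoundaryTest(unittest.TestCase):
    def test_up_to_maximum_threshold_works(self):
        acct = Account(100)
        acct.withdraw(100)              # withdrawing the full balance is fine
        self.assertEqual(acct.balance, 0)

    def test_over_balance_is_rejected(self):
        acct = Account(100)
        with self.assertRaises(ValueError):
            acct.withdraw(101)          # the overflow condition must fail

if __name__ == "__main__":
    unittest.main()
```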
Execution Testing
Done in order to check how efficient a system is.
E.g.
To calculate turnaround time for transactions.
Used when:
Early in the development process, to see whether performance criteria are met.
Recovery Testing
To see if a system can recover to original form after a failure.
E.g.
A very common everyday example is the System Restore feature in Windows, which keeps restore points used for recovery, as one would well know.
Used when:
When an application that is critical to a user at that point in time has stopped working and should continue to work, so the user performs a recovery.
Other types of testing which you could find use of include:-
Operations Testing
Compliance Testing
Security Testing
FUNCTIONAL TESTING Techniques include:
Requirements Testing
Regression Testing
Error-Handling Testing
Manual-Support Testing
Intersystem Testing
Control Testing
Parallel Testing
There is a very good book titled "Effective Methods for Software Testing" by William Perry of the Quality Assurance Institute (QAI), which I would suggest is a must-read if you want to go in depth w.r.t. software testing.
More on the above mentioned testing types would surely be available in this book.
There are also two other very broad categories of testing, namely:
Manual Testing: This is done for user interfaces.
Automated Testing: Testing which basically involves white box testing, or testing done through software testing tools like LoadRunner, QTP, etc.
Lastly I would like to mention a particular type of testing called
Exhaustive Testing
Here you try to test every possible condition, hence the name. As one would note, this is pretty much infeasible, as the number of test conditions can be effectively infinite.
Firstly, there are various tests one can perform. The question is how to organize them. Testing is a vast and enjoyable process.
Start testing with:
1. Smoke testing. Once it passes, go ahead with functionality testing. This is the backbone of testing: if the functionality works fine, then 80% of the testing effort has paid off.
2. Now go on to user interface testing. At times the user interface is what attracts the client more than the functionality; it is the look and feel that clients respond to.
3. Now it's time to look at cosmetic bugs. Generally these bugs are ignored because of time constraints, but they can play a major role depending on the page where they are found. A spelling mistake becomes major when it is found on the splash screen, your landing page, or the app name itself. Hence these cannot be overlooked either.
4. Do conduct compatibility testing, i.e., testing on various browsers and browser versions, and perhaps on devices and operating systems for responsive applications.
Happy testing :)

How can a large number of developers write software together without either a cumbersome process or poor quality software?

I work at a company with hundreds of people writing software for essentially the same product. The quality of the software has to be high because so many people depend on it (not least the developers themselves). Because of this every major issue has resulted in a new check - either automated or manual.
As a result the process of delivering software is becoming ever more burdensome. So that requires more developers which... well you can see it is a vicious circle.
We now have a problem with releasing software quickly - the lead time even to change one line of code for a very serious issue is at least one day.
What techniques do you use to speed up the delivery of software in a large organization, while still maintaining software quality?
I also work in a large and cumbersome organization. I've had success implementing several agile software development methodologies, but there are two in particular that I have found especially valuable.
Iterative and incremental development - Keep your release cycle short and tight. Across a large team, if numerous changes are made in between releases, you can find yourself in an integration nightmare.
Large organizations lean towards big project plans with lengthy development time lines. Fight this. Planning a project out an entire year makes no sense when your whole perception might change after the first two weeks of development. Condition your stakeholders to get used to the idea of making small incremental releases and adapting to ever-changing requirements.
Automated Unit Tests - This is a great way to ensure you are releasing quality software. The worst bugs are caused by seemingly innocent code changes that have unintended consequences elsewhere. Comprehensive unit tests are maybe the best way to catch this kind of bug. If you can graduate to test-driven development or behavior-driven development, even better.
Any large organization is going to have some mediocre developers. It's a fact of life. Automated unit testing is a great way to keep an eye on them. Have one of your better developers write their unit tests for them. You will have assurance that at least their code works (even if it's ugly).
See Continuous-Integration
And please read The Joel Test: 12 Steps to Better Code.
There are many ways to improve the process, but the key component is modularity. By clearly separating responsibilities (both in code and in the organization) and defining clear and consistent interfaces, one large team can work as many small teams all tied together.
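As a small sketch of what such an interface boundary can look like (Python, with invented names), one team codes and tests against the contract while another owns the real implementation:

```python
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """The agreed contract between two teams."""
    @abstractmethod
    def charge(self, account_id: str, cents: int) -> str:
        """Charge the account and return a transaction id."""

class StubGateway(PaymentGateway):
    """Lets the checkout team work before the real gateway exists."""
    def charge(self, account_id: str, cents: int) -> str:
        return "stub-txn-" + account_id

def checkout(gateway: PaymentGateway, account_id: str, cents: int) -> str:
    # Depends only on the interface, never on a concrete implementation.
    return gateway.charge(account_id, cents)

print(checkout(StubGateway(), "acct-1", 1999))
```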
Wise but gloomy sayings:
Fred Brooks: Adding programmers to a late project makes it later.
Gerry Sussman: Programmers combine like resistors in parallel.
Somebody else: The cost of a project is the number of programmers times the length of the schedule.
Something that might work:
Organize the project so that there is a core application built by one or two people, preferably operated via a language. Then let all the programmers effectively be users of the core, by coding in that language.
I'm thinking of language-based products: SAS, S/R, MATLAB, etc.

Are embedded developers more conservative than their desktop brethren? [closed]

I've been in the embedded space for a while now, and it seems that most programmers I talk to are doing things pretty much the same way they were done 15 years or more ago: Waterfall(ish) Development, command line tools, and a small group using lint.
Contrast this with the server/desktop environment, where there seems to be lots of activity related to all sorts of facets of programming:
XP, Scrum, Iterative, Lean/Agile
Continuous Integration
Automated Builds
Automated Unit Testing Frameworks
Refactoring tool support
Is it just that the embedded environment makes it more difficult to implement new practices or tools?
Is it that the mindset of embedded programmers steers them away from new tools/concepts?
Is it that management in the typical embedded industry is behind the curve compared to IT-focused fields?
I do realize that this is a generalization, and some embedded projects do use Scrum, Agile, CI, Automated Builds (in fact I worked at a company that had that in place since the 80s). But my impression is that it is a very small percentage.
We are all used to the fact that our desktop PC crashes once in a while (or at least an application on the desktop suddenly disappears). It's no big deal. The next patch will fix it.
In the embedded space, you are building something which can't be patched. Lives can depend on your device (in a car, an elevator or a medical system). Most devices are installed and then must run unattended for years. So embedded people tend to be very conservative. TCP/IP is often "too modern". They stick to their trusty serial bus with a communication "stack" that is roughly 50 lines of assembler code.
What's worse, you simply don't have an abundance of space on the device, which means you can't use one of the latest programming languages that make TDD and automated builds a joy.
Next, a lot of embedded development environments are proprietary. If your supplier doesn't support it, you won't get it. Linux has started to weaken this in the past years but a whole lot of devices are not powerful enough to run Linux, yet. And even if they were, the CPU power would be used for something else instead of running a fancy OS which comes with source.
So yes, there are powerful forces working in the background to keep the embedded space where it is.
Are embedded developers more conservative than their desktop brethren?
Yes, because they are more concerned with the consequences of making errors. It’s a big deal to patch an embedded device. Not so much for a desktop app.
Waterfall-ish development is necessary in the embedded world because you are generally building hardware at the same time as the software. You need to know as soon as possible how much memory, how much processor speed, how big a flash, and what (if any) special hardware is necessary. The hardware design can't be completed until you know these answers. Once you decide, that is pretty much it; the lead time for redoing a board is far too long. If you mess up, then the software is going to have to work around any shortcomings. Not usually an ideal situation.
As for the tools, that is largely based on what the supplier provides and any biases of the developers. On some projects I have used XP Embedded and got pretty much everything that the desktop developer gets.
XP, Scrum, Iterative, Lean/Agile:
Since most of the design is done up front (by necessity), and you usually don’t have working hardware when it is time to code, the quick turn-around processes don’t really provide much benefit.
Continuous Integration/Automated Builds
Nice to have, but not really necessary. What…it takes about 15 seconds to open the IDE and press the compile button.
Automated Unit Testing
No reason why this shouldn't be done, but only part of the code can truly be automatically tested because the other part is either hardware dependent or has some other dependencies like timing. So you can't really tell if the code is working by the automated tests.
Refactoring Tool Support
For embedded processor vendors, the product is the processor. They provide the IDE support in order to encourage you to purchase their processor. They couldn't possibly afford to pay for a Visual Studio-sized development team to add all the bells and whistles to an IDE which isn't even their product.
These are some reasons I can think of:
Embedded teams are usually smaller than desktop/web teams, and the code base is smaller.
System testing is much more important than unit testing. The software needs to be tested together with the hardware. Automated testing is often not possible and can only be applied to a small fraction of the code base.
Embedded engineers have a different skill set than software engineers. They interact with hardware and know how to use an oscilloscope and a logic analyzer. Usually, the difficult part of their job is to find a glitch in the hardware. They do not have the time to adopt modern software methodologies.
Embedded programmers are mostly electrical engineers, not computer scientists or software engineers.
They excel in their field of expertise and bring a slower, more methodical approach than most computer programmers. When it comes to programming firmware, electrical engineers know just enough to be dangerous.
Here are some of the things I've noticed electrical engineers doing in C:
All code in ONE single file
Math like variable names: x, y, z
No or missing indentation
No standard comment headers
No comments at all
Too many comments
In their defense, EEs didn't train to be computer programmers; it's not their job. I think software is the hardest part of creating embedded devices. Designing PCBs and choosing components requires skill, but pales in comparison to the complexity of 10,000 lines of code.
Embedded programmers also have to deal with IDEs that look and behave like the IDEs of the '90s.
MPLAB
AVR Studio
Is it just that the embedded environment makes it more difficult to implement new practices or tools?
It's partly a matter of scale. Software is NOT the product; the product is the product. However, there are thousands of different types of microcontrollers and microprocessors out there, and the most popular thousand have 3-4 different compilers that aren't completely compatible.
So a given tool is only going to be used by a few hundred or thousand engineers.
In windows development, however, there are millions of programmers of many levels - the tools produce software directly which is the product, and so it's going to get more eyeballs, and more money.
Each new product that an engineer puts out might have a different processor.
Is it that the mindset of embedded programmers steers them away from new tools/concepts?
Embedded programmers are generally software or firmware engineers, as opposed to programmers. Engineering implies a certain amount of design, design analysis, and design proof prior to implementation - in other words a ton of work is done before the first line of code is written, and the documentation, ideally, is specific enough that implementation is merely turning pseudocode like documentation into compilable code.
New tools and concepts are needed in the design phase, not the implementation phase. An IDE with intellisense may be nice, but by the time the code is being written it's useless cruft - they already know what they need.
CAD - computer aided design - tools are being developed for firmware engineers that are used in the design phase to develop models and simulations that are directly turned into code. Matlab and simulink are good examples of this. The system as a whole is designed.
In fact, one might wonder why software developers are still writing code while the engineers are making data/program flow charts and state machine diagrams. Why is UML uptake so slow in the application world? It sounds like application developers can use some of the tools in common use among embedded systems engineers...
Is it that management in the typical embedded industry is behind the curve compared to IT-focused fields?
Actually, it's likely the reverse. When a project starts the engineers have to pick the processor.
The processor manufacturers get less money on older chips, so they pitch the latest and greatest, and they are generally cheaper overall than the chips used in the previous design (either by die shrinks, more integration, etc).
So the design is actually using the latest and greatest chips.
The downside is that the compiler and tools are often immature. They can only build so much on the older tools, and since the target moves with each new processor, they can't focus on a lot of the nice features application programmers might like. Especially since many of those features won't be useful to an embedded engineer.
There are many other factors, some of which are enumerated by other answers, but it's really a different field even though they both involve programming.
-Adam
I would also add a couple of points here:
In general, embedded projects tend to be smaller than desktop projects. This decreases the need for very elaborate software processes.
Requirements for embedded projects are often precise and better defined. Therefore SCRUM and agile are not so crucial.
Finally, embedded projects are generally a mix of software and hardware. With software being only a part of the project, embedded developers invest less time in software processes.
I agree with much that's been written here:
Old tools without the bells and whistles: far fewer refactorings are available, if any at all, due to C/C++'s preprocessor directives, and it is time-consuming to choose a unit test framework versus simply using JUnit.
It's true that waterfall feels more efficient. If I'm going to open the hood and get into a hard-to-access place, I'll want to do as much as I can while I'm there, rather than exiting and closing the hood after each task just to open it again. The idea that creating the most important features first allows you the option of shipping when promised instead of going late can also be hard to grasp when you believe nothing is optional, which might be true. IME, though, when the deadline looms something always becomes unnecessary.
Less visibility into the system makes it riskier to revisit existing code to refactor or change functionality. There are often timing issues, which automated tests running on the host using stubs and mocks won't catch. It can be hard for someone who's been bitten by these issues to take a different perspective.
I'll add one more: the language of agile/scrum is expressed in workstation programmers' terms. To an embedded developer who knows just enough C to get the job done, what is a class, object, or method? When the "user" is typically regarded as a physical person clicking and typing, and the product has no human user interface, it's easy to dismiss the idea as Not Applicable. This may change with James Grenning's forthcoming book about TDD in C. I've been reading the beta ebook and it's quite good.
I would say it's more lack of good toolsets. It's really frustrating when you want to use C++ for its compile-time features not present in C (templates, namespaces, object-orientedness, etc) rather than its run-time features (exceptions, virtual functions) -- but the device manufacturers & 3rd parties just give you a C compiler, not C++. This probably results more from market size (hundreds of millions of PCs running Windows, with hundreds of thousands or even millions of developers -- vs. hundreds of thousands of Chip X, with hundreds or low thousands of developers) than from device capability.
edit: w/r/t robustness: there are different markets out there. The car/elevator/aeronautics/medical device market is going to have to be rigorous about getting rid of bugs. Other markets (toys, MP3 players, & other consumer electronics) can be less rigorous, especially if it's possible to upgrade code in the field. ("Oops! We're sorry we deleted your music library! We just fixed that bug, you can grab the latest release at our website at your convenience!")
I'd say different sorts of problem environments.
The biggest problem with the waterfall methodology is that requirements change. In every environment I've been in, there has been at least the likelihood of a requirements change, which means that the successful methodologies are those that keep flexibility as long as possible. Even if the customer has signed off in blood, and stands to forfeit his left hand if he suggests a change, there are changes coming in the future.
In embedded programming, it is possible to nail the requirements down up front. They come from the behavior of the system as a whole, and engineers are good at nailing down system requirements. Nobody's going to come in halfway through and say that the user now wants the pacemaker to deliver syncopated impulses while the recipient is dancing.
Once the requirements are frozen beyond thawing, which never happens in software designed for human use, waterfall is a very efficient methodology. The team proceeds from well-specified requirements to overall design, then detailed design, then coding, verifying all the way that the stages are done correctly. Then it's time to debug the code (since it's never perfect when written), and final tests to make sure the code meets the requirements.
I would also posit that some fields are inherently conservative. The transportation industry for example, where trains and planes may have life spans of 30 years or so. Customers tend to require tried and true practices, probably derived from IEEE. Waterfall is what customers know, waterfall is what customers demand.

Model Based Testing Strategies

What strategies have you used with Model Based Testing?
Do you use it exclusively for integration testing, or branch it out to other areas (unit/functional/system/spec verification)?
Do you build focused "sealed" models or do you evolve complex omnibus models over time?
When in the product cycle do you invest in creating MBTs?
What sort of base test libraries do you exclusively create for MBTs?
What difference do you make in your functional base test libraries to better support MBTs?
[There are several essays worth reading on this. Stack Overflow won't let me post more than one, so I've aggregated them in a blog post, linked at the end of this answer.]
First, a quick note on terms. I tend to use James Bach's definition of testing as "questioning a product in order to evaluate it". All tests rely on mental models of the application under test. The term Model-Based Testing, though, is typically used to describe programming a model which can be explored via automation. For example, one might specify a number of states that an application can be in, various paths between those states, and certain assertions about what should occur on the transition between those states. Then one can have scripts execute semi-random permutations of transitions within the state model, logging potentially interesting results.
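A minimal sketch of that idea in Python (states, transitions, and a seeded semi-random walk; all names here are invented):

```python
import random

# The model: states the application can be in, and the legal transitions.
STATES = {
    "logged_out": {"login": "logged_in"},
    "logged_in":  {"open_cart": "cart", "logout": "logged_out"},
    "cart":       {"checkout": "logged_in", "logout": "logged_out"},
}

def random_walk(steps=1000, seed=42):
    rng = random.Random(seed)        # seeded so any failure is replayable
    state = "logged_out"
    for step in range(steps):
        action, expected = rng.choice(sorted(STATES[state].items()))
        # A real harness would drive the application here ("press" the
        # action) and assert that its observed state matches the model's.
        assert expected in STATES, f"step {step}: bad edge {state}->{expected}"
        state = expected

random_walk()
```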
There are real costs here: building a useful model, creating algorithms for exploring it, logging systems that allow one to weed through the results for interesting failures, etc. Whether or not the costs are reasonable has a lot to do with the questions you want to answer. In general, start with "What do I want to know? And how can I best learn about it?" rather than looking for a use for an interesting technique.
All that said, some excellent testers have gotten a lot of mileage out of automated model-based tests. Sometimes we have important questions about the application under test that are best explored by automated, high-volume, semi-randomized tests. Harry Robinson (one of the leading theorists and proponents of model-based testing) describes one very colorful example where he discovered many interesting bugs in Google driving directions using a model-based test (written with Ruby's Watir library).[1]
Robinson has used MBT successfully at companies including Bell Labs, Microsoft, and Google, and has a number of helpful essays.[2]
Ben Simo (another great testing thinker and writer) has also written quite a bit worth reading on model-based testing.[3]
Finally, a few cautions: To make good use of a strategy, one needs to explore both its strengths and its weaknesses. Toward that end, James Bach has an excellent talk on the limits and challenges of Model-Based Testing. This blog post of Bach’s links to his hour long talk (and associated slides).[4]
I’ll end with a note about what Boris Beizer calls the Pesticide Paradox: “Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffective.” Scripted tests (whether executed by a computer or a person) are particularly vulnerable to the pesticide paradox, tending to find less and less useful information each time the same script is executed. Folks sometimes turn to model-based testing thinking that it gets around the pesticide problem. In some contexts model-based testing may well find a much larger set of bugs than a given set of scripted tests…but one should remember that it is still fundamentally limited by the Pesticide Paradox. Remembering its limits — and starting with questions MBT addresses well — it has the potential to be a very powerful testing strategy.
Links to all essays mentioned above can be found here: http://testingjeff.wordpress.com/2009/06/03/question-about-model-based-testing/
We haven't done much I&T and use unit testing almost exclusively, seasoned with a bit of system testing. But our focus is clearly on unit testing. I'm pretty strict on the APIs we build/provide, so the assumption is: if it works by itself, it will work in conjunction, and there hasn't been much wrong with it yet.
Our models are focused on a single purpose/module with as little dependencies as possible.
The focus is always to start as early as possible (TDD-ish), but unfortunately we don't always get to it. The problem is, you always have to sell it to management, and that's hard because while testing improves stability (overall QA), people outside of tech can't really relate to what that means until something bad happens.
Since we use PHP, we employ PHPUnit for the unit tests. All in all, we do CI with various different tools. :)
Harry Robinson, an author of MBT books who has worked a lot with the technique at Google and Microsoft, among others, has this site with some great info and whitepapers.
http://www.geocities.com/model_based_testing/
The best way is to try a model-based testing tool yourself. It's the best way to know whether model-based testing is suited to your context, and which strategy is the right one.
I suggest the "MaTeLo" tool from All4Tec (www.all4tec.net).
"MaTeLo is a test cases generator for black box functional and system testing. Conformed to the Model Based Testing approach, MaTeLo uses Markov chains for modeling the test. This statistic addin allows products validation in a Systematic way. The efficiency is achieved by a reduction of the human resources needed, an increase of the model reuse and by the enhancement of the test strategy relevance (due to the reliability target). MaTeLo is independent and user-friendly, offers to the validation activities to pass from test scripting to real test engineering and to focus on the real added value of testing: the test plans"
You can ask for an evaluation licence and try it yourself.
You can find some examples here: http://www.all4tec.net/wiki/index.php?title=Tutorials