Experiences with using Alloy in real-world projects - formal-methods

I have been interested in formal methods for some time. I have used formal methods to reason about some very specific sub-areas of a few projects I have been working on. I was never able to convince other team members to try the same, let alone specify an entire domain with a formal method.
One method I have found particularly interesting is Alloy. I think that it may "scale" better as a foundation for an entire project because it is conceptually and notationally very close to actual programming languages. Furthermore, the tools are quite solid, so the benefits of model verification are readily available.
I'd be very much interested to hear about any real-world experiences you folks might have had with using Alloy in your projects. Do you feel that it has helped you in designing a better domain model? Did you find errors in your domain model during verification? Would you use it again?

I've used Alloy on a few projects and have found it helpful; on some but not all of those projects I have been able to persuade others involved to use Alloy as well, or at least to work with the Alloy models I wrote. These projects may or may not be what you have in mind in asking for 'real-world' projects, but they certainly took place in the part of the real world I work in.
In 2006 and 2007 I created a partial Alloy model for the then-current draft of the W3C XProc specification; as far as I could tell, most members of the working group never read the paper I wrote (at http://www.w3.org/XML/XProc/2006/12/alloy-models/models.html); they said "Oh, we changed that part of the spec last week, so what the model says is no longer relevant". But the paper did manage to persuade the editor of the spec that the abstract 'component' level described in the first draft of the spec was woefully underspecified and needed to be either fully specified or dropped. He dropped it, with (I think) good results for the readability and usability of the spec.
In 2010 I made an Alloy model of the XPath 1.0 data model, which uncovered some glitches in the specification. The reaction of most interested parties (including the W3C working group responsible for maintaining the XPath 1.0 spec) has, unfortunately, not been encouraging.
A research project I'm involved with has used Alloy to model the MLCD Overlap Corpus, a collection of sample documents and related information we are creating (hyperlinks suppressed at SO's insistence); the Alloy model found a couple of errors in our initial design for the corpus catalog, so it was well worth the effort.
And we have also used Alloy to formalize some modeling work we have done on the nature of transcription and on the extension of the type/token distinction to document structure (for our paper, look for the 2010 proceedings of Balisage: The Markup Conference). This lies a little bit outside Alloy's usual area of application, as it has nothing to do with software design, but Alloy's ability to check models for consistency and generate instances has been invaluable in showing us some of the logical consequences of this or that possible axiom for our model.
To answer your specific questions: yes, Alloy has helped me specify cleaner domain models, and yes, it has found errors and glitches. They have often been small, for the reasons Daniel Jackson explains in his book Software Abstractions: first, if you use models during design, you catch errors early, when everything is still small. And, second (in Jackson's words), "In hindsight, most software design issues are trivial."
He continues: "But if you don't address them head-on, trivial issues have a nasty habit of becoming nontrivial." My experience amply confirms this. Much better to head off such problems early. So yes, I will use Alloy again.

Yes, I've used Alloy and its cousins industrially. Alloy has been most helpful in convincing me that my models weren't wildly wrong---or rather, showing me where they were wrong and gave rise to silly results. Other more specific tools, like Song's Athena and Guttman and Ramsdell's CPSA, have been more useful in their narrower domains. What more would you like to hear about?

Belatedly adding to this thread... Eunsuk Kang has recently applied Alloy to perform security analyses of web APIs for some start-ups (following many applications of Alloy in security, such as Apurva's analysis of OAuth and Barth et al.'s analysis of browser-based security mechanisms for CSRF etc.); Pamela Zave has been working on an impressive analysis of Chord, a peer-to-peer storage system, and has recently written up a fix to the original algorithm.

Related

Why was cakePHP designed to use Inheritance over Composition even though it's mostly considered a bad design?

CakePHP applications being made in our company tend to become unmaintainable as they become more complex. I figured that one specific reason is inheritance, which makes the functions in child classes depend a lot on their parent classes and vice versa (implementing the template method pattern). Why is CakePHP designed this way and not friendly to using Dependency Injection, Strategies, or Factory patterns?
There is no such bad design as you claim in the framework. Sure, there are probably things that could be done better, but I would like to see a more substantial critique including solid arguments and examples. I assume you're not using the framework as it was intended.
Let me quote the first paragraph from this page.
According to Eric Evans, Domain-driven design (DDD) is not a technology or a methodology. It’s a different way of thinking about how to organize your applications and structure your code. This way of thinking complements very well the popular MVC architecture. The domain model provides a structural view of the system. Most of the time, applications don’t change, what changes is the domain. MVC, however, doesn’t really tell you how your model should be structured. That’s why some frameworks don’t force you to use a specific model structure, instead, they let your model evolve as your knowledge and expertise grows.
You're not showing code (for a reason?) so I guess your problem comes from stuffing everything into the table objects in src/Model/Table/ or doing something similar.
But you're totally free to create a folder structure like
/src/Service
/src/Model/Domain
and then simply instantiate services as you need them in your controller actions. A service could be for example \App\Service\User\Registration and using objects from App\Model\Domain\User.
I agree that the framework in fact doesn't provide any recommendation or template structure for how this could look. For exactly this topic there is a discussion going on here. Because of the lack of such a structure I've started working on a plugin that provides this. The plugin doesn't require but suggests the usage of DI containers for the people who want them.
Given the whole fancy topic around DI and DDD, so far I would say there is not the one way to get things right but different paths, as long as the code is easy to maintain. And honestly, as long as this goal is achieved I really don't care what you call it. :) I think many people tend to make this topic too academic instead of simply trying to be practical.
Not everybody even needs that structure. It depends on whether you're building a RAD CRUD application or a more complex app. Not every application needs a DDD approach. There are so many shades of gray when it comes to designing the business layer; no matter how the framework did it, somebody would always complain about it.
I personally almost never missed a DI container in CakePHP, not even in the biggest project, which had more than ~560 database tables and was a hospital management solution, and it just worked well.
I would suggest you ask a more specific question about your approach to how you structured your code, showing your structure and code and then asking for advice on how to improve it, instead of blaming the tool you're using in the first place without providing context.
Unfortunately CakePHP v3 can not compare to Zend3/Laminas, Symfony or Laravel. It is 7-8 years behind the other frameworks. If you have been using Cake for years or it is your first and last framework, it is normal not to realise that. But if you have to use it after Zend 3... Cake seems like a really bad ecosystem.
Bad documentation
Bad ORM
Poor Routing system
Bad Templating engine
Bad idea to mix Data Mapper and Active Record
DIC is totally missing
Components - not good but not terrible
...
And many more things that should not be underestimated, like the lack of GOOD tutorials, plugins/addons/packages.
The above things make developers follow bad practices that add a lot of technical debt.
If you care just for "it works!" but not how it works and why it is bad, Cake will fit OK for you.
Cake can not scale as well as Symfony/Laminas if you are doing a big project. (Yea, AWS/GC can help with scaling a lot of things, but not with scaling source code.)
Cake doesn't allow you rapid development like Laravel/Symfony for a decent project.
I'm wondering who and WHY would start a new project today using Cake, as it has zero benefits over the other frameworks.
Probably only devs who have used only Cake for the last decade and do not want to start learning new technologies, or devs who think SOLID is just a fancy hype with zero benefits, like design patterns, DRY and KISS.
The CakePHP framework supplies user interaction with databases using Active Record, which means there is high coupling between the business layer and the database layer; this has negative effects on unit testing, and because of that the framework is not friendly to Dependency Injection. The same issue happens with the Factory pattern: the high coupling mentioned before makes it more difficult to use simulated objects in unit testing.
Hope it helps!
Alberto

How do you write good highly useful general purpose libraries?

I asked this question about Microsoft .NET Libraries and the complexity of its source code. From what I'm reading, writing general purpose libraries and writing applications can be two different things. When writing libraries, you have to think about the client who could literally be everyone (supposing I release the library for use in the general public).
What kind of practices or theories or techniques are useful when learning to write libraries? Where do you learn to write code like the one in the .NET library? This looks like a "black art" which I don't know too much about.
That's a pretty subjective question, but here's one objective answer. The Framework Design Guidelines book (be sure to get the 2nd edition) is a very good book about how to write effective class libraries. The content is very good and the often dissenting annotations are thought-provoking. Every shop should have a copy of this book available.
You definitely need to watch Josh Bloch in his presentation How to Design a Good API & Why it Matters (1h 9m long). He is a Java guru but library design and object orientation are universal.
One piece of advice often ignored by library authors is to internalize costs. If something is hard to do, the library should do it. Too often I've seen the authors of a library push something hard onto the consumers of the API rather than solving it themselves. Instead, look for the hardest things and make sure the library does them or at least makes them very easy.
I will be paraphrasing from Effective C++ by Scott Meyers, which I have found to be the best advice I got:
Adhere to the principle of least astonishment: strive to provide classes whose operators and functions have a natural syntax and an intuitive semantics. Preserve consistency with the behavior of the built-in types: when in doubt, do as the ints do.
Recognize that anything somebody can do, they will do. They'll throw exceptions, they'll assign objects to themselves, they'll use objects before giving them values, they'll give objects values and never use them, they'll give them huge values, they'll give them tiny values, they'll give them null values. In general, if it will compile, somebody will do it. As a result, make your classes easy to use correctly and hard to use incorrectly. Accept that clients will make mistakes, and design your classes so you can prevent, detect, or correct such errors.
Strive for portable code. It's not much harder to write portable programs than to write unportable ones, and only rarely will the difference in performance be significant enough to justify unportable constructs.
Even programs designed for custom hardware often end up being ported, because stock hardware generally achieves an equivalent level of performance within a few years. Writing portable code allows you to switch platforms easily, to enlarge your client base, and to brag about supporting open systems. It also makes it easier to recover if you bet wrong in the operating system sweepstakes.
Design your code so that when changes are necessary, the impact is localized. Encapsulate as much as you can; make implementation details private.
Edit: I just noticed I very nearly duplicated what cherouvim had posted; sorry about that! But turns out we're linking to different speeches by Bloch, even if the subject is exactly the same. (cherouvim linked to a December 2005 talk, I to January 2007 one.) Well, I'll leave this answer here — you're probably best off by watching both and seeing how his message and way of presenting it has evolved :)
FWIW, I'd like to point to this Google Tech Talk by Joshua Bloch, who is a greatly respected guy in the Java world, and someone who has given speeches and written extensively on API design. (Oh, and designed some exceptionally good general purpose libraries, like the Java Collections Framework!)
Joshua Bloch, Google Tech Talks, January 24, 2007: "How To Design A Good API and Why it Matters" (the video is about 1 hour long)
You can also read many of the same ideas in his article Bumper-Sticker API Design (but I still recommend watching the presentation!)
(Seeing you come from the .NET side, I hope you don't let his Java background get in the way too much :-) This really is not Java-specific for the most part.)
Edit: Here's another 1½ minute bit of wisdom by Josh Bloch on why writing libraries is hard, and why it's still worth putting effort in it (economies of scale) — in a response to a question wondering, basically, "how hard can it be". (Part of a presentation about the Google Collections library, which is also totally worth watching, but more Java-centric.)
Krzysztof Cwalina's blog is a good starting place. His book, Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries, is probably the definitive work for .NET library design best practices.
http://blogs.msdn.com/kcwalina/
The number one rule is to treat API design just like UI design: gather information about how your users really use your UI/API, what they find helpful and what gets in their way. Use that information to improve the design. Start with users who can put up with API churn and gradually stabilize the API as it matures.
I wrote a few notes about what I've learned about API design here: http://www.natpryce.com/articles/000732.html
I'd start looking more into design patterns. You're probably not going to find much use for some of them, but as you get deeper into your library design the patterns will become more applicable. I'd also pick up a copy of NDepend - a great code measuring utility which may help you decouple things better. You can use .NET libraries as an example, but, personally, I don't find them to be great design examples, mostly due to their complexity. Also, start looking at some open source projects to see how they're layered and structured.
A couple of separate points:
The .NET Framework isn't a class library. It's a Framework. It's a set of types meant to not only provide functionality, but to be extended by your own code. For instance, it does provide you with the Stream abstract class, and with concrete implementations like the NetworkStream class, but it also provides you the WebRequest class and the means to extend it, so that WebRequest.Create("myschema://host/more") can produce an instance of your own class deriving from WebRequest, which can have its own GetResponse method returning its own class derived from WebResponse, such that calling GetResponseStream will return your own class derived from Stream!
And your callers will not need to know this is going on behind the scenes!
A separate point is that for most developers, creating a reusable library is not, and should not be the goal. The goal should be to write the code necessary to meet requirements. In the process, reusable code may be found. In that case, it should be refactored out into a separate library, where it can be reused in the future.
I go further than that (when permitted). I will usually wait until I find two pieces of code that actually do the same thing, or which overlap. Presumably both pieces of code have passed all their unit tests. I will then factor out the common code into a separate class library and run all the unit tests again. Assuming that they still pass, I've begun the creation of some reusable code that works (since the unit tests still pass).
This is in contrast to a lesson I learned in school, when the result of an entire project was a beautiful reusable library - with no code to reuse it.
(Of course, I'm sure it would have worked if any code had used it...)

Where can I find UML diagrams (instead of reinventing the wheel)?

I am currently trying to draw a set of UML diagrams to represent products, offers, orders, deliveries and payments. These diagrams have probably been invented by a million developers before me.
Are there any efforts to standardize the modeling of such common things? Or even the modeling of specific domains (for example car-manufacturing).
Do you know if there is some sort of repository containing UML diagrams (class diagrams, sequence diagrams, state diagrams...)?
There is a movement for documenting (as opposed to standardizing) models for certain domains. These are called analysis patterns, a term Martin Fowler came up with. He actually wrote a book called Analysis Patterns. Also, he has a dedicated section on his website where he presents some of these patterns accompanied by UML diagrams.
Maybe you'll find some inspiration that will help you in modeling your domain. I've stressed the word inspiration as I think different businesses have different requirements although they operate in the same domain, so the solutions you might read about may not be appropriate for your problem.
There are many tools out there that do both - but they're generally not free!
Microsoft Visio does both and is extensible. For UML artefacts it comes with auto-generators into VB/Java template code - but you can modify them to auto-generate any code. Many users of Visio have created models to use as templates.
Artisan Enterprize is by far the most powerful UML tool (but it's not cheap).
Some would argue that Rational Rose or RUP is the better tool
But for car manufacturing and other similar real-world modelling, by far the best tool is Mathworks Simulink (not because it's one of the most expensive). It is by far the best tool because you can animate the model - you can prove the model working before generating the slick code (in whatever grammar/language/other models you care to push it)!
You can obtain a student license for around £180; with the 'real thing' pushing £4000 (for car-related artefacts). The full product with all the trimmings is about £15k. Simulink is also extensible with a C-like language, though there is a .Net add-in and APIs to use a plethora of other languages. And, just like Visio, there is a world-wide forum creating saleable, shareware & freeware real-world model templates. Many world-wide auto manufacturers are already using Simulink.
I think that MiniQuark's question is really good and will sooner or later be answered by vendors such as Omondo, Rational IBM etc... Users don't just need tools, they need models out of the box, and just add their business rules inside an existing well-defined architecture. Why develop a new architecture from scratch if the job has already been done? In Java we use plenty of frameworks, existing methods etc... so why not go one level higher and reuse architecture? It is today impossible to guess how a project will evolve, and new demands are coming every day. We therefore need a stable architecture which has been tested previously and is extensible. I have seen so many projects starting with a nice architecture, then realizing in the middle of the project that this is not what is best and then changing their architecture. Renaming classes, splitting classes, creating packages etc... after the first iteration it is getting a real mess. Could you imagine what we found after 10 iterations!! A total mess!!
This mess would have been avoided by using a predefined model which has been tested previously, because the missing class, or package etc... would have already been created and only a class rename would be sufficient for architecture purposes. Adding business rules methods will end the coding stage before deployment test.
I think there is a confusion between patterns and the initial question, which is related to UML model reusability.
There is today no reusable model out of the box which has been developed. This is really strange, but the job has never been done or never been shared.
Omondo has tried to launch an initiative without real success. I have heard that they are working on hundreds of out-of-the-box models which will be open source and given for free to the community. I hope this will be done, because this is really important for me and would save me a lot of time at the beginning of a project.

Formal Methods and Enterprises [closed]

So...
I teach formal methods in software engineering. I also teach "agile methodologies". Most people seem to think this is contradictory. I think it makes a lot of sense... I also work for a company, where we need to actually get things done :) While I can apply my earned skill points on "specification" in a day-to-day basis, my colleagues typically flee away from the word "formal".
I used to think that this was due to the intrinsic way we learn how to program: we are usually driven to find a working solution, not to understand the problem. Then I thought this was due to the fact that most people in the formal community are not engineers, but mathematicians or computer scientists. Nowadays, I wonder if it is just because the formal-methods community hides behind some kind of "obfuscation" law to use all the available UNICODE symbols, actively develops rude, unaesthetic tools, and laughs in the face of standards.
Yes, I've been moving from a "blame them" to a "blame us" perspective ;-)
So, my question is: do you use any kind of formal methods in your company? Have you introduced them, or were they pre-requisites? What techniques do you use to clear the fog of mathematics from people's fears and incite them to use formal methods? What do you think current tools are lacking for a more general usage?
The key to getting people to buy into any methods or methodologies is to show them how it solves problems they are having. If they can see it will make their lives better you have a much improved chance of getting them to adopt the techniques.
And if you can't show them that, perhaps you wanted to adopt the methods based on philosophy rather than practicality. Unless the others share your philosophy then you're not going to get anywhere. And perhaps you shouldn't.
Over the decades there have been a great many methodologies. Newer ones always address the shortcomings of the old ones, yet projects still get in trouble and fail. Why? Because the rock stars that come up with new methodologies are rock stars, and have made a new methodology precisely because they understand the underlying issues and how to apply them. Those who come after tend to blindly follow the recipe, and it doesn't work so well.
So I think the best thing is to teach about the underlying problems and then show how various methods attempt to deal with those problems. The differences in companies, projects, and teams is so great that no one methodology can be applied successfully to all combinations. Learning to choose an appropriate tool and apply it well is crucial.
Thank you for all contributions. They are very insightful. Allow me to flame a bit (don't take it personal, though :-)
Most people seem to think that formal methods are just about program verification. Or critical systems. This may be true if we pursue the ultimate cliche: to prove we are doing the program right (vs. validation, which asks, as a contributor said, if we are doing the right program).
But consider model finding/checking tools, such as Alloy. Learning to use a tool like this takes a negligible amount of time for anyone used to UML and OO. Still, it can give you immediate insight over your model. It usually takes no more than 10 minutes to find a counter-example over a small enough subset of the model one's trying to use (and that includes describing the model in Alloy in the first place).
Take requirements engineering as an example. One usually draws a lot of UML. Few people use OCL, though, and many business rules are informally annotated in natural language. Why? Time constraints?
Now consider the fact that the majority just uses her/his gut feeling to prove that a model is satisfiable. Again, why? I can take the same amount of time (probably even less, since I don't need to care about drawing aesthetics) to write that model in Alloy, and just check for satisfiability. And what kind of mathematics do I need to know? "Predicates"? Fancy name for IFs and booleans ;-) Quantifiers? Fancy names for ForEachs()...
What about big information systems? They don't need to be critical... Just try to analyze in your head a conceptual (not implementation!) diagram with over 600 classes. I see many people banging their head in the wall with easy-to-make model mistakes because they missed some constraint, or the model allows stupid things to happen.
The fact is, one does not need to use formal approaches from head to tail. Granted, I could prove a whole application in Coq, and certify that it is 100% compliant with some specification. This may be the Computer Scientist/Mathematician approach.
Still, with a GTD philosophy, why can't I delegate some tasks to the computer and allow it to help improve my development? Is it really a matter of "time", or plain, simple lack of technical abilities and will to learn/innovate?
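The bounded check the reply above describes, as an added illustration that is not code from any post here, written in Python rather than Alloy: a small constructed access-control model (users, roles, requests), its facts as predicates, quantifiers as all()/any(), and an exhaustive search of a two-atom scope for a counterexample to an assertion. This is the small-scope analysis Alloy's `check` command performs with a SAT solver, done here by brute force; the model and its facts are assumptions made for the example.

```python
"""Bounded model finding in the spirit of Alloy's `check` command.

Added illustration, not code from any post on this page. The domain model
(users, roles, requests), its facts and its assertion are constructed for the
example; the scope is two atoms per signature, so all 4096 instances can be
enumerated by brute force instead of with Alloy's SAT-based search.
"""
from itertools import product
from typing import Callable, FrozenSet, Iterator, NamedTuple, Optional, Tuple

# Scope: two atoms per signature (Alloy: "check ... for 2").
USERS = ("u0", "u1")
ROLES = ("submitter", "approver")
REQUESTS = ("r0", "r1")

Rel = FrozenSet[Tuple[str, str]]


class Instance(NamedTuple):
    """One candidate world: the three relations of the model."""
    has_role: Rel    # User -> Role
    submitted: Rel   # Request -> User who submitted it
    approved: Rel    # Request -> User who approved it


Predicate = Callable[[Instance], bool]


def relations(domain: Tuple[str, ...], codomain: Tuple[str, ...]) -> Iterator[Rel]:
    """Yield every subset of domain x codomain, i.e. every possible relation."""
    pairs = list(product(domain, codomain))
    for bits in product((False, True), repeat=len(pairs)):
        yield frozenset(p for p, keep in zip(pairs, bits) if keep)


def instances() -> Iterator[Instance]:
    """Enumerate every instance in scope (16 * 16 * 16 = 4096 of them)."""
    for has_role, submitted, approved in product(
        relations(USERS, ROLES), relations(REQUESTS, USERS), relations(REQUESTS, USERS)
    ):
        yield Instance(has_role, submitted, approved)


# Facts: constraints every acceptable instance must satisfy (Alloy `fact`).
# Universal quantifiers become all(), existential ones any(), membership `in`.
def exactly_one_submitter(i: Instance) -> bool:
    return all(sum(1 for r, _ in i.submitted if r == req) == 1 for req in REQUESTS)


def at_most_one_approver(i: Instance) -> bool:
    return all(sum(1 for r, _ in i.approved if r == req) <= 1 for req in REQUESTS)


def role_guards(i: Instance) -> bool:
    """Only users holding the submitter role submit; only approvers approve."""
    return all((u, "submitter") in i.has_role for _, u in i.submitted) and all(
        (u, "approver") in i.has_role for _, u in i.approved
    )


def roles_disjoint(i: Instance) -> bool:
    """The fact the modeller forgot: nobody holds both roles."""
    return not any(
        (u, "submitter") in i.has_role and (u, "approver") in i.has_role for u in USERS
    )


BASE_FACTS: Tuple[Predicate, ...] = (exactly_one_submitter, at_most_one_approver, role_guards)
FIXED_FACTS: Tuple[Predicate, ...] = BASE_FACTS + (roles_disjoint,)


# Assertion the modeller believes follows from the facts (Alloy `assert`).
def separation_of_duties(i: Instance) -> bool:
    """No request is approved by the same user who submitted it."""
    return not (i.submitted & i.approved)


def satisfiable(facts: Tuple[Predicate, ...]) -> Optional[Instance]:
    """Alloy `run`: first instance in scope satisfying all facts, or None."""
    return next((i for i in instances() if all(f(i) for f in facts)), None)


def check(assertion: Predicate, facts: Tuple[Predicate, ...]) -> Optional[Instance]:
    """Alloy `check`: first instance that satisfies the facts but violates the
    assertion, or None if no counterexample exists within the scope."""
    return next(
        (i for i in instances() if all(f(i) for f in facts) and not assertion(i)), None
    )


if __name__ == "__main__":
    print("counterexample with base facts:", check(separation_of_duties, BASE_FACTS))
    print("after adding roles_disjoint:", check(separation_of_duties, FIXED_FACTS))
```

With the base facts the search returns a world where one user holds both roles and approves their own request; adding the missing fact makes the assertion hold throughout the scope, which is exactly the kind of small, early model fix the reply is talking about.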
Working with line of business IT development in an enterprise means having to transfer knowledge about the business from actual business people into the heads of developers. While I myself find abstract maths to be one of the greatest pastimes there is, it's a terrible communications tool. And communications is what it's all about. While I might conceivably have some success convincing IT people to embrace more abstract notations, I basically have no chance with the business people.
While there are some areas where I can see a role for formal methods in an enterprise (math- and logic-heavy specialist software, significant need for provable properties as in safety critical software) they provide little help with getting correct requirements on e.g. how to fulfil a customer order by issuing one or more supply orders to a set of possible external or internal providers.
I think the jury is still out on model based approaches and domain specific languages. I think they will succeed or fail depending on whether they provide quicker feedback from IT to the wishes and needs of the business side, and whether they presume business people will have to do any significant studying.
Technology is easy. Communication is hard. Formal methods may help us do things right, but those I've seen do nothing to help us do the right things. (Yes, these are cliches, but that's because they're inescapably and painfully true.)
I'm taking a course on 'Specification and Verification'. As part of the course structure we are doing the following:
1. Learning tools like PVS (Prototype Verification System) http://pvs.csl.sri.com/ and SMV (Software Modeling and Verification) http://www.cs.cmu.edu/~modelcheck/smv.html
2. Apart from that, we dissect accidents which happened because of software failures, e.g. the failure of Ariane V.
I feel formal methods are more applicable to scenarios where the failure cost is more than the design cost. And it seems apt to use them for software being used in critical systems. I guess it is used in avionics, chip design etc., and the current automobile industry is also drafting it into practice.
I have tried to get people to embrace formal specification methods a few times (Z and Alloy) and have had the same experience that you have: most people, while feeling that they serve a useful purpose, are very uncomfortable using them for actual work.
Funny enough, the same people are more than happy to produce utterly useless UML diagrams in ginormous quantities.
I think there are two main reasons for this:
a.) Many developers are uncomfortable with the level of abstraction required by a formal approach. The fact that most entry-level mathematics education is all calculus and non-discrete mathematics might have something to do with this.
b.) Formal methods require a very bottom-up design approach where you design your core model from the ground up and make it airtight, and then connect it up to the actual user requirements by providing an interface on top of it. Since we tend to have requirements drive development efforts, a top-down approach feels more natural, although it often leads to inconsistent models. It's like retrofitting a basement underneath your house after it has already been built.
Formal methods make no sense in systems where the cost of failure is low.
In a production web application, you've got multiple front-end boxes, multiple back-end boxes, multiple database boxes - if a program on any one of them fails, it's a non-event. Hardware is so cheap that you can build these systems for far less than the cost of formally specifying all your software.

Model Based Testing Strategies

What strategies have you used with Model Based Testing?
Do you use it exclusively for integration testing, or branch it out to other areas (unit/functional/system/spec verification)?
Do you build focused "sealed" models or do you evolve complex onibus models over time?
When in the product cycle do you invest in creating MBTs?
What sort of base test libraries do you exclusively create for MBTs?
What difference do you make in your functional base test libraries to better support MBTs?
[There are several essays worth reading on this. Stack Overflow won't let me post more than one, so I've aggregated them in a blog post, linked at the end of this answer.]
First, a quick note on terms. I tend to use James Bach’s definition of Testing as “Questioning a product in order to evaluate it”. All tests rely on /mental/ models of the application under test. The term Model-Based Testing, though, is typically used to describe programming a model which can be explored via automation. For example, one might specify a number of states that an application can be in, various paths between those states, and certain assertions about what should occur on the transition between those states. Then one can have scripts execute semi-random permutations of transitions within the state model, logging potentially interesting results.
There are real costs here: building a useful model, creating algorithms for exploring it, logging systems that allow one to weed through for interesting failures, etc. Whether or not the costs are reasonable has a lot to do with what questions you want to answer. In general, start with “What do I want to know? And how can I best learn about it?” rather than looking for a use for an interesting technique.
All that said, some excellent testers have gotten a lot of mileage out of automated model-based tests. Sometimes we have important questions about the application under test that are best explored by automated, high-volume semi-randomized tests. Harry Robinson (one of the leading theorists and proponents of model-based testing) describes one very colorful example where he discovered many interesting bugs in Google driving directions using a model-based test (written with Ruby’s Watir library).[1]
Robinson has used MBT successfully at companies including Bell Labs, Microsoft, and Google, and has a number of helpful essays.[2]
Ben Simo (another great testing thinker and writer) has also written quite a bit worth reading on model-based testing.[3]
Finally, a few cautions: To make good use of a strategy, one needs to explore both its strengths and its weaknesses. Toward that end, James Bach has an excellent talk on the limits and challenges of Model-Based Testing. This blog post of Bach’s links to his hour long talk (and associated slides).[4]
I’ll end with a note about what Boris Beizer calls the Pesticide Paradox: “Every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffective.” Scripted tests (whether executed by a computer or a person) are particularly vulnerable to the pesticide paradox, tending to find less and less useful information each time the same script is executed. Folks sometimes turn to model-based testing thinking that it gets around the pesticide problem. In some contexts model-based testing may well find a much larger set of bugs than a given set of scripted tests…but one should remember that it is still fundamentally limited by the Pesticide Paradox. Remembering its limits — and starting with questions MBT addresses well — it has the potential to be a very powerful testing strategy.
Links to all essays mentioned above can be found here: http://testingjeff.wordpress.com/2009/06/03/question-about-model-based-testing/
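The state model, transitions, assertions and semi-random exploration described in the answer above, as an added example that is not part of the original post: an abstract model of a login session with account lockout, a constructed implementation (the Session class) with one planted discrepancy, and a seeded random walk that logs the first point where the two disagree. The action names, LOCK_AFTER and Session are assumptions made for the example.

```python
import random

# Added example, not from the original post: model-based testing of a login
# session with lockout. model_step is the abstract model; Session is a
# constructed implementation under test with one planted discrepancy.
LOCK_AFTER = 3  # failed attempts before the account locks (model parameter)
ACTIONS = ("login_ok", "login_bad", "logout", "unlock")

def model_step(state, action):
    """Abstract model: expected (status, fails) after applying an action."""
    status, fails = state
    if action == "login_ok":
        return state if status == "locked" else ("in", 0)
    if action == "login_bad":
        if status == "locked":
            return state
        fails += 1
        return ("locked" if fails >= LOCK_AFTER else "out", fails)
    if action == "logout":
        return ("out", fails) if status == "in" else state
    return ("out", 0)  # "unlock": administrative reset of the lock

class Session:
    """Constructed implementation. Planted bug: a correct password is
    accepted even while the account is locked."""
    def __init__(self):
        self.status, self.fails = "out", 0
    def apply(self, action):
        if action == "login_ok":
            self.status, self.fails = "in", 0      # BUG: never checks "locked"
        elif action == "login_bad" and self.status != "locked":
            self.fails += 1
            self.status = "locked" if self.fails >= LOCK_AFTER else "out"
        elif action == "logout" and self.status == "in":
            self.status = "out"
        elif action == "unlock":
            self.status, self.fails = "out", 0
        return (self.status, self.fails)

def check_trace(trace):
    """Replay one action sequence through model and implementation in lockstep;
    return (prefix, expected, actual) at the first divergence, else None."""
    state, sut = ("out", 0), Session()
    for i, action in enumerate(trace):
        state, actual = model_step(state, action), sut.apply(action)
        if actual != state:
            return trace[:i + 1], state, actual
    return None

def find_divergence(seeds=range(100), steps=60):
    """Semi-random exploration: seeded random walks over ACTIONS; report
    (seed, prefix, expected, actual) for the first divergence found."""
    for seed in seeds:
        rng = random.Random(seed)
        result = check_trace([rng.choice(ACTIONS) for _ in range(steps)])
        if result:
            return (seed,) + result
    return None
```

Because the walk is seeded, the returned prefix is a reproducible counterexample that can be turned into a fixed regression test once the discrepancy is understood.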
We haven't done any/much I&T and use unit testing almost exclusively, seasoned with a bit of system testing. But our focus is clearly on unit testing. I'm pretty strict on the APIs we build/provide, so the assumption is, if it works by itself, it will work in conjunction and there hasn't been much wrong in it yet.
Our models are focused on a single purpose/module with as little dependencies as possible.
The focus is always to start as early as possible (TDD-kinda), but unfortunately we don't always get to it. The problem is, you always have to sell it to management and then it's hard because while testing improves stability (overall QA), the people from the outside (outside of tech) can't really relate to what that means until something bad happened.
Since we use PHP, we employ PHPUnit for the unit tests. All in all, we do CI with various different tools. :)
Harry Robinson, an author of MBT books who has worked a lot with it, for example at Google and Microsoft, has this site with some great info and whitepapers.
http://www.geocities.com/model_based_testing/
The best way is to try a model-based testing tool yourself. It's the best way to know whether model-based testing is suited to your context, and what sort of strategy is the right one.
I would suggest the "MaTeLo" tool from All4Tec (www.all4tec.net)
"MaTeLo is a test cases generator for black box functional and system testing. Conformed to the Model Based Testing approach, MaTeLo uses Markov chains for modeling the test. This statistic addin allows products validation in a Systematic way. The efficiency is achieved by a reduction of the human resources needed, an increase of the model reuse and by the enhancement of the test strategy relevance (due to the reliability target). MaTeLo is independent and user-friendly, offers to the validation activities to pass from test scripting to real test engineering and to focus on the real added value of testing: the test plans"
You can ask for an evaluation licence and try it yourself.
You can find some examples here: http://www.all4tec.net/wiki/index.php?title=Tutorials