Generate events/commands using a property-based testing tool? - quickcheck

As I understand it, most property-based testing tools operate at the level of functions: given a function's arguments, such tools generate random input and test the output against some invariant.
I have read that ScalaCheck is now starting to include generation of events to test a stateful system. However, I can't find a great deal of information on it. Is this becoming popular in the rest of the *check ecosystem as well (FsCheck, QuickCheck, other variations)?

What you call "generation of events" to my knowledge originates in "Testing Monadic Code with QuickCheck" by Koen Claessen and John Hughes. The example they give is testing a queue. The approach used is always similar: as the comments say, since "basic" quickcheck (I'll use lowercase quickcheck for the family of QuickCheck ports on the various platforms) assumes it generates immutable data, at first sight it's not easy to use quickcheck to test a side-effecting, stateful system.
Until you realize that a stateful system gets to a certain state by executing a sequence of state transitions (variously called commands, actions, events, etc.), and that sequence can be represented perfectly as an immutable list of immutable transitions! Typically each transition is then executed both on the real system under test and on a model of its state, and after each transition the model state is compared with the real state.
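To make that concrete, here is a minimal, hand-rolled sketch of the idea in Clojure's test.check (one of the quickcheck ports). The queue under test and the vector model are illustrative assumptions, not any library's built-in state-machine layer:

(require '[clojure.test.check :as tc]
         '[clojure.test.check.generators :as gen]
         '[clojure.test.check.properties :as prop])
(import 'java.util.ArrayDeque)

;; An immutable list of immutable transitions: each command is plain data.
(def gen-command
  (gen/one-of [(gen/tuple (gen/return :enqueue) gen/nat)
               (gen/return [:dequeue])]))

;; Run one command against the real, mutable system under test.
(defn run-real [^ArrayDeque q [op x]]
  (case op
    :enqueue (do (.addLast q x) nil)
    :dequeue (.pollFirst q)))

;; Run the same command against the pure model; returns [new-model result].
(defn run-model [model [op x]]
  (case op
    :enqueue [(conj model x) nil]
    :dequeue [(vec (rest model)) (first model)]))

;; Property: after every transition, the real system agrees with the model.
(def real-matches-model
  (prop/for-all [cmds (gen/vector gen-command)]
    (let [q (ArrayDeque.)]
      (loop [model [] cs cmds]
        (if-let [[c & more] (seq cs)]
          (let [real-result           (run-real q c)
                [model' model-result] (run-model model c)]
            (if (= real-result model-result)
              (recur model' more)
              false))
          true)))))

(tc/quick-check 100 real-matches-model)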
To see how this plays out in Quviq QuickCheck (for Erlang), for example, you can read "Testing Telecoms Software with Quviq QuickCheck" by Thomas Arts, John Hughes, Joakim Johansson and Ulf Wiger.
I do believe most quickchecks, including QuickCheck itself, have a layer on top of the basic quickcheck functionality that lets you generate a sequence of state transitions, typically using a state-machine-like approach with pre- and postconditions.
I don't think this is particularly new, but probably a bit under-emphasized.
For example, FsCheck has had model-based testing for years (disclosure: I am FsCheck's main contributor). I think the same is true for ScalaCheck. Quviq QuickCheck's is likely the most advanced implementation (certainly the one with the most advanced applications).

Features and Use Case Diagrams Vs Requirements and Use Cases

According to "Head First Object-Oriented Analysis and Design", complex projects involve first finding a feature list -> drawing use case diagrams -> breaking the system into smaller modules before implementing the object-oriented design (requirements gathering -> use cases -> OO -> design patterns, etc.).
I want to understand: what is the criterion for project size at which feature lists and use case diagrams should be produced before finding requirements and writing use cases?
I am particularly interested in how this knowledge can be applied to my real-world problems.
For example, I am working on a UI that sends instrument commands to the server and displays the response from the server. I know from customer feedback that the UI should have the following things:
It should let the user select an instrument from an available list, send any custom command, and display the result
It should let the user select an instrument and a command from available lists and display the result (creating commands using drag and drop from the given lists)
It should have the capability to create macros
Is this UI project small enough to skip the steps of gathering features and drawing use case diagrams? Do we go straight to categorizing the asks as requirements and start gathering requirements and writing use cases?
How would one go about breaking down a project of this nature to reduce it to its appropriate class diagrams?
I have tried treating the above-mentioned asks as features and then tried creating requirements, mainly around the different states the UI application could be in during its life cycle, but I am still not sure, and I am unable to apply the teachings of the book to this project.
I haven't read the book, so I'm not sure what its author(s) really wanted to emphasize here, but I assume that you misinterpreted it.
Without knowing the requirements there is no feature list. If you don't know what is needed then you can't say anything about the system's capabilities.
Gathering requirements is an iterative process. First you gather the high-level requirements in order to start building a mental model of the system. This helps you start thinking about the supported features. Sharing your mental model and the exposed feature set of the system with the stakeholders initiates the next iteration.
Here you can start talking about actors, user journeys, use cases, etc. These mainly focus on the happy paths. As you go through more and more iterations you will reach a point where you can start talking about edge and corner cases: What suboptimal cases can we foresee? What can we do about them (prevention, detection, mitigation)? How do they affect the system/actors/journeys?
The better you understand the needs and circumstances, the better the design and implementation of the system can be.
UPDATE #1
Will we always have high-level and low-level (edge cases and detailed use cases) requirements, i.e. will we always first need to make use case diagrams and then write individual detailed use cases?
There are a lot of factors which can influence this. Just to name a few:
Is it a system, submodule, or component design?
Is it a greenfield or a brownfield project?
Is the stakeholder experienced enough to know which information matters and which doesn't from the IT project perspective?
Does the architect / system designer have previous experience with the same domain?
Does a wireframe or mockup exist at project kick-off?
Should the project satisfy special security, legal or governmental regulations?
etc...
In short: yes, there can be circumstances where you don't need several iterations, but in my experience that's quite rare.

What are the relationships among the procedural, object-oriented and event-driven paradigms?

I think the procedural, object-oriented and event-driven paradigms are the main paradigms in software development. How do I build a relationship among them?
What are the relationships among the procedural, object-oriented and event-driven paradigms?
It is hard for me to work out what the relationships among them are.
Procedural and event-driven describe the general workflow of the application or its decision-making logic, whereas object-oriented describes the structure of that decision-making logic.
Procedural describes a sequential workflow of logic. In general there are many steps that must be performed in a sequence; there may be criteria between the steps that depend on the outcome of previous steps, but the sequence of logic is pre-determined and hard-coded into the application.
In procedural programming the state of the system is generally passed directly between the steps, so it does not need to exist in a context outside of the executing logic, and there is less need to formally manage the shape or structure of this state.
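As a tiny illustration in Clojure (the order-processing steps here are hypothetical), a procedural pipeline threads the state directly through a fixed sequence of steps:

;; Hypothetical steps; each receives the state and passes it on.
(defn validate [order] (assoc order :valid? true))
(defn price    [order] (assoc order :total 42.0))
(defn invoice  [order] (assoc order :invoiced? true))

;; The sequence is pre-determined and hard-coded; no state lives outside it.
(defn process-order [order]
  (-> order
      validate
      price
      invoice))

(process-order {:id 1})
;;=> {:id 1, :valid? true, :total 42.0, :invoiced? true}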
Procedural logic complements functional programming architectures but can be used in many contexts.
Procedural logic suits scenarios where interaction with external systems is instantaneous or not required, or where it is OK for your logic to halt processing until the external system responds.
Procedural logic may raise events for external event-driven logic to respond to; that doesn't make it event-driven.
From a testing point of view, properly testing a pure procedural application requires the whole application to be completed. You could also test each step in the process by directly evaluating the state or result of each step, but in pure procedural programming the state is not maintained in a context that can easily be accessed outside of the logic, so the only way to test is to run each process to completion and review the results.
This means that external state is generally less of a concern when testing procedural logic.
End-to-end testing is greatly simplified because there are fewer permutations of outcomes to consider.
Event-driven describes a workflow where the system raises event messages, or responds to events raised by other systems. The application logic is executed in direct response to these events. In explicit contrast to procedural programming, the timing of the events is not controllable, and because of this many events may need to be serviced concurrently, whereas in procedural programming each step must run to completion before the next step in the chain can be executed.
This means that in event-driven logic it is more important to check the state of the system when performing decision logic. Because the steps could conceivably be executed in any order, at any time, the state needs to be managed in a context outside of most of the logic; this is where OO concepts start to become really helpful.
Most graphical user interfaces implement forms of event-driven programming. Think of a simple button click event: the user controls the timing of the execution, or whether the button is clicked at all.
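A minimal sketch of that in Clojure (the dispatcher here is a stand-in for a real GUI toolkit's event loop):

;; The state lives outside the handlers, because we cannot know when,
;; or whether, an event will arrive.
(def clicks (atom 0))

(defn on-click [_event]
  (swap! clicks inc))

;; Toy dispatcher: looks up and invokes the handler for an event type.
(defn dispatch [handlers event]
  (when-let [handler (get handlers (:type event))]
    (handler event)))

(dispatch {:click on-click} {:type :click}) ; the user decides the timing
@clicks ;=> 1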
From a testing point of view, the current state of the system is important to evaluate or control before testing a process. Depending on the type of events this can raise complications during testing; you may need to simulate, impersonate or intercept other systems, or events from or to other systems.
Object-oriented programming describes a style where the state of the system is modelled using classes that describe a set of metadata together with behaviours and interactions with other objects. We create instances of a class to create objects. In this way you can think of OO as first defining a series of templates, and then creating objects from those templates.
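In Clojure terms (an illustrative account record, not a canonical pattern), the template/instance split looks like this:

;; The record definition is the template; each ->Account call creates an
;; object (an instance) carrying its own state.
(defprotocol Depositable
  (deposit [this amount]))

(defrecord Account [owner balance]
  Depositable
  (deposit [_ amount]
    (->Account owner (+ balance amount))))

(def acct (->Account "Ada" 100))
(:balance (deposit acct 50)) ;=> 150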
OO therefore ends up with a lot of additional boilerplate, and a lot more effort needs to go into the design of the state and the environment before you really get to the behavioural or reactionary logic.
OO pairs really well with event-driven programming; objects make it easier to represent the environment and nuanced changes to it.
From a testing point of view, OO makes it possible to replicate the state or the environment without having access to the real operating environment. Because the logic is defined as a more granular set of behaviours within each object, we can easily test these behaviours in isolation from the rest of the system.
This same boon can become a burden, though: more care needs to be taken to ensure the state is defined accurately enough to obtain meaningful test results. For end-to-end testing there can be a lot of moving parts, and because the timing of events is less constrained (if at all) compared to procedural programming, there is a greater permutation of potential outcomes to define and automate. Therefore in OOP it becomes more important to test properly at a granular level, verifying discrete logic blocks to gain confidence before testing larger cascading sets of rules.

How to do TDD for real-time applications

I've been studying the discipline of Test Driven Development and for me it has worked well for implementing algorithms and input-output systems.
So, as far as I understand, the "essence" of TDD is to actually write tests for each requirement of the application. Normally this requirement defines a behavior with inputs and outputs.
So now, on to real-time applications. Let's say your application runs an infinite loop. A common example is a graphics application or an audio application where each iteration of the loop means output to the screen/speakers.
Having a system like that, let's say the requirement is something like:
"When pressing Enter button, the screen should show a circle with the text Hello World inside the circle"
So how would you test-drive this kind of requirement?
Another example, just to illustrate my question better.
Let's say I'm emulating a CPU. In each iteration, I fetch an opcode from the file, translate it and execute it. Basically there is no actual output; what happens is that input changes some state related to the emulation of the CPU. So there is no public interface for the CPU internals.
My requirement would say something like "Implement the mov operation on the CPU emulator"
Which may be part of the bigger requirement "Implement opcodes emulation"
So: what would be a good approach to tackling these behaviors/requirements using TDD?
What would be a good approach to tackling these behaviors/requirements using TDD?
What I normally see happen is that the design partitions into two pieces:
A piece that is complicated, but really "easy" to test
A piece that is hard to test, but is really "simple"
Basically, you are arranging for the "risk" to lie predominantly in the code that is easy to test.
One of the properties of code that is really simple is that it also tends to be very stable. The combination of low risk, stable, and difficult to test means that investing in test automation there is less attractive.
Basically there is no actual output.
Then write a no-op; there's no advantage to doing work that doesn't have an observable side effect of some sort.
It's a common pattern to look at an intermediate stage of the output. For example, if we are supposed to produce a tone from the speakers, what we might do in code is create a seam between the work of choosing the tone and the actual mechanism of delivering the representation of the tone to the speaker. At that seam, we also capture information so that we can check it.
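A sketch of such a seam in Clojure, for the circle requirement above (names like handle-key and render! are made up for illustration): the decision about what to draw is pure data and easy to test; the thin rendering shell is hard to test but trivially simple.

(require '[clojure.test :refer [deftest is]])

;; Complicated but easy to test: decide WHAT to draw, as plain data.
(defn handle-key [scene key]
  (if (= key :enter)
    (conj scene {:shape :circle :text "Hello World"})
    scene))

;; Simple but hard to test: push the scene to the screen each frame.
;; This thin shell is left to a manual smoke test.
(defn render! [graphics-ctx scene]
  (doseq [_item scene]
    ;; toolkit-specific draw calls would go here
    ))

;; The test exercises only the pure decision logic at the seam.
(deftest enter-shows-hello-circle
  (is (= [{:shape :circle :text "Hello World"}]
         (handle-key [] :enter))))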
So no public interface for the CPU internals.
Having a test interface is normally a satisfactory outcome. Often, it will turn out that you want to publish the test interface, for use in satisfying monitoring or observability requirements.
The public interface is way too broad to test that single behavior.
Yes, that's common. The usual response is to refactor your broad testable module into several, perhaps even many, narrow testable modules. Review Parnas 1971.
It may help also to think about the distinction between public methods (accessible outside a module) and published methods (accessible by code that you don't control).
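For the CPU example, one way out is to make the CPU state an explicit value, so the "test interface" is simply the returned state. A minimal Clojure sketch (the opcode encoding here is invented for illustration):

;; The CPU state is an explicit, immutable value; nothing internal needs
;; to be made public for tests to observe it.
(defn execute [cpu [op dst src]]
  (case op
    ;; src may be a register keyword or an immediate value
    :mov (assoc-in cpu [:registers dst] (get-in cpu [:registers src] src))
    cpu))

(require '[clojure.test :refer [deftest is]])

(deftest mov-copies-between-registers
  (let [cpu {:registers {:a 7 :b 0}}]
    (is (= 7 (get-in (execute cpu [:mov :b :a]) [:registers :b])))))

(deftest mov-loads-an-immediate-value
  (let [cpu {:registers {:a 0}}]
    (is (= 42 (get-in (execute cpu [:mov :a 42]) [:registers :a])))))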

Test Automation using Metaprogramming

I want to learn test automation using metaprogramming. I googled it but could not find anything. Can anybody suggest some resources where I can get information about how to use metaprogramming to make test automation easy?
That's a broad topic and not a lot has been written about it, because of the "dark corners" of metaprogramming.
What do you mean by "metaprogramming"?
As background, I consider metaprogramming to be any activity in which a tool (which we call a "metaprogramming tool") is used to inspect or modify the application software to achieve some effect.
Many people consider "reflection" to be a kind of metaprogramming; others consider (C++-style) templates to be metaprogramming; some suggest aspect-oriented programming.
I sort of agree, but I think these are weak versions of what you want, because each has severe limits on what it can see or do to source code. What you really want is a metaprogramming tool that has access to everything in your source program (yes, comments too!). Such tools are called program transformation systems (PTS); they work by parsing the source code and operating on the parsed representation of the program. (I happen to build one of these; see my bio.) A PTS can then analyze the code accurately, and/or make reliable changes to the code and regenerate valid source with the changes. PS: a PTS can implement all those other metaprogramming techniques as special cases, so it is strictly more general.
Where can you use metaprogramming for testing?
There are at least three areas in which metaprogramming might play a role:
1) Collection of information from tests
2) Generation of tests
3) Avoidance of tests
Collection.
Collection of test results depends on the nature of the tests. Many tests are focused on the question "is this white/black box functioning correctly?" Assuming the tests are written somehow, they have to have access to the box under test, be able to invoke that box in realistic ways, determine if the result is correct, and often tabulate the results so that post-testing quality assessments can be made.
Access is the first problem. The black box to be tested may not be easily accessible to a testing framework: driven by a UI event, in a non-public routine, or buried deep inside another function where it is hard to get at.
You may need metaprogramming to "temporarily" modify the program to provide access to the box that needs testing (e.g., change a private method to public so it can be called from outside). Such changes exist only for the duration of the test project; you throw the modified program away because nobody wants it for anything but the test results. Yes, you have to ensure that the code transformations applied to make things visible don't change the program's functionality.
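As a lightweight analogue in Clojure (no full PTS needed for this particular trick), a test can reach a private function through its var, leaving the production source unmodified. The namespace and helper below are hypothetical:

(ns myapp.core
  (:require [clojure.string :as str]))

;; Private helper: not part of the public interface.
(defn- normalize [s]
  (str/lower-case (str/trim s)))

;; --- in the test namespace (a separate file) ---
(ns myapp.core-test
  (:require [clojure.test :refer [deftest is]]
            [myapp.core]))

;; #' reaches through the privacy barrier via the var, only in the test.
(deftest normalize-trims-and-downcases
  (is (= "hello" (#'myapp.core/normalize "  Hello "))))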
The second problem is exercising the targeted black box in a realistic environment. Each code module runs in a world in which it assumes data and the environment are "properly" configured. The test program can set up that world explicitly by making calls on lots of the program elements or using its own custom code; this is usually the bulk of a test routine, and this code is hard to write and fragile (the application under test keeps changing; so do its assumptions about the world). One might use metaprogramming to instrument the application to collect the environment under which a test might need to run, thus avoiding the problem of writing all the setup code.
Finally, one might want to record more than just "test failed/passed". Often it is useful to know exactly what code got tested ("test coverage"). One can instrument the application to collect what-got-executed data; here's how to do it for code blocks using a PTS: http://www.semdesigns.com/Company/Publications/TestCoverage.pdf. More sophisticated instrumentation might be used to capture information about which paths through the code have been executed. Uncovered code and/or uncovered paths show where tests have not been applied; for that code you arguably know nothing about what the program does, let alone whether it is buggy.
Generation of tests
Someone/something has to produce tests; we've already discussed how to produce the set-up-the-environment part. What about the functional part?
Under the assumption that the program has been debugged (e.g., already tested by hand and fixed), one could use metaprogramming to instrument the code to capture the results of executing a black box (e.g., instance execution post-conditions). By exercising the program, one can then record results that are (by definition) correct, and these can be transformed into a test. In this way one might construct a huge variety of regression tests for an existing program; these are valuable in verifying that further enhancements to the program don't break its existing functionality.
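A small sketch of the capture side in Clojure (the recording wrapper and the generated forms are illustrative): wrap the trusted function, exercise the program, then turn each observed call into an assertion.

;; Wrap a trusted, already-debugged function so every call is recorded.
(def recorded (atom []))

(defn recording [f]
  (fn [& args]
    (let [ret (apply f args)]
      (swap! recorded conj {:args (vec args) :ret ret})
      ret)))

;; Exercise the program as usual through the instrumented function.
(def add (recording +))
(add 1 2)
(add 10 -3)

;; Turn each observation into a regression-test assertion (as code forms).
(defn observations->tests [fsym obs]
  (for [{:keys [args ret]} obs]
    (list 'clojure.test/is (list '= ret (cons fsym args)))))

(observations->tests '+ @recorded)
;;=> ((clojure.test/is (= 3 (+ 1 2)))
;;    (clojure.test/is (= 7 (+ 10 -3))))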
Often a function has qualitatively different behaviors on different ranges of input (e.g., for x<10, produce x+1, else produce x*x). Ideally one would like to provide a test for each qualitatively different result (e.g., x<10, x>=10), which means one would like to partition the input ranges. Metaprogramming can help here, too, by enumerating all (partial) paths through the module and providing the predicate that controls each path.
The separate predicates each represent an input-space partition of interest.
Avoidance of Tests
One only tests code one does not trust (surely you aren't testing the JDK?). Any code constructed by a reliable method doesn't need tests (the JDK was constructed this way, or at least Oracle is happy to have you believe it).
Metaprogramming can be used to automatically generate code from specifications or DSLs in reliable ways. Such generated code is correct-by-construction (we can argue about the degree of rigour) and doesn't need tests. You might need to test that the DSL expression achieves the functionality you desired, but you don't have to worry about whether the generated code is right.
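A toy example of that in Clojure (the validation DSL is invented for illustration): the macro expands a declarative spec into ordinary checking code, so only the spec itself needs review.

;; Expand a declarative field spec into an ordinary predicate function.
(defmacro defvalidator [name & field-specs]
  (let [m (gensym "m")]
    `(defn ~name [~m]
       (and ~@(for [[field pred] (partition 2 field-specs)]
                `(~pred (get ~m ~field)))))))

;; The DSL expression is the only thing we need to review by eye:
(defvalidator valid-user?
  :name string?
  :age  pos-int?)

(valid-user? {:name "Ada" :age 36}) ;=> true
(valid-user? {:name "Ada" :age -1}) ;=> false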

Testing in Lisp

I am new to Lisp, and I am learning Scheme through the SICP videos. One thing that seems not to be covered (at least at the point where I am) is how to do testing in Lisp.
In usual object-oriented programs there is a kind of horizontal separation of concerns: methods are tied to the object they act upon, and to decompose a problem you need to fragment it into the construction of several objects that can be used side by side.
In Lisp (at least in Scheme), a different kind of abstraction seems prevalent: in order to attack a problem you design a hierarchy of domain-specific languages, each of which is built upon the previous one and acts at a coarser level of detail and a higher level of abstraction.
(Of course this is a very rough description, and objects can be used vertically, or even as building blocks of DSLs.)
I was wondering whether this has some effect on testing best practices. So the question is two-fold:
What are the best practices while testing in Lisp? Are unit tests as fundamental as in other languages?
What are the main test frameworks (if any) for Lisp? Are there mocking frameworks as well? Of course this will depend on the dialect, but I'd be interested in answers for Scheme, CL, Clojure or other Lisps.
Here's a Clojure specific answer, but I expect most of it would be equally applicable to other Lisps as well.
Clojure has its own testing framework called clojure.test. This lets you simply define assertions with the "is" macro:
(deftest addition
  (is (= 4 (+ 2 2)))
  (is (= 7 (+ 3 4))))
In general I find that unit testing in Clojure/Lisp follows very similar best practices to testing in other languages. It's the same principle: you want to write focused tests that confirm your assumptions about a specific piece of code behaviour.
The main differences / features I've noticed in Clojure testing are:
Since Clojure encourages functional programming, it tends to be the case that tests are simpler to write because you don't have to worry as much about mutable state - you only need to confirm that the output is correct for a given input, and not worry about lots of setup code etc.
Macros can be handy for testing - e.g. if you want to generate a large number of tests that follow a similar pattern programmatically (see the sketch after this list)
It's often handy to test at the REPL to get a quick check of expected behaviour. You can then copy the test code into a proper unit test if you like.
Since Clojure is a dynamic language you may need to write some extra tests that check the type of returned objects. This would be unnecessary in a statically typed language where the compiler could provide such checks.
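For instance (a hypothetical round-trip macro, not part of clojure.test), one macro call below expands into a deftest with one assertion per data case:

(require '[clojure.test :refer [deftest is]])

;; Generate one assertion per case from plain data.
(defmacro def-roundtrip-tests [name f g & cases]
  `(deftest ~name
     ~@(for [c cases]
         `(is (= ~c (~g (~f ~c)))))))

;; Expands into assertions of the form (= case (read-string (pr-str case))).
(def-roundtrip-tests str-roundtrip
  pr-str read-string
  42 :a [1 2 3] {:x "y"})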
RackUnit is the unit-testing framework that's part of Racket, a language and implementation that grew out of Scheme. Its documentation contains a chapter about its philosophy: http://docs.racket-lang.org/rackunit/index.html.
Some testing frameworks that I am aware of for Common Lisp are Stefil (in two flavours, hu.dwim.stefil and the older stefil), FiveAM, and lisp-unit. Searching the Quicklisp library list also turned up "unit-test", "xlunit", and monkeylib-test-framework.
I think that Stefil and FiveAM are most commonly used.
You can get them all from Quicklisp.
Update: Just seen on Vladimir Sedach's blog: Eos, which is claimed to be a drop-in replacement for FiveAM without external dependencies.