A bit of a noob question. I'm looking to test a Solidity smart contract function without waiting 6 days. There's a feature implemented that will allow me to interact with the contract and be paid X amount only after 6 days. How can I test that? TIA
Well, it really depends on your tool set. If you are using Hardhat you can write a test; that way you can easily check whether the function works as intended (see the Hardhat testing docs). To simulate the passage of time you only need to know how many blocks are mined in that period and mine them in the test, or advance the block timestamp directly (Hardhat's local network supports this, e.g. via the evm_increaseTime RPC method).
Code can be perfect, and also perfectly useless at the same time. Getting requirements right is as important as making sure that requirements are implemented correctly.
How do you verify that users' requirements are addressed in the code you're working on?
You show it to the users as early and as often as possible.
Chances are that what they've asked for isn't actually what they want - and the best way of discovering that is to show them what you've got, even before it's finished.
EDIT: And yes, this is also an approach to answering questions on StackOverflow :)
You write tests that assert that the behavior the user requires exists. And, as was mentioned in another answer, you get feedback from the users early and often.
Even if you talk with the user and get everything right, the user might have gotten it wrong. They won't know until they use the software that they didn't want what they asked for. The surest way is to do some sort of prototype that allows the user to "try it out" before you write the code. You could try something like paper prototyping.
If possible, get your users to write your acceptance tests. This will help them think through what it means for the application to work correctly. Break the development down into small increments that build on each other. Expose these to the customer early (and often), getting them to use it, as others have said, but also have them run their acceptance tests. These should also be developed in tandem with the code under test. Passing the test won't mean that you have completely fulfilled the requirements (the tests themselves may be lacking), but it will give you and the customer some confidence that you are on the right track.
This is just one example of where heavy customer interaction pays off when developing code. The way to get the most assurance that you are developing the right code is having the customer participating in the development effort.
How do you verify that users' requirements are addressed in the code you're working on?
For a question put in this form the answer is "You can't".
The best way is to work with users from the very first days, show them prototypes and incorporate their feedback continuously.
Even so, at the end of the road, there will likely be nothing resembling what was originally discussed and agreed on.
Ask them what they want you to build before you build it.
Write that down and show them the list of requirements you have written down.
Get them to sign off on the functional design.
Build a mock up and confirm that it does what they want it to.
Show them the features as they are being implemented to confirm that they are correct.
Show them the application when it's finished and allow them to go through acceptance testing.
They still won't be happy, but you will have done everything you can.
Any features that are not in the document they signed off can be considered change requests, for which you can charge them extra. Get them to sign off on everything you show them, to limit your liability.
By using a development method that frequently checks the alignment between the implementation and the requirements.
For me, the best way is to involve an "expert customer" to validate and test the implementation iteratively, as often as possible...
If you don't, you risk ending up with, as you said, a very beautiful piece of software that is perfectly useless...
You can try personas: a cohort of example users that use the system.
Quantify their needs and wants, and make up scenarios of what is important to them and what they need to get done with the software.
Most importantly, make sure that the users' (the personas') goals are met.
Here's a post I wrote that explains it in more detail.
You write unit tests that expect an answer that supports the requirements. If the requirement is to sum a set of numbers, you write
testSumInvoice()
{
    // create an invoice of 3 lines of $1, $2, $3 respectively
    Invoice myInvoice = new Invoice().addLine(1).addLine(2).addLine(3);
    assertEquals(6, myInvoice.getSum());
}
If the unit test fails, either your code is wrong or the code was possibly changed due to some other requirement. Now you know that there is a conflict between the two cases that needs to be resolved. It could be as simple as updating the test code, or as complex as going back to the customer with a newly discovered edge case that isn't covered by the requirements.
The beauty of writing unit tests is it forces you to understand what the program should do such that if you have trouble writing the unit test, you should revisit your requirements.
I don't really agree that code can be perfect... but that's outside of the real question. You need to find out from the users, before any design or coding is done, what they want - ask them 'what does success look like', 'what do you expect when the system is complete', 'how do you expect to use it'... and videotape the response, mind-map it, or wireframe it, and then review it with them to ensure you captured the most important aspects. You can then use those items to verify the iterative deliveries... expect the users to change their minds/needs over time, especially once they have 'it in their hand' (IKIWISI - I Know It When I See It)... and record any change requests in the same fashion.
AlbertoPL is right: "Most of the time even the users don't know what they want!"
And if they know, they have a solution in mind and specify aspects of that solution instead of just telling the problem.
And if they tell you a problem, they may have other problems without being aware that these are related by having a common cause or a common solution.
Thus, before you implement mockups and prototypes, go and watch the use of what the customer already has or what the staff is still doing by hand.
Suppose I have a bunch of User Stories (as a result of the planning session I went through with my team). I don't have any code in the application yet and am going to start with my 'A', or highest-priority, Stories/Epics.
Say for example
"As A User I should be able to Search for more users so that I can add more friends on the website"
So how should the team go about coding the application while doing TDD?
Does the team start by creating unit tests, i.e. tests that take care of creating the models?
Then everybody takes a story and starts writing functional tests to create the controllers/views (so should they be doing integration testing while writing the functional tests)?
Then do the integration tests?
I am actually confused about how the integration tests fit in: if all the integration tests work, all the functional and unit tests should pass anyway.
So, if the application is just starting (i.e. no code has been written yet), what process do people usually follow for TDD/BDD when they pick up a story and start implementing an application from scratch?
Very good question! The TDD/BDD way would suggest you take the user stories and write validation points (read: high-level tests). They use GWT (Given/When/Then) speak, as follows.
"As A User I should be able to Search for more users so that I can add more friends on the website"
given the website URL
when the site loads
then a search field should be visible/accessible.
This is your first piece of feedback and first opportunity to iterate with the product owner. Ask questions like: where should the search bar go? Should it auto-complete? Etc. Next you assign behavior to the UI objects. These also have validation points.
This would define the behavior of the search button:
given a go button next to the search field
when the button is clicked
then a search should be performed
This would describe the logic of your search:
given a search term "John" and a user set including "John, Joan, Jim, Steve"
when a search is performed
then the results should contain "John" and "Joan"
The first validation point would describe the behavior of linking the controller search button to an arbitrary model implementing the search algorithm. The second validation point describes the search algorithm itself. The advantage is that these pieces are defined independently and can be designed in parallel. It also gives you a nice API and small, easy-to-plan features to iterate on. It also gives you the ability to iterate/refine on any piece of the puzzle without affecting the rest of the pie.
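For illustration only, here is how that second validation point might look as a plain JUnit test. The UserSearch class and its find method are hypothetical names invented for the sketch; the matching rule (a search for "John" also finds "Joan") is just the one implied by the validation point above.

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class UserSearchTest {

    @Test
    public void searchFindsSimilarNames() {
        // given a search term "John" and a user set including "John, Joan, Jim, Steve"
        UserSearch search = new UserSearch(Arrays.asList("John", "Joan", "Jim", "Steve"));

        // when a search is performed
        List<String> results = search.find("John");

        // then the results should contain "John" and "Joan"
        assertEquals(Arrays.asList("John", "Joan"), results);
    }
}

A failing version of a test like this is also a concrete artifact to bring back to the product owner when the matching rule turns out to be ambiguous.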
Update: I also want to mention that what I refer to as validation points can loosely be associated with UATs, or User Acceptance Tests. Don't get hung up on the terms because they're irrelevant. Focus on the idea behind them. You need to take the user story and break it down into specs (this can be done in one or many passes, using UATs, validation points, both, or magic beans - just make sure you break them down). If what you have broken your user stories into can be written in a tool like FitNesse, JUnit, or RSpec, then use one of those tools; otherwise you need either further conversation (are your user stories too vague?) or another pass to break down what you have further (UATs into validation points). Don't obsess over the tools, and don't feel like you need to automate everything from the beginning. Leave Selenium alone until you get the manual process down. Eventually you want specs that can be written in programmatic, test-like form; at that point you should be able to use something as simple as JUnit to start coding. When you get better/fancier you can pick up EasyB or RSpec story runner and other things.
This is where we usually start off with a Sprint 0, and in this sprint is where we'll have what XP calls a Spike Session (or a throw-away-code session). This session is where you can begin prototyping.
In your session, write a few user acceptance tests (preferably in the BDD format) and then start writing a test first to match one of your UATs.
For example:
Given a search is requested
where user's name is "testUser"
1 result should be returned.
With this you now have a goal for your first test, which you write, then begin writing code to make that test pass. As you go forward you should begin to see how the app should be put together to complete the story.
Then, in the next sprint, I would begin building the stories/tasks needed to complete the feature, based upon what you discovered in Sprint 0.
"I am actually confused how the integration tests fit in.if all the
integration tests work ( ie all the functional, units tests should anyway pass )"
It depends. Sure, it's possible to write integration tests in such a way that they cover everything the unit and functional tests would. It's just much more difficult.
Imagine that you have 3 models, 3 controllers and 3 views. Imagine that all are super simple with no conditions or loops and have one method each.
You can now (unit) test each one of these for a total of 9 assertions and have full coverage. You can throw in an integration test to make sure all these things work well together.
If instead you skip the unit/functional tests and still need full coverage, you're going to need 27 assertions (3 x 3 x 3).
In practice things are more complicated of course. You'll need a much larger amount of integration tests to get to the same level of coverage.
Also, if you practice TDD/BDD, more often than not you will wind up with lots of unit tests anyway. The integration test is there to make sure all these pieces fit well together and do what the customer wants. The pieces themselves have been tested individually by the unit tests.
First, break the story apart. You'll need:
A User object: What does it do? Create some tests to figure this out and write the code
A way to search users; maybe a SearchUserService? Again create tests and write the code
A way to connect users ...
Now, you have the model. Next, you do the same for the controllers. When they work, you start with the views.
Or, when you're a pro and have done this a thousand times already, you might be able to roll several steps at once.
But you must first chop the problem into digestible pieces.
Next come the integration tests. They will simulate what a user does. In the general case, you write them as a set of automated tests (they are called integration tests rather than unit tests, but you should still be able to run them automatically). These tests need to talk to the web app just like the user does, so you need a simulated web browser, etc.
You can try httpunit or env.js for this.
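For a Java project, HtmlUnit plays a similar role to the httpunit mentioned above; a rough sketch follows (the URL, the query, and the use of a 2.x-style API are assumptions made up for the example):

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class SearchPageIT {
    public static void main(String[] args) throws Exception {
        // Drive the running web app the way a user would, but from code.
        try (WebClient browser = new WebClient()) {
            HtmlPage page = browser.getPage("http://localhost:8080/search?q=John");
            // Check what the user would actually see in the browser.
            System.out.println(page.getTitleText());
        }
    }
}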
If you're doing TDD, you start with a test that shows that the system does not perform the required behaviour described by the user story. When that is failing in the way you expect, with useful diagnostics, you then start implementing the behaviour by adding or modifying classes, working unit-test first.
So, in TDD you write integration tests before you write unit tests.
To bootstrap the whole process, one usually writes a "walking skeleton": a system that performs the thinnest slice of realistic functionality possible. The walking skeleton lets one build up the integration test infrastructure against simple functionality.
The rest of the project then fleshes out that skeleton.
I have noted over the years, that I tend to write maybe a screen full of code, then test to make sure it does what it should.
Some of the benefits of this technique are:
Syntax errors are a result of the new code, so you don't have to look far to find the cause.
It is cheap to set up a temporary condition that lets you test the else clause of an if statement, so you can make sure the error messages and the like are correct while they are cheap to test.
How do you tend to code?
What benefits do you get by doing it that way?
EDIT: Like most of my questions, I really haven't set the context well enough. I am not really talking about unit test level granularity. I am referring to making sure the local bit of code does exactly what I intend it to, at the time of implementation.
I'd like to say I always write a unit test before I write the corresponding code to pass it, but I'd be lying.
I tend to code until I have something that should produce a well-defined observable behavior. Usually, this is a single public API function, sometimes a full class. This also encourages me to break down the problem into small functions with well-defined observable behavior. Most of my functions are smaller than a full screen. If a function is too complex to test, then it's probably badly designed from other perspectives anyhow.
Personally I find I tend to write the obvious Interfaces and drag in the utility resources (be they C# libraries, CSS, whatever) before I actually write tests.
I think there's a balance between zealotry and experience to be struck.
This may sound silly, but I usually test the code I write after each "processing task". Meaning, if I open a file, I test the routine. If I connect to a Database and pull out a single record, I test that routine. Or sometimes I write a test that just exercises all the methods of a class just to see if they work.
I don't think I use a hard and fast rule, but mostly when I write code to perform a task, I test to verify that it does what it's supposed to do.
Exactly as much as I have to. Sometimes that means a few hundred lines, especially if I'm adding a large system to an existing framework, when the application wouldn't even run without some part of it.
I suppose I follow the principle of testing whenever I can. Obviously that doesn't mean halfway through writing a loop, but when I'm done with the loop I'll try it out before moving on. The less you changed since the last test, the easier it'll be to figure out what was changed that caused your error condition. :)
I usually do what you describe, but I don't get a full page written before I test. I've found that if I write some code then write a test, I usually have to refactor the code to make it more testable. This seems a little bit wasteful, so I'm down to just a few lines of code before I write a unit test. I find that I'm moving closer and closer to strictly adhering to TDD.
I don't use TDD, but build what are effectively test stubs first, that become the actual application.
For instance, in a WinForms app, I build the buttons first, and test them. Then when I build the class, I test that the class's methods are being called by the UI.
Then, if for instance I'm going to put the actual work into a background worker, I build that with nothing inside it, and test that the Start/Progress/Complete handlers all fire, and are handled by the class that creates the BGW.
Then I start putting the functionality into the methods, and thus already have a tested test harness. It's very rare that I have to build a separate harness for this, since every increment is small, and tested before the next level of complexity is added.
The benefit is that I don't have to hold too much complexity in mind at a time, and very little is added without the foundations it relies on already being well tested.
I've never found unit testing to be any kind of issue - what I really want is automated testing at a higher level than that.
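The same idea can be sketched in Java/Swing rather than WinForms; the RecordingPresenter below is a made-up stand-in for "the class whose methods the UI should call", not something taken from the answer.

import static org.junit.Assert.assertTrue;

import javax.swing.JButton;

import org.junit.Test;

public class StartButtonTest {

    // Stand-in for the class the UI is supposed to drive.
    static class RecordingPresenter {
        boolean started;
        void start() { started = true; }
    }

    @Test
    public void clickingStartCallsThePresenter() {
        RecordingPresenter presenter = new RecordingPresenter();
        JButton start = new JButton("Start");
        start.addActionListener(e -> presenter.start());

        start.doClick(); // fires the action listener without showing a window

        assertTrue(presenter.started);
    }
}

Once the wiring is tested, the empty presenter methods can be filled in one small increment at a time, as described above.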
As you did not mention in which language environment you code...
As I work in Smalltalk, syntax is checked in the editor while I type, and whenever I accept a method, so that's not an issue. (For those who don't know Smalltalk: it is not file-based, but object oriented; that means that you add method-objects one-at-a-time to a class object, and the system compiles each as it is "accepted" in the editor).
For small methods which are algorithmic or which do not need a big framework/setup, I add a little comment which tests that method and which can be executed by a click. There is also a test-runner to extract all these and run them as a unit test.
For bigger stuff, a TestCase class is updated for every few methods and the test-runner button clicked from time to time, stopping me on a red light.
So I would say, a test is done for every 10 lines or so.
I admit, doing so requires a highly reactive and incremental IDE - otherwise it cannot be done so easily, and I would revert to, say, roughly a letter-size page of code before testing. I do not consider compilability to be "a test", so syntactic correctness does not count.
EDIT: For your amusement, here is a concrete example from the Collection class:
For those who don't know smalltalk:
quoted strings are comments;
+/- is an operator to create a measurement value;
/ creates fractions;
{...} is array creation;
the testcases at the end are directly executable (so called doIt) from within the editor.
sum
"sum up all elements.
This is implemented using a variant of the normal inject:into: pattern.
The reason for this is that it is not known whether we are dealing with number
(i.e. if 0 is a good initial value for the sum).
Consider a collection of measurement or physical objects, 0 would be the unitless
value and would not be appropriate to add with the unit-ed objects."
| sum sample |
sample := self anElement.
sum := self inject: sample into: [:accum :each | accum + each].
^ sum - sample.
"
TestCase should: [ { } sum ] raise:Error.
TestCase should: [ '' sum ] raise:Error.
TestCase assert: ( { 1 } sum = 1 ).
TestCase assert: ( { 1. 2. 3. 4. } sum = 10 ).
TestCase assert: ( (1 to:10) sum = 55 ).
TestCase assert: ( 'abc' asByteArray sum = 294 ).
TestCase assert: ( { 10 +/- 2.
20 +/- 4.
100 +/- 10 } sum = (130 +/- 16) ).
TestCase assert: ( { (1 / 9).
(1 / 7).
} sum = (16 / 63) ).
"
Depends on the size/scale of the project. If its a short program (trivial to compile and run), I will test it early and often any time I add in any new functionality. This lets me catch most errors quickly.
In a large project (company-size), I'll test my piece in isolation like this, IF I can. Otherwise, pay attention to tests on those daily builds.
In short, test as often as possible, so long as the compile/run time doesn't take so long you consider taking up office swordfighting!
I tend to test each feature of a program. Not each function, but a series of functions that form a feature.
The benefit this way is that I don't have a lot of overhead from testing each function on its own; I test them one after the other as part of the feature.
The project I am on now is supposed to be Unit Test first then development, and for the most part it is, but sometimes the person writing the test and the person implementing are not always on the same page.
So I like having a unit test for checking the main functionality of the method needed, then having the person implementing the code to write several unit tests checking the various edges of code.
The older I get, the less code I write before running/testing.
In part, that's a consequence of technical advances: I started out writing code on COBOL coding sheets to be transformed into punched cards twice a week when the punch girl came in. I generally wouldn't even attempt a compile of a new program until it was largely complete and desk-checked, which was usually a couple of thousand lines and a few weeks.
These days, when I'm on my game, I don't write any code before testing, I write a test before coding. I'm weak and not always sure how to write the test, though, so sometimes I tell myself I'm being pragmatic by just doing it. It's surprising how often that turns out to have been a bad idea, though: code that I wrote as a consequence of TDD tends to be easier to test, easier to modify and mostly just better than code that got tests later.
But that's just me, YMMV.
Usually, as soon as I complete a function, I compile it, switch to the REPL, and test it with some ad hoc made up data (also edge cases). Sometimes (more often than I'd like) a few debug cycles (edit-compile-test) are necessary to get the desired behaviour. Of course, this kind of development style is only viable if you can individually compile functions into a running runtime that provides a REPL, otherwise you would spend too much time waiting for a complete compile. I use SBCL with SLIME.
I try to make the first time my code runs be via a unit test.
Sometimes I write the test first, sometimes I write the method/class first.
I like to feel good about myself;
therefore I like to give myself positive feedback often;
therefore I try to "prove" a new method works soon after I write it.
For the popular languages and libraries we use every day:
What are examples of some bad design, embarrassing APIs, or generally bad usability? Design errors that we have to pay for because they introduce subtle bugs, we have to use awkward workarounds or memorize unintuitive ways to get things done.
I'm especially thinking of issues like: There's a class in an OO language that really shouldn't inherit from that other class. There's a special operator that makes a certain language hard to parse, and it turned out to be unused anyway. A function that's misnamed or is often used for things other than what it was designed for (I'm thinking of std::getline to tokenize strings).
I'm not looking for contributions that bash languages and claim that, say, Perl or some other language is just badly designed. I'm more looking for concrete examples or anecdotes about things that clearly should have been done differently. (Maybe the designers caught it too late and tried to fix it in subsequent versions, but had to retain backward compatibility.)
Java's URL class does (or did, a few years ago) a DNS lookup when determining the hashCode of a URL object. So, not only is a hash table with URLs as keys extremely slow, but it can change at runtime if two successive DNS requests return different values or you unplug the network cable!
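A sketch of the consequence (the example map and URL are made up; a common workaround is to key such maps by java.net.URI, which is compared purely textually):

import java.net.URI;
import java.net.URL;
import java.util.HashMap;
import java.util.Map;

public class UrlKeyDemo {
    public static void main(String[] args) throws Exception {
        // Every put/get may block on a DNS lookup, because URL.hashCode()
        // and URL.equals() try to resolve the host name.
        Map<URL, String> slow = new HashMap<>();
        slow.put(new URL("https://example.com/a"), "page a");

        // Keying by URI avoids the lookup: URIs are compared as plain text.
        Map<URI, String> fast = new HashMap<>();
        fast.put(new URI("https://example.com/a"), "page a");
    }
}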
Java class called "NullPointerException"
Having migrated from C++ to Java I always found it amusing to find a NullPointerException in a language with "no pointers"
The majority of C++ is design flaws yet most people learn to live with it. [Ducks for mass of haters downvotes]
PHP has a good bunch of them, the ($needle,$haystack) ($haystack,$needle) inconsistencies.
The string functions with mb_ prefixed to them to enable multibyte support.
My favourites: mysql_escape_string and mysql_real_escape_string...
Lets not get started on the OO part... :)
My personal favorite is atoi of the C standard lib.
int atoi ( const char * str );
On success, the function returns the converted integral number as an int value.
If no valid conversion could be performed, a zero value is returned.
Well, unfortunately it's painful to convert "0" with this function, as you can never be sure whether the result was an error or a real 0 (which is why strtol, which does report errors, is usually recommended instead).
Java "Calendar" API. It is error-prone and hard to use.
To enumerate just a few of the problems:
misleading name: a Calendar object is supposed to model a "calendar system", but in practice it encapsulates a point in time, i.e. it is a kind of Date (which is itself a problem, since Date would better be named DateTime or TimeStamp)
there is only one concrete subclass of Calendar (GregorianCalendar), but the abstract Calendar class contains constants only useful in this specific case (JANUARY, MONDAY, AM_PM, ERA)
you modify fields by using constants ("MONDAY", "WEEK_OF_MONTH" etc); these are integers and can be mixed up rather easily.
in fact, just about every argument and return value is an integer; the usual problems with 0- and 1-based numbers apply (is January 0 or 1?)
constants like "HOUR" and "HOUR_OF_DAY" that only make sense when the AM/PM system is used (which nobody outside the US understands anyway ;-)
My favorite comes from the world of Smalltalk. Squeak to be specific. In Squeak the Semaphore class inherits from LinkedList. Semaphores may utilize linked lists, but they are not linked lists themselves. This makes for a very odd interface. This is terrible OO design.
API's where functions return null when they should return empty items...and then don't document when, how, or why they'll return null.
Checked vs. unchecked exceptions in Java. Most Java developers either don't know when to use which, or, if they do, they disagree.
Then we have things like IOException, which you can never handle where they occur but have to throw up the call stack (in all senses of the word). When you have finally reached a place where you can handle them, you have no idea what might have caused them, so you can only present the user with the message and stack trace and hope she can figure it out (here "she" is a normal user who knows that Java is a kind of coffee).
http://www.infoq.com/presentations/effective-api-design
Skip to about 40 minutes and he talks about some of the XML APIs in Java.
In general, it's a pretty interesting presentation.
I found one in Ruby's Dir.glob().
I haven't been able to prove it's not related to my particular environment yet, since my environment is old and cobbled together by hand and I need to continue to support it unfortunately. There seem to be at least 5 cases:
Directory has files and some match -> List of those files
Directory has files and none match -> Empty list
Directory is empty -> Nil
Directory exists but the current user can't read it (don't know what happens here)
Directory is missing (haven't tested this either)
It had me pulling my hair out because the documentation doesn't describe it that way.
In Java I would go with InterruptedException being a checked exception. If you need to know that your sleep method woke up early, that should have been a boolean returned from the method. Nothing gives checked exceptions a bad name like InterruptedException.
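To illustrate, this is the boilerplate the checked exception forces on every caller of Thread.sleep - a sketch of the usual "restore the interrupt flag" idiom, with the boolean return the answer wishes the API had in the first place:

public class SleepHelper {
    // Every caller of Thread.sleep ends up writing something like this,
    // even when all it wants to know is "did I wake up early?".
    static boolean pause(long millis) {
        try {
            Thread.sleep(millis);
            return true;                        // slept the full time
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the flag for callers above
            return false;                       // woke up early
        }
    }

    public static void main(String[] args) {
        System.out.println(pause(50) ? "slept" : "interrupted");
    }
}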