For some reason, I want to use Thread.sleep in my Serenity Screenplay project; how can I use it? - serenity-bdd

I am already using implicit and fluent waits, but I want to use Thread.sleep, so I want to know the syntax for it.

Using Thread.sleep() is discouraged, and there is no Performable for it in Serenity.
Many testers pepper their web tests with Thread.sleep() statements, but this is sub-optimal at best. It slows down the tests, makes them more brittle, and can hide genuine application issues. More insidiously, when our steps are artificially slowed by sleep statements, it takes longer not only to run and troubleshoot existing tests, but also to develop new ones.
https://johnfergusonsmart.com/handling-waits-and-asynchronous-pages-in-serenity-bdd/
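As a rough illustration of the condition-based waits that article describes, here is a sketch in Kotlin (Serenity's screenplay API works from any JVM language); SEARCH_RESULTS is a hypothetical Target, and the exact class and matcher names should be checked against your Serenity version:
import net.serenitybdd.screenplay.Actor
import net.serenitybdd.screenplay.matchers.WebElementStateMatchers.isVisible
import net.serenitybdd.screenplay.targets.Target
import net.serenitybdd.screenplay.waits.WaitUntil

// Hypothetical target, for illustration only.
val SEARCH_RESULTS: Target = Target.the("search results").locatedBy("#results")

fun waitForResults(actor: Actor) {
    // Wait for an actual condition, with an explicit upper bound,
    // instead of sleeping for a fixed amount of time.
    actor.attemptsTo(
        WaitUntil.the(SEARCH_RESULTS, isVisible()).forNoMoreThan(5).seconds()
    )
}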

I know my reply is very late and it's a very bad practice, but I am posting here just to show how it can be done. Also, from your question it seems like you want to make a task of it. To do this you can make an anonymous task. For example:
Integer wait = 500;
Task.where("{0} uses thread sleep", (actor) -> {
    try {
        Thread.sleep(wait);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
});
You can wrap it inside a function or assign it to a Task variable. In case you are wondering: the Task class has an overloaded method named where which takes a Consumer<Actor> as its second argument instead of a Performable.
Alternatively, you can make a normal task class by implementing Task and call Thread.sleep in its performAs method.
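If you prefer that class-based approach, a minimal sketch could look like this (written in Kotlin, which also runs on the JVM; the WaitFor name and the usage line are illustrative, not part of Serenity):
import net.serenitybdd.screenplay.Actor
import net.serenitybdd.screenplay.Task

// Illustrative task that just sleeps; use sparingly, as discussed above.
class WaitFor(private val millis: Long) : Task {
    override fun <T : Actor> performAs(actor: T) {
        try {
            Thread.sleep(millis)
        } catch (e: InterruptedException) {
            Thread.currentThread().interrupt() // restore the interrupt flag
        }
    }
}

// Usage: actor.attemptsTo(WaitFor(500))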
But again, considering you are using Serenity, I doubt it will come to the point where you would have to use Thread.sleep.

Related

How are Kotlin coroutines different from async-await in Dart?

Dart makes asynchronous programming extremely easy. All you need to do is surround the asynchronous code in an async method, and within it, use await before every call that is going to take a while.
I'm new to Kotlin, and asynchronous programming doesn't seem that simple here. (Probably because Dart is single-threaded.)
It'd be nice to get a rough outline of the differences both languages provide in their implementation of asynchronous code.
Apologies if I misstated any facts. Thanks in advance!
Dart makes asynchronous programming extremely easy. All you need to do is surround the asynchronous code in an async method, and within it, use await before every call that is going to take a while.
Yes (though async+await is not Dart's invention, it dates back to at least C# 5.0 in 2012, which then directly inspired JavaScript, Python, Julia, Kotlin, Swift, Rust, many others, and Dart).
I'm new to Kotlin, and asynchronous programming doesn't seem that simple here.
Kotlin 1.1 has async+await, although await is a postfix method rather than an operator, unlike in most other languages; the end result is the same.
It'd be nice to get a rough outline of the differences both languages provide in their implementation of asynchronous code.
Kotlin and Dart are different languages because they solve different problems, consequently there's simply too much to write about their differences, even when focused entirely on how they handle concurrency and coroutines.
...but in short, the main difference (as far as you're concerned) is syntactic. (That is as far as I can tell: be aware that I am not a Dart/Flutter or Kotlin expert; I just know how to read documentation and use Google.)
I suggest seeing some simple examples in Kotlin, such as:
First off, read the announcement where await was introduced in Kotlin 1.1: https://kotlinlang.org/docs/whatsnew11.html#coroutines-experimental
Then see how it interoperates with Swift's async + await functions here: https://kotlinlang.org/docs/whatsnew1530.html#experimental-interoperability-with-swift-5-5-async-await (Swift's async features work the same way as Dart's, as far as I know, except without enforced thread isolation)
Kotlin Coroutines Async Await Sequence
This article (which I only skimmed) seems good too: https://www.raywenderlich.com/books/kotlin-coroutines-by-tutorials/v2.0/chapters/5-async-await
I'm new to Kotlin, and asynchronous programming doesn't seem that simple here.
In fact, Kotlin takes it to the next level of simplicity: it's almost invisible. For example:
import kotlinx.coroutines.delay

suspend fun main() {
    println("Hello")
    delay(1000)
    println("Hello again")
}
This code, unbeknownst to you, is actually implemented asynchronously. But you just see simple, sequential code. The compiled code (in the case of the JVM backend) has a structure something like this:
public static void main(String[] args) {
    System.out.println("Hello");
    globalThreadPool.scheduleAfterDelay(() -> {
        System.out.println("Hello again");
    }, 1000, TimeUnit.MILLISECONDS);
}
On top of that, Kotlin makes it super-simple to adapt any async code you may have today so that you can use it in the same native way as the built-in delay function above.
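For example, here is a minimal sketch of such an adaptation using suspendCancellableCoroutine; fetchUser is a hypothetical callback-based API standing in for whatever async code you already have:
import kotlin.coroutines.resume
import kotlinx.coroutines.suspendCancellableCoroutine

// A hypothetical callback-based API we already have:
fun fetchUser(id: Int, callback: (String) -> Unit) { /* ... */ }

// The suspending adapter: callers inside a coroutine can now simply write
// `val user = fetchUserSuspending(42)` with no visible callback.
suspend fun fetchUserSuspending(id: Int): String =
    suspendCancellableCoroutine { continuation ->
        fetchUser(id) { user -> continuation.resume(user) }
    }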
Where people trip up mostly is not this basic scenario, but dealing with more advanced topics like structured concurrency, choosing the right thread pool to run your code, error propagation, and so on.
I haven't studied Dart, but from what I know about the async-await pattern in other languages, whenever you call an async function you have implicitly created a concurrent task, which is very easy to leak: all it takes is forgetting to await it. Kotlin prevents these bad outcomes by design and forces you to address the concurrency you're creating head-on, instead of deciphering out-of-memory logs from production.
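A small sketch of what that looks like in practice: builders like async can only be called inside a scope, and the scope does not complete until every child is accounted for (loadA and loadB are hypothetical suspending functions):
import kotlinx.coroutines.async
import kotlinx.coroutines.coroutineScope

suspend fun loadA(): String = TODO("hypothetical")
suspend fun loadB(): String = TODO("hypothetical")

// Structured concurrency: the two children below cannot leak past this
// function; if either fails, the other is cancelled and the error propagates.
suspend fun loadBoth(): Pair<String, String> = coroutineScope {
    val a = async { loadA() }
    val b = async { loadB() }
    a.await() to b.await()
}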
The most important difference, beside the syntax, is the multithreading model of these languages.
Check this article:
Dart supports multi-threading using Isolates. Right in the introduction to Isolates, it is said that
isolates [are] independent workers that are similar to threads but don’t share memory, communicating only via messages.
Kotlin (on the JVM), on the other hand, uses Java threads under the hood, which have access to shared memory.
async/await in both languages is implemented roughly the same way, using CPS (glorified callbacks). The important distinction: in Dart you have a single-threaded event loop dispatching these callbacks, while in Kotlin on the JVM you can have multiple event dispatchers working together, with continuations (callbacks) running truly in parallel on different threads and sharing memory, with all the benefits and issues resulting from that.
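To make that concrete, here is a small sketch of the JVM behaviour: the eight coroutines below run in parallel on the Dispatchers.Default thread pool and mutate a single shared counter (hence the atomic type):
import java.util.concurrent.atomic.AtomicInteger
import kotlinx.coroutines.*

fun main() = runBlocking {
    val counter = AtomicInteger(0)        // shared memory across threads
    val jobs = List(8) {
        launch(Dispatchers.Default) {     // multi-threaded dispatcher
            repeat(1_000) { counter.incrementAndGet() }
        }
    }
    jobs.joinAll()
    println(counter.get())                // always 8000, thanks to the atomic
}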
Also note that Kotlin aims to be a multiplatform language, so while on the JVM it has a multithreaded model, if you compile a Kotlin program with the JS backend it will be single-threaded with an event loop, basically the same as Dart.
P.S. Watch this video from Roman Elizarov (the designer of coroutines in Kotlin); it has a good overview of coroutine usage and internals.

Should runBlocking only be used for tests and in main function?

I have this requirement for a function that gets called periodically:
1. Get some input
2. Do 8 independent computations based on the input
3. Merge the results from these 8 computations and output the merged result
Since I've got at least 8 processors, I can do the 8 independent computations in parallel. So I created the following function:
fun process(input: InputType): ResultType = runBlocking(Dispatchers.Default) {
    val jobs = input.splitToList().map { async { processItem(it) } }
    mergeResults(jobs.awaitAll()) // hypothetical merge step (step 3)
}
However, I've read in the documentation of runBlocking that it is "to be used in main functions and in tests."
This function is not the main function but is called way down in the call hierarchy, in an application that does not use coroutines anywhere else.
What should I use to achieve this requirement if I shouldn't use runBlocking?
There is nothing wrong with using runBlocking() like this. The main point is not to overuse runBlocking() as a cheap way to convert regular code into coroutine code. When converting to coroutines, it may be tempting to just put runBlocking() everywhere in our code and call it done. This would be wrong, because it ignores structured concurrency and we risk blocking threads that should not be blocked.
However, if our whole application is not based on coroutines, we just need to use them in one place, and we never need to cancel background tasks, then I think runBlocking() is just fine.
An alternative is to create a CoroutineScope and keep it in some service with a clearly defined lifecycle. Then we can easily manage background tasks, cancel them, etc.
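For illustration, a minimal sketch of that alternative, reusing the hypothetical types from the question (ProcessingService and shutdown are names I've made up):
import kotlinx.coroutines.*

class ProcessingService {
    // The service owns the scope; SupervisorJob keeps one failed task
    // from tearing down the others.
    private val scope = CoroutineScope(SupervisorJob() + Dispatchers.Default)

    fun processAsync(input: InputType): Deferred<ResultType> = scope.async {
        val jobs = input.splitToList().map { async { processItem(it) } }
        mergeResults(jobs.awaitAll()) // hypothetical merge step
    }

    // Call when the service's lifecycle ends: cancels all in-flight work.
    fun shutdown() = scope.cancel()
}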

TDD Unit Test sub-methods

I'm in a dilemma over whether to write tests for methods that are the product of refactoring another method.
First question: consider this scenario.
class Puzzle(
    val foo: List<Pieces>,
    val bar: List<Pieces>
) {
    init {
        // code to validate foo
        // code to validate bar
    }
}
Here I'm validating parameters when constructing an object. This code is the result of TDD. But with TDD we write failing test -> passing test -> refactor, and when refactoring I transferred the validator methods to a helper class, PuzzleHelper.
object PuzzleHelper {
    fun validateFoo() {
        ...
    }
    fun validateBar() {
        ...
    }
}
Do I still need to test validateFoo and validateBar in this case?
Second question
class Puzzle(
    val foo: List<Pieces>,
    val bar: List<Pieces>
) {
    ...
    fun getPiece(inPosition: Position) {
        validatePosition()
        // return piece at position
    }
    fun removePiece(inPosition: Position) {
        validatePosition()
        // remove piece at position
    }
}

object PuzzleHelper {
    ...
    fun validatePosition() {
        ...
    }
}
Do I still need to write test for getPiece and removePiece that involve position validation?
I really want to be fluent in using TDD, but I don't know how to start. Now I've finally dived in and don't care what's ahead; all I want is product quality. I hope to hear your enlightenment soon.
When we get to the refactoring stage of the Red -> Green -> Refactor cycle, we're not supposed to add any new behavior. This means that all the code is already tested, so no new tests are required. You can easily validate that you've done this by changing the refactored code and watching it fail a test. If it doesn't, you added something you weren't supposed to.
In some cases, if the extracted code is reused in other places, it might make sense to transfer the tests to a test suite for the refactored code.
As for the second question, that depends on your design, as well as on some things that aren't in your code. For example, I don't know what you'd like to happen if validation fails. You'll have to add tests for the validation-failure cases of each method.
The one thing I would like to point out is that placing methods in a static object (class functions, global functions, whatever you want to call them) makes the code harder to test. If you'd like to test your class methods while ignoring validation (stubbing it to always pass), you won't be able to do that.
I prefer to make a collaborator that gets passed to the class as a constructor argument. So your class now gets a validator: Validator, and you can pass anything you want to it in the test: a stub, the real thing, a mock, etc.
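A sketch of that design applied to the Puzzle example (the Validator interface, the index-based lookup, and the helper function are illustrative, not from the question):
interface Validator {
    fun validatePosition(position: Position)
}

class Puzzle(
    val foo: List<Pieces>,
    val bar: List<Pieces>,
    private val validator: Validator // injected collaborator
) {
    fun getPiece(inPosition: Position): Pieces {
        validator.validatePosition(inPosition) // may throw on invalid input
        return foo[inPosition.index]           // hypothetical lookup by index
    }
}

// In a test, stub the validator so validation always passes:
fun puzzleWithStubbedValidation(pieces: List<Pieces>) = Puzzle(
    pieces, pieces,
    object : Validator {
        override fun validatePosition(position: Position) { /* always valid */ }
    }
)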
Do I still need to test validateFoo and validateBar in this case?
It depends.
Part of the point of TDD is that we should be able to iterate on the internal design; aka refactoring. That's the magic that allows us to start from a small investment in up front design and work out the rest as we go -- the fact that we can change things, and the tests evaluate the change without getting in the way.
That works really well when the required behaviors of your system are stable.
When the required behaviors of the system are not stable, when we have a lot of decisions that are in flux, when we know the required behaviors are going to change but we don't know which... having a single test that spans many unstable behaviors tends to make the test "brittle".
This was the bane of automated UI testing for a long time -- because testing a UI spans pretty much every decision at every layer of the system, tests were constantly in maintenance to eliminate cascades of false positives that arose in the face of otherwise insignificant behavior changes.
In that situation, you may want to start looking into ways to introduce bulkheads that prevent excessive damage when a requirement changes. We start writing tests that validate that the test subject behaves the same way as some simpler oracle, along with a test that the simpler oracle does the right thing.
This, too, is part of the feedback loop of TDD: because a test that spans many unstable behaviors is hard to maintain, we refactor towards designs that support testing behaviors at an isolated grain, and larger compositions in terms of their simpler elements.

VB.NET: What happens if I run CPU-bound code using Await?

I am trying to understand async/await. I understand that you should not Await a CPU-bound method, but to help my understanding I am curious what happens if you do. Consider:
Public Async Function DoSomeTasks() As Task
    Await LongRunningCPUBoundMethod1()
    LongRunningCPUBoundMethod2()
End Function

Public Async Function LongRunningCPUBoundMethod1() As Task
    ' Do stuff synchronously
End Function

Public Sub LongRunningCPUBoundMethod2()
    ' Do stuff synchronously
End Sub
How will the Task Scheduler handle the CPU resources? In what order will these methods execute? Will LongRunningCPUBoundMethod1 or LongRunningCPUBoundMethod2 execute first?
The thing to remember here is that Async/Await code is not necessarily multi-threaded. You can use them to help with multi-threaded code by awaiting items that start a separate thread, but what they really do is allow you to break up several tasks efficiently in the same thread.
This doesn't come without some overhead; switching between asynchronous tasks has a cost. When you await CPU-bound tasks, you've added that cost to the already CPU-intensive work, and therefore made things worse rather than better. However, if you combine this with code that starts the CPU-heavy tasks in a separate thread and then uses a WaitHandle or a Task to send the results back, you might be fine again (depending on how many items you're awaiting relative to the number of available cores), because now you're taking advantage of the multiple cores in your CPU.
Additionally, let's look at this in the context of .NET WinForms. It's important to remember that you never want to do significant CPU work on the main UI thread. Really, anything that blocks for more than a few milliseconds is problematic. If that thread is busy, the Windows message pump doesn't run, you can't respond to events, and your user interface becomes unresponsive.
To understand Await in this context, think of it as if it breaks your method up into two parts (or more, if there is more than one Await). Everything up to and including the line with Await runs immediately, and everything after the await is hidden away by the compiler in a new callback method (called a continuation) that will be called with the same context (including variables local to the original method) and in the same thread when the Await has finished.
With this information, it should be clear that if you directly Await a CPU-bound method, you're still doing that work immediately on the UI thread, and your user interface is still in trouble. However, you can again account for this by starting the CPU-bound method on its own thread. Await, in conjunction with Tasks, makes this relatively easy to do without having to write a lot of new code. Certainly it's much better than the old DoEvents() technique.
Order of execution:
1.) LongRunningCPUBoundMethod1()
2.) LongRunningCPUBoundMethod2()
Here's how you could change the program flow and execution order:
Dim pending = LongRunningCPUBoundMethod1() ' call method 1 without awaiting it yet
LongRunningCPUBoundMethod2()               ' method 2 runs before we await
Await pending                              ' now wait for method 1's task to finish
' (LongRunningCPUBoundMethod1 returns a plain Task, so there is no result value here)
Sorry, I don't know how await/async affects CPU resources.

What's a good mechanism to move from global state to patterns like dependency injection?

Background
I'm in the process of reworking and refactoring a huge codebase which was written with neither testability nor maintainability in mind. There is a lot of global/static state going on. A function needs a database connection, so it just conjures one up using a global static method: $conn = DatabaseManager::getConnection($connName);. Or it wants to load a file, so it does it using $fileContents = file_get_contents($hardCodedFilename);.
Much of this code does not have proper tests and has only ever been tested directly in production. So the first thing I am intending on doing is write unit tests, to ensure the functionality is correct after refactoring. Now sadly code like the examples above is barely unit testable, because none of the external dependencies (database connections, file handles, ...) can be properly mocked.
Abstraction
To work around this I have created very thin wrappers around, for example, the system functions, which can be used in places where non-mockable function calls were used before. (I'm giving these examples in PHP, but I assume they are applicable to any other OOP language as well. Also, this is a highly shortened example; in reality I am dealing with much larger classes.)
interface Time {
    /**
     * Returns the current time in seconds since the epoch.
     * @return int for example: 1380872620
     */
    public function current();
}

class SystemTime implements Time {
    public function current() {
        return time();
    }
}
These can be used in the code like so:
class TimeUser {
    /**
     * @var Time
     */
    private $time;

    /**
     * Prints out the current time.
     */
    public function tellsTime() {
        // before:
        echo time();
        // now:
        echo $this->time->current();
    }
}
Since the application only depends on the interface, I can replace it in a test with a mocked Time instance, which for example allows me to predefine the value returned by the next call to current().
Injection
So far, so basic. My actual question is how to get the proper instances into the classes that depend on them. From my understanding of dependency injection, services are meant to be passed down by the application into the components that need them. Usually these services would be created in a main() method or at some other starting point and then strung along until they reach the components where they are needed.
This model likely works well when creating a new application from scratch, but for my situation it's less than ideal, since I want to move gradually to a better design. So I've come up with the following pattern, which automatically provides the old functionality while leaving me with the flexibility of substituting services.
class TimeUser {
    /**
     * @var Time
     */
    private $time;

    public function __construct(Time $time = null) {
        if ($time === null) {
            $time = new SystemTime();
        }
        $this->time = $time;
    }
}
A service can be passed into the constructor, allowing for mocking of the service in a test, yet during "regular" operation, the class knows how to create its own collaborators, providing a default functionality, identical to what was needed before.
Problem
I've been told that this approach is unclean and subverts the idea of dependency injection. I do understand that the true way would be to pass down dependencies, as outlined above, but I don't see anything wrong with this simpler approach. Keep in mind also that this is a huge system, where potentially hundreds of services would need to be created up front. (A service locator would be an alternative, but for now I am trying to go in this other direction.)
Can someone shed some light onto this issue and provide some insight into what would be a better way to achieve a refactoring in my case?
I think you've made a good first step.
Last year I was at DutchPHP, and there was a lecture about refactoring. The lecturer described three major steps for extracting a responsibility from a god class:
1. Extract the code to a private method (it should be a simple copy-paste, since $this is the same)
2. Extract the code to a separate class and pull the dependency
3. Push the dependency
I think you are somewhere between the 1st and 2nd steps. You have a backdoor for unit tests.
The next step, according to the above algorithm, is to create a static factory (the lecturer named it ApplicationFactory) which will be used instead of creating the instance inside TimeUser.
ApplicationFactory is a kind of service locator pattern. This way you will invert the dependency (in line with the SOLID principles).
If you are happy with that, you should stop passing the Time instance into the constructor and use the service locator only (without the backdoor for unit tests; you should stub the service locator instead).
If you are not, then you have to find all the places where TimeUser is instantiated and inject the Time implementation:
new TimeUser(ApplicationFactory::getTime());
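Since the question notes these patterns should carry over to any OOP language, here is a rough sketch of both variants in Kotlin; the ApplicationFactory name comes from the lecture, everything else is illustrative:
interface Time {
    fun current(): Long
}

class SystemTime : Time {
    override fun current(): Long = System.currentTimeMillis() / 1000
}

object ApplicationFactory {
    // In tests, reassign this to a stub before exercising the code under test.
    var time: Time = SystemTime()
    fun getTime(): Time = time
}

// Service-locator style: the class asks the factory itself...
class LocatorTimeUser {
    private val time: Time = ApplicationFactory.getTime()
}

// ...or injection style: the caller passes the dependency in.
class InjectedTimeUser(private val time: Time)

val user = InjectedTimeUser(ApplicationFactory.getTime())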
After some time your ApplicationFactory will become very big. Then you have to make a decision:
- Split it into smaller factories
- Use a dependency injection container (Symfony DI, AurynDI, or something like that)
Currently my team is doing something similar. We are extracting responsibilities into separate classes and injecting them. We have an ApplicationFactory, but we use it as a service locator at as high a level as possible, so the classes below get all their dependencies injected and don't know anything about the ApplicationFactory. Our application factory is big, and now we are preparing to replace it with the Symfony DI container.
You asked for a good mechanism to do this.
You've described some stages you might force the program to go through to accomplish this, but you are still apparently planning to do it by hand, at what is apparently a very high cost.
If you really want to get this done on a huge code base, you might consider automating the steps using a program transformation engine: http://en.wikipedia.org/wiki/Program_transformation
Such a tool can let you write explicit rules for modifying code. Done right, this can make code changes reliably. That doesn't minimize your need for testing, but can let you spend more time writing tests and less time hand-changing the code (erroneously).