Measuring execution time using Micrometer and WebFlux - spring-webflux

I'd like to measure the length of some async calls made with WebFlux. I've been reading through various sources; as I understand it, the @Timed annotation works with AspectJ and essentially just starts a timer before the method call and stops it afterwards. This obviously won't work with async methods.
Are there any solutions for WebFlux, or is the only option to pass around execution timestamps, cluttering my application logic?

Project Reactor natively supports Micrometer; please refer to the documentation to find out more:
https://projectreactor.io/docs/core/milestone/reference/#_publisher_metrics
For example, you may want to monitor reactor.flow.duration.
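As a minimal sketch (assuming reactor-core 3.x with micrometer-core on the classpath; the flow name "my.flow" is just an illustrative label):

import io.micrometer.core.instrument.Metrics;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import reactor.core.publisher.Flux;

public class TimedFlowSketch {
    public static void main(String[] args) {
        // Register a registry with Micrometer's global registry;
        // Reactor publishes its metrics there when metrics() is applied.
        SimpleMeterRegistry registry = new SimpleMeterRegistry();
        Metrics.addRegistry(registry);

        Flux.range(1, 100)
            .name("my.flow")  // should show up as a tag on the recorded metrics
            .metrics()        // instruments this publisher
            .blockLast();

        // reactor.flow.duration records the time between subscription
        // and completion, so no timestamps leak into application logic.
        registry.getMeters().forEach(m -> System.out.println(m.getId()));
    }
}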

Related

Does Restlet support reactive programming?

Does anyone know if Restlet supports reactive programming for handling its requests? If not, what are the available implementations (BIO & NIO)? I understand there is a problem/bug with Restlet 2.2.x.
How is Restlet async handled? I would like to know the design behind this.
There is a NIO module available that you can use:
http://restlet.com/technical-resources/restlet-framework/guide/2.3/extensions/nio
https://github.com/restlet/restlet-framework-java/tree/master/modules/org.restlet.ext.nio
And there are async capabilities as shown in this test case with the response handler on the request, around line #100 or so:
https://github.com/restlet/restlet-framework-java/blob/master/modules/org.restlet.test/src/org/restlet/test/engine/connector/AsynchroneTestCase.java
Regarding the issue with the Camel integration, it's not clear yet where the problem lies, whether it's in the integration, or in Restlet Framework per se. More investigation is needed.
For the design behind the async handling, don't hesitate to dive into the sources of the project; after all, it's open source! You can start by looking at the Request class and its setOnResponse() method, which is the method that sets the callback handler.
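To illustrate, a minimal sketch of that callback style against the Restlet 2.x API (the URI is just a placeholder):

import org.restlet.Client;
import org.restlet.Request;
import org.restlet.Response;
import org.restlet.Uniform;
import org.restlet.data.Method;
import org.restlet.data.Protocol;

public class AsyncRestletSketch {
    public static void main(String[] args) {
        Client client = new Client(Protocol.HTTP);
        Request request = new Request(Method.GET, "http://example.com/resource");

        // setOnResponse() registers the callback invoked when the
        // response arrives, instead of blocking the calling thread.
        request.setOnResponse(new Uniform() {
            public void handle(Request req, Response resp) {
                System.out.println("Got status: " + resp.getStatus());
            }
        });

        client.handle(request);
    }
}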

Determine whether or not web application is running in UI test mode

I am trying to figure out the best way to determine if I am running UI tests for a web application. The reason I am trying to do this is that if I am running UI tests, the only purpose of those tests is to make sure that the UI is working properly, and to do that, they should run against mocked APIs (we have a separate set of integration tests to make sure the UI and a true backend API work properly together). Mocking the API calls will also make the tests run a lot faster, which is another reason to mock them. I consider these "unit tests" for the UI.
I also don't want to have two separate copies of the same codebase where everything is the same except that the UI test version includes the JavaScript file that mocks all the calls needed for the UI tests to run properly. If I were able to figure out that I am running the application in UI test mode, then I would know whether or not to include the JavaScript file that mocks the calls.
Is there any "standard" or "accepted" way to do something like this?
When you start running tests, raise a flag in the DB and have a service you can call to check that flag. Make sure to turn the flag off once the tests have ended.
The short answer to "Is there any standard or accepted way to do something like this?" would be: no.
This is mainly because you don't want your UI to know this kind of information at all. You want your UI to just be your UI. As soon as your UI starts taking some decisions based on whether it's in "test mode" or "production mode", you embark on a slippery slope that will ultimately lead to a nightmare code-base.
This does not mean your problem cannot be solved; just that the solution should be approached in a different way. I'll first explain the general principles without any language specifics, then provide some guidelines for javascript.
General Principles
The only reason for you to be struggling with this is that your UI is too tightly coupled to the API.
The solution happens to be exactly the same as any situation when you wish to use mocks.
Program to an interface, not an implementation. (Ensure your UI binds only to an abstraction of the API - not the "true/production API".)
Separate instantiation from interaction. (Don't let your UI create any of its API dependencies, because that binds it to a specific implementation - rather provide interface on the UI for it to be given the specific API instance it should use.)
Program to an interface
First note that the above phrase does not mean your language needs to support an "interface" construct. (It's just an unfortunate choice of name by some language implementors.)
Define a base class/object which defines each of the methods/messages that your API should support. (However, none of these will actually be implemented on the base class/object.)
Your UI should have a variable/field/reference to the APIInterface.
Your UI will call the methods it needs from the API via the interface reference. E.g. APIRef.DoMethod1(...) or APIRef->DoMethod1(...) or [APIRef DoMethod1:...] etc.
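In Java, for example, this principle might look like the following sketch (APIInterface, getUser and UI are hypothetical names chosen for illustration):

// The abstraction the UI binds to; no implementation lives here.
interface APIInterface {
    String getUser(int id);
}

class UI {
    // The UI holds a reference typed only as the abstraction.
    private final APIInterface apiRef;

    UI(APIInterface apiToUse) {
        this.apiRef = apiToUse;
    }

    void showUser(int id) {
        // All calls go through the interface reference,
        // never through a concrete API class.
        System.out.println(apiRef.getUser(id));
    }
}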
Separate instantiation from interaction
The thing to avoid here is:
CreateUI {
    APIRef = CreateAPI;
}
The above binds your UI to a specific implementation, and forces you to include those files/dependencies in your UI code. You would rather have your UI be told which API to use. E.g.
CreateUI(APIInterface APIToUse) { // NB: Notice that the type used to refer
                                  // to the API is the abstract base type
                                  // defined earlier (keeping to the "Program
                                  // to an interface" principle).
    APIRef = APIToUse;
}
// or
SetAPI(APIInterface APIToUse) {
    APIRef = APIToUse;
}
Now your production application could look something like this:
API = CreateTrueAPI;
UI = CreateUI(API);
Whereas your test application could look something like this:
API = CreateMockAPI;
UI = CreateUI(API);
Notice how with this solution, your UI doesn't have a clue about "test mode" or "production mode". It just uses the API it is given. The only thing that knows about the "test mode" (in a manner of speaking) and the mock API is the test application.
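Continuing the hypothetical Java sketch from above, the two hosts might be wired like this (TrueAPI and MockAPI are illustrative names):

class TrueAPI implements APIInterface {
    public String getUser(int id) {
        // ...real call to the production backend would go here...
        return "real-user-" + id;
    }
}

class MockAPI implements APIInterface {
    public String getUser(int id) {
        return "mock-user-" + id;  // canned response, no backend needed
    }
}

class ProductionHost {
    public static void main(String[] args) {
        UI ui = new UI(new TrueAPI());  // production wiring
        ui.showUser(42);
    }
}

class TestHost {
    public static void main(String[] args) {
        UI ui = new UI(new MockAPI()); // test wiring: same UI, different API
        ui.showUser(42);
    }
}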
Applying the principles to Javascript
First, let me state for the record: although I am familiar with the language principles of Javascript, I have never done JS development, so there may be some unforeseen complications. However, in the worst case, with a little tweaking and research, I'm sure you'll figure something out.
Javascript supports duck-typing, which basically means you can send any message to any object, and at runtime the object will decide if it can actually process the message. You lose out on compile-time checking that you haven't made any typo errors, but as I understand it, you don't really need to define the abstract base interface at all.
So...
Simply ensure your UI has a reference to an API object.
Ensure your UI doesn't include any API implementation files (neither the true/production version nor the mock version).
In your production host create the true API, create the UI and pass the true API to the UI.
In your test host create the mock API, create the UI and pass the mock API to the UI.

How do I access/read the Phonegap API?

I know there is this: http://docs.phonegap.com/en/2.1.0/index.html but it doesn't really help.
I am trying to learn about the appView variable (I think it's a variable). I would've said it was a class but it starts with a lower case letter :/
The reason I am trying to learn that is because I am trying to understand the appView.addJavascriptInterface(Object, String) method.
My main goal is to send a variable from a java file to a javascript file. Tutorials online seem to be using the method stated above. Because the method takes in an object, the tutorials seem to be creating another class. I want to simplify my code as much as possible so I was seeing if there are any other options.
You will want to write a Plugin. We've already gone through the pain of the JS-to-Java and back-to-JS communication. If you purely use addJavascriptInterface, you will run into some edge cases where it doesn't work, which we already guard against.
In the appView.addJavascriptInterface(Object, String) method, Object refers to the Java object from which you want to transfer data from Java to JavaScript.
You can't achieve this functionality without creating a new class.
Apart from writing a Plugin, the above-mentioned approach is the only way to achieve communication between Java and JavaScript in PhoneGap apps.
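If you do go the addJavascriptInterface route, a minimal sketch on Android might look like this (JsBridge and "MyBridge" are hypothetical names; note that on Android 4.2+/API 17 the exposed methods must also carry the @JavascriptInterface annotation):

import android.webkit.JavascriptInterface;

// Hypothetical bridge class; its annotated methods become callable from JS.
public class JsBridge {
    private final String value;

    public JsBridge(String value) {
        this.value = value;
    }

    @JavascriptInterface  // required on API 17+ to expose the method
    public String getValue() {
        return value;
    }
}

// In the activity, expose an instance under a global JavaScript name:
//   appView.addJavascriptInterface(new JsBridge("hello from Java"), "MyBridge");
// Then, in JavaScript:
//   var v = MyBridge.getValue();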

Using .Net 4.0 new features for parallel tasks

I've previously asked a question about designing a service that receives video files, sends them to an encoding service, waits for the encoding to be completed, and then downloads the files.
I started writing the code for that, and one of my workmates suggested I use the new .NET 4.0 features instead of writing it using BackgroundWorker. I've done some reading and the Parallel feature sounds great. Are there any more new features I should implement? I'm new to .NET 4.0.
Thanks!
Parallel Extensions is certainly one good option here. Another you might want to consider is Reactive Extensions, which implements a "push" model instead. It takes a little while to get your head round, but it's very elegant - and might work very well with your asynchronous model.

Simple, non-networking example of Twisted/PyGTK

I was struggling with getting some asynchronous activity to work under PyGTK, when someone suggested that I look at using Twisted.
I know that Twisted started as a networking framework, but that it can be used for other things. However, every single example I've ever seen involves a whole lot of network-based code. I would like to see an example of using Twisted for a simple PyGTK desktop app, without needing to expend the extra mental effort of understanding the network aspect of things.
So: Is there a clean, simple tutorial for or example of using Twisted to create a GTK (PyGTK) app and perform asynchronous tasks?
(Yes, I've seen pbgtk2.py. It's uncommented, network-centric and completely baffling to a newcomer.)
Updated: I had listed various gripes with glib.idle_add/gtk.gdk.lock and friends not working properly under Windows. This was all reasoned out on the pygtk list - there's some trickery that is needed with PyGTK to get asynchronous behaviour working under Windows.
However, my point still stands that any time I mention doing asynchronous activity in PyGTK, someone says "don't use threads, use Twisted!" I want to know why and how.
To perform asynchronous tasks in PyGTK, Twisted simply uses functions such as gobject.io_add_watch/glib.io_add_watch and gobject.timeout_add/glib.timeout_add (plus some others; you can find them in the gobject and glib modules), so there's not much difference between using raw PyGTK functions and Twisted if you don't need networking.
In addition, Twisted has the same problems as PyGTK with asynchronous tasks: Twisted uses the same loop as PyGTK, so it gets blocked if you perform a blocking task!
The best thing to do is to use one of the glib functions that are intended precisely to handle such situations.
I've tested the correct behaviour of Twisted+PyGTK under Windows in an application, but I avoided doing blocking work (at most reading from a large file, chunk by chunk, basically using glib.idle_add or glib.io_add_watch; Twisted does something like that internally).
For example, I'm not sure that spawning a process and processing its stdout with glib.io_add_watch works. I've written an article on my blog that covers running asynchronous processes in PyGTK; I'm not certain it works on Windows, though that may depend on the version.