Can the TFS build server compare test performance after each build?

I want to observe the dynamics of performance changes.
For example, I have a method which generates a random number. The first version of this method takes 350 ms; the second version takes 450 ms.
Tests on the TFS build server should then fail, because the method has become slower.
Can TFS store and compare previous performance results?
How do I write test methods to do that?

I would recommend that, instead of doing something as complicated as comparing against previous runs, you just run it once and decide against a fixed baseline.
You can easily compare the current execution time to a 350 ms baseline and fail the test if it goes over.
If you go down the road of comparing to the last run, you will unnecessarily increase the test time: you add the time needed to load the previous result from storage and to save the new value.
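For illustration, here is a minimal sketch of that fixed-baseline idea as a JUnit 4 test in Java (the question is about TFS, where this would more likely be an MSTest/C# test, but the pattern is the same); the 350 ms constant and the generateRandomNumber() stub are placeholders for the real method under test:

import static org.junit.Assert.assertTrue;

import java.util.Random;

import org.junit.Test;

public class RandomGeneratorPerformanceTest {

    // Hypothetical baseline; in practice this could come from test settings rather than a constant.
    private static final long BASELINE_MS = 350;

    @Test
    public void generationStaysWithinBaseline() {
        long start = System.nanoTime();
        generateRandomNumber(); // stand-in for the real method under test
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        assertTrue("Expected at most " + BASELINE_MS + " ms but took " + elapsedMs + " ms",
                elapsedMs <= BASELINE_MS);
    }

    // Placeholder for the method whose performance is being guarded.
    private int generateRandomNumber() {
        return new Random().nextInt();
    }
}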

You could use the Stopwatch class to measure the time needed to perform the calculations. After the calculations are done, you write the time needed into a file. On the next test run you compare the measured time to the value in the file. If it's the same or faster, you replace the value in the file and let the test succeed.
If it is higher, you fail the test and report both the previously recorded time and the time this run needed in the failure message.
If the file does not exist, your test has to succeed, since there is nothing to compare the measured time against.
Of course that's not using the TFS test environment directly, but this way you can fine-tune things exactly the way you want.
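A rough sketch of that file-based comparison, again as a JUnit 4 test (the answer mentions the .NET Stopwatch class; this sketch uses System.nanoTime() instead, and the baseline file path and doCalculation() stub are assumptions):

import static org.junit.Assert.assertTrue;

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.junit.Test;

public class RegressionGuardTest {

    // Assumed location of the stored result; on a build server this should be a persisted path.
    private static final Path BASELINE_FILE = Paths.get("target", "perf-baseline.txt");

    @Test
    public void notSlowerThanLastRun() throws IOException {
        long start = System.nanoTime();
        doCalculation(); // stand-in for the code being measured
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        if (!Files.exists(BASELINE_FILE)) {
            // No previous value: record this run and let the test succeed.
            writeBaseline(elapsedMs);
            return;
        }

        long previousMs = Long.parseLong(
                Files.readAllLines(BASELINE_FILE, StandardCharsets.UTF_8).get(0).trim());

        // Fail with both values in the message if this run was slower.
        assertTrue("Was " + previousMs + " ms before, now " + elapsedMs + " ms",
                elapsedMs <= previousMs);

        // Same or faster: store the new value for the next run.
        writeBaseline(elapsedMs);
    }

    private void writeBaseline(long elapsedMs) throws IOException {
        Files.createDirectories(BASELINE_FILE.getParent());
        Files.write(BASELINE_FILE, String.valueOf(elapsedMs).getBytes(StandardCharsets.UTF_8));
    }

    private void doCalculation() {
        // Placeholder for the real calculation.
        for (int i = 0; i < 1_000_000; i++) { Math.sqrt(i); }
    }
}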

How to handle a huge amount of changes in real-time planning?

If real-time planning and daemon mode are enabled, then whenever a planning entity is updated or added, a problem fact change must be invoked.
So let's say the average rate of change is 1/sec; that means a problem fact change must be invoked every second, restarting the solver every second.
Do we just invoke or schedule a problem fact change every second (restarting the solver every second), or, if we know there will be a huge amount of changes, stop the solver first, apply the changes, and then start the solver again?
In the scenario you describe, the solver will likely be restarted every time. It's not a complete restart, as if you had just called Solver.solve() with the updated, last-known solution, but the ScoreDirector, the component responsible for score calculation, is restarted each time a problem change is applied.
If problem changes come faster, they might be processed in a batch. The solver checks for problem changes between the evaluation of individual moves, so if multiple changes arrive before the solver finishes evaluating the current move, they are all applied and the solver restarts just once. In the opposite case, when changes come only rarely, the restart doesn't matter much, as there is enough time for the solver to improve the solution.
But the rate of 1 change/sec will likely lead to frequent solver restarts and will affect its ability to produce better solutions.
The solver does not know whether a larger number of changes is coming in the next second. The current behavior might be improved by processing problem changes periodically, at a predefined time interval, rather than between move evaluations.
Of course, the periodic grouping of problem changes can be done outside the solver as well.
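As a sketch of that periodic grouping outside the solver: incoming changes are collected in a queue and drained in one batch at a fixed interval, so the solver restarts once per batch instead of once per change. The interval, the queue-based design, and the imports (which assume an OptaPlanner 7.x classpath with the ProblemFactChange API the question refers to) are assumptions.

import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.optaplanner.core.api.solver.Solver;
import org.optaplanner.core.impl.solver.ProblemFactChange;

public class ChangeBatcher<Solution_> {

    private final Queue<ProblemFactChange<Solution_>> pending = new ConcurrentLinkedQueue<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public ChangeBatcher(Solver<Solution_> solver, long intervalSeconds) {
        // Every intervalSeconds, drain whatever accumulated and submit it as a single batch.
        scheduler.scheduleAtFixedRate(() -> {
            List<ProblemFactChange<Solution_>> batch = new ArrayList<>();
            ProblemFactChange<Solution_> change;
            while ((change = pending.poll()) != null) {
                batch.add(change);
            }
            if (!batch.isEmpty()) {
                solver.addProblemFactChanges(batch);
            }
        }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
    }

    // Called from wherever the external updates arrive.
    public void submit(ProblemFactChange<Solution_> change) {
        pending.add(change);
    }

    public void shutdown() {
        scheduler.shutdown();
    }
}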

Recommended way of measuring execution time in Tensorflow Federated

I would like to know whether there is a recommended way of measuring execution time in Tensorflow Federated. To be more specific: if one would like to extract the execution time for each client in a certain round, e.g. for each client involved in a FedAvg round, saving the time stamp just before local training starts and the time stamp just before sending back the updates, what is the best (or just correct) strategy to do this? Furthermore, since the clients' code runs in parallel, are such time stamps misleading (especially considering the hypothesis that different clients may be using differently sized models for local training)?
To be very practical: is using tf.timestamp() at the beginning and at the end of the @tf.function client_update(model, dataset, server_message, client_optimizer) -- this is probably a simplified signature -- and then subtracting the two time stamps appropriate?
I have the feeling that this is not the right way to do this, given that clients run in parallel on the same machine.
Thanks to anyone who can help me with that.
There are multiple potential places to measure execution time, so the first step might be to define very specifically what the intended measurement is.
Measuring the training time of each client as proposed is a great way to get a sense of the variability among clients. This could help identify whether rounds frequently have stragglers. Using tf.timestamp() at the beginning and end of the client_update function seems reasonable. The question correctly notes that this happens in parallel; summing all of these times would be akin to CPU time.
Measuring the time it takes to complete all client training in a round would generally be the maximum of the values above. This might not be true when simulating FL in TFF, as TFF may decide to run some clients sequentially due to system resource constraints. In practice all of these clients would run in parallel.
Measuring the time it takes to complete a full round (the maximum time it takes to run a client, plus the time it takes for the server to update) could be done by moving the tf.timestamp calls to the outer training loop. This would be wrapping the call to trainer.next() in the snippet on https://www.tensorflow.org/federated. This would be most similar to elapsed real time (wall clock time).
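To make the sum-versus-maximum distinction concrete outside of TFF, here is a small Java illustration (Java only because the other snippets in this collection use it; it is not TFF code, and the workloads are hypothetical): each parallel task reports its own elapsed time, the sum of those behaves like CPU time, and the wall-clock time of the whole batch is close to the maximum.

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SumVsMaxTiming {

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Four "clients" with different amounts of local work (hypothetical workloads).
        List<Callable<Long>> clients = List.of(
                work(100), work(200), work(300), work(400));

        long roundStart = System.nanoTime();
        List<Future<Long>> results = pool.invokeAll(clients);

        long sumMs = 0;
        long maxMs = 0;
        for (Future<Long> f : results) {
            long clientMs = f.get();
            sumMs += clientMs;                 // akin to total CPU time across clients
            maxMs = Math.max(maxMs, clientMs); // the straggler that bounds the round
        }
        long wallMs = (System.nanoTime() - roundStart) / 1_000_000;

        System.out.printf("sum=%d ms, max=%d ms, wall clock=%d ms%n", sumMs, maxMs, wallMs);
        pool.shutdown();
    }

    // Each client sleeps for the given time and reports its own elapsed time.
    private static Callable<Long> work(long millis) {
        return () -> {
            long start = System.nanoTime();
            Thread.sleep(millis);
            return (System.nanoTime() - start) / 1_000_000;
        };
    }
}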

OptaPlanner - Real-time planning doesn't find a good solution

I am trying to use real-time planning with a custom SolverThread that implements SolverEventListener, and with daemon mode enabled.
I am not interested in inserting or deleting entities. I am just interested in "updating" them, for example, changing the "priority" for a particular entity in my PlanningEntityCollectionProperty collection.
At the moment, I am using:
scoreDirector.beforeProblemPropertyChanged(entity);
entity.setPriority(newPriority);
scoreDirector.afterProblemPropertyChanged(entity);
It seems that the solver runs and manages to improve the current solution, but it only spends a few ms on it:
org.optaplanner.core.impl.solver.DefaultSolver: Real-time problem fact changes done: step total (1), new best score (0hard/-100medium/-15soft).
org.optaplanner.core.impl.solver.DefaultSolver: Solving restarted: time spent (152), best score (0hard/-100medium/-15soft), environment mode (REPRODUCIBLE),
Therefore, the solver stops really soon, even though it has a 10-second UnimprovedSecondsSpentLimit. The first time the solver is executed, it stops after 10 seconds, but on the following runs it stops after a few ms and doesn't manage to get a good solution.
I am not sure I need to use "beforeProblemPropertyChanged" when the planning entity changes, but I can't find any alternative, because "beforeVariableChanged" is used when the planning variable changes, right? Maybe OptaPlanner just doesn't support updates to entities and I need to delete the old one using beforeEntityRemoved and insert it again using beforeEntityAdded?
I was using BRANCH_AND_BOUND; however, I have changed to local search TABU_SEARCH and the solver now uses the full 10 seconds. However, it seems stuck in a local optimum, because it doesn't manage to improve the solution even with a really small collection (10 entities).
Anyone with experience with real time planning?
Thanks
The "Solving restarted" always follows very shortly after "Real-time problem fact changes done", because real-time problem facts effectively "stop & restart" the solver. The 10 sec unimproved termination only starts counting again after the restart.
DEBUG logging (or even TRACE) will show you what's really going on.

Why is run time considered for complexity analysis and not compile time?

While analyzing any algorithm, we consider its time complexity; the root issue is that the designer concerns himself primarily with run time and not compile time.
That is, whenever we analyze the complexity of a given algorithm, we only care about the run time required by the algorithm and not the compile time. Why is that?
Compilation occurs once. Over the lifetime of a product it therefore has a constant cost. Complexity is a measure of how costs grow proportionally to input. A constant cost does not grow proportionally to input. It therefore contributes zero.
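As a rough back-of-the-envelope illustration of that point, assume the program is compiled once and then run k times on inputs of size n:

total_cost(n, k) = T_compile + k * T_run(n)

T_compile is a constant, so as n and k grow the total is dominated by k * T_run(n); only the run time contributes to the asymptotic complexity.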
Because, to be blunt, the programmer doesn't really matter. You are designing code for a client, and at the end of the day you will compile that code for them. Once it is compiled into a .jar or .exe, it shouldn't have to be compiled again. They don't care if it took you four hours to compile; they only see the run-time efficiency. If you compile once but run the code 4000 times, which is going to matter more: run time or compile time?
(N.B. I'm posting late)
OK, so I see the point; it's pretty obvious.
Simply put: "The compilation time doesn't really matter."
All that matters is how fast the program executes on a discrete set of provided inputs.
Compile time may vary from programmer to programmer according to their program size for the same task, but run time varies only according to the algorithm; thus run time is what matters.
Because compile time is a function of run time.

Is there a fast way to find out the presence of elements on a form?

I am automating a form which has many fields, all of which are dynamic, i.e. fields are generated based on the value selected in the preceding field. At present I wait for each field; if it appears I fill it, otherwise I skip it. However, this has made the process very slow. Is there a more efficient way to do it?
As suggested by Vinay, you can reduce the execution time to a certain extent, but not entirely.
Doesn't it also take time when you test it manually? If the total execution time for the scenario is longer than doing it manually, then this scenario is not a good candidate for automation. But if it takes less time than doing it manually, then it is still worth the effort.
It depends upon the application's speed. If you are able to do it manually, it's possible in automation too. You could reset the implicit wait time to speed up the process:
driver.manage().timeouts().implicitlyWait(0, TimeUnit.SECONDS);
Don't forget to set the wait time back when you need it again.
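A hedged sketch of that pattern with the standard Selenium Java API; the fieldId parameter and the restored 10-second wait are assumptions. With the implicit wait set to zero, findElements() returns an empty list immediately when the field is absent, so optional fields can be probed without paying the full timeout.

import java.util.List;
import java.util.concurrent.TimeUnit;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class OptionalFieldFiller {

    // fieldId and value are hypothetical; returns true if the field existed and was filled.
    public static boolean fillIfPresent(WebDriver driver, String fieldId, String value) {
        // Drop the implicit wait so findElements() returns immediately when the field is absent.
        driver.manage().timeouts().implicitlyWait(0, TimeUnit.SECONDS);
        try {
            List<WebElement> fields = driver.findElements(By.id(fieldId));
            if (fields.isEmpty()) {
                return false; // field not generated for this form state, so skip it
            }
            fields.get(0).sendKeys(value);
            return true;
        } finally {
            // Restore the original implicit wait (10 seconds here is an assumption).
            driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
        }
    }
}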