Thunder Client math operations - testing

Thunder Client doesn't seem to support math operations in its tests.
Is there any way I could make it calculate a multiplication like this?

You can perform math operations using filters. For example, you can use {{quantityCols | multiply("pageSize")}}.
Documentation here:
https://github.com/rangav/thunder-client-support/blob/master/docs/filters.md#multiply

Related

Operations performed on the communications between the server and clients

Part of federated learning research is based on operations performed on the communications between the server and clients, such as dropping part of the updates (dropping some of the gradients describing a model) exchanged between clients and the server, or discarding an update from a specific client in a certain communication round. I want to know whether such capabilities are supported by the TensorFlow Federated (TFF) framework and how they are supported, because from a first look it seems to me that the level of abstraction of the TFF API does not allow such operations. Thank you.
TFF's language design intentionally avoids a notion of client identity; there is a desire to avoid making a "Client X" addressable, which is what discarding its update or sending it different data would require.
However, there may be a way to run simulations of the type of computations mentioned. TFF does support expressing the following:
Computations that condition on properties of tensors, for example ignoring an update that has NaN values. One way this could be accomplished is by writing a tff.tf_computation that conditionally zeros out the weight of updates before tff.federated_mean; see the sketch after this list. This technique is used in tff.learning.build_federated_averaging_process().
Simulations that run different computations on different sets of clients (where a set may be a single client). Since the reference executor parameterizes clients by the data they possess, a writer of TFF code could write two tff.federated_computations, apply them to different simulation data, and combine the results.
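For the first case, a minimal sketch of the idea might look like the following. It is illustrative only: the flat float32[10] update type, the function names, and the exact decorator spellings are assumptions about a TFF version that exposes tff.tf_computation and tff.federated_computation as in the answer above.

```python
import tensorflow as tf
import tensorflow_federated as tff

# Hypothetical update type: a flat vector of 10 model deltas per client.
UPDATE_SPEC = tff.TensorType(tf.float32, [10])

@tff.tf_computation(UPDATE_SPEC)
def weight_for_update(update):
    # Weight 0.0 if the update contains any NaN, 1.0 otherwise.
    has_nan = tf.reduce_any(tf.math.is_nan(update))
    return tf.where(has_nan, 0.0, 1.0)

@tff.federated_computation(tff.FederatedType(UPDATE_SPEC, tff.CLIENTS))
def mean_ignoring_nan_updates(client_updates):
    # Updates weighted 0.0 contribute nothing to the weighted federated mean,
    # which effectively "drops" them without ever addressing a specific client.
    weights = tff.federated_map(weight_for_update, client_updates)
    return tff.federated_mean(client_updates, weight=weights)
```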

How to choose best parameters with a differential evolution algorithm

For an assignment in class I need to optimize four 10-dimensional functions. When implementing differential evolution I noticed that all the functions needed different parameter settings. By playing around, it seemed that choosing a high crossover rate and an F around 0.5 worked fine.
However, on one function, the 10-dimensional Katsuura function, my differential evolution algorithm seems to fail. I tried a bunch of parameters but keep scoring 0.01 out of 10. Does differential evolution not work for certain objective functions?
I tried implementing PSO for this problem as well, but that failed too, so I am starting to think this function has certain properties that can only be handled by certain algorithms?
I based my DE on this article:
https://en.wikipedia.org/wiki/Differential_evolution
With kind regards,
Kees Til
If you look at the function you will notice that it is pretty tough. It is usual for general-purpose heuristics like DE and PSO to have problems with such tough functions.
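For reference, here is a minimal, generic DE sketch (following the scheme on the Wikipedia page linked above) that shows where F and CR enter; the objective, bounds, and parameter values are placeholders, not a recommendation for the Katsuura function.

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=50, F=0.5, CR=0.9,
                           generations=1000, rng=None):
    """Minimize `objective` over a box given by `bounds` (list of (low, high) pairs)."""
    rng = np.random.default_rng() if rng is None else rng
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([objective(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct individuals, all different from i.
            choices = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(choices, size=3, replace=False)]
            # Mutation: scaled difference vector controlled by F.
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover controlled by CR; one index always crosses over.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: keep the trial only if it improves.
            f_trial = objective(trial)
            if f_trial < fitness[i]:
                pop[i], fitness[i] = trial, f_trial
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

# Example usage on a placeholder 10-dimensional sphere function.
if __name__ == '__main__':
    sphere = lambda x: float(np.sum(x ** 2))
    best_x, best_f = differential_evolution(sphere, [(-5.0, 5.0)] * 10)
    print(best_f)
```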

Back&: Compute Units

I'm working with Backand on an Ionic mobile app that I'm developing. It's a data-intensive app, so I want to make sure I'm doing my interactions as efficiently as possible. Are there parameters documented somewhere that tell me which ways of making calls are better? For example, I'm assuming a query using MySQL requires more Compute Units than a REST GET call with a filter, but how big is the difference? Thanks for any help, as I haven't found any documentation on Backand Compute Units.

Windowing functions in Dataflow and Big Query

I am looking at analysing streaming data (web events).
Is there a good rule of thumb to help me determine whether I should
perform grouping and aggregation in Dataflow and write the output,
or
use Dataflow to stream into BigQuery, and possibly use a range decorator to limit the data / use a windowing function for partitions, and aggregate via SQL?
Looking at the examples in the documentation and this article:
https://cloud.google.com/dataflow/blog/dataflow-beam-and-spark-comparison
The Classic Batch Programming, Hourly Team Scores, All-time User Scores, and User Behaviour Analysis examples feel like they are straightforward to create via SQL (given that "created" and "write" timestamps are recorded).
For the spam-filtering example, I can see the limitations of using BQ if this is applied on a per-event streaming basis.
The semantics of Dataflow seem to overlap with BQ in terms of GroupBy, Join, Combine, and Windowing, and BQ supports streaming inserts with availability within seconds, well short enough for hour-level aggregation.
Is there something fundamental I have not understood? Or is there a case where streaming into BigQuery and then querying will start to become unreliable?
Thank you
Chris
(Apologies if this question is a bit vague - happy to be redirected to a better place to ask)
Whether one chooses to perform grouping and aggregation in Dataflow or using BigQuery operations (after having ingested the data with Dataflow) depends on the application logic and on what consumes the output. For example, sessions and sliding windows are both hard to express in SQL, while Dataflow supports arbitrary processing such as triggered estimates. Another thing to consider is that it may be easier to express the computation logic in an imperative programming language than in SQL.
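As an illustration of the kind of aggregation that is awkward in plain SQL, here is a rough Apache Beam (Python SDK) sketch of a per-user session-window count. The Pub/Sub topic, message parsing, BigQuery table, schema, and the 10-minute gap are all placeholders, not details from the question.

```python
import apache_beam as beam
from apache_beam import window
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)  # placeholder options for a streaming job

with beam.Pipeline(options=options) as p:
    (p
     | 'ReadEvents' >> beam.io.ReadFromPubSub(topic='projects/my-project/topics/web-events')
     | 'KeyByUser' >> beam.Map(lambda msg: (msg.decode().split(',')[0], 1))  # (user_id, 1)
     | 'SessionWindow' >> beam.WindowInto(window.Sessions(10 * 60))  # 10-minute session gap
     | 'CountPerUserSession' >> beam.CombinePerKey(sum)
     | 'Format' >> beam.Map(lambda kv: {'user_id': kv[0], 'events': kv[1]})
     | 'WriteToBQ' >> beam.io.WriteToBigQuery(
           'my-project:analytics.user_sessions',
           schema='user_id:STRING,events:INTEGER'))
```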
The points below don't necessarily answer your exact question, but rather add yet another aspect to consider:
1. If you are building a process that is supposed to power your infrastructure, Dataflow might be a good choice. Of course, you are bound to your tech team's resources.
2. If you plan for ad-hoc and self-serve activity by non-technical personnel (technical personnel not excluded, of course), you can focus on employing BigQuery's query features (including windowing functions) and make sure you have good, real working examples that the rest of your company can use as templates to start leveraging the power of BigQuery and GCP in general. This has proved to work great: domain experts can now answer questions like the ones you listed by themselves, without tech people in between, and both quality and timing are much better in this scenario.
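For the self-serve route, a hedged illustration of what such a windowing query could look like when run from Python: the project, table, column names, and the one-hour window are made up; only the OVER(...) construct is standard BigQuery SQL of the kind referred to above.

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes default project and credentials

# Count each user's events over the trailing hour, computed per row.
query = """
SELECT
  user_id,
  event_time,
  COUNT(*) OVER (
    PARTITION BY user_id
    ORDER BY UNIX_SECONDS(event_time)
    RANGE BETWEEN 3600 PRECEDING AND CURRENT ROW
  ) AS events_last_hour
FROM `my-project.analytics.web_events`
"""
for row in client.query(query).result():
    print(row.user_id, row.event_time, row.events_last_hour)
```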

Precision and recall in Lucene

Hello all,
I was wondering: if I want to measure precision and recall in Lucene, what's the best way for me to do it? Is there any sample source code that I can use?
A little background: I am using Lucene to create a kind of search engine for my thesis, and I also want to analyse how well the search engine performs. The only way to do that, I think, is to compute the precision and recall metrics, so any suggestion would be helpful.
Thanks.
You can try these email threads. Alternatively, you can use MRR (mean reciprocal rank).
See also Search Application Relevance Issues.
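Whatever Lucene-side tooling you end up using, the metrics themselves are simple to compute once you have, for each query, the document IDs returned and a hand-labelled set of relevant documents. A minimal, framework-agnostic sketch (the document IDs below are placeholders):

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query, given retrieved and relevant doc IDs."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Example: results for one query against a small labelled judgment set.
p, r = precision_recall(retrieved=['d1', 'd3', 'd7'], relevant=['d1', 'd2', 'd3'])
print(p, r)  # 0.666..., 0.666...
```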