I have a table like this that captures the experiment data:
treatment metric values    control metric values
1                          6
2                          5
3                          7
...                        ...
I want to calculate the p-value for the experiment in Presto using SQL. I can take the average of the metric values for both the treatment and control groups to compare them, but I need a p-value to see whether the difference is statistically significant.
Given your data format, and assuming equal population sizes, that all users in the experiment are in the data set, etc.:
SELECT
  NORMAL_CDF(
    ABS(AVG("treatment metric values") - AVG("control metric values")),
    SQRT(VAR_SAMP("treatment metric values") + VAR_SAMP("control metric values")),
    0
  ) AS p_value
FROM experiment_data  -- placeholder table name; replace with your table
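If you want to sanity check the same number outside Presto, the calculation above can be reproduced in Python; this is only a sketch, assuming you can pull the two metric columns into arrays (the values below are the ones from the example table), and it notes the more conventional Welch t-test as a comparison.

import numpy as np
from scipy import stats

# The two metric columns from the example table above
treatment = np.array([1, 2, 3])
control = np.array([6, 5, 7])

# Mirrors the Presto expression: NORMAL_CDF(ABS(mean diff), SQRT(var_t + var_c), 0)
diff = abs(treatment.mean() - control.mean())
sd = np.sqrt(treatment.var(ddof=1) + control.var(ddof=1))
p_value = stats.norm.cdf(0, loc=diff, scale=sd)

# A more conventional check is Welch's t-test:
# stats.ttest_ind(treatment, control, equal_var=False)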
I have a dataset with parallel time series. The column 'A' depends on columns 'B' and 'C'. The order (and the number) of dependent columns can change. For example:
             A   B    C
2022-07-23   1  10  100
2022-07-24   2  20  200
2022-07-25   3  30  300
How should I transform this data, or how should I build the model, so that the order of columns 'B' and 'C' ('A', 'B', 'C' vs 'A', 'C', 'B') doesn't change the result? I know about GCN, but I don't know how to implement it. Maybe there are other ways to achieve it.
UPDATE:
I want to generalize my question and make one more example. Let's say we have a matrix as a single observation (no time series data):
   col1 col2  target
0     1    a      20
1     2    a      30
2     3    b      30
3     4    b      40
I would like to predict one value, 'target', for each row/instance. Each instance depends on the other instances. The order of the rows is irrelevant, and the number of rows in each observation can change.
You are looking for a permutation invariant operation on the columns.
One way of achieving this would be to apply a column-wise operation, followed by a global pooling operation.
How that achieves your goal:
Column-wise operations are permutation equivariant; that is, applying the operation to the columns and then permuting the output is the same as permuting the columns and then applying the operation.
A global pooling operation (e.g., max-pool, avg-pool) across the columns is permutation invariant: the result of an average pool does not depend on the order of the columns.
Applying a permutation invariant operation on top of a permutation equivariant one results in an overall permutation invariant function.
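To make the column-wise-then-pool idea concrete, here is a minimal PyTorch sketch (not from the question; the layer sizes, sequence length and the prediction head are placeholder assumptions):

import torch
import torch.nn as nn

class ColumnPoolModel(nn.Module):
    def __init__(self, seq_len: int, hidden: int = 64):
        super().__init__()
        # Applied independently to every column with the same weights (equivariant)
        self.column_encoder = nn.Sequential(
            nn.Linear(seq_len, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        self.head = nn.Linear(hidden, 1)  # placeholder head, e.g. predicting the next A

    def forward(self, columns: torch.Tensor) -> torch.Tensor:
        # columns: (batch, num_columns, seq_len) -- B and C stacked in any order
        h = self.column_encoder(columns)   # (batch, num_columns, hidden), equivariant
        pooled = h.mean(dim=1)             # (batch, hidden), invariant to column order
        return self.head(pooled)

# Shuffling the columns does not change the output:
x = torch.randn(4, 2, 30)                  # 2 dependent columns, 30 time steps
model = ColumnPoolModel(seq_len=30)
assert torch.allclose(model(x), model(x[:, [1, 0], :]))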
Additionally, you should look at self-attention layers, which are also permutation equivariant.
What I would try is:
Learn a representation (RNN/Transformer) for a single time series, and apply it to A, B and C.
Learn a transformer from the representation of A to those of B and C: that is, use the representation of A as the "query" and those of B and C as the "keys" and "values".
This will give you a representation of A that is permutation invariant in B and C.
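A sketch of that query/key/value step with PyTorch's built-in attention; the encoders producing rep_a and rep_bc are assumed to exist, and random tensors stand in for their outputs here:

import torch
import torch.nn as nn

d = 64                                        # embedding size (placeholder)
attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)

rep_a = torch.randn(8, 1, d)                  # representation of A (the "query")
rep_bc = torch.randn(8, 2, d)                 # representations of B and C ("keys"/"values")

fused_a, _ = attn(query=rep_a, key=rep_bc, value=rep_bc)
# fused_a depends on B and C only through a softmax-weighted sum,
# so swapping B and C leaves it unchanged (permutation invariant).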
Update (Aug 3rd, 2022):
For the case of "observations" with varying number of rows, and fixed number of columns:
I think you can treat each row as a "token" (with a fixed dimension equal to the number of columns) and apply a Transformer encoder, predicting the target for each "token" from the encoded tokens.
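A minimal PyTorch sketch of that setup (the feature count, model size and head are placeholders; leaving out positional encodings is what makes the encoder indifferent to row order):

import torch
import torch.nn as nn

class RowTransformer(nn.Module):
    def __init__(self, num_features: int, d_model: int = 64):
        super().__init__()
        self.embed = nn.Linear(num_features, d_model)   # each row becomes a "token"
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)                # one target per token/row

    def forward(self, rows):
        # rows: (batch, num_rows, num_features); num_rows may differ between observations
        h = self.encoder(self.embed(rows))
        return self.head(h).squeeze(-1)                  # (batch, num_rows)

model = RowTransformer(num_features=2)
print(model(torch.randn(1, 4, 2)).shape)                 # torch.Size([1, 4])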
I am currently trying to find optimal portfolio weights by optimizing a utility function that depends on those weights. I have a dataframe containing the time series of returns, named rets_optns. rets_optns has 100 groups of 8 assets (800 columns: the 1st group is columns 1 to 8, the 2nd group is columns 9 to 16, and so on). I also have a dataframe named rf_options with 100 columns that hold the corresponding risk-free rate for each group of returns. I want to create a new dataframe of portfolio returns, using this formula: portfolio returns = rf_optns + sum(weights * rets_optns). It should have 100 columns, and each column should represent the returns of a portfolio composed of the 8 assets belonging to the same group. I currently have:
import numpy as np

def pret(rf, weights, rets):
    return rf + np.sum(weights * (rets - rf))
It does not work.
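For what it's worth, here is one way the per-group calculation could be written, simply following the rf + sum(weights * (rets - rf)) formula in pret above. portfolio_returns is a hypothetical helper name, and it assumes weights is a length-8 array and that column i of the risk-free dataframe lines up with columns 8*i to 8*i+7 of rets_optns; adjust to your actual layout.

import numpy as np
import pandas as pd

def portfolio_returns(rf_optns: pd.DataFrame, rets_optns: pd.DataFrame,
                      weights: np.ndarray) -> pd.DataFrame:
    out = {}
    for i, rf_col in enumerate(rf_optns.columns):
        group = rets_optns.iloc[:, 8 * i: 8 * (i + 1)]   # the i-th group of 8 assets
        rf = rf_optns[rf_col]
        excess = group.sub(rf, axis=0)                    # rets - rf, row by row
        out[rf_col] = rf + excess.mul(weights, axis=1).sum(axis=1)
    return pd.DataFrame(out)                              # 100 columns, one per portfolio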
I have this CSV file
http://www.sharecsv.com/s/2503dd7fb735a773b8edfc968c6ae906/whatt2.csv
I want to create three columns, 'MT_Value', 'M_Value', and 'T_Data'. The first should hold the mean of the data grouped by year and month, which I accomplished by doing this:
data.groupby(['Year','Month']).mean()
But for M_Value I need the mean of only the values different from zero, and for T_Data I need the count of the values that are zero divided by the total number of values. I guess that for the last one I need to divide the number of zero values by the total number of values in each group, but honestly I am a bit lost. I looked on Google and they say something about transform, but I didn't understand it very well.
Thank you.
You could do something like this:
(data.assign(M_Value=data.Valor.where(data.Valor != 0),
             T_Data=data.Valor.eq(0))
     .groupby(['Year', 'Month'])
     [['Valor', 'M_Value', 'T_Data']]
     .mean()
)
Explanation: assign will create new columns with respective names. Now
data.Valor.where(data.Valor!=0) will replace 0 values with nan, which will be ignored when we call mean().
data.Valor.eq(0) marks zeros as True (1) and other values as False (0). So when you call mean(), you compute count(Valor==0)/total_count().
Output:
                 Valor    M_Value    T_Data
Year Month
1970 1        2.306452   6.500000  0.645161
     2        1.507143   4.688889  0.678571
     3        2.064516   7.111111  0.709677
     4       11.816667  13.634615  0.133333
     5        7.974194  11.236364  0.290323
...                ...        ...       ...
1997 10       3.745161   7.740000  0.516129
     11      11.626667  21.800000  0.466667
     12       0.564516   4.375000  0.870968
1998 1        2.000000  15.500000  0.870968
     2        1.545455   5.666667  0.727273

[331 rows x 3 columns]
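For reference, the same three statistics can also be produced with named aggregations (available in pandas 0.25+); this is just an alternative spelling of the code above, assuming data is the frame loaded from the CSV:

(data.groupby(['Year', 'Month'])
     .agg(Valor=('Valor', 'mean'),
          M_Value=('Valor', lambda s: s[s != 0].mean()),
          T_Data=('Valor', lambda s: s.eq(0).mean())))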
I have a cube built on a fact table which, amongst other columns, includes Balance and Percentage. I have a calculation which multiplies the Balance by the Percentage to obtain an Adjusted Value. I now need to divide this Adjusted Value by the sum of all balances to get weighted values.
The problem is that this sum of all balances doesn't apply to the whole dataset. Rather, it should be calculated on a filtered subset of the whole data. This filtering is being done in Excel using a pivot table, so I do not know what conditions will be used to filter.
So, for example, this would be the pivot I'd like to see:
ID  Balance  Percentage  Adjusted Value  Weighted Adjusted Value
1   100      1.5         115             0.38 (i.e. 115/300)
2   50       2           51              0.17 (i.e. 51/300)
3   150      1           150             0.50 (i.e. 150/300)
300 is obtained by summing the balances of the rows that appear in the filtered pivot.
Can this calculation somehow be done in OLAP? Or is it impossible to compute this sum with what I know?
Yes, it should be possible; e.g., assuming 1/2/3 are the children of a common parent, the following calculated measure should do the trick:
WAV = AV / ( id.parent, Balance )
If not, we would need more information about the actual data model and query.
Below is some data:
Test  Day1  Day2  Score
A     1     2     100
B     1     3     62
C     3     4     90
D     2     4     20
E     4     5     80
I am trying to take the values from columns 'Day1' and 'Day2' and use them to select row numbers in the 'Score' column. For example, for Test A I would like to find the sum of 100 and 62, because those are the values of the first and second rows of Score. For Test B I would like to find the sum of 100, 62 and 90.
Is there any way to do this in the Compute Variable window, found in the menu Transform > Compute Variable?
I tried the following:
Score(MEAN(VALUE(Day1), VALUE(DAY2)))
This is not the proper way to reference the cell location of Score, and I received an error.
Can anyone help?
Thank you!
You really have two different datasets here. One is a dataset of scores numbered 1 through 5.
The other is a dataset that includes indexes into the score dataset. So the steps would be something like this.
First, take the scores dataset and transpose it so that it has one row and 5 columns (Data > Transpose).
Then match that dataset to each case in the main dataset (Data > Merge Files > Add Variables).
Next you have to resort to using syntax directly.
You would declare a vector for the scores (VECTOR).
Finally, you use COMPUTE to index into the scores.
For your real problem, I suppose that you might have batches of scores and maybe there are some gaps. The Restructure Data Wizard (convert cases into variables) can help you generalize this, but let's not go there yet.
HTH,
Jon Peck
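Purely for illustration, the row-range lookup being described can also be written in a few lines of pandas; this is only a sketch using the small table from the question, to make the intended computation concrete, not an SPSS solution:

import pandas as pd

df = pd.DataFrame({"Test": list("ABCDE"),
                   "Day1": [1, 1, 3, 2, 4],
                   "Day2": [2, 3, 4, 4, 5],
                   "Score": [100, 62, 90, 20, 80]})

scores = df["Score"].tolist()                     # Score treated as a 1-indexed lookup vector
df["ScoreSum"] = [sum(scores[d1 - 1:d2])          # rows Day1 through Day2, inclusive
                  for d1, d2 in zip(df["Day1"], df["Day2"])]
# Test A -> 162 (100 + 62), Test B -> 252 (100 + 62 + 90)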