FOREX: How to convert a USD-denominated price per pip, per lot, into EUR? - metatrader4

I am asking about a FOREX trading account that is denominated in EUR.
I know that 1.00 lot is 10 USD per one pip, 0.10 lot is 1 USD per one pip and 0.01 lot is 0.1 USD per one pip.
But since my account is in EUR,
1.00 lot is how many EUR per pip?
0.10 lot is how many EUR per pip?
0.01 lot is how many EUR per pip?
Is it the same?

Facts
Given that the (inherently variable) EURUSD exchange rate is 1.06000 at the moment, the pip value per standard lot is USD 10.
That means one receives USD 10.60 for each EUR 10.00 exchanged.
Answer
You need to pay EUR 9.43 to receive those USD 10, plus some transaction costs (either deferred or hidden in the spread markup, etc.), which are itemised and stipulated in the Terms and Conditions agreed with your Bank / Broker of choice.
Thus one needs, in the above idealised / simplified case:
EUR 9.43 per pip per 1.00 standard Lot
EUR 0.94 per pip per 0.10 standard Lot
EUR 0.09 per pip per 0.01 standard Lot
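As a minimal sketch, the idealised conversion (spread and fees ignored, rate as above) is just a division by the EURUSD rate; in Python:

eurusd = 1.06000                                    # quoted EURUSD rate at the moment
pip_value_usd = {1.00: 10.0, 0.10: 1.0, 0.01: 0.1}  # USD per pip for each lot size
for lot, usd in pip_value_usd.items():
    print(f"{lot:.2f} lot: {usd / eurusd:.2f} EUR per pip")
# 1.00 lot: 9.43 EUR per pip
# 0.10 lot: 0.94 EUR per pip
# 0.01 lot: 0.09 EUR per pip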
Explicit Caution
One ought always to check and double-check all applicable trading Terms & Conditions with a representative of one's Bank or Broker of choice, as additional conditions may and do apply. If in any doubt, request a written, signed official statement about any fact, step or measure one is not absolutely sure about, so as to avoid any later adverse impact in case some divergent trade-execution or NTO practice appears in specific cases.


Include belonging into model

If you had data like (prices and market-cap are not real)
Date        Stock   Close   Market-cap   GDP
15.4.2010   Apple    7.74   1.03         ...
15.4.2010   VW      50.03   0.8          ...
15.5.2010   Apple    7.80   1.04         ...
15.5.2010   VW      52.04   0.82         ...
where Close is the y you want to predict and Market-cap and GDP are your x-variables, would you also include Stock in your model as another independent variable, since it could for example be that price formation works differently for Apple than for VW?
If yes, how would you do it? My idea is to assign 0 to Apple and 1 to VW in the column Stock.
You first need to identify what exactly you are trying to predict. As it stands, you have longitudinal data, i.e. multiple measurements from the same company over a period of time.
Are you trying to predict the close price based on market cap + GDP?
Or are you trying to predict the future close price based on previous close price measurements?
You could stratify based on company name, but it really depends on what you are trying to achieve. What is the question you are trying to answer?
You may also want to take the following considerations into account:
close prices measured at different times for the same company are correlated with each other;
correlations between two measurements taken soon after each other will be stronger than correlations between two measurements far apart in time.
There are four assumptions associated with a linear regression model:
Linearity: The relationship between X and the mean of Y is linear.
Homoscedasticity: The variance of the residuals is the same for any value of X.
Independence: Observations are independent of each other.
Normality: For any fixed value of X, Y is normally distributed.
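If you do include the company as a predictor, here is a minimal sketch (pandas assumed; values copied from the example table, GDP left out for brevity) of turning the Stock column into the 0/1 indicator suggested in the question:

import pandas as pd

df = pd.DataFrame({
    "Stock":     ["Apple", "VW", "Apple", "VW"],
    "Close":     [7.74, 50.03, 7.80, 52.04],
    "MarketCap": [1.03, 0.80, 1.04, 0.82],
})

# One 0/1 column per company; drop_first=True keeps a single indicator,
# which is exactly the 0-for-Apple / 1-for-VW coding from the question.
X = pd.get_dummies(df[["MarketCap", "Stock"]], columns=["Stock"], drop_first=True)
y = df["Close"]
print(X)   # columns: MarketCap, Stock_VW

X and y can then be fed into any regression routine; whether that answers your actual question still depends on the longitudinal structure mentioned above.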

Understanding spacy textcat_multilabel scorer output

I'm trying to understand the output of my textcat_multilabel job. I have 4 text categories and I'm using spacy version 3.2.0 (The methodologies have changed a lot recently and I don't really understand the documentation).
E    #      LOSS TEXTC...   CATS_SCORE   SCORE
0    0        1.00          51.86        0.52
0    200    122.15          52.90        0.53
This is what I have in my config file. (By the way, what is v1?)
scorer = {"@scorers":"spacy.textcat_multilabel_scorer.v1"}
threshold = 0.5
In fact, everything in the standard config file is unchanged from the suggestions except the dropout which I increased to 0.5.
The final row of my job shows these values: 0 8400 2.59 87.29 0.87
I am very impressed with the results that I'm getting with this job. Just need to understand what I'm doing.
E is epochs
# is training iterations / batches (see here)
LOSS_TEXTCAT is the loss of your textcat component. Loss normally fluctuates for the first few iterations and then trends downward. The exact values are meaningless.
SCORE_TEXTCAT is the score of your textcat component on your dev set; see the docs for some details on that.
SCORE is the overall score of your pipeline, a weighted average of any components you have. But you only have a textcat, so it's basically the same as that score.
v1 is just version 1, components are versioned in case they are updated later so you can use older versions with newer code.
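As an illustration of the threshold line in your config (the labels and scores below are made up), textcat_multilabel scores every label independently and assigns each label whose score clears the threshold:

# Hypothetical doc.cats-style output of a multilabel textcat:
scores = {"billing": 0.91, "shipping": 0.42, "returns": 0.67, "other": 0.08}
threshold = 0.5
predicted = [label for label, score in scores.items() if score >= threshold]
print(predicted)  # ['billing', 'returns']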

Should I trim outliers from input features

Almost half of my input feature columns have extreme "outliers", e.g. one column has a mean of 19.6 but a max of 2908.0. Is that OK, or should I trim them to mean + std?
msg_cnt_in_x msg_cnt_in_other msg_cnt_in_y \
count 330096.0 330096.0 330096.0
mean 19.6 2.6 38.3
std 41.1 8.2 70.7
min 0.0 0.0 0.0
25% 0.0 0.0 0.0
50% 3.0 1.0 8.0
75% 21.0 2.0 48.0
max 2908.0 1296.0 4271.0
There is no general answer to that. It depends very much on your problem and data set.
You should look into your data set and check whether these outlier data points are actually valid and important. If they are caused by some errors during data collection you should delete them. If they are valid, then you can expect similar values in your test data and thus the data points should stay in the data set.
If you are unsure, just test both and pick the one that works better.
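If you do decide to cap rather than drop, here is a small pandas sketch (column names taken from the question; the factor k is just one choice to validate, and the question's mean + std corresponds to k=1):

import pandas as pd

def cap_outliers(df: pd.DataFrame, columns, k: float = 1.0) -> pd.DataFrame:
    """Clip each column at mean + k * std; rows are kept, only values are capped."""
    capped = df.copy()
    for col in columns:
        upper = df[col].mean() + k * df[col].std()
        capped[col] = df[col].clip(upper=upper)
    return capped

cols = ["msg_cnt_in_x", "msg_cnt_in_other", "msg_cnt_in_y"]
# df_capped = cap_outliers(df, cols)   # train once on this ...
# df_raw    = df                       # ... and once on the raw data, then compare metrics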

Basic rules for custom cluster configuration when using distributed learning in Cloud ML

I am investigating the use of custom scale tiers in the Cloud Machine Learning API.
Now, I don't know precisely how to design my custom tiers! I basically use a CIFAR-type model, and I decided to use:
# args and config are defined earlier in the script
import yaml

if args.distributed:
    config['trainingInput']['scaleTier'] = 'CUSTOM'
    config['trainingInput']['masterType'] = 'complex_model_m'
    config['trainingInput']['workerType'] = 'complex_model_m'
    config['trainingInput']['parameterServerType'] = 'large_model'
    config['trainingInput']['workerCount'] = 12
    config['trainingInput']['parameterServerCount'] = 4
    yaml.dump(config, open('custom_config.yaml', 'w'))
But I can hardly find any information on how to properly dimension the cluster. Are there any "rules of thumb" out there? Or do we have to try and test?
Many thanks in advance!
I have done a couple of small experiments, which might be worth sharing. My setup wasn't 100% clean, but I think the rough idea is correct.
The model looks like the cifar example, but with a lot of training data. I use averaging, decaying gradient as well as dropout.
The "config" naming is (hopefully) explicit : basically 'M{masterCost}_PS{nParameterServer}x{parameterServerCost}_W{nWorker}x{workerCost}'. For parameter servers, I always use "large_model".
The "speed" is the 'global_step/s'
The "cost" is the total ML unit
And I call "efficiency" the number of 'global_step/second/ML unit'
Here are some partial results:
config cost speed efficiency
0 M1 1 0.5 0.50
1 M6_PS1x3_W2x6 21 10.0 0.48
2 M6_PS2x3_W2x6 24 10.0 0.42
3 M3_PS1x3_W3x3 15 11.0 0.73
4 M3_PS1x3_W5x3 21 15.9 0.76
5 M3_PS2x3_W4x3 21 15.1 0.72
6 M2_PS1x3_W5x2 15 7.7 0.51
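For reference, a small Python sketch of how the cost and efficiency columns above follow from the config naming (the speeds are copied from the table rows; the per-machine ML-unit costs are already encoded in the names):

import re

def cost_from_name(config: str) -> float:
    """Total ML units implied by the 'M{m}_PS{n}x{c}_W{n}x{c}' naming."""
    total = float(re.match(r"M(\d+)", config).group(1))      # master cost
    for n, c in re.findall(r"(?:PS|W)(\d+)x(\d+)", config):   # PS and worker groups
        total += int(n) * int(c)
    return total

speeds = {"M1": 0.5, "M3_PS1x3_W3x3": 11.0, "M3_PS1x3_W5x3": 15.9}  # global_step/s
for name, speed in speeds.items():
    cost = cost_from_name(name)
    print(f"{name}: cost={cost:.0f}, efficiency={speed / cost:.2f}")
# M1: cost=1, efficiency=0.50
# M3_PS1x3_W3x3: cost=15, efficiency=0.73
# M3_PS1x3_W5x3: cost=21, efficiency=0.76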
I know that I should run many more experiments, but I have no time for this.
If I have time, I will dig in deeper.
The main conclusions are:
It might be worth trying a few setups for a small number of iterations, just to decide which config to use before moving on to hyperparameter tuning.
What is good is that the variation is quite limited: going from 0.5 to 0.75 is a 50% efficiency increase, which is significant but not explosive.
For my specific problem, large and expensive units are basically overkill. The best value I can get is using "complex_model_m".

Sequence-dependent calculation in SQL

I have a silo into which I buy material from different suppliers (each time I do this, the added material has a lot number associated with it) and mix it inside. Then I consume mixed material from this silo.
I want to be able to calculate, with SQL, the proportion of each lot in the silo at any given moment.
I know how to do this with sequential programming, but there must be a way to do it in SQL.
The table I have is a history of movements, containing all the necessary information for the calculations:
ID
Date
Lot number (only relevant if it's a buy)
Movement type (buy (1) or consumption (0))
Quantity
Here it is a SQL fiddle for testing: http://sqlfiddle.com/#!3/4ca50/1
In this example, the initial movement is a buy of 10,000 units of lot 1000, then a consumption of 1,000 (so 9,000 remaining) and then another buy of 5,000 units of lot 2000.
Now, if I run the desired query up to that moment, it should return:
Lot 1000: 9,000 units (64.29%)
Lot 2000: 5,000 units (35.71%)
Next there are two consumptions for a total of 4,000 units, so at that point there are 10,000 units remaining in the silo.
If I run the query up to then, it should keep the same percentages but with different amounts:
Lot 1000: 6,429 units (64.29%)
Lot 2000: 3,571 units (35.71%)
Then, after another buy of 8,000 units of lot 3000, there are 18,000 units total in the silo, so the expected output is:
Lot 1000: 6,429 units (35.71%)
Lot 2000: 3,571 units (19.83%)
Lot 3000: 8,000 units (44.44%)
And finally, after the last consumption of 2,000, there are 16,000 remaining in the same percentages:
Lot 1000: 5,715 units (35.71%)
Lot 2000: 3,173 units (19.83%)
Lot 3000: 7,112 units (44.44%)
I expect you get the idea... basically each buy changes the percentage composition, and each consumption keeps the proportions but changes the quantities.
I don't even know where to begin with the query... maybe it isn't even doable in pure SQL?
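For reference, the sequential logic I mentioned looks roughly like this in Python (the split of the two consumptions totalling 4,000 into 3,000 + 1,000 is only assumed for the example); this is the proportional draw-down a SQL query would need to reproduce:

movements = [
    ("buy", 1000, 10000),
    ("consume", None, 1000),
    ("buy", 2000, 5000),
    ("consume", None, 3000),
    ("consume", None, 1000),
    ("buy", 3000, 8000),
    ("consume", None, 2000),
]

silo = {}  # lot number -> quantity currently in the silo
for mtype, lot, qty in movements:
    if mtype == "buy":
        silo[lot] = silo.get(lot, 0) + qty
    else:
        # A consumption draws down every lot proportionally, so the
        # percentages stay the same while the quantities shrink.
        total = sum(silo.values())
        for l in silo:
            silo[l] -= qty * silo[l] / total

total = sum(silo.values())
for lot, qty in sorted(silo.items()):
    print(f"Lot {lot}: {qty:,.0f} units ({100 * qty / total:.2f}%)")
# Lot 1000: 5,714 units (35.71%)   (slight rounding differences vs. the hand-computed values above)
# Lot 2000: 3,175 units (19.84%)
# Lot 3000: 7,111 units (44.44%)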