Function iMA returns a different value than expected (MQL5)

I'm using MQL5 (my first code).
I want to use a script that uses an MA, but first I wanted to confirm the value to verify I'm doing it correctly, using very basic code in a script:
double x=0;
x = iMA(Symbol(),Period(),100,0,MODE_SMA,PRICE_CLOSE);
Alert("The actual MA from last 100 points of EURUSD actually is: " + x;
The expected value is near the actual price... 1.23456, but this function is returning 10.00000 or 11.0000.
I believe I'm missing something, and the help page https://www.mql5.com/es/docs/indicators/ima is not quite clear enough.
I have also seen a similar construct, MA[0], which seems to give the moving average at a specific candle, but I don't know how to set the period range (100) on it, or whether it relates to the Close/Open values. I didn't find any specific help page to review.
Any ideas are very appreciated!!!

x should be an int: it is a handle to the MA indicator. Each indicator, when created in MT5, receives its own handle, and you use that handle later to get what you need. If you need several MAs, create several handles and give each of them a different name (x1, x2, or something more meaningful), as in the sketch below. The expert advisors included in the default build of MT5 are good examples of what to do.
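For example, a minimal sketch of holding two independent handles (the names and periods here are illustrative):
int maFast = iMA(Symbol(), Period(), 20, 0, MODE_SMA, PRICE_CLOSE);   // handle for a fast SMA
int maSlow = iMA(Symbol(), Period(), 100, 0, MODE_SMA, PRICE_CLOSE);  // handle for a slow SMA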

The iMA function returns the handle of the specified technical indicator, not the moving-average value itself.
For example, to get the value of the moving average you can use this (in MQL5):
int EMA34Handler = iMA(NULL, 0, 34, 0, MODE_EMA, PRICE_CLOSE);
double EMA34Buffer[];
CopyBuffer(EMA34Handler, 0, 0, 1, EMA34Buffer);  // copy the most recent value from indicator buffer 0
double EMA34Value = EMA34Buffer[0];

Related

Is there a way to turn off a vehicle signal in SUMO?

I know that you can turn on a vehicle signal (for example, the left indicator) in traci using:
traci.vehicle.setSignals(vehID, int)
where the integer related to the specific signal can be found using the following link (https://sumo.dlr.de/docs/TraCI/Vehicle_Signalling.html#signaling), but is there a way of turning off a specific signal that would be otherwise turned on by the program (i.e., a setSignalOff)?
I think that there is a function in the underlying C++ code (switchOffSignal() in MSVehicle.h) but there doesn't appear to be a traci command that turns off a specific signal.
I appreciate that it is (generally) a pleasant visual aesthetic and has no impact on vehicle behaviour, but it would be very useful for what I am trying to do!
Switching off signals should work from TraCI. By using something like traci.vehicle.setSignals("ego", 0), I can switch them off. Be aware that this will be reset after the simulation step, so you may have to do it in every timestep.
So, Michael is right in that:
traci.vehicle.setSignals("ego", 0)
should turn off all signals (although the signals still appeared to be on for me visually, which confused me initially).
To turn off individual signals but keep the others on you need to:
1. For each "on" signal, find the value of 2^n, where n is the signal's bit position (listed at https://sumo.dlr.de/docs/TraCI/Vehicle_Signalling.html).
2. Sum all these 2^n values (let's call this sum x) and use it in the setSignals function: traci.vehicle.setSignals("ego", x).
So for example, if we want the brake light, the right indicator and the high beam on (but all the other signals off) we would do:
RightIndicatorValue = pow(2,0)
BrakeLightValue = pow(2,3)
HighBeamValue = pow(2,6)
SignalValue = RightIndicatorValue + BrakeLightValue + HighBeamValue
traci.vehicle.setSignals("ego", SignalValue)
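If instead you want to switch off just one signal while keeping whatever else is currently set, one option is to read the current bitmask back and clear the relevant bit. A minimal sketch (the helper name is my own; traci.vehicle.getSignals is the matching getter, and, as noted above, the mask may need re-applying every timestep):
import traci

def switch_off_signal(veh_id, bit):
    # read the current bitmask and clear only the requested bit
    current = traci.vehicle.getSignals(veh_id)
    traci.vehicle.setSignals(veh_id, current & ~(1 << bit))

switch_off_signal("ego", 0)  # e.g. clear the right indicator (bit 0)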

X and Y inputs in LabVIEW

I am new to LabVIEW and I am trying to read code written in LabVIEW. The block diagram is this:
This is the program to input x and y functions into the voltage input. It is meant to feed an input voltage in different forms (sine, heart shape, etc.) into the x and y axes of a fast-steering mirror or galvanometer mirror.
The x and y function controls are for entering a formula for a function, and then we use the "evaluation single value" function to feed the result into a DAQ Assistant.
I understand that { 2*(|-Mpi|)/N }*i + (-Mpi*pi) goes into the x value. However, I don't understand why we use this kind of formula. Why do we need to assign a negative value and then take the absolute value of -M*pi? Also, I don't understand why we need to divide by N and then multiply by i. And finally, why do we need to add -Mpi again? If you can provide any hints about this I would really appreciate it.
This is just a complicated way to write the code/formula. Given what the code looks like (unnecessary wire bends, duplicate loop-input-tunnels, hidden wires, unnecessary coercion dots, failure to use appropriate built-in 'negate' function) not much care has been given in writing it. So while it probably yields the correct results you should not expect it to do so in the most readable way.
To answer your specific questions:
Why do we need to assign a negative value and then take the absolute value?
We don't. We can just move the negation immediately before the last addition or change that to a subtraction:
{ 2*(|Mpi|)/N }*i - Mpi*pi
And as @yair pointed out: we are not assigning a value here, we are basically flipping the sign of whatever value the user entered.
Why do we need to divide by N and then multiply by i?
This gives you a fraction between 0 and 1, no matter how many steps you do in your for-loop. Think of N as a sampling rate. I.e. your mirrors will always do the same movement, but a larger N just produces more steps in between.
Why do we need to add -Mpi again?
I would strongly assume this is some kind of quick-and-dirty workaround for a bug that has not been fixed properly. Looking at the code, it seems this -Mpi*pi term was added later in the development process. And while I don't know what the expected values are, I believe that multiplying only one of the summands by pi is probably wrong.
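For reference, a small Python sketch of what each loop iteration i appears to compute (Mpi and N stand in for the front-panel controls; this just transcribes the formula discussed above):
import math

def x_values(Mpi, N):
    # 2*|Mpi|/N * i ramps from 0 toward 2*|Mpi| as i runs from 0 to N-1;
    # the trailing -Mpi*pi term then shifts the whole ramp
    return [2 * abs(Mpi) / N * i - Mpi * math.pi for i in range(N)]

print(x_values(1.0, 8))  # a larger N gives the same sweep with finer steps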

Coalesce returning wrong value after a function call followed by multiplication

I have a report that presents information, and I'm getting inconsistent results based on what appears to be some issue with a SQL view, or possibly a SQL function nested within the view. I've tried to find a way to debug the SQL view; however, it looks like SSMS will only debug stored procedures, so I'm not really sure how to step through and see what is happening. It really has me stumped, and I can't help but wonder if it isn't a rounding issue.
GetItemAverageCost RETURNS DECIMAL(12,2), and the data type of sitli.QuantityIssuedAtStockUOM is System.Int64 / bigint (side note: I'm confused about why LINQPad shows two data types for that column. In the tree on the left, after expanding the sitli table and hovering over QuantityIssuedAtStockUOM, the balloon BigInt NOT NULL pops up, but when I Take(100) and hover over the column in the result set it says System.Int64). Anyroad, here is the COALESCE expression:
COALESCE((dbo.GetItemAverageCost(ItemModel.IDItemModel)*sitli.QuantityIssuedAtStockUOM) / ISNULL(NULLIF(ItemModel.UOMFactor, 0),1),0) -- 259.73
--ROUND(COALESCE((dbo.GetItemAverageCost(ItemModel.IDItemModel)*sitli.QuantityIssuedAtStockUOM) / ISNULL(NULLIF(ItemModel.UOMFactor, 0),1),0),2) -- 259.73
--COALESCE(ROUND((dbo.GetItemAverageCost(ItemModel.IDItemModel)*sitli.QuantityIssuedAtStockUOM) / ISNULL(NULLIF(ItemModel.UOMFactor,2), 0),1),0) -- 259.70
--COALESCE((ROUND(dbo.GetItemAverageCost(ItemModel.IDItemModel),2)*sitli.QuantityIssuedAtStockUOM) / ISNULL(NULLIF(ItemModel.UOMFactor, 0),1),0) -- 259.73
Original / wrong COALESCE:
COALESCE(dbo.GetItemAverageCost(ItemModel.IDItemModel)*sitli.QuantityIssuedAtStockUOM,0)
I'm not sure what else to include, but I haven't found many resources online that offer insight into this kind of situation. Many thanks in advance for your time.
EDIT: GetItemAverageCost:
ALTER FUNCTION GetItemAverageCost
(
    @IDItemModel varchar(8000)
)
RETURNS DECIMAL(16,4)
--RETURNS DECIMAL(12,2)
AS
BEGIN
    RETURN
    (
        SELECT
            COALESCE(AVG(poli.UnitPrice), 0) as AvgCost
            -- COALESCE(ROUND(AVG(poli.UnitPrice),0),2) as AvgCost 260.00
        FROM ItemModel im
        LEFT JOIN VendorItem vi
            ON im.IDItemModel = vi.IDItemModel
        JOIN POLineItem poli
            ON vi.IDVendorItem = poli.IDVendorItem
        WHERE
            im.IDItemModel = @IDItemModel
        GROUP BY
            im.IDItemModel,
            im.ItemNumber
    )
END
To fix: have your function return DECIMAL(16,4) instead of DECIMAL(12,2), and then ROUND to two decimals after multiplying by the quantity.
"When a given report is run, there are no errors thrown. But the calculations are off. For example a part number 12 shows a quantity of 24 were issued at a cost of $259.73. However, each part costs $10.82 so the calculation should be $259.68. I'm not sure where the difference of 5 cents is coming from. The $259.73 is the result of the COALESCE function above. Hopefully this makes sense"
Run the SQL for part 12 alone, independent of the function, and you'll see the average is 10.822083333333333333333333333333, i.e. $10.82 plus 5/24 of a cent:
24 * unitprice = $259.73
unitprice = 259.73 / 24 = $10.82 + 5/24 of a cent
You'll see the variance is $0.05:
($10.82 + 5/24 of a cent) * 24 = $259.73
$10.82 * 24 = $259.68
That 5-cent difference doesn't divide evenly among the 24 units, hence the rounding error when using your function.
When you go to the store and buy something, it's always in amounts rounded to the whole penny. When you go to the gas station, they charge to fractions of a cent (or, in your case, 4 decimal places).
When prices involve fractions of pennies, the rounding isn't done until after multiplying by the quantity, or when actual cash needs to change hands. If it's done too early, you get the rounding errors you are seeing.
That way you eliminate over/under-charging rounding errors, and at most you'll charge a fraction of a penny more or less than you should.
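To see the two orderings side by side, here is a quick illustration using the literal numbers from above:
SELECT ROUND(259.73 / 24, 2) * 24 AS round_then_multiply, -- 259.68
       ROUND(259.73 / 24 * 24, 2) AS multiply_then_round  -- 259.73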
Okay, so many thanks to all who helped along the way. There were a couple of issues preventing me from getting the correct answer. For one thing, I was working with the incorrect expression for much of the time. Secondly, after I figured out which expression to use, it was a matter of placing the ROUND function in the correct place.
So, the expression I should have been using to get my average cost is:
COALESCE(dbo.GetItemAverageCost(Item.IDItemModel) / ISNULL(NULLIF(UOMFactor, 0),1),0)
When I moved this into the WorkOrderItemInstructionPartCosts View, my report was then producing $10.82. Then I added *sitli.QuantityIssuedAtStockUOM to the line and was getting $259.73. Then I applied the ROUND function to the COALESCE function and voila! the correct value ($259.68) is being produced.
The final line looks like this:
ROUND(COALESCE(dbo.GetItemAverageCost(ItemModel.IDItemModel) / ISNULL(NULLIF(UOMFactor, 0),1),0),2)*sitli.QuantityIssuedAtStockUOM
Once again, thank you to all who helped me in the effort to resolve this and sorry for not having accurate information to begin with.
Best,
Jonathan

Elm: avoiding a Maybe check each time

I am building a work-logging app which starts by showing a list of projects that I can select, and then when one is selected you get a collection of other buttons, to log data related to that selected project.
I decided to have a selected_project : Maybe Int in my model (projects are keyed off an integer id), which gets filled with Just 2 if you select project 2, for example.
The buttons that appear when a project is selected send messages like AddMinutes 10 (i.e. log 10 minutes of work to the selected project).
Obviously the update function will receive one of these types of messages only if a project has been selected but I still have to keep checking that selected_project is a Just p.
Is there any way to avoid this?
One idea I had was to have the buttons send a message which contains the project id, such as AddMinutes 2 10 (i.e. log 10 minutes of work to project 2). To some extent this works, but I now get a duplication -- the Just 2 in the model.selected_project and the AddMinutes 2 ... message that the button emits.
Update
As Simon notes, the repeated check that model.selected_project is a Just p has its upside: the model stays more decoupled from the UI. For example, there might be other UI paths that update projects, where you don't need to have first selected a project.
To avoid having to check the Maybe each time you need a function which puts you into a context wherein the value "wrapped" by the Maybe is available. That function is Maybe.map.
In your case, to handle the AddMinutes Int message you can simply call: Maybe.map (functionWhichAddsMinutes minutes) model.selected_project.
Clearly, there's a little bit more to it since you have to produce a model, but the point is you can use Maybe.map to perform an operation if the value is available in the Maybe. And to handle the Maybe.Nothing case, you can use Maybe.withDefault.
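As a minimal sketch (logMinutes is a hypothetical helper that applies the logged minutes for the given project to the model):
update : Msg -> Model -> Model
update msg model =
    case msg of
        AddMinutes minutes ->
            model.selected_project
                |> Maybe.map (\projectId -> logMinutes projectId minutes model)
                |> Maybe.withDefault model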
At the end of the day, is this any better than using a case expression? Maybe, maybe not (pun intended).
Personally, I have used the technique of providing the ID along with the message and I was satisfied with the result.

Circumventing R's `Error in if (nbins > .Machine$integer.max)`

This is a saga which began with the problem of how to do survey weighting. Now that I appear to be doing that correctly, I have hit a bit of a wall (see previous post for details on the import process and where the strata variable came from):
> require(foreign)
> ipums <- read.dta('/path/to/data.dta')
> require(survey)
> ipums.design <- svydesign(id=~serial, strata=~strata, data=ipums, weights=~perwt)
Error in if (nbins > .Machine$integer.max) stop("attempt to make a table with >= 2^31 elements") :
missing value where TRUE/FALSE needed
In addition: Warning messages:
1: In pd * (as.integer(cat) - 1L) : NAs produced by integer overflow
2: In pd * nl : NAs produced by integer overflow
> traceback()
9: tabulate(bin, pd)
8: as.vector(data)
7: array(tabulate(bin, pd), dims, dimnames = dn)
6: table(ids[, 1], strata[, 1])
5: inherits(x, "data.frame")
4: is.data.frame(x)
3: rowSums(table(ids[, 1], strata[, 1]) > 0)
2: svydesign.default(id = ~serial, weights = ~perwt, strata = ~strata,
data = ipums)
1: svydesign(id = ~serial, weights = ~perwt, strata = ~strata, data = ipums)
This error seems to come from the tabulate function, which I hoped would be straightforward enough to circumvent, first by changing .Machine$integer.max
> .Machine$integer.max <- 2^40
and, when that didn't work, by redefining the whole source of tabulate:
> tabulate <- function(bin, nbins = max(1L, bin, na.rm=TRUE))
{
    if(!is.numeric(bin) && !is.factor(bin))
        stop("'bin' must be numeric or a factor")
    #if (nbins > .Machine$integer.max)
    if (nbins > 2^40) #replacement line
        stop("attempt to make a table with >= 2^31 elements")
    .C("R_tabulate",
       as.integer(bin),
       as.integer(length(bin)),
       as.integer(nbins),
       ans = integer(nbins),
       NAOK = TRUE,
       PACKAGE="base")$ans
}
Neither circumvented the problem. Apparently this is one reason why the ff package was created, but what worries me is the extent to which this is a problem I cannot avoid in R. This post seems to indicate that even if I were to use a package that would avoid this problem, I would only be able to access 2^31 elements at a time. My hope was to use sql (either sqlite or postgresql) to get around the memory problems, but I'm afraid I'll spend a while getting that to work, only to run into the same fundamental limit.
Attempting to switch back to Stata doesn't solve the problem either. Again see the previous post for how I use svyset, but the calculation I would like to run causes Stata to hang:
svy: mean age, over(strata)
Whether throwing more memory at it will solve the problem I don't know. I run R on my desktop which has 16 gigs, and I use Stata through a Windows server, currently setting memory allocation to 2000MB, but I could theoretically experiment with increasing that.
So in sum:
Is this a hard limit in R?
Would sql solve my R problems?
If I split it up into many separate files would that fix it (a lot of work...)?
Would throwing a lot of memory at Stata do it?
Am I seriously barking up the wrong tree somehow?
Yes, R uses 32-bit indexes for vectors, so they can contain no more than 2^31 - 1 entries, and you are trying to create something with 2^40. There is talk of introducing 64-bit indexes, but that is some way off from appearing in R. Vectors have the stated hard limit, and that is it as far as base R is concerned.
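You can confirm the limit from within R itself:
> .Machine$integer.max  # largest representable integer index: 2^31 - 1
[1] 2147483647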
I am too unfamiliar with the details of what you are doing to offer any further advice on the other parts of your question.
Why do you want to work with the full data set? Wouldn't a smaller sample that fits into the restrictions R places on you be just as useful? You could use SQL to store all the data and query it from R to return a random subset of a more appropriate size, as sketched below.
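For instance, a minimal sketch using SQLite from R (the file and table names are illustrative):
library(RSQLite)
con <- dbConnect(SQLite(), "ipums.db")
dbWriteTable(con, "ipums", ipums)   # store the full data set once
samp <- dbGetQuery(con, "SELECT * FROM ipums ORDER BY RANDOM() LIMIT 100000")  # draw a random subset
dbDisconnect(con)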
Since this question was asked some time ago, I'd like to point out that my answer here uses version 3.3 of the survey package.
If you check the code of svydesign, you can see that the function that causes all the problems is inside a check step that determines whether you should set the nest parameter to TRUE or not. This step can be disabled by setting the option check.strata=FALSE.
Of course, you shouldn't disable a check step unless you know what you are doing. In this case, you should be able to decide yourself whether you need to set the nest option to TRUE or FALSE. nest should be set to TRUE when the same PSU (cluster) id is recycled in different strata.
Concretely for the IPUMS dataset, since you are using the serial variable for cluster identification and serial is unique for each household in a given sample, you may want to set nest to FALSE.
So, your survey design line would be:
ipums.design <- svydesign(id=~serial, strata=~strata, data=ipums, weights=perwt, check.strata=FALSE, nest=FALSE)
Extra advice: even after circumventing this problem, you will find that the code is pretty slow unless you remap strata to the range 1 to length(unique(ipums$strata)):
ipums$strata <- match(ipums$strata,unique(ipums$strata))
Both @Gavin and @Martin deserve credit for this answer, or at least for leading me in the right direction. I'm mostly answering it separately to make it easier to read.
In the order I asked:
1. Yes, 2^31 is a hard limit in R, though it seems to matter what type it is (which is a bit strange, given that it is the length of the vector, rather than the amount of memory (of which I have plenty), that is the stated problem). Do not convert strata or id variables to factors; that will just fix their length and nullify the effects of subsetting (which is the way to get around this problem).
2. sql could probably help, provided I learn how to use it correctly. I did the following test:
library(multicore) # make svy fast!
ri.ny <- subset(ipums, statefips_num %in% c(36, 44))
ri.ny.design <- svydesign(id=~serial, weights=~perwt, strata=~strata, data=ri.ny)
svyby(~incwage, ~strata, ri.ny.design, svymean, data=ri.ny, na.rm=TRUE, multicore=TRUE)
ri <- subset(ri.ny, statefips_num==44)
ri.design <- svydesign(id=~serial, weights=~perwt, strata=~strata, data=ri)
ri.mean <- svymean(~incwage, ri.design, data=ri, na.rm=TRUE)
ny <- subset(ri.ny, statefips_num==36)
ny.design <- svydesign(id=~serial, weights=~perwt, strata=~strata, data=ny)
ny.mean <- svymean(~incwage, ny.design, data=ny, na.rm=TRUE, multicore=TRUE)
And found the means to be the same, which seems like a reasonable test.
So: in theory, provided I can split up the calculation by either using plyr or sql, the results should still be fine.
3. See 2.
4. Throwing a lot of memory at Stata definitely helps, but now I'm running into annoying formatting issues. I seem to be able to perform most of the calculations I want (much more quickly and with more stability as well), but I can't figure out how to get them into the form I want. I will probably ask a separate question on this. I think the short version here is that, for big survey data, Stata is much better out of the box.
5. In many ways, yes. Trying to do analysis with data this big is not something I should have taken on lightly, and I'm far from figuring it out even now. I was using the svydesign function correctly, but I didn't really know what was going on. I have a (very slightly) better grasp now, and it's heartening to know I was generally correct about how to solve the problem. @Gavin's general suggestion of trying out small data sets, with external results to compare to, is invaluable, something I should have started ages ago. Many thanks to both @Gavin and @Martin.