How do trail_points and trail_offset work with Takeprofit and Stoploss - scripting

I'm new to Pine Scripts and I'm trying to write a Strategy to test a new Indicator, the below is my code
if Up and (downbefore == true)
    strategy.entry("buy", strategy.long, 1000000)
    strategy.exit("Exit buy", from_entry="buy", profit = 150000, loss = 10000, trail_points = 5000, trail_offset = 100)
    upbefore := true
    downbefore := false
if Down and (upbefore == true)
    strategy.entry("Sell", strategy.short, 1000000)
    strategy.exit("Exit sell", from_entry="Sell", profit = 150000, loss = 10000, trail_points = 5000, trail_offset = 100)
    upbefore := false
    downbefore := true
I want to ask about the behavior of profit and loss each time the price moves 100 units and after it reaches 5000 units of profit.
Will the loss value change from 100000 to 50000 and then to 0 as the price hits the 50000, 100000, and 150000 unit levels?
And if so, what does trail_offset do in this formula, and how does it affect profit and loss when the price hits those levels?
I did read the documentation at https://www.tradingview.com/pine-script-reference/v5/ but it is difficult for me to visualize how it works in a real situation.
If possible, please give me an example of how it works.
Thanks a lot.
NOTE: what makes this hard for me is that there is also trail_price, which seems almost the same as trail_points; I can't tell the difference, since we can just add the points to the execution price and get the trail_price. So why bother using trail_price? Why must we have both trail_price and trail_points?

You nailed it.
"trail_price" specifies the exact price at which the trailing should start.
"trail_points" does the same, but expressed as a number of ticks/pips/USD away from the entry price.
Use either one, not both at the same time; which one to use depends on your strategy's logic.
2/
strategy.exit("Exit buy", from_entry="buy", profit = 150000, loss = 10000, trail_points = 5000, trail_offset = 100)
The way I understand it:
When your long reaches 5000 ticks in profit, the trailing stop activates; from then on, each time the price makes a new high, the stop moves up with it.
There will always be a 100-tick difference between the highest point reached by your trade and the trailed stop loss.
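To make the mechanics concrete, here is a rough Python sketch of that trailing behaviour. This is a simplified model with illustrative prices, not TradingView's actual execution engine:

```python
# Simplified model of a long trailing exit: the trail activates once profit
# reaches `trail_points` ticks, then follows `trail_offset` ticks below the
# highest price seen so far. Illustrative only, not TradingView's engine.
def simulate_long_trailing(entry, prices, trail_points, trail_offset, tick=1.0):
    activation = entry + trail_points * tick
    peak = entry
    stop = None
    for p in prices:
        peak = max(peak, p)
        if peak >= activation:              # trailing is now active
            stop = peak - trail_offset * tick
        if stop is not None and p <= stop:  # price fell back to the stop
            return stop
    return None  # trail never activated or never triggered

# Entry at 1000; price peaks at 1012, then retraces.
exit_price = simulate_long_trailing(
    entry=1000, prices=[1002, 1006, 1012, 1009, 1004],
    trail_points=5, trail_offset=2)
print(exit_price)  # 1010.0: the 1012 peak minus the 2-tick offset
```

Once the 1005 activation level is touched, the stop only ratchets upward with each new high, always staying `trail_offset` ticks behind the peak.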
3/ The best way to understand what's going on is to plot your stop loss.
There are plenty of trailing stop loss + trailing stop activation price examples in the open-source library.
You could start from one of those.

How to reward for two parameters in reinforcement learning?

I have two boxes that should touch each other in a straight line, so I have tried two approaches to the reward:
Approach 1: Reward when the distance is decreasing
This approach works in about 50% of episodes after 100 million training steps. The problem is that the two boxes do not touch each other completely straight, and it fails the other 50%.
Approach 2: Reward when the distance is decreasing and the radius difference between the two boxes is decreasing
So here is the method:
if (distance < lastDistance)
    AddReward(...);
lastDistance = distance;
if (radius < lastRadius)
    AddReward(...);
lastRadius = radius;
The problem with approach 2 is that box 1 (the one that moves) is only rotating after 10 million steps, and is not even decreasing the distance.
So how can I reward for multi-parameter (distance, radius) problems?
Maybe you can try using only the first approach and improving it.
You can add more conditions to help your agent reach the goal.
For example:
reward = 0
if (distance < lastDistance)
    reward += 1
lastDistance = distance
if (distance < 5)
    reward += 5
if (distance < 3)
    reward += 10
PS: the values are chosen arbitrarily; change them to reach the goal you want.
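As a concrete sketch of that shaping idea, in Python rather than the C#-style code above (the thresholds and bonus weights are placeholders to tune, as the answer notes):

```python
# Shaped reward: keep the distance-decrease signal and add bonus tiers as
# the boxes get close. Thresholds and weights are placeholders to tune.
def shaped_reward(distance, last_distance):
    reward = 0.0
    if distance < last_distance:  # made progress toward the other box
        reward += 1.0
    if distance < 5.0:            # close: small bonus
        reward += 5.0
    if distance < 3.0:            # very close: bigger bonus
        reward += 10.0
    return reward

print(shaped_reward(2.5, 4.0))  # 16.0: progress plus both proximity bonuses
```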

Function call faster than on the fly calculation?

I am now seriously confused. I have a function that creates a table with a random number of entries, and I tried two different methods to choose that number (which is somewhat weighted):
Method 1, separated function
local function n()
    local n = math.random()
    if n < .7 then return 0
    elseif n < .8 then return 1
    end
    return 2
end
local function final()
    for i = 1, n() do
        ...
    end
end
Method 2, direct calculation
local function final()
    local n = math.random()
    if n < .7 then n = 0
    elseif n < .8 then n = 1
    else n = 2
    end
    for i = 1, n do
        ...
    end
end
The problem is: for some reason, the first method performs 30% faster than the second. Why is this?
No, a call will never be faster than plainly inlining it. All the first method adds is the extra work of setting up a stack frame and dismantling it. The rest of the code, both source and compiled, is exactly the same, so it is only natural that "just the calculation" will be faster than "the calculation plus some extra work".
Your benchmark seems to be imprecise. For such a lightweight function, the for loop and the os.clock call themselves take almost as much time as the function itself, so combined with os.clock's inherently low resolution and the small number of loops, your data is not really statistically significant and you're mostly seeing random hiccups in your hardware. Use a better timer and increase the number of loops to at least 1000000.
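For illustration, here is the same experiment ported to Python with the `timeit` module, which handles the loop and timer for you (the weighted-pick logic is translated from the Lua above; absolute numbers will differ by machine and language):

```python
import random
import timeit

# Method 1: the weighted pick as a separate function, reached via a call.
def pick():
    n = random.random()
    if n < .7:
        return 0
    elif n < .8:
        return 1
    return 2

def via_call():
    return pick()

# Method 2: the same calculation done directly, no extra call.
def inline():
    n = random.random()
    if n < .7:
        return 0
    elif n < .8:
        return 1
    return 2

loops = 200_000  # enough iterations for a reasonably stable reading
t_call = timeit.timeit(via_call, number=loops)
t_inline = timeit.timeit(inline, number=loops)
print(t_call, t_inline)  # the extra call frame should show up in t_call
```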

what is the most efficient way to create a categorical variable based on another continuous variable in python pandas?

I have a continuous variable A (say, earnings) in my dataframe. I want to make a categorical variable B off that. Specifically, I'd like to define the second variable as going up in increments of 500 until a certain limit. For instance,
B = 1  if A < 500
    2  if A >= 500 & A < 1000
    3  if A >= 1000 & A < 1500
    ....
    11 if A >= 5000
What is the most efficient way to do this in Pandas? In Stata, where I mostly program, I would either use replace with if conditions (tedious) or a loop if I have many categories. I want to break out of Stata thinking when using Pandas, but sometimes my imagination is limited.
Thanks in advance
If the intervals are regular and the values are positive as they seem to be in the example, you can get the integer part of the values divided by the length of the interval. Something like
df['category'] = (df.A / step_size).astype(int)
Note that if there are negative values you can run into problems; e.g., anything between -500 and 500 comes out as 0. But you can get around this by adding some base value before dividing. You are effectively defining your categories as multiples of the step size from some base value, which happens to be zero above.
Something like
df['category'] = ((df.A + base) / step_size).astype(int)
Here's another approach, which also works for intervals that aren't regularly spaced:
lims = np.arange(500, 5500, 500)
df['category'] = 0
for lim in lims:
    df.category += df.A > lim
This method is good when you have a relatively small number of limits, but obviously slows down for many.
Here's some benchmarking for the various methods:
a = np.random.rand(100000) * 6000
%timeit pd.cut(a, 11)
100 loops, best of 3: 6.47 ms per loop
%timeit (a / 500).astype(int)
1000 loops, best of 3: 1.12 ms per loop
%%timeit
x = 0
for lim in lims:
    x += a > lim
100 loops, best of 3: 3.84 ms per loop
I put pd.cut in there as well, as per John E's suggestion. As he pointed out, it yields categorical variables rather than integers, which have different uses. There are pros and cons to both approaches, and the best method depends on the scenario.

Time to turn 180 degrees

I have a space ship, and am wanting to calculate how long it takes to turn 180 degrees. This is my current code to turn the ship:
.msngFacingDegrees = .msngFacingDegrees + .ROTATION_RATE * TV.TimeElapsed
My current .ROTATION_RATE is 0.15, but it will change.
I have tried:
Math.Ceiling(.ROTATION_RATE * TV.TimeElapsed / 180)
But I always get an answer of 1. Please help.
To explain why you get 1 all the time:
Math.Ceiling simply rounds up to the next integer, so if the result is always 1, the contents of your expression must always be between 0 and 1.
Rearranging your expression gives TV.TimeElapsed = 180 * (1 / .ROTATION_RATE) at the point where it first exceeds 1. With a .ROTATION_RATE of 0.15, TV.TimeElapsed needs to reach 1200 before your overall function returns more than 1.
Is it possible that you're always looking at elapsed times less than this threshold?
Going further to suggest what your expression should be is harder; it's not completely clear without more context.
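As a sanity check of that arithmetic, here is a quick sketch (in Python, not the original VB code) of the time-to-turn calculation:

```python
# Time to sweep a given angle at a constant rotation rate:
# degrees / (degrees per time unit) = time units needed.
ROTATION_RATE = 0.15  # from the question, in degrees per unit of TV.TimeElapsed

def time_to_turn(degrees, rate):
    return degrees / rate

print(time_to_turn(180, ROTATION_RATE))  # ~1200 time units, as derived above
```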

Is there an iterative way to calculate radii along a scanline?

I am processing a series of points which all have the same Y value, but different X values. I go through the points by incrementing X by one. For example, I might have Y = 50 and X is the integers from -30 to 30. Part of my algorithm involves finding the distance to the origin from each point and then doing further processing.
After profiling, I've found that the sqrt call in the distance calculation is taking a significant amount of my time. Is there an iterative way to calculate the distance?
In other words:
I want to efficiently calculate r[n] = sqrt(x[n]*x[n] + y*y). I can save information from the previous iteration. Each iteration changes by incrementing x, so x[n] = x[n-1] + 1. I can not use sqrt or trig functions because they are too slow, except at the beginning of each scanline.
I can use approximations as long as they are good enough (less than 0.1% error) and the errors introduced are smooth (I can't bin to a pre-calculated table of approximations).
Additional information:
x and y are always integers between -150 and 150
I'm going to try a couple ideas out tomorrow and mark the best answer based on which is fastest.
Results
I did some timings:
Distance formula: 16 ms / iteration
Pete's interpolating solution: 8 ms / iteration
wrang-wrang's pre-calculation solution: 8 ms / iteration
I was hoping the test would decide between the two, because I like both answers. I'm going to go with Pete's because it uses less memory.
Just to get a feel for it, for your range y = 50, x = 0 gives r = 50 and y = 50, x = +/- 30 gives r ~= 58.3. You want an approximation good for +/- 0.1%, or +/- 0.05 absolute. That's a lot lower accuracy than most library sqrts do.
Two approximate approaches: calculate r by interpolating from the previous value, or use a few terms of a suitable series.
Interpolating from previous r
r = (x^2 + y^2)^(1/2)
dr/dx = (1/2) * 2x * (x^2 + y^2)^(-1/2) = x/r
double r = 50;
for ( int x = 0; x <= 30; ++x ) {
    double r_true = Math.sqrt ( 50*50 + x*x );
    System.out.printf ( "x: %d r_true: %f r_approx: %f error: %f%%\n", x, r_true, r, 100 * Math.abs ( r_true - r ) / r );
    r = r + ( x + 0.5 ) / r;
}
Gives:
x: 0 r_true: 50.000000 r_approx: 50.000000 error: 0.000000%
x: 1 r_true: 50.009999 r_approx: 50.010000 error: 0.000002%
....
x: 29 r_true: 57.801384 r_approx: 57.825065 error: 0.040953%
x: 30 r_true: 58.309519 r_approx: 58.335225 error: 0.044065%
which seems to meet the requirement of 0.1% error, so I didn't bother coding the next one, as it would require quite a bit more calculation steps.
Truncated Series
The Taylor series for sqrt(1 + x) for x near zero is
sqrt(1 + x) = 1 + (1/2)x - (1/8)x^2 + ... + (-1/2)^(n+1) x^n + ...
Using r = y * sqrt(1 + (x/y)^2), you're looking for a term t = (-1/2)^(n+1) * 0.36^n with magnitude less than 0.001: log(0.002) > n log(0.18), or n > 3.6, so taking terms up to x^4 should be OK.
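A quick numeric check of that bound (a sketch; here u stands for (x/y)^2, and the series is truncated after the u^4 term, as n > 3.6 suggests):

```python
import math

# Truncated binomial series for sqrt(1 + u), keeping terms through u^4,
# applied as r = y * sqrt(1 + (x/y)^2). Checked at the worst case x = 30.
def r_series(x, y):
    u = (x / y) ** 2
    s = 1 + u / 2 - u ** 2 / 8 + u ** 3 / 16 - 5 * u ** 4 / 128
    return y * s

approx = r_series(30, 50)           # u = 0.36, the largest in the question's range
exact = math.hypot(30, 50)
print(abs(approx - exact) / exact)  # about 1e-4, inside the 0.1% budget
```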
Y = 10000
Y2 = Y * Y
for x = 0..Y do
    D[x] = sqrt(Y2 + x*x)
norm(x, y) =
    if (y == 0) x
    else if (x > y) norm(y, x)
    else {
        s = Y / y
        D[round(x * s)] / s
    }
(The table only needs indices 0..Y, since after the swap x <= y and so x*s <= Y.)
If your coordinates are smooth, then the idea can be extended with linear interpolation. For more precision, increase Y.
The idea is that s*(x,y) is on the line y=Y, which you've precomputed distances for. Get the distance, then divide it by s.
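Here is a small Python sketch of that idea, using the same names as the pseudocode (Y, D, norm); a smaller Y trades memory for precision:

```python
import math

# Precompute exact distances along the scanline y = Y; norm() scales any
# point onto that line, looks up the distance, and unscales it.
Y = 10000
Y2 = Y * Y
D = [math.sqrt(Y2 + x * x) for x in range(Y + 1)]

def norm(x, y):
    x, y = abs(x), abs(y)
    if y == 0:
        return float(x)
    if x > y:
        x, y = y, x          # ensure x <= y so x * s stays inside the table
    s = Y / y                # scale factor that maps (x, y) onto the line y = Y
    return D[round(x * s)] / s

print(abs(norm(30, 50) - math.hypot(30, 50)))  # rounding error only
```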
I assume you really do need the distance and not its square.
You may also be able to find a general sqrt implementation that sacrifices some accuracy for speed, but I have a hard time imagining that beating what the FPU can do.
By linear interpolation, I mean to change D[round(x)] to:
f = floor(x)
a = x - f
D[f]*(1 - a) + D[f+1]*a
This doesn't really answer your question, but may help...
The first questions I would ask would be:
"do I need the sqrt at all?".
"If not, how can I reduce the number of sqrts?"
then yours: "Can I replace the remaining sqrts with a clever calculation?"
So I'd start with:
Do you need the exact radius, or would the radius squared be acceptable? There are fast approximations to sqrt, but probably not accurate enough for your spec.
Can you process the image using mirrored quadrants or eighths? By processing all pixels at the same radius value in a batch, you can reduce the number of calculations by 8x.
Can you precalculate the radius values? You only need a table that is a quarter (or possibly an eighth) of the size of the image you are processing, and the table would only need to be precalculated once and then re-used for many runs of the algorithm.
So clever maths may not be the fastest solution.
Well, there's always trying to optimize your sqrt; the fastest one I've seen is the old Carmack Quake 3 inverse sqrt:
http://betterexplained.com/articles/understanding-quakes-fast-inverse-square-root/
That said, since sqrt is non-linear, you're not going to be able to do simple linear interpolation along your line to get your result. The best idea is to use a table lookup, since that will give you blazing fast access to the data. And since you appear to be iterating by whole integers, a table lookup should be exceedingly accurate.
Well, you can mirror around x = 0 to start with (you need only compute n >= 0, then duplicate those results for the corresponding n < 0). After that, I'd take a look at using the derivative of sqrt(a^2 + b^2) (or the corresponding sin) to take advantage of the constant dx.
If that's not accurate enough, may I point out that this is a pretty good job for SIMD, which will provide you with a reciprocal square root op on both SSE and VMX (and shader model 2).
This is sort of related to a HAKMEM item:
ITEM 149 (Minsky): CIRCLE ALGORITHM
Here is an elegant way to draw almost circles on a point-plotting display:
NEW X = OLD X - epsilon * OLD Y
NEW Y = OLD Y + epsilon * NEW(!) X
This makes a very round ellipse centered at the origin with its size determined by the initial point. epsilon determines the angular velocity of the circulating point, and slightly affects the eccentricity. If epsilon is a power of 2, then we don't even need multiplication, let alone square roots, sines, and cosines! The "circle" will be perfectly stable because the points soon become periodic.
The circle algorithm was invented by mistake when I tried to save one register in a display hack! Ben Gurley had an amazing display hack using only about six or seven instructions, and it was a great wonder. But it was basically line-oriented. It occurred to me that it would be exciting to have curves, and I was trying to get a curve display hack with minimal instructions.
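A quick sketch of that recurrence in Python (note the "NEW(!) X": the y update deliberately uses the already-updated x, which is what keeps the orbit stable):

```python
import math

# Minsky's circle algorithm: each step nudges the point around the origin
# without sqrt, sin, or cos. With epsilon a power of two, integer hardware
# could do this with shifts alone.
eps = 1 / 16
x, y = 100.0, 0.0
radii = []
for _ in range(1000):
    x = x - eps * y
    y = y + eps * x      # uses the NEW x, as the HAKMEM item stresses
    radii.append(math.hypot(x, y))

print(min(radii), max(radii))  # stays within a few percent of radius 100
```

The orbit traces a slightly eccentric ellipse rather than a perfect circle, but it neither spirals in nor blows up, which is the remarkable part.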