I wrote some code to place buy and sell orders, but I don't know what the problem is - it keeps repeatedly placing the same orders.

This is my code:
import time
import yfinance as yf

# Set the ticker symbol and thresholds for support and resistance levels
ticker_symbol = "RELIANCE.NS"
support_threshold = 100
resistance_threshold = 200

# Fetch data for the ticker
ticker = yf.Ticker(ticker_symbol)

while True:
    # Get the real-time price data in 5-minute intervals
    data = ticker.history(interval="5m")

    # Calculate the recent highs and lows (support and resistance levels)
    recent_highs = data["High"][-10:].max()
    recent_lows = data["Low"][-10:].min()

    # Check if the current price is above or below the support and resistance levels
    support_level = "above" if data["Close"][-1] > recent_lows else "below"
    resistance_level = "above" if data["Close"][-1] > recent_highs else "below"

    # Take a long position if the current price is above the support level and below the resistance level
    if support_level == "above" and resistance_level == "below":
        alice.place_order(transaction_type=TransactionType.Buy,
                          instrument=alice.get_instrument_by_symbol('NSE', 'INFY'),
                          quantity=1,
                          order_type=OrderType.Market,
                          product_type=ProductType.Delivery,
                          price=0.0,
                          trigger_price=None,
                          stop_loss=None,
                          square_off=None,
                          trailing_sl=None,
                          is_amo=False,
                          order_tag='order1')
        print("long")
    # Take a short position if the current price is below the support level or above the resistance level
    elif support_level == "below" or resistance_level == "above":
        alice.place_order(transaction_type=TransactionType.Sell,
                          instrument=alice.get_instrument_by_symbol('NSE', 'INFY'),
                          quantity=1,
                          order_type=OrderType.Market,
                          product_type=ProductType.Delivery,
                          price=0.0,
                          trigger_price=None,
                          stop_loss=None,
                          square_off=None,
                          trailing_sl=None,
                          is_amo=False,
                          order_tag='order1')
        print("short")

    # Print current price, support level, and resistance level every 5 minutes
    print(f"Price: {data['Close'][-1]}, Support level: {recent_lows}, Resistance level: {recent_highs}")
    time.sleep(500)
output
long
Price: 2530.699951171875, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2530.75, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2530.949951171875, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2530.699951171875, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2530.14990234375, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2530.0, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2530.14990234375, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2530.0, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2530.0, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2530.0, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2530.0, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2530.050048828125, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2530.0, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2530.75, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2530.64990234375, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2530.64990234375, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2530.300048828125, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2531.0, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2531.0, Support level: 2525.050048828125, Resistance level: 2534.199951171875
long
Price: 2531.0, Support level: 2525.050048828125, Resistance level: 2534.199951171875

As far as I can see, the code is doing exactly what you told it to. Maybe you should elaborate on what the problem is.
Every 5 minutes it refreshes the price and the support/resistance values as defined by your calculations.
Since the tested interval always satisfied your condition of being between support and resistance, the code does exactly what you told it to and keeps recommending "long".
If you want to loop through different tickers instead of running the same one every 5 minutes, consider placing a for loop over a list of tickers around your functional code.

Related

How do trail_points and trail_offset work with Takeprofit and Stoploss

I'm new to Pine Script and I'm trying to write a strategy to test a new indicator; below is my code:
if Up and (downbefore == true)
    strategy.entry("buy", strategy.long, 1000000)
    strategy.exit("Exit buy", from_entry="buy", profit = 150000, loss = 10000, trail_points = 5000, trail_offset = 100)
    upbefore := true
    downbefore := false
if Down and (upbefore == true)
    strategy.entry("Sell", strategy.short, 1000000)
    strategy.exit("Exit sell", from_entry="Sell", profit = 150000, loss = 10000, trail_points = 5000, trail_offset = 100)
    upbefore := false
    downbefore := true
I want to ask about the behavior of profit and loss each time the price moves by 100 and reaches 5000 units of profit.
Will the loss value change from 100000 to 50000 and then to 0 as the price hits the 50000, 100000, and 150000 unit marks?
If so, what does trail_offset do in this formula, and how does it affect the profit and loss when the price hits the 50000, 100000, and 150000 unit marks?
I did read the documentation at https://www.tradingview.com/pine-script-reference/v5/ but it is difficult for me to visualize how it works in a real situation.
If possible, please give me an example of how it works.
Thanks a lot.
NOTE: It is also hard for me because there is trail_price as well. It is almost the same as trail_points, to the point that I cannot tell the difference, since we just need to add the points to the executed price and we get the price used by trail_price. So why bother using trail_price? Why must we have both trail_price and trail_points?
You nailed it.
"trail_price" is for specifying at which exact price the trailing should start.
"trail_points" does the same, but expressed as a number of ticks/pips/USD away from the entry price.
We have to use either one, not both at the same time - which one to use depends on your strategy's logic.
2/
strategy.exit("Exit buy", from_entry="buy", profit = 150000, loss = 10000, trail_points = 5000, trail_offset = 100)
The way I understand it:
When your long reaches 5000 pips in profit, the trailing stop activates, and at every 100-pip increment from there the stop moves 100 pips higher too.
There will be a 100-pip difference between the highest point reached by your trade and the trailed stop loss.
3/ The best way to understand what's going on is to plot your stop loss.
There are plenty of trailing stop loss + trailing stop activation price examples in the open-source library.
You could use one of those.
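To make the mechanics concrete, here is an illustrative simulation in plain Python (not Pine Script; all names are mine) of a long trade where trail_points is the activation threshold and trail_offset is the trailing distance, both in ticks:

```python
def trailing_stop(entry, prices, trail_points, trail_offset):
    """Return the price at which the trailing stop fires, or None."""
    stop = None
    highest = entry
    for p in prices:
        highest = max(highest, p)
        if highest - entry >= trail_points:   # activation threshold reached
            stop = highest - trail_offset     # stop trails the peak by the offset
        if stop is not None and p <= stop:
            return p                          # stopped out on this bar
    return None                               # trade never stopped out

# Enter at 100; trailing activates at 105, peaks at 109, stops out at 107.
print(trailing_stop(100, [102, 105, 109, 107, 103], 5, 2))  # -> 107
```

If the price never rises trail_points above the entry, the trailing stop never activates and the trade is only governed by the fixed profit/loss exits.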

Optimal solution for this interview question

Can anyone provide an optimal solution, with its time complexity? Much appreciated!
Determine bandwidth required to run prescheduled video streams.
There can be 10s of thousands of streams and all the scheduling data is available at the start.
There may be time intervals when no streams are running
Inputs
int startTime; //start time for the stream
int duration; //how long the stream runs
int bandwidth; // consumed bandwidth from start to end
Output:
list[int] totalBW; // bandwidth needed at every time instant
Example:
input (the list may not be sorted)
[ [0,2,10], [1,2,10], [2,1,20] ]
output
[10, 20, 30]
Explanation
At time 0: only the first element needs bandwidth 10 => [10, ...]
At time 1: first two elements need bandwidth 10 + 10 => [10, 20, ...]
At time 2: the second and third element need bandwidth 10 + 20 => [10, 20, 30]
The brute force approach using python:
def solution(streams):
    _max = max(streams, key=lambda x: (x[0] + x[1]))
    _max_time = _max[0] + _max[1]
    res = [0] * _max_time
    for s, d, bw in streams:
        for i in range(d):
            res[s + i] += bw
    return res
Is there any more efficient approach?
Is there any more efficient approach?
Yes.
The first step would be to transform the original data into a set of "at time = T, bandwidth changes by N" events, sorted in chronological order, and simplified by merging events that happen at the same time.
For your example, if the input is [ [0,2,10], [1,2,10], [2,1,20] ] then it would be broken up into:
** [0,2,10] **
At 0, bandwidth += 10
At 2, bandwidth += -10
** [1,2,10] **
At 1, bandwidth += 10
At 3, bandwidth += -10
** [2,1,20] **
At 2, bandwidth += 20
At 3, bandwidth += -20
..then sorted and simplified (merging events that happen at the same time - e.g. bandwidth += -10, bandwidth += 20 becomes a single bandwidth += 10) to get:
At 0, bandwidth += 10
At 1, bandwidth += 10
At 2, bandwidth += 10
At 3, bandwidth += -30
From there it's a simple matter of generating the final array from the sorted list:
10, 20, 30, 0
To understand why this is more efficient, imagine what happens if the time is tracked with higher precision (e.g. maybe milliseconds instead of seconds) and the input is [ [0,2000,10], [1000,2000,10], [2000,1000,20] ] instead. For my approach generating the final array would become an outer loop with 4 iterations and an inner loop that can be a highly optimized memset() (C) or rep stosd (80x86 assembly) or np.full() (Python with NumPy); and for your approach the outer loop needs 30000 iterations where the inner loop wastes a huge amount of time repeating a linear search that (for most iterations of the outer loop) finds the same answer as the previous iteration of the outer loop.
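The event/difference-array approach described above could be sketched in Python like this (the function and variable names are my own):

```python
def solution(streams):
    # Each stream contributes "+bw at start" and "-bw at start+duration":
    # a classic difference array / sweep over change events.
    events = {}
    for s, d, bw in streams:
        events[s] = events.get(s, 0) + bw
        events[s + d] = events.get(s + d, 0) - bw
    end = max(events)            # after this time all streams have ended
    res, cur = [], 0
    for t in range(end):         # prefix-sum the deltas to recover totals
        cur += events.get(t, 0)
        res.append(cur)
    return res

print(solution([[0, 2, 10], [1, 2, 10], [2, 1, 20]]))  # -> [10, 20, 30]
```

Building the events is O(n) and the final sweep is O(T); the inner per-second loop of the brute-force version disappears, which is exactly the saving described for the millisecond-precision example.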

How to add sequential (time series) constraint to optimization problem using python PuLP?

A simple optimization problem: find the optimal control sequence for a refrigerator based on the cost of energy. The only constraint is to stay below a temperature threshold, and the objective function tries to minimize the cost of the energy used. The problem is simplified so the control is simply a binary array, i.e. [0, 1, 0, 1, 0], where 1 means using electricity to cool the fridge and 0 means turning off the cooling mechanism (which costs nothing for that period, but the temperature will increase). We can assume each period is a fixed length of time and has a constant temperature change based on its on/off status.
Here are the example values:
Cost of energy (for our example 5 periods): [466, 426, 423, 442, 494]
Minimum cooling periods (just as a test): 3
Starting temperature: 0
Temperature threshold(must be less than or equal): 1
Temperature change per period of cooling: -1
Temperature change per period of warming (when control input is 0): 2
And here is the code in PuLP
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus, value
from itertools import accumulate
l = list(range(5))
costy = [466, 426, 423, 442, 494]
cost = dict(zip(l, costy))
min_cooling_periods = 3
prob = LpProblem("Fridge", LpMinimize)
si = LpVariable.dicts("time_step", l, lowBound=0, upBound=1, cat='Integer')
prob += lpSum([cost[i]*si[i] for i in l]) # cost function to minimize
prob += lpSum([si[i] for i in l]) >= min_cooling_periods # how many values must be positive
prob.solve()
The optimization seems to work before I try to account for the temperature threshold. With just the cost function, it returns an array of 0s, which does indeed minimize the cost (duh). With the first constraint (how many values must be positive) it picks the cheapest 3 cooling periods, and calculates the total cost correctly.
obj = value(prob.objective)
print(f'Solution is {LpStatus[prob.status]}\nThe total cost of this regime is: {obj}\n')
for v in prob.variables():
    print(f'{v.name} = {v.varValue}')
output:
Solution is Optimal
The total cost of this regime is: 1291.0
time_step_0 = 0.0
time_step_1 = 1.0
time_step_2 = 1.0
time_step_3 = 1.0
time_step_4 = 0.0
So, if our control sequence is [0, 1, 1, 1, 0], the temperature will look like this at the end of each cooling/warming period: [2, 1, 0, -1, 1]. The temperature goes up 2 whenever the control input is 0, and down 1 whenever the control input is 1. This example sequence is a valid answer, but it will have to change if we add a max temperature threshold of 1, which would mean the first value must be a 1, or else the fridge will warm to a temperature of 2.
However I get incorrect results when trying to specify the sequential constraint of staying within the temperature thresholds with the condition:
up_temp_thresh = 1
down = -1
up = 2
# here is where I try to ensure that the control sequence would never cause the temperature to
# surpass the threshold. In practice I would like a lower and upper threshold but for now
# let us focus only on the upper threshold.
prob += lpSum([e <= up_temp_thresh for e in accumulate([down if si[i] == 1. else up for i in l])]) >= len(l)
In this case the answer comes out the same as before, I am clearly not formulating it correctly as the sequence [0, 1, 1, 1, 0] would surpass the threshold.
I am trying to encode "the temperature at the end of each control sequence must be less than the threshold". I do this by turning the control sequence into an array of the temperature changes, so control sequence [0, 1, 1, 1, 0] gives us temperature changes [2, -1, -1, -1, 2]. Then using the accumulate function, it computes a cumulative sum, equal to the fridge temp after each step, which is [2, 1, 0, -1, 1]. I would like to just check if the max of this array is less than the threshold, but using lpSum I check that the sum of values in the array less than the threshold is equal to the length of the array, which should be the same thing.
However I'm clearly formulating this step incorrectly. As written this last constraint has no effect on the output, and small changes give other wrong answers. It seems the answer should be [1, 1, 1, 0, 0], which gives an acceptable temperature series of [-1, -2, -3, -1, 1]. How can I specify the sequential nature of the control input using PuLP, or another free python optimization library?
The easiest and least error-prone approach would be to create a new set of auxiliary variables in your problem which track the temperature of the fridge in each interval. These are not 'primary decision variables', because you cannot choose them directly - rather their values are constrained by the on/off decision variables for the fridge.
You would then add constraints on these temperature state variables to represent the dynamics. So, in untested code:
l_plus_1 = list(range(6))
fridge_temp = LpVariable.dicts("fridge_temp", l_plus_1, cat='Continuous')
fridge_temp[0] = init_temp # initial temperature of fridge - a known value
for i in l:
    prob += fridge_temp[i+1] == fridge_temp[i] + 2 - 3*si[i]
You can then set the min/max temperature constraints on these new fridge_temp variables.
Note that in the above I've assumed that the fridge temperature variables are defined over one more interval than the on/off decisions for the fridge. The fridge temperature variables represent the temperature at the start of an interval - and having one extra one means we can ensure the final temperature of the fridge is acceptable.
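Combining this with the question's data, a complete sketch might look like the following (untested beyond CBC's defaults; the upper-threshold constraint and the Binary category are my additions):

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus, value

l = list(range(5))
cost = dict(zip(l, [466, 426, 423, 442, 494]))
min_cooling_periods = 3
up_temp_thresh = 1
init_temp = 0

prob = LpProblem("Fridge", LpMinimize)
si = LpVariable.dicts("time_step", l, cat='Binary')
# Auxiliary state variables: fridge temperature at the *start* of each interval.
fridge_temp = LpVariable.dicts("fridge_temp", list(range(6)), cat='Continuous')

prob += lpSum([cost[i] * si[i] for i in l])               # minimise energy cost
prob += lpSum([si[i] for i in l]) >= min_cooling_periods  # minimum cooling periods

prob += fridge_temp[0] == init_temp                       # known starting temperature
for i in l:
    # Dynamics: +2 when warming (si = 0), -1 when cooling (si = 1).
    prob += fridge_temp[i + 1] == fridge_temp[i] + 2 - 3 * si[i]
    prob += fridge_temp[i + 1] <= up_temp_thresh          # never exceed the threshold

prob.solve()
schedule = [int(si[i].varValue) for i in l]
print(LpStatus[prob.status], value(prob.objective), schedule)
```

With the example data this should select the schedule [1, 1, 1, 0, 0] at a total cost of 1315, matching the sequence the question expects.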

Solving a nonlinear optimization problem with KKT by implementing fmin_slsqp()

I want to solve a nonlinear optimization problem. I tried to solve it with the KKT conditions, but I realized that it is hard to write code for that approach.
I want to know the optimal payment allocation that minimizes the repayment period until I pay off all my debts. Below is my objective function - my logic was to get the payoff period for each debt and minimize the sum:
import pandas as pd
from math import log

My_Loans = {
    'Name' : ['A', 'B', 'C', 'D'],
    'Principal' : [350, 2000, 600, 750],
    'APR' : [6, 4, 4, 5]}
My_Loans = pd.DataFrame(My_Loans)
PV = My_Loans['Principal']
APR = My_Loans['APR']

def repayment_period(PV, APR, PMT):
    '''Inputs are in list format. PV is the principal amount, APR is the Annual Percentage Rate, and PMT will be my variable, representing the monthly payments.'''
    num_loans = len(PV)
    times = []
    for j in range(num_loans):
        i = (APR[j]/12)/100          # monthly interest rate
        accrued_interest = PV[j] * i
        # months to amortise loan j at payment PMT[j]
        N = round(-(log(1 - ((PV[j]*i)/PMT[j])))/log(1+i))
        times.append(N)
    total_period = sum(times)
    return total_period
It has one inequality constraint: the sum of PMT (the variable) must be less than 2000 (the total monthly allocation must be less than 2000).
import numpy as np
from scipy import optimize as op

def ieq_constraint(x):
    return np.atleast_1d(np.sum(x) - 2000)

op.fmin_slsqp(how_long, np.array([0]), ieqcons=[ieq_constraint])
But I can't get this to work. What can I try next? I am open to better approaches if necessary.
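One possible next step, as a hedged sketch rather than a verified solution: drop the round() so the objective stays smooth, make the objective a function of the PMT vector alone, use f_ieqcons (a single function returning values that must be >= 0), and bound each payment away from zero so the log stays defined. The starting point and bounds below are my own guesses:

```python
import numpy as np
from math import log
from scipy import optimize as op

PV = [350, 2000, 600, 750]
APR = [6, 4, 4, 5]

def total_repayment_period(PMT):
    """Smooth objective: total months to pay off all loans at payments PMT."""
    total = 0.0
    for j in range(len(PV)):
        i = (APR[j] / 12) / 100                  # monthly interest rate
        total += -log(1 - (PV[j] * i) / PMT[j]) / log(1 + i)
    return total

def ieq_constraint(PMT):
    # f_ieqcons must return values >= 0 when the constraint holds,
    # so "sum of payments <= 2000" becomes 2000 - sum(PMT) >= 0.
    return np.atleast_1d(2000 - np.sum(PMT))

# Guessed feasible start and bounds: each payment must exceed the monthly
# accrued interest, otherwise the log argument goes non-positive.
x0 = np.array([100.0, 500.0, 150.0, 200.0])
result = op.fmin_slsqp(total_repayment_period, x0,
                       f_ieqcons=ieq_constraint,
                       bounds=[(10.0, 2000.0)] * 4, iprint=0)
print(result, total_repayment_period(result))
```

Note this relaxes the integer month count into a continuous one; you could round the resulting periods afterwards, or move to an integer programming formulation if whole months matter.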

Is there a formula for rating WebRTC audio quality as Excellent, Good, Fair, or Poor?

I have been able to get various stats (jitter, RTT, packet loss, etc.) for a WebRTC audio call using the RTCPeerConnection.getStats() API.
I need to rate the overall call quality as Excellent, Good, Fair, or Poor.
Is there a formula that uses WebRTC stats to give an overall rating? If not, which WebRTC stat(s) should I give more weight?
We ended up using the MOS (mean opinion score) algorithm to calculate a voice call quality metric.
Here is the formula we used -
Take the average latency, add jitter, but double the impact to latency
then add 10 for protocol latencies
EffectiveLatency = ( AverageLatency + Jitter * 2 + 10 )
Implement a basic curve - deduct 4 for the R value at 160ms of latency (round trip). Anything over that gets a much more aggressive deduction
if EffectiveLatency < 160 then
R = 93.2 - (EffectiveLatency / 40)
else
R = 93.2 - (EffectiveLatency - 120) / 10
Now, let's deduct 2.5 R values per percentage of packet loss
R = R - (PacketLoss * 2.5)
Convert the R into an MOS value.(this is a known formula)
MOS = 1 + (0.035) * R + (.000007) * R * (R-60) * (100-R)
We found the formula from https://www.pingman.com/kb/article/how-is-mos-calculated-in-pingplotter-pro-50.html
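The quoted steps translate directly into a small Python function (my own wrapper; inputs in milliseconds and percent):

```python
def mos_score(avg_latency_ms, jitter_ms, packet_loss_pct):
    """R-factor -> MOS, following the quoted PingPlotter-style formula."""
    # Jitter counts double, plus 10 ms for protocol latencies.
    effective_latency = avg_latency_ms + jitter_ms * 2 + 10
    if effective_latency < 160:
        r = 93.2 - effective_latency / 40
    else:
        r = 93.2 - (effective_latency - 120) / 10
    r -= packet_loss_pct * 2.5           # 2.5 R points per 1% packet loss
    # Standard R-to-MOS conversion.
    return 1 + 0.035 * r + 0.000007 * r * (r - 60) * (100 - r)

print(mos_score(20, 5, 0))   # a clean call scores near the ~4.4 MOS ceiling
```

You can then bucket the MOS into labels, e.g. >= 4.0 Excellent, >= 3.6 Good, >= 3.1 Fair, else Poor (the thresholds are a judgment call, not part of the formula).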