TradingView PineScript v5 Backtesting Strategy - variables

I need help with PineScript v5 on TradingView. I want to backtest a strategy on a 5-minute time frame. The plan is to draw lines at the high and low of the first 5-minute candle. If the price then closes above the first candle's high or below its low, and within the next 5 candles touches the high or low line again, then I want to go long or short with a stop loss at the day's low (for longs) or high (for shorts) and a risk-to-reward ratio of 1:1.5. I also want to calculate the number of shares to buy. Can you help me with this? Thank you!
I tried the code below but I'm getting an error:
Syntax error at input '('
strategy("My Strategy", overlay=true)
// Set the time frame
var timeFrame = input(title="Time Frame", type=input.resolution, defval="5")
// Get the first candle high and low
var firstCandleHigh = high[timeFrame](close)
var firstCandleLow = low[timeFrame](close)
// Draw lines for the first candle high and low
plot(firstCandleHigh, color=green, linewidth=2)
plot(firstCandleLow, color=red, linewidth=2)
// Buy if the price closes above the first candle high
if close > firstCandleHigh
strategy.entry("Long", strategy.long)
// Sell if the price closes below the first candle low
if close < firstCandleLow
strategy.entry("Short", strategy.short)
// Set the stop loss to the day's high or low
if strategy.position_size > 0
strategy.exit("Stop Loss", stop=daylow)
if strategy.position_size < 0
strategy.exit("Stop Loss", stop=dayhigh)
// Set the trade ratio to 1:1.5
strategy.risk = 1
sharesToBuy = strategy.calc_position_size(strategy.risk/close)
strategy.order_size = sharesToBuy
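That snippet mixes v4 and v5 syntax, and several of the calls (high[timeFrame](close), daylow, strategy.calc_position_size, strategy.order_size) do not exist in Pine Script, which is what triggers the syntax error. As a starting point, here is a minimal sketch of the idea in valid Pine Script v5. The breakout/retest definitions, the 1% equity risk, and the use of session.isfirstbar_regular to mark the first 5-minute candle of the session are my assumptions, not a definitive implementation:

//@version=5
strategy("First-candle breakout (sketch)", overlay=true)

riskPct = input.float(1.0, "Risk % per trade") // assumed risk per trade

// State: the first 5-minute candle's range and the running day high/low.
var float firstHigh = na
var float firstLow  = na
var float dayHigh   = na
var float dayLow    = na

if session.isfirstbar_regular
    firstHigh := high
    firstLow  := low
    dayHigh   := high
    dayLow    := low
else
    dayHigh := math.max(dayHigh, high)
    dayLow  := math.min(dayLow, low)

plot(firstHigh, color=color.green, linewidth=2)
plot(firstLow,  color=color.red,  linewidth=2)

// Bars since the close broke out of the first candle's range.
brokeUp   = ta.barssince(ta.crossover(close, firstHigh))
brokeDown = ta.barssince(ta.crossunder(close, firstLow))

// Retest of the line within 5 candles of the breakout.
longSetup  = brokeUp <= 5 and low <= firstHigh and strategy.position_size == 0
shortSetup = brokeDown <= 5 and high >= firstLow and strategy.position_size == 0

if longSetup
    stopPrice   = dayLow
    targetPrice = close + 1.5 * (close - stopPrice) // 1:1.5 risk-reward
    qty         = math.floor(strategy.equity * riskPct / 100 / (close - stopPrice))
    if qty > 0
        strategy.entry("Long", strategy.long, qty=qty)
        strategy.exit("Long exit", from_entry="Long", stop=stopPrice, limit=targetPrice)

if shortSetup
    stopPrice   = dayHigh
    targetPrice = close - 1.5 * (stopPrice - close) // 1:1.5 risk-reward
    qty         = math.floor(strategy.equity * riskPct / 100 / (stopPrice - close))
    if qty > 0
        strategy.entry("Short", strategy.short, qty=qty)
        strategy.exit("Short exit", from_entry="Short", stop=stopPrice, limit=targetPrice)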

Related

naudio SineWaveProvider32 gives clicks when changing Amplitude

I am using naudio with SineWaveProvider32 code directly from http://mark-dot-net.blogspot.com/2009/10/playback-of-sine-wave-in-naudio.html to generate
sine wave tones. The relevant code in the SineWaveProvider32 class:
public override int Read(float[] buffer, int offset, int sampleCount)
{
    int sampleRate = WaveFormat.SampleRate;
    for (int n = 0; n < sampleCount; n++)
    {
        buffer[n + offset] =
            (float)(Amplitude * Math.Sin((2 * Math.PI * sample * Frequency) / sampleRate));
        sample++;
        if (sample >= sampleRate) sample = 0;
    }
    return sampleCount;
}
I was getting clicks/beats every second, so I changed
if (sample >= sampleRate) sample = 0;
to
if (sample >= (int)(sampleRate / Frequency)) sample = 0;
This fixed the clicks every second (so that "sample" was always relative to a zero-crossing, not the sample rate).
However, whenever I set the Amplitude variable, I get a click. I tried setting it only when the buffer[] was at a zero-crossing,
thinking that a sudden jump in amplitude might be causing the problem. That did not solve the problem. I am setting the Amplitude to values between
0.25 and 0.0
I tried adjusting the latency and number of buffers as suggested in NAudio change volume in runtime but that had no effect either.
My code that changes the Amplitude:
public async void play(int durationMS, float amplitude = .25f)
{
    PitchPlayer pPlayer = new PitchPlayer(this.frequency, amplitude);
    pPlayer.play();
    await Task.Delay(durationMS / 2);
    pPlayer.provider.Amplitude = .15f;
    await Task.Delay(durationMS / 2);
    pPlayer.stop();
}
The clicks are caused by a discontinuity in the waveform. This is hard to fix in a class like this because ideally you would ramp the volume gradually from one value to the other. That can be done by giving the provider a target amplitude: whenever the current amplitude is not equal to the target, move towards it by a small delta calculated each time through the loop, so that over a period of, say, 10 ms you move from the old amplitude to the new one. But you'd need to write this yourself, unfortunately.
For a similar concept where the frequency is being changed gradually rather than the amplitude, take a look at my blog post on portamento in NAudio.
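To illustrate that suggestion, here is a minimal sketch of what such a ramp could look like inside SineWaveProvider32's Read method. The targetAmplitude field and the ~10 ms ramp time are assumptions added for this sketch; sample, Frequency and WaveFormat come from the original class:

private float amplitude;        // current value, only changed inside Read()
private float targetAmplitude;  // set from the calling thread

public float Amplitude
{
    get => targetAmplitude;
    set => targetAmplitude = value;
}

public override int Read(float[] buffer, int offset, int sampleCount)
{
    int sampleRate = WaveFormat.SampleRate;
    // Step size chosen so a full 0..1 amplitude change takes about 10 ms.
    float delta = 1.0f / (0.010f * sampleRate);
    for (int n = 0; n < sampleCount; n++)
    {
        // Move the current amplitude one small step towards the target.
        if (amplitude < targetAmplitude)
            amplitude = Math.Min(amplitude + delta, targetAmplitude);
        else if (amplitude > targetAmplitude)
            amplitude = Math.Max(amplitude - delta, targetAmplitude);

        buffer[n + offset] =
            (float)(amplitude * Math.Sin((2 * Math.PI * sample * Frequency) / sampleRate));
        sample++;
        if (sample >= (int)(sampleRate / Frequency)) sample = 0;
    }
    return sampleCount;
}

Because the amplitude now changes by at most delta per sample, setting Amplitude mid-playback produces a short fade instead of a discontinuity.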
Angular speed
Instead of frequency it is easier to think in terms of angular speed. How much to increase the angular argument of a sin() function for each sample.
When using radians for angle, one period completes a full circle of 2*pi, so the angular velocity is w = (2*pi)/T = 2*pi*f; for one Hz that is 2*pi [rad/s].
The sample rate is in [samples per second] and the angular velocity is in [radians per second], so to get the angle per sample you simply divide the angular velocity by the sample rate: [radians/second] / [samples/second] = [radians/sample].
That is the number by which to continuously increase the angle of the sin() function for each sample - no multiplication is needed.
To sweep from one frequency to another you simply move from one angular increment to another in small steps over a number of samples.
By sweeping between frequencies there is a continuous chain of adjacent samples, and the transient is spread out smoothly over time.
Moving from one amplitude to another can likewise be spread out over multiple samples to avoid sharp transients.
Fading in and out, incrementally adjusting the amplitude at the start and end of a sound, is more graceful than stepping the output from one level to another in a single sample.
A sharp step is like a stone dropped into water: it produces rings that propagate outwards.
About sin() calculations
For speedy calculations it may be better to rotate a vector with the length of the amplitude, calculating sn = sin(delta) and cs = cos(delta) only when the angular speed changes (see the Wikipedia article on rotation matrices for the theory).
With amplitude^2 = x^2 + y^2, each new sample can be calculated as:
px = x * cs - y * sn;
py = x * sn + y * cs;
To increase the amplitude you simply multiply px and py by a factor, say 1.01. To make the next sample you set x = px, y = py and run the px, py calculation again, with cs and sn staying the same the whole time.
Either py or px can be used as the signal output; they are 90 degrees out of phase with each other.
On the first sample you can set x=amplitude and y=0.
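Putting the pieces together, a self-contained sketch of such a phasor oscillator might look like this in C# (the class name, the double precision, and the buffer-based interface are my choices for illustration):

using System;

class PhasorOscillator
{
    // Generates sampleCount samples of a sine at the given frequency and amplitude.
    public static float[] Generate(double frequency, double amplitude,
                                   int sampleRate, int sampleCount)
    {
        double delta = 2 * Math.PI * frequency / sampleRate; // radians per sample
        double sn = Math.Sin(delta), cs = Math.Cos(delta);   // recomputed only when the angular speed changes
        double x = amplitude, y = 0.0;                       // first sample: x = amplitude, y = 0
        var buffer = new float[sampleCount];
        for (int n = 0; n < sampleCount; n++)
        {
            buffer[n] = (float)y;        // y is the sine output; x would give the cosine, 90 deg apart
            double px = x * cs - y * sn; // rotate the (x, y) vector by delta radians
            double py = x * sn + y * cs;
            x = px;
            y = py;
        }
        return buffer;
    }
}

That is four multiplications and two additions per sample, with no sin() call inside the loop.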

How to display stimulus accurately using frames

I have a bit of code attached below that displays a stimulus for a certain number of frames.
from psychopy import visual, logging, event, core

# create a window to draw in
myWin = visual.Window((600,600), allowGUI=False, blendMode='add', useFBO=True)
logging.console.setLevel(logging.DEBUG)

# INITIALISE SOME STIMULI
grating1 = visual.GratingStim(myWin, mask="gauss",
                              color=[1.0,1.0,1.0], contrast=0.5,
                              size=(1.0,1.0), sf=(4,0), ori=45,
                              autoLog=False)  # this stim changes too much for autologging to be useful
grating2 = visual.GratingStim(myWin, mask="gauss",
                              color=[1.0,1.0,1.0], opacity=0.5,
                              size=(1.0,1.0), sf=(4,0), ori=-45,
                              autoLog=False)  # this stim changes too much for autologging to be useful

for frameN in range(300):
    grating1.draw()
    grating2.draw()
    myWin.flip()  # update the screen
At a 60 Hz frame refresh rate, 300 frames should be approximately 5 seconds. When I test it out, it is definitely longer than that.
In my experiment, I need the number of frames to be as few as 2 frames - and it seems that my code isn't going to be displaying that accurately.
I was wondering if there is a better way to display the number of frames accurately? Such as calling grating1.draw() and grating2.draw() before the for-loop, maybe?
I appreciate any help - thanks!
The timing discrepancy might be due to the frame rate not being exactly 60Hz. Try using myWin.getActualFrameRate() (PsychoPy documentation) to get the actual frame rate. Multiplying the real frame rate by 5.0 seconds should theoretically allow you to draw for exactly 5.0 seconds.
frame_rate = myWin.getActualFrameRate(nIdentical=60, nMaxFrames=100,
                                      nWarmUpFrames=10, threshold=1)

# The number of frames to display is the product of the frame rate
# and the number of seconds to display the stimulus.
# range() needs an integer, so round the product.
n_frames = int(round(frame_rate * 5.0))

for frameN in range(n_frames):
    grating1.draw()
    grating2.draw()
    myWin.flip()
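If the timing still looks off, PsychoPy can also record the interval of every flip so you can see whether frames are being dropped; a sketch reusing n_frames from above (the ~4 ms threshold is an assumption to tune for your display):

myWin.recordFrameIntervals = True
myWin.refreshThreshold = 1.0/60 + 0.004  # flag any frame more than ~4 ms late

for frameN in range(n_frames):
    grating1.draw()
    grating2.draw()
    myWin.flip()

print('Dropped frames:', myWin.nDroppedFrames)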

remoteIO input square wave frequency is low

I input a square wave at a frequency of 2-3 kHz to the audio jack for about 5 seconds.
The square wave goes between 1 and 0 - no negative values.
I get some periodic signal that goes between -32000 and 32000 (but my signal is positive!?).
I checked how many times my values cross zero: I get 500 in 5 seconds, which means 100 per second.
What am I missing here? 3 kHz is 3000 per second.
My sampling code is in my previous post: error in audio Unit code -remoteIO for iphone.
Any explanation in the frequency domain? Am I missing samples? How can I improve it? Should I do:
float bufferLength = 0.005;
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, sizeof(bufferLength), &bufferLength);
status = AudioOutputUnitStart(audioUnit);
Thanks a lot!

gravity simulation

I want to simulate free fall and a collision with the ground (for example a bouncing ball). The object falls in a vacuum, so air resistance can be omitted. A collision with the ground should cause some energy loss so that the object finally stops moving. I use JOGL to render a point, which is my falling object. Gravity is constant (-9.8 m/s^2).
I found the Euler method for calculating the new position of the point:
deltaTime = currentTime - previousTime;
vel += acc * deltaTime;
pos += vel * deltaTime;
but I'm doing something wrong. The point bounces a few times and then it keeps moving down (very slowly).
Here is pseudocode (initial pos = (0.0f, 2.0f, 0.0f), initial vel = (0.0f, 0.0f, 0.0f), gravity = -9.8f):
display()
{
    calculateDeltaTime();
    velocity.y += gravity * deltaTime;
    pos.y += velocity.y * deltaTime;
    if (pos.y < -2.0f) // a collision with the ground
    {
        velocity.y = velocity.y * energyLoss * -1.0f;
    }
}
What is the best way to achieve a realistic effect? How does the Euler method relate to the constant-acceleration equations?
Because floating-point values don't round nicely, you'll never get a velocity that's exactly 0. You'd probably get something like -0.00000000000001 or similar.
You need to make it 0.0 when it's close enough (define some delta).
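A sketch of how that could look in the collision block, in Java since the question uses JOGL (the EPSILON value and snapping the point back to the ground plane are my additions):

final float GROUND_Y = -2.0f;
final float EPSILON  = 0.01f; // "close enough to zero" threshold, tune as needed

if (pos.y < GROUND_Y) {
    pos.y = GROUND_Y;                      // keep the point from sinking below the ground
    velocity.y = -velocity.y * energyLoss; // reflect and damp
    if (Math.abs(velocity.y) < EPSILON) {
        velocity.y = 0.0f;                 // kill the endless micro-bounce
    }
}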
To expand upon my comment above, and to answer Tobias, I'll add a complete answer here.
Upon initial inspection, I determined that you were bleeding off velocity too fast. Simply put, the relationship between kinetic energy and velocity is E = m v^2 / 2, so after taking the derivative with respect to velocity you get
delta_E = m v delta_v
Then, depending on how energyloss is defined, you can establish the relationship between delta_E and energyloss. For instance, in most cases energyloss = delta_E / E_initial, and then the above relationship simplifies to
delta_v = energyloss * v_initial / 2
This assumes the time interval is small, allowing you to replace v in the first equation with v_initial, so you should be able to get away with it for what you're doing. To be clear, delta_v is subtracted from velocity.y inside your collision block instead of what you have.
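In code, that amounts to something like the following inside the collision block (a sketch; energyLoss here is the fraction of kinetic energy lost per bounce):

float speed  = Math.abs(velocity.y);       // v_initial at the moment of impact
float deltaV = energyLoss * speed / 2.0f;  // delta_v = energyloss * v_initial / 2
velocity.y   = speed - deltaV;             // reflected upwards, reduced by delta_v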
As to the question of adding air resistance or not, the answer is: it depends. For small initial drop heights it won't matter, but it can start to matter with smaller energy losses per bounce and higher drop points. For a 1 gram, 1 inch (2.54 cm) diameter, smooth sphere, I plotted the time difference between falls with and without air friction versus drop height (plot not reproduced here).
For low-energy-loss materials (80-90+% of energy retained per bounce), I'd consider adding it for drop heights of 10 meters and higher. But if the drops are under 2-3 meters, I wouldn't bother.
If anyone wants the calculations, I'll share them.

How can I make the iPhone recording and playback audio level meter look the same?

I display an audio level meter for recording and playback of that recording. The level values are from 0 - 1.0. I draw a bar representing the 0 - 1.0 value on screen. To get the audio levels for recording I use:
OSStatus status = AudioQueueGetProperty(aqData.mQueue, kAudioQueueProperty_CurrentLevelMeter, aqLevelMeterState, &levelMeterStateDataSize);
float level = aqLevelMeterState[0].mAveragePower;
For playback I use:
// soundPlayer is an AVSoundPlayer
float level = [soundPlayer averagePowerForChannel:0];
I normalize level from 0 - 1.0.
Right now the two bars look very different. The recording meter bar stays mostly at the low end, while the playback meter bar, when playing back that same recorded audio, stays more in the middle.
I'm trying to make the two meters look the same, but I'm fairly new to audio. I've done a bit of research and know that the recording is returning an RMS value and the playback is returning it in decibels.
Can someone knowledgeable in audio point me to links or documents, or give a little hint to make sense of these two values so that I can start making them look the same?
It is the RMS, or root mean square, for a given time interval. RMS is calculated by summing the square of each signal value, dividing that total by the number of samples to get the mean, and then taking the square root of the mean.
uint64_t avgTotal = 0;
for (int i = 0; i < sampleCount; i++)
    avgTotal += ((uint64_t)sample[i] * sample[i]) / sampleCount; // divide each term to help avoid overflow
float rms = sqrt((float)avgTotal);
You will have to understand enough about the data you are playing to get the signal values. The duration of time that you consider should not matter that much; 50-80 ms should do it.
Decibels are logarithmically scaled, so you probably want some equation such as:
myDb = 20.0 * log10(myRMS / scaleFactor);
somewhere, where you'd have to calibrate scaleFactor to match up the values you want for full scale.
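One way to make the two bars comparable is to convert both readings to decibels and then map the same dB range onto 0 - 1.0. A sketch, assuming a -60 dB floor (an arbitrary calibration choice):

#include <math.h>

// Map a decibel value in [minDb, 0] onto [0, 1] for the meter bar.
static float meterLevelFromDb(float db, float minDb)
{
    if (db < minDb) return 0.0f;
    if (db > 0.0f)  return 1.0f;
    return (db - minDb) / -minDb;
}

// Recording: kAudioQueueProperty_CurrentLevelMeter reports linear RMS (0..1),
// so convert it to dB first.
float recDb    = 20.0f * log10f(aqLevelMeterState[0].mAveragePower);
float recLevel = meterLevelFromDb(recDb, -60.0f);

// Playback: averagePowerForChannel: already reports decibels (0 = full scale).
float playDb    = [soundPlayer averagePowerForChannel:0];
float playLevel = meterLevelFromDb(playDb, -60.0f);

With both meters driven by the same dB-to-bar mapping, any remaining mismatch comes down to calibrating the floor (or scaleFactor) rather than mixing linear and logarithmic scales.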