I am a novice at using TradingView's Pine Script and am having a hard time finding an easy-to-understand example of a script. I am used to Java/C++ and Pine Script is very different. I am trying to build a script that will scan a stock chart and look for gaps of over 5%. Here is pseudocode for what I am trying to create:
if(difference between open of current day and previous day close > 5%) {
plot a green circle or red circle, depending on if gap was up or down
}
Thank you in advance!
Your best bet would be to go through their tutorial.
There are some odd choices in this language if you come from another programming background, so it's probably a good idea to read it all (it's not that much). E.g.:
open is the current bar's open price, but open[1] is the previous bar's open price (so it should be read as open[current_index - 1])
you can't call the plot functions inside function bodies
as for your question (not tested, but should be close enough to give the right idea):
study(title='gap detector', overlay=true)
// plotshape(<condition>, <options>) -- the condition must be true to plot something
// gap up: today's open is more than 5% above the previous bar's close
is_percentage_increase = (open - close[1]) / close[1] > 0.05
plotshape(is_percentage_increase, style=shape.circle, color=green)
Pine Script is easy to use. Initially it was a bit hard to understand, but once you start using it, it becomes very useful for putting your strategy logic together.
In your case you can also use the conditional operator to detect this. This will work in version 2; version 3 is a bit different.
//@version=2
study(title="Experimenting with the code", overlay=true, shorttitle="testing") // overlay=false draws this below the chart as a separate pane
plotchar((close - close[1]) / close[1] > 0.05 ? 1 : na, char=' ', text="plot\nTest", textcolor=red, size=size.huge)
Instead of an if condition, you can use the ?: operator to do this job.
Please make sure the plotchar(...) call is on a single line, not split across multiple lines.
Pine has a lot of cool features to use and it helped me derive my own strategy. The tutorial is really good.
Note that if you don't set char=' ' above, it will print a star as the default character. And even if you set char='testtest', it will only print the first character, t.
I am currently working on a project which has some requirements for VBA. The data is in Excel. What I need to ask/bounce ideas off of is a way to write some code that abides by the following conditions:
where Xmw and Ymw are in megawatts, and X and Y are numbers of generation plants:
Xmw <= 1000 (always true)
Ymw <= 1000 (always true)
Xmw + Ymw = 2000 (Equation 1)
X + Y = 10 (Equation 2)
Essentially, the maximum absolute value of generation to increase or decrease is 2000, and the maximum number of plants that can be used is 10. I'm stuck at this point because I can't find the relation between the two equations. Additionally, the existing program identifies generation to use, but it doesn't constrain it to the two provided equations.
The existing program identifies which generation plants are in the "pools" of X and Y.
Any help would be greatly appreciated.
I am posting this in an answer because it would be too much to post in a comment (please comment and let me know if this hits the mark)
You could use a conditional that looks something like this:
power_generated = Ymw + Xmw
If power_generated <> 2000 Then
    ' do stuff here to add another power plant to the generation
    ' ...
    If X + Y > 10 Then
        ' deal with the fact that you don't have enough power plants to meet your draw
    End If
End If
I'm using Selenium to automate webpage functional testing. It's important for us to do a pixel-by-pixel comparison when we roll out new code, so we're using Selenium to take screenshots and comparing the base64 encoded strings to see if anything has changed.
We're finding that in practice, it's hard to get complete pixel consistency, especially with images. I would like minor blurriness / rendering artifacts to count as a "pass" instead of a "fail", so I'm wondering if there's a way of doing a fuzzy comparison to make our tests a bit less fragile.
I was thinking of maybe looking at the Levenshtein distance between the base64 strings as a starting point, but I don't really know if that's a good approach, or what the tolerances should be that distinguish "something moved on the page" from "rendering artifact". Any ideas / approaches?
So I ended up going with the ImageMagick command-line tool (because why re-invent image comparison). The "Peak Absolute Error" metric of the "compare" tool tells you how much you have to fuzz pixels before two images are identical. This seems to work well... for an image with slight graphical distortions, there might be a lot of pixels that don't match, but slight fuzzing is enough to make them match; but for two images that are actually different, even though most pixels might match, the ones that don't tend to be very different. Right now I'm checking for a PAE of less than 15% to see if the images should be counted as identical. Command line I'm using is:
compare -metric PAE original.png new.png comparison.png
Documentation on ImageMagick's compare tool is here: http://www.imagemagick.org/script/compare.php
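If you are driving this from test code, here is a rough sketch of wrapping that command in Python (this assumes ImageMagick is installed and that compare writes the metric to stderr, which is its usual behaviour; the 15% threshold and the file names are just illustrative):

import subprocess

def images_match(original, new, diff_out="comparison.png", max_pae=0.15):
    """Return True if the Peak Absolute Error between two images is below max_pae."""
    result = subprocess.run(
        ["compare", "-metric", "PAE", original, new, diff_out],
        capture_output=True, text=True,
    )
    # compare prints something like "12345 (0.188)" to stderr; the value in
    # parentheses is the metric normalised to the 0..1 range.
    stderr = result.stderr.strip()
    start = stderr.find("(")
    stop = stderr.find(")")
    if start == -1 or stop == -1:
        raise RuntimeError("unexpected compare output: " + stderr)
    return float(stderr[start + 1:stop]) <= max_pae

Usage in a test would then be something like images_match("original.png", "new.png").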
I've been using perceptualdiff, which uses a model of the human visual system to try to avoid reporting unnoticeable changes (the authors used it for renderer regression testing). Usage is quite simple:
perceptualdiff -output diff.ppm baseline.png test.png
(where diff.ppm is a PPM format image highlighting the areas of difference)
The needle regression testing framework has support for using pdiff to compare screenshots:
http://needle.readthedocs.org/en/latest/#engines
Use an image format that does not create compression artifacts (like BMP or PNG); then you can do a per-pixel comparison.
I think you can check each pixel using the ordinary Euclidean distance.
To improve performance a little, do not calculate the square root; compare the squared distances instead:
// Maximum color distance allowed to define pixel consistency.
const float maxDistanceAllowed = 5.0f;
// Square of the distance, used in calculations.
float maxD = maxDistanceAllowed * maxDistanceAllowed;

public bool isPixelConsistent(Color pixel1, Color pixel2)
{
    // Euclidean distance in 3 dimensions (squared, to avoid the square root).
    float distanceSquared = (pixel1.R - pixel2.R) * (pixel1.R - pixel2.R)
                          + (pixel1.G - pixel2.G) * (pixel1.G - pixel2.G)
                          + (pixel1.B - pixel2.B) * (pixel1.B - pixel2.B);
    // If the squared distance is within the allowed maximum, the pixels are
    // consistent and the method returns true.
    return distanceSquared <= maxD;
}
I didn't test the C# code, but it should give you the idea. Give it a few tries and adjust maxDistanceAllowed to your needs.
If anyone else is looking for something similar, there is Depicted-dpxdt. It is designed to be used as part of a CI/CD process.
It combines perceptual diffing with a server, a command-line tool, and a wrapper for PhantomJS.
It has demonstrated functionality like crawling an entire site and comparing pages for differences.
I'm attempting to gauge the percentage difference between two images.
Having done a lot of reading, I seem to have a number of options, but I'm not sure which method is the best to follow for:
Ease of coding
Performance.
The methods I've seen are:
Non-language-specific (academic): Image comparison - fast algorithm
Mac-specific: direct pixel access (http://www.markj.net/iphone-uiimage-pixel-color/)
Does anyone have any advice about what solutions make most sense for the above two cases and have code samples to show how to apply them?
I've had success calculating the difference between two images using the histogram technique mentioned here. redmoskito's answer in the SO question you linked to was actually my inspiration!
The following is an overview of the algorithm I used:
Convert the images to grayscale—compare one channel instead of three.
Divide each image into an n * n grid of "subimages". Then, for each subimage pair:
Calculate their colour composition histograms.
Calculate the absolute difference between the two histograms.
The maximum difference found between two subimages is a measure of the two images' difference. Other metrics could also be used (e.g. the average difference between subimages).
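For concreteness, here is a rough sketch of that algorithm in Python using Pillow (the grid size, the file names and the choice of "max over subimages" are all illustrative assumptions rather than a reference implementation):

from PIL import Image

def grid_histogram_difference(path_a, path_b, n=4):
    """Maximum absolute histogram difference over an n x n grid of subimages."""
    a = Image.open(path_a).convert("L")              # grayscale: one channel instead of three
    b = Image.open(path_b).convert("L").resize(a.size)
    w, h = a.size
    worst = 0
    for row in range(n):
        for col in range(n):
            box = (col * w // n, row * h // n, (col + 1) * w // n, (row + 1) * h // n)
            hist_a = a.crop(box).histogram()         # 256 buckets for a grayscale image
            hist_b = b.crop(box).histogram()
            diff = sum(abs(x - y) for x, y in zip(hist_a, hist_b))
            worst = max(worst, diff)
    return worst

You would then compare the returned value against a threshold you have tuned, as described below.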
As tskuzzy noted in his answer, if your ultimate goal is a binary "yes, these two images are (roughly) the same" or "no, they're not", you need some meaningful threshold value. You could produce such a value by passing images into the algorithm and tweaking the threshold based on its output and how similar you think the images are. A form of machine learning, I suppose.
I recently wrote a blog post on this very topic, albeit as part of a larger goal. I also created a simple iPhone app to demonstrate the algorithm. You can find the source on GitHub; perhaps it will help?
It is really difficult to suggest something when you don't tell us more about the images or the variations. Are they shapes? Are they different objects and you want to know what class of objects they are? Are they the same object and you want to distinguish the object instance? Are they faces? Are they fingerprints? Are the objects in the same pose? Under the same illumination?
When you say performance, what exactly do you mean? How large are the images? All in all, it really depends. Given what you've said, if it is only about ease of coding and performance, I would suggest just taking the absolute value of the difference of the pixels. That is super easy to code and about as fast as it gets, but it is really unlikely to work for anything other than the most synthetic examples.
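To illustrate just how little code the absolute-difference approach takes, a minimal sketch assuming NumPy and Pillow and images of identical dimensions:

import numpy as np
from PIL import Image

def mean_absolute_difference(path_a, path_b):
    """Average absolute per-pixel difference, from 0.0 (identical) to 255.0."""
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.int16)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.int16)
    return float(np.abs(a - b).mean())

Dividing the result by 255 and multiplying by 100 gives a crude percentage difference.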
That being said I would like to point you to: DHOG, GLOH, SURF and SIFT.
You can use the fairly basic subtraction technique that the lads above suggested. #carlosdc has hit the nail on the head with regard to the type of image this basic technique can be used for. I have attached an example so you can see the results for yourself.
The first shows an image from a simulation at some time t. A second image, taken some (simulation) time later t + dt, was subtracted from the first. The subtracted image (in black and white for clarity) then shows how the simulation has changed in that time. This was done as described above and is very powerful and easy to code.
Hope this aids you in some way
This is some old, nasty FORTRAN, but it should give you the basic approach. It is not that difficult at all. Because I am working with a two-colour palette, you would do this operation for R, G and B. That is, compute the intensities or values in each cell/pixel and store them in an array. Do the same for the other image, and subtract one array from the other; this will leave you with a colourful subtraction image. My advice would be to do as the lads suggest above: compute the magnitude of the sum of the R, G and B components so you just get one value, write that to an array, do the same for the other image, then subtract. Then create a new range for either R, G or B and map the resulting subtracted array onto it; this will give a much clearer picture as a result.
* =============================================================
SUBROUTINE SUBTRACT(FNAME1,FNAME2,IOS)
* This routine reads two images from files and subtracts them
* =============================================================
* Common :
INCLUDE 'CONST.CMN'
INCLUDE 'IO.CMN'
INCLUDE 'SYNCH.CMN'
INCLUDE 'PGP.CMN'
* Input :
CHARACTER fname1*(sznam),fname2*(sznam)
* Output :
integer IOS
* Variables:
logical glue
character fullname*(szlin)
character dir*(szlin),ftype*(3)
integer i,j,nxy1,nxy2
real si1(2*maxc,2*maxc),si2(2*maxc,2*maxc)
* =================================================================
IOS = 1
nomap=.true.
ftype='map'
dir='./pictures'
! reading first image
if(.not.glue(dir,fname2,ftype,fullname))then
write(*,31) fullname
return
endif
OPEN(unit2,status='old',name=fullname,form='unformatted',err=10,iostat=ios)
read(unit2,err=11)nxy2
read(unit2,err=11)rad,dxy
do i=1,nxy2
do j=1,nxy2
read(unit2,err=11)si2(i,j)
enddo
enddo
CLOSE(unit2)
! reading second image
if(.not.glue(dir,fname1,ftype,fullname))then
write(*,31) fullname
return
endif
OPEN(unit2,status='old',name=fullname,form='unformatted',err=10,iostat=ios)
read(unit2,err=11)nxy1
read(unit2,err=11)rad,dxy
do i=1,nxy1
do j=1,nxy1
read(unit2,err=11)si1(i,j)
enddo
enddo
CLOSE(unit2)
! subtracting images
if(nxy1.eq.nxy2)then
nxy=nxy1
do i=1,nxy1
do j=1,nxy1
si(i,j)=si2(i,j)-si1(i,j)
enddo
enddo
else
print *,'SUBTRACT: Different sizes of image arrays'
IOS=0
return
endif
* normal finishing
IOS=0
nomap=.false.
return
* exceptional finishing
10 write (*,30) fullname
return
11 write (*,32) fullname
return
30 format('Cannot open file ',72A)
31 format('Improper filename ',72A)
32 format('Error reading from file ',72A)
end
! =============================================================
Hope this is of some use. All the best.
Out of the methods described in your first link, the histogram comparison method is by far the simplest to code and the fastest. However, key point matching will provide far more accurate results, since you want a precise number describing the difference between two images.
To implement the histogram method, I would do the following:
Compute the red, green, and blue histograms of each image
Add up the differences between corresponding buckets
If the difference is above a certain threshold, then the percentage is 0%
Otherwise the colors found in the images are similar. So then do a pixel by pixel comparison and convert the difference into a percentage.
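A rough Python/Pillow sketch of the histogram part of that recipe (where you set the threshold, and what you do past it, are up to you; the file names are placeholders):

from PIL import Image

def rgb_histogram_difference(path_a, path_b):
    """Sum of absolute differences between the R, G and B histograms of two images."""
    a = Image.open(path_a).convert("RGB")
    b = Image.open(path_b).convert("RGB").resize(a.size)
    # histogram() on an RGB image returns 768 buckets: 256 each for R, G and B.
    return sum(abs(x - y) for x, y in zip(a.histogram(), b.histogram()))

If this score is above your threshold, treat the images as completely different; otherwise fall back to the pixel-by-pixel comparison for a finer-grained percentage.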
I don't know any precise algorithms for finding the key points of an image. However once you find them for each image you can do a pixel by pixel comparison for each of the key points.
How would I go about this one? I want to tween a value from one value to another in x time, while also taking into account that it would be nice to have an 'ease' at the start and end.
I know, I shouldn't ask really, but I've tried myself, and I'm stuck.
Please assume that to cause a delay, you need to call function wait(time).
One simple approach that might work for you is to interpolate along the unit circle:
To do this, you simply evaluate points along the circle, which ensures a fairly smooth movement, and ease-in as well as ease-out. You can control the speed of the interpolation by changing how quickly you alter the angle.
Assuming you're doing 1-dimensional interpolation (i.e. simple scalar interpolation, like from 3.5 to 6.9 or whatever), it might be handy to sweep the angle from -π/2 to π/2 and use the Y values given by the sine function; all you need to do is apply suitable scaling:
angle = -math.pi / 2          -- sweep from -pi/2 (start of the tween) to pi/2 (end)
start = 3.5
finish = 6.9                  -- "end" is a reserved word in Lua, so use another name
radius = (finish - start) / 2
value = start + radius + radius * math.sin(angle)
I'm not 100% sure if this is legal Lua, didn't test it. If not, it's probably trivial to convert.
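As a hedged sketch of the full tween loop, here it is in Python, with time.sleep standing in for the wait(time) function mentioned in the question (the step count and duration are arbitrary):

import math
import time

def tween(start, finish, duration, steps=60, wait=time.sleep):
    """Ease a value from start to finish over `duration` seconds along a sine curve."""
    radius = (finish - start) / 2
    for i in range(steps + 1):
        # Sweep the angle from -pi/2 to +pi/2; sin() then runs smoothly from -1 to +1,
        # which gives the ease-in at the start and the ease-out at the end.
        angle = -math.pi / 2 + math.pi * i / steps
        value = start + radius + radius * math.sin(angle)
        print(value)                    # replace with whatever consumes the tweened value
        wait(duration / steps)

For example, tween(3.5, 6.9, 2.0) moves the value from 3.5 to 6.9 over two seconds.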
You may look at the Tweener ActionScript library for inspiration.
For instance, you may borrow necessary equations from here.
If you need further help, please ask.
I'm teaching/helping a student to program.
I remember the following process always helped me when I started; it seems pretty intuitive, and I wonder if someone else has had a similar approach.
Read the problem and understand it (of course).
Identify possible "functions" and variables.
Write down how I would do it step by step (the algorithm).
Translate it into code; if there is something you cannot do, create a function that does it for you and keep moving.
With time and practice I seem to have forgotten how hard it was to go from a problem description to a coding solution, but by applying this method I managed to learn how to program.
So for a project description like:
A system has to calculate the price of an item based on the following rules (a description of the rules... client, discounts, availability, etc.)
My first step is to understand what the problem is.
Then I identify the items, the rules, the variables, etc.
Then I pseudocode something like:
function getPrice( itemPrice, quantity , clientAge, hourOfDay ) : int
if( hourOfDay > 18 ) then
discount = 5%
if( quantity > 10 ) then
discount = 5%
if( clientAge > 60 or < 18 ) then
discount = 5%
return item_price - discounts...
end
And then translate it into the programming language:
public class Problem1 {
    public int getPrice( int itemPrice, int quantity, int hourOfDay ) {
        int discount = 0;
        if( hourOfDay > 18 ) {
            // uh oh.. I don't know how to calculate a percentage...
            // create a function and move on.
            discount += percentOf( 5, itemPrice );
            // .
            // .
            // .
            // you get the idea..
        }
    }

    public int percentOf( int percent, int i ) {
        // ....
    }
}
Did you take a similar approach? Did someone teach you a similar approach, or did you discover it yourself (as I did)?
I go via the test-driven approach.
1. I write down (on paper or in a plain text editor) a list of tests or a specification that would satisfy the needs of the problem.
- simple calculations (no discounts and concessions) with:
- single item
- two items
- maximum number of items that doesn't have a discount
- calculate for discounts based on number of items
- buying 10 items gives you a 5% discount
- buying 15 items gives you a 7% discount
- etc.
- calculate based on hourly rates
- calculate morning rates
- calculate afternoon rates
- calculate evening rates
- calculate midnight rates
- calculate based on buyer's age
- children
- adults
- seniors
- calculate based on combinations
- buying 10 items in the afternoon
2. Look for the item that I think would be the easiest to implement and write a test for it. E.g. single items look easy.
A sample using NUnit and C#:
[Test] public void SingleItems()
{
Assert.AreEqual(5, GetPrice(5, 1));
}
Implement that using:
public decimal GetPrice(decimal amount, int quantity)
{
return amount * quantity; // easy!
}
Then move on to the two items.
[Test]
public void TwoItems()
{
Assert.AreEqual(10, GetPrice(5, 2));
}
The implementation still passes the test so move on to the next test.
3. Always be on the lookout for duplication and remove it. You are done when all the tests pass and you can no longer think of any more tests.
This doesn't guarantee that you will create the most efficient algorithm, but as long as you know what to test for and it all passes, it will guarantee that you are getting the right answers.
the old-school OO way:
write down a description of the problem and its solution
circle the nouns, these are candidate objects
draw boxes around the verbs, these are candidate messages
group the verbs with the nouns that would 'do' the action; list any other nouns that would be required to help
see if you can restate the solution using the form noun.verb(other nouns)
code it
[this method precedes CRC cards, but it's been so long (over 20 years) that I don't remember where I learned it]
When learning programming, I don't think TDD is helpful. TDD is good later on when you have some concept of what programming is about, but for starters, having an environment where you write code and see the results with the quickest possible turnaround time is the most important thing.
I'd go from problem statement to code instantly. Hack it around. Help the student see different ways of composing software / structuring algorithms. Teach the student to change their minds and rework the code. Try and teach a little bit about code aesthetics.
Once they can hack around code... then introduce the idea of formal restructuring in terms of refactoring. Then introduce the idea of TDD as a way to make the process a bit more robust. But only once they are feeling comfortable manipulating code to do what they want. Being able to specify tests is then somewhat easier at that stage. The reason is that TDD is about design. When learning, you don't really care so much about design but about what you can do: what toys you have to play with, how they work, and how you combine them together. Once you have a sense of that, then you want to think about design, and that's when TDD really kicks in.
From there I'd start introducing micro patterns, leading into design patterns.
I did something similar.
Figure out the rules/logic.
Figure out the math.
Then try and code it.
After doing that for a couple of months it just gets internalized. You don't realize you're doing it until you come up against a complex problem that requires you to break it down.
I start at the top and work my way down. Basically, I'll start by writing a high level procedure, sketch out the details inside of it, and then start filling in the details.
Say I had this problem (yoinked from Project Euler):
The sum of the squares of the first ten natural numbers is 1^2 + 2^2 + ... + 10^2 = 385.
The square of the sum of the first ten natural numbers is (1 + 2 + ... + 10)^2 = 55^2 = 3025.
Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 - 385 = 2640.
Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum.
So I start like this:
(display (- (square-of-sums (list-to 10))
            (sum-of-squares (list-to 10))))
Now, in Scheme, there are no sum-of-squares, square-of-sums or list-to functions. So the next step would be to build each of those. In building each of those functions, I may find I need to abstract out more. I try to keep things simple so that each function only really does one thing. When I build some piece of functionality that is testable, I write a unit test for it. When I start noticing a logical grouping for some data, and the functions that act on them, I may push it into an object.
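If it helps to see that decomposition filled in, here is a rough equivalent in Python (the function names just mirror the Scheme stubs above; it is only meant to illustrate the top-down fill-in, not to be the definitive structure):

def list_to(n):
    """The natural numbers 1..n."""
    return list(range(1, n + 1))

def sum_of_squares(numbers):
    return sum(x * x for x in numbers)

def square_of_sums(numbers):
    return sum(numbers) ** 2

# The top-level expression is written first, then each helper is filled in:
print(square_of_sums(list_to(10)) - sum_of_squares(list_to(10)))   # 2640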
I've enjoyed TDD ever since it was introduced to me. It helps me plan out my code, and it just puts me at ease having all my tests return with "success" every time I modify my code, letting me know I'm going home on time today!
Wishful thinking is probably the most important tool to solve complex problems. When in doubt, assume that a function exists to solve your problem (create a stub, at first). You'll come back to it later to expand it.
A good book for beginners looking for a process: Test Driven Development: By Example
My dad had a bunch of flowchart stencils that he used to make me use when he was first teaching me about programming. To this day I draw squares and diamonds to build out a logical process for how to analyze a problem.
I think there are about a dozen different heuristics I know of when it comes to programming, so I tend to go through the list at times with what I'm trying to do. At the start, it is important to know what the desired end result is and then try to work backwards to find it.
I remember an Algorithms class covering some of these ways like:
Reduce it to a known problem or trivial problem
Divide and conquer (MergeSort being a classic example here)
Use Data Structures that have the right functions (HeapSort being an example here)
Recursion (Knowing trivial solutions and being able to reduce to those)
Dynamic programming
Organizing a solution, as well as testing it for odd situations (e.g. if someone thinks L should be a number), is what I'd usually use to test out the idea in pseudocode before writing it up.
Design patterns can be a handy set of tools for specific cases, like where an Adapter is needed, or for organizing things into a State or Strategy solution.
Yes... well, TDD didn't exist (or was not that popular) when I began. Would TDD be the way to go to get from problem description to code? Isn't that a little bit advanced? I mean, when a "future" developer hardly understands what a programming language is, wouldn't it be counterproductive?
What about Hamcrest to make the transition from algorithm to code?
I think there's a better way to state your problem.
Instead of defining it as 'a system,' define what is expected in terms of user inputs and outputs.
"On a window, a user should select an item from a list, and a box should show him how much it costs."
Then, you can give him some of the factors determining the costs, including sample items and what their costs should end up being.
(this is also very much a TDD-like idea)
Keep in mind, if you get 5% off and then another 5% off, you don't get 10% off. Rather, you pay 95% of 95%, which is 90.25%, or 9.75% off. So you shouldn't just add the percentages.
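A quick sketch of the difference, with illustrative numbers:

price = 100.0
added = price * (1 - 0.10)           # naively adding 5% + 5%  -> 90.00
compounded = price * 0.95 * 0.95     # applying 5% off, twice  -> 90.25
print(added, compounded)             # 90.0 90.25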