When plotting the RSI, I would like to draw horizontal lines at the 25% and 75% levels.
With automatic range adjustment, the 25% line is displayed on a separate axis on the right side, ranging from 20 to 30.
I can use ylim=() to adjust the range and that works fine, but the 75% line is still not combined with the RSI axis on the left, and its scale is displayed on the right side, which is not ideal.
Is there a better way to do this?
Are you using hlines or mpf.make_addplot() to draw your 25% and 75% lines?
It sounds like you are using mpf.make_addplot() (otherwise the hlines would be drawn on the same axis as the prices).
If so, when calling mpf.make_addplot() use kwarg secondary_y.
As explained between cells In [15] and In [16] in the addplot tutorial ...
mpf.make_addplot() has a keyword argument called secondary_y which can have three possible values: True, False, and 'auto'.
The default value is 'auto', which means that if you don't specify secondary_y, or if you specify secondary_y='auto', then mpf.plot() will attempt to decide whether a secondary y-axis is needed by comparing the order of magnitude of the addplot data with that of the data already on the plot.
If mpf.plot() gets it wrong, you can always override by setting secondary_y=True or secondary_y=False.
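For example, to pin guide lines at 25 and 75 onto the RSI panel's primary axis (a minimal sketch; df is assumed to be your OHLC DataFrame and rsi a Series aligned to its index):

import mplfinance as mpf
import pandas as pd

# RSI in its own panel, with the 25/75 guide lines forced onto the
# same (primary) y-axis by secondary_y=False
apds = [
    mpf.make_addplot(rsi, panel=1, color='blue', ylabel='RSI'),
    mpf.make_addplot(pd.Series(25.0, index=df.index), panel=1,
                     color='gray', linestyle='--', secondary_y=False),
    mpf.make_addplot(pd.Series(75.0, index=df.index), panel=1,
                     color='gray', linestyle='--', secondary_y=False),
]
mpf.plot(df, type='candle', addplot=apds)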
TL;DR: How can I get a subrange of a violinplot whilst keeping accurate quartile lines?
I am using seaborn violinplots to make static charts for a report, but as far as I can tell, there's no way to redraw a particular area between limits whilst retaining the 25/median/75 quartile lines of the original dataset.
Here's my example dataset as a violin. The 25/median/75 values are left side: 1.0/5.0/9.0; right side: 2.0/5.0/9.0
My data has such a long tail that all the useful info is scrunched up into a tiny area. I want to ignore (but not throw away) the tail and show a closer look at the interesting bit.
I tried to reset the ylim using ax.set(ylim=(0, upp)), but the resultant graph is not great: it's jaggy and the inner lines don't meet the violin edge.
Is there a way to reset the y-axis limits but get a better quality result?
Next I tried to cut off the tail by dropping values from the dataset. I dropped anything over the 97th centile. The violin looks way better, but the quartile lines have been recalculated for this new dataset. They're showing a median of about 4, not 5 as per the original dataset.
I'm using inner="quartile", so the code that gets called in Seaborn is _ViolinPlotter::draw_quartiles:
def draw_quartiles(self, ax, data, support, density, center, split=False):
    """Draw the quartiles as lines at width of density."""
    q25, q50, q75 = np.percentile(data, [25, 50, 75])
    self.draw_to_density(ax, center, q25, support, density, split,
                         linewidth=self.linewidth,
                         dashes=[self.linewidth * 1.5] * 2)
As you can see, it assumes (understandably) that one wants to draw the quartile lines at percentiles 25, 50 and 75. It'd be amazeballs if there was a way I could call draw_to_density with my own values (is there?).
At the moment, I am attempting to manually adjust the position of the lines. It's trivial to figure out & set the y-values:
# assuming a single violin, so ax.lines holds its three quartile lines
for l, q in zip(ax.lines, np.percentile(original_data, [25, 50, 75])):
    l.set_ydata([q, q])
but I'm finding it hard to figure out the limits for x, i.e. the density of the distribution at the quartiles. It seems to involve Gaussian KDE, and tbh it's getting hacky and inelegant at this point. Is there an easy way to calculate how long each line should be?
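For reference, this sketch is the direction I'm going (original_data is my unclipped dataset; the half-width scaling at the end is a guess at how seaborn normalizes the density):

import numpy as np
from scipy.stats import gaussian_kde

kde = gaussian_kde(original_data)
quartiles = np.percentile(original_data, [25, 50, 75])
density_at_q = kde(quartiles)   # density of the distribution at each quartile

# seaborn scales the violin so its widest point fills the allotted width,
# so normalize by the peak density; the 0.4 half-width factor is a guess
grid = np.linspace(original_data.min(), original_data.max(), 1000)
half_widths = density_at_q / kde(grid).max() * 0.4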
What do you suggest?
Thanks for your help
Lnr
With thanks to @JohanC.
I added gridsize=1000 to the violinplot parameters and used ax.set(ylim=(0, upp)) to resize the y-axis to show the range from 0 to upp, where upp is the upper limit. A much prettier-looking graph.
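For reference, the final call looks roughly like this (df, "value", and upp are placeholders from my setup):

import seaborn as sns

# gridsize=1000 gives the KDE enough resolution that clipping the
# y-axis afterwards no longer looks jaggy
ax = sns.violinplot(data=df, y="value", inner="quartile", gridsize=1000)
ax.set(ylim=(0, upp))   # upp = chosen upper limit, e.g. the 97th centile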
In the Scale Domains section of the Vega-Lite docs it is noted:
An alternate way to construct this technique would be to filter out
the input data to the top (detail) view like so:
{
  "vconcat": [{
    "transform": [{"filter": {"selection": "brush"}}],
    ...
  }]
}
This is indeed almost the same (although the filter method is much slower, as noted in the docs), except for one difference:
With the filter-selection method (demo), the y-axis of the upper chart automatically zooms in to the selected points. This is pretty neat, especially if you have a large number of points.
With the scale-domain method (demo), the y-axis remains frozen as you move the selection around.
The question: is it possible to have the y-axis automatically zoom in to the selected points as you move the selection with the scale-domain method, the same as it does with the filter-selection method?
Why is the above difference important? Imagine a stock price that has been increasing on average by $1 every day over the last year (but within a particular day it may have experienced any kind of volatile behaviour), and we're plotting it with line marks. If you plot the entire year, you see the whole picture. If you zoom in on a particular day without resetting your y-axis zoom, however, your intraday price plot will be just a flat line, or close to it.
// I've checked all scale-domain-related issues on the vega-lite and altair repos and on SO and couldn't find anything related; I've also posted this question on the vega-lite repo on GitHub, but was forwarded over to SO.
No. Unless otherwise specified, the y scale is determined from all of the data within the plot.
When you filter the data, the data in the plot changes, which causes the y axis to change. When you change the scale based on an x-selection without filtering the data, it does not change the data in the plot, and so the y scale remains constant.
If you want the y-scale to be determined automatically based on the data within the selection, the only option is to filter on that selection.
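In Altair terms, the filter-based variant looks roughly like this (a sketch using Altair 4.x selection syntax; df, date, and price are assumed names):

import altair as alt

brush = alt.selection_interval(encodings=['x'])

# detail view: filtering on the brush rescales both axes to the selection
detail = alt.Chart(df).mark_line().encode(
    x='date:T',
    y='price:Q',
).transform_filter(brush)

# overview strip that carries the brush
overview = alt.Chart(df).mark_line().encode(
    x='date:T',
    y='price:Q',
).add_selection(brush).properties(height=60)

detail & overview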
I need to know how to get the inverse of a color with LESS CSS.
Example: I have #000, I need #FFF.
I also need a detailed explanation of spin(), and links to a color wheel that shows how spin() works.
Thanks.
Why it is not working as you expect
The spin() function only deals with hue (color), not value (grey-scale changes are a change in value). Take a look at Figures 9 and 10 on this page from North Carolina State University's site; those figures help show the difference. The spin() function rotates only within the two-dimensional hue circle of color, not along the third axis that differentiates white from black; greys have no color saturation, so hue has no meaning for them.
This is why on the LESS site we read of spin() (emphasis added):
Note that colors are passed through an RGB conversion, which doesn't
retain hue value for greys (because hue has no meaning when there is
no saturation)
And
Colors are always returned as RGB values, so applying spin to a grey
value will do nothing.
Getting what you want (Color Inversion)
See @seven-phases-max's answer.
The spin function changes the Hue property of a colour. Shades of grey (incl. white and black) are achromatic colours (i.e. they have the same "undefined" hue value).
To simply invert a colour, use either the difference() function:
difference(white, #colour)
or the simple colour arithmetic:
(#fff - #colour)
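Both boil down to per-channel arithmetic; as an illustration (a sketch in Python rather than LESS), the inversion computes:

def invert_hex(color: str) -> str:
    """Invert a colour per RGB channel, e.g. '#000000' -> '#ffffff'."""
    c = color.lstrip('#')
    r, g, b = (int(c[i:i + 2], 16) for i in (0, 2, 4))
    return '#{:02x}{:02x}{:02x}'.format(255 - r, 255 - g, 255 - b)

print(invert_hex('#000000'))   # -> '#ffffff'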
I'm trying to display 2D data with axis labels using both contour and pcolormesh. As has been noted on the matplotlib user list, these functions obey different conventions: pcolormesh expects the x and y values to specify the corners of the individual pixels, while contour expects the centers of the pixels.
What is the best way to make these behave consistently?
One option I've considered is to make a "centers-to-edges" function, assuming evenly spaced data:
def centers_to_edges(arr):
    dx = arr[1] - arr[0]
    newarr = np.linspace(arr.min() - dx/2, arr.max() + dx/2, arr.size + 1)
    return newarr
Another option is to use imshow with the extent keyword set.
The first approach doesn't play nicely with 2D axes (e.g., as created by meshgrid or indices), and the second discards the axis numbers entirely.
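For concreteness, here is how I'd combine the two calls with that helper (a sketch; x and y are 1D coordinate arrays and Z is the 2D data):

import matplotlib.pyplot as plt

# pcolormesh wants the cell edges, contour wants the cell centers
plt.pcolormesh(centers_to_edges(x), centers_to_edges(y), Z)
plt.contour(x, y, Z, colors='k')
plt.show()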
Is your data on a regular mesh? If not, you can use griddata() to obtain one. If your data is too big, sub-sampling or regularization is always possible, and the output image will likely be small compared with the data anyway, which you can exploit.
If you use imshow() with extent and interpolation='nearest', you will see that the data is cell-centered, and extent gives the edges of the cells (their corners). contour, on the other hand, assumes that the data is cell-centered and that X, Y are the centers of the cells. So you need to be careful about the input domain for contour. A trivial example:
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(-10, 10, 1)   # cell corners; the centers sit at x + 0.5
X, Y = np.meshgrid(x, x)
P = X**2 + Y**2
plt.imshow(P, extent=[-10, 10, -10, 10], interpolation='nearest', origin='lower')
plt.contour(X + 0.5, Y + 0.5, P, 20, colors='k')   # shift to cell centers
My tests told me that pcolormesh() is a very slow routine, and I always try to avoid it; griddata() and imshow() are always a good choice for me.
The page is http://matplotlib.sourceforge.net/examples/pylab_examples/histogram_demo_extended.html
Let's look at the y-axis: the numbers there do not make any sense. Could we change it to something more meaningful?
Except for the cumulative distribution plot and the last one, the y-axes show normalized histogram values produced with the normed=1 keyword set (i.e., the area underneath the histogram equals 1, as in the definition of a probability density function (PDF)).
You can use yticks(); see this example.
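For instance (a minimal sketch; in current matplotlib, density=True replaces the old normed=1):

import numpy as np
import matplotlib.pyplot as plt

data = np.random.randn(1000)
plt.hist(data, bins=50, density=True)   # y-axis shows PDF values (area sums to 1)
plt.yticks(np.linspace(0, 0.5, 6))      # replace the ticks with chosen positions
plt.show()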