Why does my UIBezierPath mask still allow content to be drawn inside, despite it being "donut" shaped? - cocoa-touch

I have a UIBezierPath in a circular donut shape:
var ovalPath = UIBezierPath(ovalInRect: CGRect(x: 0.0, y: 0.0, width: 32.0, height: 32.0))
ovalPath.lineWidth = 5
ovalPath.stroke()
And when I draw it:
let context = UIGraphicsGetCurrentContext()
let testShape = UIImage(named: "test-shape")!
CGContextSaveGState(context)
ovalPath.addClip()
CGContextDrawTiledImage(context, rect, testShape.CGImage)
CGContextRestoreGState(context)
Despite the mask being a donut, content still draws "inside", making the mask basically just a circle instead of being a hollow circle. How do I make it respect the fact that there's no inside?

You do not have a donut-shaped path; you still have an oval, a stroked oval path.
What you want is a path composed of two ovals.
And this is how you do it:
remove:
ovalPath.lineWidth = 5
ovalPath.stroke()
create a donut shape with a 5-point-wide border:
var ovalPath = UIBezierPath(ovalInRect: CGRect(x: 0.0, y: 0.0, width: 32.0, height: 32.0))
ovalPath.usesEvenOddFillRule = true
// inner oval inset by 5pt on each side; the even-odd fill rule makes the
// ring between the two ovals the filled (and clipped) region
ovalPath.appendPath(UIBezierPath(ovalInRect: CGRect(x: 5.0, y: 5.0, width: 22.0, height: 22.0)))

Related

Incorrect marker sizes with Seaborn relplot and scatterplot relative to legend

I'm trying to understand how to get the legend examples to align with the dots plotted using Seaborn's relplot in a Jupyter notebook. I have a size (float64) column in my pandas DataFrame df:
sns.relplot(x="A", y="B", size="size", data=df)
The values in the size column are [0.0, -7.0, -14.0, -7.0, 0.0, 1.0, 0.0, 0.0, 0.0, -1.0, 0.0, 8.0, 2.0, 0.0, -4.0, 7.0, -4.0, 0.0, 0.0, 4.0, 0.0, 0.0, -3.0, 0.0, 1.0, 7.0], so the minimum value is -14 and the maximum value is 8. The legend appears to be aligned well with that. However, looking at the actual dots plotted, there is a dot considerably smaller than the one corresponding to -16 in the legend, and no dot plotted as large as the 8 in the legend.
What am I doing wrong -- or is this a bug?
I'm using pandas 0.24.2 and seaborn 0.9.0.
Edit:
Looking closer at the Seaborn relplot example: the smallest weight is 1613, but there is an orange dot at the far left of the plot that is smaller than the dot for 1500 in the legend. I think this points to it being a bug.
Not sure what seaborn does here, but if you're willing to use matplotlib alone, it could look like this:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
s = [0.0, -7.0, -14.0, -7.0, 0.0, 1.0, 0.0, 0.0, 0.0, -1.0, 0.0, 8.0, 2.0,
     0.0, -4.0, 7.0, -4.0, 0.0, 0.0, 4.0, 0.0, 0.0, -3.0, 0.0, 1.0, 7.0]
x = np.linspace(0, 2*np.pi, len(s))
y = np.sin(x)
df = pd.DataFrame({"A" : x, "B" : y, "size" : s})
# calculate some sizes in points^2 from the initial values
smin = df["size"].min()
df["scatter_sizes"] = 0.25 * (df["size"] - smin + 3)**2
# state the inverse of the above transformation
finv = lambda y: 2*np.sqrt(y)+smin-3
sc = plt.scatter(x="A", y="B", s="scatter_sizes", data=df)
plt.legend(*sc.legend_elements("sizes", func=finv), title="Size")
plt.show()
More details are in the Scatter plots with a legend example.
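If you'd rather stay within seaborn, pinning the marker-area range with the sizes= parameter is worth a try, so the dots and the legend are forced onto the same scale. A minimal sketch (whether this sidesteps the 0.9.0 behaviour above is an assumption on my part, not something I've confirmed):
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

s = [0.0, -7.0, -14.0, -7.0, 0.0, 1.0, 0.0, 0.0, 0.0, -1.0, 0.0, 8.0, 2.0,
     0.0, -4.0, 7.0, -4.0, 0.0, 0.0, 4.0, 0.0, 0.0, -3.0, 0.0, 1.0, 7.0]
x = np.linspace(0, 2 * np.pi, len(s))
df = pd.DataFrame({"A": x, "B": np.sin(x), "size": s})

# sizes=(min_area, max_area) fixes the range of marker areas (in points^2)
# that seaborn maps the data onto, for the dots and the legend alike
sns.relplot(x="A", y="B", size="size", sizes=(10, 200), data=df)
plt.show()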

numpy array changes to string when writing to file

I have a dataframe where one of the columns is a numpy array:
DF
Name Vec
0 Abenakiite-(Ce) [0.0, 0.0, 0.0, 0.0, 0.0, 0.043, 0.0, 0.478, 0...
1 Abernathyite [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ...
2 Abhurite [0.176, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.235, 0...
3 Abswurmbachite [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.25, 0.0,...
When I check the data type of each element, the correct data type is returned.
type(DF['Vec'].iloc[1])
numpy.ndarray
I save this into a csv file:
DF.to_csv('.\\file.csv',sep='\t')
Now, when I read the file again,
new_DF=pd.read_csv('.\\file.csv',sep='\t')
and check the datatype of Vec at index 1:
type(new_DF['Vec'].iloc[1])
str
The size of the numpy array is 1x127.
The data type has changed from a numpy array to a string. I can also see some new line elements in the individual vectors. I think this might be due to some problem when the vector is written into a csv but I don't know how to fix it. Can someone please help?
Thanks!
In the comments I made a mistake and said dtype instead of converters. What you want is to convert them as you read them using a function. With some dummy variables:
import numpy as np
import pandas as pd

df = pd.DataFrame({'name': ['name1', 'name2'], 'Vec': [np.array([1, 2]), np.array([3, 4])]})
df.to_csv('tmp.csv')

def converter(instr):
    # the array was stored as its string repr, e.g. '[1 2]':
    # strip the brackets and parse the space-separated numbers
    return np.fromstring(instr[1:-1], sep=' ')

df1 = pd.read_csv('tmp.csv', converters={'Vec': converter})
df1.iloc[0, 2]
array([1., 2.])
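Side note: if the stored vectors are long, NumPy wraps their string repr across several lines, which is where the new-line characters mentioned in the question come from. str.split treats any whitespace, including those newlines, as a separator, so a converter built on it is a robust alternative (a minimal sketch, assuming the same '[1 2]'-style stored format):
import numpy as np

def converter(instr):
    # '[0.1 0.2 ...]' -> strip brackets, split on any whitespace, parse floats
    return np.array(instr[1:-1].split(), dtype=float)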

How to perform subtraction on a single element of a tensor

I have a tensor that consists of 4 floats, called label.
How do I, with a 50% chance, execute x[0] = 1 - x[0]?
Right now I have:
label = tf.constant([0.35, 0.5, 0.17, 0.14]) # just an example
uniform_random = tf.random_uniform([], 0, 1.0)
# Create a tensor with [1.0, 0.0, 0.0, 0.0] if uniform_random > 50%
# else it's only zeroes
inv = tf.pack([tf.round(uniform_random), 0.0, 0.0, 0.0])
label = tf.sub(inv, label)
label = tf.abs(label) # need abs because it inverted the other elements
# output will be either [0.35, 0.5, 0.17, 0.14] or [0.65, 0.5, 0.17, 0.14]
which works, but looks extremely ugly. Isn't there a smarter/simpler way of doing this?
Related question: How do I apply a certain op (e.g. sqrt) just to two elements? I'm guessing I have to remove these two elements, perform the op and then concat them back to the original vector?
tf.select and tf.cond come in handy for situations where you have to perform computations conditionally on elements of a tensor. For your example, the following would work:
label = tf.constant([0.35, 0.5, 0.17, 0.14])
inv = tf.pack([1.0, 0.0, 0.0, 0.0])
mask = tf.pack([1.0, -1.0, -1.0, -1.0])
output = tf.cond(tf.random_uniform([], 0, 1.0) > 0.5,
                 lambda: label,
                 lambda: (inv - label) * mask)
with tf.Session(''):
    print(output.eval())
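tf.pack, tf.sub, and tf.select date from pre-1.0 TensorFlow; they were renamed to tf.stack, tf.subtract, and tf.where. A sketch of the same idea against the TF 2.x API, which also covers the related question of applying an op such as sqrt to selected elements only:
import tensorflow as tf

label = tf.constant([0.35, 0.5, 0.17, 0.14])
mask = tf.constant([1.0, 0.0, 0.0, 0.0])

# with 50% probability replace x[0] by 1 - x[0], leaving the rest untouched
flipped = mask * (1.0 - label) + (1.0 - mask) * label
output = tf.cond(tf.random.uniform([]) > 0.5,
                 lambda: label,
                 lambda: flipped)

# related question: apply sqrt to just the first two elements; compute the
# op on the whole vector, then pick per element with a boolean mask
sel = tf.constant([True, True, False, False])
partial_sqrt = tf.where(sel, tf.sqrt(label), label)
print(output.numpy(), partial_sqrt.numpy())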

Values of image pixels according to the colorbar

Suppose a four-pixel image is as follows:
image = array([[[0.0, 0.0, 1.0],
                [0.0, 0.0, 1.0]],
               [[0.0, 0.0, 1.0],
                [0.0, 0.0, 1.0]]], dtype=float32)
that is, all four pixels are blue, and on the colour scale bar their values are zero. I want to estimate the sum of all pixel values of an image according to the scale bar. For example, for the above case the sum of all pixel values is 0.0. I tried image.sum(), but this gives 4.0, which is not the result I need. Any help, please?
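One way to approach this, sketched below under the assumption that the image was rendered with a known matplotlib colormap ('winter', which starts at pure blue, is used here as a stand-in), is to build a lookup table of the colormap's colours, map every pixel to its nearest entry, and sum the recovered scalar values:
import numpy as np
import matplotlib.pyplot as plt

image = np.array([[[0.0, 0.0, 1.0],
                   [0.0, 0.0, 1.0]],
                  [[0.0, 0.0, 1.0],
                   [0.0, 0.0, 1.0]]], dtype=np.float32)

cmap = plt.get_cmap("winter")          # assumed colormap
values = np.linspace(0.0, 1.0, 256)    # scalar range of the colorbar
lut = cmap(values)[:, :3]              # 256 reference RGB colours

# distance of every pixel to every reference colour, then pick the nearest
pixels = image.reshape(-1, 3)
dist = np.linalg.norm(pixels[:, None, :] - lut[None, :, :], axis=-1)
recovered = values[dist.argmin(axis=1)]

print(recovered.sum())   # 0.0 here, since pure blue sits at the bottom of 'winter'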

OpenGL glBlendFuncSeparate

I need some help with OpenGL texture masking. I have it working, but I need different blending function parameters so it behaves differently.
Now I have:
//Background
...code...
glBlendFunc(GL_ONE, GL_ZERO);
...code
//Mask
...code...
glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_DST_COLOR, GL_ZERO);
...code...
//Foreground
...code
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
...code
At the moment it sets the foreground's opacity to 0 (fills with the background texture) wherever the mask is transparent. I need it to react to the mask's colors instead: for example, if the mask is black (0.0, 0.0, 0.0), the opacity of that spot in the foreground is 0 (filled with the background), and if the mask is white (1.0, 1.0, 1.0), the opacity of the foreground is 1 (not filled with the background). The mapping could also be reversed (white = opacity 0, black = opacity 1); I just need it to depend on the color.
A visualization of my current result:
[Images: Background; Mask (circle is transparent); Foreground; Result]
And how I want it to work:
[Images: Background; Mask (circle is white, background is black); Foreground; Result]
So that later it could be used like this:
[Images: Background; Mask (circle is white, background is black); Foreground; Result]
Attempt with @Gigi's solution: [image]
Perhaps this is what you want:
1) Clear the destination image:
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT);
2) Draw the background, masking out the alpha channel:
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);
3) Draw the "masking overlay", masking out the color channels:
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
4) Draw the foreground, enabling blending:
glEnable(GL_BLEND);
glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE_MINUS_SRC_ALPHA, GL_SRC_ALPHA, GL_ONE, GL_ZERO);
Note: The overlay image must have the alpha channel specified.
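Put together, the four steps might look like the sketch below, transcribed call-for-call into PyOpenGL; it assumes a current GL context, and draw_background, draw_mask, and draw_foreground are hypothetical helpers standing in for the elided drawing code:
from OpenGL.GL import (
    GL_BLEND, GL_COLOR_BUFFER_BIT, GL_FALSE, GL_FUNC_ADD, GL_ONE,
    GL_ONE_MINUS_SRC_ALPHA, GL_SRC_ALPHA, GL_TRUE, GL_ZERO,
    glBlendEquationSeparate, glBlendFuncSeparate, glClear, glClearColor,
    glColorMask, glEnable)

# 1) clear colour and alpha
glClearColor(0.0, 0.0, 0.0, 0.0)
glClear(GL_COLOR_BUFFER_BIT)

# 2) background: write RGB only, leaving destination alpha at 0
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE)
draw_background()   # hypothetical helper

# 3) mask: write alpha only, leaving the colours untouched
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE)
draw_mask()         # hypothetical helper

# 4) foreground: re-enable all channels (implied by the answer) and blend
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)
glEnable(GL_BLEND)
glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD)
glBlendFuncSeparate(GL_ONE_MINUS_SRC_ALPHA, GL_SRC_ALPHA, GL_ONE, GL_ZERO)
draw_foreground()   # hypothetical helper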