In WinRT (Windows 8.1) I want to draw multiple circles with the same size and stroke thickness. If I use Ellipse elements and set the same values on all of them (no fill color), I get circles with different stroke thicknesses, but they should all be identical. How can this be fixed?
The Ellipse is created programmatically and then added as a child element to a Grid:
Ellipse e = new Ellipse();
e.Stroke = new SolidColorBrush(Color.FromArgb(255, 255, 255, 255));
e.StrokeThickness = 1;
e.Width = 30;
e.Height = 30;
You're not seeing different StrokeThickness values; what you're seeing is two or more Ellipses stacked on top of each other. The reason it appears "thicker" is the antialiasing on the outer/inner edges of the ellipse.
When two or more overlap, the antialiasing appears thicker because the semi-transparent edge pixels alpha-blend with each other; with enough layers the outer/inner edges lose their transparency entirely and you eventually end up with a very jagged Ellipse.
If you can figure out how to turn antialiasing off, like WPF's SnapsToDevicePixels, you won't have this effect, but you will have a jagged Ellipse.
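The compounding effect of stacked semi-transparent edge pixels can be sketched numerically. This is a hedged illustration of standard "over" alpha compositing, not WinRT's exact rasterizer behaviour:

```python
def stack_alpha(edge_alpha, layers):
    """Resulting opacity after `layers` identical semi-transparent
    edge pixels are composited with the "over" operator:
    alpha_out = a + alpha_in * (1 - a)."""
    out = 0.0
    for _ in range(layers):
        out = edge_alpha + out * (1.0 - edge_alpha)
    return out

# A 30%-opaque antialiased edge pixel, stacked:
for n in (1, 2, 4, 8):
    print(n, round(stack_alpha(0.3, n), 3))
# 1 0.3
# 2 0.51
# 4 0.76
# 8 0.942
```

With eight overlapping ellipses the once-soft edge pixel is already about 94% opaque, which is why the stacked edges read as one thick, jagged outline.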
I have a scene = QGraphicsScene() and I added an ellipse via scene.addEllipse(100, 100, 10, 10, greenPen, greenBrush). The brush and the pen are set beforehand. Right after, I set the scene on a QGraphicsView with MyGraphicsView.setScene(scene). All of this works, except that the ellipse always ends up in the center. The first two parameters of addEllipse() should be the coordinates (in this case 100, 100), but no matter what I put there, the ellipse is always in the center. Any ideas?
EDIT: I have now added 3 ellipses like this (the one from the description was deleted):
scene.addEllipse(10, 10, 10, 10, greenPen, greenBrush)
scene.addEllipse(-100, -10, 30, 30, bluePen, blueBrush)
scene.addEllipse(-100, -100, 60, 60, bluePen, blueBrush)
and my result is this:
So clearly the coordinates work somehow, but I still don't get how exactly. Do I have to set an origin to the scene?
And if I do this:
particleList = scene.items()
print(particleList[0].x())
print(particleList[1].x())
print(particleList[2].x())
I get:
0.0
0.0
0.0
At this point I'm totally confused and I'd really appreciate some help.
An important thing that must always be kept in mind is that the position of a QGraphicsItem does not reflect its "top left" coordinates.
In fact, you can have a QGraphicsRectItem whose QRectF is positioned at (100, 100) but whose position is (50, 50). This means that the rectangle will be shown at (150, 150): the position of the shape is relative to the position of the item.
All add[Shape]() functions of QGraphicsScene have this important note in their documentation:
Note that the item's geometry is provided in item coordinates, and its position is initialized to (0, 0).
Even if you create a QGraphicsEllipseItem with coordinates (-100, -100), it will still be positioned at (0, 0), and that's because the values in the addEllipse() (as with all other functions) only describe the coordinates of the shape.
When a QGraphicsScene is created, its sceneRect() is not explicitly set; by default it corresponds to the bounding rectangle of all items. When the scene is set on a view, the view automatically positions the scene according to its alignment(), which defaults to Qt.AlignCenter:
If the whole scene is visible in the view, (i.e., there are no visible scroll bars,) the view's alignment will decide where the scene will be rendered in the view. For example, if the alignment is Qt::AlignCenter, which is default, the scene will be centered in the view, and if the alignment is (Qt::AlignLeft | Qt::AlignTop), the scene will be rendered in the top-left corner of the view.
This also means that if you have items at negative coordinates or with their shapes at negative coordinates, the view will still show the scene centered to the center of the bounding rect of all items.
So you either set the scene's sceneRect or the view's sceneRect, depending on your needs. If the view's sceneRect is not set, it defaults to the scene's sceneRect.
If you want to display the items according to their position while also ensuring that negative coordinates are correctly "outside" the center, you must decide the size of the visible sceneRect and set it accordingly:
boundingRect = scene.itemsBoundingRect()
scene.setSceneRect(0, 0, boundingRect.right(), boundingRect.bottom())
I have a fragment shader that draws some stuff. On top of that, I want it to draw a 1-pixel-thick rectangle around the border. I have tried using the step function, but the problem is that the UV coordinates range from 0.0 to 1.0. How do I know when the fragment is at a specific pixel, so that I can draw on the edges?
c.r = step(0.99, UV.x);
c.r += step(0.99, 1.0-UV.x);
c.r += step(0.99, UV.y);
c.r += step(0.99, 1.0-UV.y);
The code above does draw a rectangle, but the problem is that the thickness is 0.01 of the total width/height (1%), not one pixel.
Is there any good description of UV, FRAGCOORD, SCREEN_TEXTURE and SCREEN_UV?
If it is good enough for you to work in screen coordinates (i.e., you want to define position and thickness in terms of screen space) you can use FRAGCOORD. It corresponds to the (x, y) pixel coordinates within the viewport, i.e., with the default viewport of 1024 x 600, the lower left pixel would be (0, 0), and the top right would be (1024, 600).
If you want to map the fragment coordinates back to world space (i.e., you want to define position and thickness in terms of world space), you must follow the work-around mentioned here.
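The FRAGCOORD-based border test can be prototyped outside the shader. This is a plain Python sketch of the logic only; the function name and the tiny 6x4 "viewport" are illustrative, not part of any shader API:

```python
def is_border(x, y, width, height, thickness=1):
    """True when the pixel at integer coordinates (x, y) lies within
    `thickness` pixels of the viewport edge -- the FRAGCOORD-style test,
    independent of the viewport size."""
    return (x < thickness or y < thickness
            or x >= width - thickness or y >= height - thickness)

# A 6x4 viewport: exactly the outermost ring of pixels is the frame.
w, h = 6, 4
frame = ["".join("#" if is_border(x, y, w, h) else "." for x in range(w))
         for y in range(h)]
for row in frame:
    print(row)
# ######
# #....#
# #....#
# ######
```

If you must stay in UV space instead, the equivalent threshold for a 1-pixel border is 1.0 - 1.0/viewport_width (and likewise for the height), rather than the hard-coded 0.99.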
Is there an easy way of putting a mark (like a cross, for example) on the anchor point of a UIImageView? I'm trying to line up several rotating images by their anchor points, and being able to see these points would make the job a lot easier.
Many thanks.
You are asking how to visualize the anchor point within a view, but it seems to me that you are asking so that you can align the anchor points more easily. I'll try to answer both questions.
Visualizing the anchor point.
Every view on iOS has an underlying layer, which has an anchor point. The anchor point is in the unit coordinate space of the layer (x and y go from 0 to 1). This means that you can multiply x by the width and y by the height to get the position of the anchor point inside the layer, in the coordinate space of the view/layer. You can then place a subview/sublayer there to show the location of the anchor point.
In code you could do something like this to display a small black dot where the anchor point is.
CALayer *anchorPointLayer = [CALayer layer];
anchorPointLayer.backgroundColor = [UIColor blackColor].CGColor;
anchorPointLayer.bounds = CGRectMake(0, 0, 6, 6);
anchorPointLayer.cornerRadius = 3;
CGPoint anchor = viewWithVisibleAnchorPoint.layer.anchorPoint;
CGSize size = viewWithVisibleAnchorPoint.layer.bounds.size;
anchorPointLayer.position = CGPointMake(anchor.x * size.width,
anchor.y * size.height);
[viewWithVisibleAnchorPoint.layer addSublayer:anchorPointLayer];
You can see the result in the image below for four different rotations.
Aligning layers by their anchor point
That is cool and all, but it's actually easier than that to align anchor points.
The key trick is that the position and the anchorPoint are always the same point, only in two different coordinate spaces. The position is specified in the coordinate space of the super layer. The anchor point is specified in the unit coordinate space of the layer.
The nice thing about this is that views that have their position property aligned will automatically have their anchorPoint aligned. Note that the content is drawn relative to the anchor point. Below is an example of a bunch of views that all have the same y component of their position, thus they are aligned in y.
There really isn't any special code to do this. Just make sure that the position properties are aligned.
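The relationship between position, anchorPoint and the visible frame is the standard Core Animation formula, origin = position - anchorPoint * bounds.size (per axis). A plain Python sketch makes the alignment effect concrete:

```python
def frame_origin(position, anchor_point, size):
    """Where a layer's content ends up in its superlayer:
    origin = position - anchorPoint * bounds.size, per axis."""
    px, py = position
    ax, ay = anchor_point
    w, h = size
    return (px - ax * w, py - ay * h)

# Two layers with different sizes but the same position (100, 100):
print(frame_origin((100, 100), (0.5, 0.5), (40, 40)))  # (80.0, 80.0)
print(frame_origin((100, 100), (0.5, 0.5), (80, 20)))  # (60.0, 90.0)
# Their frames differ, yet both anchor points sit at (100, 100)
# in the superlayer -- aligned positions mean aligned anchor points.
```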
Hi, I am working on an OBJ loader for use in iOS programming. I have managed to load the vertices and the faces, but I have an issue with the transparency of the faces.
For now, I have just made the vertex colours vary from 0 to 1, so each vertex gradually changes from black to white. The problem is that the white vertices and faces seem to appear over the black ones; the darker the vertices, the more covered they appear.
For an illustration of this see the video I posted here < http://youtu.be/86Sq_NP5jrI >
The model here consists of two cubes, one large cube with a smaller one attached to a corner.
How do you assign a color to a vertex? I assume that you have an RGBA render target, so you need to set up the color like this:
struct color
{
u8 r, g, b, a;
};
color newColor;
newColor.a = 255; // 255 = fully opaque vertex, 0 = fully transparent
//other colors setup
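For readers prototyping the same 4-byte RGBA layout in Python, the standard struct module can reproduce it (pack_rgba is a hypothetical helper for illustration, not part of any loader API):

```python
import struct

def pack_rgba(r, g, b, a=255):
    """Pack one vertex colour as four unsigned bytes (RGBA8),
    matching the C `color` struct above; a=255 means fully opaque."""
    return struct.pack("4B", r, g, b, a)

white_opaque = pack_rgba(255, 255, 255)
print(white_opaque)       # b'\xff\xff\xff\xff'
print(len(white_opaque))  # 4
```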
http://u.snelhest.org/i/2010/07/06_3754.png
I'm trying to draw this picture in JES, Jython.
I've forgotten some of the basic math from school, so it's kind of difficult.
I've done the full circle, but I'm not sure how to continue from there.
Each rectangle, half-circle and circle is inset by 10 pixels, and the picture is a 200x200 square.
addRect, addOval and addArc are the given hints.
addArc(picture, startX, startY, width, height, start, angle[, color]):
addOval(picture, startX, startY, width, height[, color]):
addRect(picture, startX, startY, width, height[, color]):
(I'm assuming this is a homework problem)
Can you draw the shape out by hand and document what you're doing? Write out the start coordinate, apex and end coordinate of each arc, or at least as many as you need to see a pattern. That's always a good place to start because if you can draw it out and get some of the coordinates, all you'll need to do is convert to JES syntax.
Since the changes in the arc sizes and positions are regular over the figure, you should be able to use a loop to draw each half circle. You could use a single loop that draws even-numbered arcs opening down and odd-numbered arcs opening up, but I think it's easier to have one loop for the arcs opening up and a second for the arcs opening down.
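The loop idea can be sketched outside JES first. This is a plain Python sketch assuming the 200x200 picture and 10-pixel insets described above; the exact start/angle values for addArc depend on which way each half circle opens in the figure:

```python
# Compute the nested, inset bounding boxes the half-circle arcs would use.
SIZE, INSET = 200, 10

boxes = []
x = 0
while SIZE - 2 * x > 0:
    # Each box is inset `x` pixels on every side of the picture.
    boxes.append((x, x, SIZE - 2 * x, SIZE - 2 * x))
    x += INSET

for startX, startY, width, height in boxes[:4]:
    print(startX, startY, width, height)
# 0 0 200 200
# 10 10 180 180
# 20 20 160 160
# 30 30 140 140

# In JES, the loop body would then call something like:
#   addArc(picture, startX, startY, width, height, 0, 180)    # one half
#   addArc(picture, startX, startY, width, height, 180, 180)  # other half
```

Once the boxes come out right on paper, translating the loop into addArc calls is purely mechanical.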