I have an object that renders a grid of lines (used for a plot object I am working on) that will update frequently and shift all the lines around. If the grid updates at 60 fps, would using CGContextFillRects or CGContextAddLineToPoint (rectangles vs. lines) be more efficient?
Let's assume I am going to implement things in a reasonably efficient way. For example, with the line technique I would use CGContextMoveToPoint and CGContextAddLineToPoint, then stroke the entire grid in one go with CGContextStrokePath. For both techniques I will generate the data required to draw my shapes somewhere other than the drawRect method.
Initially I feel like CGContextFillRects is better because it involves less code at the level I am operating on, so at a glance it seems more efficient. That said, I don't need rectangles; I am really drawing lines in the end, so generating a rectangle may be more work than necessary when all I need is a line. What do you all think: lines or rectangles for my fast-moving/scaling grid?
Typically with computer graphics, drawing fewer pixels is preferable. CGContextAddLines looks like it accomplishes what you want, and should be shorter in code than calling CGContextAddLineToPoint repeatedly.
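For illustration, a minimal sketch of the batched-lines version (the function name, context parameter, and endpoint array are assumptions; each grid line is stored as a pair of points, and CGContextAddLines begins a new subpath at the first point of each pair, so the lines stay disconnected):

#include <CoreGraphics/CoreGraphics.h>

static void DrawGridLines(CGContextRef ctx, const CGPoint *endpoints, size_t lineCount)
{
    CGContextBeginPath(ctx);
    for (size_t i = 0; i < lineCount; i++) {
        CGContextAddLines(ctx, &endpoints[2 * i], 2);   // move-to + line-to for one grid line
    }
    CGContextSetLineWidth(ctx, 1.0);
    CGContextStrokePath(ctx);                           // one stroke call for the whole grid
}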
Related
I'm playing about with OpenGL ES 2.0. If I'm working with a simple 2D projection and I have a large 2D grid of vertices which are pretty much static (think map tiles), of which only a small proportion is visible at any one time, would it be better to...
Work out on the CPU which vertices are visible, and create a VBO to draw just those triangles that make up the visible tiles in each frame?
or
Keep a static VBO with the entire tiled grid, and then just rely on the graphics card (RPi, in my case) to clip out the off-screen triangles?
Or perhaps some combination of the two (like sets of overlapping pre-computed grids)? How big does the grid have to be before the latter option becomes unworkable?
Edit
I decided to make several calls to glDrawElements(), drawing sub-ranges of the index buffer that I knew would overlap the viewport. At the scale I'm working at it doesn't seem to make any difference to the speed over drawing the entire element array, even on a Pi Zero.
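For illustration, the draw loop looks roughly like this (the row/column variables are placeholders; the index buffer is laid out row by row, 6 indices per tile quad):

for (int row = firstVisibleRow; row <= lastVisibleRow; ++row) {
    GLsizei count  = (GLsizei)(visibleColumns * 6);                 // indices covering this row's visible span
    size_t  offset = (size_t)(row * columnsPerRow + firstVisibleColumn) * 6 * sizeof(GLushort);
    glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, (const void *)offset);
}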
However, this approach would require more computation to determine which ranges of elements needed to be rendered if there was any rotation of the grid involved - effectively rasterising my own quad. I'm interested to hear if this is a reasonable approach.
There are some other options, like a more exotic structure for breaking the plane up into sub-areas, I guess. Still not sure if any of this is really necessary, though.
Thanks!
Please note: I don't want to discuss drawing tiles in the fragment shader, I'm more interested in the correct way to work with the vertex shader than actually solving the described problem.
If that's a regular grid, I'd split it into large chunks, so that the screen's larger side fits 2-3 such chunks. They don't need to overlap if it's a regular grid.
Checking one chunk's visibility is trivial and cheap, as well as finding/selecting those few that must be drawn. And the wasted/clipped area is small enough to not worry about it. You don't have to go crazy and trim every single vertex that's outside of the screen.
Each chunk would have its own VBO, and it would be weakly cached when it goes fully off-screen, so you don't have to rebuild/reload the resources needed to draw that chunk if you quickly return to that part of the map.
Splitting into chunks minimizes memory requirements and speeds up level loading: you only spend time loading the part of the map the user will see immediately. It also allows very large maps, since you can prefetch the areas you're heading towards.
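A rough sketch of that idea (all names here are made up; assumes a single 2D position attribute at location 0 and GLushort indices):

#include <vector>
#include <GLES2/gl2.h>

struct Chunk { float minX, minY, maxX, maxY; GLuint vbo, ibo; GLsizei indexCount; };

// Cheap AABB-vs-viewport test: a chunk is drawn if its bounds overlap the view rectangle.
static bool ChunkVisible(const Chunk &c, float vMinX, float vMinY, float vMaxX, float vMaxY) {
    return c.maxX >= vMinX && c.minX <= vMaxX && c.maxY >= vMinY && c.minY <= vMaxY;
}

static void DrawVisibleChunks(const std::vector<Chunk> &chunks,
                              float vMinX, float vMinY, float vMaxX, float vMaxY) {
    for (const Chunk &c : chunks) {
        if (!ChunkVisible(c, vMinX, vMinY, vMaxX, vMaxY)) continue;   // skip off-screen chunks entirely
        glBindBuffer(GL_ARRAY_BUFFER, c.vbo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, c.ibo);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);        // 2D positions, tightly packed
        glDrawElements(GL_TRIANGLES, c.indexCount, GL_UNSIGNED_SHORT, 0);
    }
}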
I am trying to draw arrows. I know how to draw lines, which takes me halfway there, but I want the tip to have a small triangle, just like an arrow. However, even when I use a triangle as a point, it obviously does not always point in the direction of the line and sometimes produces weird-looking arrows.
I would like to draw the passes a player makes on a soccer field. I do that using LINESTRING and 4 coordinates I have in a table in my database: I use the xFrom, yFrom, xTo and yTo coordinates and manage to draw lines. However, I would like the tip of the line to show as an arrow, but I found nothing on Google or in the SQL documentation.
I would like to use SSRS and not any other vector graphics program because it's simpler and is easily incorporated into my overall report.
Can anyone suggest a way of turning a line into an arrow?
Thanks
Okay, first off, I'd like to preface this answer with the statement that using SQL Server and Reporting Services as a graphics tool is asking for trouble. This is far from what it was meant for.
With that being said, I believe this would work, though you will need to spend some time studying. When manipulating images, there are several operations you can perform (rotating, skewing, resizing, etc.), and the mathematics behind them can be expressed with matrix algebra. Look at the line you have created: it has a slope. If you picture that line superimposed on the X and Y axes, you can see that there is an angle between the line and the Y axis (assuming the triangle's base rests on the X axis). That is the angle by which you will want to rotate the triangle you're using as the tip of the arrow, which should fix your problem. You could create a formula to do the calculation, if the formula engine is robust enough to handle the matrix algebra.
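To make the rotation concrete, here is the underlying math written as ordinary code rather than an SSRS expression (the names are made up, and the angle is measured from the X axis with atan2, which amounts to the same rotation). Each arrowhead vertex is defined relative to the tip, rotated by the pass's angle, then translated to the end point:

#include <cmath>

struct Pt { double x, y; };

// Rotate one arrowhead vertex (given relative to the tip) by the pass's angle
// and place it at the end of the pass. atan2 handles every quadrant correctly.
static Pt ArrowheadVertex(Pt offset, double xFrom, double yFrom, double xTo, double yTo) {
    double angle = std::atan2(yTo - yFrom, xTo - xFrom);
    double c = std::cos(angle), s = std::sin(angle);
    return { xTo + offset.x * c - offset.y * s,     // standard 2D rotation matrix,
             yTo + offset.x * s + offset.y * c };   // applied to each of the triangle's points
}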
Here are a couple of pages that give you the basics of how to rotate an image.
http://datagenetics.com/blog/august32013/index.html
http://www.fastgraph.com/makegames/3drotation/
Good luck!
I'm making a basic simulation of moving planets and the gravitational pull between them, and displaying the gravity with a big field of green vectors pointing in the direction gravity is pulling and scaled by the strength of the pull.
This means I have 400+ lines, which are really rectangles with a rotation, being redrawn each frame, and this is killing my frame rate. Is there any way to optimize this other than drawing fewer lines? How do 2D OpenGL games today achieve such high frame rates even with many complex polygons/colors?
EDIT:
SFML does the actual rendering each frame, but the way I create my lines is by making a rectangle-like sf::Shape. The generation function takes a width, and sets point 1 as (0, width), point 2 as (0, -width), point 3 as (LineLength, -width), and point 4 as (LineLength, width). This forms a rectangle which extends along the positive x-axis. Finally, I rotate the rectangle around (0, 0) to get it to the right orientation, and set the shape's position to wherever the start of the line is supposed to be.
How do 2d OpenGL games today achieve such high frame-rates even with many complex polygons/colors?
I imagine by not drawing 400+ 4-vertex objects that are each rotated and scaled with a matrix.
If you want to draw a lot of these things, you're going to have to stop relying on SFML's drawing classes. That introduces a lot of overhead. You're going to have to do it the right way: by drawing lines.
If you insist on each line having a separate width, then you can't use GL_LINES. You must instead compute the four positions of the "line" and stick them in a buffer object. Then, you draw them with a single GL_QUADS call. You will need to use proper buffer object streaming techniques to make this work reasonably fast.
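A sketch of that batching (the function name and the interleaved x/y layout are assumptions): build every "thick line" as four vertices in one shared array, upload it to a single buffer object, and issue one draw call.

#include <vector>
#include <cmath>

// Append one line-as-quad (4 vertices, x/y interleaved) to the shared batch.
static void AppendLineQuad(std::vector<float> &verts,
                           float x, float y, float length, float halfWidth, float angle) {
    const float c = std::cos(angle), s = std::sin(angle);
    const float corners[4][2] = { {0,  halfWidth}, {0, -halfWidth},
                                  {length, -halfWidth}, {length,  halfWidth} };
    for (const auto &p : corners) {
        verts.push_back(x + p[0] * c - p[1] * s);   // rotate around the line's start point,
        verts.push_back(y + p[0] * s + p[1] * c);   // then translate into place
    }
}
// All 400+ quads end up in one vertex array and are drawn with a single GL_QUADS call.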
Large batches and VBOs. Also double-check how much time you're spending in your simulation update code.
Quick check: If you have a glBegin() anywhere near your main render loop you are probably Doing It Wrong.
Calculate all your vertex positions, then stream them into the GPU via GL_STREAM_DRAW. If you can tolerate some latency use two VBOs and double-buffer.
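Very roughly, per frame (the buffer handles, the vertex vector, and the fixed-function setup are placeholders chosen to match the GL_QUADS suggestion above):

GLuint vbo = vbos[frameIndex & 1];            // vbos[2] created earlier with glGenBuffers; alternate each frame
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(float),
             verts.data(), GL_STREAM_DRAW);   // respecify the store each frame (implicitly orphans the old one)
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, 0);           // 2D positions read straight from the bound VBO
glDrawArrays(GL_QUADS, 0, (GLsizei)(verts.size() / 2));
++frameIndex;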
I am drawing a path into a CGContext following a set of points collected from the user. There seems to be some random input jitter causing some of the line edges to look jagged. I think a slight feather would solve this problem. If I were using OpenGL ES I would simply apply a feather to the sprite I am stroking the path with; however, this project requires me to stay in Quartz/CoreGraphics and I can't seem to find a similar solution.
I have tried drawing 5 lines with each line slightly larger and more transparent to approximate a feather. This produces a bad result and slows performance noticeably.
This is the line drawing code:
// Add the curve segment to the current context path...
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), (int)lastPostionDrawing1.x, (int)lastPostionDrawing1.y);
CGContextAddCurveToPoint(UIGraphicsGetCurrentContext(), ctrl1_x, ctrl1_y, ctrl2_x, ctrl2_y, lastPostionDrawing2.x, lastPostionDrawing2.y);
// ...and mirror it into the UIBezierPath that tracks the whole stroke.
[currentPath addCurveToPoint:CGPointMake(lastPostionDrawing2.x - ((int)furthestLeft.x) + ((int)penSize), lastPostionDrawing2.y) controlPoint1:CGPointMake(ctrl1_x, ctrl1_y) controlPoint2:CGPointMake(ctrl2_x, ctrl2_y)];
I'm going to go ahead and assume that your CGContext still has anti-aliasing turned on, but if not, then that's the obvious first thing to try, as @Davyd's comment suggests: CGContextSetShouldAntialias is the function of interest.
Assuming that's not the problem and the line is being anti-aliased by the context, but you still want something 'softer', I can think of a couple of ways to do this that should hopefully be faster than stroking 5 times.
First, you can try getting the stroked path (i.e. a path that describes the outline of the stroke of the current path) using CGContextReplacePathWithStrokedPath. You can then fill this path with a gradient (or whatever other fill technique gives the desired results). This will work well for straight lines, but won't be straightforward for curved paths (since the gradient fills the area of the stroked path and will be either linear or radial).
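A sketch of that first approach (the gradient endpoints `start` and `end` are assumptions, the line width reuses your `penSize`, and the current path is assumed to be built already):

CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGContextSetLineWidth(ctx, penSize);
CGContextReplacePathWithStrokedPath(ctx);                 // current path becomes the outline of its own stroke
CGContextClip(ctx);                                       // clip to that outline
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGFloat comps[8] = { 0, 0, 0, 1,   0, 0, 0, 0 };          // opaque black fading to transparent
CGGradientRef grad = CGGradientCreateWithColorComponents(space, comps, NULL, 2);
CGContextDrawLinearGradient(ctx, grad, start, end, 0);    // start/end chosen to run across the stroke
CGGradientRelease(grad);
CGColorSpaceRelease(space);
CGContextRestoreGState(ctx);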
Another, perhaps less obvious, option might be to abuse CG's shadow drawing for this purpose. The function you want to look up is CGContextSetShadowWithColor. Here's the method:
Save the GState: CGContextSaveGState
Get the bounding box of the original path
Copy the path, translating it away from itself by 2.0 * bbox.width using CGPathCreateCopyByTransformingPath (note: use the X direction only, that way you don't need to worry about flips in the context)
Clip the context to the original bbox using CGContextClipToRect
Set a shadow on the context with CGContextSetShadowWithColor:
Some minimal blur (start with 0.5 and go from there; the blur parameter is non-linear, and IME it's sort of a guess-and-check operation)
An offset equal to -2.0 * bbox width, and 0.0 height, scaled to base space. (Note: these offsets are in base space. This will be maddening to figure out, but assuming you're not adding your own scale transforms, the scale factor will either be 1.0 or 2.0, so practically speaking, you'll be setting an offset.width of either -2.0*bbox.width or -4.0*bbox.width)
A color of your choosing.
Stroke the translated-away path.
Pop the GState CGContextRestoreGState
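Put together, the steps might look roughly like this (a sketch, not tested; it assumes an unscaled context, so the base-space offset is simply -2.0 * bbox.width, and a CGPathRef named `path` holding the path you want feathered):

CGRect bbox = CGPathGetBoundingBox(path);
CGContextSaveGState(ctx);
CGContextClipToRect(ctx, bbox);                                         // only the shadow can land on screen
CGAffineTransform shift = CGAffineTransformMakeTranslation(2.0 * bbox.size.width, 0.0);
CGPathRef shifted = CGPathCreateCopyByTransformingPath(path, &shift);   // the real stroke happens off to the right
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGFloat comps[4] = { 0, 0, 0, 1 };
CGColorRef shadowColor = CGColorCreate(space, comps);
CGContextSetShadowWithColor(ctx, CGSizeMake(-2.0 * bbox.size.width, 0.0), 0.5, shadowColor);
CGContextAddPath(ctx, shifted);
CGContextStrokePath(ctx);                                               // the stroke is clipped away; its shadow shows
CGColorRelease(shadowColor);
CGColorSpaceRelease(space);
CGPathRelease(shifted);
CGContextRestoreGState(ctx);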
This should leave you with "just" the shadow, which you can hopefully tweak to achieve the results you want.
All that said, CG's shadow drawing performance is, IME, less than completely awesome, and less than completely deterministic. I would expect it to be faster than stroking the path 5 times with 5 different strokes, but not overwhelmingly so.
It'll come down to how much achieving this effect is worth to you.
What I am trying to do is to have many small rectangles on the screen (up to several thousand) which move randomly.
I have the mechanics behind this figured out (in terms of determining the coordinates for the movement), but I can't figure out the best way to draw the shapes or model their movement.
A couple of strategies I have tried: first, subclassing NSView (this is on the Mac) and creating thousands of instances, overriding drawRect: to draw a square inside each view. It is then simple to move them around just by changing their locations. However, with several thousand allocated instances, performance is obviously terrible.
I tried a less object-oriented route also, just using NSRectFill to draw the thousands of rectangles. However, I had trouble implementing the movement I needed with this, though it was blazing fast.
Does anyone have any suggestions on how I could successfully create this animation?
Layers and Core Animation are the best approach for the platform.
Several thousand rectangles may be too much for CoreAnimation. You should consider using OpenGL.