How to get faster rendering of 400+ polygons with SFML (optimization)

I'm making a basic simulation of moving planets and the gravitational pull between them, and I'm displaying the gravity as a big field of green vectors, each pointing in the direction gravity pulls at that point, with length proportional to the strength of the pull.
This means I have 400+ lines, which are really rectangles with a rotation, being redrawn each frame, and this is killing my frame rate. Is there any way to optimize this other than drawing fewer lines? How do 2D OpenGL games today achieve such high frame rates even with many complex polygons/colors?
EDIT:
SFML does the actual rendering each frame, but the way I create my lines is by making a rectangle-like sf::Shape. The generation function takes a width, and sets point 1 as (0, width), point 2 as (0, -width), point 3 as (LineLength, -width), and point 4 as (LineLength, width). This forms a rectangle which extends along the positive x-axis. Finally I rotate the rectangle around (0, 0) to get the right orientation, and set the shape's position to wherever the start of the line is supposed to be.
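For reference, the construction described above looks roughly like this (using the SFML 2 sf::ConvexShape API; the post used the older sf::Shape, which works the same way, and makeLine is an illustrative name):

```cpp
#include <SFML/Graphics.hpp>

// Builds the rectangle described above: it extends along the +x axis,
// is rotated around its local origin (0, 0), and is then positioned
// at the start of the line.
sf::ConvexShape makeLine(sf::Vector2f start, float angleDegrees,
                         float lineLength, float width)
{
    sf::ConvexShape shape;
    shape.setPointCount(4);
    shape.setPoint(0, sf::Vector2f(0.f,         width));
    shape.setPoint(1, sf::Vector2f(0.f,        -width));
    shape.setPoint(2, sf::Vector2f(lineLength, -width));
    shape.setPoint(3, sf::Vector2f(lineLength,  width));
    shape.setRotation(angleDegrees);
    shape.setPosition(start);
    shape.setFillColor(sf::Color::Green);
    return shape;
}
```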

How do 2D OpenGL games today achieve such high frame rates even with many complex polygons/colors?
I imagine by not drawing 400+ 4-vertex objects that are each rotated and scaled with a matrix.
If you want to draw a lot of these things, you're going to have to stop relying on SFML's drawing classes, which introduce a lot of overhead. You're going to have to do it the right way: by drawing the lines yourself.
If you insist on each line having a separate width, then you can't use GL_LINES. You must instead compute the four positions of the "line" and stick them in a buffer object. Then, you draw them with a single GL_QUADS call. You will need to use proper buffer object streaming techniques to make this work reasonably fast.
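A minimal sketch of that idea (compatibility-profile OpenGL via GLEW, since GL_QUADS is not in the core profile; the Line struct and function names are mine):

```cpp
#include <GL/glew.h>
#include <cmath>
#include <vector>

struct Line { float x0, y0, x1, y1, halfWidth; };

// Expand every line into the four corners of its quad on the CPU.
void buildQuads(const std::vector<Line>& lines, std::vector<float>& verts)
{
    verts.clear();
    for (const Line& l : lines) {
        float dx = l.x1 - l.x0, dy = l.y1 - l.y0;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len == 0.f) continue;
        // Unit normal scaled to half the line's width.
        float nx = -dy / len * l.halfWidth;
        float ny =  dx / len * l.halfWidth;
        float quad[8] = { l.x0 + nx, l.y0 + ny,  l.x0 - nx, l.y0 - ny,
                          l.x1 - nx, l.y1 - ny,  l.x1 + nx, l.y1 + ny };
        verts.insert(verts.end(), quad, quad + 8);
    }
}

// Each frame: re-upload everything and draw all quads in one call.
void drawLines(GLuint vbo, const std::vector<float>& verts)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(float),
                 verts.data(), GL_STREAM_DRAW);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, nullptr);
    glDrawArrays(GL_QUADS, 0, GLsizei(verts.size() / 2));
    glDisableClientState(GL_VERTEX_ARRAY);
}
```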

Large batches and VBOs. Also double-check how much time you're spending in your simulation update code.
Quick check: If you have a glBegin() anywhere near your main render loop you are probably Doing It Wrong.
Calculate all your vertex positions, then stream them to the GPU in a buffer with GL_STREAM_DRAW usage. If you can tolerate some latency, use two VBOs and double-buffer them.
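One common way to do that streaming (buffer orphaning plus two alternating VBOs; a sketch, names illustrative):

```cpp
#include <GL/glew.h>
#include <vector>

GLuint vbos[2]; // created once with glGenBuffers(2, vbos)
int frame = 0;

void streamVertices(const std::vector<float>& verts)
{
    // Alternate buffers so the GPU can still read last frame's data
    // while we overwrite the other buffer.
    glBindBuffer(GL_ARRAY_BUFFER, vbos[frame++ & 1]);
    // Orphan the old storage, then upload this frame's vertices.
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(float),
                 nullptr, GL_STREAM_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0,
                    verts.size() * sizeof(float), verts.data());
    // ... set up vertex pointers and issue the draw call as usual ...
}
```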

Related

GODOT: What is an efficient calculation for the AABB of a simple 3D model from a camera's view

I am attempting to come up with a quick and efficient means of translating a 3D mesh into a projected AABB. In the end, I would like to accomplish something similar to figure 1, wherein only the area of the screen covered by the cube is inside the bounding box highlighted in red. (If it is at all possible, getting the area as small as possible, as highlighted in blue, would increase efficiency down the road.)
Figure 1. https://i.imgur.com/pd0E20C.png
Currently, I have tried:
Calculating the point position on the screen using camera.unproject_position(). This failed largely due to my inability to wrap my head around the pixel positions trending towards infinity. I understand it has something to do with tan, but frankly, it is too late for my brain to function anymore. (See the sketch after this list.)
Getting the area of collision between the view frustum and the AABB of the mesh instance. This method seems convoluted, and to get it in a usable format I would need to project the result into 2d coordinates again.
Using the MeshInstance VisualInstance to create a texture wherein a pixel is white if it contains the mesh instance, and black otherwise. Visual instances in general just baffle me, and I did not think it would be efficient to have another viewport just to output this texture.
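For what it's worth, the first approach usually boils down to projecting the eight corners of the mesh's world-space AABB and taking the min/max in screen space; the "trending towards infinity" happens when a corner goes behind the camera, where the perspective division flips and explodes the result. A rough, language-agnostic sketch in C++ (not Godot API; a robust version would clip the box edges against the near plane rather than just clamping w as done here):

```cpp
#include <algorithm>

struct Vec4 { float x, y, z, w; };

// Column-major 4x4 view-projection matrix times the point (x, y, z, 1).
Vec4 mulPoint(const float m[16], float x, float y, float z)
{
    return { m[0]*x + m[4]*y + m[8]*z  + m[12],
             m[1]*x + m[5]*y + m[9]*z  + m[13],
             m[2]*x + m[6]*y + m[10]*z + m[14],
             m[3]*x + m[7]*y + m[11]*z + m[15] };
}

// Projects the 8 AABB corners; outMin/outMax are in normalized device
// coordinates [-1, 1]. Returns false if the box is entirely behind the
// camera.
bool screenAabb(const float viewProj[16],
                const float bbMin[3], const float bbMax[3],
                float outMin[2], float outMax[2])
{
    outMin[0] = outMin[1] =  1e30f;
    outMax[0] = outMax[1] = -1e30f;
    bool anyInFront = false;
    for (int i = 0; i < 8; ++i) {
        Vec4 p = mulPoint(viewProj,
                          (i & 1) ? bbMax[0] : bbMin[0],
                          (i & 2) ? bbMax[1] : bbMin[1],
                          (i & 4) ? bbMax[2] : bbMin[2]);
        // Corners behind the camera have w <= 0; dividing by w there is
        // what makes pixel positions shoot off towards infinity. Crudely
        // clamp w so such corners expand the box conservatively.
        if (p.w <= 0.f) p.w = 1e-5f; else anyInFront = true;
        float sx = p.x / p.w, sy = p.y / p.w;
        outMin[0] = std::min(outMin[0], sx); outMax[0] = std::max(outMax[0], sx);
        outMin[1] = std::min(outMin[1], sy); outMax[1] = std::max(outMax[1], sy);
    }
    return anyInFront;
}
```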
What I am looking for:
An output that can be passed to a shader informing where to complete certain calculations. Right now this is set up to use a bounding box, but it could easily be rewritten to also use a texture. It also could be rewritten to use polygons, but I am trying to keep calculations to a minimum in the shader.
Certain solutions I have tried before have worked, slightly, but this one must be robust. The camera interfacing with the 3D object will be able to move completely around and through it, meaning at times the view will be completely surrounded by the 3D model, with points both in front of and behind the camera.
Thank you for any help you can provide.
I will try my best to update this post with information if needed.

Culling off-screen objects in OpenGL ES 2 2D

I'm playing about with OpenGL ES 2.0. If I'm working with a simple 2D projection, and I have a large 2D grid of vertices which are pretty much static (think map tiles), of which only a small proportion are visible at any one time, would it be better to...
Work out on the CPU which vertices are visible, and create a VBO to draw just those triangles that make up the visible tiles each frame?
or
Keep a static VBO with the entire tiled grid, and then just rely on the graphics card (RPi, in my case) to clip out the off-screen triangles?
Or perhaps some combination of the two (like sets of overlapping pre-computed grids)? How big does the grid have to be before the latter option becomes unworkable?
Edit
I decided to make several calls to glDrawElements(), drawing sub-ranges of the index buffer that I knew would overlap the viewport. At the scale I'm working at it doesn't seem to make any difference to the speed over drawing the entire element array, even on a Pi Zero.
However, this approach would require more computation to determine which ranges of elements need to be rendered if there were any rotation of the grid involved - effectively rasterising my own quad. I'm interested to hear whether this is a reasonable approach (a sketch of the sub-range drawing follows below).
There are some other options like a more exotic structure for breaking up the plane into sub areas, I guess. Still not sure if any of this is really necessary, though.
Thanks!
Please note: I don't want to discuss drawing tiles in the fragment shader, I'm more interested in the correct way to work with the vertex shader than actually solving the described problem.
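To make the sub-range approach from the edit above concrete, here is a sketch assuming a row-major index buffer where every tile contributes six GLushort indices (two triangles); the grid/viewport parameter names are illustrative:

```cpp
#include <GLES2/gl2.h>

// Draw only the rows/columns of tiles that overlap the viewport.
// Assumes the index buffer is bound to GL_ELEMENT_ARRAY_BUFFER and
// laid out row by row, six indices per tile.
void drawVisibleTiles(int gridWidth, int firstRow, int lastRow,
                      int firstCol, int lastCol)
{
    for (int row = firstRow; row <= lastRow; ++row) {
        // Index of this row's first visible tile within the buffer.
        long first = ((long)row * gridWidth + firstCol) * 6;
        GLsizei count = (GLsizei)(lastCol - firstCol + 1) * 6;
        glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT,
                       (const void*)(first * sizeof(GLushort)));
    }
}
```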
If that's a regular grid, I'd split it into large chunks, such that the screen's larger side would fit 2-3 such chunks. They don't need to overlap if it's a regular grid.
Checking one chunk's visibility is trivial and cheap, as well as finding/selecting those few that must be drawn. And the wasted/clipped area is small enough to not worry about it. You don't have to go crazy and trim every single vertex that's outside of the screen.
Each chunk would have its own VBO, and it would be weakly cached when it goes fully outside the screen, so you don't have to rebuild/reload the resources needed to draw that chunk if you quickly return to that part of the map.
Splitting into chunks minimizes the memory requirements and speeds up level loading: you spend time loading only the part of the map that the user will see immediately. This also allows quite huge maps, since you can prefetch the areas you're moving towards.
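A sketch of that scheme (buildChunkVbo is a hypothetical helper that uploads one chunk's tile vertices):

```cpp
#include <GLES2/gl2.h>

struct Chunk {
    float minX, minY, maxX, maxY; // world-space bounds of the chunk
    GLuint vbo = 0;               // 0 until built (weak cache)
};

GLuint buildChunkVbo(const Chunk& c); // assumed helper: fills a VBO lazily

// Trivial AABB-vs-view test, as described above.
bool chunkVisible(const Chunk& c, float viewL, float viewB,
                  float viewR, float viewT)
{
    return c.maxX >= viewL && c.minX <= viewR &&
           c.maxY >= viewB && c.minY <= viewT;
}

void drawChunks(Chunk* chunks, int count,
                float viewL, float viewB, float viewR, float viewT)
{
    for (int i = 0; i < count; ++i) {
        Chunk& c = chunks[i];
        if (!chunkVisible(c, viewL, viewB, viewR, viewT))
            continue;                 // could schedule VBO eviction here
        if (c.vbo == 0)
            c.vbo = buildChunkVbo(c); // rebuild only on re-entry
        glBindBuffer(GL_ARRAY_BUFFER, c.vbo);
        // ... set attribute pointers and issue the draw call ...
    }
}
```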

How does OpenGLES(iPhone) render a huge quad

I need to support the fast rendering of huge quads on the order of 10,000 x 10,000 pixels.
Either in general or specifically to the iPhone, does OpenGLES clip texture drawing to the current viewport automatically? Or do I need to add some code to trim these vertices down to the size of the screen?
I've seen talk about optimizing for a lot of vertices, but what about only 4 vertices in a very large textured quad?
The OpenGL rendering pipeline performs clipping and culling before rasterisation, so there's no per-pixel cost for the parts of your geometry that fall outside the viewport.
If you know that your geometry will always exactly fill the viewport, then you have more information than you've disclosed to OpenGL, and could in theory write code that gets to your output geometry in fewer operations. In your case, you'd want to work backwards to project into the world and find the four points that land on the screen edges, probably in a vertex shader. However, even an absolutely optimal solution would make such a negligible difference that it wouldn't be worth the extra code burden.

CCParallax for a moving background

I got a tiled map and I want to make lava lakes. I wish to have some kind of lava texture image on the background looping diagonally slowly. I could make it with four 960x640 images and move all of them diagonally etc. But when I do, a black/white line appears between each...
... and someone suggested "CCParallax" to me. I have never used it and am not sure it can really achieve the effect I am seeking.
Also note that as the player moves on the map, the parallax will need to simulate that as well etc.
So my question is, what would you do for this effect? Four looping images or "CCParallax"?
CCParallaxNode is pretty limited because you can't specify endless parallax scrolling without modifying the class. It also doesn't quite fit your use case.
Using four 960x640 images is wasteful. Just to put some lava lakes underneath the background, this is overkill and will hurt performance.
The solution depends a bit on how big the lakes are. For example, if they are just one tile or 3x3 tiles in size, you could add a textured sprite underneath each lake. If, on the other hand, your tilemap consists mostly of a few narrow pathways while the rest is lava lakes, then you need a different approach.
You might want to try GL_REPEAT to repeat a single sprite's texture over a defined area. That allows you to use a relatively small texture, for example 64x64, that will be repeated over the rectangle you specify (note that on OpenGL ES 1/2, GL_REPEAT requires power-of-two texture dimensions).
You can then modify the sprite's position each frame to scroll the texture. Every time the sprite has moved 64 pixels (sprite.contentSize.width) in the horizontal or vertical direction, you subtract 64 pixels from its position to reset it to its original state. That means the sprite never moves further than 64 pixels from its initial position in any direction, but you still get smooth scrolling.
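The wrap-around bookkeeping is just a modulo on the accumulated offset; a minimal sketch (plain C++, using the 64-pixel tile size from above; names illustrative):

```cpp
#include <cmath>

// Accumulates the diagonal scroll and wraps it so the sprite never
// drifts more than one 64-pixel tile from its starting position.
struct ScrollingLava {
    float startX, startY;                // initial sprite position
    float offX = 0.f, offY = 0.f;
    static constexpr float kTile = 64.f; // texture size used with GL_REPEAT

    // dx/dy: scroll distance this frame; writes the new sprite position.
    void update(float dx, float dy, float& outX, float& outY)
    {
        offX = std::fmod(offX + dx, kTile);
        offY = std::fmod(offY + dy, kTile);
        outX = startX + offX;            // pass these to setPosition
        outY = startY + offY;
    }
};
```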

On-the-fly Terrain Generation Based on An Existing Terrain

This question is very similar to that posed here.
My problem is that I have a map, something like this:
This map is made using 2D Perlin noise, then running through the generated heightmap and assigning a type and color value to each element of the terrain based on the height or slope of the corresponding element, so it's pretty standard. The map array is two-dimensional and exactly the dimensions of the screen (pixel-per-pixel), so at 1200 by 800, generation takes about 2 seconds on my rig.
Now zooming in on the highlighted rectangle:
Obviously with increased size comes lost detail. And herein lies the problem. I want to create additional detail on the fly, and then write it to disk as the player moves around (the player would simply be a dot restricted to movement along the grid). I see two approaches for doing this, and the first one that came to mind I quickly implemented:
This is a zoomed-in view of a new biased local terrain created from a sampled element of the old terrain, which is highlighted by the yellow grid space (to the left of center) in the previous image. However this system would require a great deal of modification, as, for example, if you move one unit left and up of the yellow grid space, onto the beach tile, the terrain changes completely:
So for that to work properly you'd need to do an excessive amount of, I guess the word would be interpolation, to create a smooth transition as the player moved the 40 or so grid-spaces in the local world required to reach the next tile over in the over world. That seems complicated and very inelegant.
The second approach would be to break up the grid of the original map into smaller bits, maybe subdividing each square into 4? I haven't implemented this, and I'm not sure how I would in a way that actually increases detail, but I think it would probably end up being the best solution.
Any ideas on how I could approach this? Keep in mind it has to be local and on-the-fly. Just increasing the resolution of the map is something I want to avoid at all costs.
Rewrite your Perlin noise to be a function of position. Then you can increase the octaves (and thus the detail level) and resample the area at a higher resolution.
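In other words, make the height a pure function of world coordinates and sum octaves, so any region can be re-sampled at any density. A sketch, assuming some smooth 2D noise function noise2(x, y) returning values in [-1, 1] is available from your Perlin library:

```cpp
#include <cmath>

// Assumed available from your noise library: smooth 2D noise in [-1, 1].
float noise2(float x, float y);

// Fractal Brownian motion: the same world position always yields the
// same height regardless of sampling resolution, so zooming in just
// means sampling a smaller world rectangle with more samples (and more
// octaves for extra detail).
float heightAt(float worldX, float worldY, int octaves)
{
    float sum = 0.f, amplitude = 1.f, frequency = 1.f, norm = 0.f;
    for (int i = 0; i < octaves; ++i) {
        sum  += amplitude * noise2(worldX * frequency, worldY * frequency);
        norm += amplitude;
        amplitude *= 0.5f; // each octave contributes half as much...
        frequency *= 2.f;  // ...at twice the spatial frequency
    }
    return sum / norm;     // normalize back to [-1, 1]
}

// Example: fill a w*h local map covering a single overworld tile.
void sampleTile(float x0, float y0, float tileSize,
                float* out, int w, int h)
{
    for (int j = 0; j < h; ++j)
        for (int i = 0; i < w; ++i)
            out[j * w + i] = heightAt(x0 + tileSize * i / w,
                                      y0 + tileSize * j / h,
                                      /*octaves=*/8);
}
```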