OpenGLES20 and Wavefront - Values too high for short? - opengl-es-2.0

Trying to parse an OBJ (Wavefront) file to draw it in OpenGL ES 2.0, I'm facing a problem I don't know how to address/solve:
The file has exactly 50,000 (50k) vertices, all of which are used in faces. When loading the object, I parse the faces into indices, which have to be UNSIGNED_SHORT. Unfortunately, Java has no unsigned short type, so I'm using short, which tops out at roughly 32k. Since I have more vertices and faces than that, I'm getting a NumberFormatException while parsing the indices.
How should I address this issue without removing vertices? Any work-around?

What library are you using that exposes the OpenGL API without supporting this? If it is just a numeric problem, you can get away with it by storing the values in signed shorts anyway and letting OpenGL reinterpret the bits as unsigned, using the same approach as discussed in java opengl: glDrawElements() with >32767 vertices.
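For reference, a minimal Java sketch of that approach (the class and method names are just illustrative): parse each face index as an int, check that it fits into 16 bits, then cast it to short. Java will print values above 32767 as negative, but glDrawElements with GL_UNSIGNED_SHORT reads the same bit pattern as an unsigned value.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;

public class IndexPacking {

    // Parse an OBJ face index that may exceed 32767 and pack it into a Java short.
    public static short packIndex(String token) {
        int index = Integer.parseInt(token); // avoids the NumberFormatException of Short.parseShort
        if (index < 0 || index > 65535) {
            throw new IllegalArgumentException("Index does not fit into GL_UNSIGNED_SHORT: " + index);
        }
        return (short) index; // e.g. 40000 becomes -25536, but the 16 bits are identical
    }

    // Wrap the packed indices in a direct buffer for glBufferData / glDrawElements.
    public static ShortBuffer toBuffer(short[] indices) {
        ShortBuffer buffer = ByteBuffer.allocateDirect(indices.length * 2)
                .order(ByteOrder.nativeOrder())
                .asShortBuffer();
        buffer.put(indices);
        buffer.position(0);
        return buffer;
    }
}

With 50,000 vertices everything still fits into 16 bits; only above 65,535 vertices would you need to split the mesh into several draw calls or fall back to GL_UNSIGNED_INT where the OES_element_index_uint extension is available.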

Related

Why instantiated Static Mesh gets scaled in Unreal Engine 4?

While building a project in Unreal Engine 4.26 and trying to increase the performance of the videogame, I ended up using instances of static meshes where possible. This introduced an issue:
My instantiated meshes are scaled to 0.9995 instead of 1 (measured experimentally).
Looking for an answer, I found a workaround suggested by the Unreal Engine devs themselves: add a full 360-degree rotation to the mesh by using higher values of the same rotation, as you can read here. This didn't work for me and, as you can see, the difference between instantiated and manually placed meshes is evident.
The following is how I've instantiated the meshes:
Increasing the rotation on the z-axis to 450 degrees didn't solve anything, even though doing it with the meshes provided by the devs here actually works.
I'm sure that rotations are the key to solving the problem, since the problem is not systematic. I haven't figured out the logic behind it, but when building a square out of instantiated meshes I end up with some walls that have gaps and others at perfect scale. I increased the size of them all so as not to have gaps, but I'm afraid that this solution will bring more issues later when working with lighting and production materials. It seems the bug isn't fixed yet in UE4; is there another workaround I could use without any risk of overlapping meshes?
This seems to be a bug that has not been fixed yet. To populate the map with objects where a high level of accuracy is necessary, the suggestion is to place them manually. If you place them manually, UE4 will make them instances anyway, so it's not necessary to have a blueprint to populate rooms programmatically.
Increasing the scale as I did in the first place doesn't seem to cause problems, but I ended up placing the meshes manually, since as a solution it is more elegant even if it requires more effort.

ANSYS Meshing Issue - How To Mesh Complicated Geometry (~80,000 Faces)?

I am attempting to mesh a complicated design (~80,000 faces) for a microchannel heat sink, as pictured, and I would appreciate some advice. I have tried a range of different mesh controls (especially face sizing and body sizing), mesh settings and element sizes, and all have failed to produce a working mesh. The most common errors are shown in the linked picture, in particular the one stating "The following surfaces cannot be meshed with acceptable quality. Try using a different element size or virtual topology." However, I have already reduced the element size to 2x10^-6 m, which takes two days to run before failing.
Unfortunately I cannot alter the geometry significantly, as it is imported from generation in SolidWORKS as either a STEP or an x.t file. As such, any advice for how I can successfully mesh the geometry for CFD analysis in FLUENT would be greatly appreciated.
I can provide more details or the geometry file itself if required.
Thanks in advance.
Meshing Attempt
Probably your CAD design is not clean at all, but it is impossible to tell from this image. If you don't have control over the geometry source, that is trouble, because you might have to ask somebody else to check and fix it. The first check you can do with your model is to try reducing the number of elements to the minimum possible value. If the mesh then runs properly, you can rely on the surfaces of your CAD model. After that you can refine the mesh, but the refinement process is something you have to do following some error criteria. If you are also the designer, why not try to simplify the geometry a bit if you consider it really hard to mesh? Meshing properly is a hard task; you should go step by step until you reach a solution. Also, you must not let the preprocessor mesh automatically without giving it some criteria. Probably the first thing you have to answer, even before applying any mesh, is: what is your Reynolds number? And what is the most valuable result on which you can base the goodness of your discretization?
Thank you for your suggestions. In the end I solved the issue by importing the original mesh generated by COMSOL into SpaceClaim, then employing both the "Smooth" and "Reduce Faces" tools in tandem to simplify the geometry, before finally using SolidWORKS to turn the smoothed mesh into a solid body. This body retained many of the same features as the original, but was much less complex, having two orders of magnitude fewer faces. In turn, this permitted both meshing and heat transfer analysis in FLUENT.

Custom rendering with GPU, Direct3D or OpenGL

I have a Windows application that currently renders graphics largely using MFC that I'd like to change to make better use of the GPU. Most of the graphics are straightforward and could easily be built up into a scene graph, but some of the graphics could prove very difficult. Specifically, in addition to the normal mesh-type objects, I'm also dealing with point clouds which are liable to contain billions of Cartesian points stored in a very compact manner, and which require quite a lot of custom culling techniques to be displayed in real time (Example). What I'm looking for is a mechanism that does the bulk of the scene rendering to a buffer and then gives me access to that buffer, a z-buffer, and the camera parameters, such that I can modify them before putting them out to the display. I'm wondering whether this is possible with Direct3D, OpenGL, or possibly a higher-level framework like OpenSceneGraph, and what would be the best starting point? Given the software is Windows-based, I'd probably prefer to use Direct3D, as this is likely to lead to the fewest driver issues, which I'm eager to avoid. OpenSceneGraph seems to provide custom culling via octrees, which are close but not identical to what I'm using.
Edit: To clarify a bit more, currently I have the following;
1. A display list / scene in memory which will typically contain up to a few million triangles, lines, and pieces of text, which I cull in software and output to a bitmap using low-performance drawing primitives
2. A point cloud in memory which may contain billions of points in a highly compressed format (~4.5 bytes per 3D point), which I cull and output to the same bitmap
3. Cursor information that gets added to the bitmap prior to output
4. A camera, z-buffer and attribute buffers for navigation and picking purposes
The slow bit is the software culling and drawing in point 1, which I'd like to replace with GPU rendering of some kind. The solution I envisage is to build a scene for the GPU, render it to a bitmap (with matching z-buffer) based on my current camera parameters, and then add my point cloud prior to output.
Alternatively, I could move to a scene-based framework that manages the cameras and navigation for me, and provide the points in view as spheres or splats based on volume and level of detail during the rendering loop. In this scenario I'd also need to be able to add cursor information to the view.
In either scenario, the hosting application will be MFC C++ based on VS2017, which would require too much work to change for the purposes of this exercise.
It's hard to say exactly based on your description of a complex problem.
OSG can probably do what you're looking for.
Depending on your timeframe, I'd consider eschewing both OpenGL (OSG) and DirectX in favor of the newer Vulkan 3D API. It's a successor to both D3D and OGL, and is designed by the GPU manufacturers themselves to provide optimal performance exceeding both of its predecessors.
The OSG project is currently developing a Vulkan scenegraph known as VSG, which already demonstrates superior performance to OSG and will have more generalized culling ability.
I've worked a bunch with point clouds and am pretty experienced with them, but I'm not exactly clear on what you're proposing to do.
If you want to actually have a verbal discussion about the matter, I'm pretty easy to find (my company is AlphaPixel -- AlphaPixel.com) and you could call us. I'm in the European time zone right now; it's not clear from your question where you are, but you sound US-based.

OpenGL lights, textures, etc. correct way?

Until now I've implemented all effects in GLSL shaders using only inputs, outputs and uniforms, except for a couple of really essential built-ins like gl_Position. I've read several tutorials and had a lecture on computer graphics, and every time things were implemented by looking at the physical model and calculating everything from input values and uniforms. That is roughly how I thought it all works.
Now I've come across the fact that there are many more GLSL things, like the glLight* API functions and the gl_LightSource and gl_Texture built-ins in GLSL, with a big set of light types and lighting models predefined. This seems to be a rather different way of programming shaders.
I wonder whether there are any advantages/disadvantages to using one way or the other? Did I miss something very important? It looks like I'm doing a lot of redundant work.
All the glLight* calls in the OpenGL API, and the matching gl_Light* variables you might find in GLSL, are from the old and deprecated fixed-function pipeline!
Now you must do all the calculations yourself through Shaders, as I can guess you're already doing.
Why did they "remove" all the awesome stuff?
They "removed" (deprecated) the Matrix Stack, the Light calls, Immediate Mode Rendering, etc., and the list goes on, for various reasons. But the overall reason is that it's better to implement and control those things yourself.
It requires more work on our side to implement and control all those things, but you're in total control of everything and of when you actually want to use something.
With the fixed-function pipeline, OpenGL would allocate and load various things you might never even want to use.
Also, taking the Matrix Stack as an example, you would usually (the lazy way) make OpenGL re-calculate the Matrix Stack each render call, using the old glPushMatrix(), glPopMatrix(), glTranslate*(), etc. functions. Now, because you have to, you are forced to do all those calculations and handle the matrices yourself. You then realize that most of the matrices, and much more, could simply be allocated and calculated once, or at least not on every render call.
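As a small illustration of that point, here is a hedged sketch using the Android GLES20 and Matrix classes from the OpenGL ES question above (the class name, the method names and the uMvpMatrix uniform are made up for this example): the model-view-projection matrix is computed once when something actually changes and merely re-uploaded per draw call, instead of being rebuilt through a matrix stack every frame.

import android.opengl.GLES20;
import android.opengl.Matrix;

public class MvpCache {

    private final float[] mvp = new float[16];
    private final float[] viewModel = new float[16]; // scratch space

    // Recompute only when the projection, view or model matrix actually changes.
    public void update(float[] projection, float[] view, float[] model) {
        Matrix.multiplyMM(viewModel, 0, view, 0, model, 0);
        Matrix.multiplyMM(mvp, 0, projection, 0, viewModel, 0);
    }

    // Per draw call: just upload the cached matrix to the shader's uMvpMatrix uniform.
    public void upload(int mvpUniformLocation) {
        GLES20.glUniformMatrix4fv(mvpUniformLocation, 1, false, mvp, 0);
    }
}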
Of course, Immediate Mode Rendering wasn't deprecated because we need to implement it ourselves; it was deprecated because we should now simply use Buffers, which are so much better in every way.
Extra
If you want a great spreadsheet that shows which functions are deprecated, which are core functions, which are extension functions, etc., then take a look here. Be aware, though, that this spreadsheet is made by people who use OpenGL, not by the Khronos Group (the current developers of OpenGL) or Silicon Graphics (the creators of OpenGL).
Ignore the glLightXXX functions, the related gl_LightXXX variables and all the documentation associated with them. It's all deprecated, and if you look closely at the docs you'll probably find that they are several years old or specifically written for versions of OpenGL <= 2.x. Instead, continue to work with your own vertex attributes and set up your lighting configuration in your own uniforms however you please, based on the model of lighting you want to implement. It's more work, but it's more flexible in the long run.
The OpenGL lighting model that uses glLight pre-dates the programmable shader pipeline and represents a particular way of doing lighting in the fixed-function pipeline.
Once GLSL entered the scene it became possible to use the OpenGL lighting model in conjunction with shaders. You could use the same glLight function and its related functions to set up your lighting parameters, but then write shaders that used the same information in different ways, allowing per-pixel lighting calculations.
Textures are a little more murky, because OpenGL still has a texture model and many of the GL functions relating to textures are still valid, though some are deprecated. However, any documentation that refers to GLSL variables like gl_Texture is similarly out of date. Current OpenGL uses sampler objects for texture access.
If you want to make sure you're doing it the 'modern' way, make sure you create a forward-compatible OpenGL context of version 3.3 or higher, and make sure your shaders declare the appropriate version number as their first line, like so:
#version 330
This will cause the use of any deprecated OpenGL function or deprecated shader variable to generate an error so that you know to avoid them.
Current graphics hardware offers an interface to customize the individual rendering steps, e.g. vertex shading, tessellation, geometry shading, fragment shading and so on. GLSL is the language used to program or influence these rendering steps on the graphics hardware through this interface.
The predefined functions glLight, glTexture and so on belong to the deprecated fixed-function pipeline of OpenGL. Modern OpenGL still supports the functions of this fixed pipeline, but it is strongly recommended to use GLSL for the different rendering steps.
The glLight function is a fixed function that only influences vertex processing, so you can only achieve per-vertex shading, which does not look very realistic.
When you program the lighting yourself within the fragment shader using GLSL, you can directly influence every pixel.
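As a rough illustration, here is a minimal per-pixel diffuse shader pair written against the Android GLES20 bindings and OpenGL ES 2.0 GLSL (the names uMvpMatrix, aPosition, aNormal, uLightDir and vNormal are chosen just for this sketch, and error checking is omitted):

import android.opengl.GLES20;

public class DiffuseShader {

    private static final String VERTEX_SRC =
            "uniform mat4 uMvpMatrix;\n"
          + "attribute vec4 aPosition;\n"
          + "attribute vec3 aNormal;\n"
          + "varying vec3 vNormal;\n"
          + "void main() {\n"
          + "  vNormal = aNormal;\n"                 // a real renderer would apply the normal matrix here
          + "  gl_Position = uMvpMatrix * aPosition;\n"
          + "}\n";

    private static final String FRAGMENT_SRC =
            "precision mediump float;\n"
          + "uniform vec3 uLightDir;\n"              // light direction, expected to be normalized
          + "varying vec3 vNormal;\n"
          + "void main() {\n"
          + "  float d = max(dot(normalize(vNormal), uLightDir), 0.0);\n" // evaluated per fragment
          + "  gl_FragColor = vec4(vec3(d), 1.0);\n"
          + "}\n";

    // Compile one shader stage and return its handle.
    public static int compile(int type, String source) {
        int shader = GLES20.glCreateShader(type);
        GLES20.glShaderSource(shader, source);
        GLES20.glCompileShader(shader);
        return shader;
    }
}

The important difference from the fixed-function glLight path is that the dot product is evaluated in the fragment shader for every pixel, not once per vertex; GLES20.GL_VERTEX_SHADER and GLES20.GL_FRAGMENT_SHADER are passed as the type argument and the two handles are attached to a program as usual.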
So, to summarize: the main advantage is that the programmer is more flexible and able to influence every rendering step, which enables you to achieve sophisticated and realistic 3D graphics. The main disadvantage is that you need much more knowledge (GLSL, the graphics pipeline) and much more programming effort to achieve the same result as with fixed functions.
Best regards

Is it possible to read from a VBO?

I'm trying to make an OpenGL renderer that mashes various shapes into one large mesh and stores these in two VBOs, one GL_ARRAY_BUFFER and one GL_ELEMENT_ARRAY_BUFFER. I'm aiming for it to work on both OpenGL ES 2 and OpenGL 3.2 core. I am currently trying to find the best way to handle deleting shapes from within this mesh and my current approach is to periodically rebuild the entire thing, possibly on a background thread.
The problem is that in order to rebuild the new and clean mesh, I need access to the vertices / indices that have been written to the buffers using glMapBuffer. According to the documentation for GL_OES_mapbuffer, WRITE_ONLY_OES is the only acceptable parameter for 'access'.
So I don't think the data pointed to there can reliably be read back in order to create my new buffers. I know there are other functions in core GL that allow you to copy buffer data, but these also seem to be missing in ES 2.0.
Can anyone verify that this is not possible on ES 2.0 or give some approach for achieving buffer reading? My current solution is to keep a shadow copy of all the data, which is obviously not ideal.
I think that keeping a shadow copy of the GPU data in main memory is much better than reading that data back from GPU memory. It is recommended to discard the previous data before using glMapBuffer anyway. Read this for more information (it will not give you a direct answer to your question, but it might be useful).
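A minimal sketch of that shadow-copy approach, assuming an Android/Java OpenGL ES 2.0 context (the class and method names are illustrative): the client-side list stays the authoritative copy of the mesh, and the VBO is simply re-filled after a shape is added or removed.

import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.util.ArrayList;
import java.util.List;

public class ShadowedMesh {

    private final List<float[]> shapes = new ArrayList<>(); // CPU-side source of truth
    private final int vbo;                                  // existing GL_ARRAY_BUFFER handle

    public ShadowedMesh(int vboHandle) {
        this.vbo = vboHandle;
    }

    public void addShape(float[] vertexData) {
        shapes.add(vertexData);
        rebuild();
    }

    public void removeShape(int index) {
        shapes.remove(index);
        rebuild();
    }

    // Re-create the VBO contents from the shadow copy; no read-back from the GPU is needed.
    private void rebuild() {
        int total = 0;
        for (float[] s : shapes) total += s.length;
        FloatBuffer data = ByteBuffer.allocateDirect(total * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        for (float[] s : shapes) data.put(s);
        data.position(0);

        // Must run on the thread that owns the GL context.
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo);
        GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, total * 4, data, GLES20.GL_DYNAMIC_DRAW);
    }
}

Assembling the FloatBuffer can happen on a background thread; only the glBindBuffer/glBufferData upload has to stay on the GL thread. The element array buffer can be rebuilt the same way.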